SAP Cloud Platform
PUBLIC
2020-01-30
3 Development. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1070
3.1 Development in the Cloud Foundry Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
Developing Your First Application on SAP Cloud Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
Business Application Pattern. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072
Deploy Business Applications in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . 1075
Deploy Docker Images in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . .1075
Developing SAP HANA in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
Developing HTML5 Applications in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . 1080
Developing Java in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1162
Developing Node.js in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
Developing Python in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
Developing SAPUI5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1230
Multitarget Applications in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . 1232
Using Services in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
Working with the SAP Cloud Application Programming Model. . . . . . . . . . . . . . . . . . . . . . . . . 1332
3.2 Development in the ABAP Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1337
ABAP Development User Guides. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
ABAP Keyword Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
ABAP RESTful Programming Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
Connect to the ABAP System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
Consuming Reuse Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
Create a Communication Arrangement for Inbound Communication with Service Key Basic. . . . . .1346
HTTP Communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1347
HTTP Service Development. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1349
Importing Code with abapGit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
Integrating On-Premise Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354
Change Document Solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
Number Range Solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
Getting Numbers from an Interval. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
Pulling Git Repositories to an SAP Cloud Platform ABAP Environment System. . . . . . . . . . . . . 1389
Application Jobs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1392
4 Extensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
6 Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2246
6.1 Security in the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2247
Authorization and Trust Management in the Cloud Foundry Environment. . . . . . . . . . . . . . . . .2247
Audit Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
6.2 Security in the Neo Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2358
Authorization and Trust Management in the Neo Environment. . . . . . . . . . . . . . . . . . . . . . . . 2358
Platform Identity Provider. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2431
OAuth 2.0 Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2436
Keystore Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2459
Audit Logging in the Neo Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2493
6.3 Principal Propagation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2499
Principal Propagation Between Neo Subaccounts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2500
Principal Propagation from the Neo to the Cloud Foundry Environment. . . . . . . . . . . . . . . . . . 2504
Principal Propagation from the Cloud Foundry to the Neo Environment. . . . . . . . . . . . . . . . . . 2510
6.4 Protection from Web Attacks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2518
Protection from Cross-Site Scripting (XSS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2518
Protection from Cross-Site Request Forgery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2519
6.5 Data Protection and Privacy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2527
Glossary for Data Protection and Privacy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2528
8 Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2554
SAP Cloud Platform is an enterprise platform-as-a-service (enterprise PaaS) that provides comprehensive
application development services and capabilities, which let you build, extend, and integrate business
applications in the cloud.
SAP Cloud Platform is SAP's innovative cloud development and deployment platform. It is supported by
multiple cloud infrastructure providers and enables innovative technologies such as the Internet of Things,
machine learning, artificial intelligence, and big data, thereby enabling you to achieve business agility and
accelerate digital transformation across your business.
Scenarios
Environments
Environments constitute the actual platform-as-a-service offering of SAP Cloud Platform that allows for the
development and administration of business applications. Each environment provides at least one application
runtime and comes with its own domain model, user and role management logic, and tools (for example,
command line utility). SAP Cloud Platform provides different environments:
According to your preferred development environment and your use cases, you may want to consume a set of
services that are provided by SAP Cloud Platform. For more information, see Capabilities [page 15] and
Availability of SAP Cloud Platform Services.
SAP Cloud Platform facilitates secure integration with on-premise systems that run software from SAP
and other vendors. Using platform services such as the Connectivity service, applications can establish secure
connections to on-premise solutions, enabling integration scenarios with your cloud-based applications. For
more information about the Connectivity service, see Connectivity [page 16].
Secure Data
The comprehensive, multilevel security measures that are built into SAP Cloud Platform are engineered to
protect your mission-critical business data and assets, and to provide the necessary industry-standard
compliance certifications.
Quality Certificates
Third-party certification bodies provide independent confirmation that SAP meets the requirements of
international standards. You can find all certificates at https://www.sap.com/corporate/en/company/quality.html.
1.1 Accounts
Learn more about the different types of accounts on SAP Cloud Platform and how they relate to each other.
Global accounts are hosted environments that represent the scope of the functionality and the level of support
based on a customer or partner’s entitlement to platform resources and services.
The global account is the realization of the commercial contract with SAP. A global account can contain one or
more subaccounts in which you deploy applications, use services, and manage your subscriptions.
Global accounts are region- and environment-independent. Within a global account, you manage all of your
subaccounts, which in turn are specific to one region and one environment.
Related Information
1.1.2 Subaccounts
Subaccounts let you structure a global account according to your organization’s and project’s requirements
with regard to members, authorizations, and quotas.
Subaccounts in a global account are independent from each other. This is important to consider with respect to
security, member management, data management, data migration, integration, and so on, when you plan your
landscape and overall architecture. Each subaccount is associated with a region, which is the physical location
where applications, data, or services are hosted. It is also associated with one environment. The specific region
and environment are relevant when you deploy applications and access the SAP Cloud Platform cockpit. The
quotas that have been purchased for a global account have to be assigned to the individual subaccounts.
Related Information
SAP may offer, and a customer may choose to accept, access to functionality, such as a service or application,
that is not generally available and has not been validated and quality-assured in accordance with SAP's
standard processes. Such functionality is defined as a beta feature.
Beta features let customers, developers, and partners test new features on SAP Cloud Platform. The beta
features have the following characteristics:
● SAP may require that customers accept additional terms to use beta features.
● Beta features are released for enterprise accounts, trial accounts, or both.
● To allow the use of beta features in the subaccounts available to you in the SAP Cloud Platform cockpit, you
need to set the Enable beta features option. You do this on global account level by choosing the edit icon on
the subaccount's tile.
● No personal data may be processed by beta functionality in the context of contractual data processing
without additional written agreement.
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors
or damages caused by the use of beta features.
Related Information
Accounts [page 8]
Regions [page 11]
Change Subaccount Details [page 1749]
Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions]
A global account can group together different subaccounts that an administrator makes available to users.
Administrators can assign the available quotas of a global account to its different subaccounts and move them
between subaccounts that belong to the same global account.
The hierarchical structure of global accounts and subaccounts lets you define an account model that
accurately fits your business and development needs. For example, if you want to separate development,
testing, and productive usage, you can create a subaccount for each of these scenarios in your global account.
You can also create subaccounts for different development teams or departments in your organization.
Subaccounts in a global account are independent from each other. This is important to consider with respect to
security, member management, data management, data migration and management, integration, and so on,
when you plan your landscape and overall architecture.
Each subaccount is associated with a particular region, which is the physical location where applications, data,
or services are hosted. The specific region associated with a subaccount is relevant when you deploy
applications (region host) and access the SAP Cloud Platform cockpit (cockpit URL). The region assigned to
your subaccount doesn't have to be directly related to your location. You could be located in the United States,
for example, but operate your subaccount in Europe.
For more information about the relationship between a global account and its subaccounts, see the graphic in
Basic Platform Concepts. For best practices, see Setting Up Your Account Model.
You can enable a subaccount to use beta features, including services and applications, which are occasionally
made available by SAP for SAP Cloud Platform. This option is unselected by default and is available only to
administrators of your enterprise account.
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors
or damages caused by the use of beta features.
A user account corresponds to a particular user in an identity provider, such as the SAP ID service (for
example, an S-user or P-user), and typically consists of an SAP user ID and password.
There are two types of users on SAP Cloud Platform: platform users and business users. Platform users are the
members of global accounts and subaccounts: usually developers, administrators or operators who deploy,
administer, and troubleshoot applications and services. They can view a list of all global accounts,
subaccounts, and Cloud Foundry spaces that are available to them, and access them using the cockpit.
Business users are those who use the applications that are deployed to SAP Cloud Platform. For example,
users of subscribed apps or services, such as SAP Web IDE, are business users.
1.2 Regions
Each region represents the location of a data center, the physical location (for example, Europe, US East) where
applications, data, or services are hosted.
Application performance (response time, latency) can be optimized by selecting a region close to the users.
When deploying applications, consider that a subaccount is associated with a particular region and that this is
independent of your own location. You may be located in the United States, for example, but operate your
subaccount in a region in Europe.
Regions for the Cloud Foundry environment are provided by third-party data center providers such as Amazon
or Microsoft. These third-party data center providers operate the infrastructure layer of regions. By contrast,
SAP operates the platform layer and Cloud Foundry.
To deploy an application in more than one region, execute the deployment separately for each host.
Note
The table below only contains information about regions operated by SAP. You can find the list of all SAP
Cloud Platform regions in the SAP Cloud Platform Regions and Service Portfolio.
If you need additional information about regions not included in this topic, contact your operator.
Type: Enterprise account
Region: US West (WA)
IaaS Provider: Microsoft Azure
Technical Key: cf-us20
Technical Key of IaaS Provider: West US 2
API Endpoint: api.cf.us20.hana.ondemand.com
Cockpit Logon: Link

Type: Enterprise & trial account
Region: Europe (Frankfurt)
IaaS Provider: Amazon Web Services
Technical Key: cf-eu10
Technical Key of IaaS Provider: eu-central-1
API Endpoint: api.cf.eu10.hana.ondemand.com
Cockpit Logon: Link
To find the IP ranges for a Cloud Foundry region, use the technical key of the IaaS provider and look up the
information on the IaaS provider's website.
Caution
Some customer contracts include EU access, which means that we only use European subprocessors to
access personal data in cloud services, such as when we provide support. We currently cannot guarantee
EU access in the Cloud Foundry environment. If your contract includes EU access, we cannot move services
to the Cloud Foundry environment without changing your contract.
Related Information
The region represents the location of a data center, the physical location (for example, Europe, US East) where
applications, data, or services are hosted.
Global Account Type Region Technical Key API Endpoint Cockpit Logon
Caution
Some customer contracts include EU access, which means that we only use European subprocessors to
access personal data in cloud services, such as when we provide support. We currently cannot guarantee
EU access in the ABAP environment. If your contract includes EU access, we cannot move services to the
ABAP environment without changing your contract.
Related Information
Each region represents the location of a data center, the physical location (for example, Europe, US East) where
applications, data, or services are hosted.
Application performance (response time, latency) can be optimized by selecting a region close to the users.
When deploying applications, consider that a subaccount is associated with a particular region and that this is
independent of your own location. You may be located in the United States, for example, but operate your
subaccount in a region in Europe. All regions that are available for the Neo environment are exclusively
provided by SAP.
To deploy an application in more than one region, execute the deployment separately for each host.
eu1.hana.ondemand.com
Related Information
1.3 Capabilities
SAP Cloud Platform provides a rich set of capabilities that group together different technical components, like
services, tools and runtimes.
To find an overview of all available capabilities and services, see SAP Cloud Platform Regions and Service
Portfolio. The figure below provides links to the overview, prefiltered by capability.
1.4 Connectivity
SAP Cloud Platform Connectivity: overview, features, restrictions.
Overview
The Connectivity service allows SAP Cloud Platform applications to securely access remote services that run
on the Internet or on-premise. A typical scenario looks like this:
● Your company owns a global account on SAP Cloud Platform and one or more subaccounts that are
assigned to this global account.
● Using SAP Cloud Platform, you subscribe to or deploy your own applications.
● To connect to these applications from your on-premise network, the Cloud Connector administrator sets
up a secure tunnel to your company's subaccount on SAP Cloud Platform.
● The platform ensures that the tunnel can only be used by applications that are assigned to your
subaccount.
● Applications assigned to other (sub)accounts cannot access the tunnel. It is encrypted via transport layer
security (TLS), which guarantees connection privacy.
Features
Restrictions
Note
For information about general SAP Cloud Platform restrictions, see Product Prerequisites and Restrictions
[page 1008].
General
● Java Connector: To develop a Java Connector (JCo) application for RFC communication, your SDK local
runtime must be hosted by a 64-bit JVM on an x86_64 operating system (Microsoft Windows OS, Linux OS,
or Mac OS X).
● Ports: For Internet connections, you are allowed to use any port above 1024. For cloud to on-premise
connections, there are no port limitations.
● Destination Configuration: You can use destination configuration files with extensions .props,
.properties, .jks, and .txt, as well as files with no extension. If a destination configuration consists of
a keystore or truststore, it must be stored in JKS files with a standard .jks extension.
Protocols
For the cloud to on-premise connectivity scenario, the following protocols are currently supported:
● Service Channels: Service channels are supported only for the SAP HANA database; see Using Service
Channels [page 480].
Neo Environment
Topic Restriction
Cloud Connector
Related Information
Consuming SAP Cloud Platform Connectivity for your application in the Cloud Foundry environment.
Use SAP Cloud Platform Connectivity for your application in the Cloud Foundry environment: available
services, connectivity scenarios, user roles.
Services
SAP Cloud Platform Connectivity provides two services for the Cloud Foundry environment, the Connectivity
service and the Destination service.
The Destination service and the Connectivity service together provide virtually the same functionality that is
included in the Connectivity service of the Neo environment.
In the Cloud Foundry environment, however, this functionality is split into two separate services:
● The Connectivity service provides a connectivity proxy that you can use to access on-premise resources.
● Using the Destination service, you can retrieve and store the technical information about the target
resource (destination) that you need to connect your application to a remote service or system.
You can use both services together as well as separately, depending on the needs of your specific scenario.
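When both services are bound to an application in the Cloud Foundry environment, their credentials arrive through the VCAP_SERVICES environment variable. The sketch below shows one way an application might read them; the payload, URL, and helper function are hypothetical examples, not an SAP API — real credentials are injected by the platform at bind time.

```python
import json
import os

# Hypothetical VCAP_SERVICES payload; in a real Cloud Foundry app this
# variable is set by the platform when the services are bound.
os.environ["VCAP_SERVICES"] = json.dumps({
    "connectivity": [{"credentials": {
        "onpremise_proxy_host": "connectivity-proxy",
        "onpremise_proxy_port": "20003"}}],
    "destination": [{"credentials": {
        "uri": "https://destination-configuration.example.com",
        "clientid": "sb-client", "clientsecret": "secret"}}],
})

def service_credentials(name):
    """Return the credentials block of the first bound instance of a service."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    instances = services.get(name, [])
    return instances[0]["credentials"] if instances else None

proxy = service_credentials("connectivity")  # on-premise proxy coordinates
dest = service_credentials("destination")    # Destination service endpoint
```

Because the two services are independent bindings, an application that only needs the connectivity proxy can look up just the "connectivity" entry and ignore the rest.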
Scenarios
● Use the Connectivity service to connect your application or an SAP HANA database to on-premise
systems:
User Roles
The end-to-end use of the Connectivity service and the Destination service requires these user groups:
● Application operators - are responsible for productive deployment and operation of an application on SAP
Cloud Platform. Application operators are also responsible for configuring the remote connections
(destination and trust management) that an application might need, see Initial Setup [page 26].
● Application developers - create a connectivity-enabled SAP Cloud Platform application by consuming the
Connectivity service and/or the Destination service, see Developing Applications [page 103].
● IT administrators - set up the connectivity to SAP Cloud Platform in your on-premise network, using the
Cloud Connector [page 345].
Some procedures on the SAP Cloud Platform can be done by developers as well as by application operators.
Others may include a mix of development and operation tasks. These procedures are labeled using icons for
the respective task type.
Task Types
To perform connectivity tasks in the Cloud Foundry environment, the following user roles and authorizations
apply:
Related Information
Find the latest features, enhancements and bug fixes for SAP Cloud Platform Connectivity.
● Managing Destinations [page 26]: Manage HTTP destinations for Cloud Foundry applications in the SAP
Cloud Platform cockpit.
● HTTP Destinations [page 38]: You can choose from a broad range of authentication types for HTTP
destinations, to meet the requirements of your specific communication scenario.
● RFC Destinations [page 55]: Use RFC destinations to communicate with an on-premise ABAP system via
the RFC protocol.
● Principal Propagation [page 62]: Use principal propagation (user propagation) to securely forward cloud
users to a back-end system (single sign-on).
● Set up Trust Between Systems [page 65]: Download and configure X.509 certificates as a prerequisite for
user propagation from the Cloud Foundry environment to a remote system outside SAP Cloud Platform.
● Multitenancy in the Connectivity Service [page 96]: Manage destinations for multitenancy-enabled
applications that require a connection to a remote service or on-premise application.
● Create and Bind a Connectivity Service Instance [page 98]: To use the Connectivity service in your
application, you must first create and bind an instance of the service.
● Create and Bind a Destination Service Instance [page 101]: To use the Destination service in your
application, you must first create and bind an instance of the service.
Use the Destinations editor in the SAP Cloud Platform cockpit to configure HTTP or RFC destinations.
Using a destination, you can connect your application to the Internet (via HTTP), as well as to an on-premise
system (via HTTP or RFC).
You can create, delete, clone, modify, import and export destinations.
Use this editor to work with destinations on subaccount or service instance level.
Prerequisites
Restrictions
● A destination name must be unique for the current application. It must contain only alphanumeric
characters, underscores, and dashes. The maximum length is 200 characters.
● The currently supported destination types are HTTP and RFC.
○ HTTP Destinations [page 38] - provide data communication via the HTTP protocol and are used for
both Internet and on-premise connections.
○ RFC Destinations [page 55] - make connections to ABAP on-premise systems via RFC protocol using
the Java Connector (JCo) as API.
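The naming restriction above is mechanical enough to check up front. A minimal sketch (the helper name is ours, not an SAP API):

```python
import re

# Per the restrictions above: only alphanumeric characters, underscores,
# and dashes; maximum length 200 characters.
_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,200}$")

def is_valid_destination_name(name):
    """Check a destination name against the documented naming rules."""
    return bool(_NAME_PATTERN.match(name))
```

Note that uniqueness per application cannot be checked locally; only the character-set and length rules can.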
Tasks
Related Information
Access the Destinations Editor in the SAP Cloud Platform cockpit to create and manage destinations.
● Subaccount level
● Service instance level
On subaccount level, you can specify a destination for the entire subaccount, defining the communication
protocol and further properties, like authentication method, proxy type, and URL.
Prerequisites
Procedure
1. In the cockpit, select your Global Account and your subaccount name from the Subaccount menu in the
breadcrumbs.
2. From the left-side panel, choose Connectivity Destinations .
Note
To perform these steps, you must have created a Destination service instance in your space, see Create and
Bind a Destination Service Instance [page 101]. On service instance level, you can set destinations only for
Destination service instances.
1. In the cockpit, choose your Global Account from the Region Overview and select a Subaccount.
Note
2. From the left-side menu, choose Spaces and select a space name.
3. From the left-side menu, choose Services Service Instances .
4. Select a service instance and choose Destinations from the left-side panel.
Create HTTP destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
Note
If you set an HTTPS destination, you need to also add a Trust Store. For more information, see Use
Destination Certificates [page 33].
Related Information
How to create RFC destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
For a detailed description of RFC-specific properties (JCo properties), see RFC Destinations [page 55].
Related Information
How to clone destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
You have previously created or imported an HTTP destination in the Destinations editor of the cockpit.
Procedure
1. In the Destinations editor, go to the existing destination which you want to clone.
How to edit and delete destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
You have previously created or imported an HTTP destination in the Destinations editor of the cockpit.
Procedure
● Edit a destination:
Tip
For complete consistency, we recommend that you first stop your application, then apply your
destination changes, and then start the application again. Bear in mind that these steps will cause
application downtime.
● Delete a destination:
To remove an existing destination, choose the button. The changes will take effect in up to five minutes.
Related Information
Maintain truststore and keystore certificates in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
You have logged into the cockpit and opened the Destinations editor. For more information, see Access the
Destinations Editor [page 27].
Context
You can upload, add and delete certificates for your connectivity destinations. Bear in mind that:
● You can only use JKS, PFX, and P12 files for the destination keystore, and JKS, CRT, CER, and DER files
for the destination truststore.
● You can add certificates only for HTTPS destinations. A truststore can be used for all authentication
types. A keystore is available only for ClientCertificateAuthentication and
OAuth2SAMLBearerAssertion.
● An uploaded certificate file should contain the entire certificate chain.
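The file-type rules above can be expressed as a small pre-upload check. This is an illustrative helper of ours, not part of the cockpit or any SAP API:

```python
# Allowed file types per the rules above.
KEYSTORE_EXTENSIONS = {".jks", ".pfx", ".p12"}
TRUSTSTORE_EXTENSIONS = {".jks", ".crt", ".cer", ".der"}

def allowed_certificate_file(filename, store):
    """Return True if the file extension is allowed for the given store
    ('keystore' or 'truststore'), case-insensitively."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    allowed = KEYSTORE_EXTENSIONS if store == "keystore" else TRUSTSTORE_EXTENSIONS
    return ext in allowed
```

Checking that the uploaded file contains the entire certificate chain, by contrast, requires actually parsing the certificates and is not covered by this sketch.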
Procedure
Uploading Certificates
Note
You can upload a certificate during creation or editing of a destination, by clicking the Upload and Delete
Certificates link.
Deleting Certificates
1. Choose the Certificates button or click the Upload and Delete Certificates link.
2. Select the certificate you want to remove and choose Delete Selected.
3. Upload another certificate, or close the Certificates window.
More Information
How to import destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
Note
The Destinations editor allows importing destination files with extensions .props, .properties, .jks,
and .txt, as well as files with no extension. Destination files must be encoded in the ISO 8859-1 character
encoding.
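The ISO 8859-1 requirement matters as soon as property values contain non-ASCII characters. As a sketch (the parser below is a simplified illustration of the key=value format, not SAP's importer), decoding with the correct encoding keeps such values intact:

```python
def parse_destination_bytes(data):
    """Parse raw destination-file bytes (ISO 8859-1) into a property dict.
    Blank lines and '#' comment lines are skipped; values may contain '='."""
    props = {}
    for line in data.decode("iso-8859-1").splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split at the first '=' only
        props[key.strip()] = value.strip()
    return props

raw = "Name=backend\nDescription=Müller GmbH\n".encode("iso-8859-1")
parsed = parse_destination_bytes(raw)
```

Decoding the same bytes as UTF-8 would fail or garble the umlaut, which is why the editor prescribes the encoding.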
Procedure
○ If the configuration file contains valid data, it is displayed in the Destinations editor with no errors. The
Save button is enabled so that you can successfully save the imported destination.
○ If the configuration file contains invalid properties or values, error messages in red are displayed under
the relevant fields in the Destinations editor, prompting you to correct them accordingly.
Related Information
How to export destinations in the Destinations editor (SAP Cloud Platform cockpit).
Prerequisites
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration
file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a JKS file.
Related Information
The following main properties are mapped to JCo properties:

● User → jco.client.user
● Password → jco.client.passwd
Note
For security reasons, do not use these additional properties but use the corresponding main properties'
fields.
Related Information
Find information about HTTP destinations for Internet and on-premise connections.
Destination Levels
The runtime tries to resolve a destination in the order: Subaccount Level → Service Instance Level.
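The resolution order stated above can be sketched as a simple two-level lookup; this is an illustration of the documented order, not the actual runtime implementation:

```python
def resolve_destination(name, subaccount_destinations, instance_destinations):
    """Look up a destination by name, trying subaccount level first,
    then service instance level, per the order described above."""
    for level in (subaccount_destinations, instance_destinations):
        if name in level:
            return level[name]
    return None  # not configured on either level

# Hypothetical configurations on the two levels:
sub = {"erp": {"URL": "https://sub.example.com"}}
inst = {"erp": {"URL": "https://inst.example.com"},
        "crm": {"URL": "https://crm.example.com"}}
```

With these inputs, a destination defined on both levels resolves to the subaccount-level entry.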
Proxy Types
● Internet - The application can connect to an external REST or SOAP service on the Internet.
● OnPremise - The application can connect to an on-premise back-end system through the Cloud
Connector.
The proxy type used for a destination is specified by the destination property ProxyType. The default value is
Internet.
If you work in your local development environment behind a proxy server and want to use a service from the
Internet, you need to configure your proxy settings on JVM level. To do this, proceed as follows:
1. On the Servers view, double-click the added server and choose Overview to open the editor.
2. Click the Open Launch Configuration link.
3. Choose the (x)=Arguments tab page.
4. In the VM Arguments box, add the following row:

   -Dhttp.proxyHost=yourProxyHost -Dhttp.proxyPort=yourProxyPort -Dhttps.proxyHost=yourProxyHost -Dhttps.proxyPort=yourProxyPort
Configuring Authentication
When creating an HTTP destination, you can use different authentication types for access control:
Create and configure a Server Certificate destination for an application in the Cloud Foundry environment.
Context
Server certificate authentication is applicable for all client authentication types described below.
Note
TLS 1.2 became the default TLS version of HTTP destinations. If an HTTP destination is consumed by a Java application, the change takes effect after a restart. All HTTP destinations that use the HTTPS protocol and have ProxyType=Internet can be affected. A previous TLS version can be used by configuring the additional property TLSVersion=TLSv1.0 or TLSVersion=TLSv1.1.
Properties
TLSVersion Optional property. Can be used to specify the preferred TLS version to be used by the current destination. Since TLS 1.2 is not enabled by default on older Java versions, this property can be used to configure TLS 1.2 in case this is required by the server configured in this destination. It is usable only in HTTP destinations. Example: TLSVersion=TLSv1.2.
TrustStoreLocation Path to the JKS file which contains trusted certificates (Certificate Authorities) for authentication against a remote client.
1. When used in the local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
Note
The default JDK truststore is appended to the truststore defined in the destination configuration. As a result, the destination simultaneously uses both truststores. If the TrustStoreLocation property is not specified, the JDK truststore is used as a default truststore for the destination.
TrustStorePassword Password for the JKS trust store file. This property is mandatory in case TrustStoreLocation is used.
TrustAll If this property is set to TRUE in the destination, the server certificate will not be
checked for SSL connections. It is intended for test scenarios only, and should
not be used in production (since the SSL server certificate is not checked, the
server is not authenticated). The possible values are TRUE or FALSE; the default
value is FALSE (that is, if the property is not present at all).
HostnameVerifier Optional property. It has two values: Strict and BrowserCompatible. This property specifies how the server hostname matches the names stored inside the server's X.509 certificate. This verifying process is only applied if TLS or SSL protocols are used, and is not applied if the TrustAll property is specified. The default value (used if no value is explicitly specified) is Strict.
Note
You can upload TrustStore JKS files using the same command as for uploading a destination configuration property file. You only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
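On the Java side, the effect of the TLSVersion, TrustStoreLocation, and TrustStorePassword properties corresponds roughly to initializing an SSLContext for a given protocol with a specific truststore. The following is a minimal sketch using only the JDK, not the platform's actual implementation; in particular, it uses either the given JKS file or the JDK default truststore, whereas the Destination service appends the JDK truststore to the configured one. The file path and password are placeholders:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TlsSetup {
    // Builds an SSLContext for the given TLS protocol version, trusting
    // the certificates in the given JKS file (or the JDK defaults if null).
    public static SSLContext buildContext(String tlsVersion, String jksPath,
                                          char[] password) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        if (jksPath != null) {
            KeyStore trustStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream(jksPath)) {
                trustStore.load(in, password); // corresponds to TrustStorePassword
            }
            tmf.init(trustStore);
        } else {
            tmf.init((KeyStore) null); // fall back to the JDK default truststore
        }
        SSLContext ctx = SSLContext.getInstance(tlsVersion); // e.g. "TLSv1.2"
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```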
Configuration
Related Information
Create and configure an SAP Assertion SSO destination for an application in the Cloud Foundry environment.
Context
By default, all SAP systems accept SAP assertion tickets for user propagation.
Note
The SAP assertion ticket is a special type of logon ticket. For more information, see SAP Logon Tickets and
Logon Using Tickets.
The aim of the SAPAssertionSSO destination is to generate such an assertion ticket in order to propagate the
currently logged-on SAP Cloud Platform user to an SAP back-end system. You can only use this authentication
type if the user IDs on both sides are the same. The following diagram shows the elements of the configuration
process on the SAP Cloud Platform and in the corresponding back-end system:
Configuration Steps
1. Configure the back-end system so that it can accept SAP assertion tickets signed by a trusted X.509 key pair. For more information, see Configuring a Trust Relationship for SAP Assertion Tickets.
2. Create and configure a SAPAssertionSSO destination by using the properties listed below, and deploy it on
SAP Cloud Platform, see also Managing Destinations [page 26].
Property Description
ProxyType You can use both proxy types Internet and OnPremise.
Example
Name=weather
Type=HTTP
Authentication=SAPAssertionSSO
IssuerSID=JAV
IssuerClient=000
RecipientSID=SAP
RecipientClient=100
Certificate=MIICiDCCAkegAwI...rvHTQ\=\=
SigningKey=MIIBSwIB...RuqNKGA\=
Forward the identity of an on-demand user from a Cloud Foundry application to a back-end system.
Context
The aim of the PrincipalPropagation destination is to forward the identity of an on-demand user to the Cloud
Connector, and from there – to the back end of the relevant on-premise system. In this way, the on-demand
user will no longer need to provide his/her identity every time he/she makes a connection to an on-premise
system via the same Cloud Connector.
Configuration Steps
You can create and configure a PrincipalPropagation destination by using the properties listed below, and
deploy it on SAP Cloud Platform. For more information, see Managing Destinations [page 26].
Note
This property is only available for destination configurations created on the cloud.
Properties
Property Description
Example
Name=OnPremiseDestination
Type=HTTP
URL=http://virtualhost:80
Authentication=PrincipalPropagation
ProxyType=OnPremise
Create and configure an SAML Bearer Assertion destination for an application in the Cloud Foundry
environment.
Context
You can call an OAuth2-protected remote system/API and propagate a user ID to the remote system by using
the OAuth2SAMLBearerAssertion authentication type. The Destination service provides functionality for
automatic token retrieval and caching, by automating the construction and sending of the SAML assertion.
This simplifies application development, leaving you with only constructing the request to the remote system
by providing the token, which is fetched for you by the Destination service. For more information, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 170].
Properties
The table below lists the destination properties for OAuth2SAMLBearerAssertion authentication type. You can
find the values for these properties in the provider-specific documentation of OAuth-protected services.
Usually, only a subset of the optional properties is required by a particular service provider.
Property Description
Required
KeyStoreLocation Path to the JKS file that contains the client certificate(s) for authentication against a remote server.
1. When used in the local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
Note
You can upload KeyStore JKS files using the same command for uploading a destination configuration property file. You only need to specify the JKS file instead of the destination configuration file.
Note
You can configure the keystore properties only in the Destination editor on subaccount level.
KeyStorePassword The password for the key storage. This property is mandatory in case KeyStoreLocation is used.
tokenServiceURL The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL type, this property is interpreted in different ways during the automatic token retrieval:
● https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
● https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
● https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
● https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
Additional
nameQualifier Security domain of the user for which the access token is requested.
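The tenant interpretation described above can be approximated in a few lines: if the token service URL contains a {tenant} placeholder, it is substituted with the subaccount subdomain; otherwise, the subdomain is prefixed to the host name. The sketch below is an illustrative reimplementation for understanding the rule, not the Destination service's actual code, and resolveTokenUrl is a hypothetical helper name:

```java
public class TokenUrlResolver {
    // Approximates how a token service URL is resolved for a given
    // subaccount subdomain during automatic token retrieval.
    public static String resolveTokenUrl(String templateUrl, String subdomain) {
        if (templateUrl.contains("{tenant}")) {
            // Explicit placeholder: substitute it directly.
            return templateUrl.replace("{tenant}", subdomain);
        }
        // No placeholder: prefix the subdomain to the host name.
        return templateUrl.replaceFirst("^(https?://)", "$1" + subdomain + ".");
    }

    public static void main(String[] args) {
        System.out.println(resolveTokenUrl(
            "https://authentication.us10.hana.ondemand.com/oauth/token", "mytenant"));
        // https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
    }
}
```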
Example
The connectivity destination below provides HTTP access to the OData API of the SuccessFactors Jam.
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2SAMLBearerAssertion
tokenServiceURL=https://demo.sapjam.com/api/v1/auth/token
clientKey=<unique_generated_string>
audience=cubetree.com
nameQualifier=www.successfactors.com
Related Information
Find details about client authentication types for HTTP destinations in the Cloud Foundry environment.
Context
This section lists the supported client authentication types and the relevant supported properties.
No Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that does not
require authentication. The relevant property value is:
Authentication=NoAuthentication
Note
When a destination is using HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Basic Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that requires basic
authentication. The relevant property value is:
Authentication=BasicAuthentication
Property Description
Password Password
Preemptive If this property is not set or is set to TRUE (that is, the default behavior is to use preemptive sending), the authentication token is sent preemptively. Otherwise, it relies on the challenge from the server (401 HTTP code). The default value (used if no value is explicitly specified) is TRUE. For more information about preemptiveness, see http://tools.ietf.org/html/rfc2617#section-3.3 .
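Preemptive sending means the Authorization header is attached to the first request rather than waiting for a 401 challenge. The header value itself is simply the Base64 encoding of user:password, which can be sketched with the JDK alone:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Builds the value of the HTTP Authorization header for basic
    // authentication, as sent preemptively with the first request.
    public static String header(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
            .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(header("user", "pass")); // Basic dXNlcjpwYXNz
    }
}
```

In practice, the platform constructs this header for you from the User and Password destination properties; the sketch only illustrates what is sent over the wire.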
Note
When a destination is using the HTTPS protocol to connect to a Web resource, the JDK truststore is used as truststore for the destination.
Client Certificate Authentication
This is used for destinations that refer to a service on the Internet. The relevant property value is:
Authentication=ClientCertificateAuthentication
Property Description
KeyStoreLocation Path to the JKS file that contains the client certificate(s) for authentication against a remote server.
1. When used in the local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
KeyStorePassword The password for the key storage. This property is mandatory in case KeyStoreLocation is used.
Note
You can upload KeyStore JKS files using the same command for uploading a destination configuration property file. You only need to specify the JKS file instead of the destination configuration file.
Configuration
SAP Cloud Platform enables applications to use the OAuth client credentials flow for consuming OAuth-protected resources.
The client credentials are used to request an access token from an OAuth authorization server.
Note
The retrieved access token is cached and automatically renewed. When a token is about to expire, a new token is created shortly before the expiration of the old one.
Configuration Steps
You can create and configure an OAuth2ClientCredentials destination using the properties listed below, and
deploy it on SAP Cloud Platform. To create and configure a destination, follow the steps described in Managing
Destinations [page 26].
Properties
The table below lists the destination properties required for the OAuth2ClientCredentials authentication type.
Property Description
Required
Type Destination type. Use HTTP as value for all HTTP(S) destinations.
Additional
Note
When the OAuth authorization server is called, it accepts the trust settings of the destination, see Server
Certificate Authentication [page 39].
Example
Sample Code
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2ClientCredentials
tokenServiceURL=http://demo.sapjam.com/api/v1/auth/token
tokenServiceUser=tokenserviceuser
tokenServicePassword=pass
clientId=clientId
clientSecret=secret
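Under the hood, the client credentials flow is a single form-encoded POST to the token service URL; the Destination service performs this for you. As a hand-rolled illustration of the standard OAuth 2.0 request body (not the service's exact wire format, which may also use HTTP basic authentication for the client credentials), a sketch with the sample values above:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ClientCredentialsRequest {
    // Builds the application/x-www-form-urlencoded body of a standard
    // OAuth 2.0 client credentials token request.
    public static String tokenRequestBody(String clientId, String clientSecret) {
        return "grant_type=client_credentials"
            + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
            + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // POST this body to the tokenServiceURL with
        // Content-Type: application/x-www-form-urlencoded
        System.out.println(tokenRequestBody("clientId", "secret"));
    }
}
```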
Learn about the OAuth2UserTokenExchange authentication type for HTTP destinations in the Cloud Foundry
environment: use cases, supported properties and ways to retrieve an access token in an automated way.
Overview
When a user is logged into an application that needs to call another application and pass the user context, the
caller application must perform a user token exchange.
The user token exchange is a sequence of steps during which the initial user token is handed over to the
authorization server and, in exchange, another access token is returned.
The calling application first receives a refresh token out of which the actual user access token is created. The
resulting user access token contains the user and tenant context as well as technical access metadata, like
scopes, that are required for accessing the target application.
Using the OAuth2UserTokenExchange authentication type, the Destination service performs all these steps
automatically, which lets you simplify your application development in the Cloud Foundry environment.
Properties
To configure a destination of this type, you must specify all the required properties. You can create destinations
of this type via the cloud cockpit (Access the Destinations Editor [page 27]) or the Destination Service REST
API [page 174].
The following table shows the required properties along with their semantics.
Required
Token Service URL tokenServiceURL The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL Type, this property is interpreted in different ways during the automatic token retrieval.
Examples of interpreting the token service URL for the token service URL type Common, if the call to the Destination service is on behalf of a subaccount subdomain with value mytenant:
● https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
● https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
● https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
● https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
Name Name Name of the destination. Must be unique for the destination level.
Client Secret clientSecret OAuth 2.0 client secret to be used for the user access token exchange.
Client ID clientId OAuth 2.0 client ID to be used for the user access token exchange.
Proxy Type ProxyType Choose Internet. OnPremise is not supported for this authentication type.
Token Service URL Type tokenServiceURLType
● Choose Dedicated if the token service URL serves only a single tenant.
● Choose Common if the token service URL serves multiple tenants.
Optional
Additional
Scope scope The value of the OAuth 2.0 scope parameter expressed as a list of space-delimited, case-sensitive strings.
Related Information
RFC destinations provide the configuration required for communication with an on-premise ABAP system via
Remote Function Call. The RFC destination data is used by the Java Connector (JCo) version that is available
within SAP Cloud Platform to establish and manage the connection.
The RFC destination-specific configuration in SAP Cloud Platform consists of properties arranged in groups, as described below. The supported set of properties is a subset of the standard JCo properties in arbitrary environments. The configuration data is divided into the following groups:
The minimal configuration contains user logon properties and information identifying the target host. This
means you must provide at least a set of properties containing this information.
Name=SalesSystem
Type=RFC
jco.client.client=000
jco.client.lang=EN
jco.client.user=consultant
jco.client.passwd=<password>
jco.client.ashost=sales-system.cloud
jco.client.sysnr=42
jco.destination.pool_capacity=5
jco.destination.peak_limit=10
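A destination configuration like the one above is a plain Java properties file and can be inspected with java.util.Properties. A small sketch with the sample content inlined for illustration (in practice, the configuration is loaded from the deployed destination file):

```java
import java.io.StringReader;
import java.util.Properties;

public class RfcDestinationFile {
    public static void main(String[] args) throws Exception {
        // Inlined excerpt of the sample destination above.
        String config =
            "Name=SalesSystem\n" +
            "Type=RFC\n" +
            "jco.client.ashost=sales-system.cloud\n" +
            "jco.client.sysnr=42\n" +
            "jco.destination.pool_capacity=5\n";
        Properties props = new Properties();
        props.load(new StringReader(config));
        System.out.println(props.getProperty("jco.client.ashost")); // sales-system.cloud
    }
}
```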
Related Information
JCo properties that cover different types of user credentials, as well as the ABAP system client and the logon
language.
Property Description
Note
When working with the Destinations editor in the cockpit, enter the value in the <User> field. Do not enter it as an additional property.
Note
Passwords in systems of SAP NetWeaver releases lower than 7.0 are case-insensitive and can only be eight characters long. For releases 7.0 and higher, passwords are case-sensitive with a maximum length of 40.
Note
When working with the Destinations editor in the cockpit, enter this password in the <Password> field. Do not enter it as an additional property.
Note
For PrincipalPropagation, you should configure the properties jco.destination.repository.user and jco.destination.repository.passwd instead, since special permissions are needed (for metadata lookup in the back end) that not all business application users might have.
Learn about the JCo properties you can use to configure pooling in an RFC destination.
Overview
This group of JCo properties covers different settings for the behavior of the destination's connection pool. All
properties are optional.
Note
Turning on this check has a performance impact for stateless communication. This is due to an additional low-level ping to the server, which takes a certain amount of time for non-corrupted connections, depending on latency.
Pooling Details
● Each destination is associated with a connection factory and, if the pooling feature is used, with a
connection pool.
● Initially, the destination's connection pool is empty, and the JCo runtime does not preallocate any
connection. The first connection will be created when the first function module invocation is performed.
The peak_limit property describes how many connections can be created simultaneously, if applications
allocate connections in different sessions at the same time. A connection is allocated either when a
stateless function call is executed, or when a connection for a stateful call sequence is reserved within a
session.
● After the <peak_limit> number of connections has been allocated (in <peak_limit> number of sessions), the next session will wait for at most <max_get_client_time> milliseconds until a different session releases a connection (either finishes a stateless call or ends a stateful call sequence).
JCo properties that allow you to define the behavior of the repository that dynamically retrieves function module metadata.
All properties below are optional. Alternatively, you can create the metadata in the application code, using the
metadata factory methods within the JCo class, to avoid additional round-trips to the on-premise system.
Property Description
Note
When working with the Destinations editor in the cockpit, enter the value in the <Repository User> field. Do not enter it as an additional property.
Note
When working with the Destinations editor in the cockpit, enter this password in the <Repository Password> field. Do not enter it as an additional property.
Learn about the JCo properties you can use to configure the target system information in an RFC destination (Cloud Foundry environment).
Overview
Depending on the configuration you use, different properties are mandatory or optional.
Note
When modifying existing or creating new destinations, you must provide jco.destination.proxy_type
with value OnPremise.
Property Description
jco.client.sysnr Represents the so-called "system number" and has two digits. It identifies the logical port on which the application server is listening for incoming requests. For configurations on SAP Cloud Platform, the property must match a virtual port entry in the Cloud Connector Access Control configuration.
Note
The virtual port in the above access control entry must be named sapgw<##>, where <##> is the value of sysnr.
Property Description
Note
The virtual port in the above access control entry must be named sapms<###>, where <###> is the value of r3name.
JCo properties that allow you to control the connection to an ABAP system.
Property Description
jco.client.trace Defines whether protocol traces are created. Valid values are
1 (trace is on) and 0 (trace is off). The default value is 0.
jco.client.codepage Declares the 4-digit SAP codepage that is used when initiating the connection to the backend. The default value is 1100 (comparable to iso-8859-1). It is important to provide this property if the password that is used contains characters that cannot be represented in 1100.
Note
When working with the Destinations editor in the cockpit, enter the Cloud Connector location ID in the <Location ID> field. Do not enter it as an additional property.
Enable single sign-on by forwarding the identity of cloud users to a remote system or service (Cloud Foundry
environment).
Overview
Scenarios
● The Connectivity service provides a secure way of forwarding the identity of a cloud user to an on-premise
system through the Cloud Connector. This process is called principal propagation (also known as user
propagation). It uses SAML tokens as exchange format for the user information. User mapping takes place
in the back-end system. The SAML token is forwarded directly or by using an X.509 certificate.
For on-premise scenarios, you can create and configure a connectivity destination that uses the authentication type PrincipalPropagation, see Create HTTP Destinations [page 29].
Note
This scenario is only applicable if you want to connect to your on-premise system via the Cloud
Connector.
● Using the Destination service, you can configure user propagation to another remote (non on-premise)
system or service through OAuth2SAMLBearerAssertion authentication.
Configuration
Developer
Download and configure X.509 certificates as a prerequisite for user propagation from the Cloud Foundry
environment.
Setting up a trust scenario for user propagation requires the exchange of public keys and certificates between
the affected systems, as well as the respective trust configuration within these systems. This enables you to
use an HTTP destination with authentication type OAuth2SAMLBearerAssertion for the communication.
A trust scenario can include user propagation from the Cloud Foundry environment to another SAP Cloud
Platform environment, to another Cloud Foundry subaccount, or to a remote system outside SAP Cloud
Platform, like S/4HANA Cloud, C4C, SuccessFactors, and others.
Set Up a Certificate
Download and save locally the identifying X.509 certificate of the subaccount in the Cloud Foundry environment.
Renew a Certificate
If the X.509 certificate validity is about to expire, you can renew the certificate and extend its validity by
another 2 years.
5. Choose the Download Trust button and save locally the X.509 certificate that identifies this subaccount.
6. Configure the renewed X.509 certificate in the target system to which you want to propagate the user.
This tutorial lets you configure user propagation (single sign-on) step-by-step, using OAuth communication
from the SAP Cloud Platform Cloud Foundry environment to S/4HANA Cloud. As OAuth mechanism, you use
the OAuth 2.0 SAML Bearer Assertion Flow.
Scenario
As a customer, you own an SAP Cloud Platform global account and have created at least one subaccount
therein. Within the subaccount, you have deployed a Web application. Authentication against the Web
application is based on a trusted identity provider (IdP) that you need to configure for the subaccount.
On the S/4HANA Cloud side, you own an S/4HANA ABAP tenant. Authentication against the S/4HANA ABAP
tenant is based on the trusted IdP which is always your Identity Authentication Service (IAS) tenant. Typically,
you will configure this S/4HANA Cloud Identity tenant to forward authentication requests to your corporate IdP.
Prerequisites
● You have an S/4HANA Cloud tenant and a user with the following business catalogs assigned:
SAP_BCR_CORE_EXT Extensibility
● You have administrator permission for the configured S/4HANA Cloud IAS tenant.
● You have a subaccount and PaaS tenant in the SAP Cloud Platform Cloud Foundry environment.
Next Step
Perform these steps to set up user propagation between S/4HANA Cloud and the SAP Cloud Platform Cloud
Foundry environment.
Tasks
1. Configure Single Sign-On between S/4HANA Cloud and the Cloud Foundry Organization on SAP Cloud
Platform [page 68]
2. Configure OAuth Communication [page 69]
3. Configure Communication Settings in S/4HANA Cloud [page 69]
4. Configure Communication Settings in SAP Cloud Platform [page 73]
5. Consume the Destination and Execute the Scenario [page 75]
Configure Single Sign-On between S/4HANA Cloud and the Cloud Foundry
Organization on SAP Cloud Platform
To configure SSO with S/4HANA, you must configure trust between the S/4HANA IAS tenant and the Cloud Foundry organization, see Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page 2283].
Download the certificate from your Cloud Foundry subaccount on SAP Cloud Platform.
1. From the SAP Cloud Platform cockpit, choose Cloud Foundry environment > your global account.
2. Choose or create a subaccount, and from your left-side subaccount menu, go to Connectivity > Destinations.
3. Press the Download Trust button.
Note
For the complete list of standard regions, see Regions and Hosts Available for the Neo Environment
[page 14].
6. Upload the subaccount certificate that you have downloaded before from the SAP Cloud Platform
cockpit.
1. From the SAP Cloud Platform cockpit, choose Cloud Foundry environment > your global account.
2. Choose your subaccount, and from the left-side subaccount menu, go to Connectivity > Destinations.
3. Press the New Destination button.
4. Enter the following parameters for your destination:
Parameter Value
Type HTTP
Authentication OAuth2SAMLBearerAssertion
Note
This URL does not contain my300117-api, but only my300117.
Client Key The name of the communication user you have in the SAP S/4HANA ABAP tenant, e.g. VIKTOR.
Token Service URL For this field, you need the part of the URL before /sap/... that you copied before from the Communication Arrangement's service URL/service interface:
https://my300117-api.s4hana.ondemand.com/sap/bc/sec/oauth2/token?scope=ADT_0001%20%2fUI5%2fAPP_INDEX_0001%20%2fIWFND%2fSG_MED_CATALOG_0002
Note
This URL is pointing to the scope of the Inbound Services of the communication scenario that we have defined when creating the communication arrangement. The scopes have a fixed naming and are separated by %20 for the space and %2f for the slash:
Token Service User The same user as for the Client Key parameter.
System User This parameter is not used, leave the field empty.
authnContextClassRef urn:oasis:names:tc:SAML:2.0:ac:classes:X509
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 170]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 159] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
the target application. See "Find Destination" Response Structure [page 167] for details on the structure of
the response from the Destination service.
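The third step above amounts to attaching the access token returned by the Destination service as a bearer token when calling the target URL. A minimal sketch with the JDK HTTP client that only builds the request without sending it; the URL and token values are placeholders for what the "find destination" response actually returns:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class TargetRequest {
    // Builds (but does not send) the request to the target application,
    // using the URL and access token extracted from the Destination
    // service response.
    public static HttpRequest build(String targetUrl, String accessToken) {
        return HttpRequest.newBuilder()
            .uri(URI.create(targetUrl))
            .header("Authorization", "Bearer " + accessToken)
            .GET()
            .build();
    }
}
```

Sending the request is then a matter of passing it to a java.net.http.HttpClient instance.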
This tutorial lets you configure user propagation step-by-step, from the SAP Cloud Platform Cloud Foundry
environment to SAP SuccessFactors.
Steps
Create and Consume a Destination for the Cloud Foundry Application [page 80]
Scenario
● From an application in the SAP Cloud Platform Cloud Foundry environment, you want to consume OData
APIs exposed by SuccessFactors modules.
● To enable single sign-on, you want to propagate the identity of the application's logged-in user to
SuccessFactors.
Prerequisites
Concept Overview
A user logs in to the Cloud Foundry application. Its identity is established by an Identity Provider (IdP). This
could be the default IdP for the Cloud Foundry subaccount or a trusted IdP, for example the SuccessFactors
IdP.
To accept the SAML assertion and return an access token, a trust relationship must be set up between
SuccessFactors and the Cloud Foundry subaccount public key. You can achieve this by providing the Cloud
Foundry subaccount X.509 certificate when creating the OAuth client in SuccessFactors.
Users that are propagated from the Cloud Foundry application are verified by the SuccessFactors OAuth server before granting them access tokens. This means that users who do not exist in the SuccessFactors user store will be rejected.
For valid users, the OAuth server accepts the SAML assertion and returns an OAuth access token. In turn, the
Destination service returns both the destination and the access token to the requesting application. The
application then uses the destination properties and the access token to consume SuccessFactors APIs.
Next Steps
Create an OAuth client in SuccessFactors for user propagation from the SAP Cloud Platform Cloud Foundry
environment.
4. In the field <X.509 Certificate>, paste the certificate that you downloaded in step 1.
5. Choose Register to save the OAuth client.
6. Now, locate your client in the list by its application name, choose View in the Actions column and take note
of the <API Key> that has been generated for it. You will use this key later in the OAuth2SAMLBearer
destination in the Cloud Foundry environment.
Next Step
● Create and Consume a Destination for the Cloud Foundry Application [page 80]
Create and consume an OAuth2SAMLBearerAssertion destination for your Cloud Foundry application.
1. In the cloud cockpit, navigate to your Cloud Foundry subaccount and from the left-side subaccount menu, choose Connectivity > Destinations. Choose New Destination and enter a name. Then provide the following settings:
○ <URL>: URL of the SuccessFactors OData API you want to consume.
○ <Authentication>: OAuth2SAMLBearerAssertion
○ <Audience>: www.successfactors.com
○ <Client Key>: API Key of the OAuth client you created in SuccessFactors.
○ <Token Service URL>: API endpoint URL for the SuccessFactors instance, followed by /oauth/token and the URL parameter company_id with the company ID, for example https://apisalesdemo2.successfactors.eu/oauth/token?company_id=SFPART019820.
2. Enter three additional properties:
○ apiKey: the API Key of the OAuth client you created in SuccessFactors.
○ authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
○ nameIdFormat:
○ urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified, if the user ID will be
propagated to a SuccessFactors application, or
○ urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress, if the user e-mail will be
propagated to SuccessFactors.
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 170]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 159] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
the target application. See "Find Destination" Response Structure [page 167] for details on the structure of
the response from the Destination service.
Propagate the identity of a user between Cloud Foundry applications that are located in different subaccounts
or regions.
Steps
Scenario
Prerequisites
● You have two applications (application 1 and application 2) deployed in Cloud Foundry spaces in different
subaccounts in the same region or even in different regions.
● You have an instance of the Destination service bound to application 1.
● You have a user JWT (JSON Web Token) in application 1 where the call to application 2 is performed.
Concept
The identity of a user logged in to application 1 is established by an identity provider (IdP) of the respective
subaccount (subaccount 1).
Note
You can use the default IdP for the Cloud Foundry subaccount or a custom-configured IdP.
When the application retrieves an OAuthSAMLBearer destination, the user is made available to the Cloud
Foundry Destination service by means of a user exchange JWT. See also User Propagation via SAML 2.0 Bearer
Assertion Flow [page 170].
The service then wraps the user identity in a SAML assertion, signs it with subaccount 1's private key (which is
part of the special key pair for the subaccount, maintained by the Destination service) and sends it to the
authentication endpoint of subaccount 2, which hosts application 2.
To make the authentication endpoint accept the SAML assertion and return an access token, you must set up a
trust relationship between the two subaccounts, by using subaccount 1's public key. You can achieve this by
assembling the SAML IdP metadata, using subaccount 1's public key and setting up a new trust configuration
for subaccount 2, which is based on that metadata.
This way, users propagated from application 1 can be verified by subaccount 2's IdP before granting them
access tokens with their respective scopes in the context of subaccount 2.
The authentication endpoint accepts the SAML assertion and returns an OAuth access token. In turn, the
Destination service returns both the destination configuration and the access token to the requesting
application (application 1). Application 1 then uses the destination properties and the access token to call
application 2.
Procedure
1. Download the X.509 certificate of subaccount 1. For instructions, see Set up Trust Between Systems [page
65]. The content of the file looks like this:
Sample Code
<ns3:EntityDescriptor
ID="cfapps.${S1_LANDSCAPE_HOST}/${S1_SUBACCOUNT_ID}"
entityID="cfapps.${S1_LANDSCAPE_HOST}/${S1_SUBACCOUNT_ID}"
xmlns="http://www.w3.org/2000/09/xmldsig#"
xmlns:ns2="http://www.w3.org/2001/04/xmlenc#"
xmlns:ns4="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ns3="urn:oasis:names:tc:SAML:2.0:metadata">
<ns3:SPSSODescriptor AuthnRequestsSigned="true"
Note
Additionally, you must add users to this new trust configuration and assign appropriate scopes to them.
○ <Name>: Choose any name for your destination. You will use this name to request the destination from the Destination service.
○ <Type>: HTTP
○ <Authentication>: OAuth2SAMLBearerAssertion
○ <Audience>: ${S2_ALIAS}
○ <Token Service URL>: The URL of the XSUAA instance in subaccount 2, with /oauth/token/alias/${S2_ALIAS} appended. The base URL can be acquired via a binding or a service key.
○ <Token Service User>: The clientid of the XSUAA instance in subaccount 2. Can be acquired via a binding or a service key.
Additional Properties:
○ nameIdFormat: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
○ authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
Example
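Based on the properties above, a destination configured for this scenario might look like the following sketch (the name app2-destination is arbitrary, all angle-bracket values are placeholders, and the Token Service Password line is the matching clientsecret, which is an assumption here):

```
Name=app2-destination
Type=HTTP
URL=https://<URL of application 2>
Authentication=OAuth2SAMLBearerAssertion
Audience=${S2_ALIAS}
Token Service URL=https://<XSUAA URL of subaccount 2>/oauth/token/alias/${S2_ALIAS}
Token Service User=<clientid of the XSUAA instance in subaccount 2>
Token Service Password=<clientsecret of the XSUAA instance in subaccount 2>
nameIdFormat=urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
authnContextClassRef=urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
```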
7. Choose Save.
To perform the scenario and execute the request from application 1, targeting application 2, proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 170]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from application 1 to the Destination service. For details, see
Consuming the Destination Service [page 159] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
application 2. See "Find Destination" Response Structure [page 167] for details on the structure of the
response from the Destination service.
Propagate the identity of a user from a Cloud Foundry application to a Neo application.
Steps
1. Configure a Local Service Provider for the Neo Subaccount [page 91]
2. Establish Trust between Cloud Foundry and Neo Subaccounts [page 91]
3. Create an OAuth Client for the Neo Application [page 93]
4. Create an OAuth2SAMLBearerAssertion Destination for the Cloud Foundry Application [page 94]
5. Consume the Destination and Execute the Scenario [page 96]
Scenario
Prerequisites
Concept
The identity of a user logged in to the Cloud Foundry application is established by an identity provider (IdP).
Note
You can use the default IdP of the Cloud Foundry subaccount or any trusted IdP, for example, the Neo
subaccount IdP.
When the application retrieves an OAuthSAMLBearer destination, the user is made available to the Cloud
Foundry Destination service by means of a user exchange JWT (JSON Web Token).
The service then wraps the user identity in a SAML assertion, signs it with the Cloud Foundry subaccount
private key and sends it to the token endpoint of the OAuth service for the Neo application.
To make the Neo application accept the SAML assertion, you must set up a trust relationship between the Neo
subaccount and the Cloud Foundry subaccount, using the Cloud Foundry subaccount's public key. You can
achieve this by adding the Cloud Foundry subaccount's X.509 certificate as a trusted IdP in the Neo
subaccount. Thus, the Cloud Foundry application starts acting as an IdP, and any users propagated by it are
accepted by the Neo application, even users that do not exist in the IdP.
The OAuth service accepts the SAML assertion and returns an OAuth access token. In turn, the Destination
service returns both the destination and the access token to the requesting application. The application then
uses the destination properties and the access token to consume the remote API.
Procedure
1. In the cockpit, navigate to your Neo subaccount, choose Security > Trust from the left menu, and go to
the Local Service Provider tab on the right. For <Configuration Type>, select Custom and choose Generate
Key Pair.
2. Save the configuration.
Note
When you choose Custom for the Local Service Provider type, the default IdP (SAP ID service) will no
longer be available. If your scenario requires login through the SAP ID service as well, you can safely skip
this step and leave the default settings for the Local Service Provider.
In the <Name> field, enter the cfapps host followed by the subaccount GUID, for example
cfapps.sap.hana.ondemand.com/bf7f2876-5080-40ad-a56b-fff3ee5cff9d. This information is
available in the cockpit, on the overview page of your Cloud Foundry subaccount:
In the <Signing Certificate> field, paste the X.509 certificate you downloaded in step 1. Make sure
you remove the BEGIN CERTIFICATE and END CERTIFICATE strings. Then check Only for IDP-Initiated SSO
and save the configuration:
1. In the cockpit, navigate to the Neo subaccount, choose Security > OAuth from the left menu, select
the Client tab, and choose Register New Client:
Note
Make sure you remember the secret, because it will not be visible later.
6. <Redirect URI> is irrelevant for the OAuth SAML Bearer Assertion flow, so you can provide any URL in
the Cloud Foundry application.
1. In the cockpit, navigate to the Cloud Foundry subaccount, choose Connectivity > Destinations from
the left menu, and choose New Destination.
2. Enter a <Name> for the destination, then provide:
○ <URL>: the URL of the Neo application/API you want to consume.
○ <Authentication>: OAuth2SAMLBearerAssertion
○ <Audience>: can be taken from the Neo subaccount: choose Security > Trust from the left
menu, go to the Local Service Provider tab, and copy the value of <Local Provider Name>:
○ <Token Service User>: the ID of the OAuth client you created for the Neo application.
○ <Token Service Password>: the OAuth client secret.
● authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
● nameIdFormat:
○ urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified, if the user ID is propagated to
the Neo application, or
○ urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress, if the user email is propagated
to the Neo application.
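Put together, such a destination might look like the following sketch (the name neo-app-destination is arbitrary, and all angle-bracket values are placeholders for the values from your Neo subaccount and OAuth client):

```
Name=neo-app-destination
Type=HTTP
URL=https://<URL of the Neo application/API>
Authentication=OAuth2SAMLBearerAssertion
Audience=<Local Provider Name of the Neo subaccount>
Token Service URL=<token endpoint of the Neo OAuth service>
Token Service User=<OAuth client ID>
Token Service Password=<OAuth client secret>
authnContextClassRef=urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
nameIdFormat=urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
```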
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 170]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 159] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
the target application. See "Find Destination" Response Structure [page 167] for details on the structure of
the response from the Destination service.
Using multitenancy for Cloud Foundry applications that require a connection to a remote service or on-premise
application.
Endpoint Configuration
Applications that require a connection to a remote service can use the Connectivity service to configure HTTP
or RFC endpoints. In a provider-managed application, such an endpoint can be defined once by the
application provider (Provider-Specific Destination [page 97]), or by each application subscriber (Subscriber-
Specific Destination [page 98]).
If the application needs to use the same endpoint, independently from the current application subscriber, the
destination that contains the endpoint configuration is uploaded by the application provider. If the endpoint
should be different for each application subscriber, the destination can be uploaded by each particular
application subscriber.
Note
This connectivity type also fully applies to on-demand to on-premise connectivity.
Destination Levels
Level Lookup
Service instance level Lookup via a particular service instance (in the provider or subscriber subaccount associated with this service instance).
When the application accesses the destination at runtime, the Connectivity service does the following:
Provider-Specific Destination
To use the Connectivity service in your application, you need an instance of the service.
Procedure
You have two options for creating a service instance – using the CLI or using the SAP Cloud Platform cockpit:
● Create and Bind a Service Instance from the CLI [page 98]
○ Example [page 99]
○ Result [page 100]
● Create and Bind a Service Instance from the Cockpit [page 99]
○ Result [page 100]
Use the following CLI commands to create a service instance and bind it to an application:
1. cf marketplace
Example
To bind an instance of the Connectivity service "lite" plan to application "myapp", use the following commands
on the Cloud Foundry command line:
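The commands would look roughly like this (the instance name myconnectivity is arbitrary, and "myapp" stands for your deployed application):

```shell
cf create-service connectivity lite myconnectivity
cf bind-service myapp myconnectivity
cf restage myapp
```

The restage makes the new binding visible in the application's environment variables.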
Assuming that you have already deployed your application to the platform, follow these steps to create a
service instance and bind it to an application:
Result
When the binding is created, the application gets the corresponding connectivity credentials in its environment
variables:
Sample Code
"VCAP_SERVICES": {
"connectivity": [
{
"credentials": {
"onpremise_proxy_host": "10.0.85.1",
"onpremise_proxy_port": "20003",
"onpremise_proxy_http_port": "20003",
"clientid": "sb-connectivity-app",
"clientsecret": "KXqObiN6d9gLA4cS2rOVAahPCX0=",
"token_service_url": "<token_service_url>",
Note
To use the Destination service in your application, you need an instance of the service.
Procedure
You have two options for creating a service instance – using the CLI or using the SAP Cloud Platform cockpit:
● Create and Bind a Service Instance from the CLI [page 101]
○ Result [page 103]
● Create and Bind a Service Instance from the Cockpit [page 102]
○ Result [page 103]
Use the following CLI commands to create a service instance and bind it to an application:
1. cf marketplace
2. cf create-service destination <service-plan> <service-name>
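For example, to create a "lite" plan instance named dest-lite and bind it to an application "myapp" (both names are placeholders):

```shell
cf create-service destination lite dest-lite
cf bind-service myapp dest-lite
cf restage myapp
```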
Assuming that you have already deployed your application to the platform, follow these steps to create a
service instance and bind it to an application:
7. On the next page of the wizard, select the Create new instance radio button.
8. From the drop-down box, select a service plan, then choose Next.
9. The next page is used for specifying user-provided parameters in JSON format. If you do not want to do
that, skip this step by choosing Next.
10. In the <Instance Name> textbox, enter a unique name for your service instance, for example, dest-lite.
11. Choose Finish.
When the binding is created, the application gets the corresponding destination credentials in its environment
variables:
Sample Code
"VCAP_SERVICES": {
"destination": [
{
"credentials": {
...
"uri": "https://destination-configuration.cfapps.<region host>",
...
},
"syslog_drain_url": null,
"volume_mounts": [],
"label": "destination",
"provider": null,
"plan": "lite",
"name": "dest-lite",
"tags": [
"destination",
"document"
]
}
],
Consume the Connectivity service and the Destination service from an application in the Cloud Foundry
environment.
Task Description
Consuming the Connectivity Service [page 104] Connect your Cloud Foundry application to an on-premise
system.
Consuming the Destination Service [page 159] Retrieve and store externalized technical information about
the destination that is required to consume a target remote
service from your application.
Note
To use the Connectivity service with a protocol other than HTTP, see
Tasks
Developer
1. … [page 107]
2. Provide the Destination Information [page 107]
3. Set up the HTTP Proxy for On-Premise Connectivity
[page 108]
Overview
Using the Connectivity service, you can connect your Cloud Foundry application to an on-premise system
through the Cloud Connector. To achieve this, you must provide the required information about the target
system (destination), and set up an HTTP proxy that lets your application access the on-premise system.
Prerequisites
● You must be a Global Account member to connect through the Connectivity service with the Cloud
Connector. Security Administrators (who must be either Global Account members or Cloud Foundry
organization/space members) can also do this.
● You have installed and configured a Cloud Connector in your on-premise landscape for the scenario you
want to use. See Installation [page 351] and Configuration [page 381].
● You have deployed an application in a landscape of the Cloud Foundry environment.
● Your application is secured as described in Configuring Authentication for Java API Using Spring Security
[page 2272].
Note
Currently, the only supported protocol for connecting the Cloud Foundry environment to an on-premise
system is HTTP. HTTPS is not needed, since the tunnel used by the Cloud Connector is TLS-encrypted.
Basic Steps
To consume the Connectivity service from your Cloud Foundry application, perform the following basic steps:
Consuming the Connectivity service requires credentials from the xsuaa and Connectivity service instances
which are bound to the application. By binding the application to service instances of the xsuaa and
Connectivity service as described in the prerequisites, these credentials become part of the environment
variables of the application. You can access them as follows:
Sample Code
Note
If you have multiple instances of the same service bound to the application, you must perform additional
filtering to extract the correct credential from jsonArr. You must go through the elements of jsonArr and
find the one matching the correct instance name.
This code stores a JSON object in the credentials variable. Additional parsing is required to extract the value for
a specific key.
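A minimal, self-contained sketch of this kind of lookup is shown below. It uses plain string scanning so it runs without a JSON library; in a real application you should parse VCAP_SERVICES with a proper JSON parser (the original sample works on a JSON array, referred to as jsonArr). The helper name credentialsOf is illustrative:

```java
public class ServiceCredentials {
    // Simplified sketch: returns the "credentials" JSON object of the first
    // bound instance of the given service from a VCAP_SERVICES string.
    public static String credentialsOf(String vcapServices, String serviceName) {
        int svc = vcapServices.indexOf("\"" + serviceName + "\"");
        if (svc < 0) return null;
        int credIdx = vcapServices.indexOf("\"credentials\"", svc);
        if (credIdx < 0) return null;
        int start = vcapServices.indexOf('{', credIdx);
        int depth = 0;
        for (int i = start; i < vcapServices.length(); i++) {
            char c = vcapServices.charAt(i);
            if (c == '{') depth++;
            else if (c == '}' && --depth == 0) {
                return vcapServices.substring(start, i + 1);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String vcap = System.getenv("VCAP_SERVICES"); // set by Cloud Foundry
        if (vcap != null) {
            System.out.println(credentialsOf(vcap, "connectivity"));
        }
    }
}
```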
Note
We refer to the result of the above code block as connectivityCredentials, when called for
connectivity, and xsuaaCredentials for xsuaa.
To consume the Connectivity service, you must provide some information about your on-premise system and
the system mappings for it in the Cloud Connector. You require the following:
● The endpoint in the Cloud Connector (virtual host and virtual port) and accessible URL paths on it
(destinations). See Configure Access Control (HTTP) [page 425].
We recommend that you use the Destination service (see Consuming the Destination Service [page 159]) to
procure this information. However, using the Destination service is optional. You can also provide (look up) this
information in another appropriate way.
Proxy Setup
The Connectivity service provides a standard HTTP proxy for on-premise connectivity that is accessible by any
application. Proxy host and port are available as the environment variables <onpremise_proxy_host> and
<onpremise_proxy_http_port>. You can set up the on-premise HTTP proxy like this:
Sample Code
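A sketch of this setup in Java, using the standard java.net.Proxy type. The host and port literals mirror the sample credentials shown earlier in this section; in a real application they come from the environment variables:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class OnPremiseProxy {
    // Builds a java.net.Proxy from the Connectivity service credentials.
    public static Proxy buildProxy(String host, int port) {
        // createUnresolved defers DNS resolution to connection time
        return new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved(host, port));
    }

    public static void main(String[] args) {
        // onpremise_proxy_host / onpremise_proxy_http_port from VCAP_SERVICES
        Proxy proxy = buildProxy("10.0.85.1", 20003);
        System.out.println(proxy.type() + " " + proxy.address());
    }
}
```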
Note
To make calls to on-premise services configured in the Cloud Connector through the HTTP proxy, you must
authorize at the HTTP proxy. For this, the OAuth Client Credentials flow is used: applications must create an
OAuth access token using the parameters clientid and clientsecret that are provided by the
Connectivity service in the environment, as shown in the example code below. When the application has
retrieved the access token, it must pass the token to the connectivity proxy using the Proxy-Authorization
header.
Sample Code
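Schematically, a proxied request then carries the token like this (the virtual host, port, and path are placeholders for the values configured in the Cloud Connector):

```
GET http://virtualhost:1234/path HTTP/1.1
Proxy-Authorization: Bearer <access token from the client credentials flow>
```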
Note
Depending on the required authentication type for the desired on-premise resource, you may have to set an
additional header in your request. This header provides the required information for the authentication process
against the on-premise resource. See Authentication to the On-Premise System [page 111].
Note
This is an advanced option when using more than one Cloud Connector for a subaccount. For more
information how to set the location ID in the Cloud Connector, see Managing Subaccounts [page 392],
step 4 in section Subaccount Dashboard.
As of Cloud Connector 2.9.0, you can connect multiple Cloud Connectors to a subaccount if their location
ID is different. Using the header SAP-Connectivity-SCC-Location_ID you can specify the Cloud
Connector over which the connection should be opened. If this header is not specified, the connection is
opened to the Cloud Connector that is connected without any location ID. This also applies for all Cloud
Connector versions prior to 2.9.0. For example:
Sample Code
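For example, to route a request through the Cloud Connector registered with location ID scc01 (a placeholder), the application would add a header such as:

```
SAP-Connectivity-SCC-Location_ID: scc01
```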
To consume the Connectivity service from a SaaS application in a multitenant way, the only requirement is
that the SaaS application returns the Connectivity service as a dependent service in its dependencies list.
For more information about the subscription flow, see Develop the Multitenant Business Application.
Procedure
For each authentication type, you must provide specific information in the request to the virtual host:
Note
For the principal propagation scenario, the SAP-Connectivity-Authentication header is only required if you
do not use the user exchange token flow, see Configure Principal Propagation via User Exchange Token
[page 113].
Applications must propagate the user JWT token (userToken) using the SAP-Connectivity-Authentication
header. This is required for the Connectivity service to open a tunnel to the subaccount for which a
Sample Code
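Schematically, a principal propagation request to the proxy then carries both headers (virtual host, port, and path are placeholders):

```
GET http://virtualhost:1234/path HTTP/1.1
Proxy-Authorization: Bearer <access token>
SAP-Connectivity-Authentication: Bearer <user JWT>
```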
Authentication Types
No Authentication
If the on-premise system does not need to identify the user, you should use this authentication type. It requires
no additional information to be passed with the request.
Principal Propagation
When you open the application router to access your cloud application, you are prompted to log in. Doing so
means that the cloud application now knows your identity. Principal propagation forwards this identity via the
Cloud Connector to the on-premise system. This information is then used to grant access without additional
input from the user. To achieve this, you do not need to send any additional information from your application,
but you must set up the Cloud Connector for principal propagation. See Configuring Principal Propagation
[page 401].
Basic Authentication
If the on-premise system requires username and password to grant access, the cloud application must provide
these data using the Authorization header. The following example shows how to do this:
Sample Code
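A sketch of building that header in Java, using the standard Base64 encoder ("user" and "password" are placeholders for the on-premise credentials):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value for the Authorization header from user and password.
    public static String basicAuth(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Set this value as the Authorization header of the request
        System.out.println(basicAuth("user", "password"));
    }
}
```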
Configure a user exchange token for principal propagation (user propagation) from your Cloud Foundry
application to an on-premise system.
Tasks
Scenario
For a Cloud Foundry application that uses the Connectivity service, you want the currently logged-in user to be
propagated to an on-premise system. For more information, see Principal Propagation [page 62].
1. The application sends two headers to the Connectivity proxy (see also Consuming the Connectivity Service
[page 104]):
Note
2. Recommended: The application sends one header containing the user exchange token to the Connectivity
proxy:
To propagate a user to an on-premise system, you must call the Connectivity proxy using a special JWT (JSON
Web Token). This token is obtained via the user_token OAuth grant.
Example: Obtaining a Refresh Token for the Current User [page 114]
Example: Exchanging the Refresh Token for an Access Token [page 115]
Example: Calling the Connectivity Proxy with the User Exchange Token [page 116]
Sample Code
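The request below is a sketch of the user_token grant call; the token endpoint and client ID come from the XSUAA and Connectivity service bindings, and the current user's JWT authorizes the request:

```
POST https://<token_service_url>/oauth/token
Authorization: Bearer <current user JWT>
Content-Type: application/x-www-form-urlencoded

grant_type=user_token&client_id=<clientid of the Connectivity service instance>
```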
Response:
{
"access_token": null,
"token_type": "bearer",
"refresh_token": "31da6de09bfc484db7943c15156d643c-r",
"expires_in": 43199,
"scope": "openid",
"ext_attr": {
"enhancer": "XSUAA",
"serviceinstanceid":"c79fa84a-2a4c-4542-beb0-99d67936271c"
},
"jti": "31da6de09bfc484db7943c15156d643c-r"
}
Sample Code
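A sketch of the refresh token exchange follows. This call authenticates with the clientid and clientsecret of the Connectivity service instance, and uses the refresh token returned by the previous call:

```
POST https://<token_service_url>/oauth/token
Authorization: Basic <base64(clientid:clientsecret)>
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token&refresh_token=31da6de09bfc484db7943c15156d643c-r
```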
Response:
{
"access_token": "<JWT>",
"token_type": "bearer",
"refresh_token": "<another JWT>",
"expires_in": "43199",
"scope": "openid",
"ext_attr": {
"enhancer": "XSUAA",
"serviceinstanceid": "c79fa84a-2a4c-4542-beb0-99d67936271c"
},
"jti": "7cc917b8bf6347a2aa18d7ac8f38a1c2"
}
The JWT in the "access_token" property, also referred to as the user exchange token, now contains user details
and can be used to consume the Connectivity service.
For Java applications, a library is available that implements the user exchange OAuth flow. The example below
shows how you can obtain the userExchangeAccessToken using the com.sap.xs2.security library:
<dependency>
<groupId>com.sap.xs2.security</groupId>
<artifactId>java-container-security-api</artifactId>
<version>...</version>
</dependency>
<dependency>
Note
Sample Code
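The flow can be sketched as follows. The method names follow the com.sap.xs2.security container API as used in SAP samples, but treat them as assumptions and verify them against the library version you use; the clientid, clientsecret, and XSUAA URL variables come from the service bindings:

```java
import com.sap.xs2.security.container.SecurityContext;
import com.sap.xs2.security.container.UserInfo;

// Runs inside a secured servlet request, after Spring Security has
// established the security context for the current user.
UserInfo userInfo = SecurityContext.getUserInfo();
// Exchange the current user's token for one addressed to the
// Connectivity service (credentials from the Connectivity binding).
String userExchangeAccessToken = userInfo.requestTokenForClient(
        connectivityClientId, connectivityClientSecret, xsuaaUrl);
```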
After obtaining the userExchangeAccessToken, you can use it to consume the Connectivity service.
Example: Calling the Connectivity Proxy with the User Exchange Token
As a prerequisite for this step, you must configure the Connectivity proxy to be used by your client (see Set up
the HTTP Proxy for On-Premise Connectivity [page 108]). Once the application has retrieved the user exchange
token, it must pass the token to the Connectivity proxy via the Proxy-Authorization header. In this
example, we use urlConnection as a client.
Sample Code
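A self-contained sketch of preparing such a call with HttpURLConnection follows. The virtual URL, proxy host, port, and token are placeholders; the request is only prepared here, not sent:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyCall {
    // Prepares (but does not send) a request to a virtual on-premise URL
    // through the Connectivity proxy, authorized with the user exchange token.
    public static HttpURLConnection prepare(String virtualUrl, String proxyHost,
            int proxyPort, String userExchangeAccessToken) throws IOException {
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved(proxyHost, proxyPort));
        HttpURLConnection conn =
                (HttpURLConnection) new URL(virtualUrl).openConnection(proxy);
        // The Connectivity proxy authorizes the call via this header
        conn.setRequestProperty("Proxy-Authorization",
                "Bearer " + userExchangeAccessToken);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = prepare("http://virtualhost:1234/path",
                "10.0.85.1", 20003, "<user exchange token>");
        System.out.println(conn.getRequestProperty("Proxy-Authorization"));
        // conn.connect() would now open the tunnel via the Cloud Connector
    }
}
```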
Note
Sample Code
Call a remote-enabled function module in an on-premise ABAP server from your Cloud Foundry application,
using the RFC protocol.
Find the tasks and prerequisites that are required to consume an on-premise ABAP function module via RFC,
using the Java Connector (JCo) API as a built-in feature of SAP Cloud Platform.
Tasks
Operator
Prerequisites
Before you can use RFC communication for an SAP Cloud Platform application, you must configure:
About JCo
To learn in detail about the SAP JCo API, see the JCo 3.0 documentation on SAP Support Portal .
Note
Some sections of this documentation are not applicable to SAP Cloud Platform:
● Architecture: CPIC is only used in the last mile from your Cloud Connector to the back end. From SAP
Cloud Platform to the Cloud Connector, TLS-protected communication is used.
● Installation: SAP Cloud Platform runtimes already include all required artifacts.
● Customizing and Integration: On SAP Cloud Platform, the integration is already done by the runtime.
You can concentrate on your business application logic.
● Server Programming: The programming model of JCo on SAP Cloud Platform does not include server-
side RFC communication.
● IDoc Support for External Java Applications: Currently, there is no IDocLibrary for JCo available on
SAP Cloud Platform
● You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse
and connected the Cloud Connector to your subaccount.
● You have downloaded and set up your Eclipse IDE and the Eclipse Tools for Cloud Foundry .
Invoking function modules via RFC is enabled by a JCo API that is comparable to the one available in SAP
NetWeaver Application Server Java (version 7.10+), and in JCo standalone 3.0. If you are an experienced JCo
developer, you can easily develop a Web application using JCo: you simply consume the APIs like you do in
other Java environments. Restrictions that apply in the cloud environment are mentioned in the Restrictions
section below.
Restrictions
Note
You must use the Tomcat or TomEE runtime offered by the build pack to make JCo work correctly. You
cannot bring a container of your own.
Call a function module in an on-premise ABAP system via RFC, using a sample Web application (Cloud Foundry
environment).
Tasks
Developer
Developer
Operator
Scenario Overview
Control Flow for Using the Java Connector (JCo) with Basic Authentication
Process Steps:
Used Values
Different user roles are involved in the cloud to on-premise connectivity end-to-end scenario. The particular
steps for the relevant roles are described below:
IT Administrator
Application Developer
1. Installs Eclipse IDE, the Cloud Foundry Plugin for Eclipse and the Cloud Foundry CLI.
2. Develops a Java EE application using the JCo APIs.
3. Configures connectivity destinations via the SAP Cloud Platform cockpit.
4. Deploys and tests the Java EE application on SAP Cloud Platform.
Account Operator
Deploys Web applications, creates application routers, creates and binds service instances, conducts tests.
Scenario steps:
Installation Prerequisites
● You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse
and connected the Cloud Connector to your subaccount.
● You have downloaded and set up your Eclipse IDE and the Eclipse Tools for Cloud Foundry .
Next Steps
Related Information
1. In the Project Explorer view, right-click the project jco-demo and choose Configure > Convert to
Maven Project.
2. In the dialog window, leave the default settings unchanged and choose Finish.
3. Open the pom.xml file and include the following dependency:
<dependencies>
<dependency>
<groupId>com.sap.cloud</groupId>
<artifactId>neo-java-web-api</artifactId>
<version>[3.71.8,4.0.0)</version>
<scope>provided</scope>
</dependency>
</dependencies>
1. From the jco_demo project node, choose New Servlet in the context menu.
2. Enter com.sap.demo.jco as the <Java package> and ConnectivityRFCExample as the <Class name>.
Choose Next.
3. Choose Finish to create the servlet and open it in the Java editor.
4. Replace the entire servlet class to make use of the JCo API. The JCo API is visible by default for cloud
applications. You do not need to add it explicitly to the application class path.
Sample Code
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
 * Sample application that uses the Connectivity service. In particular, it is
 * making use of the capability to invoke a function module in an ABAP system
 * via RFC.
 */
@WebServlet("/ConnectivityRFCExample")
public class ConnectivityRFCExample extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter responseWriter = response.getWriter();
        try {
            // Access the RFC destination configured in the cockpit
            JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem");
            // Look up the function module STFC_CONNECTION in the back end
            JCoRepository repository = destination.getRepository();
            JCoFunction stfcConnection = repository.getFunction("STFC_CONNECTION");
            JCoParameterList imports = stfcConnection.getImportParameterList();
            imports.setValue("REQUTEXT", "SAP Cloud Platform Connectivity runs with JCo");
            stfcConnection.execute(destination);
            JCoParameterList exports = stfcConnection.getExportParameterList();
            String echotext = exports.getString("ECHOTEXT");
            String resptext = exports.getString("RESPTEXT");
            response.addHeader("Content-type", "text/html");
            responseWriter.println("<html><body>");
            responseWriter.println("<h1>Executed STFC_CONNECTION in system JCoDemoSystem</h1>");
            responseWriter.println("<p>Export parameter ECHOTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(echotext);
            responseWriter.println("<p>Export parameter RESPTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(resptext);
            responseWriter.println("</body></html>");
        } catch (AbapException ae) {
            // Just for completeness: as this function module does not have an
            // exception in its signature, this exception cannot occur. But you
            // should always take care of AbapExceptions.
        } catch (JCoException e) {
            response.addHeader("Content-type", "text/html");
            responseWriter.println("<html><body>");
            responseWriter.println("<h1>Exception occurred while executing STFC_CONNECTION in system JCoDemoSystem</h1>");
            responseWriter.println("<pre>");
            e.printStackTrace(responseWriter);
            responseWriter.println("</pre>");
            responseWriter.println("</body></html>");
        }
    }
}
5. Save the Java editor and make sure that the project compiles without errors.
You must create and bind several service instances, before you can use your application.
Procedure
Destination Service
Sample Code
4. Go to the Confirm tab, insert an instance name (for example, xsuaa_jco), and choose Finish.
Note
The chosen instance name must match the one defined in the manifest file.
Next Steps
Deploy your Cloud Foundry application to call an ABAP function module via RFC.
Prerequisites
You have created and bound the required service instances, see Create and Bind Service Instances [page 128].
1. To deploy your Web application, you can use one of two alternative procedures:
○ Deploying from the Eclipse IDE
○ Deploying from the CLI, see Developing Java in the Cloud Foundry Environment [page 1162]
In the following, we publish the application with the CLI.
2. Create a manifest.yml file – the important parameter is
xsuaa_connectivity_instance_name, which must match the bound XSUAA instance name later on.
manifest.yml
Sample Code
---
applications:
- name: jco-demo-p1234
buildpacks:
- sap_java_buildpack
env:
# Accept any OAuth client of any identity zone
SAP_JWT_TRUST_ACL: '[{"clientid":"*","identityzone":"*"}]'
xsuaa_connectivity_instance_name: "xsuaa_jco"
JAVA_OPTS: '-Xss349k'
services:
- xsuaa_jco
- connectivity_jco
- destination_jco
Next Steps
Configure a role that enables your user to access your Web application.
To add and assign roles, navigate to the subaccount view of the cloud cockpit and choose Security > Role
Collections.
7. You should now be able to click Assign Role Collection. Choose role collection all and assign it.
Next Steps
For authentication purposes, you must configure and deploy an application router for your test application.
1. To set up an application router, follow the steps in Application Router [page 1092] or use the demo file
approuter.zip (download).
2. For deployment, you need a manifest file, similar to this one:
Sample Code
---
applications:
- name: approuter-jco-demo-p1234
path: ./
buildpacks:
- nodejs_buildpack
memory: 120M
routes:
- route: approuter-jco-demo-p1234.cfapps.eu10.hana.ondemand.com
env:
NODE_TLS_REJECT_UNAUTHORIZED: 0
destinations: >
[
{"name":"dest-to-example", "url" :"https://jco-demo-
p1234.cfapps.eu10.hana.ondemand.com/ConnectivityRFCExample",
"forwardAuthToken": true }
]
services:
- xsuaa_jco
Note
○ The routes and destination URLs need to fit your test application.
○ In this example, we already bound our XSUAA instance to the application router. Alternatively, you
could also do this via the cloud cockpit.
5. When choosing the application route, you are requested to log in. Provide the credentials known by the IdP you configured in Roles & Trust.
6. After successful login, you are routed to the test application, which is then executed.
Next Steps
Configure an RFC destination on SAP Cloud Platform that you can use in your Web application to call the on-
premise ABAP system.
To configure the destination, you must use a virtual application server host name (abapserver.hana.cloud)
and a virtual system number (42) that you will expose later in the Cloud Connector. Alternatively, you could use
a load balancing configuration with a message server host and a system ID.
Name=JCoDemoSystem
Type=RFC
jco.client.ashost=abapserver.hana.cloud
jco.client.sysnr=42
jco.client.user=<DEMOUSER>
jco.client.passwd=<Password>
jco.client.client=000
jco.client.lang=EN
jco.destination.pool_capacity=5
4. This error means that the Cloud Connector denied opening a connection to this system. As a next step, you must configure the system in your installed Cloud Connector.
Next Steps
Configure the system mapping and the function module in the Cloud Connector.
The Cloud Connector only allows access to white-listed back-end systems. To configure this, follow the steps
below:
1. Optional: In the Cloud Connector administration UI, you can check under Audits whether access has been
denied:
Denying access for user DEMOUSER to system abapserver.hana.cloud:sapgw42
[connectionId=-1547299395]
2. In the Cloud Connector administration UI, choose Cloud To On-Premise from your Subaccount menu, tab
Access Control.
3. In section Mapping Virtual To Internal System choose Add to define a new system.
1. For Back-end Type, select ABAP System and choose Next.
Note
The values must match with the ones of the destination configuration in the cloud cockpit.
5. This error means that the Cloud Connector denied invoking STFC_CONNECTION in this system. As a final step, you must provide access to this function module.
The Cloud Connector only allows access to white-listed resources (which, in an RFC scenario, are defined on
the basis of function module names). To configure the function module, follow the steps below:
1. Optional: In the Cloud Connector administration UI, you can check under Monitor > Audit whether access has been denied:
Denying access for user DEMOUSER to resource STFC_CONNECTION on system
abapserver.hana.cloud:sapgw42 [connectionId=609399452]
2. In the Cloud Connector administration UI, choose again Cloud To On-Premise from your Subaccount menu,
and go to tab Access Control.
5. Call the URL that references the cloud application in the Web browser again. The application should now return a message showing the export parameters of the function module.
Monitor the state and logs of your Web application deployed on SAP Cloud Platform, using the Application
Logging service.
For this purpose, create an instance of the Application Logging service (as you did for the Destination and
Connectivity service) and bind it to your application, see Create and Bind Service Instances [page 128].
To activate JCo logging, set the following property in the env section of your manifest file:
For more detailed information, you can also activate the internal JCo logs:
Learn about the required steps to make your Cloud Foundry JCo application tenant-aware.
Using this tutorial, you can enable the sample JCo application created in Tutorial: Invoke ABAP Function
Modules in On-Premise ABAP Systems [page 119], for multitenancy.
Prerequisites
● Your runtime environment uses SAP Java Buildpack version 1.9.0 or higher.
● You have successfully completed the Tutorial: Invoke ABAP Function Modules in On-Premise ABAP
Systems [page 119].
● You have created a second subaccount (in the same global account), that is used to subscribe to your
application.
The application router needs to be tenant-aware with a TENANT_HOST_PATTERN to recognize different tenants
from the URL, see Multitenancy [page 1145]. TENANT_HOST_PATTERN should have the following format:
"^(.*).<application domain>". The application router extracts the token captured by "(.*)" to use it as
the subscriber tenant. The manifest file might look like this:
Sample Code
manifest.yml
---
applications:
- name: approuter-jco-demo-p42424242
path: ./
buildpacks:
- nodejs_buildpack
memory: 120M
routes:
- route: approuter-jco-demo-p42424242.cfapps.eu10.hana.ondemand.com
env:
TENANT_HOST_PATTERN: "^(.*).approuter-jco-demo-p42424242.cfapps.eu10.hana.ondemand.com"
NODE_TLS_REJECT_UNAUTHORIZED: 0
destinations: >
[
{"name":"dest-to-example", "url": "https://jco-demo-p42424242.cfapps.eu10.hana.ondemand.com/ConnectivityRFCExample", "forwardAuthToken": true }
]
services:
- xsuaa_jco
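To illustrate how the application router applies this pattern, here is a small sketch using the plain JDK regex API. It extracts the subscriber tenant with the same expression used in the manifest above; the host value is hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TenantHostPattern {
    // Same pattern as in the manifest: the first capturing group is the tenant subdomain.
    private static final Pattern TENANT_HOST = Pattern.compile(
            "^(.*).approuter-jco-demo-p42424242.cfapps.eu10.hana.ondemand.com");

    public static String extractTenant(String host) {
        Matcher m = TENANT_HOST.matcher(host);
        // matches() requires the whole host to fit the pattern; group 1 is the tenant.
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // A subscriber calling via its tenant-specific route:
        System.out.println(extractTenant(
                "customer1.approuter-jco-demo-p42424242.cfapps.eu10.hana.ondemand.com"));
    }
}
```

The application router performs the equivalent extraction on every incoming request host and uses the captured token as the subscriber tenant.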
To call the XSUAA in a tenant-aware way, you must adjust the configuration JSON file. The tenant mode must
now have the value "shared". Also, you must allow calling the previously defined REST APIs (callbacks).
Sample Code
{
"xsappname" : "jco-demo-p42424242",
"tenant-mode": "shared",
"scopes": [
{
"name": "$XSAPPNAME.Callback",
"description": "With this scope set, the callbacks for tenant onboarding, offboarding and getDependencies can be called.",
"grant-as-authority-to-apps": [
"$XSAPPNAME(application,sap-provisioning,tenant-onboarding)"
]
}
]
}
Add Roles
1. In the cloud cockpit, navigate to the subaccount view and go to the tab Role Collections under Security (see step Configure Roles and Trust [page 132] from the previous tutorial).
2. Click on the role collection name.
3. Choose Add Role.
4. In the popup window, select the demo application as <Application Identifier>.
5. For <Role Template> and <Role>, use MultitenancyCallbackRoleTemplate and choose Save.
6. Choose Add Role again.
7. Select the demo application as <Application Identifier>.
8. For <Role Template> and <Role>, use UAAaccess and choose Save.
First, to make the application subscribable, it must provide at least the following REST APIs.
In our sample application, we implement a new servlet for each of these APIs.
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<dependency>
<groupId>com.unboundid.components</groupId>
<artifactId>json</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>javax.ws.rs</groupId>
<artifactId>javax.ws.rs-api</artifactId>
<version>2.1.1</version>
</dependency>
GET Dependencies
The current JCo dependencies are the Connectivity and Destination service. Thus, the GET API must return
information about these two services:
Sample Code
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.core.MediaType;
import org.json.JSONException;
import org.json.JSONObject;
import com.google.gson.Gson;
@WebServlet("/callback/v1.0/dependencies")
public class GetDependencyServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setStatus(200);
response.setContentType(MediaType.APPLICATION_JSON);
// ... build the JSON describing the Connectivity and Destination service
// dependencies and write it to the response ...
}
}
DependantServiceDto.java
EnvironmentVariableAccessor.java
import java.text.MessageFormat;
This API is called whenever a tenant subscribes. In our example, we just read the received JSON and return the tenant-aware URL of the application router, which points to our application. If a tenant wants to unsubscribe, the DELETE operation currently does nothing.
Sample Code
SubscribeServlet
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.gson.Gson;
@WebServlet("/callback/v1.0/tenants/*")
public class SubscribeServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
@Override
protected void doDelete(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
super.doDelete(req, resp);
}
}
import java.util.Map;
public class PayloadDataDto {
private String subscriptionAppName;
private String subscriptionAppId;
private String subscribedTenantId;
private String subscribedSubdomain;
private String subscriptionAppPlan;
private long subscriptionAppAmount;
private String[] dependantServiceInstanceAppIds = null;
private Map<String, String> additionalInformation;
public PayloadDataDto() {}
public PayloadDataDto(String subscriptionAppName, String subscriptionAppId,
String subscribedTenantId, String subscribedSubdomain,
String subscriptionAppPlan, Map<String, String> additionalInformation) {
this.subscriptionAppName = subscriptionAppName;
this.subscriptionAppId = subscriptionAppId;
this.subscribedTenantId = subscribedTenantId;
this.subscribedSubdomain = subscribedSubdomain;
this.subscriptionAppPlan = subscriptionAppPlan;
this.additionalInformation = additionalInformation;
}
public String getSubscriptionAppName() {
return subscriptionAppName;
}
public void setSubscriptionAppName(String subscriptionAppName) {
this.subscriptionAppName = subscriptionAppName;
}
public String getSubscriptionAppId() {
return subscriptionAppId;
}
public void setSubscriptionAppId(String subscriptionAppId) {
this.subscriptionAppId = subscriptionAppId;
}
public String getSubscribedTenantId() {
return subscribedTenantId;
}
public void setSubscribedTenantId(String subscribedTenantId) {
this.subscribedTenantId = subscribedTenantId;
}
public String getSubscribedSubdomain() {
return subscribedSubdomain;
}
public void setSubscribedSubdomain(String subscribedSubdomain) {
this.subscribedSubdomain = subscribedSubdomain;
}
public String getSubscriptionAppPlan() {
return subscriptionAppPlan;
}
public void setSubscriptionAppPlan(String subscriptionAppPlan) {
this.subscriptionAppPlan = subscriptionAppPlan;
}
public Map<String, String> getAdditionalInformation() {
return additionalInformation;
}
public void setAdditionalInformation(Map<String, String>
additionalInformation) {
this.additionalInformation = additionalInformation;
}
public long getSubscriptionAppAmount() {
return subscriptionAppAmount;
}
public void setSubscriptionAppAmount(long subscriptionAppAmount) {
this.subscriptionAppAmount = subscriptionAppAmount;
}
}
For the subscription of other tenants, your application must have a bound SaaS Provisioning service instance. You can create and bind one using the cockpit:
Sample Code
{
"xsappname" : "jco-demo-p42424242",
"appName" : "JCo-Demo",
"appUrls": {
"getDependencies": "https://jco-demo-p42424242.cfapps.eu10.hana.ondemand.com/callback/v1.0/dependencies",
"onSubscription": "https://jco-demo-p42424242.cfapps.eu10.hana.ondemand.com/callback/v1.0/tenants/{tenantId}"
}
}
5. Choose Next, and select the sample application jco-demo-p42424242 in the drop-down menu to assign
the SAAS service to it.
6. Choose Next, insert an instance name, for example, saas_jco, and confirm the creation by pressing
Finish.
1. To subscribe the new application from a different subaccount, go to Subscriptions in the cockpit:
2. Click on JCo-Demo.
3. In the next window, choose Subscribe:
1. To call the application with a new tenant, you must create a new route (URL). In the cockpit, choose Routes > New Route:
Access on-premise systems from a Cloud Foundry application via TCP-based protocols, using a SOCKS5 Proxy.
Content
Concept
SAP Cloud Platform Connectivity provides a SOCKS5 proxy that you can use to access on-premise systems via
TCP-based protocols. SOCKS5 is the industry standard for proxying TCP-based traffic (for more information,
see IETF RFC 1928 ).
The SOCKS5 proxy host and port are accessible through the environment variables, which are generated after
binding an application to a Connectivity service instance. For more information, see Consuming the
Connectivity Service [page 104].
You can access the host under onpremise_proxy_host, and the port through
onpremise_socks5_proxy_port, obtained from the Connectivity service instance.
The SOCKS5 authentication method used by the Connectivity proxy has the value 0x80 (X'80' in the official specification SOCKS Protocol Version 5). This value should be sent as part of the authentication method negotiation request (known as the Initial Request in SOCKS5). The server then confirms with a response containing its decimal representation (either 128 or -128, depending on the client implementation).
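As a sketch of this negotiation step (the byte values follow the description above), the Initial Request consists of the SOCKS version, the number of offered methods, and the method 0x80. Note that 0x80 prints as -128 when treated as a signed Java byte, and as 128 when masked to its unsigned value:

```java
public class Socks5InitialRequest {
    // SOCKS5 Initial Request offering only the JWT-based authentication method 0x80.
    public static byte[] buildInitialRequest() {
        return new byte[] {
                0x05,        // SOCKS protocol version 5
                0x01,        // number of authentication methods offered
                (byte) 0x80  // the Connectivity proxy's authentication method
        };
    }

    public static void main(String[] args) {
        byte method = buildInitialRequest()[2];
        System.out.println(method);        // signed representation: -128
        System.out.println(method & 0xFF); // unsigned representation: 128
    }
}
```

This is why a client implementation may observe either 128 or -128 in the server's confirmation, depending on whether it reads the byte as signed or unsigned.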
After a successful SOCKS5 Initial Request, the authentication procedure follows the standard SOCKS5 authentication sub-procedure, that is, the SOCKS5 Authentication Request, whose byte sequence carries the JWT and, optionally, the Cloud Connector location ID.
The Cloud Connector location ID identifies Cloud Connector instances that are deployed in various locations of
a customer's premises and connected to the same subaccount. Since the location ID is an optional property,
you should include it in the request only if it has already been configured in the Cloud Connector. For more
information, see Set up Connection Parameters and HTTPS Proxy [page 384] (Step 4).
If not set in the Cloud Connector, the byte representing the length of the location ID in the Authentication
Request should have the value 0, without including any value for the Cloud Connector location ID
(sccLocationId).
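Putting these rules together, here is a minimal sketch (token and location ID values are hypothetical) of building the Authentication Request payload: a 4-byte token length, the JWT bytes, a 1-byte location ID length, and the location ID bytes (a length byte of 0 and no further bytes when no location ID is configured):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class Socks5AuthRequest {
    // Authentication Request body: 4-byte JWT length, JWT bytes,
    // 1-byte location ID length, location ID bytes.
    public static byte[] build(String jwtToken, String sccLocationId) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] jwt = jwtToken.getBytes();
        byte[] locId = sccLocationId.getBytes();
        byte[] jwtLen = ByteBuffer.allocate(4).putInt(jwt.length).array();
        out.write(jwtLen, 0, jwtLen.length); // 4-byte big-endian token length
        out.write(jwt, 0, jwt.length);       // the JWT itself
        out.write((byte) locId.length);      // single length byte (0 if no location ID)
        out.write(locId, 0, locId.length);   // location ID bytes, if any
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Hypothetical token value; empty location ID yields a single 0 length byte.
        System.out.println(build("someJwt", "").length); // 4 + 7 + 1 + 0 = 12 bytes
    }
}
```

The complete socket example later in this section writes the same fields to the proxy's output stream.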
The following Java code snippet, based on the Apache HttpClient library, shows one way to replace the standard socket used by the Apache HTTP client with one that authenticates against the Connectivity SOCKS5 proxy:
Sample Code
@Override
public void connect(SocketAddress endpoint, int timeout) throws IOException {
super.connect(getProxyAddress(), timeout);
executeSOCKS5AuthenticationRequest(outputStream); // 2. Negotiate authentication sub-version and send the JWT (and optionally the Cloud Connector location ID)
executeSOCKS5ConnectRequest(outputStream, (InetSocketAddress) endpoint); // 3. Initiate connection to target on-premise backend system
}
Sample Code
assertServerInitialResponse();
}
Sample Code
byteArraysStream.write(ByteBuffer.allocate(4).putInt(jwtToken.getBytes().length).array());
byteArraysStream.write(jwtToken.getBytes());
assertAuthenticationResponse();
}
In version 4.2.6 of the Apache HTTP client, the class responsible for connecting the socket is DefaultClientConnectionOperator. By extending this class and replacing the standard socket as shown in the complete example code below, which implements a Java Socket, you can handle the SOCKS5 authentication with method ID 0x80. The example is based on a JWT and supports the Cloud Connector location ID.
Sample Code
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
@Override
public void connect(SocketAddress endpoint, int timeout) throws IOException {
super.connect(getProxyAddress(), timeout);
executeSOCKS5InitialRequest(outputStream);
executeSOCKS5AuthenticationRequest(outputStream);
executeSOCKS5ConnectRequest(outputStream, (InetSocketAddress) endpoint);
}
assertServerInitialResponse();
}
assertAuthenticationResponse();
}
byteArraysStream.write(ByteBuffer.allocate(4).putInt(jwtToken.getBytes().length).array());
byteArraysStream.write(jwtToken.getBytes());
byteArraysStream.write(ByteBuffer.allocate(1).put((byte) sccLocationId.getBytes().length).array());
byteArraysStream.write(sccLocationId.getBytes());
return byteArraysStream.toByteArray();
} finally {
byteArraysStream.close();
}
}
assertConnectCommandResponse();
}
byteArraysStream.write(SOCKS5_COMMAND_ADDRESS_TYPE_DOMAIN_BYTE);
byteArraysStream.write(ByteBuffer.allocate(1).put((byte) host.getBytes().length).array());
byteArraysStream.write(host.getBytes());
}
byteArraysStream.write(ByteBuffer.allocate(2).putShort((short) port).array());
return byteArraysStream.toByteArray();
} finally {
byteArraysStream.close();
}
}
readRemainingCommandResponseBytes(inputStream);
}
String commandConnectStatusTranslation;
switch (commandConnectStatus) {
case 1:
commandConnectStatusTranslation = "FAILURE";
break;
case 2:
commandConnectStatusTranslation = "FORBIDDEN";
break;
case 3:
commandConnectStatusTranslation = "NETWORK_UNREACHABLE";
break;
case 4:
commandConnectStatusTranslation = "HOST_UNREACHABLE";
break;
case 5:
commandConnectStatusTranslation = "CONNECTION_REFUSED";
break;
case 6:
commandConnectStatusTranslation = "TTL_EXPIRED";
break;
case 7:
commandConnectStatusTranslation = "COMMAND_UNSUPPORTED";
break;
return parsedHostName;
}
Troubleshooting
Retrieve and store externalized technical information about the destination to consume a target remote service
from your Cloud Foundry application.
1. Read the Credentials of the Destination Service [page 162]
2. Generate a JSON Web Token (JWT) [page 163]
3. Call the Destination Service [page 164]
Overview
The Destination service lets you find the destination information that is required to access a remote service or
system from your Cloud Foundry application.
● For the connection to an on-premise system, you can optionally use this service together with (that is, in addition to) the Connectivity service, see Consuming the Connectivity Service [page 104].
● For the connection to any other Web application (remote service), you can use the Destination service
without the Connectivity service.
Consuming the Destination Service includes user authorization via a JSON Web Token (JWT) that is provided
by the xsuaa service.
Prerequisites
● To manage destinations and certificates on service instance level (all CRUD operations), you must be
assigned to one of the following roles: OrgManager, SpaceManager or SpaceDeveloper.
Note
The role SpaceAuditor has only Read permission for destinations and certificates.
● To consume the Destination service from an application, you must create a service instance and bind it to
the application. See Create and Bind a Destination Service Instance [page 101].
● To generate the required JSON Web Token (JWT), you must bind the application to an instance of the xsuaa service using the service plan 'application'. The xsuaa service instance acts as an OAuth 2.0 client and issues the required JWT.
Steps
To consume the Destination service from your application, perform the following basic steps:
The Destination service stores its credentials in the environment variables. To consume the service, you require
the following information:
● The values of clientid, clientsecret and uri from the Destination service credentials.
● The value of url from the xsuaa credentials.
● From the CLI, the following command lists the environment variables of <app-name>:
cf env <app-name>
● From within the application, the service credential can be accessed as described in Consuming the
Connectivity Service [page 104].
Note
Your application must create an OAuth client using the attributes clientid and clientsecret, which are
provided by the Destination service instance. Then, you must retrieve a new JWT from UAA and pass it in the
Authorization HTTP header.
Sample Code
curl -X POST \
<xsuaa-url>/oauth/token \
-H 'authorization: Basic <<clientid>:<clientsecret> encoded with Base64>' \
-H 'content-type: application/x-www-form-urlencoded' \
-d 'client_id=<clientid>&grant_type=client_credentials'
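The authorization header in this call is the Base64 encoding of <clientid>:<clientsecret>. As a quick sketch with plain JDK classes (the credential values below are hypothetical placeholders for the values from VCAP_SERVICES), you can build that header value like this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenRequestHeader {
    // Builds the Basic Authorization header value for the XSUAA token request.
    public static String basicAuthHeader(String clientId, String clientSecret) {
        String credentials = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Hypothetical clientid and clientsecret taken from the service credentials.
        System.out.println(basicAuthHeader("sb-jco-demo", "secret"));
    }
}
```

Your HTTP client of choice then sends this value in the authorization header of the POST request shown above.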
When calling the Destination service, use the uri attribute, provided in VCAP_SERVICES, to build the request
URLs.
Read a Destination by only Specifying its Name ("Find Destination") [page 164]
This lets you simply provide the name of the destination, and the service searches for it. First, the service searches the destinations that are associated with the service instance. If none of them match the requested name, the service searches the destinations that are associated with the subaccount.
● Path: /destination-configuration/v1/destinations/<destination-name>
● Example of a call (cURL):
Sample Code
curl "<uri>/destination-configuration/v1/destinations/<destination-name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
● Example of a response (this is a destination found when going through the subaccount destinations):
Sample Code
{
"owner":
{
Note
The response from this type of call contains not only the configuration of the requested destination, but
also some additional data. See "Find Destination" Response Structure [page 167].
This lets you retrieve the configuration of a destination that is defined within a subaccount, by providing the name of the destination.
● Path: /destination-configuration/v1/subaccountDestinations/<destination-name>
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations/
<destination name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
● Example of a response:
Sample Code
{
"Name": "demo-internet-destination",
"URL": "http://www.google.com",
"ProxyType": "Internet",
"Type": "HTTP",
"Authentication": "NoAuthentication"
}
This lets you retrieve the configurations of all destinations that are defined within a subaccount.
● Path: /destination-configuration/v1/subaccountDestinations
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
● Example of a response:
Sample Code
[
{
"Name": "demo-onpremise-destination1",
"URL": "http://virtualhost:1234",
"ProxyType": "OnPremise",
"Type": "HTTP",
"Authentication": "NoAuthentication"
},
{
"Name": "demo-onpremise-destination2",
"URL": "http://virtualhost:4321",
"ProxyType": "OnPremise",
"Type": "HTTP",
"Authentication": "BasicAuthentication",
"User": "myname123",
"Password": "123456"
}
]
Response Codes
When calling the Destination service, you may get the following response codes:
Related Information
User Propagation via SAML 2.0 Bearer Assertion Flow [page 170]
Destination Service REST API [page 174]
Exchanging User JWTs via OAuth2UserTokenExchange Destinations [page 174]
Overview of data that are returned by the Destination service for the call type "find destination".
Response Structure
When you use the "find destination" call (read a destination by only specifying its name), the structure of the response includes four parts: the destination owner, the destination configuration, the authentication tokens, and the certificates.
Each of these parts is represented in the JSON object as a key-value pair, and their values are JSON objects, see Example [page 170].
Destination Owner
● Key: owner
The JSON object that represents the value of this property contains two properties itself: SubaccountId and InstanceId. Depending on where the destination was found (as a service instance destination or a subaccount destination), the corresponding property is filled while the other is null.
Sample Code
"owner": {
"SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
"InstanceId": null
}
Destination Configuration
● Key: destinationConfiguration
The JSON object that represents the value of this property contains the actual properties of the
destination. To learn more about the available properties, see HTTP Destinations [page 38].
● Example:
Sample Code
"destinationConfiguration": {
"Name": "TestBasic",
"Type": "HTTP",
"URL": "http://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "OnPremise",
"User": "test",
"Password": "pass12345"
}
Authentication Tokens
Note
This property is only applicable to destinations that use the following authentication types: BasicAuthentication, OAuth2SAMLBearerAssertion, OAuth2ClientCredentials, and OAuth2UserTokenExchange.
● Key: authTokens
Sample Code
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU=",
}
]
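The value in the example above is a ready-to-use Basic token. As a quick check with plain JDK classes, decoding it yields the user and password from the destination configuration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthTokenDecode {
    // Decodes a Basic authentication token as returned in the authTokens array.
    public static String decode(String token) {
        return new String(Base64.getDecoder().decode(token), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Token value taken from the sample response above.
        System.out.println(decode("dGVzdDpwYXNzMTIzNDU=")); // test:pass12345
    }
}
```

In practice, you do not decode the token; you send "Basic " followed by the value directly in the authorization header of the call to the target system.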
Certificates
Note
This property is only applicable to destinations that use the following authentication types:
ClientCertificateAuthentication, OAuth2SAMLBearerAssertion (when default JDK trust store is not used).
● Key: certificates
The JSON array that represents the value of this property contains the certificates specified in the destination configuration. These certificates are represented by JSON objects with the following properties (more properties may be added in the future):
○ type
○ content: the encoded content of the certificate.
○ name: the name of the certificate, as specified in the destination configuration.
● Example:
Sample Code
"certificates": [
{
"Name": "keystore.jks",
"Content": "<value>",
"Type": "CERTIFICATE"
}
]
Example
Sample Code
{
"owner": {
"SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
"InstanceId": null
},
"destinationConfiguration": {
"Name": "TestBasic",
"Type": "HTTP",
"URL": "http://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "OnPremise",
"User": "test",
"Password": "pass12345"
},
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU="
}
]
}
Learn about the process for automatic token retrieval, using the OAuth2SAMLBearerAssertion
authentication type for HTTP destinations.
Prerequisites
Note
Although this is not a strict requirement, you will likely need a user JWT to get the relevant information. See Authorization and Trust Management in the Cloud Foundry Environment [page 2247].
● If you are using custom user attributes to determine the user, the JWT representing the user (that is
passed to the Destination service) must have the user_attributes scope.
For an OAuth2SAMLBearerAssertion destination, you can use the automated token retrieval functionality
that is available via the "find destination" endpoint. See Destination Service REST API [page 174].
Find the available sources in the list below, in order of their priority.
● Field in the JWT: In this case, the Destination service looks for the user ID as a field in the provided JWT. When you make the HTTP call to the Destination service, you must provide the Authorization header. The value must be a JWT in its encoded form (see RFC 7519). The procedure is as follows:
● Custom User Attribute: Like Field in the JWT, this source must use the Authorization header. In this case, its value is used to retrieve the custom user attributes from the Identity Provider (XSUAA). One of those attributes can be used as the propagated user ID. The access token from the header must be a user JWT with the user_attributes scope. Otherwise, the custom attributes cannot be retrieved, and the operation results in an error.
Scenarios
Refer to the list below to find the JWT requirements for a specific scenario:
● Scenario: SystemUser property of an OAuth2SAMLBearerAssertion destination maintained in the subscriber subaccount, and used by the provider application.
Required JWT: retrieved via the client credentials of the Destination service instance (bound to the application), using the subscriber's tenant-specific Token Service URL.
● Scenario: SystemUser property of an OAuth2SAMLBearerAssertion destination maintained in the same subaccount where the application is deployed.
Required JWT: retrieved via the client credentials of the Destination service instance (bound to the application), using the provider's tenant-specific Token Service URL.
● Scenario: Propagate a business user principal, using an OAuth2SAMLBearerAssertion destination maintained in the subscriber subaccount where the application is deployed. The business user is represented by a JWT that was issued by the subscriber.
Required JWT: previously retrieved from the application by exchanging the JWT that represents the user for another user access token, via the client credentials of the Destination service instance (bound to the application), using the subscriber's tenant-specific Token Service URL.
● Scenario: Propagate a business user principal, using an OAuth2SAMLBearerAssertion destination maintained in the same subaccount where the application is deployed. The business user is represented by a JWT that was issued by the provider.
Required JWT: previously retrieved by the application by exchanging the JWT that represents the user for another user access token, via the client credentials of the Destination service instance (bound to the application), using the provider's tenant-specific Token Service URL.
Destination service REST API specification for the SAP Cloud Foundry environment.
The Destination service provides a REST API that you can use to read and manage destinations and certificates
on all available levels. This API is documented in the SAP API Business Hub .
It shows all available endpoints, the supported operations, parameters, possible error cases and related status
codes, etc. You can also execute requests using the credentials (for example, the service key) of your
Destination service instance, see Create and Bind a Destination Service Instance [page 101].
Automatic token retrieval using the OAuth2UserTokenExchange authentication type for HTTP destinations.
Content
Prerequisites
You have configured an OAuth2UserTokenExchange destination. See OAuth User Token Exchange
Authentication [page 52].
For destinations of authentication type OAuth2UserTokenExchange, you can use the automated token
retrieval functionality via the "find destination" endpoint, see Call the Destination Service [page 164].
If you provide the user token exchange header with the request to the Destination service and its value is not
empty, it is used instead of the Authorization header to specify the user and the tenant subdomain. It will be
the token for which token exchange is performed.
● The header value must be a user JWT (JSON Web token) in encoded form, see RFC 7519 .
● If the user token exchange header is not provided with the request to the Destination Service or it is
provided, but its value is empty, the token from the Authorization header is used instead. In this case,
the JWT in the Authorization header must be a user JWT in encoded form, otherwise the token
exchange does not work.
For information about the response structure of this request, see "Find Destination" Response Structure [page
167].
Scenarios
To achieve specific token exchange goals, you can use the following headers and values when calling the
Destination service:
1.4.1.5 Security
Find an overview of recommended security measures for SAP Cloud Platform Connectivity.
● Enable single sign-on by forwarding the identity of cloud users to a remote system or service: Principal Propagation [page 62]
● Set up and run the Cloud Connector according to the highest security standards: Security Guidelines [page 545]
Find information on monitoring and troubleshooting for SAP Cloud Platform Connectivity.
Getting Support
If you encounter an issue with this service, we recommend that you follow the procedure below:
Check the availability of the platform at SAP Cloud Platform Status Page .
For more information about selected platform incidents, see Root Cause Analyses.
In the SAP Support Portal, check the Guided Answers section for SAP Cloud Platform. You can find solutions
for general SAP Cloud Platform issues as well as for specific services there.
More Information
● Monitor the Cloud Connector from the SAP Cloud Platform cockpit and from the Cloud Connector administration UI: Monitoring [page 510]
● Troubleshoot connection problems and view different types of logs and traces in the Cloud Connector: Troubleshooting [page 530]
Use SAP Cloud Platform Connectivity for your application in the Neo environment: destination management,
connectivity scenarios, user roles.
Content
Destinations
To use the Connectivity service, you must first create and configure destinations, specifying the corresponding communication protocol and other destination properties.
You have several options to create and edit destinations, see Managing Destinations [page 180].
Scenarios
● Connect Web applications and external servers via HTTP: Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 268]
● Make connections between Web applications and on-premise backend services via HTTP: Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 283]
● Connect Web applications and on-premise backend services via RFC: Tutorial: Invoke ABAP Function Modules in On-Premise ABAP Systems [page 307]
● Use LDAP-based user authentication for your cloud application: LDAP Destinations [page 241]
● Access on-premise systems via TCP-based protocols using a SOCKS5 proxy: Using the TCP Protocol for Cloud Applications [page 317]
● Send and fetch e-mail via mail protocols: Sending and Fetching E-Mail [page 321]
The following user groups are involved in the end-to-end use of the Connectivity service:
● Application operators - are responsible for productive deployment and operation of an application on SAP
Cloud Platform. Application operators are also responsible for configuring the remote connections that an
application might need, see Operations [page 180].
● Application developers - create a connectivity-enabled SAP Cloud Platform application by using the
Connectivity service API, see Development [page 251].
● IT administrators - set up the connectivity to SAP Cloud Platform in your on-premise network, using the
Cloud Connector [page 345].
Some procedures on the SAP Cloud Platform can be done by developers as well as by application operators.
Others may include a mix of development and operation tasks. These procedures are labeled using icons for
the respective task type.
Task Types
To perform connectivity tasks in the Neo environment, the following user roles and authorizations apply:
Developer
For more information on the configuration levels available for destination management, see Managing
Destinations [page 180] (section Configuration Levels (HTTP and RFC)).
See also:
Related Information
1.4.2.1 Operations
Task Description
Managing Destinations [page 180]: Create and configure destinations. You can use destinations for outbound communication between a cloud application and a remote system.
Principal Propagation [page 246]: Use principal propagation to forward the identity of cloud users to a back-end system (single sign-on).
Multitenancy in the Connectivity Service [page 248]: Manage destinations for multitenancy-enabled applications that require a connection to a remote service or on-premise application.
Overview
Destinations are used for the outbound communication of a cloud application to a remote system and contain
the required connection information. They are represented by symbolic names that are used by cloud
applications to refer to a remote connection.
To configure a destination, you can use files with extension .props, .properties, .jks, and .txt, as well as
files with no extension.
Destination Names
A destination name must be unique for the current application. It must contain only alphanumeric characters,
underscores, and dashes. The maximum length is 200 characters.
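As an illustration, the naming rule above corresponds to a simple pattern check. This is a hedged sketch (the sample name and the validation approach are not from the original documentation; the character class and length limit follow the rule stated here):

```shell
# Hypothetical check of the naming rule above: only letters, digits,
# underscores, and dashes; at most 200 characters.
name="my-weather_destination-1"
if echo "$name" | grep -Eq '^[A-Za-z0-9_-]{1,200}$'; then
  echo "valid destination name"
else
  echo "invalid destination name"
fi
```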
Destination Types
The currently supported destination types are HTTP, RFC, LDAP and Mail.
● HTTP Destinations [page 217] - provide data communication via the HTTP protocol and are used for both
Internet and on-premise connections.
● RFC Destinations [page 234] - make connections to ABAP on-premise systems via RFC protocol using the
Java Connector (JCo) as API.
● LDAP Destinations [page 241] - enable LDAP-based user management if you are operating an LDAP server
within your network.
● Mail Destinations [page 244] - specify an e-mail provider for sending and retrieving e-mails via SMTP, IMAP,
and POP3 protocols.
Configuration Tools
To configure and use a destination to connect your cloud application, you can use one of the following tools:
Destinations can be simultaneously configured on three levels: application, consumer subaccount, and
subscription. This means it is possible to have one and the same destination on more than one configuration
level.
The runtime tries to resolve a destination in the following order: Subscription level → Consumer subaccount
level → Provider application level.
For more information about the usage of consumer subaccount, provider subaccount, and provider
application, see Configure Destinations from the Console Client [page 182].
Configuration Cache
● Destination configuration files and Java keystore (JKS) files are cached at runtime. The cache expiration
time is set to a small time interval (currently around 4 minutes). This means that once you update an
existing destination configuration or a JKS file, the application needs about 4 minutes until the new
destination configuration is applied. To avoid this waiting time, the application can be restarted on the
cloud; following the restart, the new destination configuration takes effect immediately.
● When you configure a destination for the first time, it takes effect immediately.
● If you change a mail destination, the application needs to be restarted before the new configuration
becomes effective.
Examples
You can find examples in the SDK package that you previously downloaded from http://
tools.hana.ondemand.com.
Open the SDK location and go to /tools/samples/connectivity. This folder contains a standard
template.properties file, a weather destination, and a weather.destinations.properties file, which
provides all the necessary properties for uploading the weather destination.
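A destination file of this kind has roughly the following shape. This is a hypothetical reconstruction for illustration; the URL is a placeholder, and the actual sample file shipped with the SDK may differ:

```
Name=weather
Type=HTTP
URL=https://weather.example.com/forecast
Authentication=NoAuthentication
ProxyType=Internet
```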
As an application operator, you can configure your application using SAP Cloud Platform console client. You
can configure HTTP, Mail, or RFC destinations using a standard properties file.
The tasks listed below demonstrate how to upload, download, and delete connectivity destinations. You can
perform these operations for destinations related to your own subaccount, a provider subaccount, your own
application, or an application provided by another subaccount.
To use an application from another subaccount, you must be subscribed to this application through your
subaccount.
Prerequisites
● You have downloaded and set up the console client. For more information, see Set Up the Console Client
[page 1412].
● For specific information about all connectivity restrictions, see Connectivity [page 16] → section
"Restrictions".
The number of mandatory property keys varies depending on the authentication type you choose. For more
information about HTTP destination properties files, see HTTP Destinations [page 217].
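For example, a minimal HTTP destination using basic authentication might look as follows. The host name and credentials are placeholders; see HTTP Destinations [page 217] for the authoritative property list:

```
Name=backend
Type=HTTP
URL=https://backend.example.com/api
Authentication=BasicAuthentication
User=MYUSER
Password=secret
ProxyType=Internet
```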
Key stores and trust stores must be stored in JKS files with a standard .jks extension.
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
For more information about mail destination properties files, see Mail Destinations [page 244].
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
All properties except Name and Type must start with "jco.client." or "jco.destination.". For more
information about RFC destination properties files, see RFC Destinations [page 234].
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
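A minimal RFC destination file could look like this. All connection values below are placeholders, and the property selection is only a sketch; see RFC Destinations [page 234] for the supported JCo properties:

```
Name=myBackend
Type=RFC
jco.client.ashost=abap-host.example.com
jco.client.sysnr=00
jco.client.client=100
jco.client.user=MYUSER
jco.client.passwd=secret
jco.client.lang=EN
```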
Tasks
Related Information
Context
The procedure below explains how you can upload destination configuration properties files and certificate
files. You can upload them on subaccount, application or subscribed application level.
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Tips
Note
When uploading a destination configuration file that contains a password field, the password value remains
available in the file. However, if you later download this file using the get-destination command, the
password value will no longer be visible. Instead, after Password =..., you will only see an empty space.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file
as well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a
properties file, enter the path to it as the last command line parameter.
Example:
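A hypothetical parameter file of this kind is a plain properties file; the parameter names mirror the command-line options described here, and all values below are placeholders:

```
host=hana.ondemand.com
account=mysubaccount
application=myapp
user=p1234567
localpath=weather
```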
Context
The procedure below explains how you can download (read) destination configuration properties files and
certificate files. You can download them on subaccount, application or subscribed application level.
You can read destination files with extension .props, .properties, .jks, and .txt, as well as files with no
extension. Destination files must be encoded in ISO 8859-1 character encoding.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Tips
Note
If you download a destination configuration file that contains a password field, the password value will not
be visible. Instead, after Password =..., you will only see an empty space. You must obtain the
password by other means.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file
as well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a
properties file, enter the path to it as the last command line parameter. A sample weather properties file
can be found in directory <SDK_location>\tools\samples\connectivity.
Example:
Related Information
Context
The procedure below explains how you can delete destination configuration properties files and certificate files.
You can delete them on subaccount, application or subscribed application level.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Tips
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file
as well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a
properties file, enter the path to it as the last command line parameter.
Example:
Related Information
You can use the Connectivity editor in the Eclipse IDE to configure HTTP, Mail, RFC and LDAP destinations in
order to:
● Connect your cloud application to the Internet or make it consume an on-premise back-end system via
HTTP(S);
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet;
● Make your cloud application invoke a function module in an on-premise ABAP system via RFC.
● Use LDAP-based user authentication for your cloud application.
You can create, delete and modify destinations to use them for direct connections or export them for further
usage. You can also import destinations from existing files.
Prerequisites
● You have downloaded and set up your Eclipse IDE. For more information, see Setting Up the Development
Environment [page 1402] or Updating Java Tools for Eclipse and SAP Cloud Platform SDK for Neo
Environment [page 1413].
● You have created a Java EE application. For more information, see Creating a Hello World Application [page
1416] or Using Java EE Web Profile Runtimes [page 1445].
Tasks
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail,
or RFC) on a local SAP Cloud Platform server.
Procedure
Also, a Servers folder is created and appears in the navigation tree of the Eclipse IDE. It contains
configurable folders and files you can use, for example, to change your HTTP or JMX port.
5. On the Servers view, double-click the added server to open its editor.
6. Go to the Connectivity tab view.
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose
ClientCertificateAuthentication. See Use Destination Certificates (IDE) [page 196].
e. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
Related Information
When using a local server, the destination configuration is stored in the file system as plain text by default. The
plain text storage includes password fields, which can be a security issue.
Perform the following procedure to encrypt those fields for a particular destination configuration file.
Generate a Key
To encrypt and decrypt the password fields, you need a key for an AES-128-CBC algorithm (Advanced
Encryption Standard). The following steps show you how to generate this key using OpenSSL. Alternatively,
you can use any other appropriate procedure.
Note
If a stronger AES algorithm is required (for example, AES with 256-bit keys), you must install the JCE
Unlimited Strength Jurisdiction Policy Files in the JDK/JRE.
Prerequisites
OpenSSL is provided by Linux and Mac by default. For Windows, you must install it from http://
gnuwin32.sourceforge.net/packages/openssl.htm .
Note
For Windows, the installer does not add the path of the openssl.exe file to the PATH environment variable.
You should do this manually or navigate to the file before executing the OpenSSL commands in the
terminal.
Procedure
Sample Code
4. This procedure generates a key and stores it in the specified file (and creates the file if necessary). The key
file has the following format:
salt=3F190F676A469E24
key=C9BA8910B87D25242AF759001842EFCF
iv =AD5EE334AE9694BE96E1754B6E736C7D
Note
Only the <key> and <iv> fields are needed. If you use a different method to create the key file, you only
need to include those two fields.
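For reference, a key file in this salt/key/iv format can be produced with OpenSSL's enc command. This is a sketch, not the official procedure from the SDK; the passphrase is a placeholder, and newer OpenSSL versions may additionally print a key-derivation warning on stderr:

```shell
# Derive an AES-128-CBC key and print it without encrypting anything (-P).
# The output consists of the salt=/key=/iv= lines shown above;
# redirect it into the key file used by the destination editor.
openssl enc -aes-128-cbc -k MySecretPassphrase -P > destination.key
cat destination.key
```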
Configure Encryption
To store the password fields of a destination in an encrypted format, you must set the encryption-key-
location property. The value of this property is the absolute path of the key file, containing an encryption key
in the format described above.
Note
You should store the key file on a removable storage device. Otherwise, the decryption key can always be
accessed.
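Assuming the key file is stored on a removable drive as recommended, the property would be set along these lines (the path is a placeholder):

```
encryption-key-location=/media/usb/destination.key
```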
Encryption/Decryption Failure
● Encryption
Encryption is performed when the destination is saved to the file system. If an error occurs, the Save
operation fails and a message shows the cause.
● Decryption
The following error cases may occur. If:
○ a key file is missing in the file system, the editor lets you edit the destination and specify a new location
of the key.
Note
The Save operation fails until a valid key (which can decrypt the loaded destination) is provided.
We strongly recommend that you provide the new location of the key immediately and save the
destination. Then you can continue working with the destination as usual.
○ a key file is corrupted, the editor treats it as if the key was not found. You can specify a new location
and, if the key is valid, continue working with the destination.
○ a particular field (or multiple fields) cannot be decrypted, the editor loads the destination and changes
the value of the failed properties to blank. In this case, you must modify (specify new values) or
remove each of these fields to fix the corrupted data.
○ the initialization of the decrypting library fails, all password fields are changed to blank.
SDK
● Decryption
If decryption fails, the retrieval of an encrypted destination always causes an exception, no matter the
cause of the failure. This exception is either IllegalStateException (if the failure is caused by a Java
problem), or IllegalArgumentException (if the failure is caused by a problem in the destination or key file).
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail,
or RFC) on SAP Cloud Platform.
Procedure
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type, and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose
ClientCertificateAuthentication. See Use Destination Certificates (IDE) [page 196].
○ If the target service requires your cloud user authentication, choose PrincipalPropagation. You also
need to select Proxy Type: OnPremise and should enter the additional property
CloudConnectorVersion with value 2.
e. In the Proxy Type dropdown box, choose the required type of proxy connection.
Note
This dropdown box allows you to choose the type of your proxy and is only available when
deploying on SAP Cloud Platform. The default value is Internet. In this case, the destination uses
the HTTP proxy for the outbound communication with the Internet. For consumption of an on-
premise target service, choose the OnPremise option so that the proxy to the SSL tunnel is chosen
and the tunnel is established to the connected Cloud Connector.
f. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
g. Save the editor. This saves the specified destination configuration in SAP Cloud Platform.
Note
Bear in mind that changes are currently cached, with a cache expiration of up to 4 minutes. Therefore,
if you modify a destination configuration, the changes might not take effect immediately. However, if
the relevant Web application is restarted on the cloud, the destination changes take effect
immediately.
Related Information
Prerequisites
Context
You can maintain keystore certificates in the Connectivity editor. You can upload, add and delete certificates for
your connectivity destinations. Bear in mind that:
● You can use JKS, PFX and P12 files for destination keystore, and JKS, CRT, CER, DER files for destination
truststore.
● You add certificates in a keystore file and then you upload, add, or delete this keystore.
● You can add certificates only for HTTPS destinations. Keystore is available only for
ClientCertificateAuthentication.
Uploading Certificates
1. Press the Upload/Delete keystore button. You can find it in the All Destinations section in the Connectivity
editor.
2. Choose Upload Keystore and select the certificate you want to upload. Choose Open or double-click the
certificate.
Note
You can upload a certificate during creation or editing of a destination, by choosing Manage Keystore or by
pressing the Upload/Delete keystore button.
Deleting Certificates
Related Information
Prerequisites
Note
The Connectivity editor allows importing destination files with extension .props, .properties, and .txt,
as well as files with no extension. Destination files must be encoded in ISO 8859-1 character encoding.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration
file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a Keystore file.
5. The destination file is imported within the Connectivity editor.
Note
If the properties file contains incorrect properties or values, for example wrong destination type, the
editor only displays the valid ones in the Properties table.
Related Information
Prerequisites
You have imported or created a new destination (HTTP, Mail, or RFC) in the Eclipse IDE.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration
file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a Keystore file.
Tip
You can keep the default name of the destination, or rename it to avoid overwriting previous files
with the same name.
Next Steps
After exporting the destination, you can open it to check its content. Bear in mind that all password fields will
be commented out (with # symbols) and their values removed.
Example:
Use the Destinations editor in SAP Cloud Platform cockpit to configure HTTP, Mail, RFC, and LDAP
destinations in order to:
● Connect your cloud application to the Internet or make it consume an on-premise back-end system via
HTTP(S).
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet.
● Make your cloud application invoke a function module in an on-premise ABAP system via RFC.
● Use LDAP-based user authentication for your cloud application.
You can create, delete, clone, modify, import and export destinations.
Use this editor to work with destinations on subscription, subaccount, and application level.
Prerequisites
1. You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your
subaccount type. For more information, see Regions and Hosts Available for the Neo Environment [page
14].
2. Depending on the level at which you need to make destination configurations in the Destinations editor, make
sure the following is fulfilled:
○ Subscription level – you need to have at least one application subscribed to your subaccount.
○ Application level – you need to have at least one application deployed on your subaccount.
○ Subaccount level – no prerequisites.
For more information, see Access the Destinations Editor (Neo Environment) [page 204].
Tasks
Related Information
Prerequisites
● You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your global
account type. For more information, see Regions and Hosts Available for the Neo Environment [page 14].
Procedure
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
2. From the left-side navigation, choose Applications → Subscriptions to open the page with your
currently subscribed Java applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Destinations.
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
2. From the left-side navigation, choose Applications → Java Applications to open the page with your
currently deployed Java Web applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Configuration → Destinations.
5. The Destinations editor is opened.
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
To learn how to create HTTP, RFC, and Mail destinations, follow the steps on the relevant pages:
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
8. From the Authentication dropdown box, select the authentication type you need for the connection.
Note
If you set an HTTPS destination, you need to also add a Trust Store. For more information, see Use
Destination Certificates (Cockpit) [page 212].
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
For a detailed description of RFC-specific properties (JCo properties), see RFC Destinations [page
234].
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
You can use the Check Connection button in the Destinations editor of the cockpit to verify if the URL
configured for an HTTP Destination is reachable and if the connection to the specified system is possible.
Note
For each destination, the check button is available in the destination detail view and in the destination overview
list (icon Check availability of destination connection in section Actions).
Note
The check does not guarantee that a backend is operational. It only verifies if a connection to the backend
is possible.
This check is supported only for destinations with Proxy Type Internet and OnPremise:
Backend status could not be determined.
Possible causes:
● The Cloud Connector version is less than 2.7.1.
● The Cloud Connector is not connected to the subaccount.
● The backend returns an HTTP status code above or equal to 500 (server error).
● The Cloud Connector is not configured properly.
Possible solutions:
● Upgrade the Cloud Connector to version 2.7.1 or higher.
● Connect the Cloud Connector to the corresponding subaccount.
● Check the server status (availability) of the back-end system.
● Check the basic Cloud Connector configuration steps: Initial Configuration [page 382].
Backend is not available in the list of defined system mappings in Cloud Connector.
Possible cause: The Cloud Connector is not configured properly.
Possible solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 382].
Resource is not accessible in Cloud Connector or backend is not reachable.
Possible cause: The Cloud Connector is not configured properly.
Possible solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 382].
Backend is not reachable from Cloud Connector.
Possible cause: The Cloud Connector configuration is OK, but the backend is not reachable.
Possible solution: Check the backend (server) availability.
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail, or RFC ) in the Destinations
editor of the cockpit.
Procedure
1. In the Destinations editor, go to the existing destination which you want to clone.
Related Information
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail, or RFC) in the Destinations
editor of the cockpit.
Procedure
● Edit a destination:
Tip
For complete consistency, we recommend that you first stop your application, then apply your
destination changes, and then start the application again. Also, bear in mind that these steps will
cause application downtime.
● Delete a destination:
To remove an existing destination, choose the button. The changes will take effect in up to five minutes.
Prerequisites
You have logged into the cockpit and opened the Destinations editor. For more information, see Access the
Destinations Editor (Neo Environment) [page 204].
Context
This page explains how you can maintain truststore and keystore certificates in the Destinations editor. You can
upload, add and delete certificates for your connectivity destinations. Bear in mind that:
● You can only use JKS, PFX and P12 files for destination key store, and JKS, CRT, CER, DER for destination
trust store.
● You can add certificates only for HTTPS destinations. Truststore can be used for all authentication types.
Keystore is available only for ClientCertificateAuthentication.
● An uploaded certificate file should contain the entire certificate chain.
Procedure
Uploading Certificates
Note
You can upload a certificate during creation or editing of a destination, by clicking the Upload and Delete
Certificates link.
Deleting Certificates
1. Choose the Certificates button or click the Upload and Delete Certificates link.
2. Select the certificate you want to remove and choose Delete Selected.
3. Upload another certificate, or close the Certificates window.
Related Information
Prerequisites
Note
The Destinations editor allows importing destination files with extension .props, .properties, .jks,
and .txt, as well as files with no extension. Destination files must be encoded in ISO 8859-1 character
encoding.
○ If the configuration file contains valid data, it is displayed in the Destinations editor with no errors. The
Save button is enabled so that you can successfully save the imported destination.
○ If the configuration file contains invalid properties or values, error messages in red are displayed
under the relevant fields in the Destinations editor, prompting you to correct them accordingly.
Related Information
Prerequisites
You have created a connectivity destination (HTTP, Mail, or RFC) in the Destinations editor.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration
file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a JKS file.
Related Information
User → jco.client.user
Password → jco.client.passwd
Note
For security reasons, do not use these additional properties; use the corresponding main property
fields instead.
Related Information
Overview
HTTP destinations provide data communication via the HTTP protocol and are used for both Internet and on-
premise connections.
The runtime tries to resolve a destination in the order: Subscription Level → Subaccount Level → Application
Level. By using the optional "DestinationProvider" property, a destination can be limited to application
level only, that is, the runtime tries to resolve the destination on application level.
Note
If you use Java Web Tomcat 7 runtime container, the DestinationProvider property is not supported.
Instead, you can use AuthenticationHeaderProvider API [page 257].
Example
Name=weather
Type=HTTP
Authentication=NoAuthentication
DestinationProvider=Application
● Internet - The application can connect to an external REST or SOAP service on the Internet.
● OnPremise - The application can connect to an on-premise back-end system through the Cloud Connector.
If you work in your local development environment behind a proxy server and want to use a service from the
Internet, you need to configure your proxy settings on JVM level. To do this, proceed as follows:
1. On the Servers view, double-click the added server and choose Overview to open the editor.
2. Click the Open Launch Configuration link.
3. Choose the (x)=Arguments tab page.
4. In the VM Arguments box, add the following row:
-Dhttp.proxyHost=yourproxyHost -Dhttp.proxyPort=yourProxyPort -
Dhttps.proxyHost=yourproxyHost -Dhttps.proxyPort=yourProxyPort
5. Choose OK.
6. Start or restart your SAP HANA Cloud local runtime.
For more information and example, see Consume Internet Services (Java Web or Java EE 6 Web Profile) [page
268].
● When using the Internet proxy type, you do not need to perform any additional configuration steps.
● When using the OnPremise proxy type, you configure the setting the standard way through the Connectivity
editor in the Eclipse IDE.
For more information and example, see Consume Back-End Systems (Java Web or Java EE 6 Web Profile)
[page 283].
Configuring Authentication
When creating an HTTP destination, you can use different authentication types for access control:
Context
The server certificate authentication is applicable for all client authentication types, described below.
Note
TLS 1.2 is now the default TLS version for HTTP destinations. If an HTTP destination is consumed by a Java
application, the change takes effect after a restart. All HTTP destinations that use the HTTPS protocol and
have ProxyType=Internet can be affected. A previous TLS version can still be used by configuring the
additional property TLSVersion=TLSv1.0 or TLSVersion=TLSv1.1.
Properties
Property Description
TLSVersion Optional property. Specifies the preferred TLS version to be used by the current
destination. Since TLS 1.2 is not enabled by default on older Java versions, this
property can be used to enforce TLS 1.2 if it is required by the server configured
in this destination. It is usable only in HTTP destinations.
Example: TLSVersion=TLSv1.2.
TrustStoreLocation Path to the JKS file that contains the trusted certificates (Certificate Authorities)
for authentication against a remote client.
1. When used in a local environment: the relative path to the JKS file. The root
path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
Note
The default JDK truststore is appended to the truststore defined in the
destination configuration. As a result, the destination simultaneously uses both
truststores. If the TrustStoreLocation property is not specified, the
JDK truststore is used as the default truststore for the destination.
TrustStorePassword Password for the JKS trust store file. This property is mandatory in case
TrustStoreLocation is used.
TrustAll If this property is set to TRUE in the destination, the server certificate will not be
checked for SSL connections. It is intended for test scenarios only, and should
not be used in production (since the SSL server certificate is not checked, the
server is not authenticated). The possible values are TRUE or FALSE; the default
value is FALSE (that is, if the property is not present at all).
HostnameVerifier Optional property. It has two values: Strict and BrowserCompatible. This
property specifies how the server hostname is matched against the names stored
inside the server's X.509 certificate. The verification is only applied if TLS or SSL
protocols are used, and it is not applied if the TrustAll property is specified.
The default value (used if no value is explicitly specified) is Strict.
Note
You can upload TrustStore JKS files using the same command used for uploading a destination
configuration properties file. You only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
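Putting the properties above together, a destination configuration that pins a custom truststore might look like the following sketch. The destination name, URL, and JKS file name are illustrative placeholders, not values from this guide:

```
Name=securebackend
Type=HTTP
URL=https://backend.example.com
Authentication=NoAuthentication
ProxyType=Internet
TLSVersion=TLSv1.2
TrustStoreLocation=trust.jks
TrustStorePassword=<truststore password>
HostnameVerifier=Strict
```

Because TrustStoreLocation is set, the JDK truststore is used in addition to trust.jks, as described in the note above.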
Related Information
Context
By default, all SAP systems accept SAP assertion tickets for user propagation.
Note
The SAP assertion ticket is a special type of logon ticket. For more information, see SAP Logon Tickets and
Logon Using Tickets.
The aim of the SAPAssertionSSO destination is to generate such an assertion ticket in order to propagate the
currently logged-on SAP Cloud Platform user to an SAP back-end system. You can only use this authentication
type if the user IDs on both sides are the same. The following diagram shows the elements of the configuration
process on the SAP Cloud Platform and in the corresponding back-end system:
1. Configure the back-end system so that it can accept SAP assertion tickets signed by a trusted X.509 key
pair. For more information, see Configuring a Trust Relationship for SAP Assertion Tickets.
2. Create and configure a SAPAssertionSSO destination by using the properties listed below, and deploy it on
SAP Cloud Platform.
○ Configure Destinations from the Cockpit [page 203]
○ Configure Destinations from the Console Client [page 182]
Note
Configuring SAPAssertionSSO destinations from the Eclipse IDE is not yet supported.
Properties
Property Description
ProxyType You can use both proxy types Internet and OnPremise.
Example
Name=weather
Type=HTTP
Authentication=SAPAssertionSSO
IssuerSID=JAV
IssuerClient=000
RecipientSID=SAP
RecipientClient=100
Certificate=MIICiDCCAkegAwI...rvHTQ\=\=
SigningKey=MIIBSwIB...RuqNKGA\=
Context
The aim of the PrincipalPropagation destination is to forward the identity of an on-demand user to the Cloud
Connector, and from there to the back end of the relevant on-premise system. In this way, the on-demand
user no longer needs to provide their identity every time they connect to an on-premise system via the same
Cloud Connector.
Configuration Steps
You can create and configure a PrincipalPropagation destination by using the properties listed below, and
deploy it on SAP Cloud Platform. For more information, see:
This property is only available for destination configurations created on the cloud.
Properties
Property Description
Example
Name=OnPremiseDestination
Type=HTTP
URL=http://virtualhost:80
Authentication=PrincipalPropagation
ProxyType=OnPremise
Related Information
Context
SAP Cloud Platform enables applications to use the SAML Bearer assertion flow for consuming OAuth-protected
resources. As a result, applications do not need to deal with some of the complexities of OAuth and
can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an
OAuth authorization server. This access token is automatically injected into all HTTP requests to the
OAuth-protected resources.
Tip
The access tokens are automatically renewed. When a token is about to expire, a new token is created
shortly before the old one expires.
Configuration Steps
You can create and configure an OAuth2SAMLBearerAssertion destination by using the properties listed below,
and deploy it on SAP Cloud Platform. For more information, see:
Note
Configuring OAuth2SAMLBearerAssertion destinations from the Eclipse IDE is not yet supported.
If you use the proxy type OnPremise, both the OAuth server and the protected resource must be located on
premise and exposed via the Cloud Connector. Make sure to set URL to the virtual address of the protected
resource and tokenServiceURL to the virtual address of the OAuth server (see section Properties below).
Note
The combination of an on-premise OAuth server with a protected resource on the Internet is not supported,
nor is an OAuth server on the Internet with an on-premise protected resource.
Properties
The table below lists the destination properties for OAuth2SAMLBearerAssertion authentication type. You can
find the values for these properties in the provider-specific documentation of OAuth-protected services.
Usually, only a subset of the optional properties is required by a particular service provider.
Required
Type Destination type. Use HTTP as the value for all HTTP(S) destinations.
Additional
nameQualifier Security domain of the user for which the access token will be
requested
SkipSSOTokenGenerationWhenNoUser If this parameter is set and there is no user logged in, token
generation is skipped, thus allowing anonymous access to
public resources. If set, it may have any value.
When the OAuth authorization server is called, it accepts the trust settings of the destination. For more
information, see Server Certificate Authentication [page 219].
Example
The connectivity destination below provides HTTP access to the OData API of the SuccessFactors Jam.
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2SAMLBearerAssertion
tokenServiceURL=https://demo.sapjam.com/api/v1/auth/token
clientKey=Aa1Bb2Cc3DdEe4F5GHIJ
audience=cubetree.com
nameQualifier=www.successfactors.com
apiKey=<apiKey>
Related Information
Context
AppToAppSSO destinations are used in application-to-application communication scenarios where the
calling application needs to propagate its logged-in user. Both applications are deployed on SAP Cloud Platform.
Configuration Steps
1. Configure your subaccount to allow principal propagation. For more information, see Application Identity
Provider [page 2407] → section "Specifying Custom Local Provider Settings".
This setting is done per subaccount, which means that once it is set to Enabled, all applications within the
subaccount accept user propagation.
2. Create and configure an AppToAppSSO destination by using the properties listed below, and deploy it on
SAP Cloud Platform. For more information, see:
○ Configure Destinations from the Cockpit [page 203]
○ Configure Destinations from the Console Client [page 182]
Note
Configuring AppToAppSSO destinations from the Eclipse IDE is not yet supported.
Properties
Property Description
Type Destination type. Use HTTP as the value for all HTTP(S) destinations.
SessionCookieNames Optional.
Note
If a session cookie name has a variable part, you can
specify it as a regular expression.
Example:
JSESSIONID, JTENANTSESSIONID_.*,
CookieName, Cookie*Name, CookieName.*
Note
The spaces after the commas are optional.
Note
The recommended value for a target Java app on SAP
Cloud Platform is JTENANTSESSIONID_.*; for a
HANA XS app, it is xsId.*.
Note
If not specified, both applications must be consumed in
the same subaccount.
SkipSSOTokenGenerationWhenNoUser Optional.
#
#Wed Jan 13 12:25:47 UTC 2016
Name=apptoapp
URL=https://someurl.com
ProxyType=Internet
Type=HTTP
SessionCookieNames=JTENANTSESSIONID_.*
Authentication=AppToAppSSO
Related Information
Context
This section lists the supported client authentication types and the relevant supported properties.
No Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that does not
require authentication. The relevant property value is:
Authentication=NoAuthentication
Note
When a destination is using HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
This is used for destinations that refer to a service on the Internet or an on-premise system that requires basic
authentication. The relevant property value is:
Authentication=BasicAuthentication
Property Description
Password Password
Preemptive If this property is not set or is set to TRUE, the authentication token is sent
preemptively. Otherwise, it relies on the challenge from the server (401 HTTP
code). The default value (used if no value is explicitly specified) is TRUE. For more
information about preemptiveness, see http://tools.ietf.org/html/rfc2617#section-3.3 .
Note
When a destination is using the HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
This is used for destinations that refer to a service on the Internet. The relevant property value is:
Authentication=ClientCertificateAuthentication
KeyStoreLocation Path to the JKS file that contains the client certificate(s) for authentication
against a remote server.
1. When used in a local environment: the relative path to the JKS file. The root
path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
KeyStorePassword The password for the keystore. This property is mandatory if
KeyStoreLocation is used.
Note
You can upload KeyStore JKS files using the same command used for uploading a destination
configuration properties file. You only need to specify the JKS file instead of the destination configuration file.
Configuration
Related Information
SAP Cloud Platform enables applications to use the OAuth client credentials flow for consuming
OAuth-protected resources.
The client credentials are used to request an access token from an OAuth authorization server. If you use the
HttpDestination API and DestinationFactory [page 253], the access token is automatically injected into all HTTP
requests to the OAuth-protected resources. If you use the ConnectivityConfiguration API [page 255], you must
retrieve the access token manually, using the AuthenticationHeaderProvider API [page 257], and inject it into
the HTTP requests.
Note
The retrieved access token is cached and automatically renewed. When a token is about to expire, a new
token is created shortly before the old one expires.
You can create and configure an OAuth2ClientCredentials destination using the properties listed below, and
deploy it on SAP Cloud Platform. To create and configure a destination, follow the steps described in:
Note
Configuring OAuth2ClientCredentials destinations from the Eclipse IDE is not yet supported.
Properties
The table below lists the destination properties required for the OAuth2ClientCredentials authentication type.
Property Description
Required
Type Destination type. Use HTTP as the value for all HTTP(S) destinations.
Additional
When the OAuth authorization server is called, it accepts the trust settings of the destination, see Server
Certificate Authentication [page 219].
Example
Sample Code
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2ClientCredentials
tokenServiceURL=http://demo.sapjam.com/api/v1/auth/token
tokenServiceUser=tokenserviceuser
tokenServicePassword=pass
clientId=clientId
clientSecret=secret
RFC destinations provide the configuration needed for communicating with an on-premise ABAP system via
RFC. The RFC destination data is used by the JCo version that is offered within SAP Cloud Platform to establish
and manage the connection.
The RFC destination-specific configuration in SAP Cloud Platform consists of properties arranged in groups, as
described below. The supported set of properties is a subset of the standard JCo properties available in other
environments. The configuration data is divided into the following groups:
The minimal configuration contains user logon properties and information identifying the target host. This
means you must provide at least a set of properties containing this information.
Name=SalesSystem
Type=RFC
jco.client.client=000
jco.client.lang=EN
jco.client.user=consultant
jco.client.passwd=<password>
jco.client.ashost=sales-system.cloud
jco.client.sysnr=42
jco.destination.pool_capacity=5
jco.destination.peak_limit=10
JCo properties that cover different types of user credentials, as well as the ABAP system client and the logon
language.
The currently supported logon mechanism uses a user name and password as credentials.
Property Description
Note
When working with the Destinations editor in the cockpit, enter the value in the User field. Do not enter it as
an additional property.
Note
Passwords in systems of SAP NetWeaver releases lower than 7.0 are case-insensitive and can be only
eight characters long. For releases 7.0 and higher, passwords are case-sensitive with a maximum length
of 40 characters.
Note
When working with the Destinations editor in the cock
pit, enter this password in the Password field. Do not en
ter it as additional property.
Note
For PrincipalPropagation, you should configure the properties
jco.destination.repository.user and jco.destination.repository.passwd
instead, since special permissions are needed (for metadata lookup in the back end) that not all
business application users might have.
Learn about the JCo properties you can use to configure pooling in an RFC destination.
Overview
This group of JCo properties covers different settings for the behavior of the destination's connection pool. All
properties are optional.
Note
Turning on this check has a performance impact
for stateless communication. This is due to an
additional low-level ping to the server, which
takes a certain amount of time for non-corrupted
connections, depending on latency.
Pooling Details
● Each destination is associated with a connection factory and, if the pooling feature is used, with a
connection pool.
● Initially, the destination's connection pool is empty, and the JCo runtime does not preallocate any
connection. The first connection will be created when the first function module invocation is performed.
The peak_limit property describes how many connections can be created simultaneously, if applications
allocate connections in different sessions at the same time. A connection is allocated either when a
stateless function call is executed, or when a connection for a stateful call sequence is reserved within a
session.
● After the <peak_limit> number of connections has been allocated (in <peak_limit> number of
sessions), the next session waits for at most <max_get_client_time> milliseconds until a different
session releases a connection (either finishes a stateless call or ends a stateful call sequence).
JCo properties that allow you to define the behavior of the repository that dynamically retrieves function
module metadata.
All properties below are optional. Alternatively, you can create the metadata in the application code, using the
metadata factory methods within the JCo class, to avoid additional round-trips to the on-premise system.
Property Description
Note
When working with the Destinations editor in the cockpit, enter the value in the <Repository User> field.
Do not enter it as an additional property.
Note
When working with the Destinations editor in the cockpit, enter this password in the <Repository
Password> field. Do not enter it as an additional property.
Learn about the JCo properties you can use to configure the target system information in an RFC destination
(Neo environment).
Overview
Depending on the configuration you use, different properties are mandatory or optional.
Note
When modifying existing or creating new destinations, you must provide jco.destination.proxy_type
with value OnPremise.
Property Description
jco.client.sysnr Represents the so-called "system number" and has two
digits. It identifies the logical port on which the application
server is listening for incoming requests. For configurations
on SAP Cloud Platform, the property must match a virtual
port entry in the Cloud Connector Access Control
configuration.
Note
The virtual port in the above access control entry must
be named sapgw<##>, where <##> is the value of
sysnr.
Property Description
Note
The virtual port in the above access control entry must
be named sapms<###>, where <###> is the value of
r3name.
JCo properties that allow you to control the connection to an ABAP system.
Property Description
jco.client.trace Defines whether protocol traces are created. Valid values are
1 (trace is on) and 0 (trace is off). The default value is 0.
jco.client.codepage Declares the 4-digit SAP codepage that is used when
initiating the connection to the backend. The default value is
1100 (comparable to iso-8859-1). It is important to provide
this property if the password in use contains characters
that cannot be represented in codepage 1100.
Note
When working with the Destinations editor in the cockpit, enter the Cloud Connector location ID in the
<Location ID> field. Do not enter it as an additional property.
For your cloud applications, you can use LDAP-based user management if you are operating an LDAP server
within your network.
For more information on how to use the Java JNDI/LDAP Service Provider, see http://docs.oracle.com/javase/7/docs/technotes/guides/jndi/jndi-ldap.html .
Tasks
Developer
● Proxy Type (ldap.proxyType): possible values are Internet or OnPremise. If the proxy type is
OnPremise, the resulting property is java.naming.ldap.factory.socket with the value
com.sap.core.connectivity.api.ldap.LdapOnPremiseSocketFactory.
● URL example: ldap://ldapserver.examplecompany.com:389
● User example: [email protected]
As additional properties in an LDAP destination, you can specify the properties defined by the Java JNDI/LDAP
Service Provider. For more details regarding these properties, see Environment Properties at
http://docs.oracle.com/javase/7/docs/technotes/guides/jndi/jndi-ldap.html .
To consume the LDAP tunnel in a Java application, see Using LDAP [page 316].
A mail destination is used to specify the mail server settings for sending or fetching e-mail, such as the e-mail
provider, e-mail account, and protocol configuration.
The name of the mail destination must match the name used for the mail session resource. You can configure a
mail destination directly in a destination editor or in a mail destination properties file. The mail destination then
needs to be made available in the cloud. If a mail destination is updated, an application restart is required so
that the new configuration becomes effective.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider
of your choice.
Name The name of the destination. The mail session that is configured Yes
by this mail destination is available by injecting the mail session
resource mail/<Name>. The name of the mail session resource
must match the destination name.
Type The type of destination. It must be MAIL for mail destinations. Yes
mail.* javax.mail properties for configuring the mail session. Depends on the mail protocol
used.
To send e-mails, you must specify at least
mail.transport.protocol and mail.smtp.host.
mail.password Password that is used for authentication. The user name for Yes, if authentication is used
authentication is specified by mail.user (a standard (mail.smtp.auth=true and
javax.mail property). generally for fetching e-mail).
● mail.smtp.port: The SMTP standard ports 465 (SMTPS) and 587 (SMTP+STARTTLS) are open for
outgoing connections on SAP Cloud Platform.
● mail.pop3.port: The POP3 standard ports 995 (POP3S) and 110 (POP3+STARTTLS) are open for
outgoing connections (used to fetch e-mail).
● mail.imap.port: The IMAP standard ports 993 (IMAPS) and 143 (IMAP +STARTTLS) are open for
outgoing connections (used to fetch e-mail).
● mail.<protocol>.host: The mail server of an e-mail provider accessible on the Internet, such as Google
Mail (for example, smtp.gmail.com, imap.gmail.com, and so on).
The destination below has been configured to use Gmail as the e-mail provider, SMTP with STARTTLS (port
587) for sending e-mail, and IMAP (SSL) for receiving e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtp
mail.smtp.host=smtp.gmail.com
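The introduction above also mentions IMAP (SSL) for receiving e-mail, while the listed properties cover only sending. A sketch of the additional javax.mail properties for fetching via IMAPS might look like this; the host follows Gmail's published endpoint and the port matches the IMAPS port listed earlier, but treat both as assumptions to verify with your provider:

```
mail.store.protocol=imaps
mail.imaps.host=imap.gmail.com
mail.imaps.port=993
```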
SMTPS Example
The destination below uses Gmail and SMTPS (port 465) for sending e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtps
mail.smtps.host=smtp.gmail.com
mail.smtps.auth=true
mail.smtps.port=465
Related Information
Forward the identity of cloud users to an on-premise system to enable single sign-on.
Content
The Connectivity service provides a secure way of forwarding the identity of a cloud user to the Cloud
Connector, and from there to an on-premise system. This process is called principal propagation.
It uses a SAML token as exchange format for the user information. User mapping is done in the back end. The
token is forwarded either directly, or an X.509 certificate is generated, which is then used in the back end.
Restriction
This authentication is only applicable if you connect to your on-premise system via the Cloud Connector.
How It Works
1. The user authenticates at the cloud application front end via the IdP (Identity Provider) using a standard
SAML Web SSO profile. When the back-end connection is established by the cloud application, the
destination service (re)uses the received SAML assertion to create the connection to the on-premise
system (BE1-BEm).
2. The Cloud Connector validates the received SAML assertion for a second time, extracts the attributes, and
uses its STS (Security Token Service) component to issue a new token (an X.509 certificate) with the
same or similar attributes to assert the identity to the back end.
3. The Cloud Connector and the cloud application share the same SAML service provider identity, which
means that the trust is only set up once in the IdP.
You can create and configure connectivity destinations using the PrincipalPropagation property in the
Eclipse IDE and in the cockpit. Keep in mind that this property is only available for destination configurations
created in the cloud.
● Create and Delete Destinations on the Cloud [page 195] (Eclipse IDE, procedure and examples)
● Create Destinations (Cockpit) [page 205] (procedure and examples)
Tasks
Related Information
Using multitenancy for applications that require a connection to a remote service or on-premise application.
Endpoint Configuration
Applications that require a connection to a remote service can use the Connectivity service to configure HTTP
or RFC endpoints. In a provider-managed application, such an endpoint can be defined once by the
application provider (Provider-Specific Destination [page 249]), or by each application consumer (Consumer-
Specific Destination [page 250]).
To prevent application consumers from using an individual endpoint for a provider application, you can set the
property DestinationProvider=Application in the HTTP or RFC destination. In this case, the destination
is always read from the provider application.
Note
This connectivity type is fully applicable also for on-demand to on-premise connectivity.
Destination Levels
You can configure destinations simultaneously on three levels: subscription, consumer subaccount and
application. This means that it is possible to have one and the same destination on more than one configuration
level. For more information, see Managing Destinations [page 180].
Level Visibility
Application level Visible to all tenants and subaccounts, regardless of their
permission settings.
When the application accesses the destination at runtime, the Connectivity service
1. looks up the requested destination in the consumer subaccount on subscription level. If no destination is
available there, it
2. checks if the destination is available on the subaccount level of the consumer subaccount. If there is still no
destination found, it
3. searches on application level of the provider subaccount.
Provider-Specific Destination
Consumer-Specific Destination
Related Information
Consume the Connectivity service from a Java or HANA XS application in the Neo environment and use the
Destination Configuration service to provide the required destination information.
Task Description
Consuming the Connectivity Service (Java) [page 251] Connect your Java cloud applications to the Internet, make
cloud-to-on-premise connections to SAP or non-SAP
systems, or send and fetch e-mail.
Consuming the Connectivity Service (HANA XS) [page 330] Create connectivity destinations for HANA XS applications,
configure security, add roles and test them in an enterprise
or trial landscape.
Consuming the Destination Configuration Service [page 344] Retrieve destination configurations for your cloud
application in the Neo environment, in a secure and reliable way.
Connect your Java cloud applications to the Internet, make cloud-to-on-premise connections to SAP or non-
SAP systems, or send and fetch e-mail.
Task Description
Connectivity and Destination APIs [page 251] Find an overview of the available connectivity and
destination APIs.
Exchanging Data via HTTP [page 262] Consume the Connectivity service using the HTTP protocol.
Invoking ABAP Function Modules via RFC [page 304] Call a remote-enabled function module in an ABAP server
using the SAP Java Connector (JCo) API.
Using LDAP [page 316] You can use LDAP-based user management if you are
operating an LDAP server within your local network.
Using the TCP Protocol for Cloud Applications [page 317] Access on-premise systems via TCP-based protocols, using
a SOCKS5 proxy.
Sending and Fetching E-Mail [page 321] Send mail messages from your cloud applications using
e-mail providers that are accessible on the Internet.
Destinations are part of SAP Cloud Platform Connectivity and are used for the outbound communication from
a cloud application to a remote system. They contain the connection details for the remote communication of
an application, which can be configured for each customer to accommodate the specific customer back-end
systems and authentication requirements. For more information, see Managing Destinations [page 180].
Destinations should be used by application developers when they aim to provide applications that:
● Integrate with remote services or back-end systems that need to be configured by customers
● Integrate with remote services or back-end systems that are located in a fenced environment (that is,
behind firewalls and not publicly accessible)
Tip
HTTP clients created by destination APIs allow parallel usage of HTTP client instances (via class
ThreadSafeClientConnManager).
Connectivity APIs
Package Description
org.apache.http http://hc.apache.org
org.apache.http.client http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/package-summary.html
org.apache.http.util http://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/util/package-summary.html
javax.mail https://javamail.java.net/nonav/docs/api/
The SAP Cloud Platform SDK for Java Web uses version
1.4.1 of javax.mail, the SDK for Java EE 6 Web Profile
uses version 1.4.5 of javax.mail, and the SDK for Java
Web Tomcat 7 uses version 1.4.7 of javax.mail.
Destination APIs
All connectivity API packages are visible by default from all Web applications. Applications can consume the
destinations via a JNDI lookup.
Procedure
Prerequisites
You have set up your Java development environment. See also: Setting Up the Development Environment
[page 1402]
To consume destinations using HttpDestination API, you need to define your destination as a resource in
the web.xml file.
1. An example of a destination resource named myBackend, which is described in the web.xml file, is as
follows:
<resource-ref>
<res-ref-name>myBackend</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
2. In your servlet code, you can look up the destination (an HTTP destination in this example) from the JNDI
registry as follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.http.HttpDestination;
...
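The elided lookup step might be sketched as follows. This is an illustrative completion, not the original snippet; it relies only on the standard JNDI pattern with the java:comp/env prefix, and the resource name myBackend matches the web.xml entry above. Exception handling (NamingException) is omitted for brevity:

```java
// Look up the destination resource declared in web.xml (res-ref-name: myBackend)
Context ctx = new InitialContext();
HttpDestination destination = (HttpDestination) ctx.lookup("java:comp/env/myBackend");
```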
Note
If you want the lookup name to differ from the destination name, you can specify the lookup name in
<res-ref-name> and the destination name in <mapped-name>, as shown in the following example:
<resource-ref>
<res-ref-name>myLookupName</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
<mapped-name>myBackend</mapped-name>
</resource-ref>
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the
configured remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
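The elided request logic might look like the sketch below. It assumes the HttpDestination API exposes a createHttpClient() factory method returning a preconfigured Apache HttpClient; check the API Javadoc for the exact signature before relying on it:

```java
// Send a GET request to the remote system configured in the destination
HttpClient client = destination.createHttpClient();        // assumed factory method
HttpGet request = new HttpGet(destination.getURI());       // URI from the destination configuration
HttpResponse response = client.execute(request);
int statusCode = response.getStatusLine().getStatusCode(); // e.g. 200 on success
```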
Note
If you want to use a <res-ref-name> that contains "/", the name after the last "/" must be the
same as the destination name. For example, you can use <res-ref-name>connectivity/
myBackend</res-ref-name>. In this case, use java:comp/env/connectivity/
myBackend as the lookup string.
If you want to get the URL of your configured destination, use the getURI() method. This method
returns the URL defined in the destination configuration, converted to a URI.
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
2. In your Java code, you can then look it up and use it in the following way:
Note
If you have two destinations with the same name, one configured on subaccount level and the other on
application level, the getConfiguration() method will return the destination on subaccount level.
The preference order is: subscription level -> subaccount level -> application level.
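The precedence described in the note can be modeled as a lookup over three configuration levels. The sketch below is illustrative only and does not reflect the platform implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class DestinationResolver {
    // Preference order: subscription -> subaccount -> application.
    public static String resolve(String name,
                                 Map<String, String> subscription,
                                 Map<String, String> subaccount,
                                 Map<String, String> application) {
        if (subscription.containsKey(name)) return subscription.get(name);
        if (subaccount.containsKey(name)) return subaccount.get(name);
        return application.get(name); // may be null if not configured anywhere
    }

    public static void main(String[] args) {
        Map<String, String> subscription = new HashMap<>();
        Map<String, String> subaccount = new HashMap<>();
        Map<String, String> application = new HashMap<>();
        subaccount.put("myBackend", "subaccount-level config");
        application.put("myBackend", "application-level config");
        // The subaccount-level destination wins over the application-level one.
        System.out.println(resolve("myBackend", subscription, subaccount, application));
    }
}
```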
Related Information
If you also need to add Maven dependencies, see the related blog post.
All connectivity API packages are visible by default from all Web applications. Applications can consume the
connectivity configuration via a JNDI lookup.
Context
Besides making destination configurations, you can also allow your applications to use their own HTTP clients.
The ConnectivityConfiguration API gives you direct access to the destination configurations of your
applications. This API also:
● Can be used independently of the existing destination API, so that applications can bring and use their
own HTTP client
● Consists of both a public REST API and a Java client API.
The ConnectivityConfiguration API is supported by all runtimes, including Java Web Tomcat 7. For
more information about runtimes, see Application Runtime Container [page 1430].
Procedure
1. To consume the ConnectivityConfiguration API using JNDI, define it as a resource in the web.xml file:
<resource-ref>
    <res-ref-name>connectivityConfiguration</res-ref-name>
    <res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
2. In your servlet code, you can look up the ConnectivityConfiguration API from the JNDI registry as
follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
...
Context ctx = new InitialContext();
ConnectivityConfiguration configuration =
    (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
3. With the retrieved ConnectivityConfiguration API, you can read all properties of any destination
defined on subscription, application or subaccount level.
Note
If you have two destinations with the same name, one configured on subaccount level and the other on
application level, the getConfiguration() method will return the destination on subaccount level.
The preference order is: subscription level -> subaccount level -> application level.
4. If a truststore and keystore are defined in the corresponding destination, they can be accessed by using
the methods getKeyStore and getTrustStore:
// get the stores from the destination configuration
KeyStore trustStore = destConfiguration.getTrustStore();
KeyStore keyStore = destConfiguration.getKeyStore();
// create SSL context
TrustManagerFactory tmf =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);
KeyManagerFactory keyManagerFactory =
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
// get key store password from destination
String keyStorePassword = destConfiguration.getProperty("KeyStorePassword");
keyManagerFactory.init(keyStore, keyStorePassword.toCharArray());
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), null);
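For the generic JDK part of this step, a self-contained sketch might look as follows. Here the key store and trust store are created empty in memory purely for illustration; in a real application they would come from the destination configuration's getTrustStore() and getKeyStore() methods:

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextSketch {
    // Build an SSLContext from a trust store and a key store, mirroring
    // the TrustManagerFactory/KeyManagerFactory steps shown above.
    public static SSLContext build(KeyStore trustStore, KeyStore keyStore,
                                   char[] keyStorePassword) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyStorePassword);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }

    // Demo with empty in-memory stores standing in for the destination's stores.
    public static String demo() throws Exception {
        char[] password = "secret".toCharArray();
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, password);
        return build(trustStore, keyStore, password).getProtocol();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints TLS
    }
}
```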
Implement authentication token generation for your Web application using the
AuthenticationHeaderProvider API.
Context
The AuthenticationHeaderProvider API allows your Web applications to use their own HTTP clients,
providing authentication token generation for application-to-application SSO (single sign-on) and on-premise
SSO.
This API:
● Provides additional helper methods that facilitate initializing an HTTP client (for example, an
authentication method that helps you set headers for application-to-application SSO).
● Consists of both a public REST API and a Java client API. See also the AuthenticationHeaderProvider
javadoc: https://help.hana.ondemand.com/javadoc/index.html (choose
com.sap.core.connectivity.api.authentication AuthenticationHeaderProvider).
All connectivity API packages are visible by default from all Web applications. Applications can consume the
authentication header provider via a JNDI lookup.
The AuthenticationHeaderProvider API is supported by all runtimes, including Java Web Tomcat
7. For more information about runtimes, see Application Runtime Container [page 1430].
Tasks
1. To consume the AuthenticationHeaderProvider API using JNDI, you need to define it as a resource in
the web.xml file. An example of an AuthenticationHeaderProvider resource named
myAuthHeaderProvider, which is described in the web.xml file, looks like this:
<resource-ref>
    <res-ref-name>myAuthHeaderProvider</res-ref-name>
    <res-type>com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider</res-type>
</resource-ref>
2. In your servlet code, you can look up the AuthenticationHeaderProvider API from the JNDI registry:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider;
...
Context ctx = new InitialContext();
AuthenticationHeaderProvider authHeaderProvider =
    (AuthenticationHeaderProvider) ctx.lookup("java:comp/env/myAuthHeaderProvider");
Tip
We recommend that you pack the HTTP client (Apache or other) inside the lib folder of your Web
application archive.
Prerequisites:
● Principal propagation must be enabled for the subaccount. For more information, see Application Identity
Provider [page 2407] → section Specifying Custom Local Provider Settings.
● Both applications must run on behalf of the same subaccount.
● The receiving application must use SAML2 authentication.
Note
If you work with the Java Web Tomcat 7 runtime, bear in mind that the following code snippet works
properly only when using the Apache HTTP client version 4.1.3. If you use other (higher) versions of the
Apache HTTP client, you should adapt your code.
To learn how to generate on-premise SSO authentication, see Principal Propagation Using HTTP Proxy [page
267].
The aim of the SAPAssertionSSO headers is to generate an assertion ticket that propagates the currently
logged-in SAP Cloud Platform user to an SAP back-end system. You can use this authentication type only if the
user IDs on both sides are the same.
The AuthenticationHeaderProvider API provides the following method for generating such headers:
AuthenticationHeader getSAPAssertionHeader(DestinationConfiguration
destinationConfiguration);
SAP Cloud Platform lets applications use the SAML Bearer assertion flow for consuming OAuth-protected
resources. As a result, applications do not need to deal with some of the complexities of OAuth and
can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an
OAuth authorization server. This access token must be injected in all HTTP requests to the OAuth-protected
resources.
Note
The access tokens are cached by the AuthenticationHeaderProvider and are renewed automatically: when a
token is about to expire, a new one is created shortly before the old one expires.
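The renewal behavior can be illustrated with a minimal cache that creates a fresh token once the cached one enters a renewal window before its expiration. This is a sketch of the idea only, not the AuthenticationHeaderProvider implementation:

```java
public class TokenCache {
    private final long lifetimeMillis;
    private final long renewalWindowMillis;
    private int counter;
    private String token;
    private long expiresAt;

    public TokenCache(long lifetimeMillis, long renewalWindowMillis) {
        this.lifetimeMillis = lifetimeMillis;
        this.renewalWindowMillis = renewalWindowMillis;
    }

    // Return the cached token; fetch a new one shortly before expiry.
    public synchronized String getToken(long nowMillis) {
        if (token == null || nowMillis >= expiresAt - renewalWindowMillis) {
            token = "token-" + (++counter); // stands in for a real token request
            expiresAt = nowMillis + lifetimeMillis;
        }
        return token;
    }

    public static void main(String[] args) {
        TokenCache cache = new TokenCache(60_000, 5_000);
        String first = cache.getToken(0);
        System.out.println(first.equals(cache.getToken(30_000))); // true: reused
        System.out.println(first.equals(cache.getToken(56_000))); // false: renewed early
    }
}
```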
The AuthenticationHeaderProvider API provides the following method for generating such headers:
List<AuthenticationHeader>
getOAuth2SAMLBearerAssertionHeaders(DestinationConfiguration
destinationConfiguration);
SAP Cloud Platform also lets applications use the OAuth client credentials flow for consuming OAuth-
protected resources.
You can use the client credentials to request an access token from an OAuth authorization server. If you use the
HttpDestination API and DestinationFactory [page 253], the access token is automatically injected in all HTTP
requests to the OAuth-protected resources. If you use the ConnectivityConfiguration API [page 255], you must
retrieve the access token using the AuthenticationHeaderProvider API and manually inject it in the HTTP
requests.
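In the manual case, injecting the retrieved token comes down to setting an Authorization header on the outgoing request. The sketch below uses plain HttpURLConnection and a hypothetical token value; openConnection() does not contact the server, so no network access takes place:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ManualTokenInjection {
    // Inject a bearer access token into an outgoing HTTP request.
    public static void injectToken(HttpURLConnection connection, String accessToken) {
        connection.setRequestProperty("Authorization", "Bearer " + accessToken);
    }

    // Demo: prepare a request to a placeholder URL and inject a dummy token.
    public static String demo() throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://example.com/protected").openConnection();
        injectToken(conn, "dummy-access-token");
        return conn.getRequestProperty("Authorization");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints Bearer dummy-access-token
    }
}
```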
Note
The access tokens are cached by the AuthenticationHeaderProvider and are renewed automatically: when a
token is about to expire, a new one is created shortly before the old one expires.
The AuthenticationHeaderProvider API provides the following method for generating such headers:
Related Information
The Java Connector (JCo) is a middleware component that enables you to develop ABAP-compliant
components and applications in Java.
JCo supports communication with Application Server ABAP (AS ABAP) in both directions:
JCo can be used in both desktop applications and Web server applications.
Note
You can find generic information regarding the authorizations required for the use of JCo in SAP Note 460089.
Note
This documentation contains sections not applicable to SAP Cloud Platform. In particular:
● Architecture: CPIC is only used in the last mile from your Cloud Connector to the backend. From the
cloud to the Cloud Connector, SSL protected communication is used.
● Installation: SAP Cloud Platform already includes all the necessary artifacts.
● Customizing and Integration: In SAP Cloud Platform, the integration is already done by the runtime.
You can concentrate on your business application logic.
● Server Programming: The programming model of JCo in SAP Cloud Platform does not include server-
side RFC communication.
● IDoc Support for External Java Applications: For the time being, there is no IDocLibrary for JCo
available in SAP Cloud Platform.
Related Information
● Call an Internet service using a simple application that queries some information from a public service:
Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 268]
Consume Internet Services (Java Web Tomcat 7) [page 276]
● Call a service from a fenced customer network using a simple application that consumes an on-premise
ping service:
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 283]
Consume Back-End Systems (Java Web Tomcat 7) [page 294]
You can consume on-premise back-end services in two ways – via HTTP destinations and via the HTTP Proxy.
For more information, see:
To create a loopback connection, you can use the dedicated HTTP port bound to localhost. The port number
can be obtained from the cloud environment variable HC_LOCAL_HTTP_PORT.
For more information, see Using Cloud Environment Variables [page 1450] → section "List of Environment
Variables".
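A defensive way to resolve the loopback port might look like the sketch below. The fallback port is an assumption for local runs, not something the platform guarantees:

```java
public class LoopbackUrl {
    // Build a loopback base URL from the value of HC_LOCAL_HTTP_PORT, with a
    // fallback for local deployments where the variable may not be set.
    public static String baseUrl(String portFromEnv, int fallbackPort) {
        int port = fallbackPort;
        if (portFromEnv != null && !portFromEnv.isEmpty()) {
            port = Integer.parseInt(portFromEnv);
        }
        return "http://localhost:" + port;
    }

    public static void main(String[] args) {
        System.out.println(baseUrl(System.getenv("HC_LOCAL_HTTP_PORT"), 8080));
    }
}
```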
Note
When deploying locally from the Eclipse IDE or the console client, the HTTP port may differ.
Related Information
Using the Keystore Service for Client Side HTTPS Connections [page 2472]
Overview
By default, all connectivity API packages are visible from all Web applications. In this classical case,
applications can consume the destinations via a JNDI lookup. For more information, see Connectivity and
Destination APIs [page 251].
Caution
● If you use the SDK for Java Web, we recommend (but do not require) that you create a destination
before deploying the application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before deploying the
application.
● If you use SDK for Java Web Tomcat 7, the DestinationFactory API is not supported. Instead, you
can use ConnectivityConfiguration API [page 255].
Tip
If you know the names of all required destinations in advance, declare them as destination resources.
Otherwise, we recommend using the DestinationFactory.
Procedure
1. To consume the DestinationFactory using JNDI, define it as a resource in the web.xml file:
<resource-ref>
    <res-ref-name>connectivity/DestinationFactory</res-ref-name>
    <res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.DestinationFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
...
Context ctx = new InitialContext();
DestinationFactory destinationFactory
=(DestinationFactory)ctx.lookup(DestinationFactory.JNDI_NAME);
HttpDestination destination = (HttpDestination)
destinationFactory.getDestination("myBackend");
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the
configured remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
// coding to call service "myService" on the system configured in the given destination
HttpClient createHttpClient = destination.createHttpClient();
HttpGet get = new HttpGet("myService");
HttpResponse resp = createHttpClient.execute(get);
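After execute(get), a servlet typically copies the returned entity to its own response. Reading the entity stream into a string can be sketched with the plain JDK (ASCII content assumed; multi-byte characters split across buffer boundaries would need a Reader instead):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class EntityReader {
    // Read a response entity stream into a String, as a servlet would do
    // before writing the content to its own response.
    public static String readEntity(InputStream in) throws IOException {
        StringBuilder out = new StringBuilder();
        byte[] buffer = new byte[1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.append(new String(buffer, 0, read, "UTF-8"));
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        InputStream entity = new ByteArrayInputStream("ping response".getBytes("UTF-8"));
        System.out.println(readEntity(entity)); // prints ping response
    }
}
```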
The Connectivity service provides a standard HTTP Proxy for on-premise connectivity that is accessible by any
application.
Context
Proxy host and port are available as the environment variables HC_OP_HTTP_PROXY_HOST and
HC_OP_HTTP_PROXY_PORT.
Note
● The HTTP Proxy provides a more flexible way to use on-premise connectivity via standard HTTP
clients. It is not suitable for other protocols, such as RFC or mail, and HTTPS requests are not supported either.
● The previous alternative, that is, using on-premise connectivity via the existing HTTP Destination API,
is still supported. For more information, see DestinationFactory API [page 263].
Multitenancy Support
By default, all applications are started in multitenant mode. Such applications are responsible for propagating
consumer subaccounts to the HTTP Proxy, using the header SAP-Connectivity-ConsumerAccount. This
header is mandatory during the first request of each HTTP connection. HTTP connections are associated with
one consumer subaccount and cannot be used with another subaccount. If the SAP-Connectivity-
ConsumerAccount header is sent after the first request, and its value is different from the value in the first
request, the Proxy will return HTTP response code 400.
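The header contract described above (mandatory on the first request of a connection, immutable afterwards) can be modeled as a small per-connection check. The class is purely illustrative; it returns the HTTP status codes as plain int values:

```java
public class ConsumerAccountCheck {
    private String boundAccount; // consumer subaccount bound to this HTTP connection

    // Returns the status the proxy would answer with: 200 if the header is
    // acceptable, 400 if it is missing on the first request or differs from
    // the subaccount already bound to the connection.
    public int onRequest(String consumerAccountHeader) {
        if (boundAccount == null) {
            if (consumerAccountHeader == null) return 400; // mandatory on first request
            boundAccount = consumerAccountHeader;
            return 200;
        }
        if (consumerAccountHeader != null && !consumerAccountHeader.equals(boundAccount)) {
            return 400; // connection is bound to one consumer subaccount
        }
        return 200;
    }

    public static void main(String[] args) {
        ConsumerAccountCheck conn = new ConsumerAccountCheck();
        System.out.println(conn.onRequest("tenantA")); // 200: header sent on first request
        System.out.println(conn.onRequest(null));      // 200: header optional afterwards
        System.out.println(conn.onRequest("tenantB")); // 400: different subaccount
    }
}
```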
Starting with SAP HANA Cloud Connector 2.9.0, it is possible to connect multiple Cloud Connectors to a
subaccount as long as their location ID is different. Using the header SAP-Connectivity-SCC-Location_ID
it is possible to specify the Cloud Connector over which the connection is opened. If this header is not
specified, the connection will be opened to the Cloud Connector that is connected without any location ID. This
is also the case for all Cloud Connector versions prior to 2.9.0.
On multitenant VMs, applications are responsible for propagating the consumer subaccount via the SAP-
Connectivity-ConsumerAccount header. The following example shows how this can be done.
On single-tenant VMs, the consumer subaccount is known and subaccount propagation via header is not
needed. The following example demonstrates this case.
// proxy host and port are taken from the environment variables
String proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
// create HTTP client and insert the necessary headers in the request
HttpClient httpClient = new DefaultHttpClient();
httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY,
    new HttpHost(proxyHost, proxyPort));
HttpGet request = new HttpGet("http://virtualhost:1234");
Related Information
Context
The HTTP Proxy can forward the identity of an on-demand user to the Cloud Connector, and from there – to
the back-end of the relevant on-premise system. In this way, on-demand users will no longer need to provide
their identity every time they make connections to on-premise systems via one and the same Cloud Connector.
To propagate the logged-in user, an application must use the AuthenticationHeaderProvider API to
generate a header, which it then embeds in the HTTP request to the on-premise system.
Restrictions
● IdPs used by applications protected by SAML2 must be configured as trusted in the Cloud Connector.
● Applications not protected by SAML2 must themselves be configured as trusted in the Cloud
Connector.
Example
Note
You can also apply dependency injection by using the @Resource annotation.
Related Information
1.4.2.2.1.2.3 Tutorials
Overview
The Connectivity service provides secure, reliable, and easy-to-consume access to remote services running
either on the Internet or in an on-premise network.
Use Cases
The tutorials in this section show how you can make connections to Internet services and on-premise
networks:
Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 268]
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 283]
Context
This step-by-step tutorial demonstrates consumption of Internet services using the Apache HTTP Client. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the
cloud.
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1402].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
Note
The value of the <res-ref-name> element in the web.xml file should match the name of the
destination that you want to be retrieved at runtime. In this case, the destination name is outbound-
internet-destination.
9. Replace the entire servlet class with the following one to make use of the destination API. The destination
API is visible by default for cloud applications and does not need to be added explicitly to the application
class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import static java.net.HttpURLConnection.HTTP_OK;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
/**
 * Servlet class making HTTP calls to specified HTTP destinations.
 * Destinations are used in the following exemplary connectivity scenarios:<br>
 * - Connecting to an outbound Internet resource using HTTP destinations<br>
 * - Connecting to an on-premise backend using on-premise HTTP destinations,<br>
 *   where the destinations could have no authentication or basic authentication.<br>
 *
 * NOTE: The Connectivity service API is located under
 * <code>com.sap.core.connectivity.api</code>. The old API under
Note
The given servlet can run with different destination scenarios, for which the user should specify the
destination name as a request parameter in the calling URL. In this case, the URL should be
<applicationURL>/?destname=outbound-internet-destination. Nevertheless,
your servlet can still run even without specifying the destination name for this outbound scenario.
10. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend (but do not require) that you create a destination
before deploying the application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before deploying the
application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -
Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server. Create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configure Destinations from the Eclipse IDE [page 190].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
6. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
7. Make sure that the Choose an existing server option is selected and choose Java Web Server.
8. Choose Finish.
The server is now started, displayed as Java Web Server [Started, Synchronized] in the Servers
view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
Note
The application name should be unique enough to allow your deployed application to be easily
identified in SAP Cloud Platform cockpit.
7. Choose Finish.
8. A new server <application>.<subaccount> [Stopped] appears in the Servers view.
9. Go to the Connectivity tab page of the server, create a destination with the name outbound-internet-
destination, and configure it using the following properties:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
10. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
11. Make sure that the Choose an existing server option is selected and choose <Server_host_name>
<Server_name> .
12. Choose Finish.
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected
output of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates consumption of Internet services using HttpURLConnection. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the
cloud.
The servlet code, the web.xml content, and the destination file (outbound-internet-destination) used
in this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Import
Samples as Eclipse Projects [page 1423].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1402].
Note
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld WebContent WEB-INF and open the web.xml file.
7. Choose the Source tab page.
8. To consume connectivity configuration using JNDI, you need to define the
ConnectivityConfiguration API as a resource in the web.xml file. Below is an example of a
ConnectivityConfiguration resource, named connectivityConfiguration.
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-
type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</
res-type>
</resource-ref>
9. Replace the entire servlet class with the following one to make use of the destination API. The destination
API is visible by default for cloud applications and does not need to be added explicitly to the application
class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.cloud.account.TenantContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
HttpURLConnection urlConnection = null;
String destinationName = request.getParameter("destname");
try {
// Look up the connectivity configuration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration = (ConnectivityConfiguration)
    ctx.lookup("java:comp/env/connectivityConfiguration");
// Get the destination configuration and fail if it is missing
DestinationConfiguration destConfiguration =
    configuration.getConfiguration(destinationName);
if (destConfiguration == null) {
    response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
        String.format("Destination %s is not found. Hint: "
            + "Make sure to have the destination configured.", destinationName));
    return;
}
// Inside a helper that determines the proxy; proxyType comes from the
// destination's ProxyType property, and the declarations are reconstructed:
String proxyHost;
int proxyPort;
if (ON_PREMISE_PROXY.equals(proxyType)) {
// Get proxy for on-premise destinations
proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
proxyPort =
Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
} else {
// Get proxy for internet destinations
proxyHost = System.getProperty("http.proxyHost");
proxyPort =
Integer.parseInt(System.getProperty("http.proxyPort"));
}
return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost,
proxyPort));
}
Note
The given servlet can run with different destination scenarios, for which the user should specify the
destination name as a request parameter in the calling URL. In this case, the URL should be
<applicationURL>/?destname=outbound-internet-destination. Nevertheless,
your servlet can still run even without specifying the destination name for this outbound scenario.
10. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create a destination before deploying the application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -
Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server, create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configure Destinations from the Eclipse IDE [page 190].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
6. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
7. Make sure that the Choose an existing server option is selected and choose Java Web Tomcat 7 Server.
8. Choose Finish.
The server is now started, displayed as Java Web Tomcat 7 Server [Started, Synchronized] in
the Servers view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
Note
The application name should be unique enough to allow your deployed application to be easily
identified in SAP Cloud Platform cockpit.
7. Choose Finish.
8. A new server <application>.<subaccount> [Stopped] appears in the Servers view.
9. Go to the Connectivity tab page of the server. Create a destination with the name outbound-internet-
destination, and configure it using the following properties:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
10. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
11. Make sure that the Choose an existing server option is selected and choose <Server_host_name>
<Server_name> .
12. Choose Finish.
Result:
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected
output of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via
HTTP(S) by using the Connectivity service. For simplicity, instead of using a real back-end system, we use a
second sample Web application containing BackendServlet. It mimics the back-end system and can be
called via HTTP(S).
The servlet code, the web.xml content, and the destination files (backend-no-auth-destination and
backend-basic-auth-destination) used in this tutorial are mapped to the connectivity sample project
located in <SDK_location>/samples/connectivity. You can directly import this sample in your Eclipse
IDE. For more information, see Import Samples as Eclipse Projects [page 1423].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The
particular steps for the relevant roles are described below:
Prerequisites
● You have downloaded and configured the Cloud Connector. For more information, see Cloud Connector
[page 345].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1402].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
Connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-
based Web services.
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be
consumed through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service,
provided by the application, in your installed Cloud Connector. This is required because the Cloud Connector
only allows access to whitelisted back-end services. To do this, follow the steps below:
1. Open the Cloud Connector and, from the navigation on the left, choose Access Control.
2. Under Mapping Virtual To Internal System, choose the Add button and define an entry as shown in the
following screenshot. The Internal Host must be the physical host name of the machine on which the
Tomcat of the back-end application is running.
Note
This step shows the procedure and screenshot for Cloud Connector versions prior to 2.9.0. For Cloud
Connector versions as of 2.9.0, follow the steps in Configure Access Control (HTTP) [page 425],
section Limiting the Accessible Services for HTTP(S), and enter the corresponding values there.
Note
If you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6
Web Profile), you can find the WAR files in the directory <SDK_location>/tools/samples/
connectivity/onpremise, under the names PingAppHttpNoAuth.war and
PingAppHttpBasicAuth.war. Also, the URL paths should be /PingAppHttpBasicAuth and
/PingAppHttpNoAuth.
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
Note
8. Replace the entire servlet class to make use of the destination API. The destination API is visible by default
for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
import com.sap.core.connectivity.api.DestinationFactory;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
HttpClient httpClient = null;
String destinationName = request.getParameter("destname");
try {
// Get HTTP destination
Context ctx = new InitialContext();
HttpDestination destination = null;
if (destinationName != null) {
DestinationFactory destinationFactory = (DestinationFactory)
ctx.lookup(DestinationFactory.JNDI_NAME);
destination = (HttpDestination)
destinationFactory.getDestination(destinationName);
} else {
// The default request to the Servlet will use outbound-internet-destination
destinationName = "outbound-internet-destination";
destination = (HttpDestination) ctx.lookup("java:comp/env/"
+ destinationName);
}
Note
The given servlet can be run with different destination scenarios, for which the user should specify the
destination name as a request parameter in the calling URL. In the case of an on-premise connection to
a back-end system, the destination name should be either backend-basic-auth-destination or
backend-no-auth-destination.
9. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend (but do not require) that you create a destination before starting the
application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploy Locally from Eclipse IDE [page 1468]
Deploy on the Cloud from Eclipse IDE [page 1469]
2. Once the application is deployed successfully, whether on a local server or on the cloud, it issues an
exception saying that the destination backend-basic-auth-destination or backend-no-
auth-destination has not been specified yet:
HTTP Status 500 - Connectivity operation failed with reason: Destination with name backend-no-auth-destination cannot be found. Make sure it is created and configured.. See logs for details.

2014 01 10 08:11:01#+00#ERROR#com.sap.cloud.sample.connectivity.ConnectivityServlet##anonymous#http-bio-8041-exec-1##conngold#testsample#web#null#null#Connectivity operation failed
com.sap.core.connectivity.api.DestinationNotFoundException: Destination with name backend-no-auth-destination cannot be found. Make sure it is created and configured.
    at com.sap.core.connectivity.destinations.DestinationFactory.getDestination(DestinationFactory.java:20)
    at com.sap.core.connectivity.cloud.destinations.CloudDestinationFactory.getDestination(CloudDestinationFactory.java:28)
    at com.sap.cloud.sample.connectivity.ConnectivityServlet.doGet(ConnectivityServlet.java:50)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at com.sap.core.communication.server.CertValidatorFilter.doFilter(CertValidatorFilter.java:321)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    ...
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud Connector's
Access Control tab page.
Note
On-premise destinations support HTTP connections only. Thus, when defining a destination in the SAP
Cloud Platform cockpit, always enter the URL as http://virtual.host:virtual.port, even if the backend
requires an HTTPS connection.
The connection from an SAP Cloud Platform application to the Cloud Connector (through the tunnel) is
encrypted with TLS anyway. There is no need to “double-encrypt” the data. Then, for the leg from the
Cloud Connector to the backend, you can choose between using HTTP or HTTPS. The Cloud Connector will
establish an SSL/TLS connection to the backend, if you choose HTTPS.
1. In the Eclipse IDE, open the Servers view and double-click on <application>.<subaccount> to open
the SAP Cloud Platform editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, choose to create a new destination with the name backend-no-auth-
destination or backend-basic-auth-destination.
○ To connect with no authentication, use the following configuration:
Name=backend-no-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpNoAuth/noauth
Authentication=NoAuthentication
ProxyType=OnPremise
CloudConnectorVersion=2
○ To connect with basic authentication, use the following configuration:
Name=backend-basic-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpBasicAuth/basic
Authentication=BasicAuthentication
User=pinguser
Password=pingpassword
ProxyType=OnPremise
CloudConnectorVersion=2
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via
HTTP(S) by using the Connectivity service. For simplicity, instead of using a real back-end system, we use a
second sample Web application containing BackendServlet. It mimics the back-end system and can be
called via HTTP(S).
The servlet code, the web.xml content, and the destination file (backend-no-auth-destination) used in
this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Import
Samples as Eclipse Projects [page 1423].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The
particular steps for the relevant roles are described below:
Prerequisites
● You have downloaded and configured the Cloud Connector. For more information, see Cloud Connector
[page 345].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1402].
Note
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
Connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-
based Web services.
To set up the sample application as a back-end system, see Set Up an Application as a Sample Back-End
System [page 303].
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be
consumed through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service,
provided by the application, in your installed Cloud Connector. This is required since the Cloud Connector only
allows access to white-listed back-end services. To do this, follow the steps below:
1. Open the Cloud Connector and, from the navigation on the left, choose Access Control.
2. Under Mapping Virtual To Internal System, choose the Add button and define an entry as shown on the
following screenshot. The Internal Host must be the physical host name of the machine on which the
Tomcat of the back-end application is running.
This step shows the procedure and screenshot for Cloud Connector versions prior to 2.9.
Note
For Cloud Connector versions as of 2.9.0, follow the steps in Configure Access Control (HTTP) [page
425], section Limiting the Accessible Services for HTTP(S), and enter the values as shown in the next
step.
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld > WebContent > WEB-INF and open the web.xml file.
7. To consume connectivity configuration using JNDI, you need to define the
ConnectivityConfiguration API as a resource in the web.xml file. Below is an example of a
ConnectivityConfiguration resource, named connectivityConfiguration.
<resource-ref>
    <res-ref-name>connectivityConfiguration</res-ref-name>
    <res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
Note
8. Replace the entire servlet class to make use of the configuration API. The configuration API is visible by
default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;

/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    HttpURLConnection urlConnection = null;
    String destinationName = request.getParameter("destname");
    try {
        // Look up the connectivity configuration API
        Context ctx = new InitialContext();
        ConnectivityConfiguration configuration =
                (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
        // ... (destination configuration lookup omitted in this excerpt;
        // if the destination cannot be found:)
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                    String.format("Destination %s is not found. Hint: "
                            + "Make sure to have the destination configured.", destinationName));
            return;
        }
Note
The given servlet can be run with different destination scenarios, for which the user should specify the
destination name as a request parameter in the calling URL. In the case of an on-premise connection to
a back-end system, the destination name should be backend-no-auth-destination.
9. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create the destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploy Locally from Eclipse IDE [page 1468]
Deploy on the Cloud from Eclipse IDE [page 1469]
2. Once the application is successfully deployed locally or on the cloud, it issues an exception
saying that the backend-no-auth-destination destination has not yet been specified:
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud Connector's
Access Control tab page.
Note
1. In the Eclipse IDE, open the Servers view and double-click <application>.<subaccount> to open the
cloud server editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, create a new destination with the following configuration:
Name=backend-no-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpNoAuth/noauth
Authentication=NoAuthentication
ProxyType=OnPremise
CloudConnectorVersion=2
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Related Information
JavaDoc ConnectivityConfiguration
JavaDoc DestinationConfiguration
JavaDoc AuthenticationHeaderProvider
Overview
This section describes how you set up a simple ping Web application that is used as a back-end system.
Prerequisites
You have downloaded SAP Cloud Platform SDK on your local file system.
Procedure
Define the following role and user (used for basic authentication against the sample back-end application) in the local Tomcat's tomcat-users.xml file:
<role rolename="pingrole"/>
<user name="pinguser" password="pingpassword" roles="pingrole" />
Note
If you use an SDK version up to 1.44.0.1 (Java Web) or up to 2.24.13 (Java EE 6 Web Profile), you find
the WAR files in the directory <SDK_location>/tools/samples/connectivity/onpremise, under the names
PingAppHttpNoAuth.war and PingAppHttpBasicAuth.war. Also, you access the applications at the relevant URLs:
● http://localhost:8080/PingAppHttpNoAuth/pingnoauth
● http://localhost:8080/PingAppHttpBasicAuth/pingbasic
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 283]
Call a remote-enabled function module in an on-premise ABAP server from your Neo application, using the
RFC protocol.
Find the tasks and prerequisites that are required to consume an on-premise ABAP function module via RFC,
using the Java Connector (JCo) API as a built-in feature of SAP Cloud Platform.
Tasks
Operator
Prerequisites
Before you can use RFC communication for an SAP Cloud Platform application, you must configure:
About JCo
To learn in detail about the SAP JCo API, see the JCo 3.0 documentation on SAP Support Portal .
Note
Some sections of this documentation are not applicable to SAP Cloud Platform:
● Architecture: CPIC is only used in the last mile from your Cloud Connector to the back end. From SAP
Cloud Platform to the Cloud Connector, TLS-protected communication is used.
● Installation: SAP Cloud Platform runtimes already include all required artifacts.
● Customizing and Integration: On SAP Cloud Platform, the integration is already done by the runtime.
You can concentrate on your business application logic.
● Server Programming: The programming model of JCo on SAP Cloud Platform does not include server-
side RFC communication.
● IDoc Support for External Java Applications: Currently, there is no IDoc library for JCo available on
SAP Cloud Platform.
● Your SDK version must be at least 1.29.18 (SDK for Java Web) or 2.11.6 (SDK for Java EE 6 Web Profile).
● Your SDK local runtime must be hosted by a 64-bit JVM. The SDKs for the Tomcat 7, Tomcat 8, and TomEE 7
runtimes support JCo from their very first versions.
● On Windows platforms, you must install the Microsoft Visual C++ 2013 runtime libraries
(vcredist_x64.exe). To download this package, go to https://www.microsoft.com/en-us/download/
details.aspx?id=40784 .
You can call a service from a fenced customer network using a simple application that consumes an on-premise,
remote-enabled function module.
Invoking function modules via RFC is enabled by a JCo API that is comparable to the one available in SAP
NetWeaver Application Server Java (version 7.10+), and in JCo standalone 3.0. If you are an experienced JCo
developer, you can easily develop a Web application using JCo: you simply consume the APIs like you do in
other Java environments. Restrictions that apply in the cloud environment are mentioned in the Restrictions
section below.
Find a sample Web application in Tutorial: Invoke ABAP Function Modules in On-Premise ABAP Systems [page
307].
Restrictions
Note
To add the parameter to an existing application, select the application and choose Update. When you
are done, you must restart the application.
The minimal runtime versions for supporting this capability are listed below:
● Logon authentication only supports user/password credentials (basic authentication) and principal
propagation. See Create RFC Destinations [page 207] and User Logon Properties [page 235].
● Provider/subscription model for applications is only fully supported in newer runtime versions. If you still
want to use it in older ones, you need to make sure that destinations are named differently in all accounts.
Minimal runtime versions for full support are listed below:
● The supported set of configuration properties is restricted. For details, see RFC Destinations [page 234].
Related Information
Context
This step-by-step tutorial shows how a sample Web application invokes a function module in an on-premise
ABAP system via RFC by using the Connectivity service.
Different user roles are involved in the on-demand to on-premise connectivity end-to-end scenario. The
particular steps for the relevant roles are described below:
IT Administrator
This role sets up and configures the Cloud Connector. Scenario steps:
Application Developer
1. Installs the Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
2. Develops a Java EE application using the destination API.
3. Configures connectivity destinations as resources in the web.xml file.
4. Configures connectivity destinations via the SAP Cloud Platform server adapter in Eclipse IDE.
5. Deploys the Java EE application locally and on the cloud.
Subaccount Operator
This role deploys Web applications, configures their destinations, and conducts tests. Scenario steps:
● You have downloaded and set up your Eclipse IDE and SAP Cloud Platform Tools for Java.
● You have downloaded the SDK. Its version needs to be at least 1.29.18 (SDK for Java Web), 2.11.6 (SDK
for Java EE 6 Web Profile), or 2.9.1 (SDK for Java Web Tomcat 7), respectively.
● Your local runtime needs to be hosted by a 64-bit JVM. On Windows platforms, you need to install Microsoft
Visual C++ 2010 Redistributable Package (x64).
● You have downloaded and configured your Cloud Connector. Its version needs to be at least 1.3.0.
To read the installation documentation, go to Setting Up the Development Environment [page 1402] and
Installation [page 351].
Procedure
2. From the Eclipse main menu, choose File > New > Dynamic Web Project.
3. In the Project name field, enter jco_demo .
4. In the Target Runtime pane, select the runtime you want to use to deploy the HelloWorld application. In this
tutorial, we choose Java Web.
5. In the Configuration pane, leave the default configuration.
6. Choose Finish to complete the creation of your project.
Procedure
package com.sap.demo.jco;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;

/**
 * Sample application that uses the Connectivity service. In particular,
 * it makes use of the capability to invoke a function module in an ABAP system
 * via RFC.
 *
 * Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
 */
public class ConnectivityRFCExample extends HttpServlet
{
    private static final long serialVersionUID = 1L;

    public ConnectivityRFCExample()
    {
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        PrintWriter responseWriter = response.getWriter();
        try
        {
            // access the RFC Destination "JCoDemoSystem"
            JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem");
            // make an invocation of STFC_CONNECTION in the backend
            JCoRepository repo = destination.getRepository();
            JCoFunction stfcConnection = repo.getFunction("STFC_CONNECTION");
            JCoParameterList imports = stfcConnection.getImportParameterList();
            imports.setValue("REQUTEXT", "SAP Cloud Platform Connectivity runs with JCo");
            stfcConnection.execute(destination);
            JCoParameterList exports = stfcConnection.getExportParameterList();
            String echotext = exports.getString("ECHOTEXT");
            String resptext = exports.getString("RESPTEXT");
            response.addHeader("Content-type", "text/html");
            responseWriter.println("<html><body>");
            responseWriter.println("<h1>Executed STFC_CONNECTION in system JCoDemoSystem</h1>");
            responseWriter.println("<p>Export parameter ECHOTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(echotext);
            responseWriter.println("<p>Export parameter RESPTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(resptext);
            responseWriter.println("</body></html>");
        }
        catch (AbapException ae)
        {
            // ... (exception handling omitted in this excerpt)
5. Save the Java editor and make sure that the project compiles without errors.
Procedure
1. To deploy your Web application locally or on the cloud, see the following two procedures, respectively:
To configure the destination on SAP Cloud Platform, you need to use a virtual application server host name
(abapserver.hana.cloud) and a virtual system number (42) that you will expose later in the Cloud
Connector. Alternatively, you could use a load balancing configuration with a message server host and a
system ID.
Procedure
1. Create a destination configuration file (for example, a file named JCoDemoSystem) with the following content:
Name=JCoDemoSystem
Type=RFC
jco.client.ashost=abapserver.hana.cloud
jco.client.cloud_connector_version=2
jco.client.sysnr=42
jco.client.user=DEMOUSER
jco.client.passwd=<password>
jco.client.client=000
jco.client.lang=EN
jco.destination.pool_capacity=5
2. Upload this file to your Web application in SAP Cloud Platform. For more information, see Configure
Destinations from the Console Client [page 182].
3. Call the URL that references the cloud application again in the Web browser. The application should now
return a different exception:
4. This means the Cloud Connector denied opening a connection to this system. As a next step, you need to
configure the system in your installed Cloud Connector.
This is required since the Cloud Connector only allows access to white-listed back-end systems. To do this,
follow the steps below:
Procedure
1. Optional: In the Cloud Connector administration UI, you can check under Audits whether access has been
denied:
2. In the Cloud Connector administration UI, choose Cloud To On-Premise from your Subaccount menu, and
go to the Access Control tab.
3. In section Mapping Virtual To Internal System choose Add to define a new system.
1. For Back-end Type, select ABAP System and choose Next.
2. For Protocol, select RFC and choose Next.
3. Choose option Without load balancing.
4. Enter application server and instance number. The Application Server entry must be the physical host
name of the machine on which the ABAP application server is running. Choose Next.
Example:
4. Call the URL that references the cloud application again in the Web browser. The application should now
throw a different exception:
5. This means the Cloud Connector denied invoking STFC_CONNECTION in this system. As a final step, you
need to provide access to this function module in your installed Cloud Connector.
This is required since the Cloud Connector only allows access to white-listed resources (which are defined on
the basis of function module names with RFC). To do this, follow the steps below:
Procedure
1. Optional: In the Cloud Connector administration UI, you can check under Audits whether access has been
denied:
2. In the Cloud Connector administration UI, choose Cloud To On-Premise from your Subaccount menu, and
go to the Access Control tab.
5. Call the URL that references the cloud application again in the Web browser. The application should now
return with a message showing the export parameters of the function module after a successful invocation.
Related Information
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Find an example how to use an LDAP destination within your cloud application.
To learn more about configuring LDAP destinations, see LDAP Destinations [page 241].
Sample Code
package com.sap.cloud.example.ldap;

import java.io.IOException;
import java.util.Properties;
import javax.annotation.Resource;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;

// ... (servlet declaration and LDAP search code omitted in this excerpt)
                response.getWriter().append(result.next().toString()).append("<br/><br/>");
            }
        } catch (NamingException e) {
            throw new ServletException("Could not search LDAP for users", e);
        }
    }
}
Access on-premise systems from a Neo application via TCP-based protocols, using a SOCKS5 Proxy.
Concept
SAP Cloud Platform Connectivity provides a SOCKS5 proxy that you can use to access on-premise systems via
TCP-based protocols. SOCKS5 is the industry standard for proxying TCP-based traffic (for more information,
see IETF RFC 1928 ).
The proxy server is started by default on all application machines, so you can access it on localhost, port
20004.
In this scenario, the SOCKS5 username is used to find the correct Cloud Connector to which the data will be
routed. Therefore, the pattern used for the username is 1.<subaccount>.<locationId>, where subaccount is a
mandatory parameter and locationId is optional.
Note
The Cloud Connector location ID identifies Cloud Connector instances that are deployed in various locations of
a customer's premises and connected to the same subaccount. Since the location ID is an optional property,
you should include it in the request only if it has already been configured in the Cloud Connector. For more
information, see Set up Connection Parameters and HTTPS Proxy [page 384] (Step 4).
The password part of the authentication scheme is left as an empty string in this scenario.
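As an illustration of this username pattern, the following self-contained sketch builds the SOCKS5 username from a subaccount and an optional location ID. Base64-encoding both values is an assumption based on the encodedSubaccount and encodedLocationId variables (and the Base64 import) used in the authentication snippet later in this section; the class and method names here are purely illustrative.

```java
import java.util.Base64;

public class Socks5Username {
    // Builds the SOCKS5 proxy username "1.<subaccount>.<locationId>".
    // Assumption: subaccount and location ID are Base64-encoded, matching the
    // encodedSubaccount/encodedLocationId variables in the sample code.
    static String build(String subaccount, String locationId) {
        Base64.Encoder encoder = Base64.getEncoder();
        String username = "1." + encoder.encodeToString(subaccount.getBytes());
        if (locationId != null && !locationId.isEmpty()) {
            // the location ID part is optional and appended only if configured
            username += "." + encoder.encodeToString(locationId.getBytes());
        }
        return username;
    }

    public static void main(String[] args) {
        System.out.println(build("mysubaccount", ""));
        System.out.println(build("mysubaccount", "loc1"));
    }
}
```

The password part of the authentication, as noted above, stays an empty string.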
Restrictions
● You can use the provided SOCKS5 proxy server only to connect to on-premise systems. You cannot use it
as a general-purpose SOCKS5 proxy.
● Proxying UDP traffic is not supported.
The following code snippet shows how to provide the proxy authentication values:
Sample Code
import java.net.Authenticator;
import org.apache.commons.codec.binary.Base64; // Or any other Base64 encoder

Authenticator.setDefault(new Authenticator() {
    @Override
    protected java.net.PasswordAuthentication getPasswordAuthentication() {
        return new java.net.PasswordAuthentication(
                "1." + encodedSubaccount + "." + encodedLocationId, new char[]{});
    }
});
In this code snippet you can see how to set up the SOCKS proxy and how to use it to create an HTTP
connection:
Sample Code
import java.net.SocketAddress;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.net.HttpURLConnection;

SocketAddress addr = new InetSocketAddress("localhost", 20004);
Proxy proxy = new Proxy(Proxy.Type.SOCKS, addr);
// subaccount is the current subaccount; locationId is the location ID of the
// Cloud Connector (or an empty string if no location ID is set)
setSOCKS5ProxyAuthentication(subaccount, locationId);
URL url = new URL("http://virtualhost:1234/");
HttpURLConnection connection = (HttpURLConnection) url.openConnection(proxy);
Interfaces
You can access a subaccount associated with the current execution thread using the TenantContext API.
● Interface TenantContext
● Interface TenantContext: getTenant()
● Interface Tenant: getAccount()
● Interface Account: getId()
Troubleshooting
If the handshake with the SOCKS5 proxy server fails, a SOCKS5 protocol error is returned (see IETF RFC 1928).
The table below shows the most common errors and their root cause in this scenario:
Related Information
E-mail connectivity lets you send messages from your Web applications using e-mail providers that are
accessible on the Internet, as well as retrieve e-mails from the mailbox of your e-mail account.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider
of your choice.
● Obtain a mail session resource using resource injection or, alternatively, using a JNDI lookup.
● Configure the mail session resource by specifying the protocol settings of your mail server as a mail
destination configuration. SMTP is supported for sending e-mail, and POP3 and IMAP for retrieving
messages from a mailbox account.
● In your Web application, use the JavaMail API (javax.mail) to create and send a MimeMessage object or
retrieve e-mails from a message store.
Related Information
In your Web application, you use the JavaMail API (javax.mail) to create and send a MimeMessage object or
retrieve e-mails from a message store.
Mail Session
You can obtain a mail session resource using resource injection or a JNDI lookup. The properties of the mail
session are specified by a mail destination configuration. So that the resource is linked to this configuration,
the names of the destination configuration and mail session resource must be the same.
● Resource injection
You can directly inject the mail session resource using annotations as shown in the example below. You do
not need to declare the JNDI resource reference in the web.xml deployment descriptor.
@Resource(name = "mail/Session")
private javax.mail.Session mailSession;
● JNDI lookup
To obtain a resource of type javax.mail.Session, you declare a JNDI resource reference in the web.xml
deployment descriptor in the WebContent/WEB-INF directory as shown below. Note that the
recommended resource reference name is Session and the recommended subcontext is mail (mail/
Session):
<resource-ref>
<res-ref-name>mail/Session</res-ref-name>
<res-type>javax.mail.Session</res-type>
</resource-ref>
An initial JNDI context can be obtained by creating a javax.naming.InitialContext object. You can
then consume the resource by looking up the naming environment through the InitialContext, as
follows:
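The lookup code itself is missing from this excerpt. A minimal sketch, assuming the mail/Session resource reference declared above, might look like this:

```java
import javax.mail.Session;
import javax.naming.InitialContext;

InitialContext ctx = new InitialContext();
// Per the Java EE specification, the java:comp/env prefix is added to the
// resource reference name from web.xml to form the lookup name
Session mailSession = (Session) ctx.lookup("java:comp/env/mail/Session");
```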
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI
resource name (as specified in the web.xml) to form the lookup name.
Sending E-Mail
With the javax.mail.Session object you have retrieved, you can use the JavaMail API to create a
MimeMessage object with its constituent parts (instances of MimeMultipart and MimeBodyPart). The
message can then be sent using the send method from the Transport class:
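The corresponding code sample is missing from this excerpt. A minimal sketch of composing and sending a message, assuming a mailSession obtained as described above (recipient address, subject, and body text are placeholders):

```java
import javax.mail.Message;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

// mailSession is the javax.mail.Session obtained via injection or JNDI lookup
MimeMessage message = new MimeMessage(mailSession);
message.setRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
message.setSubject("Hello from SAP Cloud Platform");

// Build the message body from MimeMultipart/MimeBodyPart parts
MimeBodyPart body = new MimeBodyPart();
body.setText("This is the message body.");
MimeMultipart multipart = new MimeMultipart();
multipart.addBodyPart(body);
message.setContent(multipart);

// Send the message using the static send method of the Transport class
Transport.send(message);
```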
Fetching E-Mail
You can retrieve the e-mails from the inbox folder of your e-mail account using the getFolder method from
the Store class as follows:
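The code sample is likewise missing here. A minimal sketch, assuming a mailSession obtained as described above and an IMAP mail destination (the protocol string must match your destination configuration):

```java
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Store;

// mailSession is the javax.mail.Session obtained via injection or JNDI lookup
Store store = mailSession.getStore("imap"); // or "pop3", matching the destination
store.connect();
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_ONLY);
// Iterate over the messages in the inbox folder
for (Message message : inbox.getMessages()) {
    System.out.println(message.getSubject());
}
inbox.close(false);
store.close();
```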
Fetched e-mail is not scanned for viruses. This means that e-mail retrieved from an e-mail provider using IMAP
or POP3 could contain a virus that could potentially be distributed (for example, if e-mail is stored in the
database or forwarded). Basic mitigation steps you could take include the following:
Related Information
In order to troubleshoot e-mail delivery and retrieval issues, it is useful to have debug information about the
mail session established between your SAP Cloud Platform application and your e-mail provider.
Context
To include debug information in the standard trace log files written at runtime, you can use the JavaMail
debugging feature and the System.out logger. The System.out logger is preconfigured with the log level
INFO. You require at least INFO or a level with more detailed information.
1. To enable the JavaMail debugging feature, add the mail.debug property to the mail destination
configuration as shown below:
mail.debug=true
2. To check the log level for your application, log on to the cockpit.
Note
You can check the log level of the System.out logger in a similar manner from the Eclipse IDE.
Related Information
This step-by-step tutorial shows how you can send an e-mail from a simple Web application using an e-mail
provider that is accessible on the Internet. As an example, it uses Gmail.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider
of your choice.
Prerequisites [page 325]
1. Create a Dynamic Web Project and Servlet [page 325]
The application is also available as a sample in the SAP Cloud Platform SDK:
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP HANA Cloud server runtime environment
as described in Setting Up the Development Environment [page 1402].
To develop applications for the SAP Cloud Platform, you require a dynamic Web project and servlet.
1. From the Eclipse main menu, choose File > New > Dynamic Web Project.
2. In the Project name field, enter mail.
3. In the Target Runtime pane, select the runtime you want to use to deploy the application. In this tutorial,
you use Java Web.
4. In the Configuration area, leave the default configuration and choose Finish.
5. To add a servlet to the project you have just created, select the mail node in the Project Explorer view.
6. From the Eclipse main menu, choose File > New > Servlet.
7. Enter the Java package com.sap.cloud.sample.mail and the class name MailServlet.
8. Choose Finish to generate the servlet.
You add code to create a simple Web UI for composing and sending an e-mail message. The code includes the
following methods:
package com.sap.cloud.sample.mail;

import java.io.IOException;
import java.io.PrintWriter;
import javax.annotation.Resource;
// ... (the remainder of the servlet is omitted in this excerpt; see the mail
// sample in the SAP Cloud Platform SDK for the complete source)
Test your code using the local file system before configuring your mail destination and testing the application in
the cloud.
Note
To send the e-mail through a real e-mail server, you can configure a destination as described in the next
section, but using the local server runtime. Remember that once you have configured a destination for local
testing, messages are no longer sent to the local file system.
Create a mail destination that contains the SMTP settings of your e-mail provider. The name of the mail
destination must match the name used in the resource reference in the web.xml descriptor.
1. In the Eclipse main menu, choose File > New > Other > Server > Server.
2. Select the server type SAP Cloud Platform and choose Next.
3. In the SAP Cloud Platform Application dialog box, enter the name of your application, subaccount, user,
and password and choose Finish. The new server is listed in the Servers view.
4. Double-click the server and switch to the Connectivity tab.
7. Configure the destination by adding the properties for port 587 (SMTP+STARTTLS) or 465 (SMTPS). To do
this, choose the Add Property button in the Properties section:
○ To use port 587 (SMTP+STARTTLS), add the following properties:
Property Value
mail.transport.protocol smtp
mail.smtp.host smtp.gmail.com
mail.smtp.auth true
mail.smtp.starttls.enable true
mail.smtp.port 587
○ To use port 465 (SMTPS), add the following properties:
Property Value
mail.transport.protocol smtps
mail.smtps.host smtp.gmail.com
mail.smtps.auth true
mail.smtps.port 465
8. Save the destination to upload it to the cloud. The settings take effect when the application is next started.
9. In the Project Explorer view, select MailServlet.java and choose Run Run As Run on Server .
10. Make sure that the Choose an existing server radio button is selected and select the server you have just
defined.
11. Choose Finish to deploy to the cloud. You should now see the sender screen, where you can compose and send an e-mail.
Create connectivity destinations for HANA XS applications, configure their security, add roles and test them in
an enterprise or trial landscape.
Related Information
Overview
This section describes the use of the Connectivity service in a productive SAP HANA instance. The scenarios below are grouped by the connectivity and authentication types you use for your development work.
Connectivity Types
Internet Connectivity
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform. This
enables the application to connect to external Internet services or resources.
The corresponding XS parameters are the same for all enterprise region hosts (see also Regions and Hosts Available for the Neo Environment [page 14]):
XS parameter Value
useProxy true
proxyHost proxy
proxyPort 8080
Note
In the outbound scenario, the useSSL property can be set to true or false depending on the XS
application's needs.
For more information, see Use XS Destinations for Internet Connectivity [page 332].
On-Premise Connectivity
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform. That way, the application connects, via a Cloud Connector tunnel, to on-premise services and resources.
The corresponding XS parameters are the same for all enterprise region hosts (see also Regions and Hosts Available for the Neo Environment [page 14]):
XS parameter Value
useProxy true
proxyHost localhost
proxyPort 20003
useSSL false
Note
When XS applications consume the Connectivity service to connect to on-premise systems, the useSSL
property must always be set to false.
The communication between the XS application and the proxy listening on localhost is always via HTTP.
Whether the connection to the on-premise back end should be HTTP or HTTPS is a matter of access
control configuration in the Cloud Connector. For more information, see Configure Access Control (HTTP)
[page 425].
For more information, see Use XS Destinations for On-Demand to On-Premise Connectivity [page 336].
No Authentication
Basic Authentication
You need credentials to access an Internet or on-premise service. To meet this requirement, proceed as
follows:
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://
<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the .xshttpdest file to display details of the HTTP destination and then choose Edit.
4. In the AUTHENTICATION section, choose the Basic radio button.
5. Enter the credentials for the on-premise service.
6. Save your entries.
Context
This tutorial explains how to create a simple SAP HANA XS application, written in server-side JavaScript, that uses the Connectivity service to make Internet connections.
Note
You can check another outbound connectivity example (financial services that display the latest stock values) in Developer Guide for SAP HANA Studio → section "8.4.1 Tutorial: Using the XSJS Outbound API". For more information, see the SAP HANA Developer Guides listed in the Related Links section below.
Refer to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud
Platform.
Prerequisites
● You have a productive SAP HANA instance. For more information, see Using an SAP HANA XS Database
System [page 1584].
● You have installed the SAP HANA tools. For more information, see Install SAP HANA Tools for Eclipse [page
1569].
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an
XS Destination File on this page.
● If you need to create an XS application from scratch, see Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 1574] and execute procedures 1 to 4. Then execute procedures 2 to 5 on this page.
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named
connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
useProxy = true;
proxyHost = "proxy";
proxyPort = 8080;
authType = none;
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
try {
    // Read the XS destination created above; the query parameters are illustrative
    var destination = $.net.http.readDestination("connectivity", "google");
    var client = new $.net.http.Client();
    var request = new $.web.WebRequest($.net.http.GET, "?origins=Frankfurt&destinations=Cologne&language=en-US");
    client.request(request, destination);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
Note
To consume an Internet service via HTTPS, you need to export your HTTPS service certificate into X.509 format, import it into a trust store, and assign it to your activated destination. You do this in the SAP HANA XS Administration Tool (https://<schema><account>.<host>/sap/hana/xs/admin/). For more information, see Developer Guide for SAP HANA Studio → section "3.6.2.1 SAP HANA XS Application Authentication" and the SAP HANA Developer Guides listed in the Related Links section below. Refer to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
1. In the Systems view, expand Security Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1583].
You will be authenticated by SAML and should then see the following response:
{
"destination_addresses" : [ "Cologne, Germany" ],
"origin_addresses" : [ "Frankfurt, Germany" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "190 km",
"value" : 190173
},
"duration" : {
"text" : "1 hour 58 mins",
"value" : 7103
},
"status" : "OK"
}
]
}
],
"status" : "OK"
}
Additional Example
You can also see an example of enabling server-side JavaScript applications to use the outbound connectivity API. For more information, see Developer Guide for SAP HANA Studio → section "8.4.1 Tutorial: Using the XSJS Outbound API".
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
See Also
Context
This tutorial explains how to create a simple SAP HANA XS application that consumes a sample back-end
system exposed via the Cloud Connector.
In this example, the XS application consumes an on-premise system with basic authentication on landscape
hana.ondemand.com.
Prerequisites
● You have a productive SAP HANA instance. For more information, see Using an SAP HANA XS Database
System [page 1584].
● You have installed the SAP HANA tools. For more information, see Install SAP HANA Tools for Eclipse [page
1569]. You need them to open a Database Tunnel.
● You have Cloud Connector 2.x installed on an on-premise system. For more information, see Installation
[page 351].
● A sample back-end system with basic authentication is available on an on-premise host. For more
information, see Set Up an Application as a Sample Back-End System [page 303].
● You have created a tunnel between your subaccount and a Cloud Connector. For more information, see
Initial Configuration [page 382] → section "Establishing Connections to SAP Cloud Platform".
● The back-end system is exposed for the SAP HANA XS application via Cloud Connector configuration
using as settings: virtual_host = virtualpingbackend and virtual_port = 1234. For more
information, see Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 283].
Note
The last two prerequisites can also be met by exposing any other available HTTP service in your on-premise network. In this case, you must adjust the pathPrefix value accordingly, as mentioned below in procedure "2. Create an XS Destination File".
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an
XS Destination File on this page.
● If you need to create an XS application from scratch, see Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 1574] and execute procedures 1 to 4. Then execute procedures 2 to 5 on this page.
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named
connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name odop.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "virtualpingbackend";
port = 1234;
useSSL = false;
pathPrefix = "/BackendAppHttpBasicAuth/basic";
useProxy = true;
proxyHost = "localhost";
proxyPort = 20003;
timeout = 3000;
Note
If you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6 Web Profile), you can find the on-premise WAR files in the directory <SDK_location>/tools/samples/connectivity/onpremise. Also, the pathPrefix should be /PingAppHttpBasicAuth/pingbasic.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name ODOPTest.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
$.response.contentType = "text/html";
Note
You also need to enter your on-premise credentials. You should not enter them in the destination file since
they must not be exposed as plain text.
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://
<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the odop.xshttpdest file to display the HTTP destination details and then choose Edit.
4. In section AUTHENTICATION, choose the Basic radio button.
5. Enter your on-premise credentials (user and password).
6. Save your entries.
Note
If you later need to make another configuration change to your XS destination, you need to enter your
password again since it is no longer remembered by the editor.
1. In the Systems view, expand Security Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
4. Choose Deploy in the upper right corner of the screen. A message confirms that your user has been modified.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1583].
The principal propagation scenario is available for HANA XS applications. It propagates the currently logged-in user to an on-premise back-end system using the Cloud Connector and the Connectivity service. To configure the scenario, make sure to:
2. Open the Cloud Connector and mark your HANA instance as trusted on the Principal Propagation tab. The HANA instance name is displayed in the cockpit under SAP HANA/SAP ASE Databases & Schemas . For more information, see Set Up Trust for Principal Propagation [page 402].
Related Information
port: Enables you to specify the port number to use for connections to the HTTP destination hosting the service or data you want your SAP HANA XS application to access.
● For Internet connection: 80, 443
● For on-demand to on-premise connection: 1080
● For service-to-service connection: 8443
Note
See also: Connectivity for SAP
HANA XS (Enterprise Version)
[page 330]
Related Information
Context
This section describes the use of the Connectivity service when you develop and deploy SAP HANA XS applications in a trial environment. Currently, you can create XS destinations for consuming HTTP Internet services only.
The tutorial explains how to create a simple SAP HANA XS application, written in server-side JavaScript, that uses the Connectivity service to make Internet connections. In the HTTP example, the package is named connectivity and the XS application is mapinfo. The output displays information from Google Maps showing the distance between Frankfurt and Cologne, together with the travel time by car; the information is provided in American English.
Features
In this case, you can develop an XS application in a trial environment at SAP Cloud Platform so that the
application connects to external Internet services or resources.
XS parameter Value (hanatrial.ondemand.com)
useProxy true
proxyHost proxy-trial
proxyPort 8080
Note
The useSSL property can be set to true or false depending on the XS application's needs.
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an
XS Destination File on this page.
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named
connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
useProxy = true;
proxyHost = "proxy-trial";
proxyPort = 8080;
authType = none;
useSSL = false;
timeout = 30000;
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
try {
    // Read the XS destination created above; the query parameters are illustrative
    var destination = $.net.http.readDestination("connectivity", "google");
    var client = new $.net.http.Client();
    var request = new $.web.WebRequest($.net.http.GET, "?origins=Frankfurt&destinations=Cologne&language=en-US");
    client.request(request, destination);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
1. In the Systems view, select your system and from the context menu choose SQL Console.
2. In the SQL console, enter the following, replacing <SAP HANA Cloud user> with your user:
call "HCP"."HCP_GRANT_ROLE_TO_USER"('p1234567890trial.myhanaxs.hello::model_access', '<SAP HANA Cloud user>')
3. Execute the procedure. You should see a confirmation that the statement was successfully executed.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1583].
You will be authenticated by SAML and should then see the following response:
{
"destination_addresses" : [ "Cologne, Germany" ],
"origin_addresses" : [ "Frankfurt, Germany" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "190 km",
"value" : 190173
},
"duration" : {
"text" : "1 hour 58 mins",
"value" : 7103
},
"status" : "OK"
}
]
}
],
"status" : "OK"
}
Related Information
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench
[page 1571]
Using the Destination Configuration service, you can create, edit, update and read destinations, keystores and
certificates on application, subaccount, or subscription level, see Managing Destinations [page 180]. You can
access these destinations through your application at runtime or from the SAP Cloud Platform cockpit, see
Configure Destinations from the Cockpit [page 203].
Prerequisites
You must have administrative access to a subaccount within the Neo environment.
Required Credentials
The Destination Configuration service requires OAuth 2.0 credentials for all REST API methods. To manage and read destinations and certificates, you must create an OAuth client and assign one of the following permissions: Manage Destinations (read/write) or Read Destination (read only); see Using Platform APIs [page 1737].
The Destination Configuration service provides a REST API, which lets you configure the destinations that you
need to connect your application to another system or service, see SAP API Business Hub .
For the create and update methods, you must send the destination values as a properties file with the
multipart/form-data media type as form data.
Note
When you read a destination, for security reasons the same file is downloaded without the sensitive
properties.
Sample Code
Depending on the authentication type used, different properties are required for a destination. Find the
available properties for each authentication type in HTTP Destinations [page 217].
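For illustration, a properties file for a basic HTTP destination without authentication could look as follows (all values are placeholders):

```
Name=demo-backend
Type=HTTP
URL=https://services.example.com
Authentication=NoAuthentication
ProxyType=Internet
```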
Learn more about the Cloud Connector: features, scenarios and setup.
Context
● Serves as a link between SAP Cloud Platform applications and on-premise systems.
○ Combines an easy setup with a clear configuration of the systems that are exposed to the SAP Cloud
Platform.
○ Lets you use existing on-premise assets without exposing the entire internal landscape.
● Lets you use the features that are required for business-critical enterprise scenarios.
○ Recovers broken connections automatically.
○ Provides audit logging of inbound traffic and configuration changes.
○ Can be run in a high-availability setup.
The Cloud Connector must not be used with products other than SAP Cloud Platform or S/4HANA Cloud.
Advantages
Compared to the approach of opening ports in the firewall and using reverse proxies in the DMZ to establish
access to on-premise systems, the Cloud Connector offers the following benefits:
● You don't need to configure the on-premise firewall to allow external access from SAP Cloud Platform to
internal systems. For allowed outbound connections, no modifications are required.
● The Cloud Connector supports HTTP as well as additional protocols. For example, the RFC protocol
supports native access to ABAP systems by invoking function modules.
● You can use the Cloud Connector to connect on-premise databases or BI tools to SAP HANA databases in
the cloud.
● The Cloud Connector lets you propagate the identity of cloud users to on-premise systems in a secure way.
● Easy installation and configuration, which means that the Cloud Connector comes with a low TCO and is
tailored to fit your cloud scenarios.
● SAP provides standard support for the Cloud Connector.
Basic Scenarios
Note
This section refers to the Cloud Connector installation in a standard on-premise network. Find setup
options for other system environments in Extended Scenarios [page 349].
Extended Scenarios
Besides the standard setup: SAP Cloud Platform - Cloud Connector - on-premise system/network, you can also
use the Cloud Connector to connect SAP Cloud Platform applications to other cloud-based environments, as
long as they are operated in a way that is comparable to an on-premise network from a functional perspective.
This is particularly true for infrastructure (IaaS) hosting solutions.
Can be used for:
● Customer on-premise back-end systems (see Basic Scenarios [page 347]): SAP ERP, SAP S/4HANA
● SAP hosting: SAP HANA Enterprise Cloud (HEC)
● Third-party IaaS providers (hosting): Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP)
Cannot be used for:
● SAP SaaS solutions: SAP SuccessFactors, SAP Concur, SAP Ariba
Note
Within extended scenarios that allow a Cloud Connector setup, special procedures may apply for configuration. If so, they are mentioned in the corresponding configuration steps.
Basic Tasks
The following steps are required to connect the Cloud Connector to your SAP Cloud Platform subaccount:
What's New?
Follow the SAP Cloud Platform Release Notes to stay informed about Cloud Connector and Connectivity
updates.
1.4.3.1 Installation
On Microsoft Windows and Linux, two installation modes are available: a portable version and an
installer version. On Mac OS X, only the portable version is available.
● Portable version: can be installed easily, by extracting a compressed archive into an empty directory. It
does not require administrator or root privileges for the installation, and you can run multiple instances on
the same host.
Restrictions:
○ You cannot run it in the background as a Windows Service or Linux daemon (with automatic start
capabilities at boot time).
○ The portable version does not support an automatic upgrade procedure. To update a portable installation, you must delete the current one, extract the new version, and then redo the configuration.
○ Portable versions are meant for non-productive scenarios only.
○ The environment variable JAVA_HOME is relevant when starting the instance, and therefore must be set
properly.
● Installer version: requires administrator or root permissions for the installation and can be set up to run
as a Windows service or Linux daemon in the background. You can upgrade it easily, retaining all the
configuration and customizing.
Note
We strongly recommend that you use this variant for a productive setup.
● There are some general prerequisites you must fulfill to successfully install the Cloud Connector, see
Prerequisites [page 352].
● For OS-specific requirements and procedures, see section Tasks below.
Tasks
Related Information
1.4.3.1.1 Prerequisites
Content
Section Description
Connectivity Restrictions [page 353]: General information about SAP Cloud Platform and connectivity restrictions.
JDKs [page 354]: Java Development Kit (JDK) versions that you can use.
Product Availability Matrix [page 354]: Availability of operating systems/versions for specific Cloud Connector versions.
For additional system requirements, see also System Requirements [page 355].
Connectivity Restrictions
For general information about SAP Cloud Platform restrictions, see Product Prerequisites and Restrictions
[page 1008].
For specific information about all Connectivity restrictions, see Connectivity: Restrictions [page 19].
Hardware
Minimum: CPU: single core 3 GHz, x86-64 architecture compatible; Memory (RAM): 2 GB
Recommended: CPU: dual core 2 GHz, x86-64 architecture compatible; Memory (RAM): 4 GB
Software
● You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse.
● A JDK 7 or 8 must be installed. Due to problems with expired root CA certificates contained in older patch
levels of JDK 7, we recommend that you install the most recent patch level. You can download an up-to-
date SAP JVM from SAP Development Tools for Eclipse as well.
Caution
Do not use Apache Portable Runtime (APR) on the system on which you use the Cloud Connector. If you cannot avoid this restriction and want to use APR at your own risk, you must manually adapt the default-server.xml configuration file in the directory <scc_installation_folder>/config_master/org.eclipse.gemini.web.tomcat. To do so, follow the steps in HTTPS port configuration for APR.
Note
Support for using the Cloud Connector with Java runtime version 7 will end on December 31, 2019. Any Cloud Connector version released after that date may contain Java byte code requiring at least a JVM 8. We therefore strongly recommend that you perform fresh installations only with Java 8, and that you update existing installations running with Java 7 to Java 8 now.
See SAP Cloud Connector – Java 7 support will phase out and Update the Java VM [page 550].
Operating system: SUSE Linux Enterprise Server 12, Red Hat Enterprise Linux 7; architecture: x86_64; Cloud Connector version: 2.5.1 and higher
Related Information
Additional system requirements for installing and running the Cloud Connector.
Supported Browsers
The browsers you can use for the Cloud Connector Administration UI are the same as those currently
supported by SAPUI5. See: Browser and Platform Support.
The minimum free disk space required to download and install a new Cloud Connector server is as follows:
● Size of downloaded Cloud Connector installation file (ZIP, TAR, MSI files): 50 MB
● Newly installed Cloud Connector server: 70 MB
● Total: 120 MB as a minimum
The Cloud Connector writes configuration files, audit log files and trace files at runtime. We recommend that
you reserve between 1 and 20 GB of disk space for those files.
Trace and log files are written to <scc_dir>/log/ within the Cloud Connector root directory. The
ljs_trace.log file contains traces in general, communication payload traces are stored in
traffic_trace_*.trc. These files may be used by SAP Support to analyze potential issues. The default
trace level is Information, where the amount of written data is generally only a few KB per day. You can turn
off these traces to save disk space. However, we recommend that you don't turn off this trace completely, but
that you leave it at the default settings, to allow root cause analysis if an issue occurs. If you set the trace level
to All, the amount of data can easily reach the range of several GB per day. Use trace level All only to analyze
a specific issue. Payload trace, however, should normally be turned off, and used only for analysis by SAP
Support.
Note
Regularly back up or delete written trace files to clean up the used disk space.
To be compliant with the regulatory requirements of your organization and the regional laws, the audit log files
must be persisted for a certain period of time for traceability purposes. Therefore, we recommend that you
back up the audit log files regularly from the Cloud Connector file system and keep the backup for the length of
time required.
Related Information
A customer network is usually divided into multiple network zones or subnetworks according to the security level of the contained components. For example, the DMZ contains and exposes the external-facing services of an organization to an untrusted network, usually the Internet, while one or more other network zones contain the components and services provided in the company's intranet.
You can generally choose the network zone in which to set up the Cloud Connector, as long as the following requirements are met:
● Internet access to the SAP Cloud Platform region host, either directly or via HTTPS proxy.
● Direct access to the internal systems it provides access to, which means that there is transparent connectivity between the Cloud Connector and the internal system.
Note
The internal network must allow access to the required ports; the specific configuration depends on the
firewall software used.
The default ports are 80 for HTTP and 443 for HTTPS. For RFC communication, you need to open a gateway port (default: 33+<instance number>) and an arbitrary message server port. For a connection to a HANA database (on SAP Cloud Platform) via JDBC, you need to open an arbitrary outbound port in your network. Mail (SMTP) communication is not supported.
When installing a Cloud Connector, you first need to decide on the sizing of the installation. This section gives basic guidance on what to consider for this decision. The provided information includes the shadow instance, which should always be added in productive setups. See also Install a Failover Instance for High Availability [page 501].
Note
The following recommendations are based on experience so far. However, they are only a rule of thumb, since actual performance strongly depends on the specific environment. The overall performance of a Cloud Connector is influenced by many factors (number of hosted subaccounts, bandwidth, latency to the attached regions, network routers in the corporate network, the JVM used, and others).
Restrictions
Note
Currently, you cannot perform horizontal scaling directly. However, you can distribute the load statically by operating multiple Cloud Connector installations with different location IDs for all involved subaccounts. In this scenario, you can use multiple destinations with virtually the same configuration, except for the location ID. See also Managing Subaccounts [page 392], step 4. Alternatively, each Cloud Connector instance can host its own list of subaccounts without any overlap between the respective lists. Thus, you can handle more load if a single installation risks being overloaded.
How to choose the right sizing for your Cloud Connector installation.
For the hardware, we recommend that you use different setups for master and shadow: one dedicated machine for the master, another one for the shadow. Usually, a shadow instance takes over the master role only temporarily. During most of its lifetime, in the shadow state, it needs fewer resources than the master.
If the master instance is available again after a downtime, we recommend that you switch back to the actual master.
Note
The sizing recommendations refer to the overall load across all subaccounts that are connected via the
Cloud Connector. This means that you need to accumulate the expected load of all subaccounts and
should not only calculate separately per subaccount (taking the one with the highest load as basis).
Related Information
Learn more about the basic criteria for the sizing of your Cloud Connector master instance.
For the master setup, keep in mind the expected load for communication between the SAP Cloud Platform and
on-premise systems. The setups listed below differ in a mostly qualitative manner, without hard limits for each
of them.
Note
The mentioned sizes are considered minimal configurations; larger ones are always fine. In general, the more applications, application instances, and subaccounts are connected, the more competition there will be for the limited resources on the machine.
The heap size in particular is critical. If you size it too low for the load passing through the Cloud Connector, at some point the Java Virtual Machine will execute full GCs (garbage collections) more frequently, blocking the processing of the Cloud Connector completely for multiple seconds, which massively slows down overall performance. If you experience such situations regularly, increase the heap size in the Cloud Connector UI (choose Configuration Advanced JVM ). See also Configure the Java VM [page 494].
Note
You should use the same value for both <initial heap size> and <maximum heap size>.
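In JVM option terms, equal initial and maximum heap sizes correspond to settings such as the following (the 2 GB value is purely illustrative):

```
-Xms2048m
-Xmx2048m
```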
The shadow installation is typically not used in standard situations and hence does not need the same sizing,
assuming that the time span in which it takes over the master role is limited.
Note
The shadow only acts as master, for example, during an upgrade or when an abnormal situation occurs on
the master machine, and either the Cloud Connector or the full machine on OS level needs to be restarted.
Choose the right connection configuration options to improve the performance of the Cloud Connector.
This section provides detailed information on how you can adjust the configuration to improve overall performance. This is typically relevant for an M or L installation (see Hardware Setup [page 358]). For S installations, the default configuration is probably sufficient to handle the traffic.
● As of Cloud Connector 2.11, you can configure the number of physical connections through the Cloud
Connector UI. See also Configure Tunnel Connections [page 493].
● In versions prior to 2.11, you have to modify the configuration files with an editor and restart the Cloud
Connector to activate the changes.
In general, the Cloud Connector tunnel is multiplexing multiple virtual connections over a single physical
connection. Thus, a single connection can handle a considerable amount of traffic. However, increasing the
maximum number of physical connections allows you to make use of the full available bandwidth and to
minimize latency effects.
If the bandwidth limit of your network is reached, adding connections doesn't increase the throughput; it
only consumes more resources.
Note
Different network access parameters may impact and limit your configuration options: if the access to an
external network is a 1 MB line with an added latency of 50 ms, you will not be able to achieve the same
data volumes as with a 10 GB line with an added latency of < 1 ms. However, even if the line is good (for
example, 10 GB) but has an added latency of 100 ms, the performance might still be poor.
Related Information
Configure the physical connections for on-demand to on-premise calls in the Cloud Connector.
Adjusting the number of physical connections for this direction is possible both globally in the Cloud Connector
UI (Configuration > Advanced) and for individual communication partners on the cloud side (On-Demand >
To On-Premise Applications).
Connections are established per communication partner. The current number of open connections is visible
in the Cloud Connector UI via <Subaccount> > Cloud Connections.
The global default is 1 physical connection per communication partner. This value is used across all
subaccounts hosted by the Cloud Connector instance and applies to all communication partners if no specific
value is set (On-Demand > To On-Premise Applications). In general, the default should be sufficient for
applications with low traffic. If you expect medium traffic for most applications, it may be useful to set the
default value to 2, instead of specifying individual values per application.
The following simple rule helps you decide whether an individual setting for a specific application is
required:
● Per 20 threads in one process executing requests to on-premise systems, provide 1 physical connection.
● If the request or response net size is larger than 250k, add an additional connection per 2 such
clients.
In addition to the number of connections, you can configure the number of <Tunnel Worker Threads>. This
value should be at least equal to the maximum of all individual application tunnel connections in all
subaccounts, to have at least 1 thread available for each connection that can process incoming requests and
outgoing responses.
The value for <Protocol Processor Worker Threads> is mainly relevant if RFC is used as protocol. Since
its communication model towards the ABAP system is a blocking one, each thread can handle only one call at a
time and cannot be shared. Hence, you should provide 1 thread per 5 concurrent RFC requests.
Note
The longer the RFC execution time in the backend, the more threads you should provide. Threads can be
reused only after the response of a call was returned to SAP Cloud Platform.
Configure the number of physical connections for a Cloud Connector service channel.
Service channels let you configure the number of physical connections to the communication partner on cloud
side, see Using Service Channels [page 480]. The default is 1. This value is used as well in versions prior to
Cloud Connector 2.11, which did not offer a configuration option for each service channel. You should define the
number of connections depending on the expected number of clients and, with lower priority, depending on the
size of the exchanged messages.
If there is only a single RFC client for an S/4HANA Cloud channel or only a single HANA client for a HANA DB on
SAP Cloud Platform side, increasing the number doesn't help, as each virtual connection is assigned to one
physical connection. The following simple rule lets you define the required number of connections per
service channel:
Example
For a HANA system on SAP Cloud Platform, data is replicated using 18 concurrent clients in the on-premise
network. On average, about 5 of those clients regularly send 600k. For the number of clients, you should
use 2 physical connections; for the 5 clients sending larger amounts, add another 3, which sums up to 5
connections.
You can choose between a simple portable variant of the Cloud Connector and the MSI-based installer.
The installer is the generally recommended version for both developer and productive scenarios. It lets
you, for example, register the Cloud Connector as a Windows service so that it starts automatically after a
machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector
after a simple unzip (archive extraction). You might want to use it also if you cannot perform a full
installation due to lack of permissions, or if you want to use multiple versions of the Cloud Connector
simultaneously on the same machine.
Prerequisites
● You have one of the following 64-bit operating systems: Windows 7, Windows 8.1, Windows 10, Windows
Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, or Windows Server 2016.
● You have downloaded either the portable variant as ZIP archive for Windows, or the MSI installer
from the SAP Development Tools for Eclipse page.
● You must install Microsoft Visual Studio C++ 2013 runtime libraries (vcredist_x64.exe). For more
information, see Visual C++ Redistributable Packages for Visual Studio 2013 .
Note
Even if you have a more recent version of the Microsoft Visual C++ runtime libraries, you still must
install the Microsoft Visual Studio C++ 2013 libraries.
● Java 7 or Java 8 must be installed. In case you want to use SAP JVM, you can download it from the SAP
Development Tools for Eclipse page.
● When using the portable variant, the environment variable <JAVA_HOME> must be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the relevant
bin subdirectory to the <PATH> variable.
Portable Scenario
1. Extract the sapcc-<version>-windows-x64.zip file to an arbitrary directory on your local file
system.
2. Set the environment variable JAVA_HOME to the installation directory of the JDK that you want to use to
run the Cloud Connector. Alternatively, you can add the bin subdirectory of the JDK installation directory
to the PATH environment variable.
3. Go to the Cloud Connector installation directory and start it using the go.bat batch file.
Note
The Cloud Connector is not started as a service when using the portable variant, and hence will not
automatically start after a reboot of your system. Also, the portable version does not support the automatic
upgrade procedure.
Installer Scenario
Note
The Cloud Connector is started as a Windows service in the productive use case. Therefore, installation
requires administration permissions. After installation, manage this service under Control Panel
Administrative Tools Services . The service name is Cloud Connector (formerly named Cloud
Connector 2.0). Make sure the service is executed with a user that has limited privileges. Typically,
privileges allowed for service users are defined by your company policy. Adjust the folder and file
permissions to be manageable by only this user and system administrators.
On Windows, the file scc_service.log is created and used by the Microsoft MSI installer (during Cloud
Connector installation), and by the scchost.exe executable, which registers and runs the Windows service if
you install the Cloud Connector as a Windows background job.
This log file is only needed if a problem occurs during Cloud Connector installation, or during creation and start
of the Windows service, in which the Cloud Connector is running. You can find the file in the log folder of your
Cloud Connector installation directory.
After installation, the Cloud Connector is registered as a Windows service that is configured to be started
automatically after a system reboot. You can start and stop the service via shortcuts on the desktop ("Start
Cloud Connector" and "Stop Cloud Connector"), or by using the Windows Services manager and looking for
the service SAP Cloud Connector.
Access the Cloud Connector administration UI at https://localhost:<port>, where the default port is 8443 (but
this port might have been modified during the installation).
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you have installed the Cloud Connector. If you access the Cloud Connector locally from the same
machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 382].
Related Information
Context
You can choose between a simple portable variant of the Cloud Connector and the RPM-based installer.
The installer is the generally recommended version for both developer and productive scenarios. It
registers, for example, the Cloud Connector as a daemon service so that it starts automatically after a
machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector
after a simple "tar -xzof" execution. You also might want to use it if you cannot perform a full installation
due to missing permissions for the operating system, or if you want to use multiple versions of the Cloud
Connector simultaneously on the same machine.
● You have one of the following 64-bit operating systems: SUSE Linux Enterprise Server 11 or 12, or Red Hat
Enterprise Linux 6 or 7.
● You have downloaded either the portable variant as tar.gz archive for Linux or the RPM installer
contained in the ZIP for Linux, from SAP Development Tools for Eclipse.
● Java 7 or Java 8 must be installed. If you want to use SAP JVM, you can download an up-to-date version
from SAP Development Tools for Eclipse as well. Use the following command to install it:
rpm -i sapjvm-<version>-linux-x64.rpm
If you want to check the JVM version installed on your system, use the following command:
java -version
When you install SAP JVM using the RPM package, the Cloud Connector detects it and uses it for its runtime.
● When using the tar.gz archive, the environment variable <JAVA_HOME> must be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the Java
installation's bin subdirectory to the <PATH> variable.
Portable Scenario
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
tar -xzof sapcc-<version>-linux-x64.tar.gz
Note
If you use the parameter "o", the extracted files are assigned to the user ID and the group ID of the user
who has unpacked the archive. This is the default behavior for users other than the root user.
2. Go to this directory and start the Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
In this case, Cloud Connector is not started as a daemon, and therefore will not automatically start after a
reboot of your system. Also, the portable version does not support the automatic upgrade procedure.
Installer Scenario
1. Extract the sapcc-<version>-linux-x64.zip archive to an arbitrary directory and install the Cloud
Connector by using the following commands:
unzip sapcc-<version>-linux-x64.zip
rpm -i com.sap.scc-ui-<version>.x86_64.rpm
In the productive case, Cloud Connector 2.x is started as a daemon. If you need to manage the daemon process,
execute:
service scc_daemon start|stop|restart
Caution
When adjusting the Cloud Connector installation (for example, restoring a backup), make sure the RPM
package management is synchronized with such changes. If you simply replace files that do not fit to the
information stored in the package management, lifecycle operations (such as upgrade or uninstallation)
might fail with errors. Also, the Cloud Connector might end up in an unrecoverable state.
Example: After a file system restore, the system files represent Cloud Connector 2.3.0 but the RPM
package management "believes" that version 2.4.3 is installed. In this case, commands like rpm -U and
rpm -e do not work as expected. Furthermore, avoid using the --force parameter as it may lead to an
unpredictable state with two versions being installed concurrently, which is not supported.
After installation via RPM manager, the Cloud Connector process is started automatically and registered as a
daemon process, which ensures the automatic restart of the Cloud Connector after a system reboot.
To start, stop, or restart the process explicitly, open a command shell and use the following commands, which
require root permissions:
service scc_daemon start
service scc_daemon stop
service scc_daemon restart
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 382].
Related Information
Prerequisites
Note
Mac OS X is not supported for productive scenarios. The developer version described below must not be
used as a productive version.
● You have one of the following 64-bit operating systems: Mac OS X 10.7 (Lion), Mac OS X 10.8 (Mountain
Lion), Mac OS X 10.9 (Mavericks), Mac OS X 10.10 (Yosemite), or Mac OS X 10.11 (El Capitan).
● You have downloaded the tar.gz archive for the developer use case on Mac OS X from SAP Development
Tools for Eclipse.
● Java 7 or 8 must be installed. If you want to use SAP JVM, you can download it from SAP Development
Tools for Eclipse as well.
● Environment variable <JAVA_HOME> must be set to the Java installation directory so that the bin
subdirectory can be found. Alternatively, you can add the Java installation's bin subdirectory to the
<PATH> variable.
Procedure
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
2. Go to this directory and start Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
The Cloud Connector is not started as a daemon, and therefore will not automatically start after a
reboot of your system. Also, the Mac OS X version of Cloud Connector does not support the automatic
upgrade procedure.
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 382].
Related Information
For the Connectivity service and the Cloud Connector, you should apply the following guidelines to guarantee
the highest level of security for these components.
Security Status
From the Connector menu, choose Security Status to access an overview showing potential security risks and
the recommended actions.
● Choose any of the Actions icons in the corresponding line to navigate to the UI area that deals with that
particular topic and view or edit details.
Note
Navigation is not possible for the last item in the list (Service User).
● The service user is specific to the Windows operating system (see Installation on Microsoft Windows OS
[page 363] for details) and is only visible when running the Cloud Connector on Windows. It cannot be
accessed or edited through the UI. If the service user was set up properly, choose Edit and check the
corresponding checkbox.
The Subaccount-Specific Security Status lists security-related information for each subaccount.
Note
The security status only serves as a reminder to address security issues and shows if your installation
complies with all recommended security settings.
Upon installation, the Cloud Connector provides an initial user name and password for the administration UI,
and forces the user (Administrator) to change the password. You must change the password immediately
after installation.
The connector itself does not check the strength of the password. You should select a strong password that
cannot be guessed easily.
Note
To enforce your company's password policy, we recommend that you configure the Administration UI to
use an LDAP server for authorizing access to the UI.
UI Access
The Cloud Connector administration UI can be accessed remotely via HTTPS. The connector uses a standard
X.509 self-signed certificate as SSL server certificate. You can exchange this certificate with a specific
certificate that is trusted by your company. See Recommended: Replace the Default SSL Certificate [page
374].
Note
Since browsers usually do not resolve localhost to the host name, while the certificate is usually issued for
the host name, you might get a certificate warning. In this case, simply skip the warning message.
OS-Level Access
The Cloud Connector is a security-critical component that handles the external access to systems of an
isolated network, comparable to a reverse proxy. We therefore recommend that you restrict the access to the
operating system on which the Cloud Connector is installed to the minimal set of users who would administrate
the Cloud Connector. This minimizes the risk of unauthorized users getting access to credentials, such as
certificates stored in the secure storage of the Cloud Connector.
We also recommend that you use the machine to operate only the Cloud Connector and no other systems.
Administrator Privileges
To log on to the Cloud Connector administration UI, the Administrator user of the connector does not need
an operating system (OS) user on the machine on which the connector is running. This allows the OS
administrator to be distinguished from the Cloud Connector administrator. To make an initial connection
between the connector and a particular SAP Cloud Platform subaccount, you need an SAP Cloud Platform user
with the required permissions for the related subaccount. We recommend that you separate these roles/duties
(that is, use separate users for the Cloud Connector administrator and for SAP Cloud Platform).
We recommend that only a small number of users are granted access to the machine as root users.
Hard drive encryption for machines with a Cloud Connector installation ensures that the Cloud Connector
configuration data cannot be read by unauthorized users, even if they obtain access to the hard drive.
Supported Protocols
Currently, the protocols HTTP and RFC are supported for connections between the SAP Cloud Platform and
on-premise systems when the Cloud Connector and the Connectivity service are used. The whole route from
the application virtual machine in the cloud to the Cloud Connector is always SSL-encrypted.
The route from the connector to the back-end system can be SSL-encrypted or SNC-encrypted. See Configure
Access Control (HTTP) [page 425] and Configure Access Control (RFC) [page 432].
We recommend that you turn on the audit log on operating system level to monitor the file operations.
The Cloud Connector audit log must remain switched on during the time it is used with productive systems.
The default audit level is SECURITY. Set it to ALL if required by your company policy. The administrators who
are responsible for a running Cloud Connector must ensure that the audit log files are properly archived, to
conform to the local regulations. You should switch on audit logging also in the connected back-end systems.
Encryption Ciphers
By default, all available encryption ciphers are supported for HTTPS connections to the administration UI.
However, some of them may not conform to your security standards and therefore should be excluded:
1. From the main menu, choose Configuration and select the tab User Interface, section Cipher Suites. By
default, all available ciphers are marked as selected.
2. Choose the Remove icon to deselect the ciphers that do not meet your security requirements.
We recommend that you revert the selection to the default (all ciphers selected) whenever you plan to
switch to another JVM. As the set of supported ciphers may differ, the selected ciphers may not be
supported by the new JVM, in which case the Cloud Connector does not start anymore. You then need to
fix the issue manually by adapting the file default-server.xml (see the ciphers attribute and section
Accessing the Cloud Connector Administrator UI above). After switching the JVM, you can adjust the list
of eligible ciphers again.
3. Choose Save.
Related Information
Overview
By default, the Cloud Connector includes a self-signed UI certificate. It is used to encrypt the communication
between the browser-based user interface and the Cloud Connector itself. For security reasons, however, you
should replace this certificate with your own certificate so that the browser accepts the certificate without
security warnings.
Note
As of version 2.6.0, you can easily replace the default certificate within the Cloud Connector administration
UI . See Exchange UI Certificates in the Administration UI [page 378].
Caution
The Cloud Connector's keystore may contain a certificate used in the high availability setup. This
certificate has the alias "ha". Changing or removing it would disrupt the communication between the
shadow and the master instance and therefore cause the procedure to fail. We recommend that you
replace the keystore on both the master and the shadow server before establishing the connection between
the two instances.
Procedure
● On Linux OS: make sure you go to the directory /opt/sap/scc/config before executing the commands
described in the following procedures.
Note
Memorize the keystore password, as you will need it for later operations. See related links.
Related Information
Generate a self-signed certificate for special purposes like, for example, a demo setup.
Context
Note
As of Cloud Connector 2.10 you can generate self-signed certificates also from the administration UI. See
Configure a CA Certificate for Principal Propagation [page 404] and Initial Configuration (HTTP) [page
388]. In this case, the steps below are not required.
If you want to use a simple, self-signed certificate, follow the procedure below.
Note
The server configuration delivered by SAP uses the same password for the key store (option -storepass) and
the key (option -keypass) under the alias tomcat.
Procedure
2. Generate a certificate:
3. Self-sign it; you will be prompted for the key password defined in step 2:
Procedure
If you have a signed certificate produced by a trusted certificate authority (CA), go directly to step 3.
You now have a file called <csr-file-name> that you can submit to the certificate authority. In return,
you get a certificate.
3. Import the certificate chain that you obtained from your trusted CA:
The password is created at installation time and stored in the secure storage. Thus, only applications with
access to the secure storage can read the password. You can read the password using Java:
Note
We recommend that you do not modify the configuration unless you have expertise in this area.
Related Information
By default, the Cloud Connector includes a self-signed UI certificate. It is used to encrypt the communication
between the browser-based user interface and the Cloud Connector itself. For security reasons, however, you
should replace this certificate with your own certificate so that the browser accepts it without security
warnings.
Procedure
Master Instance
1. From the main menu, choose Configuration and go to the User Interface tab.
2. In the UI Certificate section, start a certificate signing request procedure by choosing the icon Generate a
Certificate Signing Request.
The field <SAN> allows a simple value as well as formatted complex values:
○ A simple value is treated as DNS name, for example, xyz.sap.com means that the allowed host is
xyz.sap.com.
○ <SAN> also allows a list of DNS names, IPs (4 byte or IPv6), URIs, and RFC 822 names (for
example, e-mail addresses).
Note
In this case, the field <SAN> contains key:value pairs separated by ';'. ';' must not be used in a
value.
Note
6. To import the signing response, choose the Upload icon. Select Browse to locate the file and then choose
the Import button.
7. Review the certificate details that are displayed.
8. Restart the Cloud Connector to activate the new certificate.
Shadow Instance
In a High Availability setup, perform the same operation on the shadow instance.
1.4.3.2 Configuration
Configure the Cloud Connector to make it operational for connections between your SAP Cloud Platform
applications and on-premise systems.
● Initial Configuration [page 382]: After installing the Cloud Connector and starting the Cloud Connector
daemon, you can log on and perform the required configuration to make your Cloud Connector operational.
● Managing Subaccounts [page 392]: How to connect SAP Cloud Platform subaccounts to your Cloud
Connector.
● Authenticating Users against On-Premise Systems [page 400]: Basic authentication and principal
propagation (user propagation) are the authentication types currently supported by the Cloud Connector.
● Configure Access Control [page 424]: Configure access control or copy the complete access control
settings from another subaccount on the same Cloud Connector.
● Configuration REST APIs [page 448]: Configure a newly installed Cloud Connector (initial configuration,
subaccounts, access control) using the configuration REST API.
● Configure the User Store [page 478]: Configure applications running on SAP Cloud Platform to use your
corporate LDAP server as a user store.
● Using Service Channels [page 480]: Service channels provide secure and reliable access from an external
network to certain services on SAP Cloud Platform, which are not exposed to direct access from the
Internet.
● Configure Trust [page 488]: Set up a whitelist for trusted cloud applications and a trust store for
on-premise systems in the Cloud Connector.
● Connect DB Tools to SAP HANA via Service Channels [page 483]: How to connect database, BI, or
replication tools running in the on-premise network to a HANA database on SAP Cloud Platform using the
service channels of the Cloud Connector.
● Configure Domain Mappings for Cookies [page 490]: Map virtual and internal domains to ensure correct
handling of cookies in client/server communication.
● Configure Solution Management Integration [page 492]: Activate Solution Management reporting in the
Cloud Connector.
● Configure Tunnel Connections [page 493]: Adapt connectivity settings that control the throughput by
choosing the appropriate limits (maximal values).
● Configure the Java VM [page 494]: Adapt the JVM settings that control memory management.
● Configuration Backup [page 495]: Back up and restore your Cloud Connector configuration.
Tasks
Prerequisites
● On SAP Cloud Platform, the subaccount user that you use for initial setup must be a member of the global
account that the subaccount belongs to.
Note
After establishing the Cloud Connector connection, this user is no longer needed, since it serves only for
the initial connection setup. You can then revoke the corresponding role assignment and remove the user from
the Members list.
We strongly recommend that you read and follow the steps described in Recommendations for Secure Setup
[page 370]. For operating the Cloud Connector securely, see also Security Guidelines [page 545].
To administer the Cloud Connector, you need a Web browser. To check the list of supported browsers, see
Product Prerequisites and Restrictions [page 1008] → section Browser Support.
1. When you first log in, you must change the password before you continue, regardless of the installation
type you have chosen.
3. You can edit the password for the Administrator user from Configuration in the main menu, tab User
Interface, section Authentication:
If your internal landscape is protected by a firewall that blocks any outgoing TCP traffic, you must specify an
HTTPS proxy that the Cloud Connector can use to connect to SAP Cloud Platform. Normally, you must use the
same proxy settings as those being used by your standard Web browser. The Cloud Connector needs this proxy
for two operations:
● Download the correct connection configuration corresponding to your subaccount ID in SAP Cloud
Platform.
● Establish the SSL tunnel connection from the Cloud Connector to your SAP Cloud Platform
subaccount.
If you want to skip the initial configuration, you can click the icon in the upper right corner. You might
need this in case of connectivity issues shown in your logs. You can add subaccounts later as described in
Managing Subaccounts [page 392].
When you first log on, the Cloud Connector collects the following required information:
1. For <Region>, specify the SAP Cloud Platform region that should be used.
Note
You can also configure a region yourself, if it is not part of the standard list. Either insert the region host
manually, or create a custom region, as described in Configure Custom Regions [page 400].
2. For <Subaccount>, <Subaccount User> and <Password>, enter the values you obtained when you
registered your subaccount on SAP Cloud Platform.
3. (Optional) You can define a <Display Name> that lets you easily recognize a specific subaccount in the UI
compared to the technical subaccount name.
4. (Optional) You can define a <Location ID> identifying the location of this Cloud Connector for a specific
subaccount. As of Cloud Connector release 2.9.0, the location ID is used as routing information and
therefore you can connect multiple Cloud Connectors to a single subaccount. If you don't specify any value
for <Location ID>, the default is used, which represents the behavior of previous Cloud Connector
versions. The location ID must be unique per subaccount and should be an identifier that can be used in a
URI. To route requests to a Cloud Connector with a location ID, the location ID must be configured in the
respective destinations.
Note
Location IDs provided in older versions of the Cloud Connector are discarded during upgrade to ensure
compatibility for existing scenarios.
5. Enter a suitable proxy host from your network and the port that is specified for this proxy. If your network
requires an authentication for the proxy, enter a corresponding proxy user and password. You must specify
a proxy server that supports SSL communication (a standard HTTP proxy does not suffice).
Note
These settings strongly depend on your specific network setup. If you need more detailed information,
please contact your local system administrator.
6. (Optional) You can provide a <Description> (free-text) of the subaccount that is shown when choosing
the Details icon in the Actions column of the Subaccount Dashboard. It lets you identify the particular Cloud
Connector you use.
7. Choose Save.
Note
The internal network must allow access to the port. Specific configuration for opening the respective
port(s) depends on the firewall software used. The default ports are 80 for HTTP and 443 for HTTPS. For
RFC communication, you must open a gateway port (default: 33<instance number>) and an arbitrary
message server port. For a connection to a HANA Database (on SAP Cloud Platform) via JDBC, you must
open an arbitrary outbound port in your network. Mail (SMTP) communication is not supported.
● If you later want to change your proxy settings (for example, because the company firewall rules have
changed), choose Configuration from the main menu and go to the Cloud tab, section HTTPS Proxy. Some
proxy servers require credentials for authentication. In this case, you must provide the relevant user/
password information.
As soon as the initial setup is complete, the tunnel to the cloud endpoint is open, but no requests are allowed to
pass until you have performed the Access Control setup, see Configure Access Control [page 424].
To manually close (and reopen) the connection to SAP Cloud Platform, choose your subaccount from the main
menu and select the Disconnect button (or the Connect button to reconnect to SAP Cloud Platform).
● The green icon next to Region Host indicates that it is valid and can be reached.
● If an HTTPS Proxy is configured, its availability is shown the same way. In the screenshot, the grey diamond
icon next to HTTPS Proxy indicates that connectivity is possible without proxy configuration.
In case of a timeout or a connectivity issue, these icons are yellow (warning) or red (error), and a tooltip shows
the cause of the problem. Initiated By refers to the user that has originally established the tunnel. During
normal operations, this user is no longer needed. Instead, a certificate is used to open the connection to a
subaccount.
Note
When connected, you can also monitor the Cloud Connector in the Connectivity section of the SAP Cloud
Platform cockpit. There, you can track attributes like version, description, and high availability setup. Every
Cloud Connector configured for your subaccount automatically appears in the Connectivity section of the
cockpit.
Related Information
Configure the Cloud Connector for communication using the HTTP protocol.
In order to set up a mutual authentication between the Cloud Connector and any back-end system it connects
to, you can import an X.509 client certificate into the Cloud Connector. The Cloud Connector will then use the
so-called "system certificate" for all HTTPS requests to back-ends that request or require a client certificate.
This means that the CA which signed the Cloud Connector's client certificate must be trusted by all back-
end systems to which the Cloud Connector is supposed to connect.
This system certificate must be provided as a PKCS#12 file containing the client certificate, the
corresponding private key, and the CA root certificate that signed the client certificate (plus, potentially, the
certificates of any intermediate CAs, if the certificate chain is longer than 2). You can choose the PKCS#12
file from the file system via the file upload dialog; you must also supply its password for the import process.
As of version 2.6.0, there is a second option - starting a Certificate Signing Request procedure,
similar to the UI certificate described in Exchange UI Certificates in the Administration UI [page 378].
Procedure
From the left panel, choose Configuration. On the tab On Premise, choose System Certificate Import a
certificate to upload a certificate:
As of version 2.10 there is a third option - generating a self-signed certificate. It might be of use if no CA is
needed, for example, in a demo setup or if you want to use a dedicated CA. For this option, press Create and
import a self-signed certificate:
If a system certificate has been imported successfully, its distinguished name, the name of the issuer, and the
validity dates are displayed:
Related Information
To set up a mutual authentication between Cloud Connector and an ABAP back-end system (connected via
RFC), you can configure SNC for the Cloud Connector. It will then use the associated PSE for all RFC SNC
requests. This means that the SNC identity, represented by this PSE, needs to:
● Be trusted by all back-end systems to which the Cloud Connector is supposed to connect;
● Play the role of a trusted external system by adding the SNC name of the Cloud Connector to the
SNCSYSACL table. You can find more details in the SNC configuration documentation for the release of
your ABAP system.
Prerequisites
You have configured your ABAP system(s) for SNC. For detailed information on configuring SNC for an ABAP
system, see also Configuring SNC on AS ABAP. In order to establish trust for Principal Propagation, follow the
steps described in Configure Principal Propagation to an ABAP System for RFC [page 411].
○ Library Name: Provides the location of the SNC library you are using for the Cloud Connector.
Note
Bear in mind that you must use the same security product on both sides of the
communication.
○ My Name: The SNC name that identifies the Cloud Connector. It represents a valid scheme for the
SNC implementation that is used.
○ Quality of Protection: Determines the level of protection that you require for the connectivity to the
ABAP systems.
Note
When using CommonCryptoLibrary as SNC implementation, SAP Note 1525059 helps you configure
the PSE to be associated with the user running the Cloud Connector process.
Related Information
Configure the Cloud Connector to support LDAP in different scenarios (cloud applications using LDAP or Cloud
Connector authentication).
Prerequisites
You have installed the Cloud Connector and done the basic configuration:
Steps
When using LDAP-based user management, you have to configure the Cloud Connector to support this feature.
Depending on the scenario, you need to perform the following steps:
Scenario 1: Cloud applications using LDAP for authentication. Configure the destination of the LDAP server in
the Cloud Connector: Configure Access Control (LDAP) [page 439].
Scenario 2: Internal Cloud Connector user management. Activate LDAP user management in the Cloud
Connector: Use LDAP for Authentication [page 498].
Add and connect your SAP Cloud Platform subaccounts to the Cloud Connector.
Context
As of version 2.2, you can connect to several subaccounts within a single Cloud Connector installation. Those
subaccounts can use the Cloud Connector concurrently with different configurations. By selecting a
subaccount from the drop-down box, all tab entries show the configuration, audit, and state, specific to this
subaccount. In case of audit and traces, cross-subaccount info is merged with the subaccount-specific parts of
the UI.
Note
We recommend that you group only subaccounts with the same qualities in a single installation:
● Productive subaccounts should reside on a Cloud Connector that is used for productive subaccounts
only.
Prerequisites
● On SAP Cloud Platform, the subaccount user that you use for initial setup must be a member of the global
account that the subaccount belongs to.
Note
After establishing the Cloud Connector connection, this user is not needed any more since it serves only for
initial connection setup. You may revoke the corresponding role assignment then and remove the user from
the Members list.
Subaccount Dashboard
In the subaccount dashboard (choose your Subaccount from the main menu), you can check the state of all
subaccount connections managed by this Cloud Connector at a glance.
In the screenshot above, the test1 subaccount is already connected, but has no active resources exposed.
The test2 subaccount is currently disconnected.
The dashboard also lets you disconnect or connect the subaccounts by choosing the respective button in the
Actions column.
If you want to connect an additional subaccount to your on-premise landscape, choose the Add Subaccount
button. A dialog appears, which is similar to the Initial Configuration operation when establishing the first
connection.
1. The <Region> field specifies the SAP Cloud Platform region that should be used, for example, Europe
(Rot). Choose the one you need from the drop-down list. See SAP Cloud Platform Cockpit [page 1006] →
section "Logon".
Note
You can also configure a region yourself, if it is not part of the standard list. Either insert the region host
manually, or create a custom region, as described in Configure Custom Regions [page 400].
2. For <Subaccount> and <Subaccount User> (user/password), enter the values you obtained when you
registered your account on SAP Cloud Platform.
3. (Optional) You can define a <Display Name> that allows you to easily recognize a specific subaccount in
the UI compared to the technical subaccount name.
4. (Optional) You can define a <Location ID> that identifies the location of this Cloud Connector for a
specific subaccount. As of Cloud Connector release 2.9.0, the location ID is used as routing information
and therefore you can connect multiple Cloud Connectors to a single subaccount. If you don't specify any
value for <Location ID>, the default is used, which represents the behavior of previous Cloud Connector
versions. The location ID must be unique per subaccount and should be an identifier that can be used in a
URI. To route requests to a Cloud Connector with a location ID, the location ID must be configured in the
respective destinations.
5. (Optional) You can provide a <Description> of the subaccount that is shown when clicking on the Details
icon in the Actions column.
6. Choose Save.
● To modify an existing subaccount, choose the Edit icon and change the <Display Name>, <Location
ID> and/or <Description>.
● You can also delete a subaccount from the list of connections. The subaccount will be disconnected and
all configurations will be removed from the installation.
Related Information
You can copy the configuration of a subaccount's Cloud To On-Premise and On-Premise To Cloud sections to a
new subaccount by using the export and import functions in the Cloud Connector administration UI.
Note
Principal propagation configuration (section Cloud To On-Premise) is not exported or imported, since it
contains subaccount-specific data.
1. In the Cloud Connector administration UI, choose your subaccount from the navigation menu.
2. To export the existing configuration, choose the Export button in the upper right corner. The configuration
is downloaded as a zip file to your local file system.
1. From the navigation menu, choose the subaccount to which you want to copy an existing configuration.
2. To import an existing configuration, choose the Import button in the upper right corner.
The certificates that are used by the Cloud Connector are issued with a limited validity period. To prevent a
downtime while refreshing the certificate, you can update it for your subaccount directly from the
administration UI.
Prerequisite is that you are using the enhanced disaster recovery, see What is Enhanced Disaster Recovery.
The disaster recovery subaccount is intended to take over if the region host of its associated original
subaccount faces severe issues.
A disaster recovery account inherits the configuration from its original subaccount except for the region host.
The user can, but does not have to be the same.
Note
The selected region host must be different from the region host of the original subaccount.
Note
The technical subaccount name is the same as for the original subaccount, and set automatically.
Note
You cannot choose another original subaccount to become a disaster recovery subaccount.
Note
If you want to change a disaster recovery subaccount, you must delete it first and then configure it again.
To switch from the original subaccount to the disaster recovery subaccount, choose Employ disaster recovery
subaccount.
The disaster recovery subaccount then becomes active, and the original subaccount is deactivated.
You can switch back to the original subaccount as soon as it is available again.
Note
As of Cloud Connector 2.11, the cloud side informs about a disaster by issuing an event. In this case, the
switch is performed automatically.
Convert a disaster recovery subaccount into a standard subaccount if the former primary subaccount's region
cannot be recovered.
Disaster recovery subaccounts that were switched to disaster recovery mode can be elevated to standard
subaccounts if a disaster recovery region replaces an original region that is not expected to recover.
If a disaster recovery subaccount should be used as primary subaccount, you can convert it by choosing the
button Discard original subaccount and replace it with disaster recovery subaccount.
Get your subaccount ID to configure the Cloud Connector in the Cloud Foundry environment.
In order to set up your subaccount in the Cloud Connector, you must know the subaccount ID. Follow these
steps to acquire it:
If you want to use a custom region for your subaccount, you can configure regions in the Cloud Connector,
which are not listed in the selection of standard regions.
1. From the Cloud Connector main menu, choose Configuration Cloud and go to the Custom Regions
section.
2. To add a region to the list, choose the Add icon.
3. In the Add Region dialog, enter the <Region> and <Region Host> you want to use.
4. Choose Save.
5. To edit a region from the list, select the corresponding line and choose the Edit icon.
Currently, the Cloud Connector supports basic authentication and principal propagation (user propagation)
as user authentication types towards internal systems. The destination configuration of the cloud application
that is used determines which authentication type applies:
● To use basic authentication, configure an on-premise system to accept basic authentication and to
provide one or multiple service users. No additional steps are necessary in the Cloud Connector for this
authentication type.
● To use principal propagation, you must explicitly configure trust to those cloud entities from which user
tokens are accepted as valid. You can do this in the Trust view of the Cloud Connector, see Set Up Trust for
Principal Propagation [page 402].
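As a sketch, the choice between the two authentication types is made in the destination configuration of the cloud application. The following fragments use hypothetical names and assume the standard destination properties ProxyType, Authentication, User, and Password:

```properties
# Variant 1: basic authentication with a service user of the on-premise system
Name=backend-basic
ProxyType=OnPremise
Authentication=BasicAuthentication
User=SERVICE_USER
Password=<password>

# Variant 2: principal propagation (forwards the logged-on cloud identity)
Name=backend-pp
ProxyType=OnPremise
Authentication=PrincipalPropagation
```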
Related Information
Use principal propagation to simplify the access of SAP Cloud Platform users to on-premise systems.
● Set Up Trust for Principal Propagation [page 402]: Configure a trusted relationship in the Cloud Connector to
support principal propagation. Principal propagation lets you forward the logged-on identity in the cloud to
the internal system without requesting a password.
● Configure a CA Certificate for Principal Propagation [page 404]: Install and configure an X.509 certificate to
enable support for principal propagation.
● Configuring Principal Propagation to an ABAP System [page 408]: Learn more about the different types of
configuring and supporting principal propagation for a particular AS ABAP.
● Configure a Subject Pattern for Principal Propagation [page 416]: Define a pattern identifying the user for the
subject of the generated short-lived X.509 certificate, as well as its validity period.
● Configure a Secure Login Server [page 418]: Configuration steps for Java Secure Login Server (SLS) support.
● Configure Kerberos [page 422]: The Cloud Connector lets you propagate users authenticated in SAP Cloud
Platform via Kerberos against back-end systems. It uses the Service For User and Constrained Delegation
protocol extension of Kerberos.
Tasks
You perform trust configuration to support principal propagation. By default, your Cloud Connector does not
trust any entity that issues tokens for principal propagation. Therefore, the list of trusted identity providers is
empty by default. If you decide to use the principal propagation feature, you must establish trust to at least one
identity provider. Currently, SAML2 identity providers are supported. You can configure trust to one or more
SAML2 IdPs per subaccount. After you've configured trust in the cockpit for your subaccount, for example, to
your own company's identity provider(s), you can synchronize this list with your Cloud Connector.
As of Cloud Connector 2.4, you can also trust HANA instances and Java applications to act as identity
providers.
From your subaccount menu, choose Cloud to On-Premise and go to the Principal Propagation tab. Choose the
Synchronize button to store the list of existing identity providers locally in your Cloud Connector.
You can decide for each entry, whether to trust it for the principal propagation use case by choosing Edit and
(de)selecting the Trusted checkbox.
Note
Whenever you update the SAML IdP configuration for a subaccount on the cloud side, you must synchronize
the trusted entities in the Cloud Connector. Otherwise, the validation of the forwarded SAML assertion will
fail with an exception message similar to this: Caused by:
com.sap.engine.lib.xml.signature.SignatureException: Unable to validate signature ->
java.security.SignatureException: Signature decryption error: javax.crypto.BadPaddingException: Invalid
PKCS#1 padding: encrypted message and modulus lengths do not match!
Set up principal propagation from SAP Cloud Platform to your internal system that is used in a hybrid scenario.
Note
As a prerequisite for principal propagation for RFC, the following cloud application runtime versions are
required:
1. Set up trust to an entity that is issuing an assertion for the logged-on user (see section above).
2. Set up the system identity for the Cloud Connector.
○ For HTTPS, you must import a system certificate into your Cloud Connector.
○ For RFC, you must import an SNC PSE into your Cloud Connector.
3. Configure the target system to trust the Cloud Connector. There are two levels of trust:
1. First, you must allow the Cloud Connector to identify itself with its system certificate (for HTTPS), or
with the SNC PSE (for RFC).
2. Then, you must allow this identity to propagate the user accordingly:
○ For HTTPS, the Cloud Connector forwards the true identity in a short-lived X.509 certificate in an
HTTP header named SSL_CLIENT_CERT. The system must use this certificate for logging on the
real user. The SSL handshake, however, is performed through the system certificate.
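On a non-ABAP back end, the logon step described above amounts to reading the forwarded certificate and mapping its subject to a user. The following Python sketch is purely illustrative (it assumes the subject DN has already been extracted from the certificate received in the SSL_CLIENT_CERT header) and uses the example subject from this guide:

```python
def user_from_subject(subject_dn: str) -> str:
    """Return the CN value from a certificate subject DN such as
    'CN=P1234567890, OU=HCP Scenarios, O=Trust Community, C=DE'."""
    for part in subject_dn.split(","):
        key, _, value = part.strip().partition("=")
        if key.upper() == "CN":
            return value
    raise ValueError("no CN attribute in subject: " + subject_dn)

print(user_from_subject("CN=P1234567890, OU=HCP Scenarios, O=Trust Community, C=DE"))
# -> P1234567890
```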
Note
If you use an identity provider that issues unsigned assertions, you must mark all relevant applications as
trusted by the Cloud Connector in tab Principal Propagation, section Trust Configuration.
Configure a whitelist for trusted cloud applications, see Configure Trust [page 488].
Configure a trust store that acts as a whitelist for trusted on-premise systems. See Configure Trust [page 488].
Install and configure an X.509 certificate to enable support for principal propagation in the Cloud Connector.
You can enable support for principal propagation with X.509 certificates by performing either of the following
procedures:
Note
Prior to version 2.7.0, this was the only option and the system certificate was acting both as client
certificate and CA certificate in the context of principal propagation.
The Cloud Connector uses the configured CA approach to issue short-lived certificates for logging on the same
identity in the back end that is logged on in the cloud. For establishing trust with the back end, the respective
configuration steps are independent of the approach that you choose for the CA.
To issue short-lived certificates that are used for principal propagation to a back-end system, you can import an
X.509 client certificate into the Cloud Connector. This CA certificate must be provided as PKCS#12 file
containing the (intermediate) certificate, the corresponding private key, and the CA root certificate that signed
the intermediate certificate (plus the certificates of any other intermediate CAs, if the certificate chain includes
more than those two certificates).
● Option 1: Choose the PKCS#12 file from the file system, using the file upload dialog. For the import
process, you must also provide the file password.
● Option 2: Start a Certificate Signing Request (CSR) procedure like for the UI certificate, see Exchange UI
Certificates in the Administration UI [page 378].
● Option 3: (As of version 2.10) Generate a self-signed certificate, which might be useful in a demo setup or if
you need a dedicated CA. In particular for this option, it is useful to export the public key of the CA via the
button Download certificate in DER format.
Note
The CA certificate should have the KeyUsage attribute keyCertSign. Many systems verify that the issuer
of a certificate includes this attribute and deny a client certificate without this attribute. When using the
CSR procedure, the attribute is requested for the CA certificate. Also, when generating a self-signed
certificate, this attribute is added automatically.
After successful import of the CA certificate, its distinguished name, the name of the issuer, and the validity
dates are shown:
If a CA certificate is no longer required, you can delete it. Use the respective Delete button and confirm the
deletion.
If you want to delegate the CA functionality to a Secure Login Server, choose the CA using Secure Login Server
option and configure the Secure Login Server as follows, after having configured the Secure Login server as
described in Configure a Secure Login Server [page 418].
● <Host Name>: The host, on which your Secure Login Server (SLS) is installed.
● <Profiles Port>: The profiles port must be provided only if your Secure Login Server is configured
not to allow fetching profiles via the privileged authentication port. In this case, provide the port that
is configured for this functionality.
● <Authentication Port>: The port, over which the Cloud Connector is requesting the short-lived
certificates from SLS. Choose Next.
Note
For this privileged port, a client certificate authentication is required, for which the Cloud Connector
system certificate is used.
● <Profile>: The Secure Login Server profile that allows issuing certificates as needed for principal
propagation with the Cloud Connector.
Related Information
Learn more about the different types of configuring and supporting principal propagation for a particular AS
ABAP.
● Configure Principal Propagation to an ABAP System for HTTPS [page 408]: Step-by-step instructions to
configure principal propagation to an ABAP server for HTTPS.
● Configure Principal Propagation to an ABAP System for RFC [page 411]: Step-by-step instructions to
configure principal propagation to an ABAP server for RFC.
● Rule-based Mapping of Certificates [page 415]: Map short-lived certificates to users in the ABAP server.
In this example, you can find step-by-step instructions on how to configure principal propagation to an ABAP
server for HTTPS.
Example Data
● System certificate was issued by: CN=MyCompany CA, O=Trust Community, C=DE.
● It has the subject: CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE.
● The short-lived certificate has the subject CN=P1234567890, where P1234567890 is the platform user.
Tasks
Configure an ABAP System to Trust the Cloud Connector's System Certificate [page 409]
To perform the following steps, you must have the corresponding authorizations in the ABAP system for the
transactions mentioned below (administrator role according to your specific authorization management) as
well as an administrator user for the Cloud Connector.
Configure the ABAP system to trust the Cloud Connector's system certificate [page 409]
Configure the Internet Communication Manager (ICM) to trust the system certificate for principal propagation
[page 409]
Configure the ABAP system to trust the Cloud Connector's system certificate:
Configure the Internet Communication Manager (ICM) to trust the system certificate for principal
propagation:
If you have applied SAP Note 2052899 to your system, you can alternatively provide an additional
parameter for icm/trusted_reverse_proxy_<x>, for example: icm/trusted_reverse_proxy_2
= SUBJECT="CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE", ISSUER="CN=MyCompany
CA, O=Trust Community, C=DE".
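Collected in one place, the profile entry from the example above would read as follows in the instance profile (values taken from the example data; the index 2 is arbitrary):

```properties
icm/trusted_reverse_proxy_2 = SUBJECT="CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE", ISSUER="CN=MyCompany CA, O=Trust Community, C=DE"
```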
Note
If you have a Web dispatcher installed in front of the ABAP system, trust must be added in its configuration
files with the same parameters as for the ICM. Also, you must add the system certificate of the Cloud
Connector to the trust list of the Web dispatcher Server PSE.
You can do this manually in the system as described below or use an identity management solution for a more
comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good
way to save time and effort. For more information, see Rule-based Mapping of Certificates [page 415].
To access the required ICF services for your scenario in the ABAP system, choose one of the following
procedures:
● To access ICF services via certificate logon, choose the principal type X.509 Certificate (general
usage) in the corresponding system mapping. This setting lets you use the system certificate for trust as
well as for user authentication. For details, see Configure Access Control (HTTP) [page 425], step 7.
Additionally, make sure that all required ICF services allow Logon Through SSL Certificate as logon
method.
● To access ICF services via the logon method Basic Authentication (logon with user/password) and
principal propagation, choose the principal type X.509 Certificate (strict usage) in the
corresponding system mapping. This setting lets you use the system certificate for trust, but prevents its
usage for user authentication. For details, see Configure Access Control (HTTP) [page 425], step 7.
Additionally, make sure that all required ICF services allow Basic Authentication and Logon Through
SSL Certificate as logon methods.
● If some of the ICF services require Basic Authentication, while others should be accessed via system
certificate logon, proceed as follows:
1. In the Cloud Connector system mapping, choose the principal type X.509 Certificate (general
usage) as described above.
2. In the ABAP system, choose transaction code SICF and go to Maintain Services.
3. Select the service that requires Basic Authentication as logon method.
4. Double-click the service and go to tab Logon Data.
5. Switch to Alternative Logon Procedure and ensure that the Basic Authentication logon procedure
is listed before Logon Through SSL Certificate.
Related Information
Configuring principal propagation for RFC requires an SNC (Secure Network Communications) connection.
To enable SNC, you must configure the ABAP system and the Cloud Connector accordingly.
Note
It is important that you use the same SNC implementation on both communication sides. Contact the
vendor of your SNC solution to check the compatibility rules.
Example Data
Note
The parameters provided in this example are based on an SNC implementation that uses the SAP
Cryptographic Library. Other vendors' libraries may require different values.
● An SNC identity has been generated and installed on the Cloud Connector host. Generating this identity for
the SAP Cryptographic Library is typically done using the tool SAPGENPSE. For more information, see
Configuring SNC for SAPCRYPTOLIB Using SAPGENPSE.
● The Cloud Connector system identity's SNC name is p:CN=SCC, OU=SAP CP Scenarios, O=Trust
Community, C=DE.
● The ABAP system's SNC identity name is p:CN=SID, O=Trust Community, C=DE. This value can
typically be found in the ABAP system instance profile parameter snc/identity/as and hence is
provided per application server.
● When using the SAP Cryptographic Library, the ABAP system's SNC identity and the Cloud Connector's
system identity should be signed by the same CA for mutual authentication.
● The example short-lived certificate has the subject CN=P1234567, where P1234567 is the SAP Cloud
Platform application user.
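In profile notation, the ABAP system's SNC identity from the example data above corresponds to this instance profile parameter:

```properties
snc/identity/as = p:CN=SID, O=Trust Community, C=DE
```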
Tasks
1. Open the SNC Access Control List for Systems (transaction SNC0).
2. As the Cloud Connector does not have a system ID, use an arbitrary value for <System ID> and enter it
together with its SNC name: p:CN=SCC, OU=SAP CP Scenarios, O=Trust Community, C=DE.
3. Save the entry and choose the Details button.
4. In the next screen, activate the checkboxes for Entry for RFC activated and Entry for certificate activated.
5. Save your settings.
You can do this manually in the system as described below or use an identity management solution for a more
comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good
way to save time and effort. See Rule-Based Certificate Mapping.
Set up the Cloud Connector to Use the SNC Implementation [page 414]
Prerequisites
● The required security product for the SNC flavor that is used by your ABAP back-end systems, is installed
on the Cloud Connector host.
● The Cloud Connector's system SNC identity is associated with the operating system user under which the
Cloud Connector process is running.
SAP Note 2642538 describes how you can associate an SNC identity of the SAP
Cryptographic Library with a user running an external program that uses JCo. If you use the SAP
Cryptographic Library as SNC implementation, perform the corresponding steps for the Cloud
Connector. When using a different product, contact the SNC library vendor for details.
1. In the Cloud Connector UI, choose Configuration from the main menu, select the On Premise tab, and go to
the SNC section.
2. Provide the fully qualified name of the SNC library (the security product's shared library implementing the
GSS API), the SNC name of the above system identity, and the desired quality of protection by choosing
the Edit icon.
For more information, see Initial Configuration (RFC) [page 390].
Note
The example in Initial Configuration (RFC) [page 390] shows the library location if you use the SAP
Secure Login Client as your SNC security product. In this case (as well as for some other security
products), SNC My Name is optional, because the security product automatically uses the identity
associated with the current operating system user under which the process is running, so you can
leave that field empty. (Otherwise, in this example it should be filled with p:CN=SCC, OU=SAP CP
Scenarios, O=Trust Community, C=DE.)
We recommend that you enter Maximum Protection for <Quality of Protection>, if your
security solution supports it, as it provides the best protection.
1. In the Access Control section of the Cloud Connector, create a hostname mapping corresponding to the
cloud-side RFC destination. See Configure Access Control (RFC) [page 432].
2. Make sure you choose RFC SNC as <Protocol> and ABAP System as <Back-end Type>. In the <SNC
Partner Name> field, enter the ABAP system's SNC identity name, for example, p:CN=SID, O=Trust
Community, C=DE.
3. Save your mapping.
Learn how to efficiently map short-lived certificates to users in the ABAP server.
Note
If dynamic parameters are disabled, enter the value using transaction RZ10 and restart the whole
ABAP system.
Note
To access transaction CERTRULE, you need the corresponding authorizations (see: Assign
Authorization Objects for Rule-based Mapping [page 416]).
Note
When you save the changes and return to transaction CERTRULE, the sample certificate which you
imported in Step 2b will not be saved. This is just a sample editor view to see the sample
certificates and mappings.
Define a pattern identifying the user for the subject of the generated short-lived X.509 certificate, as well as its
validity period.
To configure such a pattern, choose Configuration On Premise and press the Edit icon in section Principal
Propagation:
Use either of the following procedures to define the subject's distinguished name (DN), for which the certificate
will be issued:
Using the selection menu, you can assign values for the following parameters:
● ${name}
● ${mail}
● ${display_name}
● ${login_name} (as of Cloud Connector version 2.8.1.1)
Note
If the token provided by the Identity Provider contains additional values that are stored in attributes with
different names, but you still want to use it for the subject pattern, you can edit the variable name to place
the corresponding attribute value in the subject accordingly. For example, provide ${email}, if a SAML
assertion uses email instead of providing mail.
The values for these variables are provided by the trusted Identity Provider in the token that is passed to the
Cloud Connector and specifies the user that has logged on to the cloud application.
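The substitution can be pictured with a small Python sketch (a hypothetical helper, not Cloud Connector code): each ${...} variable in the subject pattern is replaced by the corresponding attribute from the identity provider's token.

```python
import re

def render_subject(pattern: str, attributes: dict) -> str:
    """Replace each ${var} placeholder with the matching token attribute."""
    return re.sub(r"\$\{(\w+)\}", lambda m: attributes[m.group(1)], pattern)

subject = render_subject(
    "CN=${name}, OU=HCP Scenarios, O=Trust Community, C=DE",
    {"name": "P1234567890"},
)
print(subject)  # -> CN=P1234567890, OU=HCP Scenarios, O=Trust Community, C=DE
```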
Sample Certificate
By choosing Generate Sample Certificate you can create a sample certificate that looks like one of the short-
lived certificates created at runtime. You can use this certificate, for example, to generate user mapping rules in
the target system via transaction CERTRULE in an ABAP system. If your subject pattern contains variable
fields, a wizard lets you provide meaningful values for each of them, and finally you can save the sample
certificate in DER format.
Related Information
Content
The Cloud Connector can use on-the-fly generated X.509 user certificates to log in to on-premise systems if
the external user session is authenticated (for example by means of SAML). If you do not want to use the built-
in certification authority (CA) functionality of the Cloud Connector (for example because of security
considerations), you can connect SAP SSO 2.0 Secure Login Server (SLS).
SLS is a Java application running on AS JAVA 7.20 or higher, which provides interfaces for certificate
enrollment.
● HTTPS
● REST
● JSON
● PKCS#10/PKCS#7
Note
Any enrollment requires a successful user or client authentication, which can be single, multiple, or even
multi-factor authentication.
● LDAP/ADS
● RADIUS
● SAP SSO OTP
● ABAP RFC
● Kerberos/SPNego
● X.509 TLS Client Authentication
SLS lets you define arbitrary enrollment profiles, each with a unique profile UID in its URL and with
configurable authentication and certificate generation.
Requirements
For user certification, SLS must provide a profile that adheres to the following:
With SAP SSO 2.0 SP06, SLS provides the following required features:
Implementation
INSTALLATION
Follow the standard installation procedures for SLS. This includes the initial setup of a PKI (public key
infrastructure).
Note
SLS allows you to set up one or more PKIs of your own with Root CA, User CA, and so on. You can also import
CAs as a PKCS#12 file or use a hardware security module (HSM) as "External User CA".
Note
You should only use HTTPS connections for any communication with SLS. AS JAVA / ICM supports TLS,
and the default configuration comes with a self-signed server certificate. You may use SLS to replace this
certificate with a PKI certificate.
CONFIGURATION
SSL Ports
1. Open the NetWeaver Administrator, choose Configuration SSL and define a new port with Client
Authentication Mode = REQUIRED.
Note
You may also define another port with Client Authentication Mode = Do not request if you
did not do so yet.
2. Import the root CA of the PKI that issued your Cloud Connector service certificate.
3. Save the configuration and restart the Internet Communication Manager (ICM).
Authentication Policy
Root CA Certificate
Cloud Connector
Follow the standard installation procedure of the Cloud Connector and configure SLS support:
1. Enter the policy URL that points to the SLS user profile group.
2. Select the profile, for example, Cloud Connector User Certificates.
3. Import the Root CA certificate of SLS into the Cloud Connector's Trust Store.
Follow the standard configuration procedure for Cloud Connector support in the corresponding target system
and configure SLS support.
To do so, import the Root CA certificate of SLS into the system's truststore:
● AS ABAP: choose transaction STRUST and follow the steps in Maintaining the SSL Server PSE's Certificate
List.
● AS Java: open the NetWeaver Administrator and follow the steps described in Configuring the SSL Key Pair
and Trusted X.509 Certificates.
Context
The Cloud Connector allows you to propagate users authenticated in SAP Cloud Platform via Kerberos against
backend systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
Note
This feature is not supported for ABAP backend systems. In this case, you can use the certificate-based
principal propagation, see Configure a CA Certificate for Principal Propagation [page 404].
The Key Distribution Center (KDC) is used for exchanging messages in order to retrieve Kerberos tokens for a
certain user and backend system.
For more information, see Kerberos Protocol Extensions: Service for User and Constrained Delegation Protocol.
Procedure
3. In the <KDC Hosts> field (press Add to display the field), enter the host name of your KDC using the
format <host>:<port>. The port is optional; if you leave it empty, the default, 88, is used.
4. Enter the name of your Kerberos realm.
5. Upload a KEYTAB file that contains the secret keys of your service user. The KEYTAB file should contain the
rc4-hmac key for your user.
6. Enter the name of the service user to be used for communication with the KDC. This user should be
allowed to request Kerberos tokens for other users for the backend systems that you are going to access.
7. Choose Save.
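The <host>:<port> format from step 3 could be parsed as in the following sketch (illustration only; the host name is an example), using 88 as the default Kerberos port:

```python
def parse_kdc_host(spec, default_port=88):
    """Split a KDC host specification of the form <host>[:<port>].
    If no port is given, the default Kerberos port 88 is used."""
    host, sep, port = spec.partition(":")
    return host, int(port) if sep else default_port

# With and without an explicit port:
kdc_default = parse_kdc_host("kdc.example.corp")
kdc_custom = parse_kdc_host("kdc.example.corp:1088")
```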
Example
You have a backend system protected with SPNego authentication in your corporate network. You want to call
it from a cloud application while preserving the identity of a cloud-authenticated user.
When you now call a backend system, the Cloud Connector obtains an SPNego token from your KDC for the
cloud-authenticated user. This token is sent along with the request to the back end, so that the back end can
authenticate the user and preserve the identity.
Related Information
Specify the backend systems that can be accessed by your cloud applications.
To allow your cloud applications to access a certain backend system on the intranet, you must specify this
system in the Cloud Connector. The procedure is specific to the protocol that you are using for communication.
Find the detailed configuration steps for each communication protocol here:
When you add new subaccounts, you can copy the complete access control settings from another subaccount
on the same Cloud Connector. You can also do it any time later by using the import/export mechanism
provided by the Cloud Connector.
1. From your subaccount menu, choose Cloud To On-Premise and select the tab Access Control.
2. To store the current settings in a ZIP file, choose the Download icon in the upper-right corner.
3. You can later import this file into a different Cloud Connector.
There are two locations from which you can import access control settings:
There are also two options that influence the behavior of the import:
● Overwrite: All previously existing system mappings are removed. If you don't select this option, the imported
mappings are merged into the list of existing ones. Whether the option is selected or not, if the same virtual
host-port combination already exists, it is overwritten by the imported one. By default, imported system
mappings are merged into the existing ones.
● Include Resources: The resources that belong to the imported systems are also imported. If you don't select
this option, only the list of system mappings is imported, without any exposed resource.
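The effect of the Overwrite option can be sketched as follows (a simplified model, assuming system mappings are keyed by their virtual host-port combination; host names are invented):

```python
def import_mappings(existing, imported, overwrite=False):
    """Merge imported system mappings (keyed by virtual host:port) into
    the existing ones. With overwrite=True, all previous mappings are
    removed first; either way, an imported mapping replaces an existing
    one with the same virtual host:port."""
    result = {} if overwrite else dict(existing)
    result.update(imported)
    return result

existing = {"virt1:80": "internal1:8080", "virt2:443": "internal2:443"}
imported = {"virt2:443": "internal3:443"}
merged = import_mappings(existing, imported)          # virt1 kept, virt2 replaced
replaced = import_mappings(existing, imported, True)  # only imported mappings remain
```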
Related Information
Specify the backend systems that can be accessed by your cloud applications using HTTP.
To allow your cloud applications to access a certain backend system on the intranet via HTTP, you must specify
this system in the Cloud Connector.
Make sure that redirect locations are also configured as internal hosts.
If the target server responds with a redirect HTTP status code (30x), the cloud-side HTTP client usually
sends the redirect over the Cloud Connector as well. The Cloud Connector runtime then performs a reverse
lookup to rewrite the location header that indicates where to route the redirected request.
If the redirect location is ambiguous (that is, several mappings point to the same internal host and port),
the first one found is used. If none is found, the location header stays untouched.
Tasks
4. Protocol: This field allows you to decide whether the Cloud Connector should use HTTP or HTTPS for the
connection to the backend system. Note that this is completely independent from the setting on cloud
side. Thus, even if the HTTP destination on cloud side specifies "http://" in its URL, you can select
HTTPS. This way, you are ensured that the entire connection from the cloud application to the actual
backend system (provided through the SSL tunnel) is SSL-encrypted. The only prerequisite is that the
backend system supports HTTPS on that port. For more information, see Initial Configuration (HTTP)
[page 388].
5. Internal Host and Internal Port specify the actual host and port under which the target system can be
reached within the intranet. It needs to be an existing network address that can be resolved on the intranet
and has network visibility for the Cloud Connector without any proxy. The Cloud Connector will try to forward
the request to the network address specified by the internal host and port, so this address needs to be real.
6. Virtual Host specifies the host name exactly as it is specified as the URL property in the HTTP destination
configuration in SAP Cloud Platform.
See:
Create HTTP Destinations [page 29] (Cloud Foundry environment)
Create HTTP Destinations [page 206] (Neo environment)
The virtual host can be a fake name and does not need to exist. The Virtual Port allows you to distinguish
between different entry points of your backend system, for example, HTTP/80 and HTTPS/443, and have
different sets of access control settings for them. For example, some noncritical resources may be
accessed by HTTP, while some other critical resources are to be called using HTTPS only. The fields will be
prepopulated with the values of the Internal Host and Internal Port. If you don't modify them, you must
also provide your internal host and port in the cloud-side destination configuration or in the URL used
by your HTTP client.
Note
There are two variants of a principal type X.509 certificate: X.509 certificate (general usage) and X.509
certificate (strict usage). The latter was introduced with Cloud Connector 2.11. If the cloud side sends a
principal, these variants behave identically. If no principal is sent, the injected HTTP headers indicate
that the system certificate used for trust is not used for authentication.
For more information on principal propagation, see Configuring Principal Propagation [page 401].
8. Host In Request Header lets you define which host is used in the host header that is sent to the target
server. By choosing Use Internal Host, the actual host name is used. When choosing Use Virtual
Host, the virtual host is used. In the first case, the virtual host is still sent via the X-Forwarded-Host
header.
10. The summary shows information about the system to be stored and when saving the host mapping, you
can trigger a ping from the Cloud Connector to the internal host, using the Check availability of internal
host checkbox. This allows you to make sure the Cloud Connector can indeed access the internal system,
and allows you to catch basic things, such as spelling mistakes or firewall problems between the Cloud
Connector and the internal host. If the ping to the internal host is successful, the Cloud Connector saves
the mapping without any remark. If it fails, a warning pops up, indicating that the host is not reachable. Details
for the reason are available in the log files. You can execute such a check for all selected systems in the Access
Control overview.
In addition to allowing access to a particular host and port, you also must specify which URL paths (Resources)
are allowed to be invoked on that host. The Cloud Connector uses very strict white-lists for its access control,
so only those URLs for which you explicitly granted access are allowed. All other HTTP(S) requests are denied
by the Cloud Connector.
To define the permitted URLs (Resources) for a particular backend system, choose the line corresponding to
that backend system and choose Add in section Resources Accessible On... below. A dialog appears prompting
you to enter the specific URL path that you want to allow to be invoked.
The Active checkbox lets you specify whether that resource is initially enabled or disabled. See the section below
for more information on enabled and disabled resources.
The WebSocket Upgrade checkbox lets you specify whether that resource allows a protocol upgrade.
In some cases, it is useful for testing purposes to temporarily disable certain resources without having to delete
them from the configuration. This allows you to easily reenable access to these resources at a later point in
time without having to type in everything once again.
● To activate the resource again, select it and choose the Activate button.
● By choosing Allow WebSocket upgrade/Disallow WebSocket upgrade this is possible for the protocol
upgrade setting as well.
● It is also possible to mark multiple lines and then suspend or activate all of them in one go by clicking the
Activate/Suspend icons in the top row. The same is true for the corresponding Allow WebSocket upgrade/
Disallow WebSocket icons.
● /production/accounting and Path only (sub-paths are excluded) are selected. Only requests of the form
GET /production/accounting or GET /production/accounting?
name1=value1&name2=value2... are allowed. (GET can also be replaced by POST, PUT, DELETE, and so
on.)
● /production/accounting and Path and all sub-paths are selected. All requests of the form GET /
production/accounting-plus-some-more-stuff-here?name1=value1... are allowed.
● / and Path and all sub-paths are selected. All requests to this server are allowed.
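The path matching rules listed above can be sketched as follows (a simplified model, not the actual Cloud Connector implementation; query strings are stripped before comparison):

```python
def is_allowed(request_path, resource_path, with_subpaths=False):
    """Check a request path against a granted resource. 'Path only' demands
    an exact match of the path (ignoring the query string); 'Path and all
    sub-paths' also accepts any longer path that begins with the granted
    one, such as /production/accounting-plus-some-more-stuff-here."""
    path = request_path.split("?", 1)[0]
    if path == resource_path:
        return True
    return with_subpaths and path.startswith(resource_path)
```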
Related Information
Specify the backend systems that can be accessed by your cloud applications using RFC.
Tasks
To allow your cloud applications to access a certain backend system on the intranet, insert a new entry in the
Cloud Connector Access Control management.
1. Choose Cloud To On-Premise from your Subaccount menu and go to tab Access Control.
2. Choose Add.
3. Backend Type: Select the backend system type (ABAP System or SAP Gateway for RFC).
Note
The value RFC SNC is independent from your settings on the cloud side, since it only specifies the
communication between the Cloud Connector and the backend system. Using RFC SNC, you can ensure that
the entire connection from the cloud application to the actual backend system (provided by the SSL
tunnel) is secured, partly with SSL and partly with SNC. For more information, see Initial Configuration
(RFC) [page 390].
Note
6. Choose Next.
7. Choose whether you want to configure a load balancing logon or connect to a specific application server.
○ When using direct logon, the Application Server specifies one application server of the ABAP system.
The instance number is a two-digit number that is also found in the SAP Logon configuration.
Alternatively, it's possible to directly specify the gateway port in the Instance Number field.
9. Optional: You can virtualize the system information if you want to hide your internal host names from
the cloud. The virtual information can be a fake name which does not need to exist. The fields will be
prepopulated with the internal values.
○ Virtual Message Server - specifies the host name exactly as specified as the jco.client.mshost
property in the RFC destination configuration in the cloud. The Virtual System ID allows you to
distinguish between different entry points of your backend system that have different sets of access
control settings. The value needs to be the same as for the jco.client.r3name property in the RFC
destination configuration in the cloud.
○ Virtual Application Server - specifies the host name exactly as specified as the jco.client.ashost
property in the RFC destination configuration in the cloud. The Virtual Instance Number allows you to
distinguish between different entry points of your backend system that have different sets of access
control settings. The value needs to be the same as for the jco.client.sysnr property in the RFC
destination configuration in the cloud.
10. This step is only relevant if you have chosen RFC SNC. The <Principal Type> field defines what kind of
principal is used when configuring a destination on the cloud side using this system mapping with
authentication type Principal Propagation. No matter what you choose, make sure that the general
configuration for the <Principal Type> has been done to make it work correctly. For destinations using
different authentication types, this setting is ignored. In case you choose None as <Principal Type>, it
is not possible to apply principal propagation to this system.
Note
If you use an RFC connection, you cannot choose between different principal types. Only the X.509
certificate is supported. You need an SNC-enabled backend connection to use it. For RFC, the two X.509
certificate variants X.509 certificate (general usage) and X.509 certificate (strict usage) do not
differ in behavior.
For more information on principal propagation, see Configuring Principal Propagation [page 401].
12. Mapping Virtual to Internal System: You can enter an optional description at this stage. The respective
description will be shown as a rich tooltip when the mouse hovers over the entries of the virtual host
column.
13. The summary shows information about the system to be stored. When saving the system mapping, you
can trigger a ping from the Cloud Connector to the internal host, using the Check availability of internal
host checkbox. This allows you to make sure the Cloud Connector can indeed access the internal system,
and allows you to catch basic things, such as spelling mistakes or firewall problems between the Cloud
Connector and the internal host. If the ping to the internal host is successful, the Cloud Connector saves
the mapping without any remark. If it fails, a warning pops up, indicating that the host is not reachable. Details
for the reason are available in the log files.
14. Optional: You can later edit a system mapping (choose Edit) to make the Cloud Connector route the
requests for sales-system.cloud:sapgw42 to a different backend system. This can be useful if the
system is currently down and there is a back-up system that can serve these requests in the meantime.
However, you cannot edit the virtual name of this system mapping. If you want to use a different fictional
host name in your cloud application, you must delete the mapping and create a new one. Here, you can
also change the Principal Type to None in case you don't want to allow principal propagation to a certain
system.
15. Optional. You can later edit a system mapping to add more protection to your system when using RFC via
the Cloud Connector, by restricting the mapping to specified clients and users: in column Actions, choose
the button Maintain Authority Lists (only RFC) to open a whitelist/blacklist dialog. In section Authority
Client Whitelist, enter all clients of the corresponding system in the field <Client ID> that you want to
allow to use the Cloud Connector connection. In section Authority User Blacklist, press the button Add a
user authority (+) to enter all users you want to exclude from this connection. Each user must be assigned
to a specified client. When you are done, press Save.
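The effect of these authority lists can be sketched as follows (a simplified model; the client IDs and user names are invented). A call is permitted only if the client is whitelisted and the client/user pair is not blacklisted:

```python
def rfc_call_permitted(client, user, client_whitelist, user_blacklist):
    """Authority lists for RFC: the client must appear on the client
    whitelist, and the (client, user) pair must not appear on the user
    blacklist. Each blacklisted user is assigned to a specific client."""
    if client not in client_whitelist:
        return False
    return (client, user) not in user_blacklist

whitelist = {"100", "200"}               # clients allowed to use the connection
blacklist = {("100", "EVILUSER")}        # users excluded per client
```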
Note
In addition to allowing access to a particular host and port, you also must specify which function modules
(Resources) are allowed to be invoked on that host. The Cloud Connector uses very strict white lists for its
access control, so only those function modules for which you explicitly granted access are allowed. All other
RFC requests are denied by the Cloud Connector.
1. To define the permitted function modules (Resources) for a particular backend system, choose the row
corresponding to that backend system and press Add in section Resources Accessible On... below. A dialog
appears, prompting you to enter the name of the specific function module that you want to allow to be invoked.
2. The Cloud Connector checks that the function module name of an incoming request is exactly as specified
in the configuration. If it is not, the request is denied.
3. If you select the Prefix option, the Cloud Connector allows all incoming requests, for which the function
module name begins with the specified string.
4. The Enabled checkbox allows you to specify whether that resource should be initially enabled or disabled.
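The exact and prefix checks described in steps 2 and 3 can be sketched as follows (a simplified model; the function module names are examples):

```python
def function_module_allowed(name, resources):
    """resources: list of (granted_name, is_prefix) pairs. An exact entry
    must match the incoming function module name exactly; a prefix entry
    allows any name beginning with the granted string. Everything else
    is denied."""
    for granted, is_prefix in resources:
        if (is_prefix and name.startswith(granted)) or name == granted:
            return True
    return False

resources = [("BAPI_COMPANYCODE_GETLIST", False), ("RFC_READ_", True)]
```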
Add a specified system mapping to the Cloud Connector if you want to use an on-premise LDAP server for user
authentication in your cloud application.
To allow your cloud applications to access an on-premise LDAP server, insert a new entry in the Cloud
Connector access control management.
4. Protocol: Select LDAP or LDAPS for the connection to the backend system. When you are done, choose
Next.
5. Internal Host and Internal Port: specify the host and port under which the target system can be reached
within the intranet. It needs to be an existing network address that can be resolved on the intranet and has
network visibility for the Cloud Connector. The Cloud Connector will try to forward the request to the
network address specified by the internal host and port, so this address needs to be real.
6. Enter a Virtual Host and Virtual Port. The virtual host can be a fake name and does not need to exist. The
fields are pre-populated with the values of the Internal Host and Internal Port.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host check box. This
allows you to make sure the Cloud Connector can indeed access the internal system. Also, you can catch
basic things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal
host.
If the ping to the internal host is successful, the Cloud Connector saves the mapping without any remark. If
it fails, a warning is displayed in column Check Result, indicating that the host is not reachable. Details for the reason
are available in the log files. You can execute such a check at any time later for all selected systems in the
Mapping Virtual To Internal System overview by pressing Check Availability of Internal Host in column
Actions.
9. Optional: You can later edit the system mapping (by choosing Edit) to make the Cloud Connector route the
requests to a different LDAP server. This can be useful if the system is currently down and there is a back-
up LDAP server that can serve these requests in the meantime. However, you cannot edit the virtual name
of this system mapping. If you want to use a different fictional host name in your cloud application, you
have to delete the mapping and create a new one.
Add a specified system mapping to the Cloud Connector if you want to use the TCP protocol for
communication with a backend system.
To allow your cloud applications to access a certain backend system on the intranet via TCP, insert a new entry
in the Cloud Connector access control management.
4. Protocol: Select TCP or TCP SSL for the connection to the backend system. When you are done, choose
Next.
Note
When selecting TCP as protocol, the following warning message is displayed: TCP connections can
pose a security risk by permitting unmonitored traffic. Ensure only
trustworthy applications have access. The reason is that using plain TCP, the Cloud
Connector cannot see or log any detail information about the calls. Therefore, in contrast to HTTP or
RFC (both running on top of TCP), the Cloud Connector cannot check the validity of a request. To
minimize this risk, make sure you
5. Internal Host and Internal Port: specify the host and port under which the target system can be reached
within the intranet. It needs to be an existing network address that can be resolved on the intranet and has
network visibility for the Cloud Connector. The Cloud Connector will try to forward the request to the
network address specified by the internal host and port. That is why this address needs to be real.
6. Enter a Virtual Host and Virtual Port. The virtual host can be a fake name and does not need to exist. The
fields are prepopulated with the values of the Internal Host and Internal Port.
7. You can enter an optional description at this stage. The respective description will be shown as a tooltip
when you press the button Show Details in column Actions of the Mapping Virtual To Internal System
overview.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host checkbox. This
allows you to make sure the Cloud Connector can indeed access the internal system. Also, you can catch
basic things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal
host.
If the ping to the internal host is successful, the Cloud Connector saves the mapping without any remark. If
it fails, a warning is displayed in column Check Result, indicating that the host is not reachable. Details for the
reason are available in the log files.
9. Optional: You can later edit the system mapping (by choosing Edit) to make the Cloud Connector route the
requests to a different backend system. This can be useful if the system is currently down and there is a
back-up system that can serve these requests in the meantime. However, you cannot edit the virtual name
of this system mapping. If you want to use a different fictional host name in your cloud application, you
have to delete the mapping and create a new one.
Configure backend systems and resources in the Cloud Connector, to make them available for a cloud
application.
Tasks
Initially, after installing a new Cloud Connector, no network systems or resources are exposed to the cloud. You
must configure each system and resource used by applications of the connected cloud subaccount. To do this,
choose Cloud To On Premise from your subaccount menu and go to tab Access Control:
The Cloud Connector supports any type of system (SAP or non-SAP system) that can be called via one of the
supported protocols. For example, a convenient way to access an ABAP system from a cloud application is via
SAP NetWeaver Gateway, as it allows the consumption of ABAP content via HTTP and open standards.
● For systems using HTTP communication, see: Configure Access Control (HTTP) [page 425].
● For information on configuring RFC resources, see: Configure Access Control (RFC) [page 432].
We recommend that you limit the access to backend services and resources. Instead of configuring a system
and granting access to all its resources, grant access only to the resources needed by the cloud application. For
example, define access to an HTTP service by specifying the service URL root path and allowing access to all its
subpaths.
When configuring an on-premise system, you can define a virtual host and port for the specified system. The
virtual host name and port represent the fully qualified domain name of the related system in the cloud. We
recommend that you use the virtual host name/port mapping to prevent leaking information about a system's
physical machine name and port to the cloud.
As of version 2.12, the Cloud Connector lets you define a set of resources as a scenario that you can export,
and import into another Cloud Connector.
Scenarios are useful if you provide an application that invokes a large number of resources in an on-premise
system to many consumers. In this case, you must expose a system on your Cloud Connector that covers all
required resources, which increases the risk of incorrect configuration.
If you, as application owner, have implemented and tested a scenario, and configured a Cloud Connector
accordingly, you can define the scenario as follows:
Note
For applications provided by SAP, default scenario definitions may be available. To verify this, check the
corresponding application documentation.
Import a Scenario
As an administrator taking care of a scenario configuration in some other Cloud Connector, perform the
following steps:
1. Choose the Import Scenario button to add all required resources to the desired access control entry.
2. In the dialog, navigate to the folder of the archive that contains the scenario definition.
3. Choose Import. The resources of the scenario are merged with the existing set of resources which are
already available in the access control entry.
Remove a Scenario
To remove a scenario:
You can use a set of APIs to perform the basic setup of the Cloud Connector.
Context
As of version 2.11, the Cloud Connector provides several REST APIs that let you configure a newly installed
Cloud Connector. The configuration options correspond to the following steps:
Note
Prerequisites
● After installing the Cloud Connector, you have changed the initial password.
● You have specified the high availability role of the Cloud Connector (master or shadow).
● You have configured the proxy on the master instance if required for your network.
Requests and responses are coded in JSON format. In case of errors with return code 400, the response
contains the error structure in JSON format:
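The concrete field names of the error structure are not reproduced here. As a purely hypothetical illustration (the field names and values below are invented, not the documented format), such an error response could be parsed like this:

```python
import json

# Hypothetical error payload; the actual field names may differ.
error_response = '{"type": "INVALID_REQUEST", "message": "parameter haRole is missing"}'

error = json.loads(error_response)
summary = f"{error['type']}: {error['message']}"
```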
Security
The Cloud Connector supports basic authentication and form-based authentication. Upon first request
under /api/v1, a CSRF token is generated and sent back in the response header. The client application must
keep this token and send it in all subsequent requests as header X-CSRF-Token.
By default, REST APIs work stateless. In this case, the client must always send the Authorization header. A
CSRF token is not required, and you must not send a JSESSION header/cookie.
However, the Cloud Connector also supports stateful requests (sessions) that can help you to avoid
possible performance issues due to logon overhead. If the client runs in a single session (technically, it
sends a JSESSION header/cookie), a CSRF token is required. The CSRF token is always generated on the
first call and returned as X-CSRF-Token header. This means that in the stateful case the client must send
the X-CSRF-Token header, but does not need an Authorization header.
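The header rules for stateless versus stateful calls can be sketched as follows (a simplified model; the user, password, and cookie values are placeholders, and obtaining the CSRF token from the first response is not shown):

```python
import base64

def build_headers(user, password, stateful=False, session_cookie=None, csrf_token=None):
    """Stateless calls always send the Authorization header and neither a
    session cookie nor a CSRF token. Stateful calls send the session
    cookie and, once obtained from the first response, the X-CSRF-Token
    header instead of the Authorization header."""
    if not stateful:
        cred = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {cred}"}
    headers = {"Cookie": session_cookie or ""}
    if csrf_token:  # returned as X-CSRF-Token header on the first call
        headers["X-CSRF-Token"] = csrf_token
    return headers
```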
User Roles
As of Cloud Connector 2.12, the REST API supports different user roles. Depending on the role, an API grants or
denies access. In the default configuration, the Cloud Connector uses local user storage and supports the single
user Administrator (administrator role). Using LDAP user storage, you can use various users (see also
Configure Named Cloud Connector Users [page 497]):
Return Codes
Successful requests return the code 200, or, if there is no content, 204. POST actions that create new entities
return 201, with the location link in the header.
400 – invalid request. Returned, for example, if parameters are invalid, the API is no longer supported, an
unexpected state occurs, or in case of other non-critical errors.
403 – the current Cloud Connector instance does not allow changes. For example, the instance has been
assigned to the shadow role and therefore does not allow configuration changes, or the user role does not have
the required permission.
409 – current state of the Cloud Connector does not allow changes. For example, the password has to be
changed first.
Note
Entities returned by the APIs contain links as suggested by the current draft JSON Hypertext Application
Language (see https://tools.ietf.org/html/draft-kelly-json-hal-08 ).
Available APIs
System Mapping Resources [page 470] ● Get list of system mapping resources
● Create system mapping resources
● Delete system mapping resources
● Edit system mapping
● Read system mapping
Read and edit the Cloud Connector's common description via API.
Method GET
Request
Errors
URI /api/v1/configuration/connector
Method PUT
Request {description=<value>}
Response
Errors
Roles Administrator
Read and edit the high availability settings of a Cloud Connector instance via API.
When installing a Cloud Connector instance, you usually define its high availability role (master or shadow
instance) during initial configuration, see Change your Password and Choose Installation Type [page 383].
If the high availability role was not defined before, you can set the master or shadow role via this API.
If a shadow instance is connected to the master, this API also lets you switch the roles: the master instance
requests the shadow instance to take over the master role, and then takes the shadow role itself.
Note
Editing the high availability settings is only allowed on the master instance, and supports only shadow as
input.
URI /api/v1/configuration/connector/haRole
Method GET
Request
Errors
As of version 2.12: In a high availability setup, this API also allows you to switch the roles if a shadow instance is
connected to the master. In this case, the API is only allowed on the master instance and supports only the
value shadow as input. The master instance requests the shadow instance to take over the master role and
then assumes the shadow role itself.
URI /api/v1/configuration/connector/haRole
Method POST
Request {haRole=[master|shadow]}
Response
Errors 409 - if the role is already defined or, on the master, if there
is no connected shadow instance.
Roles Administrator
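Assuming an authenticated administrator session against the Cloud Connector's REST endpoint, a role change could be prepared as in the sketch below. The base URL and the exact payload encoding are assumptions; only the URI and the {haRole=[master|shadow]} body come from the table above.

```python
# Sketch: build the request that sets (or switches) the high availability role.
# Base URL and JSON encoding of the body are assumptions, not documented values.

VALID_HA_ROLES = {"master", "shadow"}

def build_ha_role_request(base_url: str, role: str):
    """Return (method, url, body) for the haRole POST described above."""
    if role not in VALID_HA_ROLES:
        raise ValueError(f"haRole must be one of {sorted(VALID_HA_ROLES)}")
    return ("POST",
            f"{base_url}/api/v1/configuration/connector/haRole",
            {"haRole": role})

# Example: ask a master with a connected shadow to hand over the master role.
method, url, body = build_ha_role_request("https://scc.example.corp:8443", "shadow")
```

Sending any other value than master or shadow is rejected client-side here, mirroring the 409/400 behavior the server would otherwise report.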
Read and edit the configuration settings for a Cloud Connector shadow instance via API (available as of Cloud
Connector version 2.12.0).
Note
The APIs below are only permitted on a Cloud Connector shadow instance. The master instance will reject
the requests with error code 400 – Invalid Request.
URI /api/v1/configuration/connector/ha/
shadow/config
Method GET
Request
Errors
Response parameters:
● state: string describing the current high availability link state between master and shadow.
● masterHost and masterPort: currently configured host and port of the master instance.
● ownHost: the currently configured host name.
● ownHosts: the list of possible host names queried from the local network interfaces.
● checkIntervalInSeconds: how often the health check against the master instance is executed.
● takeoverDelayInSeconds: how long the master instance may stay unavailable before the shadow instance
takes over.
Set Configuration
URI /api/v1/configuration/connector/ha/
shadow/config
Method PUT
Response
Errors 400:
Roles Administrator
Request parameters:
Change State
URI /api/v1/configuration/connector/ha/
shadow/state
Method POST
Response
Roles Administrator
Request parameters:
Read and edit the Cloud Connector's proxy settings via API.
URI /api/v1/configuration/connector/proxy
Method GET
Request
Errors
URI /api/v1/configuration/connector/proxy
Method PUT
Roles Administrator
Read and edit the Cloud Connector's authentication settings via API.
URI /api/v1/configuration/connector/
authentication
Method GET
Request
Errors
URI /api/v1/configuration/connector/
authentication/basic
Method PUT
Request 1. {oldPassword, newPassword}
2. {password, user}
Response {}
Roles Administrator
URI /api/v1/configuration/connector/
authentication/ldap
Method PUT
where configuration is
host is
Response
Read and edit the Cloud Connector's solution management configuration via API.
URI /api/v1/configuration/connector/
solutionManagement
Method GET
Request
Errors
This API turns on the integration with the Solution Manager. The prerequisite is an available Host Agent. You
can specify a path to the Host Agent executable, if you don't use the default path.
URI /api/v1/configuration/connector/
solutionManagement
Method POST
Request {hostAgentPath}
Response
Errors
URI /api/v1/configuration/connector/
solutionManagement
Method DELETE
Request
Response
Errors
Generates a zip file containing the registration file for the solution management LMDB (Landscape
Management Database).
URI /api/v1/configuration/connector/
solutionManagement/registrationFile
Method GET
Request
Response
Errors
1.4.3.2.5.6 Backup
Read and edit the Cloud Connector's configuration backup settings via API.
URI /api/v1/configuration/backup
Method POST
Request {password}
Errors
Roles Administrator
URI /api/v1/configuration/backup
Method PUT
Errors 400
Roles Administrator
Note
Since this API expects a multipart request, the request must include a multipart content-type header.
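Because the restore call is a multipart request, a client has to send the backup archive as a form part. The sketch below assembles a minimal multipart/form-data body by hand; the part names (password, backup) and the boundary string are illustrative assumptions, only the multipart requirement itself comes from the note above.

```python
# Sketch: assemble a multipart/form-data body for the restore (PUT) call.
# Part names and boundary are assumptions; only the multipart requirement
# is taken from the documentation.

def build_multipart(fields: dict, boundary: str = "sccBoundary") -> bytes:
    """Join text form parts into a single multipart/form-data body."""
    lines = []
    for name, value in fields.items():
        lines.append(f"--{boundary}")
        lines.append(f'Content-Disposition: form-data; name="{name}"')
        lines.append("")  # blank line separates part headers from the value
        lines.append(value)
    lines.append(f"--{boundary}--")
    lines.append("")
    return "\r\n".join(lines).encode("utf-8")

body = build_multipart({"password": "secret", "backup": "<zip bytes>"})
content_type = "multipart/form-data; boundary=sccBoundary"
```

In practice the archive part would carry the binary zip created by the backup call; a real client would also set the Content-Type header shown in content_type.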
1.4.3.2.5.7 Subaccount
Read and edit the Cloud Connector's subaccount settings via API.
URI /api/v1/configuration/subaccounts
Method GET
Request
Errors
Create Subaccount
URI /api/v1/configuration/subaccounts
Method POST
Request {cloudUser, cloudPassword, displayName, description, regionHost, subaccount, locationID}
Response {description, displayName, regionHost, subaccount, locationID, tunnel:{state, connections, applicationConnections:[], serviceChannels:[]}, _links}
Roles Administrator
Parameters:
Note
locationId must be set in the destination configuration on SAP Cloud Platform accordingly.
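A create-subaccount request could be prepared as sketched below. The key names mirror the request body above; which of them are mandatory is not spelled out here, so the required set, the host names, and the credentials are assumptions.

```python
# Sketch: build the JSON payload for POST /api/v1/configuration/subaccounts.
# The REQUIRED set is an assumption; all sample values are placeholders.

REQUIRED = ("cloudUser", "cloudPassword", "regionHost", "subaccount")

def build_subaccount_payload(**fields):
    missing = [k for k in REQUIRED if k not in fields]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return fields

payload = build_subaccount_payload(
    cloudUser="[email protected]",
    cloudPassword="***",
    regionHost="hana.ondemand.com",
    subaccount="mysubaccount",
    displayName="My Subaccount",    # optional
    description="Demo subaccount",  # optional
    locationID="LOC-A",             # must match the destination configuration
)
```

Remember that a non-empty locationID only works if the destination configuration on SAP Cloud Platform carries the same value.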
Delete Subaccount
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method DELETE
Request
Response 204
Roles Administrator
Edit Subaccount
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method PUT
Response
Roles Administrator
Connect/Disconnect Subaccount
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/state
Method PUT
Response
Errors 400
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/validity
Method POST
Response
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery
Method PUT
Response
Errors 400
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery
Method DELETE
Request
Response
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery/
validity
Method POST
Response
Errors 400
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method GET
Request
Response {"tunnel":{"state","connections":int,"applicationConnections":[],"serviceChannels":[]},"_links","regionHost","subaccount","locationID"}
Read and edit the Cloud Connector's access control settings via API.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method POST
Roles Administrator
Parameters:
● virtualHost and virtualPort: virtual host and port as defined by the cloud side.
● localHost and localPort: backend host and port.
● protocol: communication protocol. Valid values are {HTTP, HTTPS, RFC, RFCS, LDAP, LDAPS,
TCP, TCPS}.
● backendType: type of backend system. Valid values are {abapSys, netweaverCE,
applServerJava, BC, PI, hana, netweaverGW, otherSAPsys, otherSAPmid, nonSAPsys,
nonSAPmid}.
● sncPartnerName: SNC name of an ABAP Server, only set for RFCS communication.
● sapRouter: SAP router route, only set if an SAP router is used.
● authenticationMode: valid values are {NONE, X509_CERTIFICATE, X509_CERTIFICATE_LOCAL,
KERBEROS}.
● description: free text, describing the Cloud Connector for cloud monitoring tools.
● allowedClients: array of strings, listing the SAP clients that are allowed to execute calls in this system.
Valid clients are 3 characters long. If no clients are defined here, there is no restriction – every client is allowed.
Only applicable for RFC-based communication.
● blacklistedClientUsers: array of {client, user}, describing users that are not allowed to execute
the call, even if the client is listed under allowed clients. Only applicable for RFC-based communication.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort
Method DELETE
Response {}
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method DELETE
Request
Response {}
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort
Method PUT
Response
Errors 400
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort
Method GET
Request
Errors
Read and edit the Cloud Connector's system mapping resources via API.
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources
Method POST
Roles Administrator
Method DELETE
Request
Response {}
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources/<encodedResourceId>
Method PUT
Response
Errors 400
Roles Administrator
Method GET
Request
Errors
Read and edit the Cloud Connector's configuration for domain mappings via API.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method GET
Request
Errors
Method POST
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
domainMappings/<internalDomain>
Method DELETE
Request
Response {}
Roles Administrator
Method PUT
Response
Errors 400
Roles Administrator
Read and edit the Cloud Connector service channel settings via API.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels
Method GET
Request
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels
Method POST
Request {typeKey, details, serviceNumber, connectionCount}
Roles Administrator
Parameters:
● typeKey: type of service channel. Valid values are HANA_DB, HCPVM, RFC.
● details:
○ HANA instance name for HANA_DB
○ VM name for HCPVM
○ S/4HANA Cloud tenant host for RFC
● serviceNumber: service number, which is mapped to a port according to the type of service channel.
● connectionCount: number of connections for the channel.
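A create-channel request body could be assembled as in the following sketch. The typeKey values come from the parameter list above; the sample details value and the default connection count are placeholders.

```python
# Sketch: build the request body for POST .../channels.
# CHANNEL_TYPES comes from the parameter list above; sample values are
# placeholders, and defaulting connectionCount to 1 is an assumption.

CHANNEL_TYPES = {"HANA_DB", "HCPVM", "RFC"}

def build_channel_payload(type_key: str, details: str,
                          service_number: int, connection_count: int = 1) -> dict:
    if type_key not in CHANNEL_TYPES:
        raise ValueError(f"typeKey must be one of {sorted(CHANNEL_TYPES)}")
    return {"typeKey": type_key, "details": details,
            "serviceNumber": service_number, "connectionCount": connection_count}

# details carries the HANA instance name for HANA_DB channels.
payload = build_channel_payload("HANA_DB", "testInstance", 1)
```

For HCPVM channels, details would carry the VM name instead; for RFC channels, the S/4HANA Cloud tenant host.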
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method DELETE
Request
Response {}
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method PUT
Request {typeKey, details, serviceNumber, connectionCount}
Response 204
Roles Administrator
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>/
state
Method PUT
Request {enabled:boolean}
Response
500
{type:"RUNTIME_FAILURE","message":"Service channel could not be opened"}
Roles Administrator
Configure SAP Cloud Platform applications to use your corporate LDAP server as a user store.
Prerequisites
● You have configured your cloud application to use an on-premise user provider and to consume users
from LDAP via the Cloud Connector. To do this, execute the following command:
● You have created a connectivity destination to configure the on-premise user provider, using the following
parameters:
Name=onpremiseumconnector
Type=HTTP
URL= http://scc.scim:80/scim/v1
Authentication=NoAuthentication
CloudConnectorVersion=2
ProxyType=OnPremise
● You are using only one domain for user authentication. Authentication to multiple domains, including
subdomains, is not supported.
Context
If you configure your SAP Cloud Platform applications to use the corporate LDAP server as a user store, the
platform doesn't need to keep the entire user database but requests the necessary information from the LDAP
server. Java applications running on SAP Cloud Platform can use the on-premise system to check credentials,
search for users, and retrieve details. In addition to the user information, the cloud application may request
information about the groups a user belongs to.
Note
The configuration steps below are applicable only for Microsoft Active Directory (AD).
Procedure
Note
Note
The user name must be fully qualified, including the AD domain suffix, for example,
[email protected].
6. In User Path, specify the LDAP subtree that contains the users.
7. In Group Path, specify the LDAP subtree that contains the groups.
8. Choose Save.
Configure Cloud Connector service channels to connect your on-premise network to specific services on SAP
Cloud Platform or to S/4HANA Cloud.
Context
Cloud Connector service channels allow secure and reliable access from an external network to certain
services on SAP Cloud Platform, or to S/4HANA Cloud. The called services are not exposed to direct access
from the Internet. The Cloud Connector ensures that the connection is always available and communication is
secured.
SAP HANA Database on SAP Cloud Platform: The service channel for the SAP HANA database lets you
access SAP HANA databases that run in the cloud from database clients (for example, clients using
ODBC/JDBC drivers). You can use the service channel to connect database, analytical, BI, or replication
tools to your SAP HANA database in your SAP Cloud Platform subaccount.
Virtual Machine on SAP Cloud Platform: You can use the virtual machine (VM) service channel to access
an SAP Cloud Platform VM using an SSH client, and adjust it to your needs.
RFC Connection to S/4HANA Cloud: The service channel for RFC supports calls from on-premise systems
to S/4HANA Cloud using RFC, establishing a connection to an S/4HANA Cloud tenant host.
Restrictions
In the Cloud Foundry environment, service channels are supported only for SAP HANA database.
Related Information
Using Cloud Connector service channels, you can establish a connection to an SAP HANA database in SAP
Cloud Platform that is not directly exposed to external access.
Context
The service channel for SAP HANA Database allows accessing SAP HANA databases running in the cloud via
ODBC/JDBC. You can use the service channel to connect database, analytical, BI, or replication tools to an SAP
HANA database in your SAP Cloud Platform subaccount.
Note
The following procedure requires a productive SAP HANA instance that is available in the same
subaccount. You cannot access an SAP HANA instance that is owned by a different subaccount within the
same or another global account (shared SAP HANA database).
Procedure
In the Cloud Foundry environment, the format of the SAP HANA instance name consists of the Cloud
Foundry space name, the database name and the database ID.
Example:
test:testInstance:3fcc976d-457a-474e-975b-e572600f474e:de19c262-a1fc-4096-bfce-1c41388e4b49
Where
8. Leave Enabled selected to establish the channel immediately after clicking Finish, or unselect it if you don't
want to establish the channel immediately.
9. Choose Finish.
Once you have established an SAP HANA Database service channel, you can connect on-premise database or
BI tools to the selected SAP HANA database in the cloud. This may be done by using
<cloud_connector_host>:<local_HANA_port> in the JDBC/ODBC connect strings.
See Connect DB Tools to SAP HANA via Service Channels [page 483].
Context
You can connect database, BI, or replication tools running in an on-premise network to an SAP HANA database on
SAP Cloud Platform using service channels of the Cloud Connector. You can also use the high availability
support of the Cloud Connector on a database connection. The picture below shows the landscape in such a
scenario.
Follow the steps below to set up failover support, configure a service channel, and connect on-premise DB tools
via JDBC or ODBC to the SAP HANA database.
● For the connection string via ODBC you need a corresponding database user and password (see step 4
below).
● Find detailed information on failover support in the SAP HANA Administration Guide: Configuring Clients
for Failover.
1. To establish a highly available connection to one or multiple SAP HANA instances in the cloud, we
recommend that you make use of the failover support of the Cloud Connector. Set up a master and a
shadow instance. See Install a Failover Instance for High Availability [page 501].
2. In the master instance, configure a service channel to the SAP HANA database of the SAP Cloud Platform
subaccount to which you want to connect. If, for example, the chosen HANA instance is 01, the port of the
service channel is 30115. See also Configure a Service Channel for an SAP HANA Database [page 481].
3. Connect on-premise DB tools via JDBC to the SAP HANA database by using the following connection
string:
Example:
jdbc:sap://<cloud-connector-master-host>:30115;<cloud-connector-shadow-host>:30115[/?<options>]
The SAP HANA JDBC driver supports failover out of the box. All you need is to configure the shadow
instance of the Cloud Connector as a failover server in the JDBC connection string. The different options
supported in the JDBC connection string are described in: Connect to SAP HANA via JDBC
4. You can also connect on-premise DB tools via ODBC to the SAP HANA database. Use the following
connection string:
"DRIVER=HDBODBC32;UID=<user>;PWD=<password>;SERVERNODE=<cloud-connector-master-host>:30115;<cloud-connector-shadow-host>:30115;"
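The port used in both connection strings follows the 3<instance>15 pattern from step 2 (instance 01 maps to 30115). Under that assumption, and with placeholder host names, the failover-aware JDBC URL can be sketched as:

```python
# Sketch: derive the service channel port from a HANA instance number and
# build a JDBC URL with the shadow instance as failover server.
# The 3<nn>15 pattern follows the example above; hosts are placeholders.

def hana_channel_port(instance: str) -> int:
    """Instance '01' -> 30115, '02' -> 30215, and so on."""
    return int(f"3{int(instance):02d}15")

def jdbc_url(master_host: str, shadow_host: str, instance: str = "01") -> str:
    port = hana_channel_port(instance)
    return f"jdbc:sap://{master_host}:{port};{shadow_host}:{port}"

url = jdbc_url("scc-master.corp", "scc-shadow.corp", "01")
```

Listing the shadow host after the master host is exactly what makes the SAP HANA JDBC driver fail over without extra client logic.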
Related Information
Context
You can establish a connection to a virtual machine (VM) in the SAP Cloud Platform that is not directly exposed
to external access. Use On-Premise to Cloud Service Channels in the Cloud Connector. The service
channel for Virtual Machine lets you access a VM running in the cloud via SSH. You can use
the service channel to manage the VM and adjust it to your needs.
Note
The following procedure requires that you have created a VM in your subaccount.
3. In the Add Service Channel dialog, select Virtual Machine from the list of supported channel types.
4. Choose Next. The Virtual Machine dialog opens.
5. Choose the Virtual Machine <Name> from the list of available Virtual Machines. It matches the
corresponding name shown under Virtual Machines in the cockpit.
Note
6. Choose the <Local Port>. You can use any port that is not used yet.
7. Leave <Enabled> selected to establish the channel immediately after clicking Save. Unselect it if you don't
want to establish the channel immediately.
8. Choose Finish.
Next Steps
Once you have established a service channel for the Virtual Machine, you can connect it with your SSH
client. This may be done by accessing <Cloud_connector_host>:<local_VM_port> and the key file
that was generated when creating the virtual machine.
Virtual Machines
For scenarios that need to call from on-premise systems to S/4HANA Cloud using RFC, you can establish a
connection to an S/4HANA Cloud tenant host. To do this, select On-Premise to Cloud Service Channels
in the Cloud Connector.
Prerequisites
You have set up the S/4HANA Cloud environment for communication with the Cloud Connector.
In particular, you must create a communication arrangement for the scenario SAP_COM_0200 (SAP Cloud
Connector Integration). See Integrating On-Premise Systems (SAP S/4HANA Cloud documentation).
Procedure
3. In the Add Service Channel dialog, select S/4HANA Cloud from the drop-down list of supported channel
types.
4. Choose Next. The S/4HANA Cloud dialog opens.
5. Enter the <S/4HANA Cloud Tenant> host name that you want to connect to.
Note
The S/4HANA Cloud tenant host name is case-sensitive. Also make sure that you specify the API
address of your tenant host. For example, if the tenant host of your instance is
my1234567.s4hana.ondemand.com, the API tenant host to be specified is
my1234567-api.s4hana.ondemand.com.
8. Choose Finish.
Note
When addressing an S/4HANA Cloud system in a destination configuration, you must enter the Cloud
Connector host as application server host. As instance number, specify the <Local Instance Number>
that you configured for the service channel. As user, you must provide the business user name but not the
technical user name associated with the same.
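The API host naming rule from the note above (append -api to the first label of the tenant host) can be sketched as a small helper; treating the rule as applying to arbitrary tenant hosts is an assumption based on the single documented example.

```python
# Sketch: derive the API tenant host from an S/4HANA Cloud tenant host by
# appending "-api" to the first host name label, as in the note above.

def api_tenant_host(tenant_host: str) -> str:
    label, _, rest = tenant_host.partition(".")
    return f"{label}-api.{rest}"

host = api_tenant_host("my1234567.s4hana.ondemand.com")
```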
A service channel overview lets you see the details of all service channels that are used by a Cloud Connector
installation.
The service channel port overview lists all service channels that are configured in the Cloud Connector. It lets
you see at a glance, which server ports are used by a Cloud Connector installation.
In addition, you can find the following information about each service channel:
From the Actions column, you can switch directly to the On-Premise To Cloud section of the corresponding
subaccount and edit the selected service channel.
To find the overview list, choose Connector from the navigation menu and go to section Service Channel Port
Overview:
Set up a whitelist for cloud applications and a trust store for on-premise systems in the Cloud Connector.
Tasks
By default, all applications within a subaccount are allowed to use the Cloud Connector associated with the
subaccount they run in. However, this behavior might not be desired in every scenario. For example, access may be
acceptable for some applications because they must interact with on-premise resources, while other applications,
for which it is not transparent whether they try to access on-premise data, might turn out to be malicious. For
such cases, you can use an application whitelist.
1. From your subaccount menu, choose Cloud to On-Premise and go to the Applications tab.
2. To add an application, choose the Add icon in section Trusted Applications.
3. Enter the <Application Name> in the Add Tunnel Application dialog.
Note
To add all applications that are listed in section Tunnel Connection Limits on the same screen, you can
also use the Upload button next to the Add button. The list Tunnel Connection Limits shows all
applications for which a specific maximal number of tunnel connections was specified. See also:
Configure Tunnel Connections [page 493].
4. (Optional) Enter the maximal number of <Tunnel Connections> only if you want to override the default
value.
5. Choose Save.
Note
The application name is visible in the SAP Cloud Platform cockpit under Applications Java
Applications . To allow a subscribed application, you must add it to the whitelist in the format
<providerSubaccount>:<applicationName>. In particular, when using HTML5 applications, an
implicit subscription to services:dispatcher is required.
By default, the Cloud Connector trusts every on-premise system when connecting to it via HTTPS. As this may
be an undesirable behavior from a security perspective, you can configure a trust store that acts as a whitelist
of trusted on-premise systems, represented by their respective public keys. You can configure the trust store as
follows:
An empty trust store does not impose any restrictions on the trusted on-premise systems. It becomes a
whitelist as soon as you add the first public key.
Note
Context
Some HTTP servers return cookies that contain a domain attribute. For subsequent requests, HTTP clients
should send these cookies to machines that have host names in the specified domain.
However, in a Cloud Connector setup between a client and a Web server, this may lead to problems. For
example, assume that you have defined a virtual host sales-system.cloud and mapped it to the internal host
name ecc60.mycompany.corp. The client "thinks" it is sending an HTTP request to the host name sales-
system.cloud, while the Web server, unaware of the above host name mapping, sets a cookie for the domain
mycompany.corp. The client does not know this domain name and thus, for the next request to that Web
server, doesn't attach the cookie, which it should do. The procedure below prevents this problem.
Procedure
1. From your subaccount menu, choose Cloud To On-Premise, and go to the Cookie Domains tab.
2. Choose Add.
3. Enter cloud as the virtual domain, and your company name as the internal domain.
4. Choose Save.
The Cloud Connector checks the Web server's response for Set-Cookie headers. If it finds one with an
attribute domain=intranet.corp, it replaces it with domain=sales.cloud before returning the HTTP
response to the client. Then, the client recognizes the domain name, and for the next request against
www1.sales.cloud it attaches the cookie, which then successfully arrives at the server on
machine1.intranet.corp.
Note
Some Web servers use a syntax such as domain=.intranet.corp (RFC 2109), even though the
newer RFC 6265 recommends using the notation without a dot.
Note
The value of the domain attribute may be a simple host name, in which case no extra domain mapping
is necessary on the Cloud Connector. If the server sets a cookie with
domain=machine1.intranet.corp, the Cloud Connector automatically reverses the mapping
machine1.intranet.corp to www1.sales.cloud and replaces the cookie domain accordingly.
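The rewrite described above can be sketched as a simple header transformation. The mapping table stands in for the Cloud Connector's actual domain configuration, and the header values are the ones from the example.

```python
# Sketch: rewrite the domain attribute of a Set-Cookie header value according
# to an internal-to-virtual domain mapping, mirroring the example above.
# Leading dots (RFC 2109 style, domain=.intranet.corp) are tolerated.
import re

DOMAIN_MAP = {"intranet.corp": "sales.cloud"}  # internal -> virtual

def rewrite_set_cookie(header: str) -> str:
    def repl(match):
        internal = match.group(1).lstrip(".")  # drop RFC 2109 leading dot
        virtual = DOMAIN_MAP.get(internal, internal)
        return f"domain={virtual}"
    return re.sub(r"domain=([^;]+)", repl, header, flags=re.IGNORECASE)

rewritten = rewrite_set_cookie("JSESSIONID=abc; domain=intranet.corp; Path=/")
```

A full implementation would also have to reverse host-name mappings such as machine1.intranet.corp to www1.sales.cloud, as the second note describes.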
Related Information
If you want to monitor the Cloud Connector with the SAP Solution Manager, you can install a host agent on the
machine of the Cloud Connector and register the Cloud Connector on your system.
Prerequisites
● You have installed the SAP Diagnostics Agent and SAP Host Agent on the Cloud Connector host and
connected them to the SAP Solution Manager. As of Cloud Connector version 2.11.2, the RPM on Linux
ensures that the host agent configuration is adjusted and that user groups are set up correctly.
For more details about the host agent and diagnostics agent, see SAP Host Agent and the SCN Wiki SAP
Solution Manager Setup/Managed System Checklist .
See also SAP notes 2607632 (SAP Solution Manager 7.2 - Managed System Configuration for SAP Cloud
Connector) and 1018839 (Registering in the System Landscape Directory using sldreg). For consulting,
contact your local SAP partner.
Note
Linux OS: if you installed the host agent after installing the Cloud Connector, you can execute
enableSolMan.sh in the installation directory (available as of Cloud Connector version 2.11.2) to
adjust the host agent configuration and user group setup. This action requires root permission.
Procedure
1. From the Cloud Connector main menu, choose Configuration Reporting . In section Solution
Management of the Reporting tab, select Edit.
Note
To download the registration file lmdbConfig.xml, choose the icon Download registration file from the
Reporting tab.
Related Information
Adapt connectivity settings that control the throughput by choosing the appropriate limits (maximal values).
If required, you can adjust the following parameters for the communication tunnel by changing their default
values:
Note
This parameter specifies the default value for the maximal number of tunnel connections per
application. The value must be higher than 0.
For detailed information on connection configuration requirements, see Configuration Setup [page 361].
1. From the Cloud Connector main menu, choose Configuration Advanced . In section Connectivity,
select Edit.
2. In the Edit Connectivity Settings dialog, change the parameter values as required.
Additionally, you can specify the number of allowed tunnel connections for each application that you have
specified as a trusted application [page 404].
Note
If you don't change the value for a trusted application, it keeps the default setting specified above. If you
change the value, it may be higher or lower than the default and must be higher than 0.
1. From your subaccount menu, choose Cloud To On-Premise Applications . In section Tunnel
Connection Limits, choose Add.
2. In the Edit Tunnel Connections Limit dialog, enter the <Application Name> and change the number of
<Tunnel Connections> as required.
Note
The application name is visible in the SAP Cloud Platform cockpit under Applications Java
Applications . To allow a subscribed application, you must add it to the whitelist in the format
<providerSubaccount>:<applicationName>. In particular, when using HTML5 applications, an
implicit subscription to services:dispatcher is required.
3. Choose Save.
To edit this setting, select the application from the Limits list and choose Edit.
If required, you can adjust the following parameters for the Java VM by changing their default values:
Note
1. From the Cloud Connector main menu, choose Configuration Advanced . In section JVM, select Edit.
2. In the Edit JVM Settings dialog, change the parameter values as required.
3. Choose Save.
2. To backup or restore your configuration, choose the respective icon in the upper right corner of the screen.
1. To backup your configuration, enter and repeat a password in the Backup dialog and choose Backup.
Note
An archive containing a snapshot of the current Cloud Connector configuration is created and
downloaded by your browser. You can use the archive to restore the current state on this or any
other Cloud Connector installation. For security reasons, some files are protected by a password.
2. To restore your configuration, enter the required Archive Password and the Login Password of the
currently logged-in administrator user in the Restore from Archive dialog and choose Restore.
Note
The restore action overwrites the current configuration of the Cloud Connector. It will be
permanently lost unless you have created another backup before restoring. Upon successfully
restoring the configuration, the Cloud Connector restarts automatically. All sessions are then
terminated. The props.ini file however is treated in a special way. If the file in the backup differs
from the one that is used in the current installation, it will be placed next to the original one as
props.ini.restored. If you want to use the props.ini.restored file, replace the existing one
on OS level and restart the Cloud Connector.
Learn more about operating the Cloud Connector, using its administration tools and optimizing its functions.
Topic Description
Configure Named Cloud Connector Users [page 497]: If you operate an LDAP server in your system landscape, you
can configure the Cloud Connector to use the named users who are available on the LDAP server instead of the
default Cloud Connector users.
High Availability Setup [page 501]: The Cloud Connector lets you install a redundant (shadow) instance, which
monitors the main (master) instance.
Change the UI Port [page 507]: Use the changeport tool (Cloud Connector version 2.6.0+) to change the port for
the Cloud Connector administration UI.
Connect and Disconnect a Cloud Subaccount [page 507]: As a Cloud Connector administrator, you can connect
the Cloud Connector to (and disconnect it from) the configured cloud subaccount.
Secure the Activation of Traffic Traces [page 508]: Traces of network traffic may contain business-critical
information or security-sensitive data. You can implement a "four-eyes" (double check) principle to protect your
traces (Cloud Connector version 1.3.2+).
Monitoring [page 510]: Use various views to monitor the activities and state of the Cloud Connector.
Alerting [page 524]: Configure the Cloud Connector to send email alerts whenever critical situations occur that
may prevent it from operating.
Audit Logging [page 527]: Use the auditor tool to view and manage audit log information (Cloud Connector
version 2.2+).
Troubleshooting [page 530]: Information about monitoring the state of open tunnel connections in the Cloud
Connector. Display different types of logs and traces that can help you troubleshoot connection problems.
Process Guidelines for Hybrid Scenarios [page 534]: How to manage a hybrid scenario, in which applications
running on SAP Cloud Platform require access to on-premise systems using the Cloud Connector.
We recommend that you configure LDAP-based user management for the Cloud Connector to allow only
named administrator users to log on to the administration UI.
This guarantees traceability of the Cloud Connector configuration changes via the Cloud Connector audit log. If
you use the default and built-in Administrator user, you cannot identify the actual person or persons who
perform configuration changes. Also, you will not be able to use different types of user groups.
Configuration
If you have an LDAP server in your landscape, you can configure the Cloud Connector to authenticate Cloud
Connector users against the LDAP server.
Valid users or user groups must be assigned to one of the following roles:
Note
The role sccmonitoring provides access to the monitoring APIs and is used particularly by the SAP
Solution Manager infrastructure, see Monitoring APIs [page 514]. It cannot be used to access the
Cloud Connector administration UI.
Alternatively, you can define custom role names for each of these user groups, see: Use LDAP for
Authentication [page 498].
Once configured, the default Cloud Connector Administrator user becomes inactive and can no longer be
used to log on to the Cloud Connector.
You can use LDAP (Lightweight Directory Access Protocol) to configure Cloud Connector authentication.
After installation, the Cloud Connector uses file-based user management by default. Alternatively, the Cloud
Connector also supports LDAP-based user management. If you operate an LDAP server in your landscape, you
can configure the Cloud Connector to use the LDAP user base.
If LDAP authentication is active, you can assign users or user groups to the following default roles:
Note
This role cannot be used to access the Cloud Connector
administration UI.
1. From the main menu, choose Configuration and go to the User Interface tab.
2. From the Authentication section, choose Switch to LDAP.
roleBase="ou=groups,dc=scc"
roleName="cn"
roleSearch="(uniqueMember={0})"
userBase="ou=users,dc=scc"
userSearch="(uid={0})"
Change the <ou> and <dc> fields in userBase and roleBase, according to the configuration on your
LDAP server, or use some other LDAP query.
Note
The configuration depends on your specific LDAP server. For details, contact your LDAP administrator.
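For illustration, the following Python sketch shows how a JNDI-style realm substitutes the {0} placeholder in the userSearch and roleSearch patterns shown above. The login name and the resulting DN are hypothetical example values, not part of any real configuration:

```python
user_base = "ou=users,dc=scc"
user_search = "(uid={0})"            # {0} = login name entered in the UI
role_base = "ou=groups,dc=scc"
role_search = "(uniqueMember={0})"   # {0} = DN of the authenticated user

def ldap_filter(pattern: str, value: str) -> str:
    """Substitute the positional placeholder in an LDAP search pattern."""
    return pattern.replace("{0}", value)

login = "jdoe"                                   # hypothetical login name
user_filter = ldap_filter(user_search, login)
user_dn = f"uid={login},{user_base}"             # DN found below userBase
role_filter = ldap_filter(role_search, user_dn)
print(user_filter)   # (uid=jdoe)
print(role_filter)   # (uniqueMember=uid=jdoe,ou=users,dc=scc)
```

The user search runs below userBase to find the user's DN; the role search then runs below roleBase with that DN substituted, which is why uniqueMember filters take a full DN.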
5. Provide the LDAP server's host and port (port 389 is used by default) in the <Host> field. To use the secure
protocol variant LDAPS based on TLS, select Secure.
6. Provide a failover LDAP server's host and port (port 389 is used by default) in the <Alternate Host>
field. To use the secure protocol variant LDAPS based on TLS, select <Secure Alternate Host>.
7. (Optional) Depending on your LDAP server configuration, you may need to specify the <Connection User
Name> and <Connection Password>. LDAP servers that support anonymous binding ignore these
parameters.
8. (Optional) To use your own role names, you can customize the default role names in the Custom Roles
section. If no custom role is provided, the Cloud Connector checks permissions for the corresponding
default role name:
○ <Administrator Role> (default: sccadmin)
○ <Support Role> (default: sccsupport)
○ <Display Role> (default: sccdisplay)
Note
We strongly recommend that you perform an authentication test before activating the configuration: if
authentication fails, you can no longer log in. The test dialog also provides a test protocol, which can be
helpful for troubleshooting.
For more information about how to set up LDAP authentication, see tomcat.apache.org/tomcat-7.0-doc/realm-howto.html .
You can also configure LDAP authentication on the shadow instance in a high availability setup (master and
shadow). From the main menu of the shadow instance, select Shadow Configuration, go to the User
Interface tab, and check the Authentication section.
Note
If you are using LDAP together with a high availability setup, you cannot use the configuration option
userPattern. Instead, use a combination of userSearch, userSubtree and userBase.
10. After finishing the configuration, choose Activate. Immediately after activating the LDAP configuration, you
must restart the Cloud Connector server, which invalidates the current browser session. Refresh the
browser and log on to the Cloud Connector again, using the credentials configured on the LDAP server.
11. To switch back to file-based user management, choose the Switch icon again.
Note
If you have set up an LDAP configuration incorrectly, you may not be able to log on to the Cloud Connector
again. In this case, adjust the Cloud Connector configuration to use the file-based user store again, without
the administration UI. For more information, see the next section.
If your LDAP settings do not work as expected, you can use the useFileUserStore tool, provided with Cloud
Connector version 2.8.0 and higher, to revert to the file-based user store:
1. Change to the installation directory of the Cloud Connector and enter the following command:
○ Microsoft Windows: useFileUserStore
○ Linux, Mac OS: ./useFileUserStore.sh
2. Restart the Cloud Connector to activate the file-based user store.
For versions older than 2.8.0, you must manually edit the configuration files.
<Realm className="org.apache.catalina.realm.LockOutRealm">
  <Realm className="org.apache.catalina.realm.CombinedRealm">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
           digest="SHA-256" resourceName="UserDatabase"/>
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
           digest="SHA-1" resourceName="UserDatabase"/>
  </Realm>
</Realm>
You can operate the Cloud Connector in a high availability mode, in which a master and a shadow instance are
installed.
Task Description
Install a Failover Instance for High Availability [page 501] Install a redundant Cloud Connector instance (shadow) that
monitors the main instance (master).
Master and Shadow Administration [page 505] Learn how to operate master and shadow instances.
The Cloud Connector allows you to install a redundant instance, which monitors the main instance.
Context
In a failover setup, if the main instance goes down for some reason, a redundant one can take over its
role. The main instance of the Cloud Connector is called the master and the redundant instance is called the
shadow.
Note
For detailed information about sizing of the master and the shadow instance, see also Sizing
Recommendations [page 357].
Procedure
If this flag is not activated, no shadow instance can connect to this Cloud Connector. Additionally,
by providing a concrete Shadow Host, you can ensure that only a shadow instance from this host can
connect.
Install the shadow instance in the same network segment as the master instance. Communication between
master and shadow via proxy is not supported. The same distribution package is used for master and shadow
instance.
Note
If you plan to use LDAP for the user authentication on both master and shadow, make sure you configure it
before you establish the connection from shadow to master.
2. From the main menu, choose Shadow Connector and provide connection data for the master instance, that
is, the master host and port. As of version 2.8.1.1, you can choose from the list of known host names the
one under which the shadow host is visible to the master. You can specify a host name manually,
if the one you want is not on the list. For the first connection, you must log on to the master instance, using
the user name and password for the master instance. The master and shadow instances exchange X.509
certificates, which are used for mutual authentication.
If you want to attach the shadow instance to a different master, choose the Reset button.
Note
The Reset button sets all high availability settings to their initial state. High availability is disabled and
the shadow host is cleared. Resetting works only if no shadow is connected.
4. The UI on the master instance shows information about the connected shadow instance. From the main
menu, choose High Availability:
5. As of version 2.6.0, the High Availability view includes an Alert Messages panel. It displays alerts if
configuration changes have not been pushed successfully. This might happen, for example, if a temporary
network failure occurs at the same time a configuration change is made. This panel lets an administrator
know if there is an inconsistency in the configuration data between master and shadow that could cause
trouble if the shadow needs to take over. Typically, the master recognizes this situation and tries to push
the configuration change at a later time automatically. If this is successful, all failure alerts are removed
and replaced by a warning alert showing that there had been trouble before. As of version 2.8.0.1, these
alerts have been integrated in the general Alerting section; there is no longer a separate Alert Messages
panel.
If the master doesn't recover automatically, disconnect, then reconnect the shadow, which triggers a
complete configuration transfer.
There are several administration activities you can perform on the shadow instance. All configuration of tunnel
connections, host mappings, access rules, and so on, must be maintained on the master instance; however,
you can replicate them to the shadow instance for display purposes. You may want to modify the check
interval (time between checks of whether the master is still alive) and the takeover delay (time the shadow
waits to see whether the master would come back online, before taking over the master role itself).
As of Cloud Connector version 2.11.2, you can configure the timeout for the connection check, by pressing the
gear icon in the section Connection To Master of the shadow connector main page.
You can use the Reset button to drop all the configuration information on the shadow that is related to the
master, but only if the shadow is not connected to the master.
Failover Process
The shadow instance regularly checks whether the master instance is still alive. If a check fails, the shadow
instance first attempts to reestablish the connection to the master instance for the time period specified by the
takeover delay parameter.
● If no connection becomes possible during the takeover delay time period, the shadow tries to take over the
master role. At this point, it is still possible for the master to be alive and the trouble to be caused by a
network issue between the shadow and master. The shadow instance next attempts to establish a tunnel
to the given SAP Cloud Platform subaccount. If the original master is still alive (that is, its tunnel to the
cloud subaccount is still active), this attempt is denied and the shadow instance remains in "shadow
status", periodically pinging the master and trying to connect to the cloud, while the master is not yet
reachable.
● If the takeover delay period has fully elapsed, and the shadow instance does make a connection, the cloud
side opens a tunnel and the shadow instance takes over the role of the master. From this point, the shadow
instance shows the UI of the master instance and allows the usual operations of a master instance, for
example, starting/stopping tunnels, modifying the configuration, and so on.
When the original master instance restarts, it first checks whether the registered shadow instance has taken
over the master role. If it has, the master registers itself as a shadow instance on the former shadow (now
master) instance. Thus, the two Cloud Connector installations, in fact, have switched their roles.
Note
Only one shadow instance is supported. Any further shadow instances that attempt to connect are
declined by the master instance.
The master considers a shadow lost if no check/ping is received from that shadow instance during a time
interval equal to three times the check period. Only after this much time has elapsed can another
shadow system register itself.
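A minimal sketch of this threshold check, with hypothetical names and times in seconds:

```python
def shadow_lost(last_ping: float, now: float, check_interval: float) -> bool:
    """True once no ping arrived for three full check periods (times in s)."""
    return (now - last_ping) > 3 * check_interval

print(shadow_lost(0.0, 25.0, 10.0))  # False: still within the 30s grace interval
print(shadow_lost(0.0, 31.0, 10.0))  # True: another shadow may now register
```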
On the master, you can manually trigger failover by selecting the Switch Roles button. If the shadow is
available, the switch is made as expected. Even if the shadow instance cannot be reached, the role switch of
the master may still be enforced. Select Switch Roles only if you are absolutely certain it is the correct
action to take for your current circumstances.
Context
By default, the Cloud Connector uses port 8443 for its administration UI. If this port is blocked by another
process, or if you want to change it after the installation, you can use the changeport tool, provided with
Cloud Connector version 2.6.0 and higher.
Procedure
1. Change to the installation directory of the Cloud Connector. To adjust the port, execute one of the
following commands:
○ Microsoft Windows OS: changeport <desired_port>
○ Linux, Mac OS: ./changeport.sh <desired_port>
2. When you see a message stating that the port has been successfully modified, restart the Cloud Connector
to activate the new port.
The major principle for the connectivity established by the Cloud Connector is that the Cloud Connector
administrator should have full control over the connection to the cloud, that is, deciding if and when the Cloud
Connector should be connected to the cloud, the accounts to which it should be connected, and which on-
premise systems and resources should be accessible to applications of the connected subaccount.
Note
Once the Cloud Connector is freshly installed and connected to a cloud subaccount, none of the systems in
the customer network are yet accessible to the applications of the related cloud subaccount. Accessible
systems and resources must be configured explicitly in the Cloud Connector one by one, see Configure
Access Control [page 424].
As of Cloud Connector version 2.2.0, a single Cloud Connector instance can be connected to multiple
subaccounts in the cloud. This is useful especially if you need multiple subaccounts to structure your
development or to stage your cloud landscape into development, test, and production. In this case, you can use
a single Cloud Connector instance for multiple subaccounts. However, we recommend that you do not use
subaccounts running in productive scenarios and subaccounts used for development or test purposes within
the same Cloud Connector. You can add or delete a cloud subaccount to or from a Cloud Connector using the
Add and Delete buttons on the Subaccount Dashboard.
For support purposes, you can trace HTTP and RFC network traffic that passes through the Cloud Connector.
Traffic data may include business-critical information or security-sensitive data, such as user names,
passwords, address data, credit card numbers, and so on. Thus, by activating the corresponding trace level, a
Cloud Connector administrator might see data that they are not authorized to see. To prevent this,
implement the four-eyes principle, which is supported as of Cloud Connector release 1.3.2.
Once the four-eyes principle is applied, activating a trace level that dumps traffic data will require two separate
users:
● An operating system user on the machine where the Cloud Connector is installed;
● An Administrator user of the Cloud Connector user interface.
By assigning these roles to two different people, you can ensure that both persons are needed to activate a
traffic dump.
1. Create a file named writeHexDump in <scc_install_dir>\scc_config. The owner of this file must be
a user other than the operating system user who runs the cloud connector process.
Note
Usually, this file owner is the user that is specified on the Log On tab in the properties of the cloud
connector service (in the Windows Services console). We recommend that you use a dedicated OS
user for the cloud connector service.
○ Only the file owner should have write permission for the file.
○ The OS user who runs the cloud connector process needs read-only permissions for this file.
○ Initially, the file should contain a line like allowed=false.
○ In the security properties of the file scc_config.ini (same directory), make sure that only the OS
user who runs the cloud connector process has write/modify permissions for this file. The most
efficient way to do this is simply by removing all other users from the list.
2. Once you've created this file, the Cloud Connector refuses any attempt to activate the Payload Trace flag.
3. To activate the payload trace, first the owner of writeHexDump must change the file content from
allowed=false to allowed=true. Thereafter, the Administrator user can activate the payload trace
from the Cloud Connector administration screens.
1.4.3.3.6 Monitoring
Learn how to monitor the Cloud Connector from the SAP Cloud Platform cockpit and from the Cloud
Connector administration UI.
The simplest way to verify whether a Cloud Connector is running is to try to access its administration UI. If you
can open the UI in a Web browser, the cloud connector process is running.
● On Microsoft Windows operating systems, the cloud connector process is registered as a Windows
service, which is configured to start automatically after a new Cloud Connector installation. If the Cloud
Connector server is rebooted, the cloud connector process should also auto-restart immediately. You
can check the state with the following command:
To verify if a Cloud Connector is connected to a certain cloud subaccount, log on to the Cloud Connector
administration UI and go to the Subaccount Dashboard, where the connection state of the connected
subaccounts is visible, as described in section Connect and Disconnect a Cloud Subaccount [page 507].
The cockpit includes a Connectivity view, where users can check the status of the Cloud Connector attached in
the current subaccount, if any, as well as information about the Cloud Connector ID, version, used Java
The Cloud Connector offers various views for monitoring its activities and state.
Performance
All requests that travel through the Cloud Connector to a back end as specified through access control take a
certain amount of time. You can check the duration of requests in a bar chart. The requests are not shown
individually, but are assigned to buckets, each of which represents a time range.
For example, the first bucket contains all requests that took 10ms or less, the second one the requests that
took longer than 10ms, but not longer than 20ms. The last bucket contains all requests that took longer than
5000ms.
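The bucket assignment can be sketched as follows. Only the first two bounds (10ms, 20ms) and the overflow bound (5000ms) are stated above; the bounds in between are illustrative assumptions, not the UI's actual boundaries:

```python
import bisect

# Upper bucket bounds in milliseconds; a duration above the last bound
# falls into the overflow bucket (> 5000ms).
BOUNDS_MS = [10, 20, 30, 50, 75, 100, 200, 300, 500, 750,
             1000, 2000, 3000, 4000, 5000]

def bucket(duration_ms: int) -> int:
    """0-based index of the bucket a request duration falls into."""
    return bisect.bisect_left(BOUNDS_MS, duration_ms)

print(bucket(10), bucket(11), bucket(6000))  # 0 1 15
```

bisect_left places a duration equal to a bound into that bound's own bucket, matching the "10ms or less" / "longer than 10ms, but not longer than 20ms" ranges described above.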
The collection of duration statistics starts as soon as the Cloud Connector is operational. You can delete all of
these statistical records by selecting the button Delete All. After that, the collection of duration statistics starts
over.
Note
Delete All deletes not only the list of most recent requests, but it also clears the top time consumers.
A horizontal stacked bar chart breaks down the duration of the request into several parts: external (back end),
open connection, internal (SCC), SSO handling, and latency effects. The numbers in each part represent
milliseconds.
Note
In the above example, the selected request took 25ms, of which the Cloud Connector contributed 1ms.
Opening a connection took 5ms, and back-end processing consumed 7ms. Latency effects accounted for the
remaining 12ms; no SSO handling was necessary, so it took no time at all.
To further restrict the selection of the listed 50 most recent requests, you can edit the resource filter settings
for each virtual host:
Note
If you specify sub-paths for a resource, the request URL must match exactly one of these entries to be
recorded. Without specified sub-paths (and the value Path and all sub-paths set for a resource), all
sub-paths of a specified resource are recorded.
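A simplified sketch of this matching rule; the function name and the exact matching details are assumptions derived from the note above:

```python
def recorded(request_path: str, resource: str, sub_paths: list) -> bool:
    """Sketch of the recording filter described in the note (simplified)."""
    if sub_paths:
        # Explicit sub-paths: the request URL must match one entry exactly.
        return request_path in sub_paths
    # No sub-paths ("Path and all sub-paths"): record the resource path
    # and everything below it.
    return request_path == resource or request_path.startswith(resource.rstrip("/") + "/")

print(recorded("/sap/bc/srv", "/sap/bc", []))           # True: sub-path recorded
print(recorded("/sap/bc/c", "/sap/bc", ["/sap/bc/a"]))  # False: no exact match
```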
This option is similar to Most Recent Requests; however, requests are not shown in order of appearance, but
rather sorted by their duration (in descending order). Furthermore, you can delete top time consumers, which
has no effect on most recent requests or the performance overview.
Back-End Connections
The maximum idle time appears on the rightmost side of the horizontal axis. For any point t on that axis
(representing a time value between 0ms and the maximum idle time), the ordinate is the number of
connections that have been idle for no longer than t. You can click inside the graph area to view the respective
abscissa t and ordinate.
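The ordinate described above is a simple cumulative count, sketched here with hypothetical idle times:

```python
def idle_no_longer_than(idle_times_ms, t_ms):
    """Number of connections idle for no longer than t (the chart's ordinate)."""
    return sum(1 for idle in idle_times_ms if idle <= t_ms)

# Hypothetical idle times of four open back-end connections, in ms:
idle = [120, 450, 450, 3000]
print(idle_no_longer_than(idle, 500))   # 3 connections idle for <= 500ms
```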
Hardware Metrics
You can check the current state of critical system resources using pie charts. The history of CPU and memory
usage (recorded in intervals of 15 seconds) is also shown graphically.
● View usage at a certain point in time by clicking inside the main graph area, and
● Zoom in on a certain excerpt of the historic data.
The data in its entirety is always visible in the smaller bottom area right below the main graph.
If you have zoomed in, an excerpt window in the bottom area shows you where you are in the main area with
respect to all of the data. You can:
Use the Cloud Connector monitoring APIs to include monitoring information in your own monitoring tool.
Context
You might want to integrate some monitoring information in the monitoring tool you use.
● Health Check
● Subaccount data
● Connection data
● Performance data
● Top Time Consumers
Note
This API set is designed particularly for monitoring the Cloud Connector via the SAP Solution Manager, see
Configure Solution Management Integration [page 492].
Prerequisites
You must use Basic Authentication or form field authentication to read the monitoring data via API.
Users must be assigned to the role sccmonitoring or sccadmin. The role sccmonitoring is restricted to
the monitoring APIs.
Note
The Health Check API does not require a specified user. Separate users are available through LDAP only.
URL Parameters
https://<scchost>:<sccport>/xxx
Available APIs
Note
This API is relevant for the master instance as well as for the shadow instance.
Using the health check API, it is possible to recognize that the Cloud Connector is up and running. The purpose
of this health check is only to verify that the Cloud Connector is not down. It does not check any internal state
or tunnel connection states. Thus, it is a quick check that you can execute frequently:
URL https://<scc_host>:<scc_port>/exposed?action=ping
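A minimal health check client might look as follows. Host and port are placeholders for your installation, and certificate validation may need adjustment for installations with self-signed certificates:

```python
import urllib.request

def ping_url(host: str, port: int) -> str:
    """Build the health check URL documented above."""
    return f"https://{host}:{port}/exposed?action=ping"

def is_alive(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if the Cloud Connector process answers the ping URL."""
    try:
        with urllib.request.urlopen(ping_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or TLS failure: treat as "not alive".
        return False

print(ping_url("scc.example.internal", 8443))
```

Because this API requires no user, such a check can run frequently from any monitoring tool.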
Note
Using this API, you can read the list of all subaccounts connected to the Cloud Connector and view detailed
information for each subaccount:
URL https://<scchost>:<sccport>/api/monitoring/
subaccounts
Input None
Example:
Note
URL https://<scchost>:<sccport>/api/monitoring/
connections/backends
Input None
Output JSON document with a list of all open connections and detailed information about back-end systems:
Example:
Note
Using this API, you can read the data provided by the Cloud Connector performance monitor:
URL https://<scchost>:<sccport>/api/monitoring/
performance/backends
Output JSON document with a list providing the Cloud Connector performance
monitor data with detailed information about back-end performance:
Example:
Note
Using this API, you can read the data of top time consumers provided by the Cloud Connector performance
monitor:
Input None
Example:
1.4.3.3.7 Alerting
Configure the Cloud Connector to send e-mail messages when situations occur that may prevent it from
operating correctly.
To configure alert e-mails, choose Alerting from the top-left navigation menu.
You must specify the receivers of the alert e-mails (E-mail Configuration) as well as the Cloud Connector
resources and components that you want to monitor (Observation Configuration). The corresponding Alert
Messages are also shown in the Cloud Connector administration UI.
1. Select E-mail Configuration to specify the list of e-mail addresses to which alerts should be sent (Send To).
Note
The addresses you enter here can use either of the following formats: [email protected] or John
Doe <[email protected]>.
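Both accepted formats can be parsed with Python's standard library, for example (the addresses are hypothetical):

```python
from email.utils import parseaddr

# Hypothetical addresses in the two accepted formats:
for raw in ("jane.doe@example.com", "John Doe <john.doe@example.com>"):
    name, addr = parseaddr(raw)
    print(repr(name), addr)
```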
Observation Configuration
Once you've entered the e-mail addresses to receive alerts, the next step is to identify the resources and
components of the Cloud Connector that you want to observe. E-mail messages are sent when any of the
chosen components or resources malfunction or are in a critical state.
Note
The Cloud Connector does not dispatch the same alert repeatedly. As soon as an issue has been resolved,
an informational alert is generated, sent, and listed in Alert Messages (see section below).
Alert Messages
The Cloud Connector also shows alert messages on screen, under Alerting > Alert Messages.
You can remove alerts using Delete or Delete All. If you delete active (unresolved) alerts, they reappear in the
list after the next health check interval.
Audit log data can alert Cloud Connector administrators to unusual or suspicious network and system
behavior.
Additionally, the audit log data can provide auditors with information required to validate security policy
enforcement and proper segregation of duties. IT staff can use the audit log data for root-cause analysis
following a security incident.
The Cloud Connector includes an auditor tool for viewing and managing audit log information about access
between the cloud and the Cloud Connector, as well as for tracking of configuration changes done in the Cloud
Connector. The written audit log files are digitally signed by the Cloud Connector so that their integrity can be
checked, see Manage Audit Logs [page 527].
Note
We recommend that you permanently switch on Cloud Connector audit logging in productive scenarios.
● Under normal circumstances, set the logging level to Security (the default configuration value).
● If legal requirements or company policies dictate it, set the logging level to All. This lets you use the
log files to, for example, detect attacks of a malicious cloud application that tries to access on-premise
services without permission, or in a forensic analysis of a security incident.
We also recommend that you regularly copy the audit log files of the Cloud Connector to an external persistent
storage according to your local regulations. The audit log files can be found in the Cloud Connector root
directory /log/audit/<subaccount-name>/audit-log_<timestamp>.csv.
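A sketch of such a copy job, following the documented directory layout. The function name and the archive location are hypothetical:

```python
import shutil
from pathlib import Path

def archive_audit_logs(scc_root: Path, archive: Path) -> int:
    """Copy audit logs from <scc_root>/log/audit/<subaccount>/ to an archive."""
    copied = 0
    for csv in (scc_root / "log" / "audit").glob("*/audit-log_*.csv"):
        target_dir = archive / csv.parent.name    # keep the subaccount folder
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(csv, target_dir / csv.name)  # copy2 preserves timestamps
        copied += 1
    return copied
```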
Configure audit log settings and verify the integrity of audit logs.
Choose Audit from your subaccount menu and go to Settings to specify the type of audit events the Cloud
Connector should log at runtime. You can currently select between the following Audit Levels (for both the
<subaccount> and <cross-subaccount> scope):
● Security: Default value. The Cloud Connector writes an audit entry (Access Denied) for each request
that was blocked. It also writes audit entries, whenever an administrator changes one of the critical
configuration settings, such as exposed back-end systems, allowed resources, and so on.
● All: The Cloud Connector writes one audit entry for each received request, regardless of whether it was
allowed to pass or not (Access Allowed and Access Denied). It also writes audit entries that are
relevant to the Security mode.
● Off: No audit entries are written.
We recommend that you don't log all events unless you are required to do so by legal requirements or
company policies. Generally, logging security events only is sufficient.
To enable automatic cleanup of audit log files, choose a period (14 to 365 days) from the list in the field
<Automatic Cleanup>.
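The automatic cleanup can be approximated as follows. This is a hypothetical sketch, not the actual implementation; only the 14-to-365-day range matches the UI setting:

```python
import time
from pathlib import Path

def cleanup_audit_logs(audit_dir: Path, days: int) -> int:
    """Delete audit log files older than the retention period."""
    days = max(14, min(365, days))          # the UI allows 14 to 365 days
    cutoff = time.time() - days * 86400
    removed = 0
    for csv in audit_dir.glob("*/audit-log_*.csv"):
        if csv.stat().st_mtime < cutoff:
            csv.unlink()
            removed += 1
    return removed
```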
Audit entries for configuration changes are written for the following different categories:
In the Audit Viewer section, you can first define filter criteria, then display the selected audit entries.
These filter criteria are combined with a logical AND so that all audit entries that match these criteria are shown.
If you have modified one of the criteria, select Refresh to display the updated selection of audit events that
match the new criteria.
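The AND combination of filter criteria can be sketched as follows; the criteria field names are hypothetical:

```python
def matches(entry: dict, criteria: dict) -> bool:
    """True if the entry satisfies every filter criterion (logical AND)."""
    return all(entry.get(key) == value for key, value in criteria.items())

entries = [
    {"type": "Access Denied", "user": "jdoe"},
    {"type": "Access Allowed", "user": "jdoe"},
]
shown = [e for e in entries if matches(e, {"type": "Access Denied", "user": "jdoe"})]
print(len(shown))  # 1
```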
Note
To prevent a single person from being able to both change the audit log level, and delete audit logs, we
recommend that the operating system administrator and the SAP Cloud Platform administrator are
different persons. We also suggest that you turn on the audit log at the operating system level for file
operations.
The Check button checks all files that are filtered by the specified date range.
To check the integrity of the audit logs, go to <scc_installation>/auditor. This directory contains an
executable go script file (go.cmd on Microsoft Windows, go.sh on other operating systems).
If you start the go file without specifying parameters from <scc_installation>/auditor, all available audit
logs for the current Cloud Connector installation are verified.
The auditor tool is a Java application, and therefore requires a Java runtime, specified in JAVA_HOME, to
execute:
Example
In the following example, the Audit Viewer displays Any audit entries, at Security level, for the time frame
between May 28, 01:00:00 and May 28, 23:59:59. Automatic cleanup of audit logs has been set to 365 days in
the Settings section:
1.4.3.3.9 Troubleshooting
To troubleshoot connection problems, monitor the state of your open tunnel connections in the Cloud
Connector, and view different types of logs and traces.
Monitoring
To view a list of all currently connected applications, choose your Subaccount from the left menu and go to
section Cloud Connections:
● Application name: The name of the application, as also shown in the cockpit, for your subaccount
● Connections: The number of currently existing connections to the application
● Connected Since: The earliest start time of a connection to this application
● Peer Labels: The name of the application processes, as also shown for this application in the cockpit, for
your subaccount
Logs
The Logs tab page includes some files for troubleshooting that are intended primarily for SAP Support. These
files include information about both internal Cloud Connector operations and details about the communication
between the local and the remote (SAP Cloud Platform) tunnel endpoint.
● Cloud Connector Loggers adjusts the levels for Java loggers directly related to Cloud Connector
functionality.
● Other Loggers adjusts the log level for all other Java loggers available at the runtime. Change this level only
when requested to do so by SAP support. When set to a level higher than Information, it generates a
large number of trace entries.
● CPIC Trace Level allows you to set the level between 0 and 3 and provides traces for the CPIC-based RFC
communication with ABAP systems.
● When the Payload Trace is activated for a subaccount, all HTTP and RFC traffic crossing the tunnel for
that subaccount through this Cloud Connector is traced in files named
traffic_trace_<subaccount id>_on_<regionhost>.trc.
Note
Use payload and CPIC tracing at Level 3 carefully and only when requested to do so for support
reasons. The trace may write sensitive information (such as payload data of HTTP/RFC requests and
responses) to the trace files, and thus, present a potential security risk. As of version 2.2, the Cloud
Connector supports an implementation of a "four-eyes principle" for activating the trace levels that
dump the network traffic into a trace file. This principle requires two users to activate a trace level that
records traffic data. See Secure the Activation of Traffic Traces [page 508].
● TLS Trace: When the TLS trace is activated, the ljs_trace.log file includes information for TLS-
protected communication. A restart is required to activate a change of this setting. Activate this trace only
when requested by SAP support: it has a high impact on performance, as it produces a large amount of
trace data.
View all existing trace files and delete the ones that are no longer needed.
To prevent your browser from being overloaded when multiple large files are loaded simultaneously, the Cloud
Connector loads only one page into memory. Use the page buttons to move through the pages.
Use the Download/Download All icons to create a ZIP archive containing one trace file or all trace files.
Download it to your local file system for convenient analysis.
Note
If you want to download more than one file, but not all, select the respective rows of the table and choose
Download All.
When running the Cloud Connector with SAP JVM, you can trigger the creation of a thread dump by
pressing the Thread Dump button; the dump is written to the JVM trace file vm_$PID_trace.log. SAP
support may request you to create one if it is expected to help during incident analysis.
Note
From the UI, you can't delete trace files that are currently in use. You can delete them from the Linux OS
command line; however, we recommend that you do not use this option to avoid inconsistencies in the
internal trace management of the Cloud Connector.
Once a problem has been identified, turn off the trace again by adjusting the trace and log settings
accordingly, so that the files are not flooded with unnecessary entries.
Use the Refresh button to update the displayed information, for example, because more trace files might
have been written since you last updated the display.
If you contact support for help, always attach the appropriate log files and provide the timestamp or
period when the reported issue was observed. Depending on the situation, different logs may help to find the
root cause.
Some typical settings to get the required data are listed below:
● <Cloud Connector Loggers> provide details related to connections to SAP Cloud Platform and to
backend systems as well as master-shadow communication in case of a high availability setup.
However, it does not contain any payload data. This kind of trace is written into ljs_trace.log, which is
the most relevant log for the Cloud Connector.
● <Other Loggers> provide details related to the tomcat runtime, in which the Cloud Connector is
running. The traces are written into ljs_trace.log as well, but they are needed only in very special
support situations. If you don't need these traces, leave the level on Information or even lower.
● Payload data are written into the traffic trace file for HTTP or RFC requests if the payload trace is
activated, or into the CPI-C trace file for RFC requests, if the CPI-C trace is set to level 3.
● <TLS trace> is helpful to analyze TLS handshake failures from Cloud Connector to Cloud or from Cloud
Connector to backend. It should be turned off again as soon as the issue has been reproduced and
recorded in the traces.
● Setting the audit log to level ALL for <Subaccount Audit Level> is the easiest way to check whether a
request reached the Cloud Connector and whether it is being processed.
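As an illustration, the relevant trace files can be packaged for a support ticket like this (a minimal shell sketch; the log directory and file content are mocked here, since the real location depends on your installation — point LOG_DIR at the Cloud Connector's log folder instead):

```shell
set -e
# Mock log directory standing in for the Cloud Connector's log folder;
# in a real installation, set LOG_DIR to that folder instead.
LOG_DIR="$(mktemp -d)"
printf '2020-01-30 10:15:02#ERROR#tunnel handshake failed\n' > "$LOG_DIR/ljs_trace.log"

# Name the archive after the incident time so support can correlate it
# with the reported period.
ARCHIVE="$LOG_DIR/scc-logs-incident-20200130-1015.tar.gz"
tar -czf "$ARCHIVE" -C "$LOG_DIR" ljs_trace.log

# List the archive contents as a quick sanity check before uploading.
tar -tzf "$ARCHIVE"
```

Attaching a single timestamped archive rather than individual files makes it easier for support to match the logs against the reported period.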
A hybrid scenario is one in which applications running on SAP Cloud Platform require access to on-premise
systems. Define and document your scenario to get an overview of the required process steps.
Tasks
To gain an overview of the cloud and on-premise landscape that is relevant for your hybrid scenario, we
recommend that you diagrammatically document your cloud subaccounts, their connected Cloud Connectors
and any on-premise back-end systems. Include the subaccount names, the purpose of the subaccounts (dev,
test, prod), information about the Cloud Connector machines (host, domains), the URLs of the Cloud
Connectors in the landscape overview document, and any other details you might find useful to include.
Document the users who have administrator access to the cloud subaccounts, to the Cloud Connector
operating system, and to the Cloud Connector administration UI.
Such an administrator role documentation could look like the following sample table (X indicates
administrator access; the administrator columns are illustrative):

Cloud Subaccount | Administrator A | Administrator B
CA Dev1          | X               |
CA Dev2          | X               |
CA Test          | X               | X
CA Prod          |                 | X
Create and document separate email distribution lists for both the cloud subaccount administrators and the
Cloud Connector administrators.
Define and document mandatory project and development guidelines for your SAP Cloud Platform projects. An
example of such a guideline could be similar to the following.
Every SAP Cloud Platform project in this organization requires the following:
Define and document how to set a cloud application live and how to configure needed connectivity for such an
application.
For example, the following processes could be seen as relevant and should be defined and documented in
more detail:
1. Transferring application to production: Steps for transferring an application to the productive status on the
SAP Cloud Platform.
2. Application connectivity: The steps for adding a connectivity destination to a deployed application for
connections to other resources in the test or productive landscape.
3. Cloud Connector Connectivity: Steps for adding an on-premise resource to the Cloud Connector in the test
or productive landscapes to make it available for the connected cloud subaccounts.
4. On-premise system connectivity: The steps for setting up a trusted relationship between an on-premise
system and the Cloud Connector, and to configure user authentication and authorization in the on-premise
system in the test or productive landscapes.
5. Application authorization: The steps for requesting and assigning an authorization that is available inside
the SAP Cloud Platform application to a user in the test or productive landscapes.
6. Administrator permissions: Steps for requesting and assigning the administrator permissions in a cloud
subaccount to a user in the test or productive landscape.
1.4.3.4 Security
Learn how Cloud Connector features help you to meet the highest security requirements.
Features
Security is a crucial concern for any cloud-based solution and has a major impact on an enterprise's business
decision whether to make use of such solutions. SAP Cloud Platform is a platform-as-a-service offering that
addresses security on several levels, for example:

Level | Features
Physical and Environmental Layer [page 545] | Strict physical access control; high availability and disaster
recovery capabilities
SAP Cloud Platform provides the Cloud Connector to allow integration of on-demand applications with services
and systems running in secured customer networks, as well as to support secure database connections from
the customer network to SAP HANA databases running on SAP Cloud Platform. As these are highly security-
sensitive topics, this section gives an overview of how the Cloud Connector ensures the highest security
standards for the mentioned scenarios.
On application level, the main tasks to ensure secure Cloud Connector operations are to provide appropriate
frontend security (for example, validation of entries) and a secure application development.
Basically, you should follow the rules given in the product security standard, for example, protection against
cross-site scripting (XSS) and cross-site request forgery (XSRF).
The scope and design of security measures on application level strongly depend on the specific needs of your
application.
You can use SAP Cloud Platform Connectivity to securely integrate cloud applications with systems running in
isolated customer networks.
Overview
After installing the Cloud Connector as integration agent in your on-premise network, you can use it to
establish a persistent TLS tunnel to SAP Cloud Platform subaccounts.
To establish this tunnel, the Cloud Connector administrator must authenticate against the related SAP Cloud
Platform subaccount, of which they must be a member. Once established, the tunnel can be used by
applications of the connected subaccount to remotely call systems in your network.
Architecture
The figure below shows a system landscape in which the Cloud Connector is used for secure connectivity
between SAP Cloud Platform applications and on-premise systems.
A company network is usually divided into multiple network zones according to the security level of the
contained systems. The DMZ network zone contains and exposes the external-facing services of an
organization to an untrusted network, typically the Internet. Besides this, there can be one or multiple other
network zones which contain the components and services provided in the company’s intranet.
You can set up the Cloud Connector either in the DMZ or in an inner network zone. Technical prerequisites for
the Cloud Connector to work properly are:
● The Cloud Connector must have access to the SAP Cloud Platform landscape host, either directly or via
HTTPS proxy (see also: Prerequisites [page 352]).
● The Cloud Connector must have direct access to the internal systems it shall provide access to; that is,
there must be transparent connectivity between the Cloud Connector and the internal system.
It is a company's decision whether the Cloud Connector is set up in the DMZ and operated centrally by an IT
department, or set up in the intranet and operated by the line of business.
Exposing Resources
Once installed, none of the internal systems are accessible by default through the Cloud Connector: in the
Cloud Connector, you must explicitly configure each system, and each service and resource on every system,
that is to be exposed to SAP Cloud Platform.
You can also specify a virtual host name and port for a configured on-premise system, which is then used in the
cloud. This prevents information about physical hosts from being exposed to the cloud.
TLS Tunnel
The TLS (Transport Layer Security) tunnel is established from the Cloud Connector to SAP Cloud Platform via
a so-called reverse invoke approach. This lets an administrator have full control of the tunnel, since it can't be
opened from the cloud side.
The tunnel itself uses TLS with strong encryption of the communication and mutual authentication of both
communication sides, the client side (Cloud Connector) and the server side (SAP Cloud Platform).
The X.509 certificates used to authenticate the Cloud Connector and the SAP Cloud Platform subaccount are
issued and controlled by SAP Cloud Platform. They are kept in secure storages in the Cloud Connector and in
the cloud. Since the tunnel is encrypted and mutually authenticated, confidentiality and authenticity of the
communication between the SAP Cloud Platform applications and the Cloud Connector are guaranteed.
As an additional level of control, the Cloud Connector optionally allows restricting the list of SAP Cloud
Platform applications which are able to use the tunnel. This is useful in situations where multiple applications
are deployed in a single SAP Cloud Platform subaccount while only particular applications require connectivity
to on-premise systems.
SAP Cloud Platform guarantees strict isolation on subaccount level provided by its infrastructure and platform
layer. An application of one subaccount is not able to access and use resources of another subaccount.
Supported Protocols
The Cloud Connector supports inbound connectivity for HTTP and RFC; no other protocols are supported.
Principal Propagation
The Cloud Connector also supports principal propagation of the cloud user identity to connected on-premise
systems (single sign-on). For this, the system certificate (for HTTPS) or the SNC PSE (for RFC) must be
configured, and trust with the respective on-premise system must be established. Trust configuration, in
particular for principal propagation, is the only reason to configure or touch an on-premise system when using
it with the Cloud Connector.
The Cloud Connector supports the communication direction from the on-premise network to SAP Cloud
Platform, using a database tunnel.
The database tunnel is used to connect local database tools via JDBC or ODBC to the SAP HANA DB or other
databases on SAP Cloud Platform, for example, SAP BusinessObjects tools like Lumira, BOE, or Data Services.
● The database tunnel only allows JDBC and ODBC connections from the Cloud Connector into the cloud. A
reuse for other protocols is not possible.
● The tunnel uses the same security mechanisms as for the inbound connectivity:
○ TLS-encryption and mutual authentication
○ Audit logging
To use the database tunnel, two different SAP Cloud Platform users are required:
● A platform user (member of the SAP Cloud Platform subaccount) establishes the database tunnel to the
HANA DB.
● A HANA DB user is needed for the ODBC/JDBC connection to the database itself. For the HANA DB user,
the role and privilege management of HANA can be used to control which actions they can perform on the
database.
As audit logging is a critical element of an organization's risk management strategy, the Cloud Connector
provides audit logging for the complete record of access between the cloud and the Cloud Connector, as well
as of configuration changes made in the Cloud Connector.
Integrity Check
The written audit log files are digitally signed by the Cloud Connector so that they can be checked for integrity
(see also: Manage Audit Logs [page 527]).
● The audit log data of the Cloud Connector can be used to alert Cloud Connector administrators regarding
unusual or suspicious network and system behavior.
● The audit log data can provide auditors with information required to validate security policy enforcement
and proper segregation of duties.
● IT staff can use the audit log data for root-cause analysis following a security incident.
Infrastructure and network facilities of the SAP Cloud Platform ensure security on network layer by limiting
access to authorized persons and specific business purposes.
Isolated Network
The SAP Cloud Platform landscape runs in an isolated network, which is protected from the outside by
firewalls, DMZ, and communication proxies for all inbound and outbound communications to and from the
network.
Sandboxed Environments
Learn about data center security provided for SAP Cloud Platform Connectivity.
SAP Cloud Platform runs in SAP-hosted data centers which are compliant with regulatory requirements. The
security measures include, for example:
● strict physical access control mechanisms using biometrics, video surveillance, and sensors
● high availability and disaster recoverability with redundant power supply and its own power generation
Recommended Actions
Topic: Network Zone
Description: Depending on the needs of the project, the Cloud Connector can be either set up in the DMZ and
operated centrally by the IT department, or set up in the intranet and operated by the line of business.
Recommended Actions: To access highly secure on-premise systems, operate the Cloud Connector centrally
by the IT department and install it in the DMZ of the company network. Set up trust between the on-premise
system and the Cloud Connector, and only accept requests from trusted Cloud Connectors in the system.

Topic: OS-Level Protection
Description: The Cloud Connector is a security-critical component that handles the inbound access from SAP
Cloud Platform applications to systems of an on-premise network.
Recommended Actions: Restrict access to the operating system on which the Cloud Connector is installed to
the minimal set of users who should administrate the Cloud Connector.

Topic: Administration UI
Description: After installation, the Cloud Connector provides an initial user name and password and forces the
user (Administrator) to change the password upon initial logon. You can access the Cloud Connector
administration UI remotely via HTTPS. After installation, it uses a self-signed X.509 certificate as SSL server
certificate, which is not trusted by default by Web browsers.
Recommended Actions: Change the password of the Administrator user immediately after installation and
choose a strong password (see also Recommendations for Secure Setup [page 370]). Exchange the self-signed
X.509 certificate of the Cloud Connector administration UI for a certificate that is trusted by your company and
the company's approved Web browser settings (see Recommended: Replace the Default SSL Certificate
[page 374]).

Topic: Audit Logging
Description: For end-to-end traceability of configuration changes in the Cloud Connector, as well as of
communication delivered by the Cloud Connector, switch on audit logging for productive scenarios.
Recommended Actions: Switch on audit logging in the Cloud Connector: set the audit level to "All" (see
Recommendations for Secure Setup [page 370] and Manage Audit Logs [page 527]).

Topic: High Availability
Description: To guarantee high availability of the connectivity for cloud integration scenarios, run productive
instances of the Cloud Connector in high availability mode, that is, with a second (redundant) Cloud Connector
in place.
Recommended Actions: Use the high availability feature of the Cloud Connector for productive scenarios (see
Install a Failover Instance for High Availability [page 501]).

Topic: Supported Protocols
Description: HTTP, HTTPS, RFC, and RFC over SNC are currently supported as protocols for the
communication direction from the cloud to on-premise. The route from the application VM in the cloud to the
Cloud Connector is always encrypted. You can configure the route from the Cloud Connector to the on-premise
system to be encrypted or unencrypted.
Recommended Actions: The route from the Cloud Connector to the on-premise system should be encrypted
using TLS (for HTTPS) or SNC (for RFC). Trust between the Cloud Connector and the connected on-premise
systems should be established (see Set Up Trust for Principal Propagation [page 402]).

Topic: Configuration of On-Premise Systems
Description: When configuring the access to an internal system in the Cloud Connector, map physical host
names to virtual host names to prevent exposure of information on physical systems to the cloud. To allow only
trusted applications of your SAP Cloud Platform subaccount to access on-premise systems, configure the list
of trusted applications in the Cloud Connector.
Recommended Actions: Use hostname mapping of exposed on-premise systems in the access control of the
Cloud Connector (see Configure Access Control (HTTP) [page 425] and Configure Access Control (RFC)
[page 432]). Narrow the list of cloud applications allowed to use the on-premise tunnel to the ones that need
on-premise connectivity (see Set Up Trust for Principal Propagation [page 402]).

Topic: Cloud Connector Instances
Description: You can connect a single Cloud Connector instance to multiple SAP Cloud Platform subaccounts.
Recommended Actions: Use different Cloud Connector instances to separate productive and non-productive
scenarios.
1.4.3.5 Upgrade
Upgrade your Cloud Connector and avoid connectivity downtime during the update.
The steps for upgrading your Cloud Connector are specific to the operating system that you use. Previous
settings and configurations are automatically preserved.
Note
Upgrade is supported only for installer versions, not for portable versions. See Installation [page 351].
If you have a single-machine Cloud Connector installation, a short downtime is unavoidable during the upgrade
process. However, if you have set up a master and a shadow instance, you can perform the upgrade without
downtime by executing the following procedure:
Result: Both instances have now been upgraded without connectivity downtime and without configuration
loss.
For more information, see Install a Failover Instance for High Availability [page 501].
Microsoft Windows OS
1. Uninstall the Cloud Connector as described in Uninstallation [page 551] and make sure to retain the
existing configuration.
2. Reinstall the Cloud Connector within the same directory. For more information, see Installation on
Microsoft Windows OS [page 363].
3. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due
to the upgraded UI.
Linux OS
1. Run the following command:
rpm -U com.sap.scc-ui-<version>.rpm
2. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due
to the upgraded UI.
1.4.3.6 Update the Java VM
Sometimes you must update the Java VM used by the Cloud Connector, for example, because of expired SSL
certificates contained in the JVM, bug fixes, deprecated JVM versions, and so on.
● If you make a replacement in the same directory, shut down the Cloud Connector, upgrade the JVM, and
restart the Cloud Connector when you are done.
● If you change the installation directory of the JVM, follow the steps below for your operating system.
Note
If the JavaHome value does not yet exist, create it here with a "String Value" (REG_SZ) and specify the full
path of the Java installation directory, for example, C:\sapjvm_7.
5. Close the registry editor and restart the Cloud Connector.
Linux OS
After executing the above steps, the Cloud Connector should be running again and should have picked up the
new Java version during startup. You can verify this by logging in to the Cloud Connector with your browser,
opening the About dialog, and checking that the field <Java Details> shows the version number and build
date of the new Java VM. After you have verified that the new JVM is indeed used by the Cloud Connector,
delete or uninstall the old JVM.
1.4.3.7 Uninstallation
● If you have installed an installer variant of the Cloud Connector, follow the steps for your operating system
to uninstall the Cloud Connector.
Microsoft Windows OS
1. In the Windows software administration tool, search for Cloud Connector (formerly named SAP HANA
cloud connector 2.x).
2. Select the entry and follow the appropriate steps to uninstall it.
3. When you are uninstalling in the context of an upgrade, make sure to retain the configuration files.
Linux OS
rpm -e com.sap.scc-ui
Caution
Mac OS X
Portable Variants
(Microsoft Windows OS, Linux OS, Mac OS X) If you have installed a portable version (zip or tgz archive) of the
Cloud Connector, simply remove the directory in which you have extracted the Cloud Connector archive.
Technical Issues
Does the Cloud Connector send data from on-premise systems to SAP Cloud Platform or the
other way around?
The connection is opened from the on-premise system to the cloud, but is then used in the other direction.
An on-premise system, in contrast to a cloud system, is normally located behind a restrictive firewall, and its
services aren't accessible through the Internet. This concept follows a widely used pattern often referred to as
reverse invoke proxy.
Is the connection between the SAP Cloud Platform and the Cloud Connector encrypted?
Yes, by default, TLS encryption is used for the tunnel between SAP Cloud Platform and the Cloud Connector.
If used properly, TLS is a highly secure protocol. It is the industry standard for encrypted communication and
serves, for example, as the secure channel in HTTPS.
Keep your Cloud Connector installation up to date, and SAP makes sure that no weak or deprecated ciphers are
used for TLS.
Can I use a TLS-terminating firewall between Cloud Connector and SAP Cloud Platform?
This is not possible. Such a firewall is effectively a deliberate man-in-the-middle, which prevents the Cloud
Connector from establishing mutual trust with the SAP Cloud Platform side.
What is the oldest version of SAP Business Suite that's compatible with the Cloud Connector?
The Cloud Connector can connect an SAP Business Suite system version 4.6C and newer.
We recommend that you always use the latest supported JRE version.
Note
Version 2.8 and later of the Cloud Connector may have problems with ciphers in Google Chrome if you use
JVM 7. For more information, read this SCN article.
Which configuration in the SAP Cloud Platform destinations do I need to handle the user
management access to the Cloud User Store of the Cloud Connector?
Is the Cloud Connector sufficient to connect the SAP Cloud Platform to an SAP ABAP back end
or is SAP Cloud Platform Integration needed?
It depends on the scenario: For pure point-to-point connectivity to call on-premise functionality like BAPIs,
RFCs, OData services, and so on, that are exposed via on-premise systems, the Cloud Connector might suffice.
However, if you require advanced functionality, for example, n-to-n connectivity as an integration hub, SAP
Cloud Platform Integration – Process Integration is a more suitable solution. SAP Cloud Platform Integration
can use the Cloud Connector as a communication channel.
The amount of bandwidth depends greatly on the application that is using the Cloud Connector tunnel. If the
tunnel isn't currently used but still connected, only a few bytes per minute are used to keep the connection
alive.
What happens to a response if there's a connection failure while a request is being processed?
The response is lost. The Cloud Connector only provides tunneling; it does not store and forward data when
there are network issues.
For productive instances, we recommend installing the Cloud Connector on a single purpose machine. This is
relevant for Security [page 537]. For more details on which network zones to choose for the Cloud Connector
setup, see Network Zones [page 356].
● Development
● Production master
● Production shadow
Note
Do not run the production master and the production shadow as VMs inside the same physical machine.
Doing so removes the redundancy, which is needed to guarantee high availability. A QA (Quality Assurance)
instance is a useful extension. For disaster recovery, you also need two additional instances: another master
instance and another shadow instance.
Can I send push messages from an on-premise system to the SAP Cloud Platform through the
Cloud Connector?
We currently support 64-bit operating systems running only on an x86-64 processor (also known as x64,
x86_64 or AMD64).
Yes, you should be able to connect almost any system that supports the HTTP protocol to SAP Cloud Platform,
for example, Apache HTTP Server, Apache Tomcat, Microsoft IIS, or Nginx.
Can I authenticate with client certificates configured in SAP Cloud Platform destinations at
HTTP services that are exposed via the Cloud Connector?
Administration
Yes, find more details here: Manage Audit Logs [page 527].
No, currently there is only one role that allows complete administration of the Cloud Connector.
Yes, to enable this, you must configure an LDAP server. See: Use LDAP for Authentication [page 498].
How can I reset the Cloud Connector's administrator password when not using LDAP for
authentication?
Delete the users.xml file in the Cloud Connector's configuration folder and restart the Cloud Connector. This
resets the password and user name to their default values.
You can also edit the file manually; however, we strongly recommend that you don't.
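A minimal sketch of this reset, assuming the user store is a users.xml file in the installation's configuration folder (the exact path depends on your installation; the directory below is mocked so the sketch is self-contained):

```shell
set -e
# Mock installation directory; replace with your real Cloud Connector
# installation path. The config/users.xml location is an assumption --
# check your installation's configuration folder.
SCC_HOME="$(mktemp -d)"
mkdir -p "$SCC_HOME/config"
echo '<users/>' > "$SCC_HOME/config/users.xml"

# Remove the user store; per the documentation, the next start of the
# Cloud Connector recreates it with the default user name and password.
rm "$SCC_HOME/config/users.xml"
```

Stop the Cloud Connector before removing the file and restart it afterwards so the defaults take effect.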
Package the following three folders, located in your Cloud Connector installation directory, into an archive file:
● config
● config_master
● scc_config
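The backup step above can be sketched as follows (the installation directory is mocked here so the example is self-contained; in a real backup, point SCC_HOME at your actual Cloud Connector installation):

```shell
set -e
# Mock installation layout containing the three configuration folders
# named above; stands in for the real installation directory.
SCC_HOME="$(mktemp -d)"
mkdir -p "$SCC_HOME/config" "$SCC_HOME/config_master" "$SCC_HOME/scc_config"
echo '<users/>' > "$SCC_HOME/config/users.xml"

# Archive the three folders with paths relative to the installation
# directory, so the backup can later be extracted into another
# installation of the same Cloud Connector version.
BACKUP="$SCC_HOME/scc-config-backup.tar.gz"
tar -czf "$BACKUP" -C "$SCC_HOME" config config_master scc_config

# Quick check of the archive contents.
tar -tzf "$BACKUP" | sort
```

Keeping the paths relative (via tar's -C option) is what allows the archive to be restored into a different installation directory.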
As the layout of the configuration files may change between versions, we recommend that you don't restore a
configuration backup of a Cloud Connector 2.x installation into a 2.y installation.
Yes, you can create an archive file of the installation directory to create a full backup. Before you restore from a
backup, note the following:
● If you restore the backup on a different host, the UI certificate will be invalidated.
● Before you restore the backup, perform a "normal" installation and then replace the files. This registers
the Cloud Connector with your operating system's package manager.
This user opens the tunnel and generates the certificates that are used for mutual trust later on.
The user is not part of the certificate that identifies the Cloud Connector.
In both the Cloud Connector UI and in the SAP Cloud Platform cockpit, this user ID appears as the one who
performed the initial configuration (even though the user may have left the company).
What happens to a Cloud Connector connection if the user who created the tunnel leaves the
company?
This does not affect the tunnel, even if you restart the Cloud Connector.
For how long does SAP continue to support older Cloud Connector versions?
Each Cloud Connector version is supported for 12 months, which means the cloud side infrastructure is
guaranteed to stay compatible with those versions.
After that time frame, compatibility is no longer guaranteed and interoperability could be dropped.
Furthermore, after an additional 3 months, the next feature release published after that period will no longer
support an upgrade from the deprecated version as a starting release.
SAP Cloud Platform customers can purchase subaccounts and deploy applications into these subaccounts.
Additionally, there are users who have a password and can log in to the cockpit and manage all subaccounts
they have permission for.
● A single subaccount can be managed by multiple users, for example, your company may have several
administrators.
● A single user can manage multiple subaccounts, for example, if you have multiple applications and want
them (for isolation reasons) to be split over multiple subaccounts.
Features
Does the Cloud Connector work with the SAP Cloud Platform Cloud Foundry environment?
As of version 2.10, the Cloud Connector is able to establish a connection to regions with the SAP Cloud
Platform Cloud Foundry environment.
Does the Cloud Connector work with the SAP S/4HANA Cloud?
Starting with version 2.10, the Cloud Connector offers a Service Channel to S/4HANA Cloud instances, given
that they are associated with the respective SAP Cloud Platform subaccount. Also, S/4HANA Cloud
communication scenarios invoking remote enabled function modules (RFMs) in on-premise ABAP systems are
supported as of version 2.10. See also Using Service Channels [page 480].
How do I bind multiple Cloud Connectors to one SAP Cloud Platform subaccount?
As of version 2.9, you can connect multiple Cloud Connectors to a single subaccount. This lets you assign
multiple separate corporate network segments.
Those Cloud Connectors are distinguishable based on the location ID, which you must provide to the
destination configuration on the cloud side.
Note
During an upgrade, location IDs provided in earlier versions of the Cloud Connector are dropped to ensure
that running scenarios are not disturbed.
You can also use the Cloud Connector as a JDBC or ODBC proxy to access the HANA DB instance of your SAP
Cloud Platform subaccount (service channel). This is sometimes called the “HANA Protocol”.
No, the audit log monitors access only from SAP Cloud Platform to on-premise systems.
Troubleshooting
How do I fix the “Could not open Service Manager” error message?
You are probably seeing this error message due to missing administrator privileges. Right-click the Cloud
Connector shortcut and select Run as administrator.
If you don’t have administrator privileges on your machine you can use the portable variant of the Cloud
Connector.
Note
The portable variants of the Cloud Connector are meant for nonproductive scenarios only.
For the portable versions, JAVA_HOME must point to the installation directory of your JRE, while PATH must
contain the bin folder inside the installation directory of your JRE.
The installer versions automatically detect JVMs in these locations, as well as in other places.
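For the portable version, the environment could be prepared like this (the JRE path /opt/sapjvm_8 is a placeholder for illustration; use the directory of your actual JRE):

```shell
# Hypothetical JRE location; replace with your actual JRE directory.
JAVA_HOME="/opt/sapjvm_8"
export JAVA_HOME

# Put the JRE's bin folder at the front of PATH so its java binary is
# found first, as required by the portable Cloud Connector variants.
PATH="$JAVA_HOME/bin:$PATH"
export PATH
```

Set both variables in the shell (or profile script) from which you start the portable Cloud Connector, so it picks up the intended JRE.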
When I try to open the Cloud Connector UI, Google Chrome opens a Save as dialog, Firefox
displays some cryptic signs, and Internet Explorer shows a blank page, how do I fix this?
This happens when you try to access the Cloud Connector over HTTP instead of HTTPS. HTTP is the default
protocol for most browsers.
An alternative to the SSL VPN solution provided by the Cloud Connector is to expose on-premise services and
applications to the Internet via a reverse proxy. This method typically uses a reverse proxy setup in a
customer's "demilitarized zone" (DMZ) subnetwork. The reverse proxy setup does the following:
● Acts as a mediator between SAP Cloud Platform and the on-premise services
● Provides the services of an Application Delivery Controller (ADC) to, for example, encrypt, filter, route, or
check inbound traffic
The figure below shows the minimal overall network topology of this approach.
On-premise services that are accessible via a reverse proxy are callable from SAP Cloud Platform like other
HTTP services available on the Internet. When you use destinations to call those services, make sure the
configuration of the ProxyType parameter is set to Internet.
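For illustration, a destination pointing to such a reverse-proxy-exposed service might look like the following fragment (the name, URL, and authentication value are invented; only the ProxyType=Internet setting is the point here):

```
Name=OnPremiseViaReverseProxy
Type=HTTP
URL=https://reverse-proxy.example.com/backend-service
ProxyType=Internet
Authentication=NoAuthentication
```

Because the service is reached over the public Internet, ProxyType is Internet rather than OnPremise, which would route the call through the Cloud Connector tunnel instead.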
Advantages
Depending on your scenario, you may benefit from the reverse proxy:
● Network infrastructure (such as a reverse proxy and ADC services): since it already exists in your network
landscape, you can reuse it to connect to SAP Cloud Platform. There's no need to set up and operate new
components on your (customer) side.
● A reverse proxy is independent of the cloud solution you are using.
● It acts as single entry point to your corporate network.
Disadvantages
● The reverse proxy approach leaves exposed services generally accessible via the Internet. This makes
them vulnerable to attacks from anywhere in the world. In particular, denial-of-service attacks are possible
and difficult to protect against. To prevent attacks of this type and others, you must implement the highest
security standards in the DMZ and the reverse proxy for the productive deployment of a hybrid cloud/on-
premise scenario.
Note
Using the Cloud Connector mitigates all of these issues. As it establishes the SSL VPN tunnel to SAP Cloud
Platform using a reverse invoke approach, there is no need to configure the DMZ or external firewall of a
customer network for inbound traffic. Attacks from the Internet are not possible. With its simple setup and
fine-grained access control of exposed systems and resources, the Cloud Connector allows a high level of
security and fast productive implementation of hybrid applications. It also supports multiple application
protocols, such as HTTP and RFC.
New
The destination service (Beta) is available in the Cloud Foundry environment. See Consuming the Destination Service
[page 159].
Enhancement
Cloud Connector
● The URLs of HTTP requests can now be longer than 4096 bytes.
● SAP Solution Manager can be integrated with one click of a button if the host agent is installed on a Cloud
Connector machine. See the Solution Management section in Monitoring [page 510].
● The limitation that only 100 subaccounts could be managed with the administration UI has been removed.
See Managing Subaccounts [page 392].
Fix
Cloud Connector
● A regression introduced in 2.10.0 has been fixed: principal propagation works for RFC again.
● The cloud user store works with group names that contain a backslash (\) or a slash (/).
● Proxy challenges for NT LAN Manager (NTLM) authentication are ignored in favor of Basic authentication.
● The back-end connection monitor works when using JVM 7 as the runtime of the Cloud Connector.
Enhancement
Cloud Connector
Fix
Cloud Connector
● A bottleneck has been removed that could lengthen the processing times of requests to exposed back-end systems after many hours under high load when using principal propagation, connection pooling, and many concurrent sessions.
● Session management no longer terminates active sessions early in principal propagation scenarios.
● On Windows 10, hardware metering in virtualized environments shows hard disk and CPU data.
New
If the remote server supports only TLS 1.2, use this property to ensure that your scenario works. As TLS 1.2 is
more secure than TLS 1.1, the default version used by HTTP destinations, consider switching to TLS 1.2.
Enhancement
The release of SAP Cloud Platform Cloud Connector 2.9.1 includes the following improvements:
● UI renovations based on collected customer feedback. The changes include visual rounding-offs, fixes of wrong or odd behaviors, and adjustments of controls. For example, in some places tables were replaced by sap.ui.table.Table for a better experience with many entries.
● You can trigger the creation of a thread dump from the Log and Trace Files view.
● The connection monitor graphic for idle connections was made easier to understand.
Fix
● When configuring authentication for LDAP, the alternate host settings are no longer ignored.
● The email configuration for alerts now correctly processes the user and password for access to the email server.
● Some servers used to fail to process HTTP requests when using the HTTP proxy approach (HTTP Proxy for On-Premise Connectivity [page 265]) on the SAP Cloud Platform side.
● A bottleneck was removed that could lengthen the processing times of requests to exposed back-end systems under high load when using principal propagation.
● The Cloud Connector accepts passwords that contain the '§' character when using authentication-mode password.
Enhancement
Update of JCo runtime for SAP Cloud Platform. See Connectivity [page 16].
Fix
SAP Cloud Platform Document Service helps you manage your documents. It is based on the OASIS industry-
standard CMIS and offers versioning, hierarchies, access control, and document management capabilities.
Environment
Features
● Store and retrieve unstructured content: Achieve more with the persistent content storage that provides a standardized interface for content based on the OASIS industry standard CMIS.
● Use Client API for Java applications: Use the client API on top of the protocol for easier consumption of your stored data. This is the OpenCMIS API provided by Apache Chemistry.
● Achieve more: Structure your content in a meaningful way using folder hierarchies. Version your content, manage check-in and checkout of documents for collaboration, and track the history. In addition, retrieve the metadata attached to content using an SQL-like query language.
General
The document service is an implementation of the CMIS standard and is the primary interface to a reliable and
safe store for content on SAP Cloud Platform.
● A domain model and service bindings that can be used by applications to work with a content management
repository
● An abstraction layer for controlling diverse document management systems and repositories using Web
protocols
CMIS provides a common data model covering typed files and folders with generic properties that can be set or
read. There is a set of services for adding and retrieving documents (called objects). CMIS defines an access
control system, a checkout and version control facility, and the ability to define generic relations. It also defines
protocol bindings that use WSDL with Simple Object Access Protocol (SOAP) or Representational State
Transfer (REST).
The consumption of CMIS-enabled document repositories is easy using the Apache Chemistry libraries.
Apache Chemistry provides libraries for several platforms to consume CMIS using Java, PHP, .Net, or Python.
The subproject OpenCMIS, which includes the CMIS Java implementation, also includes tools around CMIS,
like the CMIS Workbench, which is a desktop client for CMIS repositories for developers.
Since the SAP Cloud Platform Document service API includes the OpenCMIS Java library, applications can be
built on SAP Cloud Platform that are independent of a specific content repository.
Restrictions
The following features, which are defined in the OASIS CMIS standard, are supported with restrictions:
● Multifiling
● Policies
● Relationships
● Change logs
● For searchable properties, a maximum of 100 values with a maximum of 5,000 characters is allowed.
● For non-searchable properties, a maximum of 1,000 values with a maximum of 50,000 characters is
allowed.
● The maximum allowed length of one property is 4,500 characters.
If you expect to reach one of these limitations, we recommend that you open a support ticket on component
BC-NEO-ECM-DS and describe your scenario.
Overview
Applications access the document service using the OASIS standard protocol Content Management
Interoperability Services (CMIS). Java applications running on SAP Cloud Platform can easily consume the
document service using the provided client library. Since the document service is exposed using a standard
protocol, it can also be consumed by any other technology that supports the CMIS protocol.
Related Information
Use the SAP Cloud Platform Document service to store unstructured or semi-structured data in the context of
your SAP Cloud Platform application.
Introduction
Many applications need to store and retrieve unstructured content. Traditionally, a file system is used for this
purpose. In a cloud environment, however, the usage of file systems is restricted. File systems are tied to
individual virtual machines, but a Web application often runs distributed across several instances in a cluster.
File systems also have limited capacity.
The document service offers persistent storage for content and provides additional functionality. It also
provides a standardized interface for content using the OASIS CMIS standard.
The following sections describe the basic concepts of the SAP Cloud Platform Document service.
In the code and the code samples, ecm is used to refer to the document service. For example, the
document service API is called ecm.api.
The SAP Cloud Platform Document service is exposed using the OASIS standard protocol Content
Management Interoperability Service (CMIS).
The CMIS standard defines the protocol level (SOAP, AtomPub, and JSON based protocols). The SAP Cloud
Platform provides a document service client API on top of this protocol for easier consumption. This API is the
Open Source library OpenCMIS provided by the Apache Chemistry Project.
Related Information
To manage documents in the SAP Cloud Platform Document service, you need to connect an application to a
repository of the document service.
A repository is the document store for your application. It has a unique name with which it can later be
accessed, and it is secured using a key provided by the application. Only applications that provide this key are
allowed to connect to this repository.
Note
As a repository has a certain storage footprint in the back end, the total number of repositories for each
subaccount is limited to 100. When you create repositories, for example for testing, make sure that these
repositories are deleted after the test is finished to avoid reaching the limit. Should your use case require
more than 100 repositories per subaccount, please create a support ticket.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center, or vice versa.
You can manage a repository programmatically from your application. In this way, you can create, edit, delete,
and connect to the repository.
Related Information
You can create a repository with the createRepository(repositoryOptions) method of the EcmService
(document service).
Procedure
Use the createRepository(repositoryOptions) method and define the properties of the repository.
The following code snippet shows how to create a repository where uploaded files are scanned for viruses:
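A minimal sketch of such a call, reconstructed from the surrounding description: createRepository(repositoryOptions), EcmService, RepositoryOptions, and Visibility all appear elsewhere in this guide, but the individual setter names (setUniqueName, setRepositoryKey, setVisibility, setVirusScannerEnabled) are assumptions and should be verified against the EcmService API documentation.

```java
import javax.naming.InitialContext;
import com.sap.ecm.api.EcmService;
import com.sap.ecm.api.RepositoryOptions;
import com.sap.ecm.api.RepositoryOptions.Visibility;

// Look up the document service via JNDI (resource reference in web.xml)
InitialContext ctx = new InitialContext();
EcmService ecmSvc = (EcmService) ctx.lookup("java:comp/env/EcmService");

// Assumed setter names -- verify against the EcmService API documentation
RepositoryOptions options = new RepositoryOptions();
options.setUniqueName("com.foo.MyRepository");
options.setRepositoryKey("my_super_secret_key_123"); // min. 10 characters
options.setVisibility(Visibility.PROTECTED);
options.setVirusScannerEnabled(true); // scan uploaded files for viruses
ecmSvc.createRepository(options);
```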
Related Information
Context
There are many ways to connect to a repository. For more information, see the API Documentation [page 1735]
and Reuse OpenCmis Session Objects in Performance Tips (Java) [page 611].
Procedure
Probably the most common use case is to create documents and folders in a repository. Every repository in
CMIS has a root folder. Once you have received a Session, you can retrieve the root folder using the following
syntax:
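For example, using the standard OpenCMIS API (openCmisSession is the session obtained from the connect(...) call described in this guide):

```java
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;

Session openCmisSession = ecmSvc.connect(uniqueName, secretKey);
Folder rootFolder = openCmisSession.getRootFolder();
```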
Once you have a root folder, you can create other folders or documents. In the CMIS domain model, all CMIS
objects are typed. Therefore, you have to provide type information for each object you create. The types carry
the metadata for an object. The metadata is passed in a property map. Some properties are mandatory, others
are optional. You have to provide at least an object type and a name. For properties defined in the standard,
OpenCMIS has predefined constants in the PropertyIds class.
To create a document with content, provide a map of properties. In addition, create a ContentStream object
carrying a Java InputStream plus some additional information for the content, like Content-Type and file
name.
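A sketch of these steps with the OpenCMIS API, assuming openCmisSession and rootFolder from the previous steps; the file name and content are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;

// Mandatory properties: object type and name
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
properties.put(PropertyIds.NAME, "HelloWorld.txt");

// ContentStream: file name, length, Content-Type, and the actual bytes
byte[] content = "Hello World!".getBytes(StandardCharsets.UTF_8);
InputStream stream = new ByteArrayInputStream(content);
ContentStream contentStream = openCmisSession.getObjectFactory()
        .createContentStream("HelloWorld.txt", content.length, "text/plain", stream);

Document myDocument = rootFolder.createDocument(properties, contentStream, VersioningState.NONE);
```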
String id = myDocument.getId();
To get the children of a folder, you can use the following code:
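For instance, using the OpenCMIS ItemIterable and CmisObject types (assuming rootFolder from the earlier steps):

```java
import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.ItemIterable;

ItemIterable<CmisObject> children = rootFolder.getChildren();
for (CmisObject child : children) {
    System.out.println(child.getName() + " (" + child.getId() + ")");
}
```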
Retrieving a Document
You can also retrieve a document using its path with the getObjectByPath() method.
Tip
We recommend that you retrieve objects by ID and not by path. IDs are kept stable even if the object is
moved. Retrieving objects by IDs is also faster than retrieving objects by paths.
Before your application can use the document service, the application must be able to access and consume the
service.
There are several ways in which your application can access the document service:
● Any application deployed on SAP Cloud Platform as a Java Web application can consume the document
service.
● During the development phase, you can also use the document service in the SAP Cloud Platform local
runtime.
Related Information
http://chemistry.apache.org/
User Management
The service treats user names as opaque strings that are defined by the application. All actions in the
document service are executed in the context of this named user or the currently logged-on user. That is, the
service sets the cmis:createdBy and cmis:lastModifiedBy properties to the provided user name. The
service also uses this user name to evaluate access control lists (ACLs). For more information, see the CMIS
specification. The document service is not connected to a user management system and, therefore, does not
perform any user authentication.
Repositories are identified either by their unique name or by their ID. The unique name is a human-readable
name that should be constructed with Java package-name semantics, for example,
com.foo.MySpecialRepository, to avoid naming conflicts. Repositories in the document service are
secured by a key provided by the application. When a repository is created, a key must be supplied. Any further
attempts to connect to this repository only succeed if the key provided by the connecting application matches
the key that was used to create the repository. Therefore, this key must be stored in a secure manner, for
example, using the Java KeyStore. It is, however, up to the application to decide whether to share this key with
other applications from the same subaccount to implement data-sharing scenarios.
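The recommendation to store the repository key in a Java KeyStore can be sketched as follows. This is a self-contained example using only the JDK; the alias, store password, and in-memory serialization are illustrative assumptions (a real application would persist the keystore file securely).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class RepositoryKeyStore {

    // Stores the repository key under the given alias and returns the serialized keystore.
    public static byte[] store(String repositoryKey, String alias, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null); // initialize an empty keystore
        // Wrap the key string as a SecretKey entry (24 bytes here, a valid AES key length)
        SecretKey entry = new SecretKeySpec(repositoryKey.getBytes(StandardCharsets.UTF_8), "AES");
        ks.setEntry(alias, new KeyStore.SecretKeyEntry(entry),
                new KeyStore.PasswordProtection(storePassword));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, storePassword);
        return out.toByteArray();
    }

    // Loads the keystore and recovers the repository key.
    public static String load(byte[] serialized, String alias, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(new ByteArrayInputStream(serialized), storePassword);
        KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry) ks.getEntry(alias,
                new KeyStore.PasswordProtection(storePassword));
        return new String(entry.getSecretKey().getEncoded(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        char[] pw = "changeit".toCharArray();
        byte[] ksBytes = store("my_super_secret_key_1234", "repo-key", pw);
        System.out.println(load(ksBytes, "repo-key", pw));
    }
}
```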
Multiple applications can access the same repository. However, applications can only connect to the same
repository using the unique name assigned to this repository if they are deployed within the same subaccount
as the application that created the repository. In contrast, applications that are deployed in a different
subaccount cannot access this repository. A consequence of having repositories isolated within a subaccount
is that data cannot be shared across different subaccounts.
Repository ABC is created when Application1 is deployed in Subaccount1. Application2 is located in the same
Subaccount1 as Application1; therefore, Application2 can also access the same repository using its unique
name ABC. Application3 is deployed in Subaccount2. Application3 calls a repository that has the same unique
name ABC as the other repository that belongs to Subaccount1. However, Application3 cannot access the ABC
repository that belongs to Subaccount1 using the identical unique name, because the repositories are isolated
within the subaccount. Therefore, Application3 in Subaccount2 connects to another ABC repository that
belongs to Subaccount2. In summary, a repository can only be accessed by applications that are deployed in
the same subaccount as the application that created the repository.
Multitenancy
The document service supports multitenancy and isolates data between tenants. Each application consuming
the document service creates a repository and provides a unique name and a secret key. The document service
creates the repository internally in the context of the tenant using the application. While the repository name
uniquely identifies the repository, an internal ID is created for the application for each tenant. This ID identifies
the storage area containing all the data for the tenant in this repository. An application that uses the document
service in this way has multitenancy support. No additional logic is required at the application level.
Tip
One document service session is always bound to one tenant and to one user. If you create the session only
once, store it statically, and reuse it for all subsequent requests, you always end up in the tenant in which you
first created the session. That is, you do not use multitenancy.
We recommend that you create one document service session per tenant and cache these sessions for
future reuse. Make sure that you do not mix up the tenants on your side.
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that
tenant. A session is always bound to a particular server of the document service, so a single session does not
scale. If you use a session pool, the different sessions are bound to different document service servers, which
gives you much better performance and scaling.
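The per-tenant session caching recommended above can be sketched with a ConcurrentHashMap. The type parameter S stands in for the OpenCMIS Session type, and the factory function is a placeholder for the actual connect call; both are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TenantSessionCache<S> {

    private final Map<String, S> sessionsByTenant = new ConcurrentHashMap<>();
    private final Function<String, S> sessionFactory; // e.g. tenantId -> ecmSvc.connect(name, key)

    public TenantSessionCache(Function<String, S> sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Returns the cached session for the tenant, creating it on first access.
    public S sessionFor(String tenantId) {
        return sessionsByTenant.computeIfAbsent(tenantId, sessionFactory);
    }
}
```

With this pattern, each tenant keeps its own session, so requests never mix tenants while still reusing connections.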
Related Information
If the data of a repository is changed, for example, by creating, modifying, or deleting documents or folders, an
ECM session fetched from the class EcmFactory is used. Only subsequent read operations of the same session
are guaranteed to see these changes immediately.
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 1402].
● You have created a HelloWorld Web application as described in Creating a Hello World Application [page
1416].
● You have downloaded the SDK used for local development.
● You have installed MongoDB as described in Setup Local Development [page 579].
Context
This tutorial describes how you extend the HelloWorld Web application so that it uses the SAP Cloud Platform
Document service for managing unstructured content in your application. You test and run the Web application
on your local server and the SAP Cloud Platform.
Note
For historic reasons, ecm is used to refer to the document service in the code and the code samples.
Procedure
package hello;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;
import org.apache.chemistry.opencmis.commons.exceptions.CmisNameConstraintViolationException;
import org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException;
import com.sap.ecm.api.RepositoryOptions;
import com.sap.ecm.api.RepositoryOptions.Visibility;
import com.sap.ecm.api.EcmService;
import javax.naming.InitialContext;
/**
* Servlet implementation class HelloWorldServlet
*/
public class HelloWorldServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
/**
* @see HttpServlet#HttpServlet()
*/
public HelloWorldServlet() {
super();
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws
ServletException, IOException {
response.getWriter().println("<html><body>");
try {
// Use a unique name with package semantics e.g. com.foo.MyRepository
String uniqueName = "com.foo.MyRepository";
// Use a secret key only known to your application (min. 10 chars)
String secretKey = "my_super_secret_key_123";
Session openCmisSession = null;
InitialContext ctx = new InitialContext();
String lookupName = "java:comp/env/" + "EcmService";
EcmService ecmSvc = (EcmService) ctx.lookup(lookupName);
try {
// connect to my repository
openCmisSession = ecmSvc.connect(uniqueName, secretKey);
}
catch (CmisObjectNotFoundException e) {
For more information about using the OpenCMIS API, see the Apache Chemistry documentation.
During execution, this servlet executes the following steps:
1. It connects to a repository. If the repository does not yet exist, the servlet creates the repository.
2. It creates a subfolder.
3. It creates a document.
4. It displays the children of the root folder.
4. Add the resource reference description to the web.xml file.
Note
The document service is consumed by defining a resource in your web.xml file and by using JNDI
lookup to retrieve an instance of the com.sap.ecm.api.EcmService class. Once you have
established a connection to the document service, you can use one of the connect(…) methods to get
a CMIS session (org.apache.chemistry.opencmis.client.api.Session). A few examples of
how to use the OpenCMIS Client API from the Apache Chemistry project are described below. For more
information, see the Apache Chemistry page.
<resource-ref>
<res-ref-name>EcmService</res-ref-name>
<res-type>com.sap.ecm.api.EcmService</res-type>
</resource-ref>
5. Test the Web application locally or in the SAP Cloud Platform. For testing, proceed as described in Deploy
Locally from Eclipse IDE [page 1468] or Deploy on the Cloud from Eclipse IDE [page 1469] linked below.
Related Information
To use the document service in a Web application, download the SDK and install the MongoDB database.
Context
Caution
The local document service emulation is deprecated as of 5 March 2018. Support will be discontinued after
5 July 2018. This does not affect the availability of the document service running on SAP Cloud Platform,
but only its local emulation that is part of the SDK.
We recommend that you either deploy applications consuming the document service to SAP Cloud Platform,
or consume a cloud-located repository locally as described in Access from External Applications [page
580]. That section explains how to access a document service repository that is located on SAP Cloud
Platform from local applications.
Procedure
If your setup is correct, you see a text message starting with "You are trying to access MongoDB
on the native driver port. …"
Related Information
Overview
The services on SAP Cloud Platform can be consumed by applications that are deployed on SAP Cloud
Platform, but not by external applications. There are cases, however, where applications need to access
content in the cloud but cannot be deployed in the cloud.
The figure below illustrates a mechanism that supports this scenario:
This can be addressed by deploying an application on SAP Cloud Platform that accepts incoming requests
from the Internet and forwards them to the document service. We refer to this type of application as a proxy
bridge. The proxy bridge is deployed on SAP Cloud Platform and runs in a subaccount using the common SAP
Cloud Platform patterns. The proxy bridge is responsible for user authentication. The resources consumed in
the document service are billed to the SAP Cloud Platform subaccount that deployed this application.
Related Information
Context
All the standard mechanisms of the document service apply. The SAP Cloud Platform SDK provides a base
class (a Java servlet) that provides the proxy functionality out-of-the-box. This can easily be extended to
customize its behavior. The proxy bridge performs a 1:1 mapping from source CMIS calls to target CMIS calls.
CMIS bindings can be enabled or disabled. Further modifications of the incoming requests, such as allowing
only certain operations or modifying parameters, are not supported. The Apache OpenCMIS project contains a
bridge module that supports advanced scenarios of this type.
To experience the best performance and to benefit from the consistency model described in Consistency
Model (Java) [page 574], ensure that cookies are enabled for client applications that connect to the proxy
bridge. This is the default setting for HTML5 apps. Only if cookies are enabled will your subsequent requests
be dispatched to the same server.
The proxy bridge allows you to use standard CMIS clients to connect to the document service of SAP Cloud
Platform. An example is the Apache Chemistry Workbench, which can be useful for development and testing.
Caution
Note that the proxy bridge opens your repository to the public Internet and should always be secured
appropriately.
Note
For historic reasons, ecm is used to refer to the document service in the code and the code samples.
Procedure
1. Create an SAP Cloud Platform application as described in Using Java EE Web Profile Runtimes.
2. Create a web.xml file and a servlet class.
3. Derive your servlet from the class com.sap.ecm.api.AbstractCmisProxyServlet.
4. Add a servlet mapping to your web.xml file using a URL pattern that contains a wildcard. See the following
example.
Example
<servlet>
<servlet-name>cmisproxy</servlet-name>
<servlet-class>my.app.CMISProxyServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>cmisproxy</servlet-name>
<url-pattern>/cmis/*</url-pattern>
</servlet-mapping>
You can use prefixes other than /cmis and you can add more servlets in accordance with your needs. The
URL pattern for your servlet derived from the class AbstractCmisProxyServlet must contain a /*
suffix.
5. Override the two abstract methods provided by the AbstractCmisProxyServlet class:
getRepositoryUniqueName() and getRepositoryKey().
These methods return a string containing the unique name and the secret key of the repository to be
accessed. You can override a third method getDestinationName(), which also returns a string. If this
method is overridden, it should return the name of a destination deployed for this application to connect to
the service. This is useful if a service user is used, for example. Ensure that there is a valid custom
destination.
6. If you override the getServletConfig() method, ensure that you call the superclass implementation in your method.
7. Optionally, override the following methods to enable or disable individual CMIS bindings:
○ supportAtomPubBinding()
○ supportBrowserBinding()
8. To restrict access to the proxy bridge, add a security constraint to your web.xml. For example:
<security-constraint>
<web-resource-collection>
<web-resource-name>Proxy</web-resource-name>
<url-pattern>/cmis/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>EcmDeveloper</role-name>
</auth-constraint>
</security-constraint>
In some cases it might be useful to grant public access for reading content but not for modifying, creating
or deleting it. For example, a Web content management application might embed pictures into a public
Web site but store them in the document service. For a scenario of this type, override the method
readOnlyMode() so that it returns true. This means that only read requests are forwarded to the
repository and all other requests are rejected. The read-only mode only works with the JSON binding. The
other bindings are disabled in this case.
Note
If you need finer control or dynamic permissions you can override the requireAuthentication()
and authenticate() methods in the AbstractCmisProxyServlet.
9. Optionally, you can override two more methods to customize timeout values for reading and connecting:
getConnectTimeout() and getReadTimeout().
It should only be necessary to use these methods if frequent timeout errors occur.
package my.app;
import com.sap.ecm.api.AbstractCmisProxyServlet;
public class CMISProxyServlet extends AbstractCmisProxyServlet {
@Override
protected String getRepositoryUniqueName() {
return "MySampleRepository";
}
@Override
// For applications in production, use a secure location to store the secret key.
protected String getRepositoryKey() {
return "abcdef0123456789";
}
}
10. To access the proxy bridge from an external application you need the correct URL.
Example
Your proxy bridge application is deployed as cmisproxy.war. The cockpit shows the following URL for
your app: https://cmisproxysap.hana.ondemand.com/cmisproxy, and the web.xml is as
shown above. Then the URLs are as follows:
○ CMIS 1.1:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/1.1/atom
Browser: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/json
○ CMIS 1.0:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/atom
Browser: (not available)
These URLs can be passed to the CMIS Workbench from Apache Chemistry, for example.
The workbench requires basic authentication. Please add the following code to your web.xml:
Sample Code
<login-config>
<auth-method>BASIC</auth-method>
</login-config>
Example
A full example that can be deployed consists of two files: a web.xml and a servlet class. This example
only exposes the CMIS browser binding (JSON) using the prefix /cmis in the URL.
Sample Code
web.xml
Sample Code
Servlet
package my.app;
import com.sap.ecm.api.AbstractCmisProxyServlet;
public class CMISProxyServlet extends AbstractCmisProxyServlet {
private static final long serialVersionUID = 1L;
@Override
protected boolean supportAtomPubBinding() {
return false;
}
@Override
protected boolean supportBrowserBinding() {
return true;
}
public CMISProxyServlet() {
super();
}
@Override
protected String getRepositoryUniqueName() {
return "MySampleRepository";
}
@Override
// For applications in production, use a secure location to store the secret key.
protected String getRepositoryKey() {
return "abcdef0123456789";
}
}
Procedure
Field Value
Type HTTP
Name documentservice
CloudConnectorVersion 2
ProxyType Internet
URL https://cmisproxy<subaccount_ID>.hana.ondemand.com/cmisproxy/cmis/json
5. Create an HTML5 application accessing the document service and open it in the Web IDE. Then create an
index.html file with the following contents:
Example
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Use CMIS from HTML5 Application</title>
<script type="text/javascript">
function setFilename() {
var thefile = document.getElementById('filename').value.split('\\').pop();
document.getElementById("cmisname").value = thefile;
}
function getChildren() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
var children = JSON.parse(this.responseText);
var str = "<ul>";
var repoUrl = "/cmis/<repo-id>/root/";
for (var i = 0; i <children.objects.length; i++) {
a. Open the URL of the proxy bridge from the previous step in a browser, copy the repository ID, for
example, 8d1c2718db5a2fc0d7242585, from the response.
Example: https://cmisproxyd058463sapdev.int.sap.hana.ondemand.com/cmisproxy/
cmis/json
Example
{
"8d1c2718db5a2fc0d7242585": {
"repositoryId": "8d1c2718db5a2fc0d7242585",
"repositoryName": "Sample Repository",
"repositoryDescription": "Sample repository for external access",
"vendorName": "SAP AG",
"productName": "SAP Cloud Platform, document service",
"productVersion": "1.0",
"rootFolderId": "8d1c2718db5a2fc0d7242585",
"capabilities": {
…
b. In your index.html, replace all occurrences of <repo-id> with the extracted repository ID and all
occurrences of <your-proxy-url> with the URL of the proxy bridge application.
c. Create a neo-app.json file in the root of your project directory with the following contents:
Example
{
"welcomeFile": "/index.html",
"routes": [
{
"path": "/cmis",
"target": {
"type": "destination",
"name": "documentservice"
},
"description": "CMIS Connection Document Service"
}
],
"sendWelcomeFileRedirect": true
}
This routes all URLs starting with /cmis to the destination named
“documentservice”.
d. Commit your files in Git, create a new version, and activate the version.
For more information, see Create a Version [page 1715] and Activate a Version [page 1716].
The following sections describe the advanced concepts of the SAP Cloud Platform Document service.
One benefit of Content Management Interoperability Services (CMIS) as compared to a file system is the
extended handling of metadata.
You can use metadata to structure content and make it easier to find documents in a repository, even if it
contains millions of documents. In the CMIS domain model, metadata is structured using types. A type
contains the set of allowed or required properties, for example, an Invoice type that has the InvoiceNo and
CustomerNo properties.
A type is described in a type definition and contains a list of property definitions. CMIS has a set of predefined
types and predefined properties. Custom-specific types and additional custom properties can extend the
predefined types. When a type is created, it is derived from a parent type and extends the set of the parent
properties. In this way, a hierarchy of types is built. The base types do not have parents. Base types are defined
in the CMIS specification. The most important base types are cmis:document and cmis:folder.
Predefined properties contain metadata that is usually available in the existing repositories. These are, for
example, cmis:name, cmis:createdBy, cmis:modifiedBy, cmis:createdAt, and cmis:modifiedAt.
They contain the name of the author, the creation date, and the date of the last modification. Some properties
are type-specific, for example, a folder has a parent folder and a document has a property for content length.
Each property has a data format (String, Integer, Date, Decimal, ID, and so on) and can define additional
constraints, such as a maximum length, a value range, or a list of allowed values (choices).
Each object stored in a CMIS repository has a type and a set of properties. Types and properties provide the
mechanism used to find objects with CMIS queries.
Related Information
http://chemistry.apache.org/
http://chemistry.apache.org/java/developing/guide.html
http://chemistry.apache.org/java/0.9.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
The document store on SAP Cloud Platform supports the cmis:document and cmis:folder types. It also
has a built-in subtype for versioned documents. The types can be investigated using the Apache CMIS
workbench.
In addition to the standard CMIS properties, the document service of SAP Cloud Platform supports additional SAP properties, for example, sap:tags and sap:owner.
Related Information
http://chemistry.apache.org/java/download.html
http://docs.oasis-open.org/cmis/CMIS/v1.1
Context
The CMIS client API uses a map to pass properties. The key of the map is the property ID and the value is the
actual value to be passed. The cmis:name and cmis:objectTypeId properties are mandatory.
Procedure
1. Use a name that is unique within the folder and a type ID that is a valid type from the repository.
2. Run the sample code.
// properties
Map<String, Object> properties = new HashMap<String, Object>();
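The fragment above can be completed along the following lines. This is a minimal sketch: the file name HelloWorld.txt is illustrative, and the actual OpenCMIS createDocument call is shown as a comment because it requires an open Session and a target Folder object from the surrounding tutorial.

```java
import java.util.HashMap;
import java.util.Map;

public class CreateDocumentSketch {

    public static Map<String, Object> buildProperties() {
        // Both properties are mandatory. "cmis:objectTypeId" must name a valid
        // type in the repository; OpenCMIS also offers the PropertyIds constants
        // (PropertyIds.NAME, PropertyIds.OBJECT_TYPE_ID) for these keys.
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("cmis:name", "HelloWorld.txt");        // unique within the folder
        properties.put("cmis:objectTypeId", "cmis:document"); // a valid repository type
        return properties;
    }

    public static void main(String[] args) {
        Map<String, Object> properties = buildProperties();
        // With an open OpenCMIS Session, the document would then be created via:
        // folder.createDocument(properties, contentStream, VersioningState.NONE);
        System.out.println(properties.get("cmis:name"));
    }
}
```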
Results
You can inspect the document in the CMIS workbench. You can see that various other properties have been set
by the system, such as the ID, the creation date, and the creating user.
Context
This procedure focuses on the use of the sap:tags property to mark the document. This is a multi-value
attribute, so you can assign more than one tag to it.
Procedure
1. To assign the Hello and Tutorial tags to the document, use the following code:
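A sketch of this step is shown below. Because sap:tags is multivalued, the value passed is a list of strings; the updateProperties call is shown as a comment because the Document object comes from the surrounding session.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AssignTagsSketch {

    public static Map<String, Object> buildUpdate() {
        // sap:tags is a multi-value property, so the value is a list of strings.
        Map<String, Object> updateProperties = new HashMap<String, Object>();
        List<String> tags = Arrays.asList("Hello", "Tutorial");
        updateProperties.put("sap:tags", tags);
        return updateProperties;
    }

    public static void main(String[] args) {
        Map<String, Object> updateProperties = buildUpdate();
        // With a Document object fetched through the session, apply the change via:
        // document.updateProperties(updateProperties);
        System.out.println(((List<?>) updateProperties.get("sap:tags")).size());
    }
}
```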
This section gives a very brief introduction to querying. The OpenCMIS Client API is a Java client-side library
with many capabilities, for example, paging results. For more information, consult the OpenCMIS Javadoc and
the examples on the Apache Chemistry Web site.
Context
The following procedure focuses on a use case where you have created a second folder and some more
documents. The repository then looks like this:
The Hello Document and Hi Document documents have the tags Hello and Tutorial; the Loren Ipsum document has no tags.
Procedure
1. Use the CMIS query to search documents in the system based on their properties.
Note
In this case, the workbench displays only the first value of multivalued properties.
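A query matching the tagged documents can be sketched as follows. The CMIS query language uses the ANY predicate to match a value of a multivalued property such as sap:tags; the session.query call is shown as a comment because it requires the open session from the tutorial.

```java
public class TagQuerySketch {

    // Builds a CMIS SQL statement that finds all documents carrying the tag.
    public static String tagQuery(String tag) {
        return "SELECT * FROM cmis:document WHERE '" + tag + "' = ANY sap:tags";
    }

    public static void main(String[] args) {
        String statement = tagQuery("Tutorial");
        // ItemIterable<QueryResult> results = session.query(statement, false);
        System.out.println(statement);
    }
}
```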
Related Information
http://chemistry.apache.org/java/0.13.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
For the SAP Cloud Platform Document service, you can create new object types and remove them again in accordance with the CMIS standard.
Context
In CMIS, every object, for example a document or a folder, has an object type. The object type defines the basic
settings of an object of that type. For example, the cmis:document object type defines that objects of that
type are searchable.
Furthermore, the object type defines the properties that can be set for an object of that type, for example, an
object of type cmis:document has a mandatory cmis:name property that must be a string. Therefore, every
object of type cmis:document needs a name. Otherwise, the object is not valid and the repository rejects it.
In CMIS, types are organized hierarchically. The most important (predefined) base types are:
CMIS allows you to define additional types provided that each type is a descendant of one of the predefined
base types. In this type hierarchy, a type inherits all property definitions of its parent type. CMIS 1.1 allows type
hierarchy modifications (see the OASIS page) by providing methods for the creation, the modification, and the
removal of object types. Currently, the document service only supports the creation and removal of types. This
allows a developer to define new types as subtypes of existing types. The new types might possess other
properties in addition to all of the automatically inherited property definitions of the parent type. Creating
objects of that type allows you to assign values for these new properties to the object. Remember to also set
the values for the inherited properties as appropriate.
The following example shows how to create a new document type that possesses one additional property for
storing the summary of a document. The developer must implement the MyDocumentTypeDefinition and
MyStringPropertyDefinition classes. Example implementations for these classes as well as for the
interfaces (FolderTypeDefinition, SecondaryTypeDefinition, PropertyBooleanDefinition,
PropertyDecimalDefinition, and so on) are described in the following topics.
import java.util.HashMap;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.ObjectType;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
import org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException;
import org.apache.chemistry.opencmis.commons.exceptions.CmisRuntimeException;
// specify type attributes
String idAndQueryName = "test:docWithSummary";
String description = "Doc with Summary";
String displayName = "Document with Summary";
String localName = "some local name";
String localNamespace = "some local name space";
String parentTypeId = BaseTypeId.CMIS_DOCUMENT.value();
Boolean isCreatable = true;
Boolean includedInSupertypeQuery = true;
Boolean queryable = true;
ContentStreamAllowed contentStreamAllowed = ContentStreamAllowed.ALLOWED;
Boolean versionable = false;
// specify property definitions
Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
MyStringPropertyDefinition summaryPropertyDefinitions
= createSummaryPropertyDefinitions();
propertyDefinitions.put(summaryPropertyDefinitions.getId(),
summaryPropertyDefinitions);
// build object type
MyDocumentTypeDefinition docTypeDefinition
= new MyDocumentTypeDefinition(idAndQueryName, description, displayName,
localName, localNamespace, parentTypeId, isCreatable,
includedInSupertypeQuery, queryable, contentStreamAllowed,
versionable, propertyDefinitions);
// add type to repository
ecmSession.createType(docTypeDefinition);
// create document of new type
ecmSession.clear();
Map<String, String> newDocProps = new HashMap<String, String>();
newDocProps.put(PropertyIds.OBJECT_TYPE_ID, docTypeDefinition.getId());
newDocProps.put(PropertyIds.NAME, "testDocWithNewType");
newDocProps.put("test:summary", "This is a document with a summary property");
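The listing prepares the property map for a document of the new type but stops before the create call. Assuming the same session and a target Folder object as before, the final step might look like this sketch (the folder variable is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

public class NewTypeDocumentSketch {

    public static Map<String, String> buildProps(String typeId) {
        // Property map for a document of the custom type: the inherited
        // cmis:name property must be set as well as the new test:summary one.
        Map<String, String> newDocProps = new HashMap<String, String>();
        newDocProps.put("cmis:objectTypeId", typeId);
        newDocProps.put("cmis:name", "testDocWithNewType");
        newDocProps.put("test:summary", "This is a document with a summary property");
        return newDocProps;
    }

    public static void main(String[] args) {
        Map<String, String> newDocProps = buildProps("test:docWithSummary");
        // With OpenCMIS, the document would then be created in a folder:
        // folder.createDocument(newDocProps, null, VersioningState.NONE);
        System.out.println(newDocProps.get("cmis:objectTypeId"));
    }
}
```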
● The ID and the query name must be identical and meet the following rules:
○ They must match the Java regular expression "[a-zA-Z][a-zA-Z0-9_:]*".
○ They must not start with cmis:, sap:, or s: in any combination of uppercase and lowercase
letters; for example, cMis: is also not allowed.
● If the base type of the new object type is cmis:secondary, no other type definition may already contain a
property definition with the same ID or query name.
● If the base type of the new object type is not cmis:secondary and another type definition already
contains a property definition with the same ID or query name, this property definition must be identical to
the one of the new type.
● You cannot specify default values or choices.
To delete a new object type, you can use the following code snippet: ecmSession.deleteType(typeId);
You can only delete an object type if it is no longer used by any documents or folders in the repository.
Related Information
Example
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public abstract class MyTypeDefinition implements TypeDefinition {
private String description = null;
private String displayName = null;
private String idAndQueryName = null;
private String localName = null;
private String localNamespace = null;
private String parentTypeId = null;
private Boolean isCreatable = null;
private Boolean includedInSupertypeQuery = null;
private Boolean queryable = null;
private Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
public MyTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
this.description = description;
this.displayName = displayName;
this.idAndQueryName = idAndQueryName;
this.localName = localName;
this.localNamespace = localNamespace;
this.parentTypeId = parentTypeId;
this.isCreatable = isCreatable;
this.includedInSupertypeQuery = includedInSupertypeQuery;
this.queryable = queryable;
if (propertyDefinitions != null) {
this.propertyDefinitions = propertyDefinitions;
}
}
@Override
abstract public BaseTypeId getBaseTypeId();
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getId() {
return idAndQueryName;
}
@Override
public String getLocalName() {
return localName;
}
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
public class MyTypeMutability implements TypeMutability {
@Override
public List<CmisExtensionElement> getExtensions() {
return null;
}
@Override
public void setExtensions(List<CmisExtensionElement> arg0) {
}
@Override
public Boolean canCreate() {
return true;
}
@Override
public Boolean canDelete() {
return true;
}
@Override
public Boolean canUpdate() {
return false;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.DocumentTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
public class MyDocumentTypeDefinition extends MyTypeDefinition implements
DocumentTypeDefinition {
private ContentStreamAllowed contentStreamAllowed = null;
private Boolean versionable = null;
public MyDocumentTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
ContentStreamAllowed contentStreamAllowed, Boolean versionable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
this.contentStreamAllowed = contentStreamAllowed;
this.versionable = versionable;
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_DOCUMENT;
}
@Override
public ContentStreamAllowed getContentStreamAllowed() {
return contentStreamAllowed;
}
@Override
public Boolean isVersionable() {
return versionable;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.FolderTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MyFolderTypeDefinition extends MyTypeDefinition implements
FolderTypeDefinition {
public MyFolderTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_FOLDER;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.SecondaryTypeDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MySecondaryTypeDefinition extends MyTypeDefinition implements
SecondaryTypeDefinition {
public MySecondaryTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_SECONDARY;
}
}
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.Choice;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
abstract public class MyPropertyDefinition<T> implements PropertyDefinition<T> {
private String idAndQueryName = null;
private Cardinality cardinality = null;
private String description = null;
private String displayName = null;
private String localName = null;
private String localNameSpace = null;
private Updatability updatability = null;
private Boolean orderable = null;
private Boolean queryable = null;
public MyPropertyDefinition(String idAndQueryName, Cardinality cardinality,
String description, String displayName, String localName,
String localNameSpace, Updatability updatability,
Boolean orderable, Boolean queryable) {
super();
this.idAndQueryName = idAndQueryName;
this.cardinality = cardinality;
this.description = description;
this.displayName = displayName;
this.localName = localName;
this.localNameSpace = localNameSpace;
this.updatability = updatability;
this.orderable = orderable;
this.queryable = queryable;
}
@Override
public String getId() {
return idAndQueryName;
}
@Override
public Cardinality getCardinality() {
return cardinality;
}
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getLocalName() {
return localName;
}
@Override
public String getLocalNamespace() {
return localNameSpace;
}
@Override
abstract public PropertyType getPropertyType();
@Override
public String getQueryName() {
return idAndQueryName;
}
@Override
public Updatability getUpdatability() {
return updatability;
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyBooleanDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyBooleanPropertyDefinition extends MyPropertyDefinition<Boolean>
implements PropertyBooleanDefinition {
public MyBooleanPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.BOOLEAN;
}
}
import java.util.GregorianCalendar;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDateTimeDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DateTimeResolution;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDateTimePropertyDefinition extends
MyPropertyDefinition<GregorianCalendar> implements PropertyDateTimeDefinition {
public MyDateTimePropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DATETIME;
}
@Override
public DateTimeResolution getDateTimeResolution() {
return DateTimeResolution.TIME;
}
}
import java.math.BigDecimal;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDecimalDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DecimalPrecision;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDecimalPropertyDefinition extends
MyPropertyDefinition<BigDecimal> implements
PropertyDecimalDefinition {
public MyDecimalPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DECIMAL;
}
@Override
public BigDecimal getMaxValue() {
return null;
}
@Override
public BigDecimal getMinValue() {
return null;
}
@Override
public DecimalPrecision getPrecision() {
return null;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyHtmlDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyHtmlPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyHtmlDefinition {
public MyHtmlPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.HTML;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyIdDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyIdPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyIdDefinition {
public MyIdPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.ID;
}
}
import java.math.BigInteger;
import org.apache.chemistry.opencmis.commons.definitions.PropertyIntegerDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyIntegerPropertyDefinition extends
MyPropertyDefinition<BigInteger> implements PropertyIntegerDefinition {
public MyIntegerPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.INTEGER;
}
@Override
public BigInteger getMaxValue() {
return null;
}
@Override
public BigInteger getMinValue() {
return null;
}
}
import java.math.BigInteger;
import org.apache.chemistry.opencmis.commons.definitions.PropertyStringDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyStringPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyStringDefinition {
public MyStringPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.STRING;
}
@Override
public BigInteger getMaxLength() {
return null;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyUriDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyUriPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyUriDefinition {
public MyUriPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.URI;
}
}
● cmis:read
○ Allows fetching an object (folder or document).
○ Allows reading the ACL, properties and the content of an object.
● sap:file
○ Includes all privileges of cmis:read.
○ Allows creating objects in a folder and moving objects.
● cmis:write
○ Includes all privileges of sap:file.
○ Allows modifying the properties and the content of an object.
○ Allows checking out of a versionable document.
● sap:delete
○ Includes all privileges of cmis:write.
○ Allows the deletion of an object.
○ Allows checking in and canceling check out of a private working copy.
● cmis:all
○ Includes all privileges of sap:delete.
○ Allows modifying the ACL of an object.
For a repository the initial settings for the root folder are:
● The ACL contains one ACE for the {sap:builtin}everyone principal with the cmis:all permission.
With these settings, all principals have full control over the root folder.
Initially, without specific ACL settings, all documents and folders possess an ACL with one ACE for the built-in
principal {sap:builtin}everyone with the cmis:all permission that grants all users unrestricted access.
ACLs or ACEs are not inherited but explicitly stored at the particular objects. An empty ACL means that no
principal has permission, except the owner of the object. The owner concept is described below in more detail.
Example
The example assumes that every user has full access to the folder. In the following, the access to a folder is
restricted in such a way that User1 has full access and User2 has only read access.
Methods for modifying ACLs (Access Control Lists), such as applyAcl, addAcl, and removeAcl, are available in the CMIS client library.
To modify the ACL of the current object only, set the propagation parameter to OBJECTONLY. To modify the
ACL of the current object as well as of the ACLs of all of the object's descendants, set the propagation
parameter to PROPAGATE. You can apply PROPAGATE only to folders. It works as follows: The ACEs that are
added and removed at the root folder of the operation are computed and then applyAcl is called with these
ACE sets for each descendant.
For one principal, at most one ACE is stored in an object's ACL. Assigning a more powerful permission to a principal replaces the inferior permission with the more powerful one; cmis:all, for example, is more powerful than sap:delete. If the current permission for a principal is cmis:read and the permission cmis:write is added, the result is an ACL with one ACE for the principal containing the permission cmis:write. Adding an inferior permission has no effect.
Removing a permission for a principal from an object results in no ACE entry for the principal in that ACL. This
is independent of the current settings in the ACL with respect to this principal.
In methods with parameters for adding and removing ACEs, first the specified ACEs are removed and then the
new ones are added.
Every folder and document has the sap:owner property. When an object is created the currently connected
user automatically becomes the owner of the object. The owner of an object always has full access even
without any specific ACEs granting him or her permission.
The owner property can be changed using the updateProperties method with the following restrictions:
● The new value of the owner property must be identical to the currently connected user.
● The currently connected user must have the cmis:all privilege.
● The application can use a connect method without explicitly providing a parameter containing a user. Then
the current user is forwarded to the document service. The user's right to access particular documents
and folders is determined using the user ID and the attached ACLs.
● The application can provide a user ID explicitly using a parameter of the connect method. Then this ID is
used for checking the access rights.
Note
Note that the document service is not connected to any Identity Provider or Identity Management System
and considers the provided ID as an opaque string. This is also true for the user or principal strings
provided in the ACEs when setting ACLs at objects.
The application is responsible for providing the correct user ID but it can also submit a technical user ID
that does not belong to any physical user, for example, to implement some kind of service user concept.
Besides providing a user, some connect methods have an additional parameter to provide the IDs of additional
principals to the document service.
If additional principals are provided, the user not only has his or her own permissions to access objects but also gets the access rights of these principals. If, for example, the user has no right to access a specific document but one of the additionally provided principals is allowed to read the content, then the user can also access the content in the context of this connection.
With this concept, an application can also use roles (or even groups) in the ACLs by setting ACEs indicating these roles or groups. The roles of the current user can then be evaluated during the connection calls, and the user is granted access rights according to his or her role (or group) membership.
It is very important to keep in mind that the additional principals are also opaque strings for the document service. This leaves it up to the application to decide what kind of information it sends as additional principals, including identifiers only known by the application itself. On the other hand, the application must ensure that there is no user with an ID identical to one of the additional principals used in its ACLs, because such a user might unintentionally get too many access rights.
Example
This example shows how to assign write and read permissions for two kinds of users: authors and readers. Authors should have write access to the documents and readers should only have read access. The application defines two roles, one for authors called author-role and one for readers called reader-role.
For more information about securing applications and using roles, see Securing Applications.
To set up permissions for authors and readers as described in our example, set the appropriate ACEs at the
documents. The following code snippet shows how to set these permissions for a single document:
import com.sap.security.um.service.UserManagementAccessor;
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
…
String authorRole = "author-role";
String readerRole = "reader-role";
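The snippet above declares the role names but is cut off before the permissions are applied. The following sketch maps each role to the permission it should receive; the OpenCMIS createAce and applyAcl calls are shown as comments because they assume an open session and a fetched Document object.

```java
import java.util.ArrayList;
import java.util.List;

public class RoleAclSketch {

    // Pairs each role principal with the CMIS permission it should receive:
    // authors get cmis:write, readers get cmis:read.
    public static List<String[]> buildAcePlan(String authorRole, String readerRole) {
        List<String[]> plan = new ArrayList<String[]>();
        plan.add(new String[] { authorRole, "cmis:write" });
        plan.add(new String[] { readerRole, "cmis:read" });
        return plan;
    }

    public static void main(String[] args) {
        List<String[]> plan = buildAcePlan("author-role", "reader-role");
        // With OpenCMIS, each entry becomes an ACE and is applied to the document:
        // Ace ace = session.getObjectFactory()
        //         .createAce(principal, Arrays.asList(permission));
        // document.applyAcl(Arrays.asList(ace), null, AclPropagation.OBJECTONLY);
        System.out.println(plan.size());
    }
}
```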
As long as the user's session is active, his or her permission to access the documents is determined by the
user's role assignment. That is, authors can change documents and readers are only allowed to read them.
Related Information
● The {sap:builtin}admin user who always has full access to all objects no matter which ACLs are set.
Note
Note that the document service considers user IDs only as opaque strings. Therefore, the application
must prevent normal users from connecting to the document service using this administration user ID.
● The {sap:builtin}everyone user applies to all users. Therefore, granting a permission to this user
using an ACE grants this permission to all users.
There are some document service specific rules with respect to ACLs.
Object Creation
When creating an object the connected user becomes the owner of the new object. The ACL of the parent
folder is copied to the new object and modified according to the addAcl and removeAcl parameter settings of
the create method.
Access by Path
A user is allowed to fetch an object using the path if the user has at least the cmis:read permission for the
object. In this case, the ACLs of the ancestor folders of the object are not relevant.
Versioning
● All documents of a version series, except the private working copy (PWC), share the same ACL and owner.
● The ACL can be modified only on the last version of a version series, and only if that version is not checked out.
● Principals are allowed to check out a document if they have the cmis:write permission for it. They
become the owner of the PWC and the ACL of the PWC initially contains only one ACE with their principal
name and the cmis:all permission.
● The ACL and the owner of a PWC can be changed independently of the other objects of the version series
the PWC belongs to. Only the owner of the PWC and users with the sap:delete permission are allowed to
check in or to cancel a checkout.
● Only principals having the cmis:all permission for the version series are allowed to add or remove ACEs
when checking in a PWC.
● getChildren
Returns all children the principal is allowed to see. If the principal has no read permission for the current
folder, a NodeNotFoundException is thrown.
● getDescendants
Returns only those descendants of a folder F that the principal is allowed to see. Only those descendants
are returned for which all folders on the path from F to the descendant are accessible to the principal. If the
principal has no read permission for the current folder F, a NodeNotFoundException is thrown.
● getFolderTree
To help you improve the performance of your application that uses the document service, we provide the following tips. In many ways, the document service behaves like a relational database, where each document and folder is one entry. Therefore, most of the performance tips for databases also apply to the document service.
Note
These are only recommendations, and may not be suitable in every case. There may be situations where
you cannot and should not apply them.
Documents and folders are stored in the document service in different repositories. Creating a large number of
repositories entails significant CPU usage and requires a considerable amount of storage, even if no documents
are stored.
Recommendation
We recommend that you keep the total number of repositories to a minimum. Avoid, for example, creating a
separate repository for each user, especially if the users do not have large amounts of data to store. In such
a situation, create just one repository instead and store the user data in several separate folders.
If folders contain many children, performance might be impaired when you navigate to one of these folders
using a getChildren call. If you navigate to a folder to analyze its data, for example, using the CMIS
Workbench, this analysis becomes complicated. In contrast, fetching a child in a folder with many children by
using its object ID or its path is not a problem.
It is difficult to define what qualifies as a "large" folder. If you send only one getChildren call per hour, then a
thousand or more children would be totally acceptable, but if you send many calls per second, then even 100
children might impair performance. In any case, the load caused by calling this method increases linearly with
the number of children.
Instead of having one folder with many children, you might consider subdividing the children into different
subfolders or even a subfolder hierarchy. Another alternative to the getChildren call is to use the query
method with the IN_FOLDER predicate together with additional restrictions to limit the number of matching
results.
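As an illustration, such a restricted query might be assembled as in the following sketch. The folder ID, the name prefix, and the helper method are placeholders for this example, not part of the document service API; the statement itself uses standard CMIS QL.

```java
public class FolderQueries {
    // Builds a CMIS QL statement that fetches only the matching children of
    // one folder, instead of listing every child with getChildren. The
    // property names are standard CMIS; folderId and namePrefix are
    // placeholders.
    static String childrenQuery(String folderId, String namePrefix) {
        return "SELECT cmis:objectId, cmis:name FROM cmis:document"
                + " WHERE IN_FOLDER('" + folderId + "')"
                + " AND cmis:name LIKE '" + namePrefix + "%'";
    }

    public static void main(String[] args) {
        System.out.println(childrenQuery("100", "invoice-"));
    }
}
```

The resulting statement can then be passed to the query method of your session.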
Several CMIS methods have a skip count parameter, for example, the getChildren or the query method.
Using large skip counts produces a significant load because a huge number of matching result objects is found
and skipped before the final result set can be collected. To prevent the need for large skip counts, try to reduce
the number of matching results by subdividing the children into different subfolders or by using a more
selective query.
Only use a sort criterion if you really need it, because it might reduce performance significantly (see also
Paging with maxItems and skipCount (for example, for getChildren, query) in the Frequently Asked Questions).
In the operational context (see the OperationalContext.java class), you can define the properties that are
to be returned together with the selected objects. Do not query all properties, because this might be time-consuming and increases the amount of data transferred over the network. In particular, requesting the
cmis:path property can be inefficient because it has to be computed for each call. The general rule is: it is
much faster to access an object using its ID than using its path.
Using the getFolderTree or getDescendants method on large hierarchies is very inefficient. The same is
true for the folder predicate IN_TREE that you can use in the statement of the query method. All these
methods are slow for large hierarchies even if the final result set is small.
The reason for the performance problems with these methods is that all the descendant folders of the start
folder have to be loaded from the database into the server where the document service is running. This results
in many calls to the database and many objects are transferred over the network. Finally, a very complex query
with all the IDs of the folders in the hierarchy has to be created and sent to the database to get the final result.
For the query method, the size of the searchable folder hierarchy is already restricted to a maximum of 1000.
For larger hierarchies an exception is thrown. Be aware that even a hierarchy of 1000 folders is quite large and
results in a heavy load on the system as well as bad performance for the request.
When applications use the document service, they fetch a session object using one of the connect methods.
Creating a session is quite an expensive operation, so sessions should be reused and shared if possible. A
session object is thread safe and allows parallel method calls.
Usually, a session is bound to a user. To reduce the number of sessions that are created, fetch the session only
for the first request of the user and store it in the user's HTTP session. Then the session can be reused in
subsequent requests of this user.
If an application uses a service user to connect the session to the document service, we recommend that you
store this session in a central place and reuse it for all subsequent requests.
● A session object has an internal cache, for example, for already fetched objects. To make sure that you
fetch the latest version of specific objects, clear the cache from time to time.
● If a session is used for a very long time, problems might occur that result in exceptions (for example,
network connection problems). A possible solution is to replace the failing session with a new one.
However, do not replace a session if an ObjectNotFound exception is thrown because you tried to fetch a
nonexistent document or folder. This also applies to similar situations where the exception is part of the
normal method behavior.
Multitenancy
One document service session is always bound to one tenant and to one user. If you create the session only
once, then store it statically, and finally reuse it for all subsequent requests, you end up in the tenant where you
first created the document service session. That is, you do not use multitenancy.
We recommend that you create one document service session per tenant and cache these sessions for future
reuse. Make sure that you do not mix up the tenants on your side.
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that tenant.
A session is always bound to a particular server of the document service, so a single session does not scale. If
you use a session pool, the different sessions are bound to different document service servers, which gives you
much better performance and scaling.
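The per-tenant caching recommended above can be sketched as a small thread-safe cache. The Session type and the connect function here are placeholders for the real document service session and connect call, so this is only an illustration of the caching pattern.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TenantSessions {
    // Stand-in for the document service session type; a real application
    // would store the session returned by one of the connect methods.
    static final class Session {
        final String tenant;
        Session(String tenant) { this.tenant = tenant; }
    }

    private final Map<String, Session> cache = new ConcurrentHashMap<>();
    private final Function<String, Session> connect;

    TenantSessions(Function<String, Session> connect) {
        this.connect = connect;
    }

    // Creates the session for a tenant at most once and reuses it for all
    // subsequent requests of that tenant.
    Session forTenant(String tenantId) {
        return cache.computeIfAbsent(tenantId, connect);
    }
}
```

A session pool for a high-load tenant would replace the single cached Session per key with a small collection of sessions and hand them out round-robin.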
Search Hints
You can indicate hints for queries. The general syntax is:
hint:<hintname>[,<hintname>]*:<cmis query>
● ignoreOwner: Usually, documents are returned for which the current user is the owner OR is present in an
ACE. The ignoreOwner setting returns only documents for which the current user has an ACE; ownership
is ignored in this case. This improves the speed of the query because the owner check is omitted. This is
useful if the owner is present in an ACE anyway.
● noPath: Does not return the path property even if it is requested. This improves the speed of queries on
folders, because paths do not have to be computed internally.
Sample Code
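A minimal sketch of the hint syntax, assuming the ignoreOwner and noPath hints described above. The helper method is illustrative, not part of the document service API; it only prepends the hint prefix to an ordinary CMIS query statement.

```java
public class HintedQuery {
    // Prepends search hints to a CMIS QL statement, following the
    // hint:<hintname>[,<hintname>]*:<cmis query> syntax described above.
    static String withHints(String statement, String... hints) {
        return "hint:" + String.join(",", hints) + ":" + statement;
    }

    public static void main(String[] args) {
        System.out.println(withHints(
                "SELECT cmis:objectId FROM cmis:document", "ignoreOwner", "noPath"));
        // hint:ignoreOwner,noPath:SELECT cmis:objectId FROM cmis:document
    }
}
```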
Related Information
The document service executes several backups a day to prevent file loss due to disasters. Backups are kept
for 14 days and then deleted. Backups are not needed for simple hard disk crashes, since all storage hardware
is based on redundant hard disks.
If you implement paging using maxItems and skipCount, be aware that the different calls might be sent to
different database servers, each returning the result objects in a possibly different order. To get a consistent
result for these calls, add a unique sort criterion so that each server returns the objects in the same order.
Be aware that using a sort criterion might reduce the processing speed significantly. Therefore, only use a sort
criterion if really needed.
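As a sketch, the two ingredients of consistent paging look like this. The method names are illustrative; in the real API, maxItems and skipCount are parameters of the call, and the ORDER BY clause is part of the query statement.

```java
public class StablePaging {
    // skipCount for a 0-based page number; maxItems and skipCount are
    // passed as separate parameters of the query call, not inside the
    // statement.
    static int skipCount(int pageSize, int pageNo) {
        return pageSize * pageNo;
    }

    // A unique sort criterion (cmis:objectId) makes every database server
    // return the results in the same order, so paged calls stay consistent.
    static String statement(String folderId) {
        return "SELECT cmis:objectId FROM cmis:document"
                + " WHERE IN_FOLDER('" + folderId + "')"
                + " ORDER BY cmis:objectId";
    }
}
```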
You can connect to the document service by treating it as an external service; the document service then treats
your HTML5 application as an external app that requests access.
Procedure
To enable external access to your document service repositories, deploy a small proxy application that is
available out-of-the-box. For more information about its usage and deployment, see Access the Document
Service from an HTML5 Application [page 584].
Related Information
In the cockpit, you can create, edit, and delete a document service repository for your subaccounts. In addition,
you can monitor the number and size of the tenant repositories of your document service repository.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center, and vice versa.
Related Information
In the cockpit, you can create document service repositories for your subaccounts.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Field Entry
Name Mandatory. Enter a unique name consisting of digits, letters, or special characters. The name
is restricted to 100 characters.
Display Name Optional. Enter a display name that is shown instead of the name in the repository list of the
subaccount. The name is restricted to 200 characters. You cannot change this name later on.
Description Optional. Enter a descriptive text for the repository. The description is restricted to 500 characters.
You cannot change the description later on.
When you create a repository, you can activate a virus scanner for write accesses. The virus
scanner scans files during uploads. If it finds a virus, write access is denied and an error
message is displayed. Note that the time for uploading a file is prolonged by the time needed to
scan the file for viruses.
Repository Key Enter a repository key consisting of at least 10 characters but without special characters. This
key is used to access the repository metadata.
You cannot recover this key. Therefore, you must be sure to remember it.
You can, however, create a new key using the console client command reset-ecm-key [page
2106].
4. Choose Save.
Related Information
In the cockpit, you can change the name, key, or virus scan settings of the repository. You cannot change the
display name or the description.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository for which you
want to change the name or the virus scan setting.
3. Choose Edit, and change the repository name or the virus scan setting.
4. Enter the repository key.
5. To change the repository key itself, choose the Change Repository Key button and fill in the key fields that
appear.
In the cockpit, you can delete a repository including the data of any tenants in the repository.
Context
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data
cannot be recovered.
If you simply forgot the repository key, you can request a new repository key and avoid deleting the repository.
For more information, see reset-ecm-key [page 2106].
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository that you want
to delete.
3. Choose Delete.
4. On the dialog that appears, enter the repository key.
5. Choose Delete.
In the cockpit, you can monitor the number and size of the tenant repositories of your document service
repository.
Context
If an application runs in several different tenant contexts, a tenant repository is created for each tenant context.
The tenant repository is created automatically when the application connects to the document service and the
respective tenant repository did not exist before.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, click the name of your repository.
3. Choose Tenant Repositories in the navigation area.
Related Information
You can create and manage repositories for the document service with client commands.
The following set of console client commands for managing repositories is available:
Related Information
Procedure
1. Make sure that you set up the permissions correctly. For more information about building a proxy bridge,
see Build a Proxy Bridge [page 580].
2. Download the Chemistry Workbench from the Apache Web site and connect to your proxy bridge.
3. Download the content of your repository to your local computer.
Results
To set up automated batch operations, you can use "Console" in the CMIS workbench. You can create scripts
that perform queries to filter your content, or you can download selected folders only. As a starting point, have
a look at the sample scripts that are available in the Console menu.
With the proxy bridge, you get a standard CMIS endpoint. So you are not restricted to the CMIS workbench as
the only tool for export; you can use any CMIS-compliant tool.
The SAP Cloud Platform Feedback Service provides developers, customers, and partners with the option to
collect end-user feedback for their applications. It also provides predefined analytics on the collected feedback
data. This includes rating distribution and detailed text analysis of user sentiment (positive, negative, or
neutral).
Note
The Feedback Service is currently a beta offering that is available only on the SAP Cloud Platform trial
landscape for trial accounts.
To use the Feedback Service, you must enable it from the SAP Cloud Platform cockpit for your subaccount.
To use the services' UIs, the following roles must be assigned to your user:
● FeedbackAdministrator
● FeedbackAnalyst
If you are a subaccount owner, these roles are automatically assigned to your user when you enable the
Feedback Service. To enable other SAP ID users to access the Analysis and Administration UIs, you need to
assign the roles manually. For more information, see Consuming the Feedback Service [page 622].
In the Administration UI, the administrator adds the applications for which feedback is to be collected. Then the
developer can use the client API to consume the Feedback Service.
Once the Feedback Service is consumed by the application and feedback data is collected, the feedback
analyst can explore feedback text in the Analysis UI. As a result, a developer can use end-user feedback to
improve the performance and appearance of the specific application.
Architecture
The Feedback Service leverages the in-memory technology of the SAP HANA DB.
Related Information
Your application can consume the Feedback Service either via a browser or via a Web application back end.
Note
For the role assignments to take effect, either open a new browser session or log out from the
cockpit and log on to it again.
4. In the Administration UI, add the application for which feedback is to be collected.
5. Modify your application code to use the Feedback Service client API to collect the feedback of your
application users.
Your application can consume the Feedback Service either via a browser or via a web application back end.
Related Information
The SAP Cloud Platform Feedback Service is exposed through a client API that you can use to enable users to
send feedback for your application. You do this by adding code to the application that uses the Feedback
Service client API.
Request
An application can consume the Feedback Service using the service's REST API. The messages exchanged
between the client (your application) and the Feedback Service are JSON-encoded. Call the Feedback Service
by issuing an HTTP POST request to the unique application feedback URL that contains your application ID:
https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<application_id>/posts
The application feedback URL is automatically generated after you register your application in the
Administration UI of the Feedback Service.
Set the Content-Type HTTP header of the request to application/json. In the request body, supply a
feedback resource in JSON format. The resource may have the following attributes:
To collect feedback data, you must provide values for at least one rating or one free-text attribute. You can
additionally pass values for:
Caution
According to the data privacy terms defined in the Terms of Use for SAP HANA Cloud Developer Edition, you
must not collect, process, store, or transmit any personal data using your trial account. Therefore, do not
use the context attributes of the Feedback Service client API to collect personal data such as user ID and
user name.
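As a sketch, a request body with one free-text attribute, one rating, and one context attribute could be assembled as follows. The helper method and its parameters are illustrative; the keys t1, r1, and page follow the examples in this guide, and in production code you would use a proper JSON library and escape the input.

```java
public class FeedbackPayload {
    // Builds a minimal feedback resource in JSON format: one free-text
    // attribute, one rating, and an optional context attribute. At least
    // one rating or one free-text attribute must be present. The values
    // are placeholders and are not escaped here.
    static String payload(String text, int rating, String page) {
        return "{\"texts\":{\"t1\":\"" + text + "\"},"
                + "\"ratings\":{\"r1\":{\"value\":" + rating + "}},"
                + "\"context\":{\"page\":\"" + page + "\"}}";
    }
}
```

The resulting string is sent as the body of the HTTP POST request with the Content-Type header set to application/json.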
Response
When the request is successful, the Feedback Service returns an HTTP response with code 200 OK and an
empty body.
Error Handling
In case of errors, the Feedback Service returns an HTTP response with an appropriate error code. Any
additional information that describes the error, is contained in the response body as an Error object. For
example:
{
  "error": {
    "code": 30,
    "message": "quota exceeded"
  }
}
The value of error.code identifies the cause of the error, and the value of error.message describes it. The string
in error.message is not meant to be shown to your application users and is therefore not translated. Its purpose
is to assist you during the development of your application.
The table below lists the most common errors that the service can return. In addition to this list, a call to the
Feedback Service may also result in a response with another HTTP response code. In this case, the HTTP
response code itself should be enough to describe the issue.
Error Codes
Error Cause HTTP Status Code Content Type error.code error.message
Examples:
Example
● URL: https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/
<application_id>/posts
● HTTP method: POST
● Content-Type: application/json
● Request body:
{
"texts":{
"t1": "Very helpful",
"t2": "Well done",
"t3": "Not usable at all",
"t4": "I don't like it",
"t5": "OK"
},
"ratings":{
"r1": {"value":5},
"r2": {"value":2},
"r3": {"value":5}
}
}
Related Information
Developers can consume the SAP Cloud Platform Feedback Service using a web browser.
Prerequisites
Procedure
1. Create a dynamic web project:
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project .
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the
target runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add an HTML file to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > HTML File .
c. Enter index.html as the file name.
d. To generate the file, choose Finish.
e. Replace the generated content with the following code:
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>Feedback Application</title>
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m, sap.ui.commons"
data-sap-ui-theme="sap_bluecrystal">
</script>
<script>
var app = new sap.m.App({initialPage:"page1"});
var t1 = new sap.m.Text({text: "Please share your feedback"});
var t2 = new sap.m.Text({text: "Do you like it"});
var ind1 = new sap.m.RatingIndicator({maxValue : 5, value : 4});
var t3 = new sap.m.Text({text: "Some free comments:"});
var textArea = new sap.m.TextArea({rows : 2, cols: 40});
var sendBtn = new sap.m.Button({
text : "Send",
press : function() {
var data = {
"texts": {t1: textArea.getValue()},
"ratings": {r1: {value: ind1.getValue()}},
"context": {page: "page1"}
};
$.ajax({
url: "https://feedback-
<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<your_application_id>/
posts",
type: "POST",
contentType: "application/json",
data: JSON.stringify(data)
}).done(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Thank you. Your feedback was
accepted.");
}).fail(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Something went wrong, please try again later.");
});
}
});
var vbox = new sap.m.VBox({
fitContainer: true,
displayInline: false,
items: [t1, t2, ind1, t3, textArea, sendBtn]
});
var page1 = new sap.m.Page("page1", {
title: "Feedback Application",
content : vbox
});
app.addPage(page1);
app.placeAt("content");
</script>
</head>
<body class="sapUiBody">
<div id="content"></div>
</body>
</html>
<Subaccount_name> is the unique identifier that is automatically generated when the subaccount is
created.
3. Adjust the service URL in the source code to point to the application feedback URL generated for your
application.
4. Test the application on SAP Cloud Platform local runtime:
a. Deploy the application on your SAP Cloud Platform local runtime.
b. Open the application in your web browser: http://<host>:<port>/feedback-app/. Send sample
feedback.
5. Test the application on the SAP Cloud Platform:
a. Deploy the application on the SAP Cloud Platform.
b. Start the application and open it in your web browser.
Related Information
Developers can use the SAP Cloud Platform Feedback Service from the Java code in a simple Java EE Web
application.
Prerequisites
1. Create a dynamic web project:
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project .
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the
target runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add a servlet to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > Servlet .
c. Enter the Java package hello and the class name FeedbackServlet.
d. To generate the servlet, choose Finish.
e. Replace the source code with the following content:
FeedbackServlet.java
package hello;
import java.io.IOException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.entity.StringEntity;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationException;
import com.sap.core.connectivity.api.http.HttpDestination;
/**
* Servlet implementation class FeedbackServlet
*/
public class FeedbackServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER =
LoggerFactory.getLogger(FeedbackServlet.class);
public FeedbackServlet() {
super();
}
protected void doPost(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
HttpClient httpClient = null;
try {
Context ctx = new InitialContext();
HttpDestination destination = (HttpDestination)
ctx.lookup("java:comp/env/FeedbackService");
httpClient = destination.createHttpClient();
HttpPost post = new HttpPost();
String text = request.getParameter("text");
String rating = request.getParameter("rating");
String page = request.getParameter("page");
String body = "{\"texts\":{\"t1\": \"" + text + "\"}, \"ratings\":{\"r1\": {\"value\": " + rating + "}}, \"context\": {\"page\": \"" + page + "\", \"lang\": \"en\", \"attr1\": \"mobile\"}}";
//Use the proper content type and send the post to the destination
StringEntity entity = new StringEntity(body, "UTF-8");
entity.setContentType("application/json");
post.setEntity(entity);
HttpResponse postResponse = httpClient.execute(post);
if (postResponse.getStatusLine().getStatusCode() != 200) {
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Something went wrong, please try again later.");
} else {
response.getWriter().print("Your feedback was accepted. Thank You!");
}
} catch (NamingException e) {
LOGGER.error("Cannot lookup the feedback service destination",
e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Cannot
lookup the feedback service destination");
} catch (DestinationException e) {
LOGGER.error("Cannot create HttpClient", e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Something went wrong please try again later.");
} finally {
if (httpClient != null) {
ClientConnectionManager connectionManager =
httpClient.getConnectionManager();
if (connectionManager != null) {
connectionManager.shutdown();
}
}
}
}
}
web.xml
...
<resource-ref>
<res-ref-name>FeedbackService</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
...
Name=FeedbackService
Type=HTTP
URL=https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/
<your_application_id>/posts
Authentication=NoAuthentication
The application feedback URL, which contains the application ID, is automatically generated after you
register your application in the Administration UI of the Feedback Service.
c. Start the application and open it in your web browser.
Related Information
After you deploy your applications on the SAP Cloud Platform, you need to add the applications for which you
want to collect feedback to the Administration UI of the feedback service.
Adding an application generates a dedicated application feedback URL. The developer uses this URL in the
client API to consume the feedback service. Once the feedback service is consumed by the application and
feedback data is collected, the feedback analyst can explore ratings and text analysis in the Analysis UI.
Developers can then use the feedback to improve the application performance and appearance.
To use the Administration and Analysis UIs, you must be assigned the following roles:
● FeedbackAdministrator
● FeedbackAnalyst
If you are a subaccount owner, these roles are automatically assigned to your user after you enable the
feedback service. To allow other SAP ID users to access the Analysis and Administration UIs, you need to
assign the roles manually.
Related Information
1.6.2.1 Administration
● Add applications for which feedback is to be collected in the Administration UI of the feedback service
● Customize descriptions of feedback questions
● Customize descriptions of context attributes
● Free up feedback quota space
Once you add an application to your list, you enable it to use the feedback service. As a result, a URL that is
specific to both the subaccount and the application is generated. To start collecting feedback, the developer
integrates the URL into the application UI, to enable end users to post feedback (for example, in a feedback
form). The URL is called through a POST request by the application that wants to send feedback. That is, once
an end user submits the feedback form, the application calls the feedback service using the URL and the
service stores the user feedback.
https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/
<application_id>/posts
To use the Administration UI of the feedback service, you need to be assigned the FeedbackAdministrator role.
To access the Administration UI, open the following URL in your browser:
https://feedback-<subaccount_name>.hanatrial.ondemand.com/admin/mobile
Each subaccount has a feedback quota assigned, that is, a specific amount of feedback data that can be stored
in the SAP HANA DB. The quota is 250 feedback forms filled in by end users. When you reach 70% of the
feedback quota, you see a warning message. Once you reach the limit, the feedback service stops processing
feedback requests and storing feedback data, until you free up quota space. Do this by deleting the feedback
records for a specific time period.
If you have the FeedbackAnalyst role assigned (in addition to the FeedbackAdministrator role), you can analyze
feedback results and export raw feedback data.
Related Information
As a feedback administrator, you can add applications and administer application feedback.
Procedure
1. Open the Administration UI, where you can perform the following tasks:
a. Add an application by choosing the +Add button and entering a name for the application for which
feedback is to be collected.
b. To customize the description of a rating, a free text question, or a context attribute, click the pencil icon
in the respective attribute row.
c. To free up quota space, click the Free Up Quota Space link and choose a specific time period for which
to permanently delete feedback data.
2. Save your changes.
As a feedback analyst, in the Analysis UI of the Feedback Service you can explore the feedback collected from
end users by viewing detailed ratings or text analysis, or exporting the feedback text as raw data.
The rating analysis presents information about rating questions and how feedback rating is distributed
according to time and distribution criteria.
You can choose a specific time period for which to view analyzed feedback data and to export raw data. The
default time period is the last 7 days.
You can export raw feedback data, so that you can perform specific analysis tailored to your needs. You
download the raw feedback data as a .CSV file encoded in UTF-8.
Note
If there are characters that do not appear correctly when you open the exported file, reopen it as UTF-8
encoded.
Related Information
As a feedback analyst, you can explore the feedback collected from end users by viewing the detailed text
analysis. Text analysis classifies user feedback by:
For further information about text analysis, read the Text Analysis section in the SAP HANA Developer Guide.
The Overview screen displays a summary of all free text feedback questions. Each question tile provides the
following information:
Select a question tile to see detailed information about the question, including the following:
For example, you can filter your responses for a specific question to show only feedback of type Problem that
has Negative and Neutral sentiment. The returned list is ordered by date (most recent is on top).
Note
No matter what filter is applied, the list always includes responses (if any) that are not classified by type or
sentiment.
You can drill down to see details about a specific feedback response and examine the actual feedback text
analysis. You can view the entire response with all detected text analysis "hits". In addition, you can choose the
types of "hits" to highlight within the text. For example, you can choose to highlight just the Problem type that
has Negative and Neutral text analysis. Alternatively, you can remove all highlights.
Related Information
As a feedback analyst, you can examine the feedback collected from users by viewing a detailed rating analysis.
Users can reply to each rating question by choosing a number on a scale of 1 to 5, where 1 is the lowest rating
and 5 is the highest.
The Overview screen shows a summary of all rating questions. Each question tile provides the following
information:
Select a question tile to see detailed information about the question during the time period you specified,
including the following:
Depending on the time period, the graph and table views show the following data:
● Feedback distribution by rating: A graph or table showing the percentage of the overall feedback responses
that receive a specific rating number. That is, how feedback is distributed in terms of a specific rating.
● Feedback distribution by time period: A graph or table showing the feedback distribution across various
time granularities, for example, a day or a year. The data shown is the average rating for the specified time
granularity and applies only to the time period initially selected.
Related Information
1.7 Gamification
Overview
The SAP Cloud Platform Gamification service allows the rapid introduction of gamification concepts into applications.
The service includes an online development and operations environment (gamification workbench) for
implementing and analyzing gamification concepts. The underlying gamification rule management provides
support for sophisticated gamification concepts, covering time constraints, complex nested missions, and more.
Product Features
● Web-based IDE (gamification workbench) for modeling game mechanics and rules
● Gamification engine for real-time processing of sophisticated gamification concepts involving time
constraints and cooperation
● Built-in runtime game analytics for continuous improvement of game designs
Related Information
Learn how to enable the gamification service in your subaccount, and how to configure and use the sample
application HelpDesk.
When enabling the service, configuration steps 2, 3, and 4 are executed automatically, as follows:
● All gamification roles are assigned to the user who enabled the service.
● The required destinations are created at the subaccount level. The destination gsdest requires credentials
(user/password). In the trial version, you can use an SCN user, but it is safer to create a dedicated technical user.
Note
If you use your SCN user to configure gsdest, make sure you change the destination configuration after
you've changed the SCN user password in SAP ID Service. Otherwise, your user will be locked when using
the HelpDesk app.
Prerequisites
● Access to an SAP Cloud Platform account for personal development, or to a trial account.
● A subaccount for which you are assigned the role Administrator.
● An SCN user.
Procedure
Prerequisites
Log in to the SAP Cloud Platform cockpit using your SCN user and password.
Procedure
Related Information
Prerequisites
Log in to the SAP Cloud Platform cockpit using your SCN user and password.
Context
You must configure a destination to allow the communication between your application (in this case, a sample
app) and your subscription to the gamification service. For the sample application, two destinations are
necessary:
Note
Create these destinations at the subaccount level of your personal user account.
Procedure
1. In the cockpit, choose the Destinations subtab from the Connectivity tab.
2. Enter the name: gsdest.
3. Select the type: HTTP.
4. (Optional) Enter a description.
5. Enter the application URL of your service instance: https://<application_URL>/gamification/api/tech/JsonRPC
You can find the application URL of your service instance by navigating to Subaccount > Services > Gamification Service > Go to Service.
6. Select the proxy type: Internet.
7. Select Basic Authentication.
8. Enter a user ID. Recommendation: use a separate technical user (see the following procedure). Alternatively,
you can use your SCN user; in this case, make sure to also update the destination whenever the password
changes. Otherwise, the SAP ID Service locks your user when you attempt to use the HelpDesk app.
9. Enter the SCN password.
10. Choose Save.
Related Information
Procedure
You can find the application URL of your service instance by navigating to Subaccount > Services > Gamification Service > Go to Service.
6. Select proxy type: Internet.
7. Select authentication: AppToAppSSO.
8. Choose Save.
Related Information
Prerequisites
● Log in to the SAP Cloud Platform cockpit using your SCN user and password.
● A subaccount for which you are assigned the role Administrator.
Context
To support application-to-application SSO as part of destination gswidgetdest, you must configure your
subaccount to allow principal propagation.
1. Open the cockpit and choose the Trust subtab from the Security tab.
2. Choose the Local Service Provider subtab.
3. Choose Edit.
4. Change the Principal Propagation value to Enabled.
Related Information
Prerequisites
● Log in to the SAP Cloud Platform cockpit with your SCN user and password.
● You are assigned the role TenantOperator.
Procedure
Prerequisites
Procedure
The gamification development cycle describes how to introduce gamification into existing or new applications.
Creating gamification concepts is a purely conceptual task that is typically executed by gamification
designers. The task is executed during the design phase and covers the specification of a meaningful game or
gamification design.
Implementing the concept means mapping it to the mechanics offered by the gamification service. This task is
also normally performed by gamification designers, or by IT experts.
Integration is a development task that includes the technical integration of the target application with the APIs
of the gamification service. This is normally performed by application developers, since it requires technical
knowledge of the application (such as implementing listeners for events or creating visual representations of
achievements).
A gamification concept, normally developed by gamification designers and domain experts, describes the
mechanics that will encourage users (players) to perform certain tasks; for example, an award system
comprising points and badges to encourage call center employees to process tickets efficiently or to select more
complex tickets over more straightforward ones.
Note
Creating gamification concepts is not a service that is covered or supported by the gamification service.
A simple gamification concept includes elements such as points and badges. For example, users are awarded
experience points for certain actions, and badges as a visual representation. The gamification concept
describes how these elements motivate users. It therefore includes descriptions of the actions (within the
application) that allow users to attain the various achievements.
Additional examples include missions that foster collaboration or activities with time constraints that
encourage users to work faster.
Related Information
Implementation means mapping a gamification concept to the elements used in the gamification service. You
can use the gamification workbench to maintain the gamification elements, such as points, badges, levels, or
rules. You can modify the gamification concept at runtime.
Gamification is about full transparency to users, and is intended to encourage them. We therefore advise
against modifying a concept significantly without informing users, since doing so might catch them by surprise
and could possibly demotivate them.
Related Information
Integration refers to the technical integration of the target application with the APIs of the gamification service.
Integration is required to send events that are of interest to the gamification service, for example, when a user
in a call center has successfully processed a ticket. Integration is also necessary to notify the users about their
achievements, to send notifications to users for earned points, or to display user profiles.
The gamification service mainly supports the integration of cloud applications running on SAP Cloud
Platform. Integration of other applications is technically possible, but restricted for security reasons.
Related Information
Gamification is a continuous process. It is crucial that you monitor the influence of a gamification concept and
react to the users' behavior. For example, you want to know if your gamification concept motivates the target
group or if users lose interest.
The gamification service offers basic analytics: for example, the assignment of points or badges to users over
time. Therefore, you can analyze peaks and troughs of user achievements.
The introduction of gamification often requires the acquisition of sensitive information. For example, you might
need to track user behavior within an application to allow the gamification of onboarding scenarios.
The gamification service lets you anonymize user data. It also offers secure communication via the various
APIs. However, it is ultimately the responsibility of the host application to ensure data privacy, and
application developers must ensure that only the necessary data is sent to the gamification service.
Related Information
The gamification workbench is the central point for managing all gamification content associated with your
subaccount and for accessing key information about your gamification usage. It allows you to manage the
game mechanics, apps, and players of your subscription.
Summary Dashboard
The figure below shows an example of the Summary dashboard in the workbench, followed by an
explanation:
The entry page Summary of the gamification workbench provides an overview of the gamification concept for
the selected app, the overall player base, and the overall landscape.
Logon
You can log on with your subaccount user via SSO (single sign-on).
The gamification workbench can be accessed using the Subscriptions tab in the SAP Cloud Platform cockpit,
under the following link: https://<SUBSCRIPTION_URL>/gamification
Navigation
● Summary
● Game Design
● Rule Engine
Note
You must have specific roles in order to access the gamification workbench, see Roles [page 653].
1.7.3.1 Roles
Different roles can be assigned to users to explicitly grant them access to the gamification workbench.
Prerequisites
Procedure
Context
The gamification service offers the gamification workbench, an API for integration, and a demo app. Access
to the user interfaces and the API is protected using SAP Cloud Platform roles.
Note
The API can be used for the integration of host applications. For productive use, a technical user (SAP Cloud
Platform user) should be created for the communication between the host application and the gamification
service. (The use of a personal subaccount user is only recommended for testing or demo purposes.)
1.7.4.1 Roles
The following roles grant access to the gamification workbench, the API, or the demo app, and must be
explicitly assigned to an SAP Cloud Platform user:
AppStandard
● Type: Technical
● Access: API (methods are annotated with the required role); Terminal (send events for testing purposes)
● Permissions:
○ Write only - using rules; reading achievements is possible, but should be avoided
○ Send player-related events
○ Read player achievements and available achievements
AppAdmin
● Type: Technical
● Access: API (methods are annotated with the required role)
● Permissions:
○ Read and delete a player record for a single app or for the whole tenant
○ Create and delete a user or a team
Player (automatically assigned)
● Type: Technical (implicit role)
● Access: API (methods are annotated with the required role)
● Permissions:
○ Send player-related events (only works for the user that is authenticated using the identity provider which is configured for your subaccount)
Note
This role is not a standard SAP Cloud Platform role. It is automatically assigned to a user (player) that is created using the gamification service and cannot be explicitly assigned to an SAP Cloud Platform user.
Prerequisites
Procedure
Related Information
The SAP Cloud Platform gamification service meets the security and data privacy standards of SAP Cloud
Platform. In general, the gamification service is not responsible for any content such as game mechanics or
player achievements. It is the responsibility of the host application to meet any local data privacy standards.
Therefore, you need to make sure that the personal information of players is protected according to the local
regulations. In some cases where gamification is applied to employee scenarios, work council approval for
the gamified host application might be necessary.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the Apps
tab in the Operations section.
The gamification service introduces the concept of apps. An app represents a self-contained, isolated context
for defining and executing game mechanics such as points, levels, and rules.
All data or metadata associated with an app is stored in an isolated way. In addition, an isolated rule
engine instance is created and started for each app.
Note
Players are stored independently from apps and can therefore take part in multiple apps.
Prerequisites
You have the roles TenantOperator and GamificationDesigner, are logged into the gamification
workbench, and have opened the Apps tab in the Operations section.
Context
An app represents a self-contained, isolated context for defining and executing game mechanics.
Create Apps
Procedure
Update Apps
Procedure
Delete Apps
Procedure
Prerequisites
You have the role GamificationDesigner or TenantOperator or both and are logged into the gamification
workbench.
Context
After switching the app, the gamification workbench shows only the game mechanics and player achievements
associated with the selected app.
Procedure
1. Select an app in the app selection combo box located in the upper right corner of the gamification
workbench.
2. Optional: Review whether the app has been changed successfully, for example by comparing the summary
page (tab Summary).
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the
Operations tab, and navigated to the Data Management section.
Context
The gamification service allows exporting all available apps including their content. You can choose between a
full tenant export including all player data and an export of game mechanics only. The latter can be imported
again.
Procedure
1. Select the Export mode in the combo box labeled Export in the form area Import / Export.
○ Full Export: export all game mechanics and player data.
○ Game Mechanics: export game mechanics only.
2. Press Download to start the export. Your browser should show the file storing dialog.
3. Store the provided ZIP file on your disk.
Prerequisites
● You have the role TenantOperator, are logged into the gamification workbench and have opened the
Operations tab, and navigated to the Data Management section.
● You have a gamification service export file.
Note
Context
The gamification service allows importing game mechanics based on existing gamification service export files
(ZIP format). Section Exporting Apps explains how to do the export.
1. Press Browse in the form area Import / Export to select the import file.
2. Press Upload to start the import based on the selected file.
Note
If an app with the same name already exists, the import skips this app and does not overwrite its
content.
Note
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the
Operations tab, and navigated to the Data Management section.
Context
The gamification service is shipped with selected demo content comprising game mechanics as well as demo
players. The demo content is created within the context of a new app.
Procedure
Note
Appropriate content (points, levels, badges, and rules) is created for the app automatically.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the
Operations tab, and navigated to the Data Management section.
Context
The gamification service is shipped with selected demo content comprising game mechanics as well as demo
players. The demo content is created within the context of a new app. The app can be deleted manually, but this
does not delete the generated demo players. To delete the full demo content, this explicit action must be triggered.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened
the Game Design tab.
Context
The gamification concept describes the metrics, achievements and rules that are applied to an application. The
following checklist describes the tasks required to implement your gamification concept in your subscription of
the gamification service.
1. Configuring Achievements:
○ Configuring Points (Point Categories) [page 664]
○ Configuring Levels [page 666]
General Procedure
For each game mechanics entity there is a tab with a master and details view.
● Master View
○ Shows the list of available entities.
○ Add button for adding a new entity.
○ Edit All button for switching to batch deletion mode.
● Details View
○ Shows entity attributes and images.
○ Edit button for editing entity attributes.
○ Duplicate button for cloning the complete entity including attribute values.
○ Delete button for deleting the given entity.
Each entity has at least the attributes name and display name. The name serves as the unique identifier and
is immutable.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Points tab.
Points are the fundamental element of a gamification design. For example, points can indicate the progress in
various dimensions. Points can be flagged as "Hidden from Player" for security or privacy reasons. Points that
are flagged as hidden are not visible to players; instead, they can be utilized in rules. Furthermore, points can
have various subtypes. The table lists the available point types.
Point Types
Type Description
ADVANCING Advancing points are points that can never decrease. They
are used to reflect progress.
Points can be configured in the Points subtab of the Game Design tab.
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Levels tab.
Caution
Only levels that are based on the default point category are exposed to the default user profile.
A level describes the status of a user once a specific goal is reached. The gamification service allows you to
define levels based on a defined point category. The threshold defines the value of the selected point type to
reach the level.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Levels tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Levels tab.
Context
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Levels tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have
opened the Badges tab.
Context
A badge is a graphical representation of an achievement. Hidden badges are not visible to the user before the
assignment and can be used as surprise achievements.
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have
opened the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have
opened the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have
opened the Badges tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Missions tab.
Context
A mission defines what has to be achieved to gain a measurable outcome. Besides basic standalone missions,
the gamification service allows modeling complex mission structures using mission conditions and
consequences.
Note
Mission conditions and consequences are of a descriptive nature only. The actual condition checking and the
execution of consequences must be done by corresponding rules. These rules are not yet generated
automatically.
● Point Conditions: A number of points, each with a respective threshold. Each point can be considered as a
progress indicator: As soon as the threshold is reached, the condition is met.
● A list of missions that have to be completed. Within the API such missions are referred to as sub missions.
The consequences part is limited to a list of follow-up missions, which should be assigned or unlocked after the
current mission has been completed. Within the API such follow-up missions are referred to as
nextMissions.
Example for a rule that checks a point condition in its WHEN part and assigns a follow-up mission in its THEN
part:
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null, null).getAmount() >= 5)
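The THEN part of this example did not survive in this section. A minimal sketch, reusing the Update API calls shown later in this guide (completeMission, addMissionToPlayer); the follow-up mission name 'Advanced Troubleshooting' is purely hypothetical:

```
// THEN (sketch): complete the checked mission and assign a follow-up mission
updateAPIv1.completeMission($playerid, 'Troubleshooting');
updateAPIv1.addMissionToPlayer($playerid, 'Advanced Troubleshooting'); // hypothetical mission name
update(engine.getPlayerById($playerid));
```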
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Missions tab.
Procedure
Results
Note
Adding a sub mission or follow-up mission only creates relations in the database. The corresponding rules
for checking conditions and assigning follow-up missions are not generated automatically; they have to be
created manually. However, without storing these relationships and making them available through the
achievement query API, it would not be possible to create such rules at all.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Missions tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Missions tab.
Procedure
● System Missions: the mission life cycle is fully controlled by the service using API calls within rules.
● User-accepted Missions: the player actively decides whether to accept or reject missions, while the
remaining mission life cycle (unlocking or completing a mission) is controlled by the service.
In both cases, the API calls have to be executed within rules to ensure data consistency between the engine and the
backend.
All state transitions are triggered by calling the respective API methods within rules, while the list of missions in
a certain state can be retrieved either by calling the API directly or within a rule.
Sample rule for assigning a system mission as part of the user init rule:
● WHEN
● THEN
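The body of this sample rule is not reproduced here. A minimal sketch, assuming a player-initialization event of type 'userInit' (the event type name is hypothetical) and the Update API call addMissionToPlayer shown later in this guide:

```
// WHEN (sketch): a newly created player sends the (hypothetical) init event
EventObject(type=='userInit', $playerid:playerid) from entry-point eventstream
// THEN (sketch): assign the system mission to the player
updateAPIv1.addMissionToPlayer($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
```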
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null, null).getAmount() >= 5)
● THEN
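The THEN part is again missing here. A sketch, assuming the rule should complete the mission once the point condition above is met (completeMission and the update(...) refresh are taken from the Update API examples in this guide):

```
// THEN (sketch): complete the mission once the point threshold is reached
updateAPIv1.completeMission($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
```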
Note
Invoking the manual mission methods via the user endpoint currently does not trigger any rules. If a rule has
to be triggered when missions become active for players, a separate event is required to trigger this rule.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab.
Context
The rules are a fundamental element of the game mechanics. They describe the consequences of actions, the
corresponding constraints and the goals that can be achieved. The rules allow you to define complex
conditions and consequences based on common complex event processing (CEP) operators.
Related Information
Rules are the core elements of the gamification design. Generally, they follow the event condition action (ECA)
structure used for active rules in event-driven architectures. Each rule is structured in two parts:
● Left-hand side (LHS): rule conditions or trigger (event conditions and/or player conditions)
● Right-hand side (RHS): rule consequences (updates of the player and/or event generation)
The rule conditions (LHS) are maintained in the Trigger (“when”) area. Examples are:
The rule consequences (RHS) are maintained in the Consequences (“then”) area. Examples are:
● Create new events - a new event with the type “solvedProblemDelayed” that is triggered with a delay of 1
minute:
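The code for this example is missing in this section. A sketch only: the EventObject constructor and the setters below are assumptions about the service API, while the entry-point insert is standard Drools 5.x:

```
// THEN (sketch): create a follow-up event with a 1-minute delay
EventObject event = new EventObject('solvedProblemDelayed'); // assumed constructor
event.setPlayerid($playerid);                                // assumed setter
event.setDelay(60);                                          // assumed setter, delay in seconds
drools.getWorkingMemory().getWorkingMemoryEntryPoint("eventstream").insert(event);
```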
Note
The gamification service follows the “rule-first” approach. This means that any achievements of a player
are always updated using the rule engine. Player achievements cannot be modified using an API without any
rule execution.
The SAP Cloud Platform gamification service allows you to write rules directly, which gives you the best
flexibility for the targeted game concept. Additionally, you can write rules in one of the multiple graphical
(form-based) editors in the gamification workbench.
The declaration of the trigger (“when”) part is based on the Drools Rules Language (DRL).
The trigger part defines the constraints that must be fulfilled in order to execute the consequences ("then"
part). Variables can be defined and used both in the "when" and in the "then" part. This is generally
recommended in case you want to use the same object more than once. Multiple constraints can be described
in one trigger part. The constraints are typically described using the logical operators (within eval statements)
and evaluation of the event object. The event object must be defined with a type and can include multiple
parameters. Additionally, DRL allows you to define temporal constraints using common complex event
processing (CEP) operators.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The gamification service rule engine allows the use of two event streams:
● Managed event stream - eventstream: All events and user actions that are sent using the API will
automatically be sent using the managed event stream. “Managed” means that all events are retracted
automatically. Point-in-time events (duration=0) are retracted immediately after execution of the
corresponding rules while long-living events (duration >0) are retracted 1 second after they have expired. If
this automated event retraction is not suitable for your use case, you can use the unmanaged stream
instead.
● Unmanaged event stream - unmanagedstream: For this stream you must take care of event retraction
yourself, which offers more flexibility with regard to rule design. For stability reasons, events sent to this
stream are retracted automatically after 28 days.
You must explicitly declare in the trigger part which event stream will be used. Furthermore, you must explicitly
declare in the consequences part which event stream is used in case you create new events. Using the
managed stream is strongly recommended. Only use the unmanaged stream if the auto-retraction does not
work with your rule design.
Context
Variables can be defined in the trigger part and can afterwards be used in both the trigger and the
consequences part. Variables are recommended in case one object is used more than once. For example, a
player object needs to be updated multiple times.
Procedure
A variable is declared by any string with a leading $ sign, for example $player or $var.
Declaration of a variable:
$<VARIABLE> : <EXPRESSION>
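Following this pattern, the rule examples in this guide bind the matched player object and its ID like this:

```
// Bind the Player fact to $p and its id attribute to $playerid
$p : Player($playerid : id)
```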
Context
An event type must be set for each incoming event. The event type needs to be checked within the trigger part.
The player's ID is sent with each event; it should be stored in a variable for further use.
Additionally, multiple parameters can be passed with an event and evaluated. The parameters can be strings
or numeric values, and can be evaluated with logical operators such as equal (==), greater than (>), and less
than (<).
Procedure
Declaration of an event object with a given event type and declaration of a variable with a given player ID:
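The declaration template itself is missing here; based on the concrete examples that follow, it has this shape:

```
// Match an incoming event of a given type and bind the player ID
EventObject(type=='<EVENT_TYPE>', $playerid:playerid) from entry-point eventstream
```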
Note
It is recommended to always assign the player ID (playerid) within the event object to a variable, since the
player ID is necessary to get the corresponding player object for updating achievements in the consequences
part.
Declaration of an event with a given event type, declaration of a variable with a given player ID and evaluation of
a property:
EventObject(type=='<EVENT_TYPE>', data['<PROPERTY>'] <OPERATOR> <VALUE>, $playerid:playerid) from entry-point eventstream
Note
It is recommended to always evaluate event parameters within the event object instead of defining
additional parameters and using additional eval statements.
EventObject(type=='solvedProblem', data['relevance']=='critical', $playerid:playerid) from entry-point eventstream
● Declaration of event with the given type “buttonPressed” and a property with the name “color” and the
value “red”.
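The code for this example is missing; following the pattern of the “solvedProblem” example above, it would read:

```
// Match a buttonPressed event whose 'color' property equals 'red'
EventObject(type=='buttonPressed', data['color']=='red', $playerid:playerid) from entry-point eventstream
```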
● Declaration of event with the given type “temperatureIncreased” and an integer property with the name
“temperatureValue” where the numeric value is larger than 30.
EventObject(type=='temperatureIncreased', Integer.parseInt(data['temperatureValue'])>30, $playerid:playerid) from entry-point eventstream
● Declaration of two events of type “ticketEventA” and “ticketEventB”. Both events must occur and they have
to belong to different players.
EventObject(type=='ticketEventA', $playerid:playerid)
EventObject(type=='ticketEventB', playerid!=$playerid)
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the explicit “and” operator. Both
events must occur and they have to belong to different players.
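The code for this example is missing; assuming the explicit Drools and operator applied to the two patterns shown above:

```
// Both events must occur, for two different players (explicit 'and' operator)
EventObject(type=='ticketEventA', $playerid:playerid) and
EventObject(type=='ticketEventB', playerid!=$playerid)
```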
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the “or” operator that describes
that “eventA” or “eventB” must occur and the "player IDs" must not be the same.
(EventObject(type=='ticketEventA', $playerid:playerid) || EventObject(type=='ticketEventB', playerid!=$playerid))
● Declaration of two events of type “ticketEvent” where the player IDs are different and the ticket ID is
the same, and another event of the type “connectedEvent” that must not occur.
EventObject(type=='ticketEvent', $ticketid:data['ticketid'], $playerid:playerid)
EventObject(type=='ticketEvent', data['ticketid']==$ticketid, playerid!=$playerid, $playerid2:playerid)
not(EventObject(type=='connectedEvent', playerid==$playerid, data['friendid']==$playerid2))
Context
Eval statements are used to define constraints with data that is not available in the working memory, such as
the status of player achievements. Multiple constraints can be defined in one rule by combining multiple
logical operators.
The code within eval statements must follow the Java syntax, just like the consequences
("then") part. It is not based on the Drools Rule Language like the rest of the trigger part.
Note
It is recommended to avoid using eval statements, since they are expensive operations. If you need one, use it
as late as possible within your trigger part.
Procedure
eval(<EXPRESSION><OPERATOR><VALUE>)
● Expression: It is recommended to only use methods of the Query API in eval conditions. The use of the
Query API allows you to evaluate available player details and achievements using Java statements.
● Operator: All logical operators supported by Java are supported.
● Declaration of an eval statement where the mission “Troubleshooting” is assigned to the player.
● Declaration of an eval statement where the “Experience Points” of the player are larger or equal to 10.
● Declaration of an eval statement where the player does not have the badge “Sporting Ace” assigned.
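The code for these three examples is missing. The first two reuse Query API calls that appear elsewhere in this guide; the badge check (hasPlayerBadge) and its signature are assumptions:

```
// Mission 'Troubleshooting' is assigned to the player
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
// The player's 'Experience Points' are greater than or equal to 10
eval(queryAPIv1.getScoreForPlayer($playerid, 'Experience Points', null, null).getAmount() >= 10)
// The player does not have the badge 'Sporting Ace' (method name is an assumption)
eval(queryAPIv1.hasPlayerBadge($playerid, 'Sporting Ace') == false)
```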
Note
The use of an invalid expression may lead to an error during rule execution. Make sure that referenced
point categories or missions exist and that the spelling is correct.
Creating generic facts (a Map object with an optional key) and storing them in the working memory is
supported. This allows you to store temporary results and create complex constraints (e.g.: count the number
of a specific event type). Generic facts can be evaluated in all rules if they exist.
The data structure of a generic fact is Map<String, Object> data. Additionally, you can set a key for the generic
fact to identify it. A generic fact must be initialized in the consequences part.
GenericFact(key=='<KEY>')
$<FACT_VARIABLE>: GenericFact(key=='<KEY>')
Examples for querying generic facts and assignment to a variable that can be used for evaluation:
● $loginCounter: GenericFact(key=='LoginCounter')
● $daysOfWeek: GenericFact(key=='DaysOfWeek')
The declaration of the consequences (“then”) part supports writing code with the Drools Rules Language
(DRL) in version 5.6.0 and Java code.
Note
The formatting in the consequences part must be in the Java style. The DRL can be used in combination
with Java code.
The consequences part defines what will be executed once the trigger part is fulfilled. It allows you to update
the player achievements or to create new events. Multiple consequences can be defined within one
consequences part.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The Update API can be used to update any player achievements. Multiple updates can be executed within one
consequences part.
updateAPIv1.<QUERY_API_METHOD>(<PLAYER_ID>, <PARAMS>);
update(engine.getPlayerById(<PLAYER_ID>));
updateAPIv1.addMissionToPlayer($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
updateAPIv1.completeMission($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
● Increasing the “Experience Points” of the player by one, completing the mission “Troubleshooting”, and
adding the badge “Champion Badge”.
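Such a consequence might be sketched as follows, using the update pattern shown above. completeMission appears earlier in this section and addBadgeToPlayer appears in the case study rules; the points-update method name addPointsToPlayer is an assumption, so verify it against the API documentation:

```
updateAPIv1.addPointsToPlayer($playerid, 'Experience Points', 1); // method name assumed
updateAPIv1.completeMission($playerid, 'Troubleshooting');
updateAPIv1.addBadgeToPlayer($playerid, 'Champion Badge', 'Well done!');
update(engine.getPlayerById($playerid));
```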
New events can be created in the consequences part. They can be used for more complex game mechanics
(cascading rules), changing the state of facts or even for temporal triggers.
Generic facts can be used as global variables and are stored in the working memory. The creation of a generic
fact instance has to be done in the consequences part. In the trigger part you can query for certain generic fact
instances and (if required) bind them to local variables. This works just like querying the EventObject.
● Declaration of a generic fact with the key “factB”, with a property “relevance” and the corresponding value “critical”.
$<FACT_VARIABLE>.getData();
$<FACT_VARIABLE>.setData(<VALUE>);
update($<FACT_VARIABLE>);
$loginCounter.setData("59");
update($loginCounter);
● Assigning the value of the variable “lCounter” to the generic fact “loginCounter”.
$loginCounter.setData(lCounter);
update($loginCounter);
retract($<FACT_VARIABLE>);
retract($loginCounter);
Java code can be used in the consequences part, which allows very complex rules to be created. You can work
with all Java control flow statements and a selected set of Java objects (for example, collections), create
generic facts, or update the player's achievements.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab.
Procedure
Caution
A newly created rule is not automatically deployed. The deployment is initiated once you apply the
changes. The rule must be activated to be deployed.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab. A rule already exists and is not enabled.
1. Check the Activate on Engine Update checkbox of the rule you want to enable.
2. Open the Rule Engine Manager by pressing Rule Engine.
3. Commit your changes by pressing the Apply Changes button in the Rule Engine Manager. The rule will be
deployed immediately after successful validation. A blue flag next to the rule indicates that the rule has
been changed.
Note
A rule that contains errors will not be deployed. Errors can be viewed by pressing the Show Issues
button in the Rule Engine Manager.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab. A rule already exists and is enabled.
Procedure
1. Uncheck the Activate on Engine Update checkbox of the rule you want to disable.
2. Open the Rule Engine Manager by pressing Rule Engine.
3. Commit your changes by pressing the Apply Changes button in the Rule Engine Manager. The rule will be
deployed immediately after successful validation.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab.
1. Click on the name of the rule in the entity list to open the rule editor.
2. Change the rule code.
3. Press Save.
4. Optional: Create or modify additional rules.
5. Close the rule editor and apply changes to deploy the rules.
Caution
A modified rule is not automatically deployed. The deployment is initiated once you have pressed Apply
Changes in the rules overview. The rule must be enabled to be deployed.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rules tab.
Procedure
Caution
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Rule Engine tab.
The gamification workbench supports detecting issues with rules at design time and at runtime. Any
detected issues are displayed in the Rule Engine tab. Syntax errors are checked at design time,
after the user applies the changes.
Procedure
1. Reported rule warnings are displayed in a table, sorted by the rule which caused them.
2. Optional: Press the refresh button attached to the rule warnings table to refresh and check for new
warnings.
Prerequisites
You have logged on to the gamification workbench with the role TenantOperator or AppAdmin, and you have
opened the Rule Engine tab of the related app.
Context
The gamification service creates a rule engine instance for each app. Over time, the state of each rule engine
instance changes based on its usage. A recovery mechanism for different rule engine states has been
introduced to allow a clean recovery in case of errors, rule set changes, or system migrations. This mechanism
allows you to create and restore snapshots of the current rule engine instance session and its deployed rule set.
Snapshots are stored in the database.
Generation of snapshots
Using “apply changes” (see Update Rules [page 687] for details), the current rule set stored in the database is
deployed on the currently running rule engine instance. Technically, the current session, which includes all facts
and events, is upgraded to a new rule set. To assure compatibility of new rules with the existing session, rules
are being evaluated one by one. Compatible pairs of session and rule set are stored as snapshots.
Additionally, when receiving events via the “handleEvent” method, the session changes as well and requires
the same recovery mechanism. The gamification service will generate snapshots during event
execution in dynamic intervals.
The gamification service manages rules and corresponding snapshots in the following way:
● After each successful rule deployment (Apply Changes), the corresponding rule set as well as the session
are both tagged with a new version. The service stores at most the latest 10 versions.
● For the latest (currently active) version as well as the previous version the gamification service stores the
10 latest snapshots in slots numbered 1 through 10.
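The version retention described above can be sketched as a simple bounded history. This models only the bookkeeping rule (keep the latest 10, drop the oldest), not the actual service internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the retention policy: each deployment adds a version,
// and the oldest version is discarded once more than 10 are stored.
class VersionHistory {
    private static final int MAX_VERSIONS = 10;
    private final Deque<Integer> versions = new ArrayDeque<>();

    void deploy(int version) {
        versions.addLast(version);
        if (versions.size() > MAX_VERSIONS) {
            versions.removeFirst();   // oldest version is discarded
        }
    }

    int size() { return versions.size(); }
    int oldest() { return versions.peekFirst(); }
    int latest() { return versions.peekLast(); }
}
```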
Procedure
1. The Rule Engine section lists a table with all available rule engine snapshots and their details.
2. Choose a rule engine snapshot to recover and press its Recover button.
3. Read and confirm the modal dialog.
4. The gamification service is now recovering the snapshot. This may take a few seconds.
Note
Rule engine snapshots are continuously created while events are being sent. Older snapshots are
removed by the system during this process. It is recommended that you stop applications from sending
events to the rule engine while restoring snapshots.
Related Information
Notifications are messages that inform users about certain state changes, for example earned achievements,
new missions, new teams. They are considered "see and forget" information and won't stay long in the system.
Context
On one hand, notifications are created automatically when calling certain API methods. On the other hand, you
can also create and assign custom notifications by using the methods addCustomNotificationToPlayer
and addCustomNotificationToTeamMembers.
Notifications are delivered to players or teams by implementing a polling-based approach using the API
methods getNotificationsForPlayer and getAllNotifications.
The gamification service automatically creates notifications for users when calling certain API methods. The
table below lists all methods that implicitly generate notifications and explains the corresponding
notification parameters.
API Method Player Type Category Subject Details Message Date Created
Custom messages can usually be specified using an optional parameter <notificationMessage> of the
corresponding API method.
Examples:
Besides the automatically generated notification it is possible to add custom notifications to players or teams
using the methods addCustomNotificationToPlayer and addCustomNotificationToTeamMembers
from within rules.
The table explains how the notification parameters are used when creating custom notifications.
API Method Player Type Category Subject Detail Message Date Created
Context
Notifications are strictly defined as "see and forget". The gamification service stores only the last X
notifications for each player (currently X defaults to 25). To show notifications to players, a polling-based
approach has to be implemented using the following API methods:
● getNotificationsForPlayer(playerId, timestamp), called with the URL parameter app=APPNAME
Returns the latest notifications for a player starting from the timestamp. This mechanism allows other
applications to better track which notifications have been requested or displayed already. This is the
current approach for "user2service" communication. It works well with the user endpoint using JavaScript.
● getAllNotifications(timestamp), called with the URL parameter app=APPNAME
Returns all generated notifications for all players within one app starting from the provided timestamp.
This is the current approach for "application2service" communication. An application can query all
notifications for the app using the tech endpoint and forward the information to the user using custom
events or communication channels. This avoids having all clients in parallel polling for notifications.
Procedure
See the Notification Widget in the Helpdesk scenario (sap_gs_notifications.js) for more information on
how the polling of notifications can be implemented on the client side. The notification polling is handled as
follows:
1. Retrieve the gamification service server time on initialization, using the method getServerTime.
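A minimal sketch of this timestamp-based polling, written in Java for illustration (the real widget is JavaScript): the HTTP call to getNotificationsForPlayer is replaced by a stub list, and notifications are reduced to {timestamp, id} pairs.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of client-side notification polling: remember the last-seen
// server timestamp and only surface notifications newer than it, so each
// "see and forget" notification is shown at most once.
class NotificationPoller {
    private long lastSeen;                        // initialized from getServerTime

    NotificationPoller(long serverTime) { this.lastSeen = serverTime; }

    // fetched stands in for the getNotificationsForPlayer response;
    // each entry is a long[]{timestamp, id} pair.
    List<long[]> poll(List<long[]> fetched) {
        List<long[]> fresh = new ArrayList<>();
        long newest = lastSeen;
        for (long[] n : fetched) {
            if (n[0] > lastSeen) {
                fresh.add(n);
                newest = Math.max(newest, n[0]);
            }
        }
        lastSeen = newest;                        // advance the cursor after the batch
        return fresh;
    }
}
```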
Prerequisites
You have logged into the gamification workbench and opened the Terminal tab.
Context
The Terminal within the game mechanics area allows you to quickly execute one or more API calls. Make sure
that you have the appropriate access rights for executing the call.
Comprehensive documentation of the API can be found in your SAP Cloud Platform Gamification
subscription under Help API Documentation .
Procedure
1. Enter the list of JSON RPC calls as a JSON array: [JSON_RPC_CALL1, JSON_RPC_CALL2,…]
Example:
[ {"method":"getPlayerRecord", "params":["[email protected]"]} ]
2. Press Execute to execute the calls. Check Force synchronous execution checkbox to enforce sequential
execution of calls in the JSON array.
3. Review server response. You can view the detailed JSON response by clicking on the symbol on the right.
Note
The calls are executed in the context of the currently selected app (see dropdown box in the upper right
corner of the gamification workbench).
Press the Restore Example button in the Terminal section to show some example requests. Use the API
Documentation ( Help Open API Documentation ) to find a list of all available methods.
Related Information
Prerequisites
Navigate to the Terminal in the Game Design tab. Your user has the role AppAdmin.
Context
The Terminal allows you to send events that are typically sent to the host application.
Note
The Terminal should only be used to send events for testing purposes. If you send events for a user
that is used in a productive environment, the real achievements will be modified!
Procedure
1. Enter the list of JSON RPC calls with the method handleEvent.
[ {"method":"handleEvent", "params":[{"type":"myEvent","playerid":"[email protected]","data":{}}]} ]
2. Press Execute to execute the calls. Check Force synchronous execution checkbox to enforce sequential
execution of calls in a JSON array.
3. Review server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the event is sent successfully, the response is true.
4. All rules that listen for the corresponding event type (when clause) will be executed.
Prerequisites
Context
The Terminal allows you to execute all methods for retrieving the user achievements data.
Procedure
1. Enter the list of JSON RPC calls with the desired achievement query methods.
Example getPlayerRecord:
[ {"method":"getPlayerRecord", "params":["[email protected]"]} ]
2. Press Execute to execute the calls. Check Force synchronous execution checkbox to enforce sequential
execution of calls in a JSON array.
3. Review server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the call is sent successfully, you will see the result.
Prerequisites
You are logged into the gamification workbench and have opened the Logging tab.
Context
The logging view allows you to search the event log for the selected app. The event log includes all API calls
related to “Event Submission” as well as the corresponding API calls executed from within the rules, which
were triggered by the corresponding events.
Note
The maximum retention time for the event log is 7 days, but not exceeding 500,000 log entries.
Rules with an EventObject fact and one or more other facts (Player or
GenericFact) in the WHEN part can cause endless loops.
Understanding why such rule sets result in loops requires a deeper understanding of the gamification service
itself:
● Rules with fact-based conditions are triggered on changes of the respective fact or facts, for example
when a fact is inserted, updated, or retracted.
● handleEvent inserts a fact of type EventObject and fires all rules. That is, the THEN parts of all
rules that satisfy a fact-based condition involving EventObject will be executed.
● THEN execution may involve the modification of facts (insert, update, delete), which in turn may trigger
further rules. For example, insert a new GenericFact or update an existing fact (Player or
GenericFact). Rule execution runs until there are no more rules to fire.
● Endless loops occur if there are cycles in the rule execution graph, for example, one rule calling another
and vice versa. The gamification service loop detection will detect such loops at runtime and stop the
engine until the problems are resolved.
● The EventObject inserted by handleEvent is by default retracted automatically after all rules have
fired. Thus, if the WHEN part includes EventObject conditions and further fact conditions, for example,
Player(), the rule will trigger again if one of the respective facts changes and the overall condition is still
true.
● This can cause an endless loop. For example: the WHEN part of Rule 1 includes EventObject and queries for
the corresponding player (Player(playerid==$playerid)). Rule 2 expects a Player change only
(Player()) in its WHEN part. If both Rule 1 and Rule 2 include an update($player) in the THEN part, this will
result in an endless loop.
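The loop described in the last bullet can be sketched as the following pair of rule fragments; the event type and rule bodies are schematic only:

```
// Rule 1: triggered by the event, updates the matching Player fact
when
    $event : EventObject(type=='someEvent', $playerid : playerid) from entry-point eventstream
    $player : Player(playerid==$playerid)
then
    update($player);   // the Player change re-triggers Rule 2

// Rule 2: triggered by any Player change, updates the Player again
when
    $player : Player()
then
    update($player);   // updates the Player again, closing the cycle
```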
Mitigation strategy
● Use update(fact) with care. Consider whether it is needed and check for rules that could trigger accidentally.
● Minimize the number of update calls in the THEN part. Example: Only call update($player) if player
achievement data has changed and you want other rules to retrigger, for example, rules checking for mission
completion.
Both key and value are interpreted as Strings. Thus, an explicit type conversion is required if you want to
compare them with numbers. This type conversion is done using the standard Java approach for the different
numeric types, for example, Integer.parseInt(value) or Double.parseDouble(value).
Example:
[
{"method":"handleEvent", "params":
[{"type":"solvedProblem","playerid":"D053659","data":
{"relevance":"critical","processTime":15}}]}
]
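Applied to the example event above, the conversion might look like this; the map mirrors the event's data payload, with the numeric value read back as a String:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: event data values are interpreted as Strings, so numeric
// comparisons require an explicit conversion such as Integer.parseInt.
class TypeConversionExample {
    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        data.put("relevance", "critical");
        data.put("processTime", "15");    // stored as a String by the service

        int processTime = Integer.parseInt((String) data.get("processTime"));
        boolean longRunning = processTime >= 10 && "critical".equals(data.get("relevance"));
        System.out.println("longRunning=" + longRunning);  // prints longRunning=true
    }
}
```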
Related Information
Context
The integration of a (gamified) cloud application must consider the following aspects:
1. Sending gamification-relevant events to a player or a team, for example when the user has completed a task
for which the gamification service grants a point.
2. Giving feedback to the players/teams, for example by showing achievements, progress, and game
notifications.
3. Integrating the user management: creating or enabling players/teams, blocking players/teams, deleting
players/teams.
The following sections describe how you can deal with these aspects using the Web APIs provided. The sample
code shown is based on the demo application "Help Desk". The demo application's source code is also
available on GitHub.
Note
The sample code used to demonstrate the integration is not ready for production.
The Application Programming Interface (API) of the gamification service is the central integration point of your
application.
● Technical endpoint for integrating gamification events and user management in your back end.
● User endpoint for integrating user achievements in the application frontend.
It is recommended to use the technical endpoint only for executing methods of the gamification service that
must not be executed by the users themselves, such as sending events to the gamification service that trigger
certain achievements or performing user management tasks, creating players for example. Authentication and
authorization in this case is based on a technical user that is created for the application itself.
The user endpoint should be used for accessing user-related information, for example earned achievements,
available achievements/missions, notifications, and others. A great advantage of this approach is that the
gamification service manages access control based on the user roles, for instance to make sure that a user
cannot access other users' data. For this, the authenticated user must be passed to the user endpoint.
Note
The whole integration can be done by using only the technical endpoint. However, in this case you must
manage access control yourself.
The documentation for the API can be found in your gamification service under Help API Documentation
or at https://gamification.hana.ondemand.com/gamification/documentation/documentation.html.
In an SAP Cloud Platform setting, we assume that the gamified app and the gamification service subscription are
located in the same subaccount. Furthermore, we assume that the application back end is written in Java, while
the application front end is based on HTML5 or SAPUI5.
The technical endpoint is used to send gamification-relevant events and perform user management tasks from
the application back end. Communication is based on a BASIC AUTH destination that uses the user name and
password of a technical user.
Note
For productive settings the client-side event sending should support resending events in case of failures,
planned or unplanned service downtimes. For instance, short planned downtimes (less than 5 minutes
according to Cloud Platform maintenance schedules) are required to apply regular gamification service
updates.
The easiest way to show player achievements is to integrate a default user profile that comes with the
gamification service subscription as an iFrame in the application's web front end.
To implement a user profile or single widgets (for example a progress bar tailored to the application's front
end), we recommend you use the user endpoint in combination with a local proxy servlet and an app-to-app
SSO destination. The proxy servlet prevents running into cross-site scripting issues and the app-to-app SSO
destination automatically forwards the credentials of the authenticated user to the gamification service. This
allows reuse of the access control mechanisms offered by the gamification service.
Since the user endpoint is used from a browser it is protected against cross-site request forgery. Accordingly,
an XSRF token has to be acquired by the client first.
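A sketch of this token handshake follows. It only builds the two requests without sending them; the X-CSRF-Token/Fetch header convention and the endpoint path are assumptions based on common SAP practice, so verify them against the service documentation:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: build the token-fetch GET and the subsequent JSON-RPC POST.
// Header names and the endpoint path are illustrative assumptions.
class XsrfRequestSketch {
    public static void main(String[] args) {
        URI endpoint = URI.create("https://example.invalid/gamification/api/rest/user");

        // Step 1: a GET with X-CSRF-Token: Fetch asks the server to issue a token.
        HttpRequest fetchToken = HttpRequest.newBuilder(endpoint)
                .header("X-CSRF-Token", "Fetch")
                .GET()
                .build();

        // Step 2: the token from the response header is sent back on the POST.
        String token = "<token-from-response>";   // placeholder, not a real token
        HttpRequest rpcCall = HttpRequest.newBuilder(endpoint)
                .header("X-CSRF-Token", token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "[{\"method\":\"getPlayerRecord\",\"params\":[\"[email protected]\"]}]"))
                .build();

        System.out.println(fetchToken.headers().firstValue("X-CSRF-Token").orElse("")); // prints Fetch
        System.out.println(rpcCall.method());                                           // prints POST
    }
}
```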
Context
If the user performs actions in the application that are relevant to gamification, the gamification service has to
be informed by invoking the corresponding API method. To prevent cheating this should be done in the
application back end using the technical endpoint offered by the API.
Note
For productive settings the client-side event sending should support resending events in case of failures,
planned or unplanned service downtimes. For instance, short planned downtimes (less than 5 minutes
according to Cloud Platform maintenance schedules) are required to apply regular gamification service
updates.
Procedure
Note
See also:
○ Demo application source code: https://github.com/SAP/gamification-demo-app
○ API Documentation: SAP Cloud Platform Gamification subscription, under Help API
Documentation .
Context
The gamification service subscription includes a default user profile, which you can include in your application
as an <iFrame/>.
https://<Subscription URL>/gamification/userprofile.html?name=<userid>&app=<appid>
2. Include the default user profile in your HTML5 code as an iFrame:
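A minimal sketch of such an include, using the URL pattern above (the placeholders are kept as in the original; width and height are arbitrary values):

```html
<iframe src="https://<Subscription URL>/gamification/userprofile.html?name=<userid>&app=<appid>"
        width="400" height="600" frameborder="0">
</iframe>
```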
Prerequisites
Configure your subaccount to allow principal propagation. For more information, see HTTP Destinations [page
217]
Context
The integration of custom gamification elements tailored to your application's user interface requires the
development of custom JavaScript/HTML5 widgets. To avoid cross-site-scripting issues, you should introduce
a proxy servlet in the application. This servlet forwards JSON-RPC requests to the user endpoint using an
App-to-App SSO destination. This way, the gamification service has access to the user principal and the
built-in access control is active.
Procedure
API Documentation: SAP Cloud Platform Gamification subscription under Help API Documentation .
Context
The players (users) must be explicitly created before they can be used to assign achievements. A player
context is always valid for one tenant and therefore can be used across multiple apps (managed in one tenant).
Procedure
1. Register (create) a player (user) for a tenant subscription using the API method createPlayer.
Note
This is done automatically on the first event if the flag Auto-Create Players is set to true for the given
app.
2. (Optional) Initialize a player (user) by creating a rule listening for an event of type initPlayerForApp.
a. Precondition: The player is registered.
b. On event: if a player has not yet been initialized for the given app, an event of type initPlayerForApp
is automatically inserted into the engine. The THEN part of this rule should include the user-defined
init actions, for example assigning initial missions.
c. (Optional) If you want players to be created with a display name you can add the optional parameter
playerName to the event. During the automated player creation this parameter is used for setting the
player name. Example:
{"method":"handleEvent","params":
[{"type":"linkProvided","playerid":"[email protected]", "playerName":
"Maria Rossi", "data":{}}]}
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Game Design tab.
Context
Introducing gamification is a continuous process, since the game mechanics can be modified
at any point in time. For example, the number of points a player can reach might be changed in order to change
the behavior of the user.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Analytics tab.
Context
You can view the statistics of achievements such as points and badges. The points metrics that can be viewed
are all point categories and badges that are maintained for your application.
The following aggregations can be selected (the values for badges cannot be aggregated):
Note
The analytics are currently limited to point categories and badges. Analytics on player level are not
available due to privacy reasons.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have
opened the Analytics tab. You have selected the statistics you are interested in. A time range must be selected.
Context
You can view the statistics of achievements such as points and badges. The selected values can be compared
to an earlier time range in order to identify changes in the assignment of achievements.
Note
View a lag chart for a comparison of the selected data to an earlier time range.
1. Select the Enable lag chart checkbox.
2. Select the lag amount for comparison.
The lag chart displays the difference between the aggregated values and the values before the lag amount. For
example, when you select the sum of a point category for the current month, the lag chart will show the
difference compared to the previous month, provided you have selected a lag amount equal to one month.
In this case study, a demo application will be gamified in order to demonstrate the implementation and
configuration of a gamification concept step by step.
The demo host application is a “Help Desk” software, which is typically used by call center employees.
Customers can create tickets (for an issue with software or hardware, for example) and call center employees
can process these tickets.
The image below shows the welcome screen of the Help Desk application. The welcome screen appears once
the user has successfully authenticated with the identity provider. The user must have the role helpdesk. The
assignment of roles is described in Roles [page 653].
Context
The demo application (Help Desk) will be automatically subscribed for each subaccount that is subscribed to
the gamification service.
The gamification service has already been integrated within the demo application. Events, such as the
processing of tickets, are sent to the gamification service of the subaccount subscription, and
the achievements are retrieved via the corresponding interfaces.
Since the gamification service and the demo application are subscriptions, a destination has to be enabled in
order to allow communication between the services. A technical user is also required in order to allow secure
communication.
Procedure
The Help Desk app can be accessed via the menu Help Open Help Desk . The following link will be used:
https://<SUBSCRIPTION_URL>/helpdesk. The role helpdesk must be granted to the user.
Context
The user requires the role helpdesk in order to access the help desk application.
Procedure
The destination requires a technical user for secure communication between your application and the
gamification service subscription.
Context
Note
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user . SAP
Service Marketplace users are automatically registered with the SAP ID service, which controls user access
to SAP Cloud Platform.
Procedure
1. Request a technical user via SMP. (You can use your subaccount user as well, but this is not recommended
for security reasons.)
2. In the SAP Cloud Platform cockpit, choose the Services tab.
3. Click the Gamification Service tile.
4. Click on the Configure Gamification Service link.
Related Information
Prerequisites
For more information about how to install the SAP Cloud Platform tools, see Eclipse Tools [page 1007].
Context
The demo application's (Help Desk) source code is also available in GitHub .
This section explains how to set up an Eclipse project, deploy the demo application on SAP Cloud Platform, and
configure it to run with your gamification service subscription.
Procedure
3. Open Eclipse with SAP Cloud Platform tools and choose File Import .
5. Choose the folder containing the demo application sources and choose Finish.
6. Deploy and start the demo application on the cloud from Eclipse IDE. Select Java Web as a Runtime.
For more information, see Deploying on the Cloud from Eclipse IDE [page 1469].
7. Configure destinations and roles for the deployed application. Use the same configuration as described in
section Configure Available Subscription [page 710].
Without gamification, the host application does not allow the user (call center employee) to see any feedback
on his/her daily work. The user also does not really know how s/he performs compared to other colleagues.
To meet the introduced gamification requirements, an example gamification design is introduced. All users (call
center employees) are considered players to whom the gamification concept applies.
Point Categories
Levels
Based on the number of experience points a user gains, s/he can reach different levels. Three levels are
introduced:
“Competent” - this level can be reached once the user has gained 10 “Experience Points”
“Expert” - this level can be reached once the user has gained 50 “Experience Points”
Badges
Based on the successful completion of a mission, the user will gain a badge. The following badges are
introduced:
“Troubleshooting Champion”
Missions
Missions will be introduced to motivate continuous efforts. The following missions will be introduced:
“Troubleshooting”
Rules
For each processed ticket, the user will gain 1 “Experience point”.
For each processed ticket categorized as “critical”, the user will gain 1 “Critical Tickets” point.
Once a user has processed 5 critical tickets (gained 5 “Critical Tickets” points), the “Troubleshooting” mission
is completed.
Once the mission troubleshooting is completed, the user will gain the “Troubleshooting Champion” badge.
The gamification concept introduced above can be generated automatically within the gamification workbench.
The generated gamification concept is designed for the demo application only and provides an
example of a gamification concept.
The demo content for the Help Desk application can be generated in the OPERATIONS tab. You need to have
the TenantOperator role. Go to "Demo Content Creation" and select the Create HelpDesk Demo button.
After a short while, once the content generation has completed successfully, you will see the notification
Gamification concept successfully created. The demo content is generated into a new app: HelpDesk.
The generated gamification concept contains more gamification elements than described in Switch Apps [page
660] to provide additional examples.
The following sections describe how the gamification design is realized in the gamification workbench.
The gamification workbench makes it possible to manage gamification concepts for multiple apps. An app
must be created before the gamification concept can be implemented.
Procedure
1. Go to the OPERATIONS tab. The user must have the TenantOperator role.
2. Go to Apps.
3. Press the Add button.
4. Enter App name: “HelpDesk”.
5. Optional: Enter an app description.
6. Optional: Enter owner.
7. Click on Save.
Next Steps
Once the app has been created, it must be selected in the top right corner so that the gamification concept can
be implemented for it.
Procedure
7. Press Add.
8. Enter Name: “Critical Tickets”.
9. Enter Abbreviation: “CT”.
10. Select point type: “ADVANCING”.
11. Check Hidden from Player
12. Press Create.
You should now see both point categories (“Experience Points” and “Critical Tickets”) in the list for Points.
Procedure
7. Press Add.
8. Enter Name: “Competent”.
9. Select Points: “Experience Points”.
Results
You should now see all three levels (“Novice”, “Competent”, and “Expert”) in the list for Levels.
Procedure
You should now see all badges (“Troubleshooting Champion”) in the list for Badges.
Procedure
You should now see all missions (“Troubleshooting”) in the list for Missions.
Context
Procedure
Procedure
1. Press Add.
2. Enter Name: “GiveXPCritical”
3. Enter Description: “Give additional Experience Points for critical ticket.”
4. Enter the following text for the trigger:
Procedure
1. Press Add.
2. Enter Name: “GiveCT”
3. Enter Description: “Give Critical Ticket Points for processed ticket.”
4. Enter the following text for the trigger:
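The trigger text for this rule is not reproduced here. As a purely hypothetical sketch (the event type ticketProcessed and the data key relevance are invented names, not the demo content's actual identifiers), such a trigger might look like:

```
$p : Player($playerid : uid)
$event : EventObject(type=='ticketProcessed', $playerid==playerid,
                     data['relevance']=='critical') from entry-point eventstream
```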
Procedure
1. Press Add.
2. Enter Name: “AssignMissionTS”
3. Enter Description: “Assign Troubleshooting mission.”
4. Enter the following text for the trigger:
$p : Player($playerid : uid)
$event : EventObject(type=='initPlayerForApp', $playerid==playerid) from
entry-point eventstream
updateAPI.addMissionToPlayer($playerid, 'Troubleshooting');
update($p);
Procedure
1. Press Add.
$p : Player($playerid : uid);
eval(queryAPI.hasPlayerMission($playerid, 'Troubleshooting') == true)
eval(queryAPI.getPointsForPlayer($playerid, 'Critical Tickets').getAmount() >= 5)
updateAPI.completeMission($playerid, 'Troubleshooting');
updateAPI.addBadgeToPlayer($playerid, 'Troubleshooting Champion', 'You solved 5 critical tickets!');
update($p);
1.7.9.5.6.6 Result
You should now see the created rules in the list for Rules.
Results
The SAP Cloud Platform Git service lets you store and version the application source code. It is based on Git,
the widely used open-source system for revision management of source code that facilitates distributed and
concurrent large-scale development workflows.
You can use any standard compliant Git client to connect to the Git service. Many modern integrated
development environments, including but not limited to Eclipse and the SAP Web IDE, provide tools for working
with Git. There are also native clients available for many operating systems and platforms.
Environment
Features
● Records only differences between versions: Only the differences between versions are recorded, allowing for compact storage and efficient transport.
● Cost-effective and simple: Create and merge branches, supporting a multitude of development styles. Git is widely used, supported by many tools, and highly distributed. Every clone of a repository contains the complete version history.
● Operations on local repository clone: Perform almost all operations locally; this is very fast and does not require you to be permanently online. A connection is required only when synchronizing with the Git service.
While Git can manage and compare text files very efficiently, it was not designed for processing large files or
files with binary content, such as libraries, build artifacts, multimedia files (images or movies), or database
backups. Consider using the document service or some other suitable storage service for storing such content.
To ensure best possible performance and health of the service, the following restrictions apply:
● The size of an individual file cannot exceed 20 MB. Pushes of changes that contain a file larger than 20 MB
are rejected.
● The overall size of the bare repository stored in the Git service cannot exceed 500 MB.
● The number of repositories per subaccount is not currently limited. However, SAP may take measures to
protect the Git service against misuse.
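Because pushes that contain a file larger than 20 MB are rejected, it can save a failed push to scan a commit locally first. A minimal sketch with stock git, run here against a small throwaway repository (the demo setup and file names are illustrative only):

```shell
# Demo setup: a throwaway repository with one small file (illustration only).
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "hello" > app.js
git add app.js
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial"

# List every file in HEAD whose blob exceeds 20 MB (the push limit).
# Columns of `git ls-tree -r -l`: mode, type, object, size, path.
git ls-tree -r -l HEAD | awk '$4 > 20*1024*1024 {print $4, $5}' > oversized.txt

wc -l < oversized.txt   # number of oversized files; 0 in this demo repository
```

Run the same `git ls-tree` pipeline in your real working copy before pushing to spot files that would be rejected.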
Third-Party Notice
The SAP Cloud Platform Git service makes use of the Git-Icon-1788C image made available by Git (https://git-
scm.com/downloads/logos ) under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0)
http://creativecommons.org/licenses/by/3.0 .
Related Information
In the SAP Cloud Platform cockpit, you can create and delete Git repositories, as well as lock and unlock
repositories for write operations. In addition, you can monitor the current disk consumption of your
repositories and perform garbage collections to clean up and compact repository content.
Related Information
In the SAP Cloud Platform cockpit, you can create Git repositories for your subaccounts.
Prerequisites
Context
Note
To create a repository for the static content of an HTML5 application, see Create an HTML5 Application
[page 1714].
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the subaccount.
● Name (mandatory): A unique name starting with a lowercase letter, followed by digits and lowercase letters. The name is restricted to 30 characters.
● Description (optional): A descriptive text for the repository. You can change this description later on.
● Create empty commit: An initial empty commit in the history of the repository. This might be useful if you want to import the content of another repository.
4. Choose OK.
5. To navigate to the details page of the repository, click its name.
Results
The URL of the Git repository appears under Source Location on the detail page of the repository. You can use
this URL to access the repository with a standard-compliant Git client. You cannot use this URL in a browser to
access the Git repository.
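The naming rule for repositories (a lowercase letter followed by lowercase letters and digits, at most 30 characters) can be checked locally before you create one. A sketch in shell; the `check_name` helper and the sample names are hypothetical:

```shell
# Validate a candidate repository name: must start with a lowercase letter,
# continue with lowercase letters or digits, and be at most 30 characters.
check_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9]{0,29}$'
}

check_name "myrepo1" && echo valid      # prints "valid"
check_name "1badname" || echo invalid   # prints "invalid" (starts with a digit)
```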
Related Information
Permissions for Git repositories are granted based on the subaccount member roles that are assigned to users.
To grant a subaccount member access to a Git repository, assign one of these roles: Administrator, Developer,
or Support User.
Prerequisites
For details about the permissions associated with the individual roles, see Security [page 738].
Procedure
Make sure that you assign at least one of these roles: Administrator, Developer, or Support User.
Related Information
In the SAP Cloud Platform cockpit, you can change the state of a Git repository temporarily to READ ONLY to
block all write operations.
Prerequisites
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
2. In the list of Git repositories, locate the repository you want to work with and follow the link on the
repository's name.
3. On the details page of the repository, choose Set Read Only.
The state flag of the repository changes from ACTIVE to READ ONLY and all further write operations on this
repository are prohibited.
Note
To unlock the repository again and allow write access, choose Set Active on the details page of the
repository.
In the SAP Cloud Platform cockpit, you can delete a Git repository unless it is associated with an HTML5 application. In that case, delete the HTML5 application instead.
Prerequisites
Context
Caution
Be very careful when performing this operation. Deleting a Git repository also permanently deletes all data and the complete history. If you might need to restore the content later, clone the repository to some other storage before deleting it from the Git service.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the appropriate subaccount.
In the SAP Cloud Platform cockpit, you can trigger a garbage collection for a repository to clean up
unnecessary objects and compact the repository content aggressively.
Prerequisites
Context
Perform this operation from time to time to ensure the best possible performance for all Git operations. The Git
service also automatically runs normal garbage collections periodically.
Note
This operation might take a considerable amount of time and may also impact the performance of some
Git operations while it is running.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the subaccount.
Results
The garbage collection runs in the background. You can use the Git repository without restrictions while the
process is running.
We assume that you are familiar with Git concepts and that you have access to a suitable Git client, for example, SAP Web IDE, for performing Git operations.
Related Information
The URL of the Git repository is shown under Source Location on the details page of the repository. Use this
URL to access the repository using a Git client.
Prerequisites
In the subaccount where the repository resides, you must be a subaccount member who is assigned the role
Administrator, Developer, or Support User.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
You need to clone the Git repository of your application to your development environment.
Procedure
1. In the cockpit, copy the link to the Git repository of your application.
○ To use Eclipse:
1. Start the Eclipse IDE.
2. In the JavaScript perspective, open the Git Repositories view.
3. Choose the Clone a Git repository icon.
4. Paste the link that points to the Git repository of your application.
5. If prompted, enter your SCN user and password.
6. Choose Next.
○ To use the Git command line tool:
1. Enter the following command:
$ git clone <repository URL>.
2. If prompted, enter your SCN user ID and password.
Related Information
EGit/User Guide
Web IDE: Cloning a Repository
The Git fetch operation transfers changes from the remote repository to your local repository.
Prerequisites
● You must be a subaccount member who is assigned the role Administrator, Developer, or Support User.
● You have cloned the repository to your workspace, see Clone Repositories [page 734].
Context
Refer to the SAP Web IDE documentation if you want to fetch changes to SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to fetch changes from a remote Git repository.
Related Information
The Git push operation transfers changes from your local repository to a remote repository.
Prerequisites
● You must be a subaccount member who is assigned the role Administrator or Developer.
● You have already committed the changes you want to push in your local repository.
● You have ensured that the e-mail address in the push commit matches the e-mail address you registered
with the SAP ID service.
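The e-mail address recorded in a commit is taken from your local Git configuration, so it is worth verifying it before pushing. A sketch on a throwaway repository; the name and address are placeholders:

```shell
# Demo setup: a throwaway repository (illustration only).
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Set the identity that will be recorded in commits; it should match the
# e-mail address registered with the SAP ID service (placeholder below).
git config user.name "Jane Doe"
git config user.email "jane.doe@example.com"

# Verify what the next commit will record.
git config user.email > email.txt
cat email.txt
```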
Context
Refer to the SAP Web IDE documentation if you want to push changes from SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to push changes to a remote Git repository.
Procedure
Related Information
The Git service offers a web-based repository browser that allows you to inspect the content of a repository.
Prerequisites
In the subaccount where the repository resides, you must be a subaccount member who is assigned the role
Administrator, Developer, or Support User.
Context
The repository browser gives read-only access to the full history of a Git repository, including its branches and
tags as well as the content of the files. Moreover, it allows you to download specific versions as ZIP files.
The repository browser automatically renders *.md Markdown files into HTML to make it easier to create
documentation.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
You can find the commits of a given user containing the user's name and e-mail address.
Procedure
1. Clone the Git repositories of the account to which the user had write access.
For more information, see Determine the Repository URL [page 734] and Clone Repositories [page 734].
2. On each of the Git repositories, execute the following commands:
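The original command listing is not reproduced here; with stock git, one common way to list all commits carrying a given user's e-mail address across all branches is shown below (run on a throwaway demo repository; the identity is a placeholder):

```shell
# Demo setup: a throwaway repository with one commit by the user in question.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name="Jane Doe" -c user.email="jane.doe@example.com" \
    commit -q --allow-empty -m "demo commit"

# List all commits on any branch whose author matches the e-mail address.
git log --all --format='%H %an <%ae>' --author="jane.doe@example.com" > commits.txt
cat commits.txt
```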
1.8.3 Security
Access to the Git service is protected by SAP Cloud Platform roles and granted only to subaccount members.
Restrictions
You cannot host public repositories or repositories with anonymous access on the Git service.
Authentication
Authentication for the Git service is performed against the configured platform identity provider. The following
providers are supported:
● SAP ID Service
Users can use their SAP ID Service credentials to authenticate and access the Git service.
● Custom Identity Authentication tenant
The Git service supports basic authentication against a custom Identity Authentication tenant that is
configured as a platform identity provider. If a subaccount is configured to use a custom Identity
Authentication tenant as the platform identity provider as described in Platform Identity Provider [page
2431], then basic authentication is done against that custom Identity Authentication tenant.
You can add members to a subaccount from the SAP ID Service as well. However, these users cannot be
used for basic authentication. This is because the security service doesn't support mixed use of custom
platform Identity Authentication tenant users and SAP ID service users for basic authentication in the
same subaccount. For more information, see the notes on basic authentication in Authentication [page
2364].
If a custom Identity Authentication tenant is configured as the platform identity provider to grant Git
permissions to users in this tenant, assign the respective roles to the respective user IDs (Pxxxxxx).
For more information on platform scopes, see Platform Scopes [page 1910].
Note
Before the Git service supported custom Identity Authentication tenants (until October 25, 2018), only
users of the SAP ID Service could perform basic authentication when executing Git commands.
Permissions
The permitted operations depend on the subaccount member role of the user.
● Clone a repository
● Fetch commits and tags
Write access is granted to all users who are assigned the Administrator or Developer role. These users are
allowed to do the following:
● Create repositories.
● Push commits
● Push tags
Note
If the repository is associated with an HTML5 application, pushing a tag defines a new version for the
HTML5 application. The version name is the same as the tag name.
Only users who are assigned the Administrator role are allowed to do the following:
● Delete repositories
● Run garbage collection on repositories
● Lock and unlock repositories
● Delete remote branches
● Delete tags
● Push commits committed by other users (forge committer identity)
● Forcefully push commits, for example to rewrite the history of a Git repository
● Forcefully push tags, for example to move the version of an HTML5 application to a different commit
You can also use custom roles to grant permissions for using the Git service. For more information on custom
roles, see Manage Custom Platform Roles [page 1909].
Related Information
Governments place legal requirements on industry to protect data and privacy. We provide features and
functions to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by
providing security features and data protection-relevant functions, such as blocking and deletion of
personal data. In many cases, compliance with applicable data protection and privacy laws is not covered
by a product feature. Furthermore, this information should not be taken as advice or a recommendation
regarding additional features that would be required in specific IT environments. Decisions related to data
protection must be made on a case-by-case basis, taking into consideration the given system landscape
and the applicable legal requirements. Definitions and other terms used in this documentation are not
taken from a specific legal source.
Handle personal data with care. You as the data controller are legally responsible when processing personal
data.
If you need to know which repositories contain Git commits of a given user that contain the user's name and e-
mail address, see Find Commits of a Given User [page 737].
If you need help with this, open a ticket on BC-NEO-GIT as described in 1888290 . Please indicate the user’s
e-mail address and the account where Git repositories reside to which this user had write access.
If you need to anonymize a user's e-mail address and name in a given Git repository, this requires rewriting the history of the Git repository. Doing so changes the IDs of all affected commits and their successor commits.
For more information,
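As a sketch of what such a history rewrite involves, the stock-git `git filter-branch` command can replace a user's identity on every matching commit. It is shown here on a throwaway repository; all names and addresses are placeholders, and `git filter-repo` is the commonly recommended modern alternative:

```shell
# Demo setup: a throwaway repository with one commit by the user
# to be anonymized (all identities are placeholders).
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name="Jane Doe" -c user.email="jane.doe@example.com" \
    commit -q --allow-empty -m "demo"

# Rewrite every commit on every ref, replacing the matching identity.
export FILTER_BRANCH_SQUELCH_WARNING=1
git filter-branch -f --env-filter '
  if [ "$GIT_AUTHOR_EMAIL" = "jane.doe@example.com" ]; then
    export GIT_AUTHOR_NAME="anonymous"
    export GIT_AUTHOR_EMAIL="anonymous@example.invalid"
  fi
  if [ "$GIT_COMMITTER_EMAIL" = "jane.doe@example.com" ]; then
    export GIT_COMMITTER_NAME="anonymous"
    export GIT_COMMITTER_EMAIL="anonymous@example.invalid"
  fi
' -- --all

git log --format='%an <%ae>' > authors.txt
cat authors.txt
```

After such a rewrite, the rewritten history must be force-pushed, which on the Git service requires the Administrator role.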
If you intend to delete a subaccount or terminate your contract you can export the Git repositories by cloning
them. For more information, see Determine the Repository URL [page 734] and Clone Repositories [page 734].
Related Information
Following best practices can help you get started with Git and to avoid common pitfalls.
If you are new to Git, we strongly recommend that you read a text book about Git, search the Internet for
documentation and guides, or get in touch with the large worldwide community of developers working with Git.
Note
The only valid exception to this guideline is if you accidentally pushed a secret, for example, a
password, to the Git service.
● Don't create dependencies on changes that have not yet been pushed.
While Git provides some powerful mechanisms for handling chains of commits, for example, interactive
rebasing, these are usually considered to be for experienced users only.
● Do not push binary files.
Git efficiently calculates differences in text files, but not in binary files. Pushing binary files bloats your
repository size and affects performance, for example, in clone operations.
● Store source code, not generated files and build artifacts.
Keep build artifacts in a separate artifact repository because they tend to change frequently and bloat your
commit history. Furthermore, build artifacts are usually stored in some sort of binary or archive format that
Git cannot handle efficiently.
● Periodically run garbage collection.
Trigger a garbage collection in the SAP Cloud Platform cockpit from time to time to compact and clean up
your repository. Also run garbage collection regularly for repositories cloned to your workplace. This will
minimize the disk usage and improve the performance of common Git commands.
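For a local clone, such a cleanup can be triggered with stock git; a sketch on a small throwaway repository:

```shell
# Demo setup: a throwaway repository with a few commits (illustration only).
repo=$(mktemp -d)
cd "$repo"
git init -q .
for i in 1 2 3; do
  echo "revision $i" > notes.txt
  git add notes.txt
  git -c user.name=demo -c user.email=demo@example.com commit -q -m "rev $i"
done

# Compact the object store and prune unreachable objects immediately.
git gc --quiet --aggressive --prune=now

# Inspect the resulting on-disk size of the repository.
git count-objects -v -H > size.txt
cat size.txt
```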
While working with the Git service, you might occasionally encounter common problems and error messages.
The actual error messages and their presentation depend on the Git client you are using for communication
with the Git service.
General Issues
● All remote operations on a repository fail with Authentication failed for ....
Make sure you've entered the correct SAP ID credentials. Verify that you can log on to the SAP Cloud
Platform, for example, to the cockpit. If that fails as well, your subaccount may have been locked
temporarily due to too many failed logon attempts. If the problem persists, contact SAP Support for help.
● A remote operation on a repository fails with Git access forbidden.
This message means you don't have permission to access the repository at all, or to perform the requested
Git operation. Ensure that you are member of the subaccount that owns the repository. For read access
(clone, fetch, pull), you must be assigned the role Administrator, Developer, or Support User. For write
access (push, push tags), you must be assigned the Administrator or Developer role. For more information
about required roles for certain Git operations, see Security [page 738].
● Change pushes fail with a message similar to: You are not allowed to perform this operation.
To push into this reference you need 'Push' rights. ... HEAD -> master
(prohibited by Gerrit).
This message means you are not assigned the subaccount member role Developer or Administrator, or
that the repository is currently locked for write operations. Check your roles in the SAP Cloud Platform
cockpit or ask a subaccount administrator to assign the necessary roles. Verify the state of the repository
in the cockpit and unlock it to enable write operations.
Currently, the Git service does not support the Gerrit code review workflow. To use it, you need to run your own Gerrit server, which you integrate into SAP Cloud Platform using the cloud connector.
● Change pushes fail with You are not committer ....
● Change pushes fail with Pack exceeds the limit of ..., rejecting the pack.
This error message indicates that accepting the change would exceed the maximum size of your Git repository. To ensure the best possible performance and health of the service, the Git service imposes a hard limit of 500 MB on the size of a repository. The SAP Cloud Platform cockpit shows this limit as well as the current size of your repository.
Run a garbage collection in the SAP Cloud Platform cockpit to clean up unnecessary objects and compact
the repository content. If doing this doesn't significantly reduce the size of the repository, it's usually an
indication that in addition to source code, the repository contains build artifacts or some other binary data
that cannot be compressed efficiently. Remove such files from the history of the repository and consider
storing them outside the Git service.
● Change pushes fail with Object too large (... bytes), rejecting the pack. Max object
size limit is ... bytes.
This error message indicates that the commit you are trying to push contains files that are too large to be
stored by the Git service. The Git service imposes a hard limit of 20 MB as the maximum size of individual
files in a repository to ensure the best possible performance and health of the service. Remove the file or
files that are too big from the commit and push again.
Related Information
SAP Cloud Platform Enterprise Messaging employs a centralized message-oriented architecture. It is more
scalable and reliable compared to the traditional point-to-point communication model.
The traditional point-to-point communication model is a decentralized one in which applications and services communicate with each other directly. In contrast, the service decouples communication between the sending and receiving applications and ensures the delivery of messages and events between them.
Queues
The service enables applications to communicate with each other through message queues. A sending
application sends a message to a specific named queue. There is a one-to-one correspondence between a receiving application and its queue. The message queue retains the messages until the receiving application consumes them. You can manage these queues using the service.
Topics
The service enables a sending application to publish messages and events to a topic. Receiving applications must be subscribed to that topic and be active when the message is sent, because topics do not retain messages. Use this method when each message needs to be consumed by a number of receiving applications. Topics are managed programmatically.
Events
An event is a message that is sent to notify a consumer that an object has changed. An event source is the
system or application from which the event originates. A receiving application needs to create a connection to
the event source to facilitate the flow of events. The service has the capability of receiving events from an event
source, event look up, and event discovery. Different event sources can publish events to the service.
A message client allows you to connect to the service using its unique credentials to send and receive messages. It can run within SAP Cloud Platform or outside it. You can create multiple message clients, distinguished by a set of credentials that consists of a namespace and connection rules defining the list of queues or topics to which the message client can send and receive messages.
The namespace is a unique prefix that identifies all the queues or topics that have been created in the context of a particular message client. When you manage queues or topics in the service, providing the namespace allows message clients to identify the queues or topics with which they should communicate.
You define connection rules in the service descriptor when you create a service instance. These rules specify the queues or topics to which a message client can publish messages or from which it can consume them.
The default service plan facilitates connections between different message clients using the unique credentials defined in the service descriptor. Each message client is associated with a set of queues and topics, all of which are exposed to other message clients through its unique credentials. In this way, the queues and topics of the different message clients in a subaccount can exchange messages and events through the service.
SAP Cloud Platform Enterprise Messaging (Beta) service is at the heart of a centralized message-oriented
architecture. It is more scalable and reliable compared to the traditional point-to-point communication model.
The traditional point-to-point communication model is a decentralized one where applications and services
directly communicate with each other. The service decouples communication between the sending and
receiving applications and ensures the delivery of messages between them.
The service consists of one or more messaging hosts. You can think of a messaging host as a space or domain
in the service where your message queues and topics reside.
The messaging models that the service supports include the following:
● In the point-to-point model, the service enables applications to communicate with each other through
message queues. A sending application sends a message to a specific named queue. Only one receiving
application retrieves the messages from the queue intended for it. The message queue retains the
messages until the receiving application consumes it. You can manage these queues using SAP Cloud
Platform cockpit.
Perform the following tasks to configure and set up the enterprise messaging service in your subaccount.
Use SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage messaging
hosts. A messaging host is a space or domain in enterprise messaging service where the message queues and
topics reside.
Prerequisites
You have an SAP Cloud Platform subaccount with messaging hosts provisioned in it.
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose Services. You can see the list of services available to you.
For more information about accessing services, see Using Services in the Neo Environment [page 1740].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
○ Edit the description of a messaging host by selecting it and choosing Edit Description.
○ Create and manage queues in your messaging host. For more information, see Manage Queues [page
749].
Use SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage queues in your
messaging hosts.
Context
Messaging hosts use queues to enable point-to-point communication between two Java applications. You can
configure a queue to deliver messages in different ways when multiple applications are connected to it. This
property of a queue is called access type and it can be one of the following values:
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose Services in the navigation area. You can see the list of services available to you.
For more information about accessing services, see Using Services in the Neo Environment [page 1740].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
5. Select a messaging host.
6. Choose Queues in the navigation area.
You can also search for a queue by typing its name in the Search field. The list of queues is filtered to match
the pattern you have entered.
7. To create a new queue, perform the following substeps.
a. Choose Create Queue.
b. Enter a queue name.
c. Select an access type.
d. Choose Save.
8. Manage the queues by performing one or more of the following administrative tasks:
Use SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to create and manage
application bindings to messaging hosts.
Context
To complete the messaging setup, create application bindings that connect Java applications to messaging
hosts. After an application binding has been created, applications can send messages to any queue or topic in
the messaging host. They can also receive messages from any queue or topic in the messaging host.
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose Services in the navigation area.
For more information about accessing services, see Using Services in the Neo Environment [page 1740].
3. Choose Enterprise Messaging.
4. Choose Application Bindings in the navigation area.
5. To create an application binding:
a. Choose Create Application Binding. At least one Java application must be available to create an
application binding. For more information on developing Java applications, see Developing Java
Applications [page 1442].
b. Select a Java application.
c. Select the messaging host to which you want to bind the application.
d. Enter a name for the application binding. This name must be unique across all application bindings
associated with the selected Java application.
e. Choose Save. You may need to restart the application, depending on how you developed it.
The Monitoring service allows you to access application monitoring data and get notified of subscribed events.
Configure custom metrics, thresholds, and alerts. Use the cloud cockpit, the console client, or a REST API to
manage monitoring data.
Features
● Fetch metrics of a Java application: Use the cloud cockpit or the monitoring REST API to get the status of, or the metrics from, a Java application and its processes.
● Fetch metrics of a HANA XS or HTML5 application: Use the cloud cockpit or the monitoring REST API to get the status of, or the metrics from, a HANA XS or HTML5 application.
● Fetch metrics of a database system: Use the cloud cockpit to get the metrics from a database system.
● View history of metrics: Use the cloud cockpit to see the history of metrics for a Java, HTML5, or HANA XS application, or for a database system.
● Define alert recipients: Use the console client to set e-mail alert notifications.
● Receive alerts: Receive alert e-mail notifications for a Java, HTML5, or SAP HANA XS application.
● Configure JMX-based checks: Use the console client to configure JMX checks for a Java application.
● Register custom checks: Use the cloud cockpit to create custom checks for an HTML5 or HANA XS application.
● Register availability checks: Use the cloud cockpit or the console client to create availability checks for a Java or SAP HANA XS application.
● Perform JMX operations: Use the cloud cockpit to execute operations on JMX MBeans.
Perform JMX operations Use the cloud cockpit to execute operations on JMX MBeans.
Retrieve Java application metrics in a JSON format by performing a REST API request defined by the
monitoring API.
Parameter Value
Example
The JSON response for Java application metrics may look like the following example:
[
{
"account": "mySubaccount",
"application": "hello",
"state": "Ok",
"processes": [
{
"process": "bf061f611cc520f39839f2fa9e44813b2a20cdb7",
"state": "Ok",
"metrics": [
{
"name": "Used Disc Space",
"state": "Ok",
"value": 43,
"unit": "%",
"warningThreshold": 90,
"errorThreshold": 95,
"timestamp": 1456408611000,
"output": "DISK OK - free space: / 4177 MB (54% inode=84%); /
var 1417 MB (74% inode=98%); /tmp 1845 MB (96% inode=99%);",
"metricType": "rate",
"min": 0,
"max": 8063
},
{
"name": "Requests per Minute",
"state": "Ok",
"value": 0,
"unit": "requests",
"warningThreshold": 0,
"errorThreshold": 0,
Related Information
Learn how to configure a custom application that retrieves the metrics for Java applications running on SAP
Cloud Platform. The dashboard, as implemented, shows the states of the Java applications and can also show
the state and metrics of the processes running on those applications.
Prerequisites
● To test the entire scenario, you need subaccounts on SAP Cloud Platform in two regions (Europe [Rot/
Germany] and US East).
● To retrieve the metrics from Java applications, you need two deployed and running Java applications.
Context
This tutorial uses a Java project published on GitHub. This project contains a notification application that
requests the metrics of the following Java applications (running on SAP Cloud Platform):
After receiving each JSON response, the dashboard application parses the response and retrieves the name
and state of each application as well as the name, state, value, thresholds, unit, and timestamp of the metrics
for each process. The data is arranged in a list and then shown in the browser as a dashboard. For more
information about the JSON response, see Monitoring Service Response for Java Applications [page 752].
Procedure
Note
You can also upload your project by copying the URL from GitHub and pasting it as a Git repository path
or URI after you switch to the Git perspective. Remember to switch back to a Java perspective
afterward.
3. In Eclipse, open the Configuration.java class and update the following information: your logon
credentials, your Java applications and their subaccounts and regions (hosts).
...
private final String user = "my_username";
private final String password = "my_password";
private final List<ApplicationConfiguration> appsList = new ArrayList<ApplicationConfiguration>();

public void configure() {
    String landscapeFQDN1 = "api.hana.ondemand.com";
    String account1 = "a1";
    String application1 = "app1";
    ApplicationConfiguration app1Config = new ApplicationConfiguration(application1, account1, landscapeFQDN1);
    this.appsList.add(app1Config);

    // Second application; the host below is an example value for a second region.
    String landscapeFQDN2 = "api.us1.hana.ondemand.com";
    String account2 = "a2";
    String application2 = "app2";
    ApplicationConfiguration app2Config = new ApplicationConfiguration(application2, account2, landscapeFQDN2);
    this.appsList.add(app2Config);
}
...
Note
The example above shows only two applications, but you can create more and add them to the list.
Tip
View the status of your Java applications and start them in the SAP Cloud Platform cockpit.
○ When you select an application, you can view the states of the application’s processes.
○ When you select a process, you can view the process’s metrics.
An empty field in the Thresholds column signifies that the warning and critical values are set to zero.
Related Information
Cockpit
Java: Application Operations
Regions and Hosts
Configure an example notification scenario that includes a custom application that notifies you of critical
metrics via e-mail or SMS. The application also performs actions to fix issues based on these critical metrics.
Prerequisites
● To test the entire scenario, you need subaccounts on SAP Cloud Platform in two regions (Europe [Rot/Germany] and US East).
● To retrieve the metrics of Java applications as shown in this scenario, you need two deployed and running
Java applications.
Note
If a Java application is not started yet, the notification application automatically triggers the start
process.
Context
In this tutorial, you'll implement a notification application that requests the metrics of the following Java
applications (running on SAP Cloud Platform):
Note
Since the requests are sent to only two applications, the Maven project that you import in Eclipse only
spawns two threads. However, you can change this number in the MetricsWatcher class, where the
When the notification application receives the Java application metrics, it checks for critical metrics. The
application then sends an e-mail or SMS, depending on whether the metrics are received as critical once or
three times. In addition, the notification application restarts the Java application when the metrics are detected
as critical three times.
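The escalation logic described above can be sketched as a small state holder: e-mail on the first critical reading, SMS and a restart on the third. This is an illustrative reconstruction, not code from the referenced Maven project; the class and action names are invented.

```java
public class CriticalMetricTracker {
    private int criticalCount = 0;

    // Returns the action to take after each metrics poll, following the
    // escalation described above: e-mail on the first critical reading,
    // SMS plus an application restart on the third.
    public String onPoll(boolean critical) {
        if (!critical) {
            criticalCount = 0;           // metric recovered: reset the counter
            return "NONE";
        }
        criticalCount++;
        if (criticalCount == 1) return "EMAIL";
        if (criticalCount >= 3) return "SMS_AND_RESTART";
        return "NONE";
    }

    public static void main(String[] args) {
        CriticalMetricTracker tracker = new CriticalMetricTracker();
        System.out.println(tracker.onPoll(true));  // EMAIL
        System.out.println(tracker.onPoll(true));  // NONE
        System.out.println(tracker.onPoll(true));  // SMS_AND_RESTART
    }
}
```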
Procedure
Note
You can also upload your project by copying the URL from GitHub and pasting it as a Git repository path
or URI after you switch to the Git perspective. Remember to switch back to a Java perspective
afterward.
3. Open the Demo.java class and update the following information: your e-mail and SMS addresses, your
logon credentials, your Java applications and their subaccounts and regions.
...
String mail_to = "my_email@example.com";
String mail_to_sms = "my_sms_address@example.com";
private final String auth_user = "my_user";
private final String auth_pass = "my_password";
String landscapeFqdn1 = "api.hana.ondemand.com";
String account1 = "a1";
String application1 = "app1";
String landscapeFqdn2 = "api.us1.hana.ondemand.com";
String account2 = "a2";
String application2 = "app2";
...
4. Open the Mailsender.java class and update your e-mail account settings.
...
private static final String FROM = "my_email@example.com";
final String userName = "my_email_account";
final String password = "my_email_password";
...
public static void sendEmail(String to, String subject, String body) throws AddressException, MessagingException {
    // Set up the mail server (replace host and port with your provider's SMTP settings)
    Properties properties = new Properties();
    properties.setProperty("mail.transport.protocol", "smtp");
    properties.setProperty("mail.smtp.auth", "true");
    properties.setProperty("mail.smtp.starttls.enable", "true");
    properties.setProperty("mail.smtp.port", "587");
    properties.setProperty("mail.smtp.host", "smtp.email.com");
...
To do this, you can create a JMX check with a very low critical threshold for HeapMemoryUsage so that
the check is always received in a critical state.
Example
To use the console commands, you need to set up the console client. For more information, see Set
Up the Console Client.
Related Information
create-jmx-check
What Is Monitoring [page 751]
Regions [page 11]
Use the Monitoring Service for Critical Notifications and Self-Healing of SAP Cloud Platform Java Applications
Context
The Remote Data Sync service provides bi-directional synchronization of complex structured data between
many remote databases at the edge and SAP Cloud Platform databases at the center. The service is based on
SAP SQL Anywhere and its MobiLink technology.
A single cloud database may have hundreds of thousands of data collection and action endpoints that operate
in the real world over sometimes unreliable networks. Remote Data Sync provides a way to connect all of these
remote applications and to synchronize all databases at the edge into a single cloud database.
The figure below illustrates a typical IoT scenario using the Remote Data Sync service: Sensors or smart
meters create data that is sent and stored decentrally in small embedded databases, such as SQL Anywhere
or SQL Anywhere UltraLite. To get a consolidated view of the data from all remote locations, this data is synchronized into an SAP HANA database in the cloud using the following components:
● SQL Anywhere MobiLink clients, which run on the edge devices;
● SQL Anywhere MobiLink servers, which are provided in the cloud by the Remote Data Sync service.
New insights can be later gained by analytics and data mining on the consolidated data in the cloud.
Sizing
Before you start working with the service, check its sizing requirements and choose the optimal hardware configuration for smooth operation of your applications. For more information, see Performance and Scalability of the MobiLink Server [page 782].
Prerequisites
● You have an account in a productive SAP Cloud Platform landscape (for example, hana.ondemand.com,
us1.hana.ondemand.com, ap1.hana.ondemand.com, eu2.hana.ondemand.com).
● Your SAP Cloud Platform account has an SAP HANA instance associated to it. The Remote Data Sync
service is currently only supported with SAP HANA database as target database in the cloud.
● On the edge side, you need to install SAP SQL Anywhere Remote Database Client version 16. You can get a free Developer Edition. See also the existing production packages: Overview.
Context
The procedure below helps you make the Remote Data Sync service available in your SAP Cloud Platform account. As the service is not available for your SAP Cloud Platform account by default, you first need to fulfill the prerequisites above. After that, follow the procedure described below to request the Remote Data Sync service for your account.
Note
Before you start working with the service, check its sizing requirements and choose the optimal hardware configuration for smooth operation of your applications. For more information, see Performance and Scalability of the MobiLink Server [page 782].
To get access to the Remote Data Sync service, you need to extend your standard SAP Cloud Platform license
with an a-la-carte license for Remote Data Sync in one of two flavors:
1. Remote Data Sync, Standard: MobiLink server on 2 cores / 4 GB RAM (price list material number: 8003943)
2. Remote Data Sync, Premium: MobiLink server on 4 cores / 8 GB RAM (price list material number: 8003944)
Prerequisites
● You have received the needed licenses and have enabled the Remote Data Sync service for your
subaccount. For more information, see Get Access to the Remote Data Sync Service [page 764].
● You have installed and configured the console client. For more information, see Using the Console Client
[page 1928].
Context
To use the Remote Data Sync service, a MobiLink server must be started and bound to the SAP HANA
database of your subaccount. This can be done by the following steps (they are described in detail in the
procedure below):
1. Deploy the MobiLink server on a compute unit of your subaccount using the console client.
2. Bind the MobiLink server to your SAP HANA database to connect the MobiLink server to the database.
3. Start the MobiLink server within the console client.
Note
To provision a MobiLink server in your subaccount, you need a free compute unit of your quota. The
Remote Data Sync service license includes an additional compute unit for the MobiLink server.
Procedure
1. Deploy the MobiLink server on a compute unit of your subaccount using the deploy command. You can configure the MobiLink server to be started with customized server options (see MobiLink Server Options). You can do this either during deployment using the --ev parameter, or later on using the set-application-property command. You can also specify the compute unit by using the --size parameter of the deploy command.
2. Bind the MobiLink server to your SAP HANA database. This is needed to connect the MobiLink server to
the database.
Note
Prerequisite: You have created an SAP HANA database user dedicated to the MobiLink server instance.
For more information, see Creating Database Users [page 1589].
Hint: If your SAP HANA instance is configured to create database users with a temporary password (the user is forced to reset it on first logon), reset the password before creating the binding.
Note
If you find the log message below, the binding step was skipped or executed unsuccessfully:
5. You can stop or undeploy your MobiLink server. For more information, see stop [page 2140] or undeploy
[page 2151].
Next Steps
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get
Access to the Remote Data Sync Service [page 764].
● A MobiLink server is running in your account. For more information, see Provide a MobiLink Server in Your
Subaccount [page 765].
Context
This page provides a simple example that demonstrates how to synchronize data from a remote SQL Anywhere
database into the SAP HANA database, using the Remote Data Sync service and the underlying SQL Anywhere
MobiLink technology. For more information on MobiLink synchronizations, see Quick start to MobiLink
(Synchronization) .
Tip
The SQL Anywhere database running on the client side is called remote database. The central SAP HANA
database running on SAP Cloud Platform is called consolidated database.
Procedure
1. Connect to a local database
Sample Code
4. Choose the Back button in the toolbar menu to get back to the root task level.
9. Run a synchronization
Next Steps
Related Information
Context
You can access the MobiLink server logs both in the cockpit and the console client.
Procedure
4. In the Most Recent Logging section, click the icon to view the logs, or the icon to download them.
Related Information
This page helps you to achieve end-to-end traceability of all synchronizations done via the Remote Data Sync
service of SAP Cloud Platform. This way, you can track who made what changes during work on the SAP HANA
target database in the cloud.
To monitor and record which users performed selected actions on SAP HANA database, you can use the SAP
HANA Audit Activity with Database Table as trail target. To use this feature, it must first be activated for your
SAP HANA database. This can be done via SAP HANA Studio by a database user with role HCP_SYSTEM.
● Using an SAP HANA database table as the trail target makes it possible to query and analyze auditing information quickly. It also provides a secure and tamper-proof storage location.
● Audit entries are only accessible through the public system view AUDIT_LOG. Only SELECT operations can
be performed on this view by users with the system privilege AUDIT OPERATOR or AUDIT ADMIN.
For more information about how to configure audit policy, see SAP HANA Administration Guide and SAP HANA
Security Guide.
Note
These links point to the latest release of SAP HANA Administration Guide and SAP HANA Security Guide.
Refer to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud
Platform. Find the list of guides for earlier releases in the Related Links section below.
In addition to the SAP HANA audit logs, you might want to use the MobiLink server logs to achieve end-to-end traceability.
● We recommend that you set the log level of the MobiLink server to a value that produces logs at a granularity useful for end-to-end traceability of the performed synchronization operations, for example, the log level -vtRU. For more information about this log level configuration, see the -v parameter documentation.
● To configure the log level, use the deploy command in the console client. For more information, see Provide
a MobiLink Server in Your Subaccount [page 765].
Remember
SAP Cloud Platform retains the MobiLink server log files for only a week. To fulfill the legal requirements
regarding retention of audit log files, make sure you download the log files regularly (at least once a week),
and keep them for a longer period of time according to your local laws.
Related Information
Context
This section provides information about security-related operations and configurations you can perform in a
Remote Data Sync scenario.
Currently, as part of SAP Cloud Platform, the MobiLink servers support only basic authentication. For
more information, see User Authentication Architecture .
Tasks
On SAP Cloud Platform, MobiLink clients can only be connected via HTTPS to MobiLink servers in the
cloud, which means that plain HTTP connections are not supported.
There are different options for configuring the HTTPS connection, depending on the SQL Anywhere synchronization tool that is used to trigger synchronizations:
○ When using the SQL Anywhere dbmlsync command-line tool to trigger client-initiated synchronizations, trusted certificates can be specified using the trusted_certificates parameter as described here.
○ When using the Sybase Central UI to trigger client-initiated synchronizations, you can specify Trusted certificates as described here.
Related Information
MobiLink Users
MobiLink Security
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get
Access to the Remote Data Sync Service [page 764].
● A MobiLink server is running in your account. For more information, see Provide a MobiLink Server in Your
Subaccount [page 765].
Context
This page describes how the existing SQL Anywhere tools (SQL Anywhere Monitor and MobiLink Profiler) can be connected to and used with the Remote Data Sync service running on SAP Cloud Platform.
Related Information
MobiLink Profiler
Context
SQL Anywhere Monitor comes as part of the standard SQL Anywhere installation. You can find it under
Administrative Tools of SQL Anywhere 16. The tool provides basic information about the health and availability
of a SQL Anywhere and MobiLink landscape. It also gives basic performance information and overall
synchronization statistics of the MobiLink server.
Procedure
1. To start the SQL Anywhere Monitor tool, open the SQL Anywhere 16 installation and go to Administrative
Tools.
2. Open the SQL Anywhere Monitor dashboard via URL: http://<host_name>:4950, where <host_name>
is the host of the computer where SQL Anywhere Monitor is running.
3. Log in with the default credentials: user admin, password admin.
○ MobiLink server:
○ As Host, specify the fully qualified domain name of the MobiLink server running in your SAP Cloud
Platform account.
○ As Port, specify 8443.
○ As Connection Type, specify HTTPS. Leave the rest unchanged.
○ MobiLink user: provide the credentials of a valid MobiLink user.
Next Steps
SQL Anywhere Monitor also allows you to configure e-mail alerts for synchronization problems. For more
information, see Alerts .
Related Information
Context
MobiLink Profiler comes as part of the standard SQL Anywhere installation. You can find it under Administrative
Tools of SQL Anywhere 16. The tool collects statistical data about all synchronizations during a profiling
session, and provides performance details of the single synchronizations, down to the detailed level of a
MobiLink event. It also provides access to the synchronization logs of the MobiLink server. Therefore, the tool is
mostly used to troubleshoot failed synchronizations or performance issues, and during the development phase
to further analyze synchronizations, errors, or warnings.
Procedure
1. Start the MobiLink Profiler under Administrative Tools of SQL Anywhere 16. The tool is a desktop client and
does not run in a Web browser.
2. Open File > Begin Profiling Session to connect to the MobiLink server of your cloud account.
3. In the Connect to MobiLink Server window, provide the appropriate connection details, such as:
○ Host: specify the fully qualified domain name of the MobiLink server running in your SAP Cloud
Platform account.
○ Port: 8443
Next Steps
To learn more about the UI of the MobiLink Profiler, see MobiLink Profiler Interface .
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get
Access to the Remote Data Sync Service [page 764].
● A MobiLink server is running in your subaccount. For more information, see Provide a MobiLink Server in
Your Subaccount [page 765].
Context
This page describes how you can configure an availability check for your MobiLink server and subscribe
recipients to receive alert e-mail notifications when your server is down or responds slowly. Furthermore,
recommended actions are listed in case of issues.
Procedure
Example:
3. To subscribe recipients to notification alerts, execute the following command (exemplary data).
Tip
To add multiple e-mail addresses, separate them with commas. We recommend that you use distribution lists rather than personal e-mail addresses. Keep in mind that you remain responsible for handling personal e-mail addresses in accordance with the applicable data privacy regulations.
Next Steps
● Check the logs. In case of synchronization errors, use the MobiLink Profiler tool to drill down into the
problem for root cause analysis.
● If the MobiLink server was started with incorrect startup parameters, reset the MobiLink server.
● If your MobiLink server hangs, restart it.
Related Information
Configure Availability Checks for Java Applications from the Console Client
This page provides sizing information for applications using the Remote Data Sync service.
Although the only realistic answers to optimal resource planning are “It depends” and “Testing will show what
you need”, this section aims to help you choose the right hardware parameters.
Synchronization Phases
The figure below shows the major phases of a synchronization session. Though not complete, it covers many
common use cases.
1. Synchronization is initiated by a remote database client. It uploads any changes made at the remote
database to the server.
2. MobiLink applies the changes to the database.
3. MobiLink queries the database and prepares the changes to be sent to the remote database client.
4. MobiLink sends the changes to the remote database client.
Database Capacity
When the Remote Data Sync server applies changes to the consolidated database and prepares changes to be
sent to the remote database client, it typically does so by executing SQL statements or stored procedures that
are invoked by MobiLink events. For example, to apply an upload MobiLink may execute insert, update, and
delete statements for each table being synchronized; to prepare a download MobiLink may execute a query for
each table being synchronized.
Database tuning is outside the scope of this document, but the load on the database can be substantial. Think
of MobiLink as a concentrator of database load. All the operations that are carried out against the remote
database while disconnected, in addition to the requests for updates to be downloaded to the remote
database, are executed in two transactions (1 upload, 1 download) against the consolidated database. This can
place a heavy load on the database.
You should know the number of concurrent synchronizations as a starting point, and from there on, calculate
back on the required resources. Typically, this number is limited by RAM requirements. To estimate, you need a
typical upload and download data volume as a starting point.
A machine with N MB of RAM can have C clients each with about V MB of upload or download data volume,
where C = N/V.
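Applied directly, the formula looks like this; the RAM and volume figures in the example are illustrative only, not sizing recommendations.

```java
public class SyncCapacity {
    // C = N / V: the number of clients a machine can serve concurrently,
    // where N is server RAM in MB and V is the typical upload or download
    // data volume per client in MB.
    public static int maxConcurrentClients(int ramMb, int volumeMbPerClient) {
        return ramMb / volumeMbPerClient;
    }

    public static void main(String[] args) {
        // Illustrative numbers only: a 4096 MB machine with 20 MB per synchronization.
        System.out.println(maxConcurrentClients(4096, 20)); // 204
    }
}
```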
Following this formula, for large synchronizations (< 20 MB), you can have:
Remote Data Sync servers are not typically CPU intensive, and typically require less than half the processing
that is required by the consolidated database. When selecting the appropriate compute units for MobiLink,
memory is more likely to limit the maximum sustainable throughput for a Remote Data Sync server than CPU.
Example:
1. Let's assume the database can process the target load of L synchronizations per second (and that is a
matter for testing).
2. At this throughput, a database thread becomes free every 1/L seconds. To keep throughput high, a synchronization request should be ready, with data uploaded and available to pass to the database thread.
3. To keep the database busy, if a synchronization request takes t seconds to upload (which will depend on
network speed and data volume, and which should be determined by testing), then the Remote Data Sync
server must be able to hold (L x t) client data uploads in memory.
4. The Remote Data Sync server must also be able to hold the data to be downloaded to the client, to prevent the database threads from having to wait for a network connection during download. If this volume is similar to the upload volume, we end up with: MobiLink should be able to support (2 x L x t) simultaneous synchronizations to maintain a throughput of L synchronizations per second.
Note
For example, to support a peak sustained throughput of 50 synchronizations per second, with a client that
takes 0.5 seconds to upload and download data, then the Remote Data Sync server should be able to
support 50 simultaneous synchronizations in RAM to sustain this rate as a peak throughput. Assuming
data transfer volumes per client are less than 80 MB (which is a very high number for data
synchronization), a Standard machine would be a good choice to start with.
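The arithmetic in the note can be checked with a few lines; the figures below are the ones from the note (50 synchronizations per second, 0.5 seconds per transfer), and the method name is illustrative.

```java
public class ThroughputSizing {
    // Simultaneous synchronizations the server must hold in memory to sustain
    // L synchronizations per second when a client takes t seconds each to
    // upload and to download: 2 * L * t.
    public static double requiredSimultaneousSyncs(double lPerSecond, double tSeconds) {
        return 2 * lPerSecond * tSeconds;
    }

    public static void main(String[] args) {
        // 50 syncs/s with 0.5 s per transfer, as in the note above.
        System.out.println(requiredSimultaneousSyncs(50, 0.5)); // 50.0
    }
}
```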
This guide describes how to use the SAP HANA service in the Neo environment and in the Cloud Foundry
environment for SAP HANA database systems provisioned in Amazon Web Services (AWS) before June 4,
2018.
Note
There are different versions of the SAP HANA service available on SAP Cloud Platform. Whether this guide
is helpful for you depends on the SAP HANA service version you're using.
SAP HANA Service in AWS Regions (Cloud Foundry Environment) [Provisioned Before June 2018] [page
785]
Create and consume SAP HANA databases in AWS regions (databases provisioned before June 4,
2018).
Create and consume SAP HANA databases in AWS regions (databases provisioned before June 4, 2018).
Note
Two versions of the SAP HANA service exist in the Cloud Foundry environment. The newer version is
available since June 4, 2018. For more information, see 4 June 2018 - What's New for SAP HANA Service.
If you have databases in an AWS region and if you can see the SAP HANA tab in the navigation area of the
SAP Cloud Platform cockpit, this guide is valid for you. It is not valid for instances in AWS regions created
with the updated version of the SAP HANA service. For more information on working with these instances,
please refer to SAP Cloud Platform, SAP HANA Service Getting Started Guide.
What Is the SAP HANA Service in AWS Regions (Cloud Foundry Environment) [page 786]
Create and consume SAP HANA databases in AWS (databases provisioned before June 4, 2018)
regions.
Create and consume SAP HANA databases in AWS (databases provisioned before June 4, 2018) regions.
The SAP HANA service allows you to leverage the in-memory data processing capabilities of SAP HANA in the
cloud. As a managed database service, backups are fully automated and service availability guaranteed. Using
the SAP HANA service, you can set up and manage SAP HANA databases and bind them to applications
running on SAP Cloud Platform. You can access SAP HANA databases using a variety of languages and
interfaces, as well as build applications and models using tools provided with SAP HANA.
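For example, a Java application can reach an SAP HANA database through the SAP HANA JDBC driver (ngdbc.jar, driver class com.sap.db.jdbc.Driver). The host, port, credentials, and method names below are placeholders; this is a minimal sketch of the connection flow, not project code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HanaJdbcExample {

    // Builds a JDBC URL in the jdbc:sap:// format used by the SAP HANA driver.
    public static String jdbcUrl(String host, int port) {
        return "jdbc:sap://" + host + ":" + port + "/";
    }

    // Illustrative only: opens a connection and reads the current date from
    // the DUMMY table. Requires ngdbc.jar on the classpath and a reachable
    // database, so it is not invoked from main here.
    public static String readCurrentDate(String url, String user, String password) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT_DATE FROM DUMMY")) {
            rs.next();
            return rs.getString(1);
        }
    }

    public static void main(String[] args) {
        // Placeholder host; 30015 follows the common 3<instance>15 SQL port pattern.
        System.out.println(jdbcUrl("myhanahost.example.com", 30015));
    }
}
```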
Note
There are different versions of the SAP HANA service available on SAP Cloud Platform. Whether this guide
is helpful for you depends on the SAP HANA service version you're using.
This guide is valid for the version of the SAP HANA service that runs in the Cloud Foundry
environment on AWS regions (databases provisioned before June 4, 2018).
If you're using the SAP HANA service in another environment or region, or are unsure whether this guide is
right for you, see Find the Right Guide.
Environment
For a list of all regions, see SAP Cloud Platform Regions and Service Portfolio.
Features
Remember
This guide describes the features of the SAP HANA service that enable you to administrate SAP HANA
databases on SAP Cloud Platform. Keep in mind that it helps you with everything that is specific to SAP
Cloud Platform. For features specific to SAP HANA, please refer to SAP HANA Platform.
Work with SAP HANA 2.0: Work with SAP HANA tenant database systems and SAP HANA version 2.0.
Manage Database Systems: Manage the overall lifecycle of your database systems. You can frequently update to new revisions.
Tip
New revisions that are available for the update are announced in the What's New for SAP Cloud Platform.
Bind Databases to Applications: Create service instances and bind them to applications running on SAP Cloud Platform.
Share Tenant Databases with Other Spaces: Share a tenant database that has been created in one space with other spaces that belong to the same organization.
View Memory Usage for Database Systems: View the memory limits and usage for a database system.
Access SAP HANA Cockpit: Access the SAP HANA cockpit for a tenant database through the SAP Cloud Platform cockpit.
Connect to a Tenant Database from Your Local Workstation: Open a database tunnel and connect to a tenant database using SAP HANA studio.
For more detailed information about features and capabilities, see the Feature Scope Description for SAP Cloud
Platform, SAP HANA Service.
Tools
You can use the following tools in combination with the SAP HANA service:
Restrictions
Restriction
Backup: When you stop a tenant database for several days, you may not be able to recover the database. It is important to keep databases running without longer downtimes.
Deleting Spaces: If you delete a space in your Cloud Foundry org, all databases that you created in that space are automatically deleted.
Monitoring: The availability of SAP HANA tenant databases is not monitored, and no alerts are sent when a database is not available.
Restriction
● It is currently not possible to use a dedicated SAP HANA tenant database system in a trial account. You can, however, create a service binding to a shared SAP HANA tenant database in your trial account. For more information, see First Steps in Trial Accounts [page 802].
There are some other restrictions as to which SAP HANA features can be used in the trial scenario and
which cannot.
2020
Component: SAP HANA Service
Capability: Data-Driven Insights
Environment: Cloud Foundry
Technical Title: Revision 2.00.045.00
Description: The Cloud Foundry environment supports SAP HANA revision 2.00.045.00. See SAP Note 2378962 - SAP HANA 2.0 Revision and Maintenance Strategy.
Type: Changed
Available as of: 2020-01-30
Note
This revision is not valid for the new version of the SAP HANA
service available in Amazon Web Services (AWS) and Google
Cloud Platform (GCP) since 4 June 2018. See 4 June 2018 -
SAP HANA Service.
Release notes for the SAP HANA service in AWS (provisioned before June 4, 2018) regions for 2019 are
available in the What's New for SAP Cloud Platform.
To access the release notes, go to What's New for the SAP HANA Service (Cloud Foundry).
Set up the SAP HANA service in AWS (databases provisioned before June 4, 2018) regions.
Restriction
This version of the SAP HANA service is not available for purchase anymore.
Prerequisites
● You have set up your global account and subaccount on SAP Cloud Platform. For an overview of the
required steps, see Getting Started in the Cloud Foundry Environment [page 1023].
● You have purchased quota for the SAP HANA service.
To enable the SAP HANA service, distribute the quota you've purchased in your subaccounts. For more
information, see Configure Entitlements and Quotas for Subaccounts [page 1756].
To manage database systems and databases, and to bind service instances to applications, you must be assigned the Space Manager or the Space Developer role in the space.
For more information, see Adding Members to Global Accounts, Orgs, and Spaces [page 1759].
Getting Started
Once you've completed the initial setup, have a look at our Getting Started [page 790] tutorial to quickly learn
how to set up your databases and bind them to an application.
Get started with the SAP HANA service in the Cloud Foundry environment.
Depending on your account type, different steps are necessary to get you started.
Learn how to bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an
enterprise account by creating a service binding.
Scenario: You want to create a service binding using the SAP Cloud Platform cockpit.
Tutorial: Creating a Service Binding Using the Cloud Cockpit [page 791]
Scenario: You want to create a service binding using the console client.
Tutorial: Creating a Service Binding Using the Console Client [page 796]
Related Information
Learn how to bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an enterprise account by creating a service binding in the SAP Cloud Platform cockpit.
Note
For more information on creating service bindings in trial accounts, see Creating a Service Binding in a Trial
Account Using the Cloud Cockpit [page 802].
Prerequisites
● An SAP HANA tenant database system is deployed in a Cloud Foundry space in your enterprise account.
● An application is deployed in the same Cloud Foundry space.
Context
To bind an application to a tenant database in the cloud cockpit, perform the following steps:
Note
If you've already created a tenant database to which you want to bind your application, you can skip
this step.
To learn more about the concepts behind integrating service instances with applications in the Cloud Foundry environment, see the official documentation for the Cloud Foundry environment at https://docs.cloudfoundry.org/devguide/services/.
Create a tenant database on an SAP HANA tenant database management system that is deployed in your
Cloud Foundry space in an enterprise account. If you've already created a tenant database that you want to use
for the binding, you can skip this task.
Prerequisites
You must be a Space Manager in the space in which you want to create a tenant database. For more information on roles and permissions, see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/concepts/roles.html.
Procedure
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant
database. For more information, see Navigate to Orgs and Spaces [page 1751].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except
", ' and \).
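The password rules in the note can be expressed as a small check. This validator is an illustrative sketch of the stated policy, not an official SAP utility; it treats any character other than ", ' and \ as allowed.

```java
public class PasswordPolicy {
    // Checks the tenant-database password rules described above: at least
    // 15 characters, at least one uppercase letter, one lowercase letter,
    // and one digit; the characters ", ' and \ are not allowed.
    public static boolean isValid(String pw) {
        if (pw == null || pw.length() < 15) return false;
        if (pw.chars().anyMatch(c -> c == '"' || c == '\'' || c == '\\')) return false;
        return pw.chars().anyMatch(Character::isUpperCase)
                && pw.chars().anyMatch(Character::isLowerCase)
                && pw.chars().anyMatch(Character::isDigit);
    }

    public static void main(String[] args) {
        System.out.println(isValid("Abcdefghijklmn1")); // true: 15 chars, mixed case, digit
        System.out.println(isValid("tooshort1A"));      // false: fewer than 15 characters
    }
}
```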
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 814].
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS
engine in a standalone mode, you can change the memory limit after you have created the
database, but you won't be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You
won't be able to turn either of them on or off after you've created the database. However, you can,
at a later time, change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the SAP HANA schemas & HDI containers
(hana) service broker for your tenant database. If you don't enable them, you won't be able to use the
service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you
reach the maximum number of databases. The default limits are shown in the table below, but
depending on your database system configuration, the number of tenant databases you can create
might differ from these limits.
Database System Memory    Maximum Number of Tenant Databases
61 GB                     4
122 GB                    10
244 GB                    24
488 GB                    50
>488 GB                   200
Results
To bind the new SAP HANA tenant database to an application in your Cloud Foundry space in an enterprise
account, you need to create a service instance using a particular plan of the SAP HANA schemas & HDI
containers (hana) service, and bind it to an application.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
The overview lists all service instances to which the selected application is currently bound.
3. Choose Bind Service.
4. On the Choose Service Type tab, select the Service from the catalog radio button and choose Next.
5. On the Choose Service tab, select the SAP HANA Schemas & HDI Containers (hana) tile and choose Next.
The SAP HANA Schemas & HDI Containers (hana) tile is only displayed if you've turned on the DI
Server + Service Broker switch for a tenant database in your Cloud Foundry org. For more information,
see Create an SAP HANA Tenant Database [page 792].
6. On the Choose Service Plan tab, select the corresponding radio button to create a new instance or reuse an
existing instance. Select a service plan and choose Next.
7. Depending on the number of tenant databases you have created in your space, choose one of the following
options:
Option: There is only one tenant database in your Cloud Foundry space.
Skip the Specify Parameters tab by choosing Next.

Option: There is more than one tenant database in your Cloud Foundry space.
Specify the parameters in JSON format. Copy the following string, specifying the parameters:
{"database_id":"<tenant_db_name>"}
Enter the database ID that you defined when creating an SAP HANA tenant database. Choose Next.
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database
with Other Spaces [page 825].
8. On the Confirm tab, enter a name in the Instance Name field and choose Finish.
Once you've created the binding, you must restart your application.
Procedure
Navigate to the Cloud Foundry space and choose Applications. Select the Restart icon for your application.
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
To unbind a database from an application, choose the Delete icon in the Actions column. The application
maintains access to the database until it is restarted.
To change database parameters (for example, to assign a higher memory limit to one of its processes), choose
the Configure button on the Overview page.
Related Information
Bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an enterprise account
using the Cloud Foundry command line interface (cf CLI).
Note
For more information on creating service bindings in trial accounts, see Creating a Service Binding in a Trial
Account Using the Console Client [page 804].
Prerequisites
● An SAP HANA tenant database system is deployed in a Cloud Foundry space in your enterprise account.
● Deploy an application to the same space.
● Download and install the cf CLI. For more information, see Download and Install the Cloud Foundry
Command Line Interface [page 1769].
● Using cf CLI, log on to the Cloud Foundry space in which the SAP HANA system and your application are
deployed. For more information, see Log On to the Cloud Foundry Environment Using the Cloud Foundry
Command Line Interface [page 1770].
Context
To bind an application to a database using the cf CLI, perform the following steps:
Note
You can create tenant databases only in the SAP Cloud Platform cockpit, that is, you cannot create
them using the console client. If you've already created a tenant database to which you'd like to bind
your application, you can skip this step.
To learn more about concepts behind integrating service instances with applications in the Cloud Foundry
environment, please see the documentation for the Cloud Foundry environment at https://
docs.cloudfoundry.org/devguide/services/ .
Create a tenant database on an SAP HANA tenant database management system that is deployed in your
Cloud Foundry space in an enterprise account. If you've already created a tenant database that you want to use
for the binding, you can skip this task.
Prerequisites
You must be Space Manager in the space in which you want to create a tenant database. For more information
on roles and permissions, please see the official Cloud Foundry documentation at https://
docs.cloudfoundry.org/concepts/roles.html .
Procedure
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant
database. For more information, see Navigate to Orgs and Spaces [page 1751].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except
", ' and \).
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 814].
○ XS Engine:
By default, the XS engine of your new database runs in embedded mode. You can create a
standalone XS engine by selecting Standalone and setting a value in the XS Engine Limit (MB) field.
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS
engine in a standalone mode, you can change the memory limit after you have created the
database, but you won't be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You
won't be able to turn either of them on or off after you've created the database. However, you can,
at a later time, change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the SAP HANA schemas & HDI containers
(hana) service broker for your tenant database. If you don't enable them, you won't be able to use the
service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you
reach the maximum number of databases. The default limits are shown in the table below, but
depending on your database system configuration, the number of tenant databases you can create
might differ from these limits.
Your particular use case and the amount of data and workloads handled in your tenant databases
should determine how many tenant databases you create. The more tenant databases you create, the
less memory is available for you in each individual tenant database. Therefore, we recommend that you
create no more than half of the maximum number of databases shown in the table below.
Database System Memory    Maximum Number of Tenant Databases
61 GB                     4
122 GB                    10
244 GB                    24
488 GB                    50
>488 GB                   200
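The default limits in the table, together with the recommendation above to stay at half the maximum, can be expressed as a small helper for capacity planning. Mapping in-between memory sizes to the next listed tier is an assumption made for illustration; the document only lists the discrete sizes:

```python
def max_tenant_dbs(memory_gb: int) -> int:
    """Default tenant-database limits from the table above.
    Sizes between the listed tiers are mapped to the next tier
    at or above them (an assumption for illustration)."""
    tiers = [(61, 4), (122, 10), (244, 24), (488, 50)]
    for size, limit in tiers:
        if memory_gb <= size:
            return limit
    return 200  # systems larger than 488 GB

def recommended_tenant_dbs(memory_gb: int) -> int:
    """The text recommends creating no more than half
    of the technical maximum."""
    return max_tenant_dbs(memory_gb) // 2
```
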
Results
To bind the new SAP HANA tenant database to an application in your Cloud Foundry space in an enterprise
account, you need to create a service instance first.
Procedure
Caution
You can only create service instances on tenant databases for which the DI Server + Service Broker was
enabled during database creation. For more information, see Create an SAP HANA Tenant Database [page
792].
Open a command line and choose one of the following options, depending on the number of tenant databases
you have created in your space:
Option: There is only one tenant database in your Cloud Foundry space.
Enter the following string, providing the appropriate parameters:
cf create-service hana <service_plan> <service_instance_name>
For more information and examples, see Managing Service Instances with the cf CLI.

Option: There is more than one tenant database in your Cloud Foundry space.
Enter the following string and specify the parameters:
○ macOS and Linux:
cf create-service hana <service_plan> <service_instance_name> -c '{"database_id":"<tenant_db_name>"}'
○ Windows Command Line:
cf create-service hana <service_plan> <service_instance_name> -c "{\"database_id\":\"<tenant_db_name>\"}"
○ Windows PowerShell:
cf create-service hana <service_plan> <service_instance_name> -c '{\"database_id\":\"<tenant_db_name>\"}'
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database with
Other Spaces [page 825].
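The per-shell variants of the -c argument differ only in quoting: POSIX shells pass single-quoted JSON through verbatim, while cmd.exe has no single-quoted strings and needs the inner double quotes escaped. This sketch builds both forms programmatically; the placeholder names mirror the ones used in the text:

```python
import json

# Placeholder tenant database name, as in the surrounding text
params = {"database_id": "<tenant_db_name>"}
payload = json.dumps(params, separators=(",", ":"))

# macOS / Linux: single quotes pass the JSON through unchanged
posix_arg = "'" + payload + "'"

# Windows cmd.exe: escape the inner double quotes instead
cmd_arg = '"' + payload.replace('"', '\\"') + '"'

print("cf create-service hana <service_plan> <instance_name> -c " + posix_arg)
print("cf create-service hana <service_plan> <instance_name> -c " + cmd_arg)
```
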
Once you've created a service instance, you can bind it to the application.
Procedure
Enter the following string, providing the appropriate parameters:
cf bind-service <application_name> <service_instance_name>
Once you've created the binding, you must restart your application:
cf restart <application_name>
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
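After the restart, a bound application reads its credentials from the VCAP_SERVICES environment variable that the Cloud Foundry runtime injects. The payload below is fabricated for illustration; the actual credential field names depend on the service broker and plan:

```python
import json
import os

# Illustrative VCAP_SERVICES payload. In a deployed application the
# Cloud Foundry runtime sets this variable after binding and restart;
# the instance name and credential fields here are assumptions.
os.environ.setdefault("VCAP_SERVICES", json.dumps({
    "hana": [{
        "name": "my-hana-instance",
        "credentials": {"host": "hana.example.com", "port": "30015",
                        "user": "APPUSER", "password": "secret"}
    }]
}))

services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["hana"][0]["credentials"]
print(creds["host"], creds["port"])
```
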
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
Related Information
Learn how to bind an application to a shared SAP HANA tenant database in a Cloud Foundry space in a trial
account by creating a service binding.
Scenario: You want to create a service binding using the SAP Cloud Platform cockpit.
Tutorial: Creating a Service Binding in a Trial Account Using the Cloud Cockpit [page 802]

Scenario: You want to create a service binding using the console client.
Tutorial: Creating a Service Binding in a Trial Account Using the Console Client [page 804]
Related Information
Bind an application to a shared SAP HANA tenant database in a Cloud Foundry space that belongs to a trial
account by creating a service binding in the SAP Cloud Platform cockpit.
Tip
You cannot create SAP HANA tenant databases in trial accounts. You directly bind a shared SAP HANA
tenant database to an application in your Cloud Foundry space in a trial account.
Note
For more information on creating service bindings in enterprise accounts, see Creating a Service Binding
Using the Cloud Cockpit [page 791].
Context
To learn more about the concepts behind integrating service instances with applications in the Cloud Foundry
environment, please see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/devguide/
services/ .
To bind a shared SAP HANA tenant database to an application in your Cloud Foundry space in a trial account,
you need to create a service instance using a particular plan of the hanatrial service, and bind it to an
application.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
Once you've created the binding, you must restart your application.
Procedure
Navigate to the Cloud Foundry space and choose Applications. Select the Restart icon for your application.
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
To unbind a database from an application, choose the Delete icon in the Actions column. The application
maintains access to the database until it is restarted.
To change database parameters (for example, to assign a higher memory limit to one of its processes), choose
the Configure button on the Overview page.
Related Information
Bind an application to a shared SAP HANA tenant database in a Cloud Foundry space that belongs to a trial
account using the Cloud Foundry command line interface (cf CLI).
Note
For more information on creating service bindings in enterprise accounts, see Creating a Service Binding
Using the Console Client [page 796].
Context
To learn more about the concepts behind integrating service instances with applications in the Cloud Foundry
environment, please see the official documentation for the Cloud Foundry environment at https://
docs.cloudfoundry.org/devguide/services/ .
To bind a shared SAP HANA tenant database to an application in your Cloud Foundry space in a trial account,
you must first create a service instance.
Procedure
For more information and examples, see Managing Service Instances with the cf CLI .
Once you've created a service instance, you can bind it to the application.
Procedure
Enter the following string, providing the appropriate parameters:
cf bind-service <application_name> <service_instance_name>
Once you've created the binding, you must restart your application:
cf restart <application_name>
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
Related Information
1.11.9.1.4 Concepts
The main concepts of the SAP HANA service in AWS (databases provisioned before June 4, 2018) regions.
The SAP HANA service in AWS regions supports the database system type SAP HANA tenant database
system and the SAP HANA version SAP HANA 2.0. New revisions are made available regularly, and you
can update to them. These revisions are announced in the What's New for SAP Cloud Platform. For more
information, see What's New.
SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant
databases. All tenant databases in the same SAP HANA system share the same system resources (memory
and CPU cores) but each tenant database is fully isolated with its own database users, catalog, repository,
persistence (data files and log files) and services. In your enterprise account, you have full control of user
management and can use a range of database tools.
Main Concepts
In the Cloud Foundry environment in AWS regions, a database system must be provisioned in your space.
Space developers can create tenant databases on the database system. They then can create schemas and
HDI containers on the tenant databases, and bind them to applications running on SAP Cloud Platform. To do
so, they create service instances using the hana service.
An SAP HANA database system is associated with a particular space and is available to applications in this
space. You can administer the database system and its databases using the SAP Cloud Platform cockpit or the
Cloud Foundry command line interface (cf CLI).
For information on SAP HANA database development, please refer to the SAP Cloud Platform documentation.
1.11.9.1.6 Administration
Use the SAP Cloud Platform cockpit to administer your database systems and databases in the Cloud Foundry
environment in AWS (databases provisioned before June 4, 2018) regions.
Remember
This guide describes the features of the SAP HANA service that enable you to administer SAP HANA
databases on SAP Cloud Platform. It covers what is specific to SAP Cloud Platform; for features specific
to SAP HANA, please refer to SAP HANA Platform.
An overview of the different tasks you can perform to administer database systems in the Cloud Foundry
environment.
View Memory Usage for an SAP HANA Database System [page 814]
Update software components or apply a new Support Package to your SAP HANA database system in the
Cloud Foundry environment.
Prerequisites
Context
To update your SAP HANA database system, you have the following options:
● Update the software components installed on your SAP HANA database system to a later version.
● Apply a single Support Package on top of an existing SAP HANA database system.
Remember
Make sure that you read the SAP Notes listed in the UI before applying any updates. Complete all the steps
required before or after the update.
Recommendation
We recommend that you always use the latest available version. New revisions that are available for the
update are announced in the What's New for SAP Cloud Platform.
Please expect a temporary downtime for the SAP HANA database system when you update SAP HANA.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you're
updating. For more information, see Navigate to Orgs and Spaces [page 1751].
2. Choose SAP HANA in the navigation area.
All database systems available in the space are listed with their details, including the database type,
version, memory size, state, and the number of associated databases.
3. Select the entry for the relevant database system.
Note
You can select only SAP HANA revisions that have been approved for use in SAP Cloud Platform. To
update to another revision, please contact SAP Support.
Updating an SAP HANA database system to a maintenance revision may result in upgrade path
limitations. See SAP Note 1948334 for details.
6. (Optional) Specify whether you want a prompt for confirmation before the update of the SAP HANA
database system is applied and the system downtime is started.
By default, this option is selected. If you unselect it, the update is performed without any user interaction.
7. Choose Continue/Update.
The update process takes some time and is executed asynchronously. The update dialog box remains on
the screen while the update is in progress. You can close the dialog box and reopen it later.
8. (Optional) If you chose to be prompted for confirmation after preparation of the update, the process stops
and prompts for your confirmation to start the update.
During preparation, the SAP HANA database system is not modified, so you can cancel the process here if
necessary.
9. Choose Update.
The update starts and takes about 20 minutes.
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud
Platform Release Notes to find out which SAP HANA SPS is currently supported by SAP Cloud
Platform.
Related Information
SAP HANA Developer Guide for SAP HANA Studio → section "Set up Application Authentication"
SAP HANA Developer Guide for SAP HANA Web Workbench → section "Set up Application Authentication"
SAP Note 1948334
Restart SAP HANA Database Systems [page 811]
Install SAP HANA Components [page 812]
Access SAP HANA Cockpit [page 830]
Try to solve issues by restarting the entire corresponding SAP HANA database system in the Cloud Foundry
environment.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you
want to restart. For more information, see Navigate to Orgs and Spaces [page 1751].
Note
If security OS patches are pending for the database system you have restarted, the host of the
database system is also restarted.
Results
You can monitor the database system status during the restart using the HANA tools. Connected applications
and database users cannot access the system until it is restarted. The restart for the SAP HANA database
system is complete when HANA tools such as the cockpit are available again.
Related Information
Use the SAP Cloud Platform cockpit to install new SAP HANA components in the Cloud Foundry environment.
Prerequisites
Context
● SAP HANA platform components, which are installed on the SAP HANA database system at the operating
system level.
Recommendation
Please expect a temporary downtime for the SAP HANA database when installing SAP HANA components.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system for
which you'd like to install SAP HANA components. For more information, see Navigate to Orgs and Spaces
[page 1751].
2. Choose SAP HANA in the navigation area.
3. Select the entry for the relevant database system in the list.
4. To install an SAP HANA component for the selected database system, choose Install components.
Results
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to SAP HANA Service in
AWS Regions (Cloud Foundry Environment) [Provisioned Before June 2018] [page 785] to find out which
HANA revision is supported.
Related Information
Developer Guide for SAP HANA Studio → section "Set up Application Authentication"
Developer Guide for SAP HANA Web Workbench→ section "Set up Application Authentication"
Update SAP HANA Database Systems [page 809]
Restart SAP HANA Database Systems [page 811]
Access SAP HANA Cockpit [page 830]
Prerequisites
You are assigned the Space Manager role for the space.
Context
You can uninstall additional SAP HANA platform components that are already installed on your SAP HANA
database system, from the operating system level.
You can't uninstall core SAP HANA platform components, which are installed by default when your SAP
HANA database system is provisioned.
You can't uninstall SAP HANA smart data streaming from an SAP HANA tenant database system, if SAP
HANA smart data streaming is enabled on a tenant database. You must first disable SAP HANA smart data
streaming on the relevant tenant database.
Please expect a temporary downtime for the SAP HANA database system when you're uninstalling SAP HANA
components.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system from
which you want to uninstall SAP HANA components. For more information, see Navigate to Orgs and
Spaces [page 1751].
2. From the navigation area, choose SAP HANA.
3. Select the entry for the relevant database system.
4. To uninstall an SAP HANA component for the selected database system, choose Uninstall component.
5. Select a solution to uninstall.
6. Choose Uninstall.
The uninstallation starts and takes about 15 minutes. You can monitor the status of the uninstallation in the
Database Systems view.
View the memory usage for an SAP HANA database system in the Cloud Foundry environment using the SAP
Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you'd
like to view memory limits for. For more information, see Navigate to Orgs and Spaces [page 1751].
2. In the navigation area, choose SAP HANA Database Systems and select the entry for the relevant
database system.
3. In the navigation area, choose Memory Usage.
You see a table that lists the memory limits and usage for each tenant database and the system database.
You can view the following values:
For more information about memory usage, see the SAP HANA Administration Guide.
Note
If you haven't set a limit for a particular process or if you've allocated a percentage, the corresponding
entry is empty and the total of configured allocation limits cannot be calculated.
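The behavior in the note can be mirrored when processing the Memory Usage values in a monitoring script: a single unset per-process limit makes the total undefined. A minimal sketch, with illustrative process names and MB values:

```python
from typing import Optional

def total_allocation_limit(limits_mb: dict) -> Optional[int]:
    """Sum per-process allocation limits as shown in the Memory
    Usage view. If any process has no configured limit (None),
    the total cannot be calculated, mirroring the note above."""
    if any(v is None for v in limits_mb.values()):
        return None
    return sum(limits_mb.values())
```
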
Related Information
An overview of the different tasks you can perform to administer databases in the Cloud Foundry environment.
Create a tenant database on an SAP HANA tenant database management system that is deployed in your
Cloud Foundry space in an enterprise account.
Prerequisites
You must be Space Manager in the space in which you want to create a tenant database. For more information
on roles and permissions, please see the official Cloud Foundry documentation at https://
docs.cloudfoundry.org/concepts/roles.html .
Context
Tip
You cannot create SAP HANA tenant databases in trial accounts. You directly bind a shared SAP HANA
tenant database to an application in your Cloud Foundry space in a trial account. For more information, see
Create Service Bindings in Trial Accounts [page 820] or First Steps in Trial Accounts [page 802].
Procedure
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant
database. For more information, see Navigate to Orgs and Spaces [page 1751].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except
", ' and \).
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 814].
○ XS Engine:
By default, the XS engine of your new database runs in embedded mode. You can create a
standalone XS engine by selecting Standalone and setting a value in the XS Engine Limit (MB) field.
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS
engine in a standalone mode, you can change the memory limit after you have created the
database, but you won't be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You
won't be able to turn either of them on or off after you've created the database. However, you can,
at a later time, change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the SAP HANA schemas & HDI containers
(hana) service broker for your tenant database. If you don't enable them, you won't be able to use the
service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you
reach the maximum number of databases.
Your particular use case and the amount of data and workloads handled in your tenant databases
should determine how many tenant databases you create. The more tenant databases you create, the
less memory is available for you in each individual tenant database.
The following table shows the technically enforced limits. The concrete number of tenant
databases that you can use in parallel on your database system depends on the amount of
memory consumed by each of your tenant databases.
Database System Memory    Maximum Number of Tenant Databases
61 GB                     4
122 GB                    10
244 GB                    24
488 GB                    50
>488 GB                   200
Related Information
Create a service instance using a particular plan of the hana service and bind it to an application in your Cloud
Foundry space in an enterprise account.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
The overview lists all service instances to which the selected application is currently bound.
3. Choose Bind Service.
4. On the Choose Service Type tab, select the Service from the catalog radio button and choose Next.
5. On the Choose Service tab, select the SAP HANA Schemas & HDI Containers (hana) tile and choose Next.
Note
The SAP HANA Schemas & HDI Containers (hana) tile is only displayed if you've turned on the DI
Server + Service Broker switch for a tenant database in your Cloud Foundry org. For more information,
see Create SAP HANA Tenant Databases [page 816].
6. On the Choose Service Plan tab, select the corresponding radio button to create a new instance or reuse an
existing instance. Select a service plan and choose Next.
7. Depending on the number of tenant databases you have created in your space, choose one of the following
options:
Option: There is only one tenant database in your Cloud Foundry space.
Skip the Specify Parameters tab by choosing Next.

Option: There is more than one tenant database in your Cloud Foundry space.
Specify the parameters in JSON format. Copy the following string, specifying the parameters:
{"database_id":"<tenant_db_name>"}
Enter the database ID that you defined when creating an SAP HANA tenant database. Choose Next.
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database
with Other Spaces [page 825].
8. On the Confirm tab, enter a name in the Instance Name field and choose Finish.
Next Steps
Once you've created the binding, you must restart your application:
Create a service instance using a particular plan of the hanatrial service and bind it to an application in your
Cloud Foundry space in a trial account.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
Use the SYSTEM user to create a database administration user in the Cloud Foundry environment using the
SAP HANA cockpit.
Prerequisites
You have enabled the SAP HANA Cockpit Access switch for your tenant database. For more information, see
Access SAP HANA Cockpit [page 830].
Context
You specify a password for the SYSTEM user when you create your SAP HANA tenant database. You use the
SYSTEM user to log on to SAP HANA Cockpit and create your own database administration user.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability
to create other database users, access system tables, and so on. A database-specific SYSTEM user exists in
every database of a tenant database system. To ensure that the administration tool SAP HANA cockpit can be
used immediately after database creation, the SYSTEM user is automatically given several roles the first time
the SAP HANA cockpit is opened with this user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space. For more information, see Navigate to Orgs and
Spaces [page 1751].
All databases available in the selected subaccount are listed with their ID, type, version, and related
database system.
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform
further actions, for example, delete the database.
A message is displayed to inform you that at that point, you lack the roles that you need to open the SAP
HANA cockpit.
6. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
12. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
According to the SAP HANA default password policy, the password must start with a letter and only
contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9'). If you've changed the
default policy, other password requirements might apply.
The new database user is displayed as a new node under the Users node.
14. Assign your user the necessary administrator roles and privileges by going to the Granted Roles section
and choosing the + (Add Role) button. For example, to allow your administration user to create new users
and assign roles and privileges to them, add the USER_ADMIN privilege. For more information, see System
Privileges in the SAP HANA Security Guide.
15. Choose OK.
16. Save your changes.
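The default password policy quoted in the note for step 12 can be illustrated with a small check. This is a sketch based only on the rules stated above; the sample password and the pattern are assumptions, not part of the product:

```shell
# Check a candidate password against the default policy described above:
# it must start with a letter and contain only letters and digits.
# The sample value is hypothetical.
PASSWORD='Hana2020pw'
if printf '%s' "$PASSWORD" | grep -Eq '^[A-Za-z][A-Za-z0-9]*$'; then
  echo "password matches the default policy"
else
  echo "password violates the default policy"
fi
```

If you have changed the default policy, adapt the pattern accordingly.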
Caution
At this point, you are still logged on with the SYSTEM user. To work with the SAP HANA Web-based
Development Workbench using your new database user, you must first log out from the SAP HANA
cockpit. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user.
Recommendation
We recommend that you create more than one database administration user. If one database
administration user is locked or if the password needs to be reset, only another administration user can
unlock this user and reset the password.
Next Steps
You can use the newly created database administration user to create database users for the members of your
subaccount and assign them the required developer roles.
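The cockpit steps above can also be expressed in SQL. The following is a hedged sketch using the hdbsql command-line client; the host, port, credentials, the user name DBADMIN, and the granted privilege are placeholders and examples only, not values prescribed by this guide:

```shell
# Hypothetical sketch: create a database administration user with hdbsql
# instead of the SAP HANA cockpit. All names and credentials are placeholders.
hdbsql -n <host>:<port> -u SYSTEM -p '<system_password>' \
  "CREATE USER DBADMIN PASSWORD \"<initial_password>\" NO FORCE_FIRST_PASSWORD_CHANGE"
# Grant the USER ADMIN system privilege so the new user can manage other users.
hdbsql -n <host>:<port> -u SYSTEM -p '<system_password>' \
  "GRANT USER ADMIN TO DBADMIN"
```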
Try to solve issues by stopping and restarting the corresponding tenant database in the Cloud Foundry
environment.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you
want to restart. For more information, see Navigate to Orgs and Spaces [page 1751].
Redefine the limits for memory consumption by individual processes of the new database in the Cloud Foundry
environment by using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Space Manager role for the space that owns the SAP HANA tenant database you want to
reconfigure.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database you want
to reconfigure. For more information, see Navigate to Orgs and Spaces [page 1751].
You can redefine the limits for memory consumption by individual processes of the new database. For
more information, see the SAP HANA Administration Guide. For more information about viewing the
memory usage of an existing tenant database, see View Memory Usage for an SAP HANA Database
System [page 814].
If you don't enter any values, no limits are set, and the respective value appears as unlimited.
○ XS Engine:
If you run the XS engine in a standalone mode, you can change the memory limit after you have
created the database.
Note
If you run the XS engine in an embedded mode, the XS engine and the index server are part of the
same process, and therefore share the same allocation limit.
Share your tenant database that has been created in one space in the Cloud Foundry environment with other
spaces that belong to the same organization.
When a tenant database is created in a space, only the space it is located in can access it and use it for service
bindings, for example. However, a user who is assigned the Space Manager role can change this, and grant
other spaces within the same organization controlled access to the tenant database in the space he or she
manages. Depending on the permission type, a member of the space who receives permission can access the
database, including using it to create a service instance and bind it to applications.
Prerequisites
● The space in which the tenant database was created, and the space that receives permission to use this
database must be part of the same organization.
● To give another space permission to use a tenant database in your space, you must be assigned the Space
Manager role.
Context
Action: Share a database with other spaces
Description: This gives another space permission to use a tenant database.
Performed by: The Space Manager of the space in which the tenant database is located.
Action: Use a database shared by another space
Description: Depending on the permission given, a space can access and use a tenant database that is located in another space.
Performed by: A member of the space receiving the permission to use the tenant database.
If Space Managers want to share a tenant database in their space with another space in the same organization,
they can assign different permission types to the other space:
● To allow applications in another space to access a tenant database, Space Managers can provide the other
space with the permission type APPLICATION_ACCESS, which assigns a security group to that other space.
● To enable members of another space to create service instances on the tenant database and bind these
service instances to applications, the Space Manager needs to assign both the permission types
APPLICATION_ACCESS and HANA_SERVICE to the other space.
As a Space Manager, you can give another space in the same organization permission to access and use a
tenant database that has been created in your space.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space in which the tenant database you would like to
share with another space has been created. For more information, see Navigate to Orgs and Spaces [page
1751].
2. Choose SAP HANA Tenant Databases and select the tenant database that you'd like to share.
3. In the navigation area, choose Permissions.
4. Choose New Permissions.
5. In the dialog box, do the following:
a. Select the space that should receive permission to use the tenant database.
b. Choose the permission type(s).
Results
You have given another space permission to use a tenant database in your space. In your space (the space that
owns the tenant database), the (Edit Permissions) icon appears in the Tenant Databases list in the cockpit
next to all tenant databases that can be used by other spaces.
Next Steps
To edit the permission type of an existing access permission, select the required tenant database, then choose
Permissions in the navigation menu. Select the (Edit Permissions) icon next to the space in question.
To revoke an existing access permission, first delete all service instances. Then select the required tenant
database, navigate to Permissions and choose the (Delete) icon next to the space. This revokes all
permissions for this space.
As a member of a space that has received permission to use a tenant database owned by another space, you
can access the database, by opening a tunnel to it, for example, or using it to create a service instance and bind
it to an application.
As a member of a space that has received permission to use a tenant database owned by another space, you can
open a tunnel to that database.
Prerequisites
● The space in which the application is deployed must have the permission to open a tunnel to the database
in another space (permission type APPLICATION_ACCESS).
● Download and install the cf CLI. For more information, see Download and Install the Cloud Foundry
Command Line Interface [page 1769].
● Log on to the space in which the application for which you want to create a service binding is deployed. For
more information, see Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line
Interface [page 1770].
● Deploy an application to a space and start the application.
● Enable SSH access either for the application you've started or for your space. For more information, see
https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html .
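The SSH prerequisite above can be fulfilled with the cf CLI; the space and app names below are placeholders:

```shell
# Enable SSH either for the whole space or for a single application.
cf allow-space-ssh <space_name>
cf enable-ssh <app_name>
# Restart the application so the SSH setting takes effect.
cf restart <app_name>
```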
Procedure
Open a tunnel to the tenant database as described in Open a Database Tunnel [page 832]. When executing the
cf ssh command, specify the host and port of the tenant database that is owned by another space.
Results
You are now connected to a tenant database that is owned by another space. To connect to the tenant
database from your local workstation, follow the steps described in Connect to a Tenant Database Using SAP
HANA Studio [page 834].
As a member of a space that has received permission to use a tenant database owned by another space, you can
use that database to create a service instance and bind it to an application.
Prerequisites
Procedure
Use the cockpit:
1. In the SAP Cloud Platform cockpit, navigate to the space in which the application you
would like to bind is deployed. For more information, see Navigate to Orgs and Spaces
[page 1751].
2. Follow the steps described in Create a Service Instance and Bind It to the Application
[page 794]. When creating the service instance, specify the database with the following parameter:
{"database_id":"<GUID_owner_space>:<tenant_db_name>"}
Tip
Navigate to Security Groups to identify the GUID of the space that owns the tenant
database you'd like to use.
Use the console client:
1. Open the cf CLI and enter the following command:
○ macOS and Linux:
○ Windows PowerShell:
2. Bind the service instance to the application. For more information, see Bind the Service
Instance to the Application [page 801].
3. Restart the application. For more information, see Restart the Application [page 801].
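With the console client, the flow above can be sketched as follows. The service name hana, the plan, and the instance and app names are placeholders; only the -c parameter with the database_id is taken from this guide:

```shell
# Create a service instance on the tenant database owned by another space.
cf create-service hana <plan> <service_instance> \
  -c '{"database_id":"<GUID_owner_space>:<tenant_db_name>"}'
# Bind the instance to the application and restart it.
cf bind-service <app_name> <service_instance>
cf restart <app_name>
```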
Results
You have created a service instance and bound it to an application using a tenant database owned by a different
space than the one in which the application has been deployed.
Delete your SAP HANA tenant database in the Cloud Foundry environment using the SAP Cloud Platform
cockpit.
Procedure
To delete your SAP HANA tenant database, perform the following steps:
Remember
To delete an SAP HANA tenant database, you must first delete all service instances bound to your
applications.
Delete all service instances of the hana or hana-managed service, which are bound to your applications, using
the SAP Cloud Platform cockpit.
Prerequisites
● The SAP HANA tenant database you want to unbind the service instances from must be in status Started.
● You must have the Space Developer role for the space that owns the SAP HANA tenant database you want
to delete.
● (Optional) Access to the spaces you shared the SAP HANA tenant database with.
Note
Deleting all service instances bound to your application might include service bindings in other spaces
than the space that owns the database.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space that owns the applications to which your SAP
HANA tenant database is bound. For more information, see Navigate to Orgs and Spaces [page 1751].
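A hedged cf CLI equivalent of the cleanup described above; the instance and app names are placeholders:

```shell
# List service instances and the applications they are bound to.
cf services
# Unbind every binding of the hana or hana-managed instance, then delete it.
cf unbind-service <app_name> <service_instance>
cf delete-service <service_instance> -f
```

Repeat the unbind step for every application the instance is bound to, including bindings in other spaces you shared the database with.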
Next Steps
You can delete your SAP HANA tenant database using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Space Manager role in the space that owns the SAP HANA tenant database you want to
delete.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space that owns the SAP HANA tenant database you
want to delete. For more information, see Navigate to Orgs and Spaces [page 1751].
Access the SAP HANA cockpit for a tenant database in the Cloud Foundry environment using the SAP Cloud
Platform cockpit.
Context
The SAP HANA cockpit is a Web-based administration tool for the administration, monitoring, and
maintenance of SAP HANA databases in the Cloud Foundry environment. It provides a single point of access to
a range of tools for your SAP HANA database, and also integrates development capabilities required by
administrators.
You can access the SAP HANA cockpit by navigating to a tenant database in SAP Cloud Platform's web-based
administration interface: the SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the Cloud Foundry space that owns the tenant database
you'd like to access using the SAP HANA cockpit. For more information, see Navigate to Orgs and Spaces
[page 1751].
2. Choose SAP HANA Tenant Databases and select the tenant database.
3. From the Administration Tools section, select SAP HANA Cockpit.
Note
You can open the SAP HANA cockpit only if SAP HANA Cockpit Access is enabled for the tenant
database. If it is disabled, choose Configure and turn on the switch for SAP HANA Cockpit Access.
Results
You are now logged on to the SAP HANA cockpit for a tenant database on SAP Cloud Platform.
Connect to an SAP HANA tenant database deployed in a Cloud Foundry space that belongs to an enterprise
account from your local workstation.
Prerequisites
● Deploy an SAP HANA tenant database in a Cloud Foundry space in your enterprise account.
● Deploy an application to the same Cloud Foundry space and start the application.
● Download and install the cf CLI. For more information, see Download and Install the Cloud Foundry
Command Line Interface [page 1769].
● Log on to the Cloud Foundry space in which the SAP HANA tenant database you want to connect to is
deployed. For more information, see Download and Install the Cloud Foundry Command Line Interface
[page 1769] or Log On to Your Global Account [AWS, Azure, or GCP Regions].
● Access to SAP HANA studio. For more information, see the SAP HANA Developer Guide for SAP HANA
Studio.
Context
To connect to an SAP HANA tenant database, you need to perform the following steps:
Procedure
1. (Optional) If you're working behind a proxy server, you may have to adjust your proxy settings.
2. (Optional) Find out the host and port of your tenant database. Depending on the tool you'd like to use for
finding out the host and port, choose one of the following options. If you already have this information, skip
this step.
Option: SAP Cloud Platform cockpit
1. Navigate to the space that owns the tenant database to which you'd like to open a tunnel. See
Navigate to Orgs and Spaces [page 1751].
2. In the navigation area, choose Security Groups, then select the relevant tenant database.
In the Rules section at the bottom of the screen, find the host (Destination column) and the port
(Port column) for your tenant database.
Note
If two ports are listed in the Port column, use the port that starts with "300".
Option: cf CLI
Retrieve the security group rules for the tenant database:
cf security-group <security_group_database>
3. Open a command line and enter the following string and specify the parameters:
Note
You'll need to restart the application after you've created the tenant database to be able to use the
command above.
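The tunnel command itself is not reproduced above. The following is a sketch using the standard cf ssh port-forwarding syntax, under the assumption that the host and port come from the security group rule; the app name and the local port are placeholders:

```shell
# Forward a local port through the started application to the tenant database.
# <db_host> and <db_port> are the Destination and Port values from the
# security group rule; use the port that starts with "300".
cf ssh <app_name> -L localhost:30015:<db_host>:<db_port> -N
```

The tunnel stays open while this command runs; tools such as SAP HANA studio can then connect to localhost on the forwarded port.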
Next Steps
Now that you have opened the database tunnel, you can connect to the remote tenant database.
For more information on SSH in the Cloud Foundry environment, please see the official documentation for the
Cloud Foundry environment at https://docs.cloudfoundry.org/devguide/deploy-apps/app-ssh-overview.html
.
To connect to an SAP HANA tenant database from your local workstation, you have to establish a connection
using SAP HANA studio.
Procedure
1. Open the SAP HANA Administration Console perspective in SAP HANA studio. In the Systems panel,
choose Add System.
2. In the Host Name field, enter localhost.
3. In the Instance Number field, enter 00.
4. Select Single container.
5. Choose Next.
6. Enter the credentials of a valid database user, for example, the SYSTEM user.
7. Choose Finish.
Results
You are now connected to a tenant database in your Cloud Foundry space.
Note
The tunnel to the tenant database needs to remain open for as long as you want to be connected to it in
SAP HANA studio.
Backup and recovery of data stored in your database and database system are performed by SAP.
Backup
For databases in enterprise accounts, a full data backup is done once a day. Log backup is triggered at least
every 30 minutes. Backups are copied to another storage at the same location once a day. Backups (complete
data and log) are kept for the last 14 days. Backups are deleted afterwards. Recovery is therefore only possible
within a time frame of 14 days.
Since backups are kept on a secondary location for 14 days, recovery is only possible within a time frame of 14
days.
Restoring the system from files on a secondary location might take some time depending on the availability.
Note
You can operate several tenant databases in the same database system and recover them individually.
Thus, when binding applications to tenant databases, you can achieve a fine grained control of the backup
and recovery.
Restore your database system in the Cloud Foundry environment from a specific point of time by creating a
service request in the SAP Cloud Platform cockpit.
Prerequisites
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you
want to restore. For more information, see Navigate to Orgs and Spaces [page 1751].
Caution
You will lose all data stored between the time you specify in the New Service Request screen and
the time at which you create the service request. For example, if you create a restore request at
3pm to restore your database system to 9am on the same day, all data stored between 9am and
3pm will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA Service Requests , then choose the Display icon to find the template
for opening a ticket at any time.
f. Choose Close.
4. Open SAP Support Portal .
5. Choose Report an Incident.
Results
You have created a request for restoring a database system and sent the request to SAP Support for
processing. As soon as your database system is restored, the state of your request will be set to Finished in the
cockpit and the incident you created will be set to Completed. You can see the state of your request in the
cockpit by navigating to SAP HANA Service Requests . The state is displayed next to your service
request. In the meantime, SAP Support might contact you in case they need further clarification. You will be
notified by e-mail if you need to take any further action.
Your database system is available for use for all users immediately after the restore has been successful.
Note
To cancel your restore request, go to SAP HANA Service Requests, choose your restore request and
select the Delete icon. Note that your request can only be canceled if it has the state New.
Related Information
Restore your tenant database in the Cloud Foundry environment from a specific point of time by creating a
service request in the SAP Cloud Platform cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database you want
to restore. For more information, see Navigate to Orgs and Spaces [page 1751].
Caution
You will lose all data stored between the time you specify in the New Service Request screen and
the time at which you create the service request. For example, if you create a restore request at
3pm to restore your tenant database to 9am on the same day, all data stored between 9am and
3pm will be lost.
Tip
Navigate to SAP HANA Service Requests , then choose the Display icon to find the template
for opening a ticket at any time.
f. Choose Close.
4. Log on to the SAP Support Portal with your S-user ID and password and create a new incident by
choosing Report an Incident.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Results
You have created a request for restoring a tenant database and sent the request to SAP Support for processing.
As soon as your tenant database is restored, the state of your request will be set to Finished in the cockpit and
the incident you created will be set to Completed. You can see the state of your request in the cockpit by
navigating to SAP HANA Service Requests . The state is displayed next to your service request. In the
meantime, SAP Support might contact you in case they need further clarification. You will be notified by e-mail
if you need to take any further action.
Note
Your tenant database is available for use for all users immediately after the restore has been successful.
Note
To cancel your restore request, go to SAP HANA Service Requests, choose your restore request and
select the Delete icon. Note that your request can only be canceled if it has the state New.
1.11.9.1.7 Security
Governments place legal requirements on industry to protect data and privacy. We provide features and
functions to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by
providing security features and data protection-relevant functions, such as blocking and deletion of
personal data. In many cases, compliance with applicable data protection and privacy laws is not covered
by a product feature. Furthermore, this information should not be taken as advice or a recommendation
regarding additional features that would be required in specific IT environments. Decisions related to data
protection must be made on a case-by-case basis, taking into consideration the given system landscape
and the applicable legal requirements. Definitions and other terms used in this documentation are not
taken from a specific legal source.
The following sections provide information about data protection and privacy for the SAP HANA service. For
the central data protection and privacy statement for SAP Cloud Platform, see Data Protection and Privacy
[page 2527].
An information report is a collection of data relating to a data subject. A data privacy specialist may be required
to provide such a report or an application may offer a self-service.
1.11.9.1.7.1.2 Deletion
When handling personal data, consider the legislation in the different countries where your organization
operates. After the data has passed the end of purpose, regulations may require you to delete the data.
Read-access logging (RAL) is used to monitor and log read access to sensitive data. Data may be categorized
as sensitive by law, by external company policy, or by internal company policy. Read-access logging enables
you to answer questions about who accessed certain data within a specified time frame.
For auditing purposes or for legal requirements, changes made to personal data should be logged, enabling the
monitoring of who made changes and when. Auditing provides you with visibility on who did what in the SAP
HANA database (or tried to do what) and when. This allows you, for example, to log and monitor read access to
sensitive data.
Audit Policies
You can configure audit policies on the SAP HANA database. An audit policy defines the actions to be audited,
as well as the conditions under which the action must be performed to be relevant for auditing. When an action
occurs, the policy is triggered and an audit event is written to the audit trail. For more information, see SAP
HANA Security Guide → Audit Policies.
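As an illustration only, an audit policy could be created and enabled as shown below. The policy name, the audited action, and the connection details are invented for this example; see the SAP HANA Security Guide for the authoritative syntax:

```shell
# Create an audit policy that logs successful GRANT PRIVILEGE statements,
# then enable it. All identifiers and credentials are placeholders.
hdbsql -n <host>:<port> -u <admin_user> -p '<password>' \
  "CREATE AUDIT POLICY \"GrantTracking\" AUDITING SUCCESSFUL GRANT PRIVILEGE LEVEL INFO"
hdbsql -n <host>:<port> -u <admin_user> -p '<password>' \
  "ALTER AUDIT POLICY \"GrantTracking\" ENABLE"
```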
Audit Trails
When an audit policy is triggered, that is, when an action in the policy occurs under the conditions defined in
the policy, an audit entry is created in one or more audit trails. For more information, see SAP HANA Security
Guide → Audit Trails.
SAP HANA single-container or tenant database systems of the SAP HANA service use syslog for auditing. The
audit log entries are forwarded to the Audit Log service and stored there for a defined period of time. To request
your audit logs, contact SAP Support. For more information, see Getting Support [page 842].
1.11.9.1.7.1.4 Glossary
If you have questions or encounter an issue while working with the SAP HANA service, have a look at our FAQ or
find out how to get support.
Answers to some of the most commonly asked questions about the SAP HANA service in the Cloud Foundry
environment.
Where can I view the memory limits for an SAP HANA tenant database
system?
See View Memory Usage for an SAP HANA Database System [page 814].
For database systems, see Restart SAP HANA Database Systems [page 811], and for tenant databases, see
Restart SAP HANA Tenant Databases [page 823].
You cannot update a single tenant database. You can only update the complete database system, including all
tenant databases. See Update SAP HANA Database Systems [page 809].
Restore activities are currently handled by SAP Operations. You can request a recovery for a single tenant
database or the entire SAP HANA database system. Depending on your scenario, see Restore SAP HANA
Tenant Databases [page 837] or Restore SAP HANA Database Systems [page 835].
How often does a backup occur? How much data can I lose in the worst
case?
For databases in enterprise accounts, a full data backup is done once a day. Log backup is triggered at least
every 30 minutes. The corresponding data or log backups are replicated to a secondary location at least every
24 hours. Backups are kept (complete data and log) on a secondary location for the last 14 days. Backups are
deleted afterwards.
Since backups are kept on a secondary location for 14 days, recovery is only possible within a time frame of 14
days.
Restoring the system from files on a secondary location might take some time depending on the availability.
For more information, see Restore SAP HANA Tenant Databases [page 837] and Restore SAP HANA Database
Systems [page 835].
If you have questions or encounter an issue while working with the SAP HANA service in the Cloud Foundry
environment for AWS regions (databases provisioned before June 4, 2018) and Azure regions, there are various
ways to address them.
If you encounter an issue with this service, we recommend following the procedure below:
Check the availability of the platform at SAP Cloud Platform Status Page .
In the SAP Support Portal, check the Guided Answers section for SAP Cloud Platform. You can find solutions
for general SAP Cloud Platform issues as well as for specific services there.
You can report an incident or error through the SAP Support Portal . For more information, see Getting
Support [page 2531].
Component: HAN-CLS-DB (SAP Cloud Platform, SAP HANA Service in SAP, Azure, and
AWS [for databases provisioned before June 2018] regions)
Additionally, for database problems in Azure or AWS (databases provisioned before June 4, 2018) regions,
include the URL of the space the database is provisioned in:
1. In the cockpit, navigate to the org and space the database is provisioned in.
2. In the navigation area, go to SAP HANA Database Systems .
3. Choose the affected Database System.
4. Copy the URL from the overview of the database system.
Example URL:
https://account.hana.ondemand.com/cockpit#/globalaccount/<id>/subaccount/
<id>/org/<id>/dbsystems/<id>/overview
What Is the SAP HANA Service in SAP Regions (Neo Environment) [page 844]
Create and consume SAP HANA databases in SAP regions.
The SAP HANA service allows you to leverage the in-memory data processing capabilities of SAP HANA in the
cloud. As a managed database service, backups are fully automated and service availability guaranteed. Using
the SAP HANA service, you can set up and manage SAP HANA databases and bind them to applications
running on SAP Cloud Platform. You can access SAP HANA databases using a variety of languages and
interfaces, as well as build applications and models using tools provided with SAP HANA.
Note
There are different versions of the SAP HANA service available on SAP Cloud Platform. Whether this guide
is helpful for you depends on the SAP HANA service version you're using.
This guide is valid for the version of the SAP HANA service that runs in the Neo environment on SAP
regions.
If you're using the SAP HANA service in another environment or region, or are unsure whether this guide is
right for you, see Find the Right Guide.
Environment
For a list of all regions, see SAP Cloud Platform Regions and Service Portfolio.
Remember
This guide describes the features of the SAP HANA service that enable you to administrate SAP HANA
databases on SAP Cloud Platform. Keep in mind that it helps you with everything that is specific to SAP
Cloud Platform. For features specific to SAP HANA, please refer to SAP HANA Platform.
Work with SAP HANA 1.0: Work with SAP HANA tenant and single-container database systems and
SAP HANA version 1.0.
Manage Database Systems: Manage the overall lifecycle of your database systems. You can frequently
update to new revisions.
Tip
New revisions that are available for the update are announced in the
What's New for SAP Cloud Platform.
Create Tenant Databases on Database Systems: Create and manage tenant databases to store data in
separate data areas within one database system.
Bind Databases to Applications: Create tenant databases and bind them to applications running on SAP
Cloud Platform.
Share Databases with Other Spaces: Share a database that has been created in one space with other
spaces that belong to the same organization.
View Memory Usage for Database Systems: View the memory limits and usage for a database system.
Access SAP HANA Cockpit: Access the SAP HANA cockpit for a tenant database through the SAP
Cloud Platform cockpit.
Connect to a Database from Your Local Workstation: Open a database tunnel and connect to a database
using SAP HANA studio.
Set up Databases in High Availability and/or Disaster Recovery Mode: Improve your database availability
by setting up your database systems in high availability and/or disaster recovery mode.
For more detailed information about features and capabilities, see the Feature Scope Description for SAP Cloud
Platform, SAP HANA Service.
The following memory sizes are available: 64 GB, 128 GB, 256 GB, 512 GB, and 1 TB.
Tools
You can use the following tools in combination with the SAP HANA service:
Restriction
Backup
● When you stop a tenant database for several days, you may not be able to recover the database. It is
important to keep databases running without longer downtimes.
Monitoring
● The availability of SAP HANA tenant databases is not monitored and no alerts are sent when a database
is not available.
● The registration of availability checks for HANA native applications is not supported.
Recommendation
The support for database schemas on shared SAP HANA databases in trial accounts has ended. We
recommend creating an SAP HANA tenant database on a shared SAP HANA tenant database system.
Restriction
● You can create only one trial tenant database in the subaccount.
● The SAP HANA service determines to which database system the tenant is assigned.
● Trial databases are configured using fixed quota for RAM and CPU.
● You can use the trial tenant database for 12 hours. It shuts down automatically after this period to free
resources. You can, however, restart it. For more information, see Restart SAP HANA Tenant Databases
[page 933].
● If you do not use the tenant database for 7 days, it is automatically deleted to free the consumed disk
space.
● Backup is not enabled and no recovery is possible.
● There are some other restrictions as to which SAP HANA features can be used in the trial scenario and
which cannot.
2020
Component: SAP HANA Service
Technical Capability: Data-Driven Insights
Environment: Neo
Title: Revision 1.00.122.29
Description: The Neo environment supports SAP HANA revision 1.00.122.29. See SAP Note 2021789 - SAP HANA 1.0 Revision and Maintenance Strategy.
Type: Changed
Available as of: 2020-01-30
Release notes for the SAP HANA service in SAP regions for 2019 are available in the What's New for SAP Cloud
Platform.
To access the release notes, go to What's New for the SAP HANA Service (Neo).
Prerequisites
● You have set up your global account and subaccount on SAP Cloud Platform. For an overview of the
required steps, see Getting Started in the Neo Environment [page 1053].
● You have purchased quota for the SAP HANA service.
To enable the SAP HANA service, distribute the quota you've purchased in your subaccounts. For more
information, see Configure Entitlements and Quotas for Subaccounts [page 1756].
● To manage database systems, you must be assigned the Administrator role in the subaccount.
● To manage databases and bind them to applications, you must be assigned the Administrator or the
Developer role in the subaccount.
For more information, see Managing Member Authorizations in the Neo Environment [page 1904].
Getting Started
Once you've completed the initial setup, have a look at our Getting Started [page 849] tutorial to quickly learn
how to set up your databases and bind them to an application.
Get started with the SAP HANA service in the Neo environment.
Depending on your account type, different steps are necessary to get you started.
Quickstart
See the typical flow in enterprise accounts to get started quickly with the SAP HANA tenant database systems
in the Neo environment. Select a tile to find further information about this step.
Prerequisites
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases.
See Create a Database Administration User for SAP HANA Tenant Databases [page 923].
Create a Technical Database User and Disable the Password Lifetime Handling
When you create a binding in the Neo environment, you specify a technical database user that determines to
which SAP HANA tenant database schema the application is bound. You need to prevent the system from
asking you to change the initial password of that user. Otherwise, the application may not start correctly.
See Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page 927].
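The referenced task boils down to a single SQL statement, which an administration user can run in any SQL console connected to the database (the user name is illustrative):

```sql
-- Prevent the system from forcing an initial password change for a
-- technical user; MYTECHUSER is a placeholder for your technical user
ALTER USER MYTECHUSER DISABLE PASSWORD LIFETIME;
```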
Quickstart
See the typical flow to get started quickly with the SAP HANA single-container database systems in the Neo
environment. Select a tile to find further information about this step.
Prerequisites
Before you begin, purchase quota for an SAP HANA single-container database system. To start using the
service, you need to distribute the quota in your subaccounts. See Configure Entitlements and Quotas for
Subaccounts [page 1756].
Create a Technical Database User and Disable the Password Lifetime Handling
When you create a binding in the Neo environment, you specify a technical database user that determines to
which SAP HANA XS schema the application is bound. You need to prevent the system from asking you to
change the initial password of that user. Otherwise, the application may not start correctly.
See Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page 927].
In trial accounts in the Neo environment, you create a tenant database on a shared SAP HANA tenant database
system.
Restriction
Certain restrictions apply when you create a tenant database in trial accounts. For more information, see
Restrictions in Trial Accounts [page 847].
Quickstart
See the typical flow in trial accounts to get started quickly with the SAP HANA tenant databases in the Neo
environment. Select a tile to find further information about this step.
Note
You cannot create a database system in a trial account. You directly create a tenant database on a shared
SAP HANA tenant database system.
See Create a Database Administration User for SAP HANA Tenant Databases [page 856].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. To create a database, choose New on the Databases & Schemas page.
4. Specify a Database ID.
You can assign any database ID, but it must start with a letter and include only lowercase letters ('a' - 'z')
and numbers ('0' - '9'). Remember that the physical database name is not the same as the database ID.
5. Specify the SYSTEM user password to access the database.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the
ability to create other database users, access system tables, and so on. A database-specific SYSTEM user
exists in every database of a tenant database system. To ensure that the administration tool SAP HANA
cockpit can be used immediately after database creation, the SYSTEM user is automatically given several
roles the first time the SAP HANA cockpit is opened with this user.
6. (Optional) Turn on the Configure User for SHINE switch to create a user for the SAP HANA Interactive
Education (SHINE) demo application.
Note
For more information on SHINE, see Enable SAP HANA Interactive Education (SHINE) [page 1581].
a. In the SHINE User Name field, provide a user name for the SHINE user.
The user name can only contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'),
and underscores ('_').
b. Provide a password for the SHINE user in the SHINE User Password field and repeat the password in
the Repeat Password field.
The password must contain at least one uppercase and one lowercase letter ('a' - 'z', 'A' - 'Z') and one
number ('0' - '9'). It can also contain special characters (except ", ' and \).
7. (Optional) Configure the parameters.
You can define limits for the memory consumption by the individual processes of the new database. For
more information, see the SAP HANA Administration Guide.
Note
Do not turn off the Web Access switch; you need it enabled to be able to create a database administration user.
Caution
You cannot change this parameter after you have created the database.
8. Choose Create.
Note
You can only create one tenant database in your trial account.
Results
Next task: Create a Database Administration User for SAP HANA Tenant Databases [page 856]
You use the SYSTEM user to create a database administration user with SAP HANA cockpit in the Neo
environment.
Prerequisites
You have enabled the Web Access switch for your tenant database. For more information, see Create SAP
HANA Tenant Databases.
Context
You specify a password for the SYSTEM user when you create your SAP HANA tenant database. You use the
SYSTEM user to log on to SAP HANA cockpit and create your own database administration user.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability
to create other database users, access system tables, and so on. A database-specific SYSTEM user exists in
every database of a tenant database system. To ensure that the administration tool SAP HANA cockpit can be used immediately after database creation, the SYSTEM user is automatically given several roles the first time the SAP HANA cockpit is opened with this user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected subaccount are listed with their ID, type, version, and related
database system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform
further actions, for example, delete the database.
3. Select the relevant database.
4. In the database overview, open the SAP HANA cockpit link under Administration Tools.
5. Log on to the SAP HANA cockpit with the following credentials:
○ User: SYSTEM
○ Password: The password you've defined for the SYSTEM user when creating your tenant database.
A message appears, telling you that you do not have the required roles.
6. Choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
8. Choose Manage Roles and Users.
9. Expand the Security node.
10. Open the context menu for the Users node and choose New User.
11. On the User tab, provide a name for the new user.
12. In the Authentication section, make sure the Password checkbox is selected and enter a password.
According to the SAP HANA default password policy, the password must start with a letter and only
contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9'). If you've changed the
default policy, other password requirements might apply.
The new database user is displayed as a new node under the Users node.
14. Assign your user the necessary administrator roles and privileges by going to the Granted Roles section
and choosing the + (Add Role) button. To allow your administration user to create new users and assign
roles and privileges to them, for example, add the USER ADMIN privilege. For more information, see System Privileges in the SAP HANA Security Guide.
15. Choose Ok.
16. Save your changes.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to
work with SAP HANA Web-based Development Workbench by logging out from SAP HANA cockpit
first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before
you continue to work with the SAP HANA Web-based Development Workbench, where you need to log
on again with the new database user.
Recommendation
We recommend that you create more than one database administration user. If one database
administration user is locked or if the password needs to be reset, only another administration user can
unlock this user and reset the password.
Next Steps
You can use the newly created database administration user to create database users for the members of your
subaccount and assign them the required developer roles.
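If you prefer SQL over the cockpit UI, an equivalent administration user can be created with statements like the following, run as SYSTEM in a SQL console (user name and password are illustrative):

```sql
-- Create a dedicated administration user; skip the forced initial
-- password change so the user can log on directly
CREATE USER ADMIN_USER PASSWORD "Abcd1234" NO FORCE_FIRST_PASSWORD_CHANGE;
-- Allow the new user to create further users and grant privileges
GRANT USER ADMIN TO ADMIN_USER;
```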
You can bind a database to your Java application in the SAP Cloud Platform cockpit in the Neo environment.
Prerequisites
● Deploy a Java application in your subaccount. See Deploying and Updating Applications [page 1453].
● (Enterprise Accounts) Install a database system in your subaccount. See Install Database Systems [page
884].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose one of the following options:
By database 1. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas ,
then select the relevant database.
2. Choose Data Source Bindings.
The overview lists all Java applications that the specified database is currently
bound to, as well as the database user used in each case.
3. Choose New Binding.
4. Enter a data source name and the name of the application you want to bind the database to.
Note
Data source names you enter here must match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file.
To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.
By Java application 1. Choose Applications Java Applications in the navigation area and select the
relevant application.
Note
Data source names are freely definable but need to match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file.
To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.
5. In the Database ID field, enter the database to which the application should be
bound.
6. Provide the credentials of the database user.
7. Select the checkbox Verify credentials to verify the validity of the credentials.
8. Save your entries.
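As the notes above state, the data source name in the binding must match the JNDI name the application declares. As a sketch, a minimal declaration in web.xml might look like this (the name jdbc/DefaultDB is illustrative; use whatever name your application actually references):

```xml
<!-- Declares a JNDI data source reference; the binding's data source
     name must match res-ref-name -->
<resource-ref>
    <res-ref-name>jdbc/DefaultDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>
```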
Next Steps
Once the binding is created, you can start your application. To do so, navigate to Applications Java
Applications and select the Start icon for your application.
To unbind a database from an application, choose the Delete icon next to the binding. The application
maintains access to the database until it is restarted.
Previous task: Create a Database Administration User for SAP HANA Tenant Databases [page 856]
1.11.9.2.3.1.3 Tutorials
Follow one of our tutorials to try out the end-to-end process, from creating a database and an application to binding the database to the application, using the SAP Cloud Platform cockpit or the console client, and other SAP HANA tools.
Creating an SAP HANA Tenant Database and an SAP HANA XS Application [page 861]
Create and bind an SAP HANA tenant database to an SAP HANA XS application using the SAP Cloud
Platform cockpit, the SAP HANA cockpit and the SAP HANA Web-Based Development Workbench.
Creating an SAP HANA Database Using the Console Client [page 873]
Create a database in an SAP HANA tenant database system, using SAP Cloud Platform console client
commands in the Neo environment.
Create and bind an SAP HANA tenant database to an SAP HANA XS application using the SAP Cloud Platform
cockpit, the SAP HANA cockpit and the SAP HANA Web-Based Development Workbench.
Prerequisites
● Install an SAP HANA tenant database system (MDC). See Install Database Systems [page 884].
● You are assigned the Administrator role for the subaccount.
Context
To perform the different steps in this tutorial, you use different tools. You first create a tenant database, and
then create the XS application on the database.
In your subaccount in the SAP Cloud Platform cockpit, you create a database on an SAP HANA tenant
database system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. From the Databases & Schemas page, choose New.
4. Enter the required data:
Property Value
Note
mdc1 corresponds to the database system on
which you create the database.
SYSTEM User Password The password for the SYSTEM user of the database.
Note
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability to create other database users, access system tables, and so on. A database-specific SYSTEM user exists in every database of a tenant database system.
5. Choose Create.
Task overview: Creating an SAP HANA Tenant Database and an SAP HANA XS Application [page 861]
Next task: Create a Database User and Add the Required Roles [page 863]
Create a new database user in the SAP HANA cockpit and assign the user the required permissions for working
with the SAP HANA Web-based Development Workbench.
Context
You've specified a password for the SYSTEM user when you created an SAP HANA tenant database. You now
use the SYSTEM user to log on to SAP HANA cockpit and create your own database administration user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
a. In the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases
& Schemas .
b. Select the relevant database.
c. In the database overview, open the SAP HANA cockpit link under Administration Tools.
d. In the SAP HANA cockpit, enter SYSTEM as the user, and its password.
A message appears, telling you that you do not have the required roles.
e. Choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
f. Choose Continue.
2. Choose Manage Roles and Users.
3. Expand the Security node.
4. Open the context menu for the Users node and choose New User.
5. On the User tab, provide a name for the new user.
6. In the Authentication section, make sure the Password checkbox is selected and enter a password.
The password must start with a letter and only contain uppercase and lowercase letters ('a' – 'z', 'A' – 'Z'),
and numbers ('0' – '9').
7. Save your changes.
8. In the Granted Roles section, choose + (Add Role).
9. Type ide in the search field and select all roles in the result list.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to
work with SAP HANA Web-based Development Workbench by logging out from SAP HANA cockpit
first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before
you continue to work with the SAP HANA Web-based Development Workbench, where you need to log
on again with the new database user.
13. (Optional) Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page
927].
This step is not necessary to complete this tutorial, but you shouldn't forget to disable the password
lifetime handling in productive scenarios.
Task overview: Creating an SAP HANA Tenant Database and an SAP HANA XS Application [page 861]
Next task: Create and Deploy an SAP HANA XS Application [page 865]
Create an SAP HANA XS Hello World program using the SAP HANA Web-based Development Workbench.
Procedure
1. In the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases &
Schemas .
2. Select the relevant database.
3. In the database overview, open the SAP HANA Web-based Development Workbench link under
Development Tools.
4. Log on to the SAP HANA Web-based Development Workbench with your new database user and password.
5. Select the Editor.
Tip
The editor header shows details for your user and database. Hover over the entry for the SID to view
the details.
6. To create a new package, choose New Package from the context menu of the Content folder.
7. Enter a package name.
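The intermediate steps of creating the application descriptor files and the XSJS file are not shown above. As a sketch, the hello-world handler in a file such as hello.xsjs (file name assumed) typically uses the XS JavaScript API like this:

```javascript
// hello.xsjs - returns a plain-text greeting containing the logged-on
// database user, using the SAP HANA XS JavaScript API
$.response.contentType = "text/plain";
$.response.setBody("Hello World from User " + $.session.getUsername());
```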
The program is deployed and appears in the browser: Hello World from User <Your User>.
Task overview: Creating an SAP HANA Tenant Database and an SAP HANA XS Application [page 861]
Previous task: Create a Database User and Add the Required Roles [page 863]
Create and bind an SAP HANA tenant database to a sample JDBC Java application using the SAP Cloud
Platform cockpit.
Prerequisites
● Download and set up your Eclipse IDE, SAP HANA Tools for Eclipse, SAP Cloud Platform Tools for Java, and
SAP Cloud Platform SDK for Neo environment for Java Web. For more information, see Install SAP HANA
Tools for Eclipse [page 1569] and https://tools.hana.ondemand.com/#cloud.
● Install Maven.
● (Enterprise Accounts) Install an SAP HANA tenant database system (MDC). See Install Database Systems
[page 884].
● (Enterprise Accounts) You are assigned the Administrator role for the subaccount.
Context
To perform the different steps in this tutorial, you use different tools.
In your subaccount in the SAP Cloud Platform cockpit, you create a database on an SAP HANA tenant
database system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. From the Databases & Schemas page, choose New.
4. Enter the required data:
Property Value
Note
mdc1 corresponds to the database system on
which you create the database.
SYSTEM User Password The password for the SYSTEM user of the database.
Note
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability to create other database users, access system tables, and so on. A database-specific SYSTEM user exists in every database of a tenant database system.
5. Choose Create.
6. The Events page shows the progress of the database creation. Wait until the tenant database is in state
Started.
7. (Optional) To view the details of the new database, choose Overview in the navigation area and select the
database in the list. Verify that the status STARTED is displayed.
Task overview: Creating and Binding an SAP HANA Tenant Database to a Java Application [page 866]
Next task: Create a Database User and Add the Required Roles [page 868]
Context
You've specified a password for the SYSTEM user when you created an SAP HANA tenant database. You now
use the SYSTEM user to log on to SAP HANA cockpit and create your own database administration user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
a. In the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases
& Schemas .
b. Select the relevant database.
c. In the database overview, open the SAP HANA cockpit link under Administration Tools.
d. In the SAP HANA cockpit, enter SYSTEM as the user, and its password.
A message appears, telling you that you do not have the required roles.
e. Choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
f. Choose Continue.
2. Choose Manage Roles and Users.
3. Expand the Security node.
4. Open the context menu for the Users node and choose New User.
5. On the User tab, provide a name for the new user.
6. In the Authentication section, make sure the Password checkbox is selected and enter a password.
The password must start with a letter and only contain uppercase and lowercase letters ('a' – 'z', 'A' – 'Z'),
and numbers ('0' – '9').
7. Save your changes.
8. In the Granted Roles section, choose + (Add Role).
9. Type ide in the search field and select all roles in the result list.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to
work with SAP HANA Web-based Development Workbench by logging out from SAP HANA cockpit
first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before
you continue to work with the SAP HANA Web-based Development Workbench, where you need to log
on again with the new database user.
13. (Optional) Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page
927].
This step is not necessary to complete this tutorial, but you shouldn't forget to disable the password
lifetime handling in productive scenarios.
Build a WAR file and deploy the persistence-with-JDBC sample application in the cockpit.
Procedure
3. In the navigation area of the SAP Cloud Platform cockpit, choose Applications Java Applications .
4. Choose Deploy Application.
5. From the target folder, select the persistence-with-jdbc.war file.
6. Enter a valid application name, for example, javatest.
7. For Runtime Name and Runtime Version, select the Java version of your installed SAP Cloud Platform SDK.
8. Choose Deploy.
9. On the confirmation screen, choose Done.
Caution
Do not choose Start. If you choose Start, a default schema and binding is created for the database;
you'll do this in the next task.
Task overview: Creating and Binding an SAP HANA Tenant Database to a Java Application [page 866]
Previous task: Create a Database User and Add the Required Roles [page 868]
Use the SAP Cloud Platform cockpit to bind your tenant database to the application you created.
Procedure
1. From the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases
& Schemas .
2. Select the relevant database.
3. From the navigation area, choose Data Source Bindings.
4. Choose New Binding.
5. Leave the default settings for the data source (<default>).
6. Select the Java application you created.
7. Enter your user for the database and your password.
8. Choose Save.
Task overview: Creating and Binding an SAP HANA Tenant Database to a Java Application [page 866]
Start the application in the SAP Cloud Platform and fill it with data.
Procedure
You can view the application in the browser and check that the names you entered are available in the
database.
Procedure
1. To view the table in the SAP HANA Web-based Development Workbench, you have the following options:
○ If the SAP HANA Web-based Development Workbench is still open, choose Navigation Links Catalog.
○ If the SAP HANA Web-based Development Workbench is closed, reopen it as described in Create and Deploy an SAP HANA XS Application [page 865], and on the entry page, choose Catalog.
Task overview: Creating and Binding an SAP HANA Tenant Database to a Java Application [page 866]
Create a database in an SAP HANA tenant database system, using SAP Cloud Platform console client
commands in the Neo environment.
Prerequisites
Note
To be able to use this functionality, you must purchase an SAP HANA tenant database system.
Please contact SAP for details at the SAP Support Portal, as described in Getting Support [page 2531].
● Download and set up your SAP Cloud Platform SDK for Neo environment for Java Web and SAP HANA
client. For more information, see https://tools.hana.ondemand.com/#cloud.
● Install an SAP HANA tenant database system. Assign this to a subaccount.
● A user with the administrator role for the subaccount.
● Install Maven.
Context
To perform the different steps in this tutorial, you use different tools.
Procedure
3. Create database.
Note
To create a tenant database on a trial landscape, use -trial- instead of the ID of an SAP HANA tenant
database.
4. To access the SAP HANA database, provide the SYSTEM user password.
5. (Optional) Check that the status of the database is STARTED.
Note
If the console client response is that the status is CREATING, repeat the command until the status is
STARTED.
Note
To check the status of the database in a trial landscape, enter hanatrial.ondemand.com instead of
hana.ondemand.com.
Task overview: Creating an SAP HANA Database Using the Console Client [page 873]
Next task: Deploy Java Application (Person Sample) into a Subaccount [page 875]
Procedure
Task overview: Creating an SAP HANA Database Using the Console Client [page 873]
Previous task: Create a Database Using Database System mdc1 [page 874]
Next task: Create a Database User and Assign a Role [page 876]
Use the console client to create a database user, assign the content_admin role, and change the password.
Context
You need the tunnel to connect to your database. You can use the connection details you obtain from the tunnel
response to connect to database clients, for example, Eclipse Data Tools Platform (DTP).
Note
The database tunnel must remain open while you work on the remote database instance. Close the tunnel
only when you have completed the session.
Note
You can also create a database user using SAP HANA studio in the Eclipse IDE. For more information,
see Creating an SAP HANA Tenant Database and an SAP HANA XS Application [page 861].
Output Code
Password:
Connected to localhost:30015
0 rows affected (overall time 286,192 msec; server time 11,370 msec)
7. Assign the content_admin role to the database user using the following command:
8. Log on to the database with the new database user and change the password:
Note
If the database has a password policy that requires users to change their password after the initial
logon, you need to provide a new password, otherwise you cannot work with the servlet.
a. Use the quit command to log off from the hdbsql client:
hdbsql NEO_MULTID...=> \q
\hdbclient>hdbsql
Output Code
Password:
You have to change your password.
Enter new Password:
Confirm new Password:
Connected to localhost:30015
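Assuming the database tunnel is open on localhost:30015, steps 6 to 8 above correspond to SQL statements like these (user name and password are illustrative):

```sql
-- Run as SYSTEM: create the new database user (step 6)
CREATE USER TUTORIAL_USER PASSWORD "Initial1";
-- Grant the content_admin role to the new user (step 7)
GRANT content_admin TO TUTORIAL_USER;
-- Step 8: log on as TUTORIAL_USER with hdbsql; the forced initial
-- password change is handled interactively by the client
```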
Task overview: Creating an SAP HANA Database Using the Console Client [page 873]
Previous task: Deploy Java Application (Person Sample) into a Subaccount [page 875]
Once the database is available, you use another console client command to create a binding between the
database and an existing Java application.
Procedure
Go to the command window you used to create the database and enter the following command:
Output Code
Task overview: Creating an SAP HANA Database Using the Console Client [page 873]
Previous task: Create a Database User and Assign a Role [page 876]
Next task: Start Java Application and Add Person Data with Servlet [page 879]
There are additional commands that let you deploy the Java application and run it. You can view the application
in the browser, enter first and last names in the table, and check in SAP HANA Client that the names you
entered are available in the database.
Procedure
Output Code
3. To add person data, copy the URL from the status command into the address field of your browser and append /persistence-with-jdbc/. Start the servlet in the browser and add person data.
Output Code
2 rows selected (overall time 291,603 msec; server time 156 usec)
Task overview: Creating an SAP HANA Database Using the Console Client [page 873]
1.11.9.2.4 Concepts
SAP HANA Single-Container Database Systems (XS): The SAP HANA database is reserved for your exclusive use. You have full control of user management and can use a range of tools.
SAP HANA Tenant Database Systems (MDC): SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant databases. The SAP HANA tenant database system is reserved for your exclusive use, hosting multiple SAP HANA databases on a single SAP HANA database system. All tenant databases in the same system share the same system resources (memory and CPU cores), but each tenant database is fully isolated with its own database users, catalog, repository, persistence (data files and log files), and services.
For an overview on how to administer your database system and databases, see Administration [page 883].
Main Concepts
In the Neo environment in SAP regions, subaccount administrators can install SAP HANA single-container
database systems (XS) or SAP HANA tenant database systems (MDC), and create databases on database
systems in their subaccounts. Developers can then bind databases to applications running on SAP Cloud
Platform.
A database is associated with a specific subaccount and is available to applications in this subaccount. You can
create databases, bind them to applications, and delete them using the console client or the cockpit. You can
bind the same database to multiple applications, and the same application to multiple databases.
1.11.9.2.5 Development
For information on SAP HANA database development, please refer to the SAP Cloud Platform documentation.
Use the SAP Cloud Platform cockpit or the console client to administer your database systems and databases
in the Neo environment in SAP regions.
Remember
This guide describes the features of the SAP HANA service that enable you to administer SAP HANA databases on SAP Cloud Platform. Keep in mind that it covers everything that is specific to SAP Cloud Platform. For features specific to SAP HANA, refer to the SAP HANA Platform documentation.
An overview of the different tasks you can perform to administer database systems.
View Memory Usage for an SAP HANA Database System [page 896]
View the memory usage for an SAP HANA tenant database system in the Neo environment using the
SAP Cloud Platform cockpit.
Install a database system in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
Recommendation
We recommend that you always use the latest available database version. New revisions that are available for the update are announced in the What's New for SAP Cloud Platform.
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Choose New Database System.
4. Choose the type of the database system.
The name must be unique in your subaccount and can include only lowercase letters and digits.
8. Choose Start.
You can monitor the status of the installation in the Database Systems view.
Update your SAP HANA database systems in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
● Use the SAP HANA XS administration tool to enable basic authentication for SAP HANA application
lifecycle management to update SAP HANA XS-based components. Navigate to sap/hana/xs/lm and
add Basic in the Authentication section.
● You are assigned the administrator role for the subaccount.
Context
To update your SAP HANA database systems, you have the following options:
● Update the software components installed on your SAP HANA database system to a higher version.
● Apply a single Support Package on top of an existing SAP HANA database system.
● Before you apply an update, read the SAP Notes listed in the UI, and perform all required steps.
We recommend that you always use the latest available version. New revisions that are available for the
update are announced in the What's New for SAP Cloud Platform.
You should expect and plan for a temporary downtime for the SAP HANA database or SAP HANA XS Engine
when you update SAP HANA. You might not be able to work with SAP HANA studio, SAP HANA Web-based
Development Workbench, and cockpit UIs that depend on SAP HANA XS.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. To select the entry for the relevant database system in the list, click the link on its name.
The overview of the database system shows details, including the database version and state, and the
number of associated databases.
4. To update an SAP HANA database system, choose Check for updates.
5. Select a version to update to.
If you choose to update to a later version, remember to read the corresponding release note.
Note
You can select SAP HANA revisions approved for use in SAP Cloud Platform only. To update to another
revision, please contact SAP Support.
Updating an SAP HANA database system to a maintenance revision can result in upgrade path
limitations. See SAP Note 1948334 for details.
6. (Optional) Specify whether you'd like a prompt for confirmation before the update of the SAP HANA
database system is applied and the system downtime is started.
By default, this option is selected. If you deselect it, the update is performed without any user interaction.
7. Choose Continue/Update.
The system begins preparing to update. The update process takes some time and is executed
asynchronously. The update dialog box remains on the screen while the update is in progress; however, you
can close the dialog box and reopen it later.
8. (Optional) If you chose to be prompted, the process stops and waits for your confirmation before starting
the update.
During preparation, the SAP HANA database system is not modified, so you can safely cancel the update
process.
9. Choose Update.
The update starts and takes about 20 minutes.
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is currently supported by SAP Cloud Platform.
Related Information
Restart your database systems in the Neo environment using the SAP Cloud Platform cockpit.
Context
If your databases aren't working properly, you can try to solve the issues by restarting the corresponding
database system. A restart is performed for the entire database system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database system you want to
restart. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP
Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
3. Select the database system you want to restart.
4. On the Overview page of the database system, choose Restart.
5. Choose OK to confirm the restart.
If security-related OS patches are pending for the database system you have restarted, the host of the
database system is also restarted.
Perform a point-in-time restore in the Neo environment by creating a service request in the SAP Cloud Platform
cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Service Requests .
3. Choose New Request and do the following:
a. Choose Database System.
Caution
If you restore a database system, all databases within this system are restored. To restore a single
database only, see Restore Databases [page 996].
Caution
You will lose all data stored in the databases in the database system between the time you specify
in the New Service Request screen and the time at which you create the service request. If you
create a restore request at 3pm to restore your database system to 9am on the same day for
example, all data stored between 9am and 3pm is lost.
d. Choose Save.
You see a template for opening an incident in the SAP Support Portal.
e. From the template, select the text between the two dashed lines and copy it to your clipboard.
Navigate to SAP HANA / SAP ASE Service Requests and choose the Display icon next to
your request to find the template for opening a ticket at any time.
f. Choose Close.
4. Open SAP Support Portal .
5. Choose Report an Incident.
Results
As soon as your database system is restored, the state of your request is set to Finished in the cockpit and the
incident is set to Completed. You can see the state of your request in the cockpit by navigating to SAP
HANA / SAP ASE Service Requests . The state appears next to your service request. SAP Support might
contact you by e-mail if they need clarification, or if you need to take any further action.
Note
Your database system is available for use for all users immediately after it is restored.
Note
To cancel your restore request, go to SAP HANA / SAP ASE Service Requests , choose the request and
select the Delete icon. You can cancel a request only if it still has the state New.
Related Information
Upsize your SAP HANA database systems in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
Note
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
For database systems in disaster recovery mode, you must first choose the subaccount on the region
where you have set up the secondary database system.
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Select the database system you want to resize.
4. From the overview page of the database system, choose Upsize.
Remember
Some downtime is required to upgrade your database system to the new size.
Results
When processing completes, the corresponding quota is in use. The old quota is released and can be reused.
Next Steps
For database systems in disaster recovery mode, contact SAP Support after upsizing the secondary database
system to transfer the metadata of the secondary system to the primary system. See Getting Support [page
1001]. Once the metadata has been transferred, repeat the preceding steps to upsize the primary system.
Change the edition of your SAP HANA database systems in the Neo environment using the SAP Cloud Platform
cockpit. You can change the edition from standard to enterprise, or the other way around. You can also change
the edition of database systems in high availability or disaster recovery mode.
Prerequisites
Context
Note
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Select the database system you want to change the edition for.
4. From the Details section of the database system, trigger the edition switch:
○ When switching from the standard to the enterprise edition, choose Upgrade to Enterprise.
○ When switching from the enterprise to the standard edition, choose Switch to Standard.
5. Choose Confirm.
Results
When the edition switch completes, the corresponding quota is in use. The old quota is released and can be
reused.
Delete your database system in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
Recommendation
Export valuable data to another source in advance, since all data in your database system will be deleted.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems.
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
You can monitor the status of the deletion in the Database Systems view.
Learn how to install new SAP HANA components in the Neo environment.
Prerequisites
Use the SAP HANA XS administration tool to enable basic authentication for SAP HANA application lifecycle
management to update SAP HANA XS-based components. Navigate to sap/hana/xs/lm and add Basic in the
Authentication section.
Context
You can install the following types of SAP HANA components, as long as they are enabled in your subaccount:
● SAP HANA platform components, which are installed on the SAP HANA database system at the operating
system level
● SAP HANA XS applications, which are deployed on the SAP HANA database system
Restriction
Installation of SAP HANA XS-based components on SAP HANA tenant database systems (MDC) is
currently not supported. Installation of SAP HANA XS-based components is supported on SAP HANA
single-container systems (XS) with version SPS09 or higher.
Recommendation
You should expect and plan for a temporary downtime for the SAP HANA database or SAP HANA XS engine
when installing some SAP HANA components. You might not be able to work with SAP HANA studio, SAP
HANA Web-based Development Workbench, and cockpit UIs that depend on SAP HANA XS.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. To select the entry for the relevant database system in the list, click the link on its name.
The overview of the database system shows details, including the database version and state, and the
number of associated databases.
4. To install an SAP HANA component for the selected productive database system, choose Install
components.
5. Select a solution to install.
If you have a license for the solution in your subaccount, all SAP HANA components that are part of the
solution are listed.
6. Select the target version for all listed components.
7. (Optional) Specify whether you'd like a prompt for confirmation before the SAP HANA components are
installed and the system downtime is started.
By default, this option is selected. If you deselect it, the installation is performed without any user
interaction.
8. Choose Continue/Install.
The system begins preparing to install. The installation process takes some time and is executed
asynchronously. The installation dialog box remains on the screen while the installation is in progress;
however, you can close the dialog box and reopen it later.
9. (Optional) If you chose to be prompted, the process stops and waits for your confirmation before starting
the installation.
During preparation, the SAP HANA database system is not modified, so you can safely cancel the
installation process.
10. Choose Install.
The installation starts and takes about 20 minutes.
Results
SAP HANA components are installed on your SAP HANA database system.
Note
Refer to SAP HANA Service in SAP Regions (Neo Environment) [page 843] to find out which HANA revision
is supported by SAP Cloud Platform.
Related Information
Prerequisites
Context
You can uninstall additional SAP HANA platform components that are already installed on your SAP HANA
database system, from the operating system level.
Restriction
You can't uninstall core SAP HANA platform components, which are installed by default when your SAP
HANA database system is provisioned.
You can't uninstall SAP HANA smart data streaming from an SAP HANA tenant database system, if SAP
HANA smart data streaming is enabled on a tenant database. You must first disable SAP HANA smart data
streaming on the relevant tenant database.
Please expect a temporary downtime for the SAP HANA database system when you're uninstalling SAP HANA
components.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Select the entry for the relevant database system.
The overview of the database system shows details, including the database version and state, and the
number of associated databases.
4. To uninstall an SAP HANA component for the selected database system, choose Uninstall component.
5. Select a solution to uninstall.
6. Choose Uninstall.
The uninstallation starts and takes about 15 minutes. You can monitor the status of the uninstallation in the
Database Systems view.
View the memory usage for an SAP HANA tenant database system in the Neo environment using the SAP
Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system
you'd like to view memory limits for. For more information, see Navigate to Orgs and Spaces [page 1751].
2. In the navigation area, choose Database Systems SAP HANA / SAP ASE and select the entry for the
relevant database system.
3. In the navigation area, choose Memory Usage.
You see a table that lists the memory limits and usage for each tenant database and the system database.
You can view the following values:
○ On database system level:
○ Global allocation limit: The amount of memory that is available for the SAP HANA system.
○ Global shared memory allocation: The amount of allocated memory of the SAP HANA system
that cannot be associated with a concrete process.
○ Global shared memory usage: The amount of used memory of the SAP HANA system that cannot
be associated with a concrete process.
○ On tenant and system database level:
○ Configured allocation limit: The limit that is currently set for a particular process.
○ Allocated memory: The memory that is currently allocated to a particular process.
○ Used memory: The memory that is currently used by a particular process.
For more information about memory usage, see the SAP HANA Administration Guide.
Note
If you haven't set a limit for a particular process or if you've allocated a percentage, the corresponding
entry is empty and the total of configured allocation limits cannot be calculated.
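The note above (an empty entry means the total of configured allocation limits cannot be calculated) can be expressed as a small helper. This is an illustration of that rule with made-up data, not an actual cockpit API:

```python
from typing import Optional

def total_allocation_limit(limits: dict[str, Optional[int]]) -> Optional[int]:
    """Sum the configured allocation limits (in GB) across databases.

    Returns None if any database has no explicit limit set (an empty
    entry), mirroring the cockpit behavior where the total cannot be
    calculated in that case.
    """
    values = list(limits.values())
    if any(v is None for v in values):
        return None
    return sum(values)

# Hypothetical tenant databases; the names and sizes are made up.
print(total_allocation_limit({"SYSTEMDB": 16, "T01": 32, "T02": 16}))  # 64
print(total_allocation_limit({"SYSTEMDB": 16, "T01": None}))           # None
```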
Analyze error warnings that are related to data backups of tenant databases or the system database in the Neo
environment.
Context
If the backup ran into problems, backup-related error messages are shown in the Monitoring tab of the SAP
Cloud Platform cockpit. For more information, see View Monitoring Metrics of a Database System [page 898].
This can be related to memory issues in the SAP HANA database system.
Procedure
To find out why the backup failed, analyze the alert to determine which tenant databases are affected, or
whether the system database is affected.
1. If only one or a few tenant databases are affected, try the following:
1. Check the memory limits and the memory usage of the affected tenant databases using the Memory
Usage tab of the SAP Cloud Platform cockpit. If there are memory limits set on the tenant databases,
consider removing or increasing the limits. For more information, see Create SAP HANA Tenant
Databases and View Memory Usage for an SAP HANA Database System [page 896].
2. Connect to the tenant database and check the memory consumption of the tenant databases using
SAP HANA studio or SAP HANA cockpit. For more information, see SAP Note 1999997 .
3. If you cannot connect to the tenant database, restart it, which frees memory and may therefore resolve
memory issues. For more information, see Restart SAP HANA Tenant Databases [page 933].
2. If almost all tenant databases or the system database are affected, try the following:
1. If you cannot connect to any of the tenant databases, restart the database system. For more
information, see Restart Database Systems [page 887].
2. Check the overall memory usage using the Memory Usage tab of the SAP Cloud Platform cockpit. For
more information, see View Memory Usage for an SAP HANA Database System [page 896].
3. Connect to the tenant databases and check the memory consumption of the tenant databases using
SAP HANA studio or SAP HANA cockpit. For more information, see SAP Note 1999997 .
4. Stop the tenant databases that you don't currently need. For more information, see Create SAP HANA
Tenant Databases.
Tip
If you frequently run into memory-related backup problems, try to find out where they come from and why
your databases consume too much memory. These actions might resolve your issues:
● If there are any tenant databases you don't currently need, stop these databases to free resources.
Restart them only when you need them.
● Delete any unneeded tenant databases.
● If possible, remove data from your databases.
● If possible, move data to another system.
Note
Even after you've fixed the memory issue, you may still see the alert in the cockpit until the next daily
backup has been successfully created.
Related Information
Learn how you can monitor your database systems running in the Neo environment.
In the cockpit, you can view the current metrics of a selected database system to get information about its
health state. You can also view the metrics history of a productive database to examine the performance trends
of your database over different intervals of time or investigate the reasons that have led to problems with it. You
can view the metrics for all types of databases.
CPU Load
The percentage of the CPU that is used on average over the last minute.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

Disk I/O
The number of bytes per second that are currently being read from or written to the disc.
This metric is updated every minute. An alert is triggered when 5 consecutive checks with an interval of 1 minute are not in an OK state.

Network Ping
The percentage of packets that are lost on the way to the database host.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

OS Memory Usage
The percentage of the operating system memory that is currently being used.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

Used Disc Space
The percentage of the local discs of the operating system that is currently being used.
This metric is updated every minute. An alert is triggered when 5 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Availability
● OK - the database is reachable from our central admin component via JDBC.
● Critical - either the database is down or overloaded, or there's a network issue.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Alerting Availability
● OK - alerts can be retrieved from the SAP HANA system.
● Critical - alerts cannot be retrieved as there is no connection to the database. This also implies that any other visible metric may be outdated.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Compile Server
● OK - the compile server is running on the SAP HANA system.
● Critical - the compile server crashed or was otherwise stopped. The service should recover automatically. If this does not work, a restart of the system might be necessary.
This metric is updated every 10 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Backup Volumes Availability
● OK - the backup volumes are available.
● Critical - the backup volumes are not available.
This metric is updated every 15 minutes.

HANA DB Data Backup Age
● OK - the age of the last data backup is below the critical threshold.
● Critical - the age of the last data backup is above the critical threshold.
This metric is updated every 60 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Data Backup Exists
● OK - the data backup exists.
● Critical - no data backup exists.
This metric is updated every 60 minutes.

HANA DB Data Backup Successful
● OK - the last data backup was successful.
● Critical - the last data backup was not successful.
This metric is updated every 60 minutes.

HANA DB Log Backup Successful
● OK - the last log backup was successful.
● Critical - the last log backup failed.
This metric is updated every 10 minutes.

HANA DB Service Memory Usage
● OK - no service is running out of memory.
● Critical - a service is causing an out of memory error. See SAP Note 1900257.
This metric is updated every 5 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA XS Availability
● OK - XSEngine accepts HTTPS connections.
● Critical - XSEngine does not accept HTTPS connections.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

Sybase ASE Availability
● OK - the database is reachable from our central admin component via JDBC.
● Critical - either the database is down or overloaded, or there's a network issue.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

Sybase ASE Long Running Trans
● OK - a transaction is running for up to an hour.
● Warning - a transaction is running for more than an hour.
● Critical - a transaction is running for more than 13 hours.
This metric is updated every 2 minutes. An alert is triggered when a check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Fm State
FaultManager is a component for highly available (HA) SAP ASE systems that triggers a failover in case the primary node is not working.
● OK - FaultManager for a system that is set up as an HA system is running properly.
● Critical - FaultManager is not working properly and the failover doesn't work.
This metric is updated every 2 minutes. An alert is triggered when a check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Latency
● OK - the latency for the HA replication path is less than or equal to 10 minutes.
● Warning - the latency is greater than 10 minutes.
● Critical - the latency is greater than 20 minutes. A high latency might lead to data loss if there is a failover.
This metric is updated every 2 minutes. An alert is triggered when a check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Primary State
● OK - the primary host of a system that is set up as an HA system is running fine.
● Critical - the primary host is not running properly.
This metric is updated every 2 minutes. An alert is triggered when a check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Standby State
● OK - the secondary or standby host of a system that is set up as an HA system is running properly.
● Critical - the secondary or standby host is not running properly.
This metric is updated every 2 minutes. An alert is triggered when a check with an interval of 1 minute is not in an OK state.
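The "N consecutive checks not in an OK state" rule used throughout the list above can be sketched as follows. This is an illustration of the triggering logic only, not the actual monitoring implementation:

```python
def alert_triggered(check_states: list[str], threshold: int) -> bool:
    """Return True if `threshold` consecutive checks are not in an OK state.

    `check_states` holds the check results in chronological order,
    e.g. ["OK", "Critical", "Critical"].
    """
    consecutive = 0
    for state in check_states:
        if state != "OK":
            consecutive += 1
            if consecutive >= threshold:
                return True
        else:
            consecutive = 0  # a healthy check resets the streak
    return False

# CPU Load alerts after 2 consecutive non-OK checks:
print(alert_triggered(["OK", "Critical", "Critical"], threshold=2))  # True
# Non-OK checks interrupted by a recovery do not alert:
print(alert_triggered(["Critical", "OK", "Critical"], threshold=2))  # False
```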
Prerequisites
The readMonitoringData scope is assigned to the platform role that you use for the subaccount. For more
information, see Platform Scopes [page 1910].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Navigate to the Database Systems page either by choosing SAP HANA / SAP ASE Database Systems
from the navigation area or from the Overview page.
All database systems available in the selected subaccount are listed with their details, including the
database version and state, and the number of associated databases.
3. Choose the entry for the relevant database system in the list.
4. Choose Monitoring from the navigation area to get detailed information about the current state and the
history of metrics for the selected database system.
When you open the checks history, you can view graphic representations for each of the different checks,
and zoom in to see additional details. If you zoom in on a graphic horizontally, all other graphics also zoom
in to the same level of detail. Press Shift and drag to pan a graphic. Double-click a graphic to zoom out to
the initial size.
You can select different periods for each check. Depending on the interval you select, data is aggregated as
follows:
○ Last 12 or 24 hours - data is collected every minute.
○ Last 7 days - data is aggregated from the average values for each 10 minutes.
○ Last 30 days - data is aggregated from the average values for each hour.
You can also select a custom time interval for viewing check history.
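The aggregation described above (per-minute samples averaged into 10-minute or hourly buckets for the longer periods) can be sketched as follows; the bucket sizes match the text, while the data and function name are illustrative:

```python
def aggregate(samples: list[float], bucket_size: int) -> list[float]:
    """Average per-minute samples into buckets of `bucket_size` minutes.

    This mirrors the described views: bucket_size=10 for the 7-day view,
    bucket_size=60 for the 30-day view.
    """
    return [
        sum(chunk) / len(chunk)
        for chunk in (samples[i:i + bucket_size]
                      for i in range(0, len(samples), bucket_size))
        if chunk
    ]

# 20 per-minute CPU readings collapsed into two 10-minute averages:
readings = [10.0] * 10 + [30.0] * 10
print(aggregate(readings, bucket_size=10))  # [10.0, 30.0]
```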
Related Information
Use the REST API to get metrics for your database systems that are running on SAP Cloud Platform in the Neo
environment.
Protection
The monitoring REST API is available with the following basic URI: https://api.{host}/monitoring/v2.
This version is protected with OAuth 2.0 client credentials. Create an OAuth client and obtain an access token
to call the API methods. See Using Platform APIs [page 1737]. For more information about the format of the
REST APIs, see Monitoring API .
Note
While you are creating the API client on the Platform API tab, select the Monitoring Service API with the
Read Monitoring Data scope.
Overview
Request the states or the metric details of your database systems by using the GET REST API calls.
Example
Use the following request to receive all the metrics for a database system located in the Europe (Rot/
Germany) region (with hana.ondemand.com host):
https://api.hana.ondemand.com/monitoring/v2/accounts/<subaccount_technical_name>/
dbsystem/<database_system>/metrics
Example
Use the following request to receive the state of a database system located in the US East (Ashburn) region
(with us1.hana.ondemand.com host):
https://api.us1.hana.ondemand.com/monitoring/v2/accounts/
<subaccount_technical_name>/dbsystem/<database_system>/state
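Putting the two example requests together, a client first obtains an OAuth 2.0 access token via the client-credentials flow (see Using Platform APIs) and then calls the endpoint with a bearer token. The URL shapes below follow the examples above; the response format and the token-acquisition step are not shown here and must be taken from the referenced documentation:

```python
import json
import urllib.request

BASE = "https://api.{host}/monitoring/v2"

def metrics_url(host: str, subaccount: str, dbsystem: str) -> str:
    """Build the metrics URL as shown in the examples above."""
    return (BASE.format(host=host)
            + f"/accounts/{subaccount}/dbsystem/{dbsystem}/metrics")

def state_url(host: str, subaccount: str, dbsystem: str) -> str:
    """Build the state URL as shown in the examples above."""
    return (BASE.format(host=host)
            + f"/accounts/{subaccount}/dbsystem/{dbsystem}/state")

def get_json(url: str, access_token: str) -> dict:
    """Call a monitoring endpoint with an OAuth 2.0 bearer token.

    `access_token` must be obtained beforehand with the client-credentials
    flow; the shape of the JSON response is not documented here.
    """
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder names; substitute your subaccount's technical name and
# database system ID before calling get_json with a real token.
print(metrics_url("hana.ondemand.com", "mysubaccount", "mydb"))
```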
Protect your application data stored in SAP HANA databases, SAP ASE databases, any other Data
Management services, and the Document service in the Neo environment.
Overview
The SAP Cloud Platform Neo environment includes a standard disaster recovery option. It is based on data
restore from backups, which are stored in a disaster recovery site. The backups contain all data stored in the
Data Management services and the Document service on SAP Cloud Platform. For more information about the
Data Management services and the Document service, see Capabilities [page 15].
Note
Data not stored in the Data Management services or the Document service, as well as data stored in
deployed applications, cannot be recovered.
For more information about the backup specifics, see Frequently Asked Questions [page 1000].
Note
The time needed for declaring the disaster is included in the RTO.
The SAP Cloud Platform Enhanced Disaster Recovery service requires an additional fee, but offers better SLAs
for the RPO and RTO. For more information, see What Is Enhanced Disaster Recovery.
● If the affected region is still operational and there is a hardware issue with your database, it will be restored
in the same region.
● If the affected region is no longer operational, the future location of your database will be confirmed in your
restore request.
● Follow the instructions provided in the established SAP Cloud Platform notification channels. For more
information, see Platform Updates and Notifications [page 2535].
● Create an incident for backup restore, assigned to the component BC-NEO-PERS. For more information,
see Getting Support [page 2531].
Specify in the incident the respective names of the affected subaccounts, applications, and database
systems along with the corresponding database version.
You can set up your database system in high availability mode. A high availability setup consists of two
database systems that permanently replicate data from a primary to a secondary database system.
Setting Up High Availability for SAP HANA Database Systems [page 907]
To set up high availability, SAP makes a secondary SAP HANA database system available for you and
sets up the data replication between your database system (the primary database system) and your
secondary database system.
Administering SAP HANA Database Systems in a High Availability Setup [page 909]
You can administer the primary database system as you would any other SAP HANA database system.
Some restrictions apply for the management of the secondary database system.
The high availability (HA) setup consists of two SAP HANA database systems, a primary and a secondary,
between which data is continuously and synchronously replicated. If the primary database system fails, the
secondary database system is promoted to the role of primary database system.
As shown in the figure below, the primary and secondary SAP HANA database systems (also called the "HA
pair") are hosted in the same region. Applications that are running in that region establish a client connection
to the primary database system.
The primary database system is constantly monitored. A failure triggers the promotion of the secondary
system. The promotion takes about a minute and isn't dependent on the load of the database. To minimize data
loss, access to the primary database system is stopped before the promotion, and the promotion takes place
only after all pending data has been replicated.
Note
To prevent data loss, you should develop your applications to gracefully handle connection issues during
the promotion.
Once the promotion is complete, applications can reconnect to the new primary database system and see all
committed data, including data that was committed in the initial primary database system. This also works for
an SAP HANA tenant database. The application doesn't need to be restarted as it is bound to the HA pair. Once
the initial primary database system is reactivated as the new secondary database system, the new primary
database system begins replicating.
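"Gracefully handling connection issues during the promotion" usually means retrying with a delay until the new primary system accepts connections again. A minimal sketch; the function names and timings are illustrative, not an SAP-provided API:

```python
import time

def with_reconnect(connect, work, attempts: int = 10, delay: float = 5.0):
    """Run `work(conn)`, reconnecting if the connection drops (for
    example while the secondary system is being promoted to primary).

    `connect` opens a new connection; `work` performs the database
    operation. Both are application-supplied callables.
    """
    last_error = None
    for _ in range(attempts):
        try:
            conn = connect()
            return work(conn)
        except ConnectionError as exc:   # driver-specific in real code
            last_error = exc
            time.sleep(delay)            # the promotion takes about a minute
    raise last_error
```

In a real application, the caught exception would be the database driver's connection error, and a backoff capped near the one-minute promotion window described above is a reasonable choice.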
Setting Up High Availability for SAP HANA Database Systems [page 907]
Administering SAP HANA Database Systems in a High Availability Setup [page 909]
What to Do in Case of an SAP HANA Database System Failure [page 912]
Prerequisites
● Available quota for database systems in your subaccount. You need the same quota for the secondary
database system as for the primary database system. For more information, see Configure Entitlements
and Quotas for Subaccounts [page 1756].
● The primary database system must have a minimum size of 64 GB.
● The primary database system must be at least SAP HANA revision 122.13.
Restriction
If you have installed SAP Streaming Analytics on an SAP HANA database system, you cannot set up this
system in high availability mode.
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Choose the database system that you want to set up in a high availability mode.
The overview of the database system shows details, including the database version and state, and the
number of associated databases.
4. To set up high availability, choose Configure.
5. Choose Start to confirm the new high availability setup.
You can monitor the status of the setup in the Database Systems view.
Results
You have configured the high availability setup and the replication is running.
Note
The primary database system might need to be restarted during the setup process. You should expect and
plan for a temporary downtime for the SAP HANA database or SAP HANA XS engine when you update SAP
HANA. You might not be able to work with SAP HANA studio, SAP HANA Web-based Development
Workbench, and cockpit UIs that depend on SAP HANA XS.
Related Information
Administering SAP HANA Database Systems in a High Availability Setup [page 909]
What to Do in Case of an SAP HANA Database System Failure [page 912]
High Availability for SAP HANA Database Systems [page 905]
You can administer the primary database system as you would any other SAP HANA database system. Some
restrictions apply for the management of the secondary database system.
The following list shows, for each task, what applies to the primary and to the secondary SAP HANA database
system.

Access
● Primary: Access the primary SAP HANA database system via the cockpit or the console client as you do
any other database system.
● Secondary: You cannot access the secondary database system.

Monitoring
● Primary: Monitor the primary database system as you do any other SAP HANA database system. You can
also monitor the status of the replication to the secondary database system.
● Secondary: You cannot monitor the secondary database system. However, you can monitor the status of
the system replication via the primary database system.

Update
● Primary: Update the primary database system as you do any other SAP HANA database system. For more
information, see Update Database Systems [page 885]. Restriction: The downtime during the update is the
same as for an SAP HANA system that is not in a high availability setup. You can't update additional SAP
HANA components using the update with minimized downtime.
● Secondary: When you trigger an update of the primary system, the secondary system is also updated. The
secondary system is updated before the primary system. The primary system remains up and running while
the secondary system is being updated.

Restart
● Primary: Restart the primary database system as you do any other SAP HANA database system. For more
information, see Restart Database Systems [page 887].
● Secondary: When you trigger the restart of the primary database system, the secondary database system
will also automatically restart once the restart of the primary system is completed.

Restore
● Primary: Restore the primary database system as you do any other SAP HANA database system. For more
information, see Restore Database Systems [page 995].
● Secondary: You don't need to restore the secondary database system. In case of a failure of the secondary
database system, the data is replicated from the primary database system once the secondary system is
reactivated.

Installing additional SAP HANA components
● Primary: Install SAP HANA components on the primary database system as you do on any other SAP
HANA database system. For more information, see Install SAP HANA Components [page 893].
● Secondary: When you trigger the installation of SAP HANA components on the primary database system,
they are installed on the secondary database system.

Upsize
● Primary and secondary: You need two quotas of the same edition and the size to which you want to
upgrade - one for the primary, and one for the secondary database system. The upsize process will upsize
both database systems. When processing completes, the corresponding quotas are in use. The old quotas
are released and can be reused. See Upsize Database Systems [page 890].

Edition Change
● Primary: Change the edition of the primary database system as you do for any other SAP HANA database
system. For more information, see Change the Database System Edition [page 891]. Note: To change the
edition of database systems in high availability mode, two quotas must be available.
● Secondary: When you trigger the edition change of the primary database system, the secondary database
system is also switched to the new edition.

Uninstalling additional SAP HANA components
● Primary: Uninstall SAP HANA components on the primary database system as you do on any other SAP
HANA database system. For more information, see Uninstall SAP HANA Components [page 895].
Restriction: The downtime during the uninstallation is the same as for an SAP HANA system that is not in a
high availability setup.
● Secondary: When you trigger the uninstallation of SAP HANA components on the primary database
system, they are uninstalled on the secondary database system. The primary database system remains
available during the uninstallation of SAP HANA components on the secondary database system.

Deletion
● Primary: Delete the primary SAP HANA database system as you do any other database system. For more
information, see Delete Database Systems [page 892]. Note: If you delete the primary database system,
both the primary and secondary SAP HANA database systems are deleted.
● Secondary: See Delete High Availability Systems [page 912].
Related Information
Setting Up High Availability for SAP HANA Database Systems [page 907]
What to Do in Case of an SAP HANA Database System Failure [page 912]
High Availability for SAP HANA Database Systems [page 905]
Remove the high availability setup from a database system by deleting the secondary database system.
Prerequisites
Restriction
You cannot remove the high availability setup from SAP HANA tenant database systems (MDC) with tenant
databases using the SAP Cloud Platform cockpit. To remove these systems, please contact SAP Support.
For more information, see Getting Support [page 2531].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Choose the database system from which to remove the high availability setup.
The overview of the database system shows details, including the database version and state, and the
number of associated databases.
4. Under High Availability and Disaster Recovery, choose Disable and confirm.
5. To remove the high availability setup and delete the associated secondary system, choose Start.
The quota assigned to the secondary system is released after the secondary system is deleted.
If your primary SAP HANA database system fails in a high availability setup, the secondary database system is
promoted to the role of primary database system.
There is no action required from you if your primary SAP HANA database system fails.
If you have questions about the high availability features, you can report an incident on the SAP Support Portal.
For more information, see Getting Support [page 2531].
Administering SAP HANA Database Systems in a High Availability Setup [page 909]
Setting Up High Availability for SAP HANA Database Systems [page 907]
High Availability for SAP HANA Database Systems [page 905]
Set up your SAP HANA database system in the Neo environment in a disaster recovery mode to restore
operations.
You can set up your SAP HANA database system in a disaster recovery mode. A disaster recovery setup
consists of two database systems that are hosted in different regions and that permanently replicate data from
a primary to a secondary database system.
Disaster recovery refers to restoring operations after an outage due to a prolonged region or site failure.
Although similar to a high availability setup, disaster recovery may require backing up data across longer
distances, and may thus be more complex and costly.
Setting Up Disaster Recovery for SAP HANA Database Systems [page 915]
To set up disaster recovery, SAP makes a secondary SAP HANA database system available for you and
sets up the data replication between your database system (the primary database system) and your
secondary database system. Report an incident to create a disaster recovery setup or use the
onboarding wizard in the cockpit.
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 917]
Although you administer the primary database system in the same way as any other SAP HANA
database system, some restrictions apply to the management of the secondary database system.
The disaster recovery (DR) setup consists of two SAP HANA database systems that are hosted in two different
regions and between which data is continuously and asynchronously replicated: a primary and a secondary
database system. If there is a disaster affecting the primary region, the secondary database system is
promoted to the role of the primary database system.
The geographic separation of the two regions makes the DR system capable of withstanding the loss of an
entire region. For example, your system may include a primary database system that is located in San
Francisco, and a secondary database system in San Jose. If the primary database system is destroyed, the
secondary server is safe and ready to assume control. Data is continuously and asynchronously replicated
between the primary and the secondary database system. Each database has separate resources, including
disks.
As shown in the figure below, the secondary database system in the DR region is promoted to the role of
primary database system if the region in which the primary database system is hosted is down. Once the
promotion is complete, the previously inactive applications can connect to the new primary database system,
and see all committed data. Because the replication between the two regions is asynchronous, there may be
some data loss.
To set up disaster recovery, SAP makes a secondary SAP HANA database system available for you and sets up
the data replication between your database system (the primary database system) and your secondary
database system. Report an incident to create a disaster recovery setup or use the onboarding wizard in the
cockpit.
Prerequisites
Restriction
● You cannot set up SAP HANA tenant database systems (MDC) in disaster recovery mode.
● If you have installed SAP Streaming Analytics on an SAP HANA database system, you cannot set up
this system in disaster recovery mode.
Procedure
To trigger the creation of the disaster recovery setup, see Onboarding Using Incident Requests.
Note
During the creation of the disaster recovery setup, file-based certificate stores are migrated to in-database
certificate stores. For more information, see SAP Note 2175664 .
Results
Once disaster recovery has been set up for your database system, you are notified via the SAP Support Portal.
Note
The primary database system will be restarted during the creation of the disaster recovery setup. The
expected downtime is 30 minutes.
Related Information
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 917]
What to Do in Case of an SAP HANA Database System Failure [page 919]
Disaster Recovery for SAP HANA Database Systems [page 913]
Although you administer the primary database system in the same way as any other SAP HANA database
system, some restrictions apply to the management of the secondary database system.
The following list shows, for each task, what applies to the primary and to the secondary SAP HANA database
system.

Access
● Primary: Access the primary SAP HANA database system via the cockpit or the console client as you do
any other database system.
● Secondary: You cannot access the secondary database system while system replication is running. To test
the disaster recovery setup, you need to stop system replication.

Monitoring
● Primary: Monitor the primary database system as you do any other SAP HANA database system. You can
also monitor the status of the replication to the secondary database system.
● Secondary: You can only monitor OS-related metrics, and only while system replication is running. You can
also monitor the status of system replication via the primary database system.

Update
● Primary: Update the primary database system as you do any other SAP HANA database system. For more
information, see Update Database Systems [page 885].
● Secondary: You can update the secondary database system as any other SAP HANA database system. For
more information, see Update Database Systems [page 885].

Restart
● Primary: Restart the primary database system as you do any other SAP HANA database system. For more
information, see Restart Database Systems [page 887].
● Secondary: You can restart the secondary database system as any other SAP HANA database system. For
more information, see Restart Database Systems [page 887].

Restore
● Primary: Restore the primary database system as you do any other SAP HANA database system. For more
information, see Restore Database Systems [page 995].
● Secondary: You don't need to restore the secondary database system. In case of a failure of the secondary
database system, the data is replicated from the primary database system once the secondary system is
reactivated.

Upsize
● Primary: You can only upsize the primary database system once you've upsized the secondary database
system and contacted SAP Support to transfer its metadata to the primary system. For more information,
see Upsize Database Systems [page 890].
● Secondary: You can upsize the secondary database system as any other SAP HANA database system. See
Upsize Database Systems [page 890]. You must upsize the secondary database system before the primary
database system.

Edition Change
● Primary: Change the edition of the primary database as you do for any other SAP HANA database system.
For more information, see Change the Database System Edition [page 891]. Note: Quota must be available
in the region of the primary database system.
● Secondary: Change the edition of the secondary database as you do for any other SAP HANA database
system. For more information, see Change the Database System Edition [page 891]. Note: Quota must be
available in the region of the secondary database system.

Installing additional SAP HANA components
● Primary: Install SAP HANA components on the primary database system as you do on any other SAP
HANA database system. For more information, see Install SAP HANA Components [page 893]. Note:
Installed SAP HANA components are replicated only once during the system replication setup. If you want
to install other SAP HANA components after the system replication has been set up, you'll need to install
them on both the primary and the secondary database system.
● Secondary: You can install SAP HANA components on the secondary database system as on any other
SAP HANA database system. For more information, see Install SAP HANA Components [page 893].
Restriction: You cannot install XS-based applications on the secondary database system; XS-based
applications are automatically replicated from the primary system. You also cannot install SAP Streaming
Analytics on a database system that is in disaster recovery mode.

Uninstalling additional SAP HANA components
● Primary: Uninstall SAP HANA components on the primary database system as you do on any other SAP
HANA database system. For more information, see Uninstall SAP HANA Components [page 895].
● Secondary: You can uninstall SAP HANA components on the secondary database system as on any other
SAP HANA database system. For more information, see Uninstall SAP HANA Components [page 895].

Deletion
● Primary: Delete the primary database system as any other database system. For more information, see
Delete Database Systems [page 892]. Note: To delete the primary database system, you need to delete the
secondary database system first.
● Secondary: You can delete the secondary database system as any other SAP HANA database system. For
more information, see Delete Database Systems [page 892]. System replication stops before the
secondary database system is deleted. The primary system remains up and running while the secondary
system is deleted.
Related Information
Setting Up Disaster Recovery for SAP HANA Database Systems [page 915]
What to Do in Case of an SAP HANA Database System Failure [page 919]
Disaster Recovery for SAP HANA Database Systems [page 913]
If there is a disaster affecting the primary region, the secondary database system, which is hosted in a different
center, is promoted to the role of the primary database system.
There is no action required from you if a disaster affects the primary region and causes the failure of the
primary database system.
● SAP constantly monitors your database systems and will promote your secondary database system as
soon as possible.
● If you are in doubt or if you have further questions, you can report an incident on the SAP Support Portal.
For more information, see Getting Support [page 2531].
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 917]
Setting Up Disaster Recovery for SAP HANA Database Systems [page 915]
Disaster Recovery for SAP HANA Database Systems [page 913]
Use the set of console client commands provided for managing database systems in the Neo environment.
Related Information
An overview of the different tasks you can perform to administer databases in the Neo environment.
Use the cockpit to create databases in your subaccount in the Neo environment, and set properties, such as
the database size.
Context
In the cockpit, you can create databases at the subaccount and the database system level. The procedures
listed below describe how to create a database at the subaccount level.
There is a limit to the number of databases you can create, and you'll see an error message when you reach
that number.
Create a database user and assign the administrator role to the new database user. The administrator role lets
you perform administrative tasks with your database in the Neo environment.
Context
Related Information
As a subaccount administrator, you can use the database user feature provided in the cockpit to create your
own database administration user for your SAP HANA XS database in the Neo environment.
Context
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform.
Caution
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases and Schemas in the navigation area.
You see all databases that are available in the subaccount, along with their details, including the database
type, version, memory size, state, and the number of associated databases.
3. Select the relevant SAP HANA XS database.
4. In the Development Tools section, click Database User.
A message confirms that you do not yet have a database user.
5. Choose Create User.
Your user and initial password are displayed. Change the initial password when you first log on to an SAP
HANA system, for example, via the SAP HANA Web-based Development Workbench.
Note
○ Your database user is assigned a set of permissions for administering the SAP HANA database
system, which includes HCP_PUBLIC, and HCP_SYSTEM. The HCP_SYSTEM role contains, for
example, privileges that allow you to create database users and grant additional roles to your own
and other database users.
○ You also require specific roles to use the SAP HANA Web-based tools. For security reasons, only
the role that provides access to the SAP HANA Web-based Development Workbench is assigned as
default.
6. To log on to the SAP HANA Web-based Development Workbench and change your initial password now
(recommended), copy your initial password and then close the dialog box.
You do not have to change your initial password immediately. You can open the dialog box again later to
display both your database user and initial password. Since this poses a potential security risk, however,
you are strongly advised to change your password as soon as possible.
Caution
You are responsible for choosing a strong password and keeping it secure. If your user is blocked or if
you've forgotten the password of your user, another database administration user with USER_ADMIN
privileges can unlock your user.
Next Steps
● Tip
There may be some roles that you cannot assign to your own database user. In this case, we
recommend that you create a second database user (for example, ROLE_GRANTOR) and assign it the
HCP_SYSTEM role. Then log onto the SAP HANA system with that user and grant your database user
the roles you require.
● In the SAP HANA system, you can now create database users for the members of your subaccount and
assign them the required developer roles.
● To use other SAP HANA tools, such as the SAP HANA cockpit or the SAP HANA XS Administration Tool,
you must first assign yourself access to them. See Assign Roles Required for the SAP HANA XS
Administration Tool [page 1591].
Related Information
Create a Database Administration User for SAP HANA Tenant Databases [page 923]
Create a Database Administration User for SAP ASE Databases
You use the SYSTEM user to create a database administration user with SAP HANA cockpit in the Neo
environment.
Prerequisites
You have enabled the Web Access switch for your tenant database. For more information, see Create SAP
HANA Tenant Databases.
You specify a password for the SYSTEM user when you create your SAP HANA tenant database. You use the
SYSTEM user to log on to SAP HANA cockpit and create your own database administration user.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability
to create other database users, access system tables, and so on. A database-specific SYSTEM user exists in
every database of a tenant database system. To ensure that the administration tool SAP HANA cockpit can be
used immediately after database creation, the SYSTEM user is automatically given several roles the first time
the SAP HANA cockpit is opened with this user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected subaccount are listed with their ID, type, version, and related
database system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform
further actions, for example, delete the database.
Log on to the SAP HANA cockpit with the following credentials:
○ User: SYSTEM
○ Password: The password you've defined for the SYSTEM user when creating your tenant database.
A message appears, telling you that you do not have the required roles.
6. Choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
8. Choose Manage Roles and Users.
9. Expand the Security node.
10. Open the context menu for the Users node and choose New User.
11. On the User tab, provide a name for the new user.
12. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
According to the SAP HANA default password policy, the password must start with a letter and only
contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9'). If you've changed the
default policy, other password requirements might apply.
The new database user is displayed as a new node under the Users node.
14. Assign your user the necessary administrator roles and privileges by going to the Granted Roles section
and choosing the + (Add Role) button. For example, to allow your administration user to create new users
and assign roles and privileges to them, add the USER ADMIN privilege. For more information, see System
Privileges in the SAP HANA Security Guide.
15. Choose OK.
16. Save your changes.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to
work with SAP HANA Web-based Development Workbench by logging out from SAP HANA cockpit
first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before
you continue to work with the SAP HANA Web-based Development Workbench, where you need to log
on again with the new database user.
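The default password policy quoted in the note for step 12 can be expressed as a simple check. The sketch below covers only the rules stated there; the actual default policy also enforces requirements not repeated in the note, such as a minimum length, and a changed policy may differ.

```python
import re

# Rules as stated in the note: start with a letter; letters and digits only.
_STATED_RULES = re.compile(r"^[A-Za-z][A-Za-z0-9]*$")

def matches_stated_rules(password: str) -> bool:
    """Check a candidate password against the rules quoted in the note."""
    return bool(_STATED_RULES.match(password))

ok = matches_stated_rules("Abcd1234")
bad_start = matches_stated_rules("1Abcdefg")   # starts with a digit
bad_char = matches_stated_rules("Abcd_1234")   # underscore not allowed
```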
Recommendation
We recommend that you create more than one database administration user. If one database administration
user is locked or if the password needs to be reset, only another administration user can unlock this user and
reset the password.
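For reference, the cockpit steps above correspond to standard SAP HANA SQL statements. The following is an illustrative sketch with placeholder names and password, not a verbatim transcript of what the cockpit executes:

```sql
-- Placeholder name and password; replace with your own values.
CREATE USER ADMIN_USER PASSWORD "Abcd1234";

-- Lets ADMIN_USER create users and grant roles and privileges (USER ADMIN system privilege).
GRANT USER ADMIN TO ADMIN_USER;
```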
Next Steps
You can use the newly created database administration user to create database users for the members of your
subaccount and assign them the required developer roles.
Establish a data source binding between your applications and the SAP HANA database in the Neo
environment using the console client or the SAP Cloud Platform cockpit.
You can also bind a database to applications that are owned by subaccounts other than the one in which the
database is deployed, but these subaccounts first need permission to use the database. For more information,
see Sharing Databases with Subaccounts [page 935].
When you create a binding between an application and an SAP HANA XS or SAP HANA tenant database, you
specify a database user, password, and a database ID. The database user you specify determines which
database schema is assigned to the application as its default. The default name of the database schema is the
same as the name of the database user, who is also referred to as the schema owner. By default, only the
schema owner has permission to access data stored in a schema.
As shown below, you can use different database users to bind applications to different schemas.
To bind several applications to the same schema, specify the same database user each time you create the
binding.
The application uses the database user's default schema, but since a database user may have access to more
than one schema, it could potentially use any of these schemas. For more information on using non-default
schemas for bindings, see Use Non-Default SAP HANA Database Schemas for Application Bindings [page
931].
Recommendation
We recommend that you use a database user’s default schema. Using non-default SAP HANA database
schemas for application bindings requires expert SAP HANA database knowledge.
Before you create the binding, create a technical user as described in Disable the Password Lifetime Handling
for a New Technical SAP HANA Database User [page 927].
When you create a data source binding in the Neo environment, you specify a technical database user that
determines to which SAP HANA XS or SAP HANA tenant database schema the application is bound. You need
to prevent the system from asking you to change the initial password of that user. Otherwise, the application
may not start correctly.
Prerequisites
● Deploy a productive SAP HANA XS or SAP HANA tenant database in your account.
● Create a database administrator user for that database. For more information, see Creating a Database
Administration User [page 921].
● Assign the following roles to your database administration user: sap.hana.ide.roles::CatalogDeveloper,
sap.hana.ide.roles::Developer, and sap.hana.ide.roles::EditorDeveloper.
Note
The procedure below creates a new technical database user. If you've already created a user that you want
to specify for the binding, follow the instructions in step 9.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas.
3. Select the SAP HANA database for which you would like to create a binding.
4. In the Development Tools section, choose the SAP HANA Web-based Development Workbench link.
5. Log in with the credentials of your existing database administrator user.
6. Choose Catalog.
7. Choose the SQL button.
8. Enter the following command, providing a username and password for your new user:
Note
To use this user only for the application binding, you don't need to assign any roles.
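The exact command belongs to the original document's listing, which is not reproduced here. As an illustrative sketch, a statement of the kind described in step 8 and in the Results section (user name and password are placeholders; NO FORCE_FIRST_PASSWORD_CHANGE and DISABLE PASSWORD LIFETIME are standard SAP HANA user options) could look like this:

```sql
-- Placeholder name and password; replace with your own values.
CREATE USER TECH_USER PASSWORD "Abcd1234" NO FORCE_FIRST_PASSWORD_CHANGE;
ALTER USER TECH_USER DISABLE PASSWORD LIFETIME;
```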
Results
You have disabled the password lifetime check of a new SAP HANA database user so that this user's password
never expires.
Note
If you'd like to change the password of that user after you've created the binding, do the following:
4. Delete the existing data source binding and create a new one specifying the new credentials of your
database user. See Bind Databases Using the Cockpit [page 929] or Bind Databases Using the
Console Client [page 930].
5. Restart your application. See Start and Stop Applications [page 2181].
Next Steps
You can now use the database user to create the data source binding:
Related Information
Use Non-Default SAP HANA Database Schemas for Application Bindings [page 931]
SAP HANA Security Guide
You can bind a database to your Java application in the SAP Cloud Platform cockpit in the Neo environment.
Prerequisites
● Deploy a Java application in your subaccount. See Deploying and Updating Applications [page 1453].
● (Enterprise Accounts) Install a database system in your subaccount. See Install Database Systems [page
884].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose one of the following options:

By database:
1. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas, then select the relevant database.
2. Choose Data Source Bindings. The overview lists all Java applications that the specified database is currently bound to, as well as the database user used in each case.
3. Choose New Binding.
4. Enter a data source name and the name of the application you want to bind the database to.

Note
Data source names you enter here must match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file.
To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.

By Java application:
1. Choose Applications > Java Applications in the navigation area and select the relevant application.

Note
Data source names are freely definable but need to match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file.
To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.

5. In the Database ID field, enter the database to which the application should be bound.
6. Provide the credentials of the database user.
7. Select the Verify credentials checkbox to verify the validity of the credentials.
8. Save your entries.
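The data source name entered in the binding corresponds to a JNDI resource reference declared in the application's web.xml. As a sketch, assuming a hypothetical data source named jdbc/myds:

```xml
<!-- Hypothetical web.xml fragment: the res-ref-name must match the
     data source name entered when creating the binding -->
<resource-ref>
    <res-ref-name>jdbc/myds</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>
```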
Next Steps
Once the binding is created, you can start your application. To do so, navigate to Applications > Java Applications and select the Start icon for your application.
To unbind a database from an application, choose the Delete icon next to the binding. The application
maintains access to the database until it is restarted.
You can bind a database to your Java application using the bind-db command in the Neo environment.
Prerequisites
● Deploy a Java application in your subaccount. See Deploying and Updating Applications [page 1453].
● Set up the console client. See Set Up the Console Client [page 1412].
Procedure
1. Open the command window in the <SDK>/tools folder and execute the bind-db [page 1945] command,
replacing the values as appropriate:
Example:
In this example, a data source name has not been specified and the application therefore uses the default
data source.
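Such a call could look like the following sketch. All host, subaccount, application, database, and user values here are hypothetical placeholders, and the exact parameter names are documented in the bind-db [page 1945] reference:

```
neo bind-db --host hana.ondemand.com --account mysubaccount \
    --application myapp --id mydbid --db-user myuser
```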
Note
To bind an application to a database in another account, specify an owner account or an access token.
For more information, see bind-db [page 1945] or Sharing Databases with Subaccounts [page 935].
Note
You can unbind your database from the application using the unbind-db [page 2147] command.
Related Information
The database user you specify while creating a binding determines which SAP HANA database schema an
application can access. Typically, the application uses the database user’s default schema, but since a
database user may have access to more than one schema, the application can potentially use any of the non-
default schemas.
Recommendation
We recommend that you work with a database user's default schema. The name of the default schema is the same as the database user, and the schema is created automatically when you create the user. If you require access to additional schemas, you can use non-default schemas.
Caution
Using non-default schemas is error prone and requires greater care with the application code.
The following example shows one scenario for which you might want to use non-default schemas.
An application can access a non-default schema in its program code by adding the schema name as a prefix to
the table name as follows: <schema name>.<table name>
When programming with JPA, add the schema prefix to the table annotation in the JPA entity class.
Example
@Entity
@Table(name = "COMPANYDATA.T_PERSON")
For JDBC, all occurrences of the table names in SQL statements require the schema prefix.
Note
When you retrieve database metadata to check whether a table already exists, you might also need to specify the schema parameter, in particular if you have multiple schemas containing tables with identical names.
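Both points can be sketched in one place. The following is a hypothetical illustration using the COMPANYDATA schema and T_PERSON table from the JPA example above; the actual connection handling is omitted:

```java
// Hypothetical sketch: addressing a non-default schema from JDBC.
public class SchemaPrefixExample {
    public static void main(String[] args) {
        String schema = "COMPANYDATA"; // assumed non-default schema name
        String table = "T_PERSON";

        // Every table name in an SQL statement carries the schema prefix:
        String sql = "SELECT * FROM " + schema + "." + table + " WHERE ID = ?";
        System.out.println(sql);

        // When checking via DatabaseMetaData whether the table exists,
        // pass the schema as the schema pattern (sketch only, not executed here):
        // ResultSet rs = connection.getMetaData()
        //         .getTables(null, schema, table, null);
    }
}
```

The same prefixed name appears in every SQL statement the application issues against the non-default schema.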
Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page 927]
Bind Databases Using the Cockpit [page 929]
Bind Databases Using the Console Client [page 930]
Programming with JPA [page 1501]
Programming with Plain JDBC [page 1548]
Try to solve issues by restarting an SAP HANA tenant database in the Neo environment.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the SAP HANA tenant database
you want to restart. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure,
or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. Select the tenant database to restart.
4. On the Overview page of the tenant database, choose Stop and confirm the dialog.
5. Once the database is stopped, choose Start and confirm the dialog.
Restore your database in the Neo environment from a specific point of time by creating a service request in the
SAP Cloud Platform cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Caution
You will lose all data stored between the time you specify in the New Service Request screen and
the time at which you create the service request. For example, if you create a restore request at
3 p.m. to restore your database to 9 a.m. on the same day, all data stored between 9 a.m. and 3 p.m.
will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA / SAP ASE Service Requests , then choose the Display icon to find the
template for opening a ticket at any time.
f. Choose Close.
4. Open SAP Support Portal .
5. Choose Report an Incident.
Results
You have created a request for restoring a database and sent the request to SAP Support for processing. As
soon as your database is restored, the state of your request will be set to Finished in the cockpit and the
incident you created will be set to Completed. You can see the state of your request in the cockpit by navigating
to SAP HANA / SAP ASE Service Requests . The state is displayed next to your service request. In the
meantime, SAP Support might contact you in case they need further clarification. You will be notified by e-mail
if you need to take any further action.
Note
Your database is available to all users immediately after the restore has completed successfully.
Note
To cancel your restore request, go to SAP HANA / SAP ASE Service Requests, choose your restore
request, and select the Delete icon. Note that your request can only be canceled if it has the state New.
Share databases that are provisioned in a subaccount with other subaccounts in the Neo environment.
When you provision a database in an SAP Cloud Platform subaccount, only the current subaccount has access to it. You can
change this by giving other subaccounts controlled access to productive databases that are owned by a
different subaccount. You can also allow other subaccounts to bind their Java applications to a database in a
different subaccount.
Prerequisites
● You have an enterprise account. For more information, see Enterprise versus Trial Accounts.
● Your subaccounts are located in the same region. For more information, see Regions [page 11].
Procedure
Sharing Databases in the Same Global Account [page 936]
This method allows you to give a subaccount permission to use a database that is owned by a different subaccount. You can add and revoke this permission using the cockpit or the console client. See Managing Access Permissions [page 938].

Restriction
The subaccount providing the permission and the subaccount receiving the permission must be part of the same global account. For more information on global accounts, see Accounts [page 8].

The subaccount receiving the permission can bind its applications or open a tunnel to the database in the different subaccount, or both.

Sharing Databases with Other Subaccounts [page 943]
This method allows you to give any subaccount permission to use a database that is owned by a different subaccount that is not part of your global account. You can add and revoke this permission using the console client. See Managing Access to Databases for Other Subaccounts [page 945].

The subaccount receiving the permission uses an access token to bind a Java application or to open a tunnel to a database in the other subaccount.
You can share databases that have been provisioned in a subaccount with other subaccounts of your global account in the
Neo environment.
Note
The following explanations apply only to subaccounts that belong to the same global account. To share a
database with a subaccount that is not part of your global account, see Sharing Databases with Other
Subaccounts [page 943].
You can give subaccounts controlled access to a database owned by another subaccount by adding a
permission for the subaccounts requesting access. Depending on the type of permission you provide, the
owners of the subaccounts receiving the permission can bind their applications to the database or open a tunnel to
the database [page 964] that is owned by another subaccount.
To give access permissions to other subaccounts in your global account, log in to the subaccount in which the
database you want to share is provisioned. Then use the SAP Cloud Platform cockpit or the console client to
give permissions to other subaccounts. Subsequently, owners of the subaccounts receiving the permission can
see the database listed in the cockpit and in the console client, and use it in accordance with the permissions
given.
The table below lists the tasks and the person responsible for sharing databases with other subaccounts in the
same global account:
Add New Access Permissions [page 938]: Administrator in the subaccount that owns the database. Console command: grant-db-access [page 2029]
Revoke Access Permissions [page 941]: Administrator in the subaccount that owns the database. Console command: revoke-db-access [page 2111]
Bind Applications to Databases in the Same Global Account [page 942]: Member of the subaccount that has requested permission to use a database owned by another subaccount. Console command: bind-db [page 1945]
Open Database Tunnels [page 964]: Member of the subaccount that has requested permission to use a database owned by another subaccount. Console command: open-db-tunnel [page 2093]
Subaccounts A, B, and C are all part of the same global account. An SAP HANA or SAP ASE database is provisioned in all three subaccounts.
Three Java applications have been deployed in subaccount C. Java application 3 is bound to the database in
subaccount C. To bind Java application 1 to the database in subaccount A, a member of subaccount A provides
subaccount C with a permission for data source bindings. In addition, a member of subaccount B gives
subaccount C the permission to open a tunnel to the database in subaccount B.
After the permissions have been given, members of subaccount C can see the databases owned by subaccount
A and B in the console client and in the cockpit. As shown in the picture below, subaccount C binds two of its
Java applications to the database in subaccount A. The permission for data source bindings provided to
subaccount C by subaccount A is not restricted to a single application. All members of subaccount C can bind
multiple Java applications to the database in subaccount A. Due to the permission for opening database
tunnels provided to subaccount C by subaccount B, all members of subaccount C can also open a tunnel to the
database in subaccount B.
As a subaccount member with the administrator role, you can add, change, and revoke access permissions for
subaccounts in your global account by using the cockpit or the console client in the Neo environment.
Caution
To share a database with a subaccount that is not part of your enterprise account, follow the steps in
Sharing Databases with Other Subaccounts [page 943].
Related Information
You use the cockpit or the console client in the Neo environment to create a new access permission, allowing a
subaccount to use a database that is owned by another subaccount in the same global account.
Prerequisites
● Create the database you want to share in a subaccount that belongs to an enterprise account. See
Creating Databases [page 921].
● You are assigned the administrator role in that subaccount.
● (For the console command only) Set up the console client. See Set Up the Console Client [page 1412] and
Using the Console Client [page 1928].
As a subaccount member with the Administrator role, you use the cockpit or the console client to give
subaccounts permission to use a productive database that is owned by another subaccount.
Restriction
The subaccount providing the permission to use the database and the subaccount receiving the permission
must be part of the same global account.
Procedure
Using the Cockpit
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database you would like to share. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
3. Choose the required database.
4. In the navigation area, choose Permissions.
5. Choose New Permission, then do the following:
1. Select the subaccount to receive permission to use the database.
2. Choose the permission type by selecting TUNNEL, BINDING, or both.
3. Choose Save.

Using the Console Client
1. Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see grant-db-access [page 2029].
2. (Optional) Check that the permission has been given successfully by entering the following command:
Note
For an example, see list-db-access-permissions [page 2064].
Results
You have given a subaccount permission to use a database that is owned by another subaccount in the same
global account. In the subaccount that owns the database, the Shared icon appears in the Databases &
Schemas list in the cockpit next to all databases that can be used by other subaccounts.
Related Information
Use the cockpit to change the type of an existing access permission in the Neo environment.
Prerequisites
● You are assigned the administrator role in the subaccount that owns the database.
● Give a subaccount permission to use a database that is owned by another subaccount. The subaccount
providing the permission and the subaccount receiving the permission must be part of the same global
account. See Add New Access Permissions [page 938].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database for which you would
like to change permissions. For more information, see Navigate to Global Accounts and Subaccounts
[AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
3. Choose the required database.
4. Choose the Edit permission icon for an existing permission.
5. Change the available permission options as required. For example, to change the permission type from
binding to tunnel, select TUNNEL and unselect BINDING.
6. Choose Save.
You use the cockpit or the console client to revoke an access permission for another subaccount in the Neo
environment.
Prerequisites
● You are assigned the administrator role in the subaccount that owns the database.
● Give a subaccount permission to use a database that is owned by another subaccount. The subaccount
providing the permission and the subaccount receiving the permission must be part of the same global
account. See Add New Access Permissions [page 938].
● (For the console command only) Set up the console client. See Set Up the Console Client [page 1412] and
Using the Console Client [page 1928].
Procedure
Using the Cockpit
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database for which you would like to revoke permissions. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
3. Choose the required database.
4. In the navigation area, choose Permissions.
5. Choose the Delete icon next to the subaccount that you want to revoke the permission for.

Caution
Choosing the Delete icon revokes all access permissions for this subaccount. To change the type of permission for a subaccount, for example from Tunnel to Binding, see Change Access Permission Types [page 940].

6. Choose OK.

Using the Console Client
Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see revoke-db-access [page 2111].
Related Information
You use the cockpit or the console client in the Neo environment to bind a Java application that you deployed in
one subaccount to a database that is owned by another subaccount.
Prerequisites
● Deploy a Java application to SAP Cloud Platform. See Deploying and Updating Applications [page 1453].
● (For the console commands only) Set up the console client. See Set Up the Console Client [page 1412] and
Using the Console Client [page 1928].
● The subaccount that owns the database and the subaccount in which the Java application has been
deployed must be part of the same global account. The subaccount that owns the database has given the
subaccount in which the Java application has been deployed permission to bind the application to the
database. See Managing Access Permissions [page 938].
Procedure
Using the cockpit
In the SAP Cloud Platform cockpit, navigate to the subaccount in which the application you would like to bind has been deployed. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Follow the steps described in Binding SAP HANA Databases to Java Applications [page 926] or Bind Applications to Databases in the Same Global Account [page 942]. When prompted to select the database that you want to bind the application to, select the database that is owned by another subaccount.

Note
To unbind the database from an application, simply delete the binding. The application maintains access to the database until restarted.

Using the console client
Open the command window in the <SDK>/tools folder and enter the command for binding an application to a database in another subaccount (same global account) described in bind-db [page 1945].

Note
To unbind the database from an application, open the command window in the <SDK>/tools folder and enter the following command:
You can share a database that is owned by a subaccount with other subaccounts in the Neo environment.
Note
We recommend that you use this method to share your database with subaccounts that do not belong to
your global account. To share your database in the same global account, see Sharing Databases in the
Same Global Account [page 936].
You can allow a subaccount to access a database that is owned by another subaccount by generating an access
token with the console client. A member of the subaccount requesting access to the database can use the
access token to bind a Java application [page 952] and/or to open a tunnel [page 953] to the database in
question.
Restriction
The access token has the following properties:
● It always applies to one database (and one application, if the permission allows for a data source binding) and is not transferable.
● It has an unlimited validity period.
● (For application bindings only) It can be used for as long as application bindings exist or until the permission is revoked. You can revoke permissions at any time, whether or not the target application has already been bound to the database.
The table below lists the tasks and the person responsible for sharing databases with other subaccounts:
Give Applications in Other Subaccounts Permission to Access a Database [page 946]: Administrator in the subaccount that owns the database. Console command: grant-schema-access [page 2031]
Revoke Database Access Permissions for Applications in Other Subaccounts [page 947]: Administrator in the subaccount that owns the database. Console command: revoke-schema-access [page 2114]
Give Other Subaccounts Permission to Open a Database Tunnel [page 949]: Administrator in the subaccount that owns the database. Console command: grant-db-tunnel-access [page 2030]
Revoke Tunnel Access to Databases for Other Subaccounts [page 951]: Administrator in the subaccount that owns the database. Console command: revoke-db-tunnel-access [page 2113]
Bind Applications to Databases in Other Subaccounts [page 952]: Member of the subaccount that has requested permission to use a database owned by another subaccount. Console command: bind-db [page 1945] (for SAP HANA tenant databases and SAP ASE databases)
Open Tunnels to Databases in Other Subaccounts [page 953]: Member of the subaccount that has requested permission to use a database owned by another subaccount. Console command: open-db-tunnel [page 2093]
Subaccounts A, B, and C are not part of the same global account. An SAP HANA or SAP ASE database is
provisioned in all three subaccounts. Three Java applications have been deployed in subaccount C. Java
application 3 is bound to the database in subaccount C. To bind Java application 1 to the database in
subaccount A, a member of subaccount C requests access permission to the database in subaccount A for
Java application 1. An administrator in subaccount A generates a unique access token for binding Java
application 1 to the database in subaccount A. The administrator also creates a database user with appropriate
roles and privileges and provides the credentials of that user together with the access token to a member of
subaccount C.
As shown in the picture below, the access token provided by subaccount A is used by a member of subaccount
C to bind Java application 1 to the database in subaccount A. The token only applies to Java application 1; it
would not be possible to bind other Java applications in subaccount C to the database in subaccount A. The
access token provided by subaccount B is used by a member of subaccount C to open a tunnel to the database
in subaccount B. All members of subaccount C can open tunnels to the database in subaccount B if they are in
possession of the access token.
Related Information
As a subaccount member with the administrator role, you can manage access to databases for other
subaccounts in the Neo environment.
Caution
To share a database with a subaccount that is part of your global account, we recommend you follow the
steps in Managing Access Permissions [page 938].
Related Information
You can give a Java application in another subaccount permission to access a database in your subaccount in
the Neo environment.
Prerequisites
● The database you would like to share has been created in a subaccount. See Creating Databases [page
921].
● You have the administrator role in that subaccount.
● You have set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client
[page 1928].
Context
To give a Java application permission to access a database in your subaccount, you generate an access token
using the grant-schema-access command. A member of the subaccount in which the application has been
deployed uses the token to create a data source binding.
The access token has the following properties:
● It always applies to one database and one application, and is not transferable
● It has an unlimited validity period
● You can revoke permissions at any time, whether or not the target application has already been bound to
the database
● It can be used for as long as application bindings exist or until the permission is revoked
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
A successfully generated access token (an alphanumeric string) appears on the screen.
Next Steps
To give a Java application in another subaccount access to your database, create a database user and
password and provide these credentials, together with the access token, to a member of the subaccount
receiving the permission.
Related Information
You can revoke the permission to access a database in your subaccount for applications in other subaccounts
in the Neo environment.
Prerequisites
● Give an application in another subaccount permission to use a database in your subaccount. See Give
Applications in Other Subaccounts Permission to Access a Database [page 946].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client [page
1928].
Context
Note
You can revoke the permission to use a database in your subaccount for applications in other subaccounts
at any time, whether or not the applications have already been bound to the database.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
2. To revoke the permission, enter the following command, using the access token obtained in the previous
step:
Caution
We strongly recommend that you delete the database user and password you provided to the other
subaccount requesting the access to your database.
If the access token has already been used to bind the database, revoking the access permission also
unbinds the database. If the application is running, it continues to use the database until it is restarted.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in
step 1 or using the display-schema-info command.
Related Information
You can allow other subaccounts to open a tunnel to a database in your subaccount in the Neo environment.
Prerequisites
● Create the database you want to share in a subaccount. See Creating Databases [page 921].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client [page
1928].
Context
To give another subaccount permission to open a tunnel to your database, create a database user for that
subaccount and provide that user's credentials, together with an access token, to a member of the subaccount
that requested permission to open a database tunnel. This allows this subaccount member to open a database
tunnel to the database in your subaccount. All members of the subaccount receiving the permission can
access the database in your subaccount.
Provide the following information to a member of the subaccount that requested permission to open a
database tunnel:
● To check if the database access has been given successfully, you can view a list of all currently active database access permissions to other subaccounts, which exist for a specified subaccount, by using the list-db-tunnel-access-grants command.
● The token is simply a random string, for example 31t0dpim6rtxa00wx5483vqe7in8i3c1phv759w9oqrutf638l, which remains valid until the provider subaccount revokes it again. You can revoke the database access permission at any point in time using the revoke-db-tunnel-access command. See Revoke Tunnel Access to Databases for Other Subaccounts [page 951].

Note
Only the provider subaccount can revoke the access permission. When you revoke the access permission, we highly recommend that you disable the database user and password created for the access permission.

If a subaccount member has already used the access token and there are open database tunnels, they remain open until they are closed, even though the user has been disabled.
We highly recommend that you create a dedicated database user on the database for each access permission.
Procedure
If the permission has been given successfully, you see the access token. As a database administrator,
create a database user with the needed permissions. Provide the database user and password together
with the access token to a member of the subaccount that has requested permission to open a tunnel to
your database.
Related Information
You can revoke the permission to open database tunnels to an SAP HANA database in your subaccount for
other subaccounts in the Neo environment.
Prerequisites
● Give another subaccount permission to use a database in your subaccount. See Give Other Subaccounts
Permission to Open a Database Tunnel [page 949].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client [page
1928].
Context
Note
You can revoke the permission to use a database in your subaccount for other subaccounts at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
Note
Only the provider subaccount can revoke the access permission. When you revoke the access
permission, we highly recommend that you disable the database user and password created for the
access permission on the database itself and that you close any open sessions on the SAP HANA
database.
You have revoked the permission to open tunnels to a database in your subaccount for other subaccounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in
step 1.
Related Information
To bind applications to productive databases in other subaccounts, you use a remote access token that
indicates that access to the database has been permitted.
Prerequisites
● You have set up the console client. For more information, see Set Up the Console Client [page 1412].
● You have received an access token from the database owner.
Context
When you bind Java applications to the specified database in other subaccounts, you provide a database user
and password and an access token that you have received from the database owner. You can use this token for
as long as application bindings exist, or until the permission is revoked.
The token is not transferable to other applications in your subaccount. The owner subaccount can revoke
access to the database at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
You have bound your application to the database in the other subaccount.
Related Information
To open a tunnel to a database that is owned by another subaccount, you request permission from that
subaccount. If your request is approved, the subaccount that owns the database in question provides you with
an access token, a database user, and password. This allows you to open a tunnel from your subaccount to the
database in the other subaccount.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1412].
● The subaccount that owns the database has given you an access token and a database user and password.
See Give Other Subaccounts Permission to Open a Database Tunnel [page 966].
Context
Once you have received the token and the database credentials, you can open the database tunnel. Use the
access token parameter for the open-db-tunnel command, not the database ID parameter. Then you can
use a database tool of your choice to connect to the database in another subaccount. Log on to the database
with the user and password that you received from the provider. You can then work on the remote database
instance.
Note
All members of the consumer subaccount have permission to access the database in the provider
subaccount.
Procedure
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Delete your database in the Neo environment using the SAP Cloud Platform cockpit.
Procedure
Remember
Delete all bindings to your tenant database using the SAP Cloud Platform cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the applications you bound your
database to. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP
Regions].
2. Choose the application you want to unbind the tenant database from.
3. On the overview page of the applications, choose Configuration Data Source Bindings .
Next Steps
You can delete your tenant database using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Administrator role in the subaccount that owns the tenant database you want to delete.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the tenant database you want to
delete. For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP
Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. Select the database to delete.
4. On the Overview page of the database, choose Delete and confirm the deletion.
In the Neo environment, configure the database connection pool size and other properties when deploying or
updating your application.
Prerequisites
You have bound a database to a Java application and therefore, specified a data source name. For more
information, see .
A connection pool maintains a specific number of open database connections to an application in the main
memory. The default size of the database connection pool is eight, but you can change this number while
deploying or updating an application in the SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
To update a specific data source, use the name of the data source:
-Dcom.sap.persistence.jdbc.connection.pool.<data_source_name>.<property>=<value>
○ Replace <data_source_name> with the data source name you want to use.
Caution
Replace forward slashes with periods when entering the data source name in the JVM
Arguments field, as shown in the example below.
Example
If the name of your data source is jdbc/myds and you want to set the connection pool size to 20, enter
the following:
-Dcom.sap.persistence.jdbc.connection.pool.jdbc.myds.max_active=20
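The slash-to-period rule can be sketched in a small helper. This is a hypothetical convenience method for illustration only; the platform reads only the resulting -D argument:

```java
public class PoolPropertyName {
    // Builds the JVM argument for a data-source-specific pool property.
    // Forward slashes in the data source name become periods, matching the
    // rule for the JVM Arguments field. (Hypothetical helper; not part of
    // the SDK.)
    static String jvmArg(String dataSourceName, String property, String value) {
        String normalized = dataSourceName.replace('/', '.');
        return "-Dcom.sap.persistence.jdbc.connection.pool."
                + normalized + "." + property + "=" + value;
    }

    public static void main(String[] args) {
        // Prints -Dcom.sap.persistence.jdbc.connection.pool.jdbc.myds.max_active=20
        System.out.println(jvmArg("jdbc/myds", "max_active", "20"));
    }
}
```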
For an overview of all database connection pool properties that you can configure, see the following table:
● -Dcom.sap.persistence.jdbc.connection.pool.max_active=<value>
● -Dcom.sap.persistence.jdbc.connection.pool.enabled=<value>
Caution
If you disable the database connection pool,
● -Dcom.sap.persistence.jdbc.connection.pool.max_wait_millis=<value>
● -Dcom.sap.persistence.jdbc.connection.pool.min_evictable_idle_time_millis=<value>
● -Dcom.sap.persistence.jdbc.connection.pool.min_idle=<value>
● -Dcom.sap.persistence.jdbc.connection.statement.enabled=<value>
● -Dcom.sap.persistence.jdbc.connection.statement.max_total=<value>
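How max_active and max_wait_millis interact can be modeled with a minimal sketch. This is an illustrative model using a semaphore, not the platform's actual pool implementation:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolSketch {
    // Minimal model of two of the properties above: max_active caps the
    // number of concurrently borrowed connections, and max_wait_millis
    // bounds how long a borrower blocks waiting for a free one.
    // (Illustrative sketch, not the platform's pool implementation.)
    private final Semaphore slots;
    private final long maxWaitMillis;

    PoolSketch(int maxActive, long maxWaitMillis) {
        this.slots = new Semaphore(maxActive);
        this.maxWaitMillis = maxWaitMillis;
    }

    boolean borrow() throws InterruptedException {
        return slots.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS);
    }

    void release() {
        slots.release();
    }

    public static void main(String[] args) throws InterruptedException {
        PoolSketch pool = new PoolSketch(8, 100); // default max_active is eight
        for (int i = 0; i < 8; i++) {
            pool.borrow(); // first eight borrows succeed immediately
        }
        // A ninth borrower times out after max_wait_millis:
        System.out.println("9th borrow succeeds: " + pool.borrow());
    }
}
```

When all eight slots are taken, further borrowers wait up to max_wait_millis before giving up, which is exactly the wait time the JMX attributes in the next section measure.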
Next Steps
For more information on monitoring JMX attributes, see JMX Attributes for the Database Connection Pool
[page 960].
JMX attributes let you monitor the database connection pool for a started Java application in the Neo
environment.
Note
To monitor the JMX attributes for the database connection pool using the SAP Cloud Platform cockpit,
follow the procedure in Use the JMX Console. Select the com.sap.cloud.jmx PersistenceConnectionPools MBean, then select the data source.
Tip
Collecting stack traces is helpful for analyzing connection leaks in an application, but it also comes at a cost. You can disable it by passing the following system property to the application:
com.sap.persistence.jdbc.connection.pool.stacktraces.enabled=false
Recommendation
To determine if your current database connection pool size is suitable for your scenario, monitor the
AverageConnectionWaitTimeMillis, the MinConnectionWaitTimeMillis, and the
MaxConnectionWaitTimeMillis attributes, all of which provide connection pool performance statistics. If
the values of these attributes don't conform to the accepted values for application performance, you might need to adjust the connection pool size.
Tip
You can configure JMX checks to monitor the performance of your database connection pool and to send
you incident alerts. For example, if you configure a JMX check to monitor the
AverageConnectionWaitTimeMillis attribute, the system sends an alert if the value of the attribute
exceeds its limit. See Configure a JMX Check to Monitor Your Application.
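Reading any MBean attribute over JMX follows the same pattern the console uses for pool attributes such as AverageConnectionWaitTimeMillis. The sketch below reads a standard JVM MBean so it runs anywhere; the pool MBean's exact ObjectName is platform-internal and is deliberately not reproduced here:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxAttributeRead {
    // Generic JMX attribute read: look up the MBean server, address an
    // MBean by ObjectName, and fetch one attribute by name. The platform's
    // pool attributes (e.g. AverageConnectionWaitTimeMillis) are read the
    // same way; a standard JVM MBean stands in here so the sketch runs
    // anywhere.
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("java.lang:type=Threading");
        Object threadCount = server.getAttribute(name, "ThreadCount");
        System.out.println("ThreadCount = " + threadCount);
    }
}
```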
Related Information
JMX Checks
Use the JMX Console
Analyze error warnings that are related to data backups of tenant databases or the system database in the Neo
environment.
Context
If the backup ran into problems, backup-related error messages are shown in the Monitoring tab of the SAP
Cloud Platform cockpit. For more information, see View Monitoring Metrics of a Database System [page 898].
This can be related to memory issues in the SAP HANA database system.
Procedure
To find out why the backup failed, analyze the alert to determine which tenant database or tenant databases, or
whether the system database is affected.
1. If only one or a few tenant databases are affected, try the following:
1. Check the memory limits and the memory usage of the affected tenant databases using the Memory
Usage tab of the SAP Cloud Platform cockpit. If there are memory limits set on the tenant databases,
consider removing or increasing the limits. For more information, see Create SAP HANA Tenant
Databases and View Memory Usage for an SAP HANA Database System [page 896].
2. Connect to the tenant database and check the memory consumption of the tenant databases using
SAP HANA studio or SAP HANA cockpit. For more information, see SAP Note 1999997 .
3. If you cannot connect to the tenant database, restart it, which frees memory and may therefore resolve
memory issues. For more information, see Restart SAP HANA Tenant Databases [page 933].
Tip
If you frequently run into memory-related backup problems, try to find out where they come from and why
your databases consume too much memory. These actions might resolve your issues:
● If there are any tenant databases you don't currently need, stop these databases to free resources.
Restart them only when you need them.
● Delete any unneeded tenant databases.
● If possible, remove data from your databases.
● If possible, move data to another system.
● Resize the database system.
Note
Even after you've fixed the memory issue, you may still see the alert in the cockpit until the next daily backup has been successfully created.
Related Information
Access remote database instances through a database tunnel in the Neo environment, which provides a secure
connection from your local machine and bypasses any firewalls.
Connect to SAP HANA Databases via the Eclipse IDE (Neon) [page 971]
Connect to an SAP HANA single-container (XS) or tenant database system (MDC) using SAP HANA
tools via the Eclipse IDE.
Use the open-db-tunnel command to open a tunnel and to get the connection details required for the
remote database instance. The database tunnel allows you to connect to a remote database instance through a
secure connection.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1412].
Procedure
Note
For more information on the required parameters, see open-db-tunnel [page 2093].
Results
Next Steps
You can now connect to the remote database instance using the connection details you have just obtained.
Note
The database tunnel must remain open while you work on the remote database instance. Close it only when
you have completed the session.
To access data from a productive database in another subaccount, you need the required permissions. From the subaccount providing the permission, you obtain an access token and a database user, which you use to open a tunnel to the database that is owned by that subaccount.
The table below lists the tasks and the person responsible for providing access to the database in another
subaccount:
● Give Other Subaccounts Permission to Open a Database Tunnel [page 966]: performed by an administrator in the subaccount that owns the database, using grant-db-tunnel-access [page 2030].
● Open Tunnels to Databases in Other Subaccounts [page 967]: performed by a member of the subaccount that has requested permission to open a tunnel to a database owned by another subaccount, using open-db-tunnel [page 2093].
● Revoke Tunnel Access to Databases for Other Subaccounts [page 969]: performed by an administrator in the subaccount that owns the database, using revoke-db-tunnel-access [page 2113].
Prerequisites
● Create the database you want to share in a subaccount. See Creating Databases [page 921].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client [page
1928].
Context
To give another subaccount permission to open a tunnel to your database, create a database user for that
subaccount and provide that user's credentials, together with an access token, to a member of the subaccount
that requested permission to open a database tunnel. This allows this subaccount member to open a database
tunnel to the database in your subaccount. All members of the subaccount receiving the permission can
access the database in your subaccount.
Provide the following information to a member of the subaccount that requested permission to open a
database tunnel:
● To check if the database access has been given successfully, you can view a list of all currently active database access permissions to other subaccounts, which exist for a specified subaccount, by using the list-db-tunnel-access-grants command. The token is simply a random string, for example, 31t0dpim6rtxa00wx5483vqe7in8i3c1phv759w9oqrutf638l, which remains valid until the provider subaccount revokes it.
● You can revoke the database access permission at any point in time using the revoke-db-tunnel-access command. See Revoke Tunnel Access to Databases for Other Subaccounts [page 951].
Note
Only the provider subaccount can revoke the access permission. When you revoke the access
permission, we highly recommend that you disable the database user and password created for the
access permission on the database itself and that you close any open sessions on the SAP HANA
database.
If a subaccount member has already used the access token and there are open database tunnels, they remain
open until they are closed, even though the user has been disabled.
Procedure
If the permission has been given successfully, you see the access token. As a database administrator,
create a database user with the needed permissions. Provide the database user and password together
with the access token to a member of the subaccount that has requested permission to open a tunnel to
your database.
Related Information
To open a tunnel to a database that is owned by another subaccount, you request permission from that
subaccount. If your request is approved, the subaccount that owns the database in question provides you with
an access token, a database user, and password. This allows you to open a tunnel from your subaccount to the
database in the other subaccount.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1412].
● The subaccount that owns the database has given you an access token and a database user and password.
See Give Other Subaccounts Permission to Open a Database Tunnel [page 966].
Once you have received the token and the database credentials, you can open the database tunnel. Use the
access token parameter for the open-db-tunnel command, not the database ID parameter. Then you can
use a database tool of your choice to connect to the database in another subaccount. Log on to the database
with the user and password that you received from the provider. You can then work on the remote database
instance.
Note
All members of the consumer subaccount have permission to access the database in the provider
subaccount.
Procedure
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Related Information
You can revoke the permission to open database tunnels to an SAP HANA database in your subaccount for
other subaccounts in the Neo environment.
Prerequisites
● Give another subaccount permission to use a database in your subaccount. See Give Other Subaccounts
Permission to Open a Database Tunnel [page 949].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1412] and Using the Console Client [page
1928].
Context
Note
You can revoke the permission to use a database in your subaccount for other subaccounts at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
Note
Only the provider subaccount can revoke the access permission. When you revoke the access
permission, we highly recommend that you disable the database user and password created for the
access permission on the database itself and that you close any open sessions on the SAP HANA
database.
You have revoked the permission to open tunnels to a database in your subaccount for other subaccounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in
step 1.
Related Information
For continuous delivery and automated tests, the open-db-tunnel command supports a background mode,
which allows a database tunnel to be opened by automated scripts or as part of a Maven build.
Prerequisites
Set up the console client on the CI server. For more information, see Set Up the Console Client [page 1412].
Procedure
To open or close the database tunnel in a Maven build, use the following goals of the SAP Cloud Platform Maven
plug-in:
Tip
Take a look at the following samples delivered with the SAP Cloud Platform SDK:
○ persistence-with-ejb
○ persistence-with-jpa
Each sample includes a test that opens a database tunnel in background mode within the Maven build and
executes some SQL statements.
Related Information
Connect to an SAP HANA single-container (XS) or tenant database system (MDC) using SAP HANA tools via
the Eclipse IDE.
Prerequisites
● You have followed the steps described in Getting Started [page 849].
● You use Eclipse Neon or an older Eclipse version.
Note
Neon is the last Eclipse version that supports this feature. If you use a newer Eclipse version, open a
database tunnel using the SAP Cloud Platform console client. See Open Database Tunnels [page 964].
● You have installed and set up all the necessary tools. For more information, see Install SAP HANA Tools for
Eclipse [page 1569].
For more information about hosts, see Regions and Hosts Available for the Neo Environment [page 14].
5. Enter your SAP Cloud Platform subaccount information:
○ Subaccount name
For more information, see Accounts [page 8].
○ E-mail or user name
○ Password
(If you select Save password, the password for a given user name will be kept in the secure store.)
6. Choose Next.
7. In the SAP HANA Schemas and Databases window, select Databases.
8. Select the database you want to work with.
9. Enter a database user and the password you defined for the user.
Recommendation
If you haven't done so yet, create a database user and assign this user the administrator role to
perform administrative tasks with your database. For more information, see Creating a Database
Administration User [page 921].
Use console client commands to manage your databases in the Neo environment.
Related Information
An overview of the different tasks you can perform to administer database schemas in the Neo environment.
Restriction
You cannot create database schemas on SAP HANA single-container or tenant database systems. You can
only create schemas on shared SAP HANA database systems, which are displayed as HANA (<shared>)
in the SAP Cloud Platform cockpit. The usage of shared databases is restricted to partners of SAP that
purchased the innovation pack for SAP Cloud Platform. For more information, see Innovation Pack for SAP
Cloud Platform .
Connect to SAP HANA Schemas via the Eclipse IDE (Neon) [page 979]
Establish a direct connection to a shared SAP HANA schema via the Eclipse IDE (Neon) using SAP
HANA tools.
Each application deployed on SAP Cloud Platform can be assigned one or more database schemas. A schema
is associated with a particular subaccount and is available solely to applications within this subaccount. A
schema can be bound to multiple applications.
You can create schemas explicitly, assigning to them any name and certain properties, such as a specific
database type. The schema is independent of any application and has to be explicitly bound.
Schemas can also be created automatically for applications. If you have not explicitly bound a schema to an
application when it is deployed and started for the first time, a schema is created and bound implicitly. This is
the fallback behavior on SAP Cloud Platform.
A schema ID is unique within a subaccount. When a schema is created automatically, an ID is also created
based on a combination of the subaccount and application names and the suffix web.
Binding Schemas
Schemas can be bound to applications based on an explicitly named data source or using the default data
source. The main differences are as follows:
You can share a schema between applications by binding the same schema to more than one application. The
following items apply when binding schemas to applications:
● An application’s bindings are based on either named data sources or the default data source. An
application cannot use a combination of the two types of bindings.
● When named data sources are used, binding names must be unique per application.
Applications can also use schemas belonging to other subaccounts if they are explicitly granted access
permission.
Unbinding Schemas
Unbind a schema from an application if the application no longer needs it. It can still be used by other
applications to which it is still bound. Before a schema can be deleted, it must be unbound from all
applications. Schemas can be deleted only if they no longer have any bindings.
If an application is undeployed but was not yet unbound from its schema, the schema is still listed as bound to
the application and remains bound if the application is redeployed.
Deleting Schemas
You should drop a schema when it is no longer required or to redeploy an application from scratch.
Before deleting a schema, explicitly remove any bindings that still exist between the schema and an
application. You can also remove all bindings by enforcing the deletion of the schema.
When using explicitly named data sources to create bindings between schemas and applications, make sure
that the data source names are the same as the JNDI names used in the applications.
Data sources are defined as resources in the web.xml file, or as JTA or non-JTA data sources in the
persistence.xml file in the normal manner. Data sources can be referenced in the application code using a
context.lookup or annotations (@Resource, @PersistenceUnit, @PersistenceContext).
When using explicitly named data sources in the Java EE 6 Web Profile runtime environment, you must create
two additional bindings:
● A binding between the application and schema using a data source named jdbc/
defaultManagedDataSource
● A binding between the application and schema using a data source named jdbc/
defaultUnmanagedDataSource
Related Information
Context
Schemas have properties, such as a database type and database version, and are identified by an ID that is
unique within the subaccount. The schema is independent of any application.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
All schemas available in the selected subaccount are listed with their ID, type, version, and related
database system.
Note
To display a schema’s details, for example, its state and the number of existing bindings, select the relevant schema in the list and click the link on its name.
3. To create a new schema, choose New on the Databases & Schemas page.
4. Enter the following schema details:
○ Schema ID: A schema ID is freely definable but must start with a letter and contain only uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and the special characters '.' and '-'. The actual schema ID assigned in the database isn't the same as the ID you enter here.
○ Database System: HANA (<shared>)
To create schemas on your productive HANA instances, you have to use the HANA-specific tools.
5. Save your entries.
The schema overview shows details about its state, quota used, and the number of existing bindings. You
can perform further actions for the newly created schema, for example, delete it.
Note
To delete a schema, first delete all existing bindings to the schema. The Delete button is only enabled if
a schema has no bindings.
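The schema ID naming rule from step 4 can be expressed as a simple check. This is illustrative only; the cockpit performs the authoritative validation:

```java
import java.util.regex.Pattern;

public class SchemaIdCheck {
    // The documented naming rule: a letter first, then letters, digits,
    // '.' or '-'. (Illustrative only; the cockpit performs the
    // authoritative validation.)
    private static final Pattern SCHEMA_ID =
            Pattern.compile("[a-zA-Z][a-zA-Z0-9.-]*");

    static boolean isValid(String id) {
        return SCHEMA_ID.matcher(id).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("my-schema.v2")); // true
        System.out.println(isValid("1schema"));      // false: starts with a digit
        System.out.println(isValid("my_schema"));    // false: '_' is not allowed
    }
}
```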
Related Information
Context
Bindings are identified by a data source name, which must be unique per application. You can bind the same
schema to multiple applications, and the same application to multiple schemas.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose one of the following options:
By schema 1. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
2. Select the schema for which you want to create a new binding.
The overview shows the schema details, for example, its state, and the number of
existing bindings, and provides access to further actions.
3. In the navigation area, choose Data Source Bindings.
4. Choose New Binding.
5. In the New Binding dialog box, enter a data source name and select the name of the
application to which the schema should be bound. The application must be
deployed in the selected subaccount.
6. Save your entries.
By application 1. In the navigation area, choose Applications Java Applications and select the
relevant application.
○ To create a binding to the default data source, enter the data source name <DEFAULT>.
○ An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to
additional schemas. To use additional schemas, first rebind the application using a named data
source.
○ Data source names are freely definable but need to match the JNDI data source names used in the
respective applications, as defined in the web.xml or persistence.xml file. For more
information, see the example scenarios and Declare JNDI Resource References [page 1536].
Next Steps
An application’s state influences when a newly bound schema becomes effective. If an application is already
running (Started state), it continues to use the old schema until it is restarted. A restart is also required if
additional schemas have been bound to the application.
Note
To unbind a schema from an application, simply delete the binding. The application retains access to the
schema until it is restarted.
Related Information
Establish a direct connection to a shared SAP HANA schema via the Eclipse IDE (Neon) using SAP HANA tools.
Prerequisites
● You use Eclipse Neon or an older Eclipse version. Neon is the last Eclipse version that supports this feature. If you use a newer Eclipse version, open a database tunnel using the SAP Cloud Platform console client. See Open Database Tunnels [page 964].
● You have installed and set up all the necessary tools. For more information, see Install SAP HANA Tools for
Eclipse [page 1569].
● You have created a schema you want to connect to. For more information, see Create Schemas Using the
Cockpit [page 976] or create-schema [page 1971].
Procedure
For more information about hosts, see Regions and Hosts Available for the Neo Environment [page 14].
5. Enter your SAP Cloud Platform subaccount information:
○ Subaccount name
For more information, see Accounts [page 8].
○ E-mail or user name
○ Password
(If you select Save password, the password for a given user name will be kept in the secure store.)
6. Choose Next.
7. In the SAP HANA Schemas and Databases window, choose Schemas.
8. Select the schema you want to work with.
9. Choose Finish.
Change the database property, which determines the database in the Neo environment on which an application
runs.
Context
The default database system is used when schemas are created automatically. This occurs if an application is
started but has not yet been assigned a schema.
● A new application that has not been explicitly assigned a schema uses the default database system in effect when automatic schema creation is triggered, that is, when the application is started for the first time.
● When deploying an application from the Eclipse IDE, an application is deployed and started in one step.
● An application that is already using a default database system is not affected by any changes. Its schema
remains associated with the default database system selected when the application was created.
Procedure
1. In the SAP Cloud Platform cockpit, navigate the list of subaccounts available to you. For more information,
see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose the (edit) icon on the tile for the subaccount you want to change.
3. Select the new default database system, and save your changes.
Related Information
Perform the most typical use case scenarios in the Neo environment either in the cockpit or by using the
console client.
The schema management scenarios outline the steps involved for the most typical use cases in the Neo
environment. The scenarios use the console client together with the schema commands provided by the SAP
HANA service. Alternatively, you can perform the scenarios from the cockpit.
For the sake of simplicity, the example scenarios use JDBC and web.xml to illustrate the definition of data
sources. Depending on your application and runtime environment, you can use other options, such as the
persistence.xml file and annotations.
Related Information
Prerequisites
Set up the console client. For more information, see Set Up the Console Client [page 1412].
Context
In this scenario, an application has been deployed with the default database type assigned to the subaccount.
Use the unbind-schema command to remove the schema already assigned to the application, then create a schema with the database type you want to use (create-schema) and bind it to the application (bind-schema). The following example data is used:
● The application myapp runs on the SAP MaxDB database and is bound to a schema that was created
automatically. The application has been stopped.
● Runtime environment: Java Web
● Data source name: jdbc/dshana
● Schema: myhana
● User: test
● Subaccount: mysubaccount
● Host: hana.ondemand.com (replace as necessary, for example, with hanatrial.ondemand.com for trial
accounts)
Procedure
1. In the application's web.xml file, update the resource definition by replacing the default data source <res-ref-name>jdbc/DefaultDB</res-ref-name>, or similar, with the named data source <res-ref-name>jdbc/dshana</res-ref-name>:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Adjust the JNDI lookup in the application to use the data source you just defined in the web.xml file. You
will later bind the application to the myhana schema using this data source:
# JNDI lookup
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
Example output:
Schema ID DB Type
myhana hana
5. Unbind the current schema from the application. Since the application has a default binding, you do not
need to specify a data source name:
You see a message that the schema has been successfully unbound.
6. Since you have made code changes, redeploy the application.
7. Bind the schema to the application using the data source you defined in the application. Make sure that the
name is identical to that in the web.xml file and in the JNDI lookup (jdbc/dshana):
You see a message that the schema has been successfully bound.
8. (Optional) Verify the results using the following command:
Example output:
Related Information
Regions and Hosts Available for the Neo Environment [page 14]
Multiple schemas allow you to use multiple databases in parallel. You might, for example, want to use SAP ASE
for normal transaction processing and the SAP HANA database for analytics.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1412].
Context
In this scenario, you use the create-schema command to create two schemas, one associated with SAP ASE
and the other with the SAP HANA database. You then use the bind-schema command to bind both schemas
to the application. The following example data is used:
● The application is named myapp and has not yet been deployed.
● Runtime environment: Java Web
● Schemas: myhana (SAP HANA database) and myase (SAP ASE database)
● Data source names: jdbc/dshana and jdbc/dsase
● User: test
● Subaccount: mysubaccount
● Host: hana.ondemand.com (replace as necessary, for example, with hanatrial.ondemand.com for trial
accounts)
Procedure
1. In the application's web.xml file, add resource definitions for the two data sources:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/dsase</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Add JNDI lookups in the application code using the two data sources. This allows the application to access
both the myhana and myase schemas:
# JNDI lookup
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
...
Example output:
Schema ID DB Type
myhana hana
myase ase
7. Bind the schemas to the application using the data source names jdbc/dshana and jdbc/dsase:
In both cases, you see a message that the schema has been successfully bound.
8. (Optional) Verify the results with the following command:
Example output:
Related Information
Regions and Hosts Available for the Neo Environment [page 14]
You can migrate from an auto-created schema by unbinding the schema currently assigned to your application
and rebinding it to the required one. This step is necessary, for example, to use more than one database in
parallel.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1412].
Context
In this scenario, you migrate from the auto-bound schema by unbinding and then rebinding the same schema.
This allows you to retain the schema and all its artifacts. The following example data is used:
Procedure
1. Open the command window in the <SDK>/tools folder and use the list-application-datasources
command to obtain the name of the schema currently assigned to the application (you need the schema ID
in step 3):
Example output:
2. Unbind the current schema from the application. Since the application has a default binding, you do not
need to specify a data source name:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
Note
If you prefer, you can change this name, but then you will also need to change the JNDI lookup in the
application code and redeploy the application.
4. Rebind the application to the same schema using the data source name from the previous step, for
example, jdbc/DefaultDB:
Example output:
6. The application continues to use the old schema and default data source until it is restarted. Restart the
application so that it uses the new binding to the schema.
Related Information
Allow applications that belong to other subaccounts controlled access to the schemas of your subaccount in
the Neo environment.
Schemas can normally be used only by applications within the same subaccount in the Neo environment. You
can, however, allow applications belonging to other subaccounts controlled access to your subaccount’s
schemas. The other subaccount might be one of your own subaccounts or a third-party subaccount.
The access token is used by the consumer subaccount to bind the schema to the application. It can be used
only once. An unbind operation does not require an access token.
An access token:
● Always applies to one schema and one application and is not transferable
● Has an unlimited validity period
● Can be revoked at any time, regardless of whether the schema has already been bound to the target
application
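The token itself is opaque. Its shape, a long random alphanumeric string like the example shown earlier in this guide, can be mimicked as follows. This is an illustrative sketch only; the platform's actual generation scheme is internal:

```java
import java.security.SecureRandom;

public class TokenSketch {
    // Mimics the shape of an access token: an opaque random lowercase
    // alphanumeric string. (Illustrative only; the platform's actual
    // generation scheme is internal and may differ.)
    static String randomToken(int length) {
        final String alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
        SecureRandom random = new SecureRandom();
        StringBuilder token = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            token.append(alphabet.charAt(random.nextInt(alphabet.length())));
        }
        return token.toString();
    }

    public static void main(String[] args) {
        System.out.println(randomToken(50));
    }
}
```

The point of the opaque format is that the token carries no meaning on its own; it is valid only because the provider subaccount recorded it when granting access.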
Related Information
As a subaccount member who is assigned the Administrator or Developer role, you can grant applications in
other subaccounts access to any of your subaccount’s schemas in the Neo environment.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1412].
Context
To allow access, generate a one-time access token that permits the requesting application to access your
schema from its subaccount.
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
A successfully generated access token (an alphanumeric string) appears on the screen.
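Under the same assumptions as elsewhere in this guide (placeholder names, illustrative parameter flags, and a stub neo function so the sketch runs standalone), the grant might look like:

```shell
neo() { echo "neo $*"; }   # stub; use the real neo tool from <SDK>/tools in practice

# Grant the application "theirapp" in the consumer subaccount "othersub"
# access to the schema "myschema" owned by your subaccount. All names and
# flags are illustrative; check 'neo help grant-schema-access'.
neo grant-schema-access -h hana.ondemand.com -a mysubaccount -u p1234567 \
  --id myschema --application othersub:theirapp
# On success, the one-time access token (an alphanumeric string) is printed.
```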
Next Steps
The generated access token can now be used by the consumer subaccount to bind the schema to the
application.
● When the target application binds the schema to which it has been granted access, a new technical
database user is created automatically (name: DEV_<guid>). This user has access permission only for the
specified schema (technical name: NEO_<guid>).
● To allow the application to access other schemas or packages on the productive SAP HANA instance, you
can grant the technical database user additional privileges ( Security Users DEV_<guid> ).
● The technical database user is not the same as a normal database user and is provided purely as a
mechanism for enabling schema access.
Related Information
To bind a schema contained in another subaccount to your application, use a remote access token that
indicates that access to this specific schema has been permitted in the Neo environment.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1412].
Context
To prevent misuse, the remote access token can be used only once and cannot be transferred to other
applications in your subaccount. The owner subaccount can revoke access to the schema at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
Since the schema does not belong to your subaccount, the schema ID is prefixed with the owner
subaccount’s name (subaccount:schemaID), as shown in the example output below:
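The consumer side of this procedure can be sketched as follows; names and flags are placeholders, the --access-token parameter is an assumption based on the description above, and a stub neo function makes the sketch self-contained.

```shell
neo() { echo "neo $*"; }   # stub; use the real neo tool from <SDK>/tools in practice

# 1. List the schemas visible to your subaccount; a granted schema appears
#    with the owner prefix, for example ownersub:myschema.
neo list-schemas -h hana.ondemand.com -a consumersub -u p7654321

# 2. Bind the remote schema to your application with the one-time token.
neo bind-schema -h hana.ondemand.com -a consumersub -b myapp -u p7654321 \
  --access-token a1b2c3d4e5   # token value is illustrative
```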
A permission grant applies to a specific schema and specific application and is identified by an access token. It
is valid until it is revoked by a member of the owner subaccount in the Neo environment.
Context
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all grants
for the specified schema:
Example output:
2. To revoke the grant, enter the following command, using the access token obtained in the previous step:
If the access token has already been used to bind the schema, revoking the access permission also
unbinds the schema. If the application is running, it continues to use the schema until it is restarted.
3. Optionally check that the access token has been revoked by listing all grants again as described in step 1 or
using the display-schema-info command.
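The revocation steps above can be sketched as follows; names and parameter flags are illustrative (verify with neo help), and the stub neo function only makes the sketch self-contained.

```shell
neo() { echo "neo $*"; }   # stub; use the real neo tool from <SDK>/tools in practice

# 1. List all grants (with their access tokens) for the schema.
neo list-schema-access-grants -h hana.ondemand.com -a mysubaccount \
  -u p1234567 --id myschema

# 2. Revoke the grant identified by the access token from step 1.
neo revoke-schema-access -h hana.ondemand.com -a mysubaccount \
  -u p1234567 --access-token a1b2c3d4e5

# 3. Optionally confirm that the grant is gone.
neo display-schema-info -h hana.ondemand.com -a mysubaccount \
  -u p1234567 --id myschema
```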
Use the set of console client commands provided by the SAP HANA service for managing schemas in the Neo
environment.
Related Information
Use console client commands for different tasks in the Neo environment.
1.11.9.2.6.4.7 Delete
Backup and recovery of data stored in your database and database system are performed by SAP.
Backup
For databases in enterprise accounts, a full data backup is done once a day. Log backup is triggered at least
every 30 minutes. The corresponding data or log backups are replicated to a secondary location at least every
24 hours. Backups are kept (complete data and log) on a secondary location for the last 14 days. Backups are
deleted afterwards.
Recovery
Since backups are kept on a secondary location for 14 days, recovery is only possible within a time frame of 14
days.
Restoring the system from files on a secondary location might take some time depending on the availability.
For more information, see Restore Database Systems [page 995] and Restore Databases [page 996].
Perform a point-in-time restore in the Neo environment by creating a service request in the SAP Cloud Platform
cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Service Requests .
3. Choose New Request and do the following:
a. Choose Database System.
Caution
If you restore a database system, all databases within this system are restored. To restore a single
database only, see Restore Databases [page 996].
Caution
You will lose all data stored in the databases in the database system between the time you specify
in the New Service Request screen and the time at which you create the service request. For example,
if you create a restore request at 3pm to restore your database system to 9am on the same day, all
data stored between 9am and 3pm is lost.
d. Choose Save.
You see a template for opening an incident in the SAP Support Portal.
e. From the template, select the text between the two dashed lines and copy it to your clipboard.
Tip
Navigate to SAP HANA / SAP ASE Service Requests and choose the Display icon next to
your request to find the template for opening a ticket at any time.
f. Choose Close.
4. Open SAP Support Portal .
5. Choose Report an Incident.
Results
As soon as your database system is restored, the state of your request is set to Finished in the cockpit and the
incident is set to Completed. You can see the state of your request in the cockpit by navigating to SAP
HANA / SAP ASE Service Requests . The state appears next to your service request. SAP Support might
contact you by e-mail if they need clarification, or if you need to take any further action.
Note
Your database system is available for use for all users immediately after it is restored.
Note
To cancel your restore request, go to SAP HANA / SAP ASE Service Request , choose the request and
select the Delete icon. You can cancel a request only if it still has the state New.
Related Information
Restore your database in the Neo environment from a specific point of time by creating a service request in the
SAP Cloud Platform cockpit.
Prerequisites
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Service Requests .
3. Choose New Request, then do the following:
a. Choose Database.
b. From the dropdown box, select the Database you want to restore.
c. Use the Restore To field to specify a specific point in time to which you want to restore the database.
Caution
You will lose all data stored between the time you specify in the New Service Request screen and
the time at which you create the service request. For example, if you create a restore request at
3pm to restore your database to 9am on the same day, all data stored between 9am and 3pm will be
lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA / SAP ASE Service Requests , then choose the Display icon to find the
template for opening a ticket at any time.
f. Choose Close.
4. Open SAP Support Portal .
5. Choose Report an Incident.
Results
You have created a request for restoring a database and sent the request to SAP Support for processing. As
soon as your database is restored, the state of your request will be set to Finished in the cockpit and the
incident you created will be set to Completed. You can see the state of your request in the cockpit by navigating
to SAP HANA / SAP ASE Service Requests . The state is displayed next to your service request. In the
meantime, SAP Support might contact you in case they need further clarification. You will be notified by e-mail
if you need to take any further action.
Note
Your database is available for use for all users immediately after the restore has been successful.
To cancel your restore request, go to SAP HANA / SAP ASE Service Request , choose your restore
request and select the Delete icon. Note that your request can only be canceled if it has the state New.
Related Information
1.11.9.2.8 Security
Governments place legal requirements on industry to protect data and privacy. We provide features and
functions to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by
providing security features and data protection-relevant functions, such as blocking and deletion of
personal data. In many cases, compliance with applicable data protection and privacy laws is not covered
by a product feature. Furthermore, this information should not be taken as advice or a recommendation
regarding additional features that would be required in specific IT environments. Decisions related to data
protection must be made on a case-by-case basis, taking into consideration the given system landscape
and the applicable legal requirements. Definitions and other terms used in this documentation are not
taken from a specific legal source.
The following sections provide information about data protection and privacy in the SAP HANA service. For the
central data protection and privacy statement for SAP Cloud Platform, see Data Protection and Privacy [page 2527].
An information report is a collection of data relating to a data subject. A data privacy specialist may be required
to provide such a report or an application may offer a self-service.
1.11.9.2.8.1.2 Deletion
When handling personal data, consider the legislation in the different countries where your organization
operates. After the data has passed the end of purpose, regulations may require you to delete the data.
However, additional regulations may require you to keep the data longer. During this period you must block
access to the data by unauthorized persons until the end of the retention period, when the data is finally
deleted.
Read-access logging (RAL) is used to monitor and log read access to sensitive data. Data may be categorized
as sensitive by law, by external company policy, or by internal company policy. Read-access logging enables
you to answer questions about who accessed certain data within a specified time frame.
1.11.9.2.8.1.4 Glossary
If you have questions or encounter an issue while working with the SAP HANA service, have a look at our FAQ or
find out how to get support.
Answers to some of the most commonly asked questions about the SAP HANA service in the Neo environment.
Where can I view the memory limits for an SAP HANA tenant database
system?
See View Memory Usage for an SAP HANA Database System [page 896].
Restore activities are currently handled by SAP Operations. For more information on how to request a recovery,
see Restore Database Systems [page 995] and Restore Databases [page 996].
How often does a backup occur? How much data can I lose in the worst
case?
Due to the EclipseLink bug 317597 , the @Lob annotation is ignored when the corresponding table column is
created in the database. To enforce the creation of a CLOB column, you have to additionally specify
@Column(length=4001) for the property concerned.
If you have questions or encounter an issue while working with the SAP HANA service in the Neo environment,
there are various ways to address them.
If you encounter an issue with this service, we recommend that you follow this procedure:
1. Check the availability of the platform at SAP Cloud Platform Status Page .
2. In the SAP Support Portal, check the Guided Answers section for SAP Cloud Platform. There you can find
solutions for general SAP Cloud Platform issues as well as for specific services.
3. Report an incident or error through the SAP Support Portal . For more information, see Getting
Support [page 2531].
BC-NEO-PERS SAP Cloud Platform, SAP HANA Service in SAP, Azure, and
AWS [for databases provisioned before June 2018] regions
Additionally, for database problems in SAP regions, include the database or schema ID:
The Cloud Foundry environment contains the Cloud Foundry Application Runtime, which is based on the open-
source application platform managed by the Cloud Foundry Foundation.
You can leverage a multitude of buildpacks, including community innovations and self-developed buildpacks. It
also integrates with SAP HANA extended application services, advanced model (SAP HANA XSA). This runtime
platform enables you to develop and deploy web applications, supporting multiple runtimes, programming
languages, libraries, and services.
The following table shows which Cloud Foundry features are supported by the Cloud Foundry environment on
SAP Cloud Platform and which aren't.
The following technical configurations are specific to SAP Cloud Platform and differ from the default
configuration:
● By default, a newly pushed (or started) Cloud Foundry application needs to respond to a health check
within the first 60 seconds, otherwise the application is considered to have failed. For more information,
see https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html#health_check_timeout .
On SAP Cloud Platform, however, you can override this timeout to up to 10 minutes. For instructions, see
https://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html .
● On SAP Cloud Platform, application SSH access is disabled by default. For more information on SSH, see
https://docs.cloudfoundry.org/devguide/deploy-apps/app-ssh-overview.html .
● On SAP Cloud Platform, the Cloud Foundry API is protected by a rate limit against misuse. The limit is in
the range of a few tens of thousands of requests per hour per user.
● In the Cloud Foundry environment, applications get a guaranteed CPU share of ¼ core per GB of instance
memory. As the maximum instance memory per application is 8 GB, this allows for vertical scaling up to 2
CPUs.
If applications running on the same virtual machine don't use their guaranteed CPU, other applications
might get more CPU. This is not guaranteed and might be subject to change in the future. If you encounter
performance problems, scale up your application or increase the application start timeout.
The number of running threads per application instance is limited to 10,420. Reaching this limit can cause
performance issues.
● When pushing or scaling your application, you can define a disk_quota that can be up to 4 GB. For more
information, see https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html#disk-quota .
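The timeout and disk quota settings described above can be expressed in an application manifest. The values below are illustrative examples, not defaults; the attribute names follow the standard Cloud Foundry manifest schema, and the timeout of 600 seconds reflects the extended limit this section describes.

```yaml
# Illustrative manifest.yml for an application on the SAP Cloud Platform
# Cloud Foundry environment; names and sizes are examples.
applications:
- name: myapp
  memory: 2G        # roughly 1/4 CPU core is guaranteed per GB of instance memory
  disk_quota: 4G    # maximum allowed disk quota
  instances: 2
  timeout: 600      # health-check timeout in seconds (60s default, up to 600s here)
```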
Application developers can use the Cloud Foundry environment to enhance SAP products and to integrate
business applications, as well as to develop entirely new enterprise applications based on business APIs that
are hosted on SAP Cloud Platform.
The Cloud Foundry environment allows you to use multiple programming languages such as Java, JavaScript
(with Node.js), and community/bring-your-own-language options. We recommend that you use the Cloud
Foundry environment for 12-factor and/or micro-services-based applications, for Internet of Things and
machine learning scenarios, and for developing applications using SAP HANA extended application services,
advanced model (SAP HANA XSA).
Links to additional information about the open-source Cloud Foundry project that is useful to know but not
necessarily specific to the SAP Cloud Platform Cloud Foundry environment.
Content Location
BOSH http://bosh.cloudfoundry.org
Buildpacks http://docs.cloudfoundry.org/buildpacks
http://docs.cloudfoundry.org/devguide/services/user-pro
vided.html
1.12.3 Tools
An overview of tools for working in the Cloud Foundry environment of SAP Cloud Platform.
Tool Description
SAP Cloud Platform Cockpit [page 1006] This is the browser-based interface for managing your global
account.
Cloud Connector [page 345] It serves as the link between on-demand applications in SAP
Cloud Platform and existing on-premise systems. You can
control the resources available for the cloud applications in
those systems.
SAP Cloud Platform SDK for iOS [page 1007] It is based on the Apple Swift programming language for
developing apps in the Xcode IDE and includes well-defined
layers (SDK frameworks, components, and platform services)
that simplify development of enterprise-ready mobile native
iOS apps. The SDK is tightly integrated with SAP Cloud
Platform Mobile Service for Development and Operations.
Cloud Foundry Command Line Interface [page 1769] It enables you to work with the Cloud Foundry environment
to deploy and manage your applications.
SAP Java Buildpack The SAP Java buildpack is a SAP Cloud Platform Cloud
Foundry buildpack for running JVM-based applications. See
Developing Java in the Cloud Foundry Environment [page
1162].
A web-based administration interface provides access to a number of functions for configuring and managing
applications, services, and subaccounts.
Use the cockpit to manage resources, services, and security, monitor application metrics, and perform
actions on cloud applications.
The cockpit provides an overview of the applications available in the different technologies supported by SAP
Cloud Platform, and shows other key information about the subaccount. The tiles contain links for direct
navigation to the relevant information.
Home Page
The first thing you see on SAP Cloud Platform is the home page. You can find key information about the cloud
platform and its service offering. You can log on to the cloud cockpit from the home page, or register if you are
a new user.
Accessibility
SAP Cloud Platform provides High Contrast Black (HCB) theme support. Switch between the default theme
and the high contrast theme by choosing Your Name Settings in the header toolbar and selecting a
theme. Once you have saved your changes, the cockpit starts with the theme of your choice.
Get Support
To ask a question or give us feedback, choose one of the following options from Your Name Settings in
the header toolbar:
SAP Cloud Platform SDK for iOS enables developers to quickly develop enterprise-ready native iOS apps, built
with Swift, the modern programming language by Apple.
The SDK is tightly integrated with SAP Cloud Platform mobile service for development and operations to
provide:
The SDK provides a set of UI controls that are often used in the enterprise space. These controls are
implemented using the SAP Fiori design language, and are in addition to the existing native controls on the iOS
platform.
Related Information
The Eclipse plug-in for the Cloud Foundry environment enables you to deploy and manage Java and Spring
applications in the Cloud Foundry environment from Eclipse or Spring Tool Suite. For more information about
how to install and use the plug-in, see Eclipse Plugin for the Cloud Foundry Environment .
Use the Cloud Foundry command line interface (CF CLI) for account administration and to deploy and manage
your applications in the Cloud Foundry environment.
Downloading and installing the console client for the Cloud Foundry environment Download and Install the Cloud Foundry Command Line Interface [page 1769]
Cloud Foundry command line interface plug-ins CF CLI: Plug-ins [page 1771]
Managing subaccounts, orgs, spaces, and quotas Account Administration in the Cloud Foundry Command Line Interface [page 1768]
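A typical session with the CF CLI might look as follows. The API endpoint URL, org, space, and application names are placeholders, and a stub cf function stands in for the real CLI only so the sketch runs standalone.

```shell
cf() { echo "cf $*"; }   # stub; install and use the real CF CLI in practice

# Log on to the Cloud Foundry API endpoint of your region (URL illustrative).
cf login -a https://api.cf.eu10.hana.ondemand.com -o myorg -s dev

# Deploy the application from the current directory, then inspect it.
cf push myapp
cf apps
cf logs myapp --recent
```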
Find a list of the product prerequisites and restrictions for SAP Cloud Platform.
General Constraints
● For information on constraints and default settings to consider when you deploy an application in the Cloud
Foundry environment, see http://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html#limits_table .
● SAP Cloud Platform exposes applications only via HTTPS. For security reasons, applications cannot be
accessed via HTTP.
● SAP Cloud Platform Tools for Java and the SDK have been tested with Java 7 and Java 8.
● SAP Cloud Platform Tools for Java and the SDK run in many operating environments with Java 7 and Java 8
that are supported by Eclipse. However, SAP does not systematically test all platforms.
● SAP Cloud Platform Tools for Java must be installed on Eclipse IDE for Java EE developers.
For the platform development tools, SDK, Cloud Connector, and SAP JVM, see https://tools.hana.ondemand.com/#cloud .
Browser Support
The SAP Cloud Platform cockpit supports the following desktop browsers on Microsoft Windows, and where
mentioned below, on macOS:
Browser Versions
For a list of supported browsers for developing SAPUI5 applications, see Browser and Platform Support.
For security reasons, SAP Cloud Platform does not support TLS 1.1 and older, SSL 3.0 and older, and RC4-based
cipher suites. Make sure your browser supports at least TLS 1.2 and modern ciphers (for example, AES).
You can find the restrictions related to each SAP Cloud Platform service in the respective service
documentation. For more information, see Capabilities [page 15].
1.12.5 Tutorials
Follow the tutorials below to get familiar with the Cloud Foundry environment of SAP Cloud Platform.
SAP HANA service scenarios Creating a Service Binding Using the Cloud Cockpit [page
791]
Authentication checks in Node.js applications Authentication Checks in Node.js Applications [page 1211]
More Tutorials
Tutorial Navigator
The ABAP environment allows you to create extensions for ABAP-based products, such as SAP S/4HANA
Cloud, and develop new cloud applications. You can transform existing ABAP-based custom code or extensions
to the cloud.
The ABAP environment is based on the latest ABAP platform cloud release that is also used for SAP S/4HANA
Cloud. It leverages the innovations provided by SAP HANA. The software stack contains standard technology
components that are familiar from the standalone Application Server ABAP. The ABAP environment supports
the new RESTful ABAP programming model including SAP Fiori and Core Data Services (CDS). SAP Services
and APIs are offered according to a new whitelisting approach. The ABAP environment provides you with
technical access to SAP Cloud Platform services, such as destination service, integration, machine learning, and IoT.
For information about regional availability, see Region and API Endpoint for the ABAP Environment [page 13].
Related Information
● Build, extend, and run ABAP applications in SAP Cloud Platform to extend SAP products
● Access SAP Cloud Platform services such as integration, machine learning, streaming, and Internet of
Things (IoT)
You can download the ABAP Development Tools installation packages from the SAP Development Tools page.
Find a list of the product prerequisites and restrictions for SAP Cloud Platform.
General Constraints
● For information on constraints and default settings to consider when you deploy an application in the Cloud
Foundry environment, see http://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html#limits_table .
● SAP Cloud Platform exposes applications only via HTTPS. For security reasons, applications cannot be
accessed via HTTP.
● SAP Cloud Platform Tools for Java and the SDK have been tested with Java 7 and Java 8.
● SAP Cloud Platform Tools for Java and the SDK run in many operating environments with Java 7 and Java 8
that are supported by Eclipse. However, SAP does not systematically test all platforms.
● SAP Cloud Platform Tools for Java must be installed on Eclipse IDE for Java EE developers.
Browser Support
The SAP Cloud Platform cockpit supports the following desktop browsers on Microsoft Windows, and where
mentioned below, on macOS:
Browser Versions
For a list of supported browsers for developing SAPUI5 applications, see Browser and Platform Support.
For security reasons, SAP Cloud Platform does not support TLS 1.1 and older, SSL 3.0 and older, and RC4-based
cipher suites. Make sure your browser supports at least TLS 1.2 and modern ciphers (for example, AES).
Services
You can find the restrictions related to each SAP Cloud Platform service in the respective service
documentation. For more information, see Capabilities [page 15].
1.13.3 Tutorials
Follow the tutorials below to get familiar with the ABAP environment of SAP Cloud Platform.
Trial Account
Create an SAP Cloud Platform ABAP Environment Trial User
Getting Started
Unmanaged Scenario
Getting Started
Unmanaged Scenario
Create Authorization
More Tutorials
Tutorial Navigator
The Neo environment lets you develop HTML5, Java, and SAP HANA extended application services (SAP HANA
XS) applications. You can also use the UI Development Toolkit for HTML5 (SAPUI5) to develop rich user
interfaces for modern web-based business applications.
The Neo environment also allows you to deploy solutions on SAP Cloud Platform. In the context of SAP Cloud
Platform, a solution is made up of various application types and configurations created with different
technologies, designed to implement a certain scenario or task flow. You can deploy solutions by using the
Change and Transport System (CTS+) tool, the console client, or the SAP Cloud Platform cockpit, which also
lets you monitor your solutions. The SAP multitarget application (MTA) model encompasses and describes
application modules, dependencies, and interfaces in an approach that facilitates validation, orchestration,
maintenance, and automation of the application throughout its life cycle.
The Neo environment lets you use virtual machines, allowing you to install and maintain your own applications
in scenarios that aren't covered by the platform.
You can deploy applications developed in the Neo environment to various SAP data centers around the world.
For more information about regional availability of the Neo environment, see Regions and Hosts Available for
the Neo Environment [page 14].
An overview of tools for working in the Neo environment of SAP Cloud Platform.
Tool Description
Cloud Connector [page 345] It serves as the link between on-demand applications in SAP
Cloud Platform and existing on-premise systems. You can
control the resources available for the cloud applications in
those systems.
SAP Cloud Platform SDK for iOS [page 1019] It is based on the Apple Swift programming language for
developing apps in the Xcode IDE and includes well-defined
layers (SDK frameworks, components, and platform services)
that simplify development of enterprise-ready mobile native
iOS apps. The SDK is tightly integrated with SAP Cloud
Platform Mobile Service for Development and Operations.
A web-based administration interface provides access to a number of functions for configuring and managing
applications, services, and subaccounts.
Use the cockpit to manage resources, services, and security, monitor application metrics, and perform
actions on cloud applications.
The cockpit provides an overview of the applications available in the different technologies supported by SAP
Cloud Platform, and shows other key information about the subaccount. The tiles contain links for direct
navigation to the relevant information.
Home Page
The first thing you see on SAP Cloud Platform is the home page. You can find key information about the cloud
platform and its service offering. You can log on to the cloud cockpit from the home page, or register if you are
a new user.
Accessibility
SAP Cloud Platform provides High Contrast Black (HCB) theme support. Switch between the default theme
and the high contrast theme by choosing Your Name Settings in the header toolbar and selecting a
theme. Once you have saved your changes, the cockpit starts with the theme of your choice.
To ask a question or give us feedback, choose one of the following options from Your Name Settings in
the header toolbar:
1.14.1.1.1 Notifications
Use Notifications to stay informed about different operations and events in the cockpit, for example, to monitor
the progress of copying a subaccount.
The Notification icon in the header toolbar provides quick access to the list of notifications and shows the
number of available notifications. The icon is visible only if there are currently notifications.
Each notification includes a short statement, a date and time, and the relevant subaccount. A notification
informs you about the status of an operation or asks for an action. For example, if copying a subaccount failed,
an administrator of the subaccount can assign the corresponding notification to themselves and provide a fix.
The other members of this subaccount can see that the notification is already assigned to someone else.
● Dismiss a notification.
● Assign a notification to yourself. It's also possible to unassign yourself from a notification without
processing it further.
● Once you have completed the related action, you can set the status to complete. This dismisses the
corresponding notification for everyone else.
You can access the full list of notifications (also the ones you have dismissed earlier) by choosing Notifications
in the navigation area at the region level.
Related Information
SAP Web IDE is a fully extensible and customizable experience that accelerates the development life cycle with
interactive code editors, integrated developer assistance, and end-to-end application development life cycle
support. SAP Web IDE was developed by developers for developers.
SAP Web IDE is a next-generation cloud-based meeting space where multiple application developers can work
together from a common Web interface, connecting to the same shared repository with virtually no setup required.
Note
SAP Web IDE is only available in the Neo environment, but it supports developing applications for both the
Neo environment and the Cloud Foundry environment.
Related Information
SAP offers a Maven plugin that supports you in using Maven to develop Java applications for SAP Cloud
Platform. It allows you to conveniently call the SAP Cloud Platform console client and its commands from the
Maven environment.
Most commands that are supported by the console client are available as goals in the plugin. To use the plugin,
you require the SAP Cloud Platform SDK for the Neo environment, which can be downloaded automatically with
the plugin. Each version of the SDK has a matching Maven plugin version.
Note
For a list of goals and parameters, usage guide, FAQ, and examples, see:
Related Information
Prerequisites
You have the SDK installed. See Install the SAP Cloud Platform SDK for Neo Environment [page 1403].
The location of the SDK is the folder you chose when you downloaded and unzipped it.
An overview of the structure and content of the SDK is shown in the table below. The folders and files are
located directly below the common root directory in the order given:
Folder/File Description
api The platform API containing the SAP and third-party API
JARs required to compile Web applications for SAP Cloud
Platform (for more information about the platform API, see
the "Supported APIs" section further below).
javadoc Javadoc for the SAP platform APIs (also available as online
documentation via the API Documentation link in the title
bar of the SAP Cloud Platform Documentation Center).
Javadoc for the third-party APIs is cross-referenced from
the online documentation.
server Initially not present, but created once you install a local
server runtime.
tools Command line tools required for interacting with the cloud
runtime (for example, to deploy and start applications) and
the local server runtime (for example, to install and start the
local server).
readme.txt Brief introduction to the SAP Cloud Platform SDK for Neo
environment, its content, and how to set it up.
The cloud server runtime consists of the application server, the platform API, and the cloud implementations of
the provided services (connectivity, SAP HANA and SAP ASE, document, and identity). The SDK, on the other
Supported APIs
The SAP Cloud Platform SDK for the Neo environment contains the API for SAP Cloud Platform. All web
applications to be deployed in the cloud should be compiled against this platform API. The platform API
is used by the SAP Cloud Platform Tools for Java to set the compile-time classpath.
All JARs contained in the platform API are considered provided scope and must therefore be used for compilation only. They must not be packaged with the application, since they are provided and wired by the SAP Cloud Platform runtime, irrespective of whether you run your application locally for development and testing or centrally in the cloud.
When you develop applications to run on the SAP Cloud Platform, you should be aware of which APIs are
supported and provisioned by the runtime environment of the platform:
● Third-party APIs: These include Java EE standard APIs (standards based and backwards compatible as
defined in the Java EE Specification) and other APIs released by third parties.
● SAP APIs: The platform APIs provided by the SAP Cloud Platform services.
Related Information
SAP Cloud Platform SDK for iOS enables developers to quickly develop enterprise-ready native iOS apps, built
with Swift, the modern programming language by Apple.
The SDK is tightly integrated with SAP Cloud Platform mobile service for development and operations to
provide:
Related Information
Features of the SAP Cloud Platform Tools for the Neo Environment
You can download SAP Cloud Platform Tools from the SAP Development Tools for Eclipse page. The toolkit
package contains:
Support of the SAP Cloud Platform Tools for the Neo Environment
SAP Cloud Platform Tools come with a wizard for gathering support information in case you need help with a
feature or operation (during deploying/debugging applications, logging, configurations, and so on). For more
information, see Gather Support Information [page 2534].
The SAP Cloud Platform console client for the Neo environment enables development, deployment, and configuration of applications outside the Eclipse IDE, as well as continuous integration and automation tasks.
The tool is part of the SAP Cloud Platform SDK for Neo environment. You can find it in the tools folder of your
SDK location.
The Console Client is related only to the Neo environment. For the Cloud Foundry environment use the
Cloud Foundry command line interface. See Download and Install the Cloud Foundry Command Line
Interface [page 1769].
Downloading and setting up the console client: Set Up the Console Client [page 1412]
Opening the tool and working with the commands and parameters: Using the Console Client [page 1928]
Console Client Video Tutorial
Verbose mode of output: Verbose Mode of the Console Commands Output [page 1931]
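As a sketch of a typical console client session, the commands below deploy and start an application. The SDK location, host, subaccount, user, and WAR path are all placeholders, not values from this guide.

```shell
#!/bin/sh
# Sketch: deploying and starting an application with the Neo console client.
# SDK_HOME, HOST, ACCOUNT, USER, APP, and the WAR path are placeholders.
SDK_HOME="$HOME/neo-java-web-sdk"   # folder where you unzipped the SDK
HOST="hana.ondemand.com"            # region host of your subaccount
ACCOUNT="mysubaccount"
USER="[email protected]"
APP="hello"

if [ -x "$SDK_HOME/tools/neo.sh" ]; then
  # Deploy the WAR file, then start the application and wait until it is up.
  "$SDK_HOME/tools/neo.sh" deploy --host "$HOST" --account "$ACCOUNT" \
    --application "$APP" --user "$USER" --source target/hello.war
  "$SDK_HOME/tools/neo.sh" start --host "$HOST" --account "$ACCOUNT" \
    --application "$APP" --user "$USER" --synchronous
else
  echo "Neo SDK not found at $SDK_HOME (see Set Up the Console Client)"
fi
```

On Windows, the equivalent script is tools/neo.bat in the same SDK folder.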
1.14.2 Tutorials
Follow the tutorials below to get familiar with the services offered by SAP Cloud Platform in the Neo
environment.
How to create a "HelloWorld" Web application Creating a Hello World Application [page 1416]
How to create a "HelloWorld" Web application using Java EE Using Java EE Web Profile Runtimes [page 1445]
6 Web Profile
How to create a "Hello World" Multi-Target Application Create a Hello World Multitarget Application [page 1607]
Connectivity service scenarios Consume Internet Services (Java Web or Java EE 6 Web Pro
file) [page 268]
SAP HANA and SAP ASE service scenarios Tutorial: Adding Application-Managed Persistence with JPA
(SDK for Java Web) [page 1516]
Java applications lifecycle management scenarios Lifecycle Management API Tutorial [page 1455]
How to secure your HTTPS connections Using the Keystore Service for Client Side HTTPS Connec
tions [page 2472]
How to create an SAP HANA XS application ● Creating an SAP HANA XS Hello World Application Us
ing SAP HANA Studio [page 1574]
● Creating an SAP HANA XS Hello World Application Us
ing SAP HANA Web-based Development Workbench
[page 1571]
Continuous Integration scenarios Continuous Integration (CI) Best Practices with SAP: Intro
duction and Navigator
More Tutorials
Tutorial Navigator
Take your first steps in the system. Follow the workflows for trial or customer accounts or subscribe to
business applications.
Related Information
Get onboarded in the Cloud Foundry environment of SAP Cloud Platform. Follow the workflows for trial or
customer accounts or subscribe to business applications.
Getting Started with a Trial Account in the Cloud Foundry Environment [AWS, Azure, or GCP Regions]
[page 1024]
Quickly get started with a trial account.
Getting Started with a Customer Account in the Cloud Foundry Environment [page 1031]
Quickly get started with a customer account.
Getting Started with Multitenant Application Subscriptions in the Cloud Foundry Environment [page 1035]
Related Information
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
Tip
Also check out the tutorial Create Your First App on Cloud Foundry to see how you can deploy a pre-bundled set of artifacts using the SAP Cloud Platform cockpit, access the app from your web browser, and create an instance of a service available on Cloud Foundry and bind it to your app.
Before you begin, sign up for a free trial account. See Get a Free Trial Account. For more information about trial
accounts, see About Trial Accounts in the Cloud Foundry Environment [page 1028].
1. When you register for a trial account, a subaccount and a space are created for you. You can create
additional subaccounts and spaces, thereby further breaking down your account model and structuring it
according to your development scenario, but first it's important you understand how to navigate to your
accounts and spaces using the cockpit. See Navigate to Orgs and Spaces [page 1751].
2. If you like, create further subaccounts. See Create a Subaccount in the Cloud Foundry Environment [AWS,
Azure, or GCP Regions].
3. If you haven't done so already, now is a good time to download and install the Cloud Foundry Command Line Interface (cf CLI). This tool allows you to administer and configure your environment, enable services, and deploy applications. See Download and Install the Cloud Foundry Command Line Interface [page 1769]. But don't worry, you can also perform all the necessary tasks using the SAP Cloud Platform cockpit, which you don't need to install.
4. If you'd like to use the cf CLI, log on to your environment. See Log On to the Cloud Foundry Environment
Using the Cloud Foundry Command Line Interface [page 1770].
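Assuming the cf CLI is installed, a minimal logon session might look like the following; the API endpoint, org, and space names are placeholders that you must replace with your own values.

```shell
#!/bin/sh
# Sketch: logging on to the Cloud Foundry environment with the cf CLI.
# The API endpoint, org, and space below are placeholders for your own values.
API="https://api.cf.eu10.hana.ondemand.com"   # region-specific API endpoint
ORG="mytrial_org"
SPACE="dev"

if command -v cf >/dev/null 2>&1; then
  cf login -a "$API" -o "$ORG" -s "$SPACE"   # prompts for email and password
  cf target                                  # confirm the targeted org and space
else
  echo "cf CLI not installed (see Download and Install the cf CLI)"
fi
```

The API endpoint depends on the region of your subaccount; you can look it up on the subaccount overview page in the cockpit.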
1. Now that you've set up your account model, it's time to think about member management. You can add
members at different levels. For example, you can add members at the org level. See Add Organization
Members Using the Cockpit [page 1764]. For more information about the roles that are available on the
different levels, see About Roles in the Cloud Foundry Environment [page 1760].
2. You can also add members at the space level. See Add Space Members Using the Cockpit [page 1765].
3. In a trial account, quotas are automatically assigned to your subaccounts, but you can change that
assignment. See Configure Entitlements and Quotas for Subaccounts [page 1756]. To learn more about
entitlements and quotas, see Managing Entitlements and Quotas Using the Cockpit [page 1755].
1. Develop your application. Check out the Developer Guide for tutorials and more information. See
Development [page 1070].
Parent topic: Getting Started in the Cloud Foundry Environment [page 1023]
Related Information
Getting Started with a Customer Account in the Cloud Foundry Environment [page 1031]
Getting Started with Multitenant Application Subscriptions in the Cloud Foundry Environment [page 1035]
FAQ for Cloud Foundry environment within SAP Cloud Platform on the SAP Cloud Platform Public Wiki
A trial account lets you try out a limited set of features in the Cloud Foundry environment for free. Access is
open to everyone.
Trial accounts are intended for personal exploration, and not for production use or team development. The
features included in a trial account are limited, compared to an enterprise account. Consider the following
before using a trial account:
For more information about the regions that are available for trial accounts, see Regions and API Endpoints
Available for the Cloud Foundry Environment [page 11].
Your trial account is set up automatically, so that you can start using it right away. However, if one or more of
the automatic steps fail, you can also finalize the setup manually, by following the steps below.
The first step in setting up your trial account is the creation of a subaccount. If this step was successful in the cockpit, you can skip directly to the next section.
Procedure
1. Navigate into your global account by choosing Enter Your Trial Account.
2. Choose New Subaccount.
3. Configure it as follows:
Field Input
Subdomain <your_id>trial
Example: P0123456789trial
Once you have a subaccount (whether it was created automatically or you followed the steps described above),
you need an org and entitlements.
Procedure
Note
To select a service plan, choose a service from the left and tick all the service plans that appear on the
right. Do that for all services.
6. Once you've added all the service plans, they appear together in a table. Before you choose Save, increase the amount for each plan with a numerical quota to the maximum (until the increase icon becomes disabled).
7. Finally, choose Save to save all your changes and exit edit mode.
Results
You now have an org and all the entitlements for your subaccount. The last thing you need is a space where you
can use the services you've configured entitlements for and deploy applications.
Procedure
1. In your trial subaccount, navigate to the Spaces page using the left-hand navigation.
2. Choose New Space and name it dev.
3. Choose Save.
You now have your trial setup all done and ready to go.
To get some guidance on how you can get started, navigate back to your Trial Home by choosing the first item
in your breadcrumbs at the top. There, you can find several guided tours to walk you through the basics of SAP
Cloud Platform and the cockpit, as well as some more complex starter scenarios.
For information about setting up a customer account when running in the China (Shanghai) region, see Setting
Up a Global Account With the Cockpit [China (Shanghai) and Government Cloud (US) Regions] and Setting Up
a Global Account with the CLI [China (Shanghai) and Government Cloud (US) Regions].
Before you begin, purchase a customer account or join the partner program. See Purchase a Customer
Account or Join the Partner Program. For more information about account types, see Enterprise versus Trial
Accounts.
1. After you've received your logon data by email, create subaccounts in your global account. This allows you
to further break down your account model and structure it according to your business needs. See Create a
Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
2. If you haven't done so already, now is a good time to download and install the Cloud Foundry Command Line Interface (cf CLI). This tool allows you to administer and configure your environment, enable services, and deploy applications. See Download and Install the Cloud Foundry Command Line Interface [page 1769]. But don't worry, you can also perform all the necessary tasks using the SAP Cloud Platform cockpit, which you don't need to install.
3. If you'd like to use the cf CLI, log on to your environment. See Log On to the Cloud Foundry Environment
Using the Cloud Foundry Command Line Interface [page 1770].
4. Create spaces. See Create Spaces [page 1754].
1. You can either use the cockpit or the cf CLI to configure your environment. If you'd like to use the cockpit,
it's important you understand how you can navigate to your accounts and spaces. See Navigate to Orgs
and Spaces [page 1751].
2. It's time to think about member management. You can add members at different levels. For example, you
can add members at the global account level. See Add Members to Your Global Account [AWS, Azure, or
1. Develop your application. Check out the Developer Guide for tutorials and more information. See
Development [page 1070].
2. Deploy your application. See Deploy Business Applications in the Cloud Foundry Environment [page 1075].
3. Integrate your application with a service. To do so, first create a service instance. See Creating Service Instances [page 1320].
4. Bind the service instance to your application. See Binding Service Instances to Applications [page 1324].
5. Alternatively, you can also create and use service keys. See Creating Service Keys [page 1326]. For more
information on using services and creating service keys, see About Services [page 1318].
6. You can also create instances of user-provided services. See Creating User-Provided Service Instances
[page 1322].
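Steps 3 to 5 can also be performed with the cf CLI instead of the cockpit. A minimal sketch, assuming a hypothetical service offering, plan, app, and key name (check cf marketplace for what your org actually offers):

```shell
#!/bin/sh
# Sketch: create a service instance, bind it, and create a service key with
# the cf CLI. The service "application-logs" with plan "lite", the app name
# "myapp", and the key name are assumptions -- check 'cf marketplace'.
SERVICE="application-logs"
PLAN="lite"
INSTANCE="my-logs"
APP="myapp"
KEY="my-key"

if command -v cf >/dev/null 2>&1; then
  cf create-service "$SERVICE" "$PLAN" "$INSTANCE"  # step 3: service instance
  cf bind-service "$APP" "$INSTANCE"                # step 4: bind to the app
  cf restage "$APP"                                 # pick up the new credentials
  cf create-service-key "$INSTANCE" "$KEY"          # step 5: service key
  cf service-key "$INSTANCE" "$KEY"                 # display the credentials
else
  echo "cf CLI not installed; the cockpit offers the same operations"
fi
```

After binding, the credentials appear in the app's VCAP_SERVICES environment variable; a service key exposes the same credentials without a binding.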
Parent topic: Getting Started in the Cloud Foundry Environment [page 1023]
Related Information
Getting Started with a Trial Account in the Cloud Foundry Environment [AWS, Azure, or GCP Regions] [page
1024]
Getting Started with Multitenant Application Subscriptions in the Cloud Foundry Environment [page 1035]
FAQ for Cloud Foundry environment within SAP Cloud Platform on the SAP Cloud Platform Public Wiki
As a global account owner on SAP Cloud Platform, you can develop and run applications in the Cloud Foundry environment, and also share them as multitenant applications with your own consumers. See:
Parent topic: Getting Started in the Cloud Foundry Environment [page 1023]
Getting Started with a Trial Account in the Cloud Foundry Environment [AWS, Azure, or GCP Regions] [page
1024]
Getting Started with a Customer Account in the Cloud Foundry Environment [page 1031]
Developing Multitenant Applications in the Cloud Foundry Environment
Subscribe to multitenant applications from the Subscriptions page in the SAP Cloud Platform cockpit.
View, create, and modify application roles and then assign users to these roles using the SAP Cloud Platform
cockpit.
Prerequisites
Your subaccount is subscribed to a multitenant application in the Cloud Foundry environment. See Subscribe
to Multitenant Applications in the Cloud Foundry Environment Using the Cockpit [page 1036].
Context
You can use any SAML 2.0 standard compliant identity provider. See Trust and Federation with SAML 2.0
Identity Providers [page 2281].
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts and Subaccounts
[AWS, Azure, or GCP Regions].
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts and Subaccounts
[AWS, Azure, or GCP Regions].
2. Choose Security Role Collections in the navigation area and include the roles in the role collection.
Related Information
Get onboarded in the ABAP environment of SAP Cloud Platform. Follow the workflow for customer accounts.
Getting Started with a Trial Account in the ABAP Environment [page 1038]
Quickly get started with a trial account.
Getting Started with a Customer Account: Workflow in the ABAP Environment [page 1043]
Quickly get started with a customer account.
Related Information
Before you begin, sign up for a free trial account. See Get a Free Trial Account. For more information about trial
accounts, see About Trial Accounts in the Cloud Foundry Environment [page 1028].
Restriction
Note that you can only select Amazon Web Services (AWS) as your provider and either Europe (Frankfurt) or
US East (VA) as your region to get access to ABAP trial.
1. After registering for a trial account, you are taken to the space that was created for your Cloud Foundry trial account. If not, see Navigate to Orgs and Spaces [page 1751].
2. Go to your trial service by selecting ABAP Trial from the list of services that are available to you in the
Service Marketplace.
Tip
If you don't see the ABAP Trial tile, go back to your trial subaccount and select Entitlements in the
navigation menu to add a shared service plan. See Configure SAP Cloud Platform Entitlements .
Now that you have registered for a trial account and navigated to your ABAP trial service, it's time to set up
your ABAP trial instance. See Create Your ABAP Trial Instance [page 1043].
Tip
If you experience issues with accessing your ABAP trial service or creating an ABAP trial instance, submit
your question on SAP Community by selecting SAP Cloud Platform, abap environment as your
primary tag.
● https://tools.hana.ondemand.com/#abap
● https://developers.sap.com/tutorials/abap-environment-abapgit.html#63bab1ab-0d66-4188-a693-8f63a2944d49
1. Create a service key for your trial system. See Creating Service Keys [page 1326].
2. To start developing in your trial system, you need to create a new project in your ADT installation. Set up an
ABAP cloud project to connect to your ABAP trial system. See Connect to the ABAP System [page 1341].
Related Information
Getting Started with a Customer Account: Workflow in the ABAP Environment [page 1043]
Tutorial: Create an SAP Cloud Platform ABAP Environment Trial User
Tutorial: Create Your First ABAP Console Application
Trial Learning Journey
Learning Journey
Product Page
SAP Community
Solution Overview and Roadmap
Procedure
{
"email": "<your trial user's mail address>"
}
5. [Optional] Choose an application to bind the new service instance to. See Binding Service Instances to
Applications [page 1324].
6. Select Next.
7. Choose an instance name (for example, abap-trial) and select Finish.
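If you prefer the cf CLI over the cockpit wizard, the same instance can be created by passing the JSON parameters shown above in a file. The service name abap-trial and plan shared are assumptions; verify both with cf marketplace.

```shell
#!/bin/sh
# Sketch: creating the ABAP trial instance from the command line instead of
# the cockpit wizard. The service name "abap-trial" and the plan "shared"
# are assumptions -- verify them with 'cf marketplace'.
cat > abap-trial-params.json <<'EOF'
{
  "email": "<your trial user's mail address>"
}
EOF

if command -v cf >/dev/null 2>&1; then
  # -c passes the provisioning parameters file to the service broker
  cf create-service abap-trial shared abap-trial -c abap-trial-params.json
else
  echo "cf CLI not installed; use the cockpit wizard instead"
fi
```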
Before you begin, purchase a customer account. See Purchase a Customer Account.
Tip
To speed up the setup process, log on to the cockpit and choose Recipes in the navigation area. The interactive recipe guides you through the process of setting up your subaccounts, configuring entitlements, and assigning members. This will give you a jumpstart into your ABAP development projects. See Using a Recipe to Automate the Setup of the ABAP Environment [page 1049]. To find out more about recipes, see Recipes [page 1331].
● Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions]
● Create Spaces [page 1754]
● Global Accounts [page 8]
1. After you've received your logon data by email, create subaccounts in your global account. This allows you
to further break down your account model and structure it according to your business needs. See Create a
Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
2. Create spaces. See Create Spaces [page 1754]. If you want to learn more about subaccounts, orgs, and
spaces, and how they relate to each other, see Accounts [page 8]. You'll also find some recommendations
for setting up your account model so that it meets your business needs.
1. You can either use the cockpit or the cf CLI to configure your environment. If you'd like to use the cockpit,
it's important you understand how you can navigate to your accounts and spaces. See Navigate to Orgs
and Spaces [page 1751].
2. It's time to think about member management. You can add members at different levels. For example, you
can add members at the global account level. See Add Members to Your Global Account [AWS, Azure, or
GCP Regions]. For more information about roles, see About Roles in the Cloud Foundry Environment [page
1760].
1. Since you need to use the SAP Cloud Platform cockpit to configure your environment, it's important you
understand how you can navigate to your global account and subaccounts. See Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Subscribe to the Web access for ABAP SaaS application to get direct browser access to your instances in
the ABAP environment. This also allows you to access the administration launchpad including your own
SAP Fiori applications. You only have to subscribe once for each subaccount, which includes all systems to
be created in all spaces of this subaccount. For further information, see Subscribing to the Web Access for
ABAP.
3. Now it's time to create your ABAP system. To do so, follow the steps described in Create an ABAP System
[page 1050].
4. Create a service key for the ABAP system that you've just created. Choose a meaningful name such as ADT
for later access to ABAP Development Tools (ADT). For more information about service keys, see Creating
Service Keys [page 1326].
5. You can set up your ABAP environment to ensure secure communication between the Web IDE in the Neo
environment and Cloud Foundry environment. See Setup of the ABAP Environment: Introduction.
6. Integrate your ABAP environment with SAP S/4HANA Cloud. See Integration of SAP Cloud Platform ABAP
Environment with SAP S/4HANA Cloud: Introduction.
● https://tools.hana.ondemand.com/#abap
● Connect to the ABAP System [page 1341]
● Development in the ABAP Environment [page 1337]
● Using Services in the Cloud Foundry Environment [page 1318]
● Software Component Lifecycle Management [page 1880]
Note
SAP GUI is no longer supported. You can only use ADT as your development environment.
2. Create an ABAP cloud project with ADT to connect to the ABAP system in the ABAP environment. See
Connect to the ABAP System [page 1341].
3. Create a software component. See Tutorial: Create Software Component via SAP Fiori Launchpad.
4. Develop your application. See Development in the ABAP Environment [page 1337]. To learn more about how to develop applications, see Tutorial: Create Your First ABAP Console Application and Video Tutorial: Create Application.
5. Deploy your application. See Software Component Lifecycle Management [page 1880].
Related Information
Getting Started with a Trial Account in the ABAP Environment [page 1038]
Learning Journey
Product Page
SAP Community
Solution Overview and Roadmap
Tutorials
Video Tutorials
You can use a recipe to automate some of the required steps for setting up the ABAP environment in SAP
Cloud Platform. Automation includes creating a subaccount and space, configuring the required entitlements,
and assigning administrators and developers to the subaccount.
Context
A recipe is a set of guided interactive steps that enable you to select, configure, and consume services on SAP
Cloud Platform. For more information, see Recipes in the SAP Cloud Platform documentation on SAP Help
Portal.
1. Log on to the SAP Cloud Platform cockpit and choose the global account for the Cloud Foundry
environment as administrator.
2. In the navigation area of the SAP Cloud Platform cockpit, choose Recipes.
3. Choose the recipe Prepare an Account for ABAP Development.
The tab pages Overview, Components, and Additional Resources are displayed, where you get more
information about the recipe.
4. Choose Start Recipe.
5. Follow the instructions in the steps of the recipe.
Results
After the recipe has run successfully, the following tasks have been performed automatically:
● A new Cloud Foundry subaccount for the ABAP environment is created and enabled.
● A space and organization for the ABAP environment are available.
● A service instance for the ABAP environment has been created.
● The entitlement 16 GB ABAP Runtime, 64 GB Database has been configured.
● Quotas for the ABAP runtime, the required destination service, and the application runtime have been
increased by 1.
● You’re subscribed to the Web access for ABAP SaaS application and get direct browser access to your
instances in the ABAP environment.
● An instance for the destination service is created, which applications in the ABAP environment need for
outbound connectivity.
● Administration and developer users with roles have been created.
Note
These tasks are still part of this documentation, but there's no need to perform them if you have run the
recipe successfully. A note at the beginning of each task indicates that you can skip steps if you’ve used the
recipe.
Prerequisites
You need to increase the quota for the ABAP runtime. See Increasing the Quota for the ABAP Runtime.
1. Navigate to the space in which you want to create your ABAP system. For more information, see Navigate
to Orgs and Spaces [page 1751].
You see a list of all instances that have already been created.
5. Select New Instance.
{
"admin_email": "[email protected]",
"description": "Test System for Unit Tests",
"is_development_allowed": true,
"sapsystemname": "H01"
}
Note
The email address is used to automatically create the initial user for the system, including the assignment of the administrator role. The system can only be accessed with the specified user.
The description and system name are optional and user-defined. You can specify a description and a
three-character name of your choice for your system to make it easier to refer to the corresponding
system in the development environment. Or simply use the default name H01. The system name does
not have to be technically unique.
Note
The instance name identifies the service instance in the Cloud Foundry environment. Specify an instance name that is unique among all service instances in the space; uniqueness is determined without regard to case or special characters.
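The same provisioning parameters can be passed to the cf CLI in a JSON file. The service name abap and the plan standard are assumptions; verify them with cf marketplace before running this.

```shell
#!/bin/sh
# Sketch: passing the provisioning parameters shown above to the cf CLI.
# The service name "abap" and plan "standard" are assumptions -- verify them
# with 'cf marketplace'; the instance name must be unique within the space.
cat > abap-system-params.json <<'EOF'
{
  "admin_email": "[email protected]",
  "description": "Test System for Unit Tests",
  "is_development_allowed": true,
  "sapsystemname": "H01"
}
EOF

if command -v cf >/dev/null 2>&1; then
  cf create-service abap standard my-abap-system -c abap-system-params.json
else
  echo "cf CLI not installed; create the instance in the cockpit instead"
fi
```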
Related Information
Learn how to define a developer business role and set write access.
Prerequisites
Procedure
1. In the SAP Fiori Launchpad, select the Maintain Business Roles tile in the Identity and Access Management
section.
2. To define a new developer business role, choose Create from Template.
3. In the Create Business Role from Template dialog, use the value help to select template SAP_BR_Developer
and choose OK.
Note
The entries for New Business Role ID and New Business Role are automatically filled in.
Note
Once the developer role has been activated, the Lifecycle Status is set to Active.
You have created a developer business role with unrestricted write access.
Related Information
Get onboarded in the Neo environment of SAP Cloud Platform. Follow the workflows for trial or customer
accounts or subscribe to business applications.
Getting Started with a Trial Account in the Neo Environment [page 1054]
Quickly get started with a trial account.
Getting Started with a Customer Account in the Neo Environment [page 1057]
Quickly get started with a customer account.
Getting Started with Business Applications Subscriptions in the Neo Environment [page 1061]
By using SAP Cloud Platform, a provider can build and run an application for consumption by multiple consumers. A provider is, for example, an SAP partner who wants to sell business applications to their customers, or an SAP customer who wants to make their business applications available to different organizational units.
Related Information
Before you begin, sign up for a free trial account. See Get a Free Trial Account. For more information about trial
accounts, see About Trial Accounts in the Neo Environment [page 1056].
1. Develop and deploy your application. Check out the Developer Guide for tutorials and more information.
See Developing Java in the Neo Environment [page 1401].
2. Enable a service so that you can integrate it with an application. See Enable Services in the Neo
Environment [page 1741].
Related Information
Getting Started with a Customer Account in the Neo Environment [page 1057]
Getting Started with Business Applications Subscriptions in the Neo Environment [page 1061]
A trial account allows you to try out a limited set of features of the Neo environment for free. Access is open to
everyone.
Trial accounts are intended for personal exploration, and not for productive use or team development. The
amount of functionality a trial account offers is limited, compared to an enterprise account. Consider the
following before you decide to use a trial account:
Before you begin, purchase a customer account or join the partner program. See Purchase a Customer
Account or Join the Partner Program. For more information about account types, see Enterprise versus Trial
Accounts.
● Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions]
● Global Accounts [page 8]
After you've received your logon data by email, create subaccounts in your global account. This allows you to
further break down your account model and structure it according to your business needs. See Create a
Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
1. Since you need to use the SAP Cloud Platform cockpit to configure your environment, it's important you
understand how you can navigate to your global account and subaccounts. See Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. It's time to think about member management. You can add members to subaccounts and assign different
roles to those members. For more information, see Add Members to Your Neo Subaccount [page 1903]. For
more information about roles, see Managing Member Authorizations in the Neo Environment [page 1904].
3. Before you can start using resources such as application runtimes, you need to manage your entitlements
and add quotas to your subaccounts. See Configure Entitlements and Quotas for Subaccounts [page
1756]. To learn more about entitlements and quotas, see Managing Entitlements and Quotas Using the
Cockpit [page 1755].
1. Develop and deploy your application. Check out the Developer Guide for tutorials and more information.
See Developing Java in the Neo Environment [page 1401].
2. Enable a service so that you can integrate it with an application. See Enable Services in the Neo
Environment [page 1741].
Related Information
Getting Started with a Trial Account in the Neo Environment [page 1054]
Getting Started with Business Applications Subscriptions in the Neo Environment [page 1061]
Overview
The platform provides multitenant functionality, which allows providers to own, deploy, and operate an application for multiple consumers at reduced cost. For example, the provider can upgrade the application
for all consumers instead of performing each update individually, or can share resources across many
consumers. Application consumers can configure certain application features and launch them using
consumer-specific URLs. Furthermore, they can protect the application by isolating their tenants.
Consumers do not deploy applications in their subaccounts, but simply subscribe to the provider application.
As a result, a subscription is created in the consumer subaccount. This subscription represents the contract or
relation between a subaccount (tenant) and a provider application.
Note
SAP Partners that wish to offer SAP Cloud Platform multitenant business applications in the Cloud Foundry
environment should contact SAP.
In the Neo environment, SAP Cloud Platform supports Java and HTML5 subscriptions. You configure HTML5 subscriptions for HTML5 provider applications through the cockpit only.
Multitenancy Roles
● Application Provider - an organizational unit that uses SAP Cloud Platform to build, run, and sell
applications to customers, that is, the application consumers.
For more information about providing applications, see Providing Multitenant Applications to Consumers in
the Neo Environment [page 1063].
● Application Consumer - an organizational unit, typically a customer or a department inside a customer's organization, which uses an SAP Cloud Platform application for a certain purpose. The application is in fact used by end users, who might be employees of the organization (for instance, in the case of an HR application) or arbitrary internal or external users (for instance, in the case of a collaborative supplier application).
For more information about consuming applications, see Subscribe to Java Multitenant Applications in the
Neo Environment [page 1066] or Subscribe to HTML5 Multitenant Applications in the Neo Environment
[page 1068].
To use SAP Cloud Platform, both the application provider and the application consumer must have a
subaccount. The subaccount is the central organizational unit in SAP Cloud Platform. It is the central entry
point to SAP Cloud Platform for both application providers and consumers. It may consist of a set of
applications, a set of subaccount members and a subaccount-specific configuration.
Subaccount members are users who are registered via the SAP ID service. Subaccount members may have
different privileges regarding the operations that are possible for a subaccount (for example, subaccount
administration, deploy, start, and stop applications). Note that the subaccount belongs to an organization and
not to an individual. Nevertheless, the interaction with the subaccount is performed by individuals, the
members of the subaccount. The subaccount-specific configuration allows application providers and
application consumers to adapt their subaccount to their specific environment and needs.
An application resides in exactly one subaccount, the hosting subaccount. It is uniquely identified by the
subaccount name and the application name. Applications consume SAP Cloud Platform resources, for
instance, compute units, structured and unstructured storage and outgoing bandwidth. Costs for consumed
resources are billed to the owner of the hosting subaccount, who can be an application provider, an application
consumer, or both.
Getting Started with a Trial Account in the Neo Environment [page 1054]
Getting Started with a Customer Account in the Neo Environment [page 1057]
Providing Multitenant Applications to Consumers in the Neo Environment [page 1063]
Providing Java Multitenant Applications to Tenants for Testing [page 1064]
Subscribe to Java Multitenant Applications in the Neo Environment [page 1066]
Subscribe to HTML5 Multitenant Applications in the Neo Environment [page 1068]
In the Neo environment of SAP Cloud Platform, you can develop and run multitenant (tenant-aware)
applications that you can make available to multiple consumers.
For detailed instructions on developing multitenant applications, see Developing Multitenant Applications in
the Neo Environment [page 1482].
Prerequisites
● An enterprise account. For more information, see Global Accounts [page 8].
● Develop and deploy an application in the Neo environment for multiple consumers. For more information,
see Developing Multitenant Applications in the Neo Environment [page 1482].
● Provider and consumer subaccounts that belong to the same region. For more information, see Regions
[page 11].
● Set up the console client. For more information, see Set Up the Console Client [page 1412].
To list all subaccounts subscribed to a Java application, use the list-subscribed-accounts command.
Example
Using the console client, you can create subaccounts and subscribe them to a provider application to test how
applications can be provided to multiple consumers.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1412].
● Develop and deploy an application that is used by multiple consumers. For more information, see
Developing Multitenant Applications in the Neo Environment [page 1482].
● You have an enterprise account. For more information, see Global Accounts [page 8].
● You are a member of both subaccounts: the one where the multitenant application is deployed and the one
that you subscribe to the application.
Context
Note
You can subscribe a subaccount to an application that is running in another subaccount only if both
subaccounts (provider and consumer subaccounts) belong to the same region.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create subaccounts for several consumers.
Access the application through the different tenants and verify that the multitenant application works as
configured for the respective subaccount (tenant).
Procedure
1. Access the application using the dedicated URL for each consumer subaccount in the format https://
<application name><provider subaccount>-<consumer subaccount>.<host>.
You see the list of subscriptions and the corresponding application URLs to access them in the
Subscriptions pane in the cockpit.
2. Change the configuration of the multitenant application for each consumer subaccount (tenant).
3. Verify that the configuration of the provider application differs for each consumer subaccount (tenant).
4. (Optional) You can also check the list of your test subaccounts and subscriptions as follows:
Procedure
Create, list, and remove subscriptions for a Java application using the console client, and view all your
subscriptions in the SAP Cloud Platform cockpit.
Prerequisites
● An enterprise account. For more information, see Global Accounts [page 8].
● Develop and deploy an application in the Neo environment for multiple consumers. For more information,
see Developing Multitenant Applications in the Neo Environment [page 1482].
● Provider and consumer subaccounts that belong to the same region. For more information, see Regions
[page 11].
● If applicable, purchase SaaS licenses for the applications you want to consume.
● Set up the console client. For more information, see Set Up the Console Client [page 1412].
● To list all subaccounts subscribed to a Java application, use the list-subscribed-accounts command.
Example
Procedure
For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
You see a list of subscriptions to Java applications, with the provider subaccount from which the
subscription was obtained and the subscribed application.
3. To navigate to the subscription overview, choose the application name. You have the following options:
○ To launch an application, choose the link in the Application URLs panel.
○ To create connectivity destinations, choose Destinations in the navigation area.
○ To create or assign roles, choose Roles in the navigation area.
Note
Related Information
Manage subscriptions to HTML5 applications by viewing, creating, or removing subscriptions in the SAP Cloud
Platform cockpit.
Procedure
For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Note
The subscription name must be unique across all subscription names and all HTML5 application
names in the current subaccount.
Procedure
1. In the navigation area, choose Applications > Subscriptions. The subscriptions to HTML5 applications
are listed with the following information:
○ The subaccount name of the application provider from which the subscription was obtained
○ The name of the subscribed application
2. To navigate to the subscription overview, click the application name:
○ To launch an application, click the URL link in the Active Version panel.
○ To create or assign roles, choose Roles in the navigation area.
Procedure
Related Information
Develop and run business applications on SAP Cloud Platform. Use our application programming model, APIs,
services, tools and capabilities.
We recommend using the application programming model for SAP Cloud Platform for full-stack application
development. This model simplifies your development process by enabling you to create concise and
comprehensive data and service models based on Core Data Services (CDS). This modular approach
means that you can reuse models and extend services to add features specific to your business logic. Thanks
to the generic service provider, boilerplate code is reduced and code is simplified. This allows you to focus on
your business logic and makes reviewing and maintaining your code easier.
Also, using our application programming model means you get languages, libraries and APIs that guide and
support you through your development project. You can choose to develop stand-alone business applications
or extend other cloud solutions, like SAP S/4 HANA or SAP SuccessFactors.
For more information, see Working with the SAP Cloud Application Programming Model [page 1332].
With SAP Cloud Platform, you are also free to choose your own approach. We support various programming
languages in the Cloud Foundry, Neo, and ABAP environments. You can also develop Multitarget Applications
(MTA) in these environments. We provide information about developing, configuring, and deploying your
applications depending on your preferred programming language and development approach.
For more information, choose your environment and preferred programming language:
Related Information
Learn more about developing applications on the SAP Cloud Platform Cloud Foundry environment.
Overview
SAP Cloud Platform Cloud Foundry environment is an open Platform-as-a-Service (PaaS) targeted at
microservice development and orchestration.
● Develop polyglot applications: Build on open standards with SAP Java, Python, and Node.js buildpacks, or
bring your own language with community buildpacks for PHP, Ruby, and Go.
● Manage the lifecycle of applications: Start, stop, scale, and configure distributed cloud applications using
standard Cloud Foundry tools, our cockpit, and dev-ops capabilities.
● Optimize development and operations: Use the rich set of SAP Cloud Platform Cloud Foundry services,
including messaging, persistence, and many other capabilities.
● Use the application programming model: Use languages, libraries, and APIs tailored for full-stack
development on SAP Cloud Platform.
If you already have monolithic applications running on SAP Cloud Platform and are looking for a way to run
them in the Cloud Foundry environment, read our best practice guide. See Migrating from the Neo to the
Cloud Foundry Environment on SAP Cloud Platform.
Follow any of these tutorials to develop your first application on SAP Cloud Platform. Choose your preferred
programming model and technology.
● Use the Application Programming Model to Create a Full-Stack Application (application programming
model: CDS and Java)
In the Cloud Foundry environment, SAP is promoting a pattern for building applications. We use the term
Business Application to distinguish from single applications in the context of the Cloud Foundry environment.
Application Patterns
The diagram below is a logical view that abstracts from numerous details, like the CF router and CF controller,
and represents the architecture of a business application. In general, a business application consists of multiple
microservices which are deployed as separate applications to the SAP Cloud Platform Cloud Foundry environment.
Microservices are created by "pushing" code and binaries to the platform, resulting in a number of application
instances, each running in a separate container.
Services are exposed to apps by injecting access credentials into the environment of the applications via
service binding. Applications are bound to service instances where a service instance represents the required
configuration and credentials to consume a service. Service instances are managed by a service broker, which
has to be provided for each service (or for a collection of services).
Routes are mapped to applications and provide the actual application access points / URLs.
A business application is a collection of microservices, service instances, bindings, and routes, which together
represent a usable web application from an end user's point of view. These microservices, service instances,
bindings, and routes are created by communicating with the CF / XSA Controller (for example, using a
command line interface).
SAP provides a set of libraries, services, and component communication principles, which are used to
implement (multi-tenant) business applications according to this pattern.
Application Router
Business applications embracing a microservice architecture consist of multiple services that can be managed
independently. Still, this approach brings some challenges for application developers, like handling security in a
consistent way and dealing with the same-origin policy. The application router addresses these challenges:
● Reverse proxy - provides a single entry point to a business application and forwards user requests to
respective backend services
● Serves static content from the file system
● Security – provides security-related functionality like logon, logout, user authentication, authorization, and
CSRF protection in a central place
The application router exposes the endpoint that a browser uses to access the application.
UAA Service
The User Account and Authentication (UAA) service is a multi-tenant identity management service, used in the
SAP Cloud Platform Cloud Foundry environment. Its primary role is as an OAuth2 provider, issuing tokens for
client applications to use when they act on behalf of the users of the Cloud Foundry environment. It can also
authenticate users with their credentials for the Cloud Foundry environment, and can act as an SSO service
using those credentials (or others). It has endpoints for managing user accounts and for registering OAuth2
clients, as well as various other management functions.
The platform provides a number of backing services like SAP HANA, MongoDB, PostgreSQL, Redis, RabbitMQ,
Audit Log, Application Log, and so on. Also, the platform provides various business services, like retrieving
currency exchange rates. In addition, applications can use user-provided services, which are not managed by
the platform.
In all these cases, applications can bind and consume the required services in a similar way. See the Services
Overview documentation for general information about services and their consumption.
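Bound services surface as credentials injected into the application environment; in the Cloud Foundry environment they arrive as JSON in the VCAP_SERVICES variable. A minimal sketch of reading such a binding, with a simulated environment (the service label, instance name, and credential fields are illustrative):

```python
import json
import os

def service_credentials(label, instance_name):
    """Looks up the credentials block of a bound service instance in VCAP_SERVICES."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instance in vcap.get(label, []):
        if instance.get("name") == instance_name:
            return instance["credentials"]
    raise LookupError("no binding found for {0}/{1}".format(label, instance_name))

# Simulated environment, shaped as the platform injects it at bind time
# (label, name, and credential fields are made up for this sketch):
os.environ["VCAP_SERVICES"] = json.dumps({
    "postgresql": [
        {"name": "my-db",
         "credentials": {"uri": "postgres://user:secret@dbhost:5432/mydb"}}
    ]
})
print(service_credentials("postgresql", "my-db")["uri"])
```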
Application Deployment
There are two options for how a deployer (human or software agent) can deploy and update a business
application:
● Native deployment: Based on the native controller API of the Cloud Foundry environment, the deployer
deploys individual applications and creates service instances, bindings, and routes. The deployer is
responsible for performing all these tasks in an orchestrated way to manage the lifecycle of the entire
business application.
● Multitarget Applications in the Cloud Foundry Environment [page 1232] (MTA): Based on a declarative
model the deployer creates an MTA archive and hands over the model description together with all
application artifacts to the SAP Deploy Service. This service performs and orchestrates all the individual
(native) steps to deploy and update the entire business application (including blue-green deployments).
When an application for the Cloud Foundry environment resides in a folder on your local machine, you can
deploy it and start it by executing the command line interface (CLI) command push. To deploy business
applications bundled in a multitarget application archive, you have to use the command deploy-mta.
● For more information about developing and deploying applications in the Cloud Foundry environment, see
http://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html .
● For more information about developing Multitarget Applications, see Multitarget Applications in the Cloud
Foundry Environment [page 1232]. See Multitarget Application Commands for the Cloud Foundry
Environment [page 1772] for information about MTA deployment.
Related Information
Download and Install the Cloud Foundry Command Line Interface [page 1769]
Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 1770]
Multitarget Application Commands for the Cloud Foundry Environment [page 1772]
Prerequisites
● You are aware of how cf push works when you deploy a regular Cloud Foundry application.
● You need a higher degree of freedom and know that this freedom comes with higher responsibility for
application operation.
● Your Docker image resides in a highly available container registry.
Context
A Docker image deployed to the Cloud Foundry environment has to adhere to the same boundary conditions as
regular Cloud Foundry applications. Cloud Foundry applications deployed as Docker images are running in
Linux containers with user namespaces enabled, which prevents certain functionality like mounting FuseFS
devices. Another difference is the usage of buildpacks. Docker images can't be used in combination with
buildpacks. If you want to use certain functionality provided by the buildpack, you have to build it into your
Docker image.
Deploying a Docker image is a good fit in the following cases:
1. Managing a whole cluster on Kubernetes is too much overhead for your scenario.
2. The usual approach doesn't let you realize what you want to achieve.
Here are some features of the Cloud Foundry environment that you have to implement yourself in your Docker
image. The comparison is made with the SAP Java buildpack.
● OS & JVM management: Consume newer versions (especially patches) of the underlying OS and JVM
regularly.
Procedure
1. Design your application according to the twelve-factor principles: https://12factor.net/
2. Use only supported features of the SAP Cloud Platform Cloud Foundry environment. Pay special attention
to the requirements mentioned in Deploy an App with Docker.
3. Route traffic to the lowest numbered port. See How Diego Runs and Monitors Processes.
4. Use cf push with the --docker-image parameter. Also pay attention to the section on private registries and
Google Container Registry in Push a Docker Image From Docker Hub.
5. Operate and patch your Docker images.
Docker images contain everything that is necessary to run a process, including the operating system and
libraries, in one binary blob. Consequently, staging a Docker image in Cloud Foundry does not provide the
same separation of droplet and stack that is available for a Cloud Foundry application staged using a
buildpack.
For scale or restart operations, the image is pulled again from the container registry. The container registry
has to be highly available and reachable over the network for proper operation.
Results
Find here selected information for SAP HANA database development and references to more detailed sources.
This section gives you information about database development and its surrounding tasks, especially if you
want to explore the SAP Cloud Platform Cloud Foundry environment. For more detail, we provide references
to other guides and to the SAP HANA extended application services, advanced model documentation.
The context we are looking at is multi-target application (MTA) development, whereby SAP HANA is the
database module and you develop all artifacts in that module.
See a typical flow to get started quickly with SAP HANA development. Select a tile to find further information
about this step and references to other sources that give detailed instructions on task level. The links guide you
to SAP HANA documentation and a comprehensive SAP Web IDE Full-Stack guide. Feel free to explore these
guides if you feel comfortable in using them and need more in-depth knowledge.
Development Environment
1. Register for an SAP Cloud Platform trial account at https://account.hanatrial.ondemand.com/ and log on
afterwards.
2. Open SAP Web IDE Full-Stack
3. Setting Up Application Projects - Create a Project from Scratch & Select a Cloud Foundry Space
4. Create a Database Module
Database Artifacts
The SAP Web IDE Full-Stack provides dedicated editors for specific artifacts, but you can also create all
relevant artifacts and edit them in a text editor. For more information, see Develop Database Artifacts.
To create database artifacts open the context menu on the <your_db_module>/src folder, select New, and
choose the artifact you want to create.
More Information:
● For an overview about defining the data model, see Defining the Data Model in XS Advanced
● To learn how to create the data persistence artifacts, see Creating the Data Persistence Artifacts in XS
Advanced
● To learn how to create procedures and functions, see Creating Procedures and Functions in XS Advanced
● To learn how to enable cross-container access to external objects, see Using Synonyms to Access External
Schemas and Objects in XS Advanced
To build your database module open the context menu on your database module folder and select Build.
Open the integrated SAP HANA Database Explorer with Tools > Database Explorer and add your database.
To learn more, see Getting Started With the SAP HANA Database Explorer
The advanced model of SAP HANA extended application services enhances the Cloud Foundry environment
with a number of tweaks and extensions provided by SAP. These SAP enhancements include, among other
things, an integration with the SAP HANA database, OData support, compatibility with the XS classic model, and
some additional features designed to improve application security. XS advanced allows you to develop and
deploy SAP HANA-based web applications on the cloud platform, supporting multiple runtimes, programming
languages, libraries, and services.
Some of the central concepts for the advanced model of SAP HANA extended application services include the
following:
Reference Information
The SAP HANA Developer Information Map gives you access to detailed SAP HANA documentation from
different angles, by task, by guide or by scenario.
SAP Cloud Platform enables you to easily develop and run HTML5 applications in a cloud environment. In
contrast to classic Cloud Foundry applications that run on the server side, HTML5 applications are a set of
runtime objects that are stored in permanent storage and served to the browser, where they run.
● HTML5 Application Repository enables central storage of HTML5 applications' static content.
● The application router serves static content from HTML5 Application Repository, in addition to providing
authentication, authorization and a reverse proxy to backend application support.
Related Information
HTML5 Application Repository enables central storage of HTML5 applications' static content on the SAP Cloud
Platform Cloud Foundry environment.
HTML5 applications consist of static content such as HTML, CSS, JavaScript, and other files, that run on a
browser. For more information, see Basic Template and openui5-basic-template-app .
Features
● The HTML5 applications are decoupled from the consuming application router. This enables updating the
static content of the HTML5 applications without restarting the application router in the SAP Cloud
Platform Cloud Foundry environment.
● During runtime, the HTML5 application content is cached and optimized to provide high performance with
minimal network load.
● The service provides several instances for a runtime to serve a high load of application requests.
Restrictions
● The size of an application deployed to the repository is limited to 100 MB per service instance of the app-
host service plan.
● Since the applications stored in HTML5 Application Repository can be shared, it is advised not to add
personal data to them.
See Also
If you want to learn more about services in the SAP Cloud Platform Cloud Foundry environment, see Using
Services in the Cloud Foundry Environment [page 1318].
Learn how to deploy your content using the preferred Generic Application Content Deployer, or deploy,
redeploy, and undeploy content from the HTML5 Application Repository.
Related Information
Deploy content from the HTML5 Application Repository using the Generic Application Content Deployer
(GACD).
Prerequisites
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page
1769].
● The multitarget application (MTA) plug-in for the Cloud Foundry command-line interface to deploy MTAs is
installed locally. For more information, see Install the MultiApps Plug-in in the Cloud Foundry Environment
[page 1771].
● Node.js is installed locally.
Configure the npm registry to use the @sap scope with the following command:
npm config set @sap:registry https://npm.sap.com
Context
You can deploy your content to the HTML5 Application Repository using the GACD (Generic Application
Content Deployer) module. The GACD module has the module type com.sap.application.content. This
module type enables generic application content deployment support in the deploy plug-in. This means that
when a module is processed in the cf deploy flow, the deploy service locates the service resource that is
required as a target for the deployment and deploys the corresponding content.zip file.
1. Create a nested content.zip file that contains a zip file for each HTML5 application.
Sample Code
content.zip
app1.zip
manifest.json
xs-app.json
…..
app2.zip
…..
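The nested content.zip from step 1 can be produced with any zip tool. A minimal sketch in Python, assuming illustrative application and file names:

```python
import io
import zipfile

def build_content_zip(apps):
    """Build a content.zip that contains one inner zip archive per HTML5 application.

    apps maps an application name to a dict of file name -> file content,
    e.g. {"app1": {"manifest.json": "...", "xs-app.json": "..."}}.
    """
    outer = io.BytesIO()
    with zipfile.ZipFile(outer, "w", zipfile.ZIP_DEFLATED) as content:
        for app_name, files in apps.items():
            inner = io.BytesIO()
            with zipfile.ZipFile(inner, "w", zipfile.ZIP_DEFLATED) as app_zip:
                for file_name, file_data in files.items():
                    app_zip.writestr(file_name, file_data)
            # The inner archive is stored inside content.zip as <app name>.zip.
            content.writestr(app_name + ".zip", inner.getvalue())
    return outer.getvalue()

# Illustrative content only; a real app would carry its full static resources.
data = build_content_zip({"app1": {"manifest.json": "{}", "xs-app.json": "{}"}})
print(zipfile.ZipFile(io.BytesIO(data)).namelist())  # ['app1.zip']
```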
2. Package the content.zip file into your MTA archive (*.mtar):
Sample Code
myMtar.mtar
…
mydeployer
content.zip
META-INF
mtad.yaml
MANIFEST.MF
3. In the MANIFEST.MF file, define the following section for the deployer:
Sample Code
Manifest-Version: 1.0
Created-By: SAP WebIDE
Name: mydeployer/content.zip
MTA-Module: ui_deployer
Content-Type: application/zip
Sample Code
ID: testdeployer
_schema-version: '3.1'
modules:
  - name: ui_deployer
    type: com.sap.application.content
    requires:
      - name: uideployer_html5_repo_host
        parameters:
          content-target: true
resources:
  - name: uideployer_html5_repo_host
    parameters:
      service-plan: app-host
      service: html5-apps-repo
      config:
        sizeLimit: 5
    type: org.cloudfoundry.managed-service
version: 0.0.1
Use the HTML5 application deployer module to deploy the content of the HTML5 applications to the HTML5
Application Repository.
Prerequisites
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page
1769].
● The multi-target application (MTA) plug-in for the Cloud Foundry command line interface to deploy MTAs is
installed locally. For more information, see Install the MultiApps Plug-in in the Cloud Foundry Environment
[page 1771].
● Node.js is installed locally.
Configure the npm registry to use the @sap scope with the following command:
npm config set @sap:registry https://npm.sap.com
Context
You can deploy your content to the HTML5 Application Repository using the HTML5 application deployer npm
module.
Procedure
1. Add the html5-app-deployer module as a dependency to your package.json file. To do so, navigate to
your package.json file and execute npm install to download the html5-app-deployer module
from the SAP npm registry.
The basic package.json file should look similar to the following example:
Sample Code
{
  "name": "myAppDeployer",
  "engines": {
    "node": ">=6.0.0"
  },
  "dependencies": {
    "@sap/html5-app-deployer": "2.0.1"
  },
  "scripts": {
    "start": "node node_modules/@sap/html5-app-deployer/index.js"
  }
}
The start script is mandatory as it is executed after the deployment of the application.
2. In the html5-app-deployer structure, create a resources folder and add the static content that you
want to deploy. In the resources folder, add one folder for each application you want to deploy. For each
application you want to deploy, provide a manifest.json and xs-app.json file at root level.
If you want to deploy more than one application to the same app host instance, you can add multiple zip
archives to the resources folder.
Sample Code
myAppsDeployer
+ node_modules
- resources
- app1
index.html
manifest.json
xs-app.json
- app2
...
package.json
manifest.yaml
Sample Code
manifest.json
{
"_version": "1.7.0",
"sap.app": {
"id": "app1",
"type": "application",
"i18n": "i18n/i18n.properties",
"applicationVersion": {
"version": "1.0.0"
}
}
}
Sample Code
xs-app.json
"welcomeFile": "index.html",
"authenticationMethod": "route",
"routes": [
{
"source": "^/be$",
"destination": "simpleui_be",
"authenticationType": "xsuaa"
},
{
"source": "^/ui(/.*)",
Sample Code
ID: html5.repo.deployer.myHTML5App
_schema-version: '2.0'
version: 0.0.3
modules:
  - name: myHTML5App_app-deployer
    type: com.sap.html5.application-content
    path: deployer/
    requires:
      - name: myHTML5App_app-host
resources:
  - name: myHTML5App_app-host            # Resource name
    type: org.cloudfoundry.managed-service
    parameters:
      service: html5-apps-repo           # Service name
      service-plan: app-host             # Service plan
      service-name: myHTML5App_app-host  # Service instance name
The mtad.yaml file acts as the deployment descriptor. For a general description of MTA descriptors, see
Multitarget Applications in the Cloud Foundry Environment [page 1232].
4. Create the pom.xml file for your project according to the multi-target application build (MBT)
requirements.
5. In a command prompt, navigate to your project root and enter the command mvn clean install.
The *.mtar file is generated.
6. Deploy the *.mtar file using the CLI command cf deploy.
cf deploy myHTML5App-deployer-assembly-0.0.1.mtar
The Cloud Foundry deploy plug-in uses the mtad.yaml file to configure the following:
○ Create an HTML5 Application Repository service instance of the app-host service plan.
○ Create an HTML5 application deployer application, which uses the HTML5 application deployer npm
module.
○ Bind the app-host service instance to the HTML5 application deployer application.
○ Start the HTML5 application deployer application:
○ Create a zip archive for each application in the resources folder.
○ Create a client credential token from the app-host service instance credentials.
○ Deploy the content to the HTML5 application repository: passing on the zip archives and the client
credential token.
You can redeploy changed content to the existing app-host service instance.
After making changes to the static content files of HTML5 applications, you can redeploy the new content to
the already existing app-host service instance of the HTML5 application repository. All content referenced by
the app-host service instance ID is replaced by the new content. If you want to keep more than one version of
your HTML5 application on the repository, you deploy these versions to the same app-host service instance ID.
● Application app1 with version 1.0.0 has been deployed to the repository. This app matches a back-end
application with version 3.0.0
● For the app, a new back-end version 4.0.0 is available. Therefore, the application with version 2.0.0 is
developed that matches the new back-end version.
● As customers may still have a backend with version 3.0.0, app1 with version 1.0.0 cannot be dropped. On
the repository, both versions shall be available so the customer can decide which one to use depending on
their back-end version.
Sample Code
myAppsDeployer
+ node_modules
- resources
- app1
index.html
manifest.json
xs-app.json
- app2
...
package.json
manifest.yaml
To undeploy content, you need to delete the content from the repository and delete the app-host service plan
instance.
Procedure
Note
If you do not want to use the --delete-services option, you can delete the app-host service plan
instance manually using the CLI command cf delete-service SERVICE-NAME.
The HTML5 Application Repository service comprises the following service plans:
● app-host
Use this service plan to deploy HTML5 applications to the repository. For more information, see Deploying
Content [page 1082].
● app-runtime
Use this service plan to consume HTML5 applications stored in the repository. For more information, see
Consuming Content [page 1089].
Use the app-runtime service plan to consume HTML5 applications from the HTML5 Application Repository.
Context
The application router can consume content from the HTML5 Application Repository by binding an HTML5
Application Repository service instance with app-runtime plan to the application router. In addition, the
routing to the HTML5 Application Repository service is configured in the xs-app.json file.
Procedure
Sample Code
applications:
- name: myCRMApp
memory: 256M
services:
- html5AppsRepoRT
Sample Code
"source": "^(/.*)",
"target": "$1",
"service": "html5AppsRepo",
"authenticationType": "xsuaa"
5. HTML5 application content can be consumed from the application router using a URL in the following
format: https://<host>.<domain>/<appName>-<appVersion>/<resourcePath>
The HTML5 Application Repository service is bundled into the Application Runtime service. The total quota
allocated to the HTML5 Application Repository service is derived from the Application Runtime service. The
HTML5 Application Repository meters and reports the actual storage size of each application that is deployed
to the repository.
In the Usage Analytics view of the SAP Cloud Platform cockpit, the admin can see the largest total storage size,
in megabytes, of all the applications in the repository during the selected period of time.
In the global account, the total quota allocated to the HTML5 Application Repository service is derived from the
Application Runtime service, and the administrator can distribute this quota between the subaccounts. In
addition, a storage size limit can be configured during the creation of an html5-apps-repo service instance of
the app-host plan. This limits the maximum size, in megabytes, of application content that can be deployed to
this instance. If a size limit isn’t provided during instance creation, a default of 2 MB is allocated automatically.
Remember
Each app-host service plan has a size limit that, upon creation, decreases your available app-host quota.
The default size limit of an app-host plan is 2 MB. This value can be changed in Subaccount Assignments in
SAP Cloud Platform cockpit. When defining entitlements and the app-host plan quota, you must consider
what the actual size and the expected growth is for the HTML5 Applications uploaded with this app-host.
Use the following parameter to configure the storage size limit, in MB, for a service instance (an integer
between 1 and 100):
Sample Code
{
"sizeLimit": 5
}
Note
The maximum storage size that can be allocated to a service instance is 100 MB.
● During creation, update, or deletion of service instances, the total storage size quota in the subaccount is
updated accordingly.
● If the storage size limit of a service instance exceeds the subaccount quota, the service instance creation
will fail.
Example
In subaccount A, the html5-apps-repo service instance X is created with a storage size limit of 40 MB. If service
instance Y then requests a storage size limit of 25 MB, it isn’t created, because it exceeds the quota of
subaccount A.
If you have three applications: Application 1 (20 MB), Application 2 (15 MB), and Application 3 (10 MB), you
can’t deploy the three applications together using service instance X because the total storage exceeds the
40-MB storage size limit.
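The sizing arithmetic in the example above can be sketched as follows. The helper fitsSizeLimit is illustrative only, not an SAP API; the service performs this kind of check on deployment:

```javascript
// Sketch of the app-host sizing check described above (illustrative only):
// the combined size of deployed apps must stay within the instance sizeLimit.
function fitsSizeLimit(appSizesMb, sizeLimitMb) {
  const total = appSizesMb.reduce((sum, s) => sum + s, 0);
  return total <= sizeLimitMb;
}

// Instance X has a 40 MB limit; apps of 20, 15, and 10 MB exceed it.
console.log(fitsSizeLimit([20, 15, 10], 40)); // false
console.log(fitsSizeLimit([20, 15], 40));     // true
```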
Related Information
The application router is the single point-of-entry for an application running in the Cloud Foundry environment
on SAP Cloud Platform. The application router is used to serve static content, authenticate users, rewrite
URLs, and forward or proxy requests to other micro services while propagating user information.
Related Information
Prerequisites
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page
1769].
● Node.js is installed and configured locally, see npm documentation .
● The SAP npm registry, which contains the Node.js package for the application router, is configured:
npm config set @sap:registry https://npm.sap.com
Context
The application router is configured in the xs-app.json file. The package.json file contains the start
command for the application router and a list of package dependencies.
The subfolder for the Web-resources module must be located in the application root folder, for example, /
path/<myAppName>/web.
Tip
The Web-resources module uses @sap/approuter as a dependency; the Web-resources module also
contains the configuration and static resources for the application.
Static resources can, for example, include the following files and components: index.html, style sheets
(.css files), images, and icons. Typically, static resources for a Web application are placed in a subfolder of
the Web module, for example, /path/<myAppName>/web/resources.
4. Create the application-router configuration file.
The application router configuration file xs-app.json must be located in the application's Web-resources
folder, for example, /path/<MyAppName>/web.
Sample Code
<myAppName>
|- web/ # Application descriptors
| |- xs-app.json # Application routes configuration
| |- package.json # Application router details/dependencies
| \- resources/
The contents of the xs-app.json file must use the required JSON syntax. For more information, see
Routing Configuration File [page 1101].
a. Create the required destinations configuration.
Sample Code
/path/<myAppName>/web/xs-app.json.
{
"welcomeFile": "index.html",
"routes": [
{
"source": "/sap/ui5/1(.*)",
"target": "$1",
"localDir": "sapui5"
},
{
"source": "/rest/addressbook/testdataDestructor",
"destination": "backend",
"scope": "node-hello-world.Delete"
},
{
"source": "/rest/.*",
b. Add the routes (destinations) for the specific application (for example, node-hello-world) to the
env section of the application’s deployment manifest (manifest.yml).
Every route configuration that forwards requests to a microservice has a destination property. The
destination is a name that refers to the same name in the destinations configuration. The destinations
configuration is specified in an environment variable passed to the approuter application.
Sample Code
<myAppName>/manifest.yml
- name: node-hello-world
host: myHost-node-hello-world
domain: xsapps.acme.ondemand.com
memory: 100M
path: web
env:
destinations: >
[
{
"name":"backend",
"url":"http://myHost-node-hello-world-
backend.xsapps.acme.ondemand.com",
"forwardAuthToken": true
}
]
6. Add a package descriptor (package.json) for the application router to the root folder of your
application's Web resources module (web/) and execute npm install to download the approuter npm
module from the SAP npm registry.
The package descriptor describes the prerequisites and dependencies that apply to the application router;
it also defines the start command for the application router.
Sample Code
<myAppName>
|- web/ # Application descriptors
| |- xs-app.json # Application routes configuration
| |- package.json # Application router details/dependencies
| \- resources/
The basic package.json file for your Web-resources module (web/) should look similar to the following
example:
Sample Code
{
"name": "node-hello-world-approuter",
"dependencies": { "@sap/approuter": "5.1.0" },
"scripts": { "start": "node node_modules/@sap/approuter/approuter.js" }
}
Tip
The start script (for example, approuter.js) is mandatory; the start script is executed after
application deployment.
The application router is used to serve static content, authenticate users, rewrite URLs, and forward or proxy
requests to other micro services while propagating user information. The following table lists the resource files
used to define routes for multi-target applications:
● package.json (mandatory): The package descriptor is used by the Node.js package manager (npm) to
start the application router; the application router is declared in the “dependencies”: {} section.
Tip
If you have a static resources folder name in the xs-app.json file, we recommend that you use localDir
as default.
● default-services.json: Defines the configuration for one or more special User Account and
Authentication (UAA) services for local development.
xs-app.json
{
"source":"^/web-pages/(.*)$",
"localDir":"my-static-resources"
}
A file that contains the configuration information used by the application router.
The application router configuration file is named xs-app.json and its content is formatted according to
JavaScript Object Notation (JSON) rules.
When a business application consists of several different apps (microservices), the application router is used to
provide a single entry point to that business application. The application router is responsible for the following
tasks:
Tip
The different applications (microservices) are the destinations to which the incoming requests are
forwarded. The rules that determine which request should be forwarded to which destination are called
routes. For every destination there can be more than one route.
User authentication is performed by the User Account and Authentication (UAA) server. In the run-time
environment (on-premise and in the Cloud Foundry environment), a service is created for the UAA
configuration; by using the standard service-binding mechanism, the content of this configuration is available
in the <VCAP_SERVICES> environment variable, which the application router can access to read the
configuration details.
Note
The UAA service should have xsuaa in its tags or the environment variable <UAA_SERVICE_NAME> should
be defined, for example, to specify the exact name of the UAA service to use.
Note
The application router does not “hide” the back-end microservices in any way; they remain directly
accessible when the application router is bypassed. The back-end microservices must therefore protect all
their endpoints by validating the JWT and implementing proper authorization scope checks.
The application router supports the use of the $XSAPPNAME placeholder, which you can use in your route
configuration, for example, in the scope property for the specified route. The value of $XSAPPNAME is taken
from the UAA configuration (for example, the xsappname property). For more information, see Routing
Configuration File [page 1101].
Headers
If back end nodes respond to client requests with URLs, these URLs need to be accessible for the client. For this
reason, the application router passes the following x-forwarding-* headers to the client:
● x-forwarded-host
Contains the host header that was sent by the client to the application router
● x-forwarded-proto
Contains the protocol that was used by the client to connect to the application router
● x-forwarded-for
Contains the address of the client which connects to the application router
● x-forwarded-path
Contains the original path which was requested by the client from the approuter
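The four headers listed above can be sketched as a plain mapping from the incoming client request. The helper name forwardingHeaders and the request fields (protocol, clientAddress, originalPath) are illustrative assumptions, not the application router's internal API:

```javascript
// Illustrative sketch: deriving x-forwarded-* headers from an incoming
// client request, following the list above (not SAP code).
function forwardingHeaders(req) {
  return {
    "x-forwarded-host": req.headers.host,  // original Host header from the client
    "x-forwarded-proto": req.protocol,     // e.g. "https"
    "x-forwarded-for": req.clientAddress,  // client IP address
    "x-forwarded-path": req.originalPath   // path originally requested from the approuter
  };
}

const h = forwardingHeaders({
  headers: { host: "myapp.example.com" },
  protocol: "https",
  clientAddress: "203.0.113.7",
  originalPath: "/app1/index.html"
});
console.log(h["x-forwarded-host"]); // "myapp.example.com"
```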
Caution
If the application router forwards a request to a destination, it blocks the header host.
“Hop-by-hop” headers are meaningful only for a single transport-level connection; these headers are not
forwarded by the application router. The following headers are classified as “Hop-By-Hop” headers:
● Connection
● Keep-Alive
● Public
● Proxy-Authenticate
● Transfer-Encoding
● Upgrade
You can configure the application router to send additional HTTP headers, for example, either by setting it in
the httpHeaders environment variable or in a local-http.headers.json file.
local-http.headers.json
[
{
"X-Frame-Options": "ALLOW-FROM http://localhost"
},
{
"Test-Additional-Header": "1"
}
]
Sessions
The application router establishes a session with the client (browser) using a session cookie. The application
router intercepts all session cookies sent by back-end services and stores them in its own session. To prevent
collisions between the various session cookies, back-end session cookies are not sent to the client. On request,
the application router sends the cookies back to the respective back-end services so the services can establish
their own sessions.
Note
Non-session cookies from back-end services are forwarded to the client, which might cause collisions
between cookies. Applications should be able to handle cookie collisions.
Session Contents
A session established by the application router typically contains the following elements:
● Redirect location
The location to redirect to after logon; if the request is redirected to a UAA logon form, the original request
URL is stored in the session so that, after successful authentication, the user is redirected back to it.
● CSRF token
The CSRF token value if it was requested by the clients. For more information about protection against
Cross Site Request Forgery see CSRF Protection [page 1100] below.
● OAuth token
The JSON Web Token (JWT) fetched from the User Account and Authentication service (UAA) and
forwarded to back-end services in the Authorization header. The client never receives this token. The
application router refreshes the JWT automatically before it expires (if the session is still valid). By default,
this routine is triggered 5 minutes before the expiration of the JWT, but it can also be configured with the
<JWT_REFRESH> environment variable (the value is set in minutes). If <JWT_REFRESH> is set to 0, the
refresh action is disabled.
● OAuth scopes
The scopes owned by the current user, which is used to check if the user has the authorizations required
for each request.
● Back-end session cookies
All session cookies sent by back-end services.
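The JWT refresh timing described in the OAuth token item above can be sketched as follows. The helper refreshAtMs is a hypothetical illustration of the rule, not application router code:

```javascript
// Sketch of the refresh rule above: the JWT is refreshed JWT_REFRESH
// minutes (default 5) before it expires; a value of 0 disables refresh.
function refreshAtMs(expiresAtMs, jwtRefreshMinutes = 5) {
  if (jwtRefreshMinutes === 0) return null; // refresh disabled
  return expiresAtMs - jwtRefreshMinutes * 60 * 1000;
}

const expiry = Date.parse("2020-01-30T12:00:00Z");
// With the default of 5 minutes, refresh is scheduled at 11:55:00Z.
console.log(new Date(refreshAtMs(expiry)).toISOString());
// → 2020-01-30T11:55:00.000Z
```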
The application router keeps all established sessions in local memory, and if multiple instances of the
application router are running, there is no synchronization between the sessions. To scale the application
router for multiple instances, session stickiness is used so that each HTTP session is handled by the same
application router instance.
The application-router process should run with at least 256 MB of memory. The amount of memory actually
required depends on the application the router is serving. The following aspects influence the
application's memory usage:
● Concurrent connections
● Active sessions
● Size of the JSON Web Token (JWT)
● Back-end session cookies
You can use the start-up parameter max-old-space-size to restrict the amount of memory used by the
JavaScript heap. The default value for max-old-space-size is less than 2GB. To enable the application to
use all available resources, the value of max-old-space-size should be set to a number equal to the memory
limit for the whole application. For example, if the application memory is limited to 2GB, set the heap limit as
follows, in the application's package.json file:
Sample Code
"scripts": {
"start": "node --max-old-space-size=2048 node_modules/@sap/approuter/
approuter.js"
}
If the application router is running in an environment with limited memory, set the heap limit to about 75% of
available memory. For example, if the application router memory is limited to 256MB, add the following
command to your package.json:
Sample Code
"scripts": {
"start": "node --max-old-space-size=192 node_modules/@sap/approuter/
approuter.js"
}
Note
For detailed information about memory consumption in different scenarios, see the Sizing Guide for the
Application Router located in approuter/approuter.js/doc/sizingGuide.md.
The application router enables CSRF protection for any HTTP method that is not GET or HEAD, provided that
the route is not public. A path is considered public if it does not require authentication. This is the case for
routes with authenticationType: none or if authentication is disabled completely via the top-level
property authenticationMethod: none.
To obtain a CSRF token, send a GET or HEAD request with an x-csrf-token: fetch header to the
application router. The application router returns the created token in an x-csrf-token: <token> header,
where <token> is the value of the CSRF token.
If a CSRF-protected route is requested with any of the state-changing methods mentioned above, the
x-csrf-token: <token> header must be present in the request, carrying the previously obtained token.
This request must use the same session as the token fetch request. If the x-csrf-token header is not
present or the token is invalid, the application router returns status code “403 - Forbidden”.
Cloud Connectivity
The application router supports integration with SAP Cloud Platform connectivity service. The connectivity
service enables you to manage proxy access to SAP Cloud Platform Cloud Connector, which you can use to
create tunnels for connections to systems located in private networks, for example, on-premise. To use the
connectivity feature, you must create an instance of the connectivity service and bind it to the Approuter
application.
In addition, the relevant destination configurations in the <destinations> environment variable must have
the proxy type "onPremise", for example, "proxyType": "onPremise". You must also ensure that you
obtain a valid XSUAA logon token for the XS advanced User Account and Authentication service.
Troubleshooting
The application router uses the @sap/logging package, which means that all of the typical logging features
are available to control application logging. For example, to set all logging and tracing to the most detailed level,
set the <XS_APP_LOG_LEVEL> environment variable to “debug”.
Note
Enabling debug log level could lead to a very large amount of data being written to the application logs and
trace files. The asterisk wild card (*) enables options that trace sensitive data that is then written to the
logs and traces.
Tip
Logging levels are application-specific and case-sensitive; they can be defined with lower-case characters
(for example, “debug”) or upper-case characters (for example, “DEBUG”). An error occurs if you set a
logging level incorrectly, for example, using lower-case characters “debug” where the application defines
the logging level as “DEBUG”.
The @sap/logging package sets the header x-request-id in the application router's responses. This is
useful if you want to search the application router's logs and traces for entries that belong to a particular
request execution. Note that the application router does not change the headers received from the back end
and forwarded to the client. If the back end is a Node.js application which uses the @sap/logging package
(and also sets the x-request-id header), then the value of the header that the client receives is the one
coming from the back end and not the one from the application router itself.
Related Information
The routing configuration defined in the xs-app.json file contains the properties used by the application
router.
The following example of an xs-app.json application descriptor shows the JSON-compliant syntax required
and the properties that either must be set or can be specified as an additional option.
Code Syntax
{
"welcomeFile": "index.html",
"authenticationMethod": "route",
"sessionTimeout": 10,
"pluginMetadataEndpoint": "/metadata",
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"csrfProtection": false
},
{
"source": "/employeeData/(.*)",
"target": "/services/employeeService/$1",
"destination": "employeeServices",
"authenticationType": "xsuaa",
"scope": ["$XSAPPNAME.viewer", "$XSAPPNAME.writer"],
"csrfProtection": true
},
{
"source": "^/(.*)$",
"target": "/web/$1",
"localDir": "static-content",
"replace": {
"pathSuffixes": ["/abc/index.html"],
The following table lists the properties that either must be set or can be specified as an additional option. Click
the links for information about each property:
● welcomeFile [page 1103] (String): The Web page served by default if the HTTP request does not include a
specific path, for example, index.html.
● sessionTimeout [page 1104] (Number): Defines the amount of time (in minutes) for which a session can
remain inactive before it closes automatically (times out); the default time out is 15 minutes.
● routes [page 1105] (Array): Defines all route objects, for example: source, target, and destination.
● logout [page 1117] (Object): Defines any options that apply if you want your application to have a central
log out end point.
● destinations [page 1118] (Object): Specifies any additional options for your destinations.
● services [page 1119] (Object): Specifies options for a service in your application.
● whitelistService [page 1121] (Object): Enables the white-list service to help protect against
click-jacking attacks.
● websockets [page 1123] (Object): The application router can forward web-socket communication.
Web-socket communication must be enabled in the application router configuration.
Related Information
3.1.6.2.4.1 welcomeFile
The Web page served by default if the HTTP request does not include a specific path, for example,
index.html.
"welcomeFile": "index.html"
3.1.6.2.4.2 authenticationMethod
The method used to authenticate user requests, for example: “route” or “none” (no authentication).
Code Syntax
"authenticationMethod" : "route"
Caution
If authenticationMethod is set to “none”, logon with User Account and Authentication (UAA) is
disabled.
3.1.6.2.4.3 sessionTimeout
Define the amount of time (in minutes) for which a session can remain inactive before it closes automatically
(times out); the default time out is 15 minutes.
Note
The sessionTimeout property is no longer available; to set the session time out value, use the
environment variable <SESSION_TIMEOUT>.
Sample Code
{
"sessionTimeout": 40
}
With the configuration in the example above, a session timeout is triggered after 40 minutes of inactivity and
involves a central log out.
Defines all route objects, for example: source, target, and destination.
● source (RegEx, mandatory): Describes a regular expression that matches the incoming request URL.
Note
Be aware that the regular expression is applied to the full URL, including query parameters.
● httpMethods [page 1108] (Array of uppercase HTTP methods, optional): HTTP methods that are served
by this route; the supported methods are: DELETE, GET, HEAD, OPTIONS, POST, PUT, TRACE, and PATCH.
Tip
If this option isn’t specified, the route serves any HTTP method.
● target (String, optional): Defines how the incoming request path is rewritten for the corresponding
destination or static resource.
● destination (String, optional): The name of the destination to which the incoming request is forwarded.
The destination name can be a static string or a regular expression that defines how to dynamically fetch
the destination name from the source property or from the host.
● service (String, optional): The name of the service to which the incoming request is forwarded.
● endpoint (String, optional): The name of the endpoint within the service to which the incoming request is
forwarded. It must only be used in a route containing a service attribute.
● localDir [page 1109] (String, optional): The directory from which the application router serves static
content (for example, from the application's web/ module).
● replace [page 1109] (Object, optional): An object that contains the configuration for replacing
placeholders with values from the environment.
Note
It is only relevant for static content.
● csrfProtection (Boolean, optional): Toggles whether this route needs CSRF token protection. The
default value is “true”. The application router enforces CSRF protection for any HTTP request that changes
state on the server side, for example: PUT, POST, or DELETE.
● identityProvider (String, optional): The name of the identity provider to use if it’s provided in the
route’s definition. If it isn’t provided, the route is authenticated with the default identity provider.
Note
If authenticationType is set to Basic Authentication or None, don't define the
identityProvider property.
Note
Route order is important. The first matching route will be used. Therefore, it is recommended to sort the
routes by the most specific source to the more generic one. For example:
Sample Code
"routes": [
{
"source": "^/sap/backend/employees(.*)$",
"target": "$1",
"destination": "sfsf"
},
{
"source": "^/sap/backend/(.*)$",
"target": "$1",
"destination": "erp"
}
]
Code Syntax
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"scope": "$XSAPPNAME.viewer",
"authenticationType": "xsuaa",
"csrfProtection": true
}
]
Note
The properties service, destination, and localDir are optional. However, at least one of them must
be defined.
The httpMethods option allows you to split the same path across different targets depending on the HTTP
method. For example:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"target": "/before/$1/after",
"httpMethods": ["GET", "POST"]
}
]
This route only serves GET and POST requests. Any other method (including extension ones) gets a 405
Method Not Allowed response. The same endpoint can be split across multiple destinations depending on
the HTTP method of the requests:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2",
"httpMethods": ["DELETE", "POST", "PUT"]
}
]
This sample code routes GET requests to the target dest-1, DELETE, POST and PUT to dest-2, and any other
method receives a 405 Method Not Allowed response. It’s also possible to specify catchAll routes,
namely routes that don’t specify httpMethods restrictions:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2"
}
]
In this sample code, GET requests are routed to dest-1, and all of the rest are routed to dest-2.
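Method-based dispatch with a catch-all route, as in the samples above, can be sketched as follows. The dispatch helper is an illustrative simplification, not the application router's implementation:

```javascript
// Sketch of httpMethods dispatch: the first route matching both path and
// method wins; if the path matches but no route allows the method, the
// answer is 405 Method Not Allowed.
const routes = [
  { source: "^/app1/(.*)$", destination: "dest-1", httpMethods: ["GET"] },
  { source: "^/app1/(.*)$", destination: "dest-2" } // catch-all route
];

function dispatch(method, path) {
  const pathMatches = routes.filter(r => new RegExp(r.source).test(path));
  if (pathMatches.length === 0) return { status: 404 };
  const route = pathMatches.find(r => !r.httpMethods || r.httpMethods.includes(method));
  if (!route) return { status: 405 }; // no route allows this method
  return { status: 200, destination: route.destination };
}

console.log(dispatch("GET", "/app1/x").destination);  // "dest-1"
console.log(dispatch("POST", "/app1/x").destination); // "dest-2" (catch-all)
```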
If there’s no route defined for serving static content via localDir, a default route is created for the
“resources” directory as follows:
Sample Code
{
"routes": [
{
"source": "^/(.*)$",
"localDir": "resources"
}
]
}
Note
If there is at least one route using localDir, the default route isn’t added.
replace
The replace object configures the placeholder replacement in static text resources.
Sample Code
{
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
pathSuffixes Array An array defining the path suffixes that are relative to localDir. Only files
with a path ending with any of these suffixes are processed.
vars Array A whitelist with the environment variables that are replaced in the files match
ing the suffix specified in pathSuffixes.
The supported tags for replacing environment variables are: {{ENV_VAR}} and {{{ENV_VAR}}}. If such an
environment variable is defined, it’s replaced; otherwise, it’s replaced with an empty string.
Note
Any variable that is replaced using the two-brackets syntax {{ENV_VAR}} is HTML-escaped; the triple-
brackets syntax {{{ENV_VAR}}} is used when the replaced values don’t need to be escaped.
If your application descriptor xs-app.json contains a route like the one illustrated in the following example,
{
"source": "^/get/home(.*)",
"target": "$1",
"localDir": "resources",
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
Sample Code
<html>
<head>
<title>{{escaped_text}}</title>
<script src="{{{NOT_ESCAPED}}}/index.js"></script>
</head>
</html>
Then, in the index.html, {{escaped_text}} and {{{NOT_ESCAPED}}} are replaced with the value defined
in the environment variables <escaped_text> and <NOT_ESCAPED>.
Note
All index.html files are processed; if you want to apply the replacement only to specific files, you must
set the path relative to localDir. In addition, all files must comply with the UTF-8 encoding rules.
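The two-bracket versus three-bracket behavior described above can be sketched with a small replacement function. The helpers htmlEscape and replaceVars are illustrative assumptions, not the application router's source:

```javascript
// Sketch of the placeholder replacement above: {{VAR}} is HTML-escaped,
// {{{VAR}}} is inserted verbatim, and undefined variables become "".
function htmlEscape(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function replaceVars(text, env) {
  return text
    // triple brackets first, so they are not partially consumed below
    .replace(/\{\{\{(\w+)\}\}\}/g, (_, name) => env[name] || "")
    .replace(/\{\{(\w+)\}\}/g, (_, name) => htmlEscape(env[name] || ""));
}

const out = replaceVars("<title>{{escaped_text}}</title> {{{NOT_ESCAPED}}}",
                        { escaped_text: "a<b", NOT_ESCAPED: "x/y" });
console.log(out); // <title>a&lt;b</title> x/y
```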
The content type returned by a request is based on the file extension specified in the route. The application
router supports the following file types:
● .json (application/json)
● .txt (text/plain)
● .html (text/html) — the default
● .js (application/javascript)
● .css (text/css)
Example — Result
● { "pathSuffixes": [".html"] } — All files with the extension .html under localDir and its
subfolders are processed.
● { "pathSuffixes": ["/abc/main.html", "some.html"] } — For the suffix /abc/main.html, all
files named main.html that are inside a folder named abc are processed. For the suffix some.html, all
files with a name that ends with “some.html” are processed. For example: some.html, awesome.html.
● { "pathSuffixes": ["/some.html"] } — All files with the name “some.html” are processed. For
example: some.html, /abc/some.html.
Sample Code
{
"source": "^/app1/(.*)$",
"destination": "app-1"
}
Since there is no target property for that route, no path rewriting will take place. If /app1/a/b is received as a
path, then a request to http://localhost:3001/app1/a/b is sent. The source path is appended to the
destination URL.
Sample Code
{
"source": {
"path": "^/app1/(.*)$",
"matchCase": false
},
"destination": "app-1"
}
Note
The property matchCase must be boolean. It is optional and has a default value of true.
Sample Code
{
"source": "^/app1/(.*)$",
"target": "/before/$1/after",
"destination": "app-1"
Sample Code
{
"source": "^/odata/v2/(.*)$",
"target": "$1",
"service": "com.sap.appbasic.country",
"endpoint": "countryservice"
}
When a request with path /app1/a/b is received, the path rewriting is done according to the rules in the target
property. The request will be forwarded to http://localhost:3001/before/a/b/after.
Note
In regular expressions there is the term capturing group. If a part of a regular expression is surrounded with
parenthesis, then what has been matched can be accessed using $ + the number of the group (starting
from 1). In code sample, $1 is mapped to the (.*) part of the regular expression in the source property.
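The capturing-group rewrite described in this note can be reproduced with a plain JavaScript regex replace (an illustrative sketch of the rewriting rule, not approuter code):

```javascript
// The source regex captures the path remainder; $1 in the target refers
// to that capturing group, as explained in the note above.
const source = /^\/app1\/(.*)$/;
const target = "/before/$1/after";

console.log("/app1/a/b".replace(source, target)); // "/before/a/b/after"
```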
Sample Code
{
"source": "^/destination/([^/]+)/(.*)$",
"target": "$2",
"destination": "$1",
"authenticationType": "xsuaa"
}
Sample Code
[
{
"name" : "myDestination",
"url" : "http://localhost:3002"
}
]
When a request with the path /destination/myDestination/myTarget is received, the destination will be
replaced with the URL from "myDestination", the target will get "myTarget" and the request will be
redirected to http://localhost:3002/myTarget.
Note
You can use a dynamic value (regex) or a static string for destination and target values.
The application router first looks for the destination name in the manifest.yaml file, and if it is not
found there, it looks for it in the destination service.
https://<tenant>-<destination>.<customdomain>/<pathofile>
To enable the application router to determine the destination from the URL host, a
DESTINATION_HOST_PATTERN attribute must be provided as an environment variable. For example, when a
request with the path https://myDestination.some-approuter.someDomain.com/app1/myTarget is
received, the following route is used:
Sample Code
{
"source": "^/app1/([^/]+)/",
"target": "$1",
"destination": "*",
"authenticationType": "xsuaa"
}
In this example, the target will be extracted from the source and the ‘$1’ value is replaced with "myTarget".
The destination value is extracted from the host and the "*" value is replaced with "myDestination".
Sample Code
{
"source": "^/web-pages/(.*)$",
"target": "$1",
"localDir": "my-static-resources"
}
Note
Sample Code
{
"source": "^/web-pages/",
"localDir": "my-static-resources",
"cacheControl": "public, max-age=1000,must-revalidate"
}
Sample Code
Sample Code
{
"source": "^/app1/(.*)$",
"target": "/before/$1/after",
"httpMethods": ["GET", "POST"]
}
This route only supports GET and POST requests. Any other method (including extensions) will receive a 405
Method Not Allowed response. The same endpoint can be split across multiple destinations depending on the
HTTP method of the requests:
Sample Code
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2",
"httpMethods": ["DELETE", "POST", "PUT"]
}
In the setup above, GET requests will be routed to "dest-1", and all the rest to "dest-2".
It is possible to configure what scope the user needs to possess in order to access a specific resource. Those
configurations are per route. The user should have at least one of the scopes in order to access the
corresponding resource.
Sample Code
{
"source": "^/web-pages/(.*)$",
"target": "$1",
"scope": ["$XSAPPNAME.viewer", "$XSAPPNAME.writer"]
}
For convenience if your route only requires one scope, the scope property can be a string instead of an array.
The following configuration is also valid:
Sample Code
{
"source": "^/web-pages/(.*)$",
"target": "$1",
"scope": "$XSAPPNAME.viewer"
}
You can configure scopes for different HTTP methods, such as GET, POST, PUT, HEAD, DELETE, CONNECT,
TRACE, PATCH, and OPTIONS. If some of the HTTP methods are not explicitly set, the behavior for them is
defined by the default property. If there is no default property specified and the HTTP method is also not
specified, the request is rejected by default.
Sample Code
{
"source": "^/web-pages/(.*)$",
"target": "$1",
"scope": {
"GET": "$XSAPPNAME.viewer",
"POST": ["$XSAPPNAME.reader", "$XSAPPNAME.writer"],
"default": "$XSAPPNAME.guest"
}
}
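The per-method scope resolution described above can be sketched as follows. The requiredScopes helper is an illustrative simplification, not the application router's implementation:

```javascript
// Sketch of the scope rules above: an explicit method entry wins, then
// "default"; with neither, the request is rejected. A plain string is
// treated as a one-element scope list.
function requiredScopes(scopeConfig, method) {
  if (typeof scopeConfig === "string") return [scopeConfig];
  const entry = scopeConfig[method] !== undefined
    ? scopeConfig[method] : scopeConfig.default;
  if (entry === undefined) return null; // reject by default
  return Array.isArray(entry) ? entry : [entry];
}

const scope = {
  GET: "$XSAPPNAME.viewer",
  POST: ["$XSAPPNAME.reader", "$XSAPPNAME.writer"],
  default: "$XSAPPNAME.guest"
};
console.log(requiredScopes(scope, "GET"));    // ["$XSAPPNAME.viewer"]
console.log(requiredScopes(scope, "DELETE")); // ["$XSAPPNAME.guest"]
```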
The application router supports the $XSAPPNAME placeholder. Its value is taken (and then substituted in the
routes) from the UAA configuration.
Note
You can use the name of the business application directly instead of using the $XSAPPNAME placeholder:
Sample Code
{
"source": "^/backend/(.*)$",
"scope": "my-business-application.viewer"
}
You can define several identity providers for different types of users. In this code example, there are two
categories: hospital patients and hospital personnel:
[
{
"source": "^/patients/sap/opu/odata/(.*)",
"target": "/sap/opu/odata$1",
"destination": "backend",
"authenticationType": "xsuaa",
"identityProvider": "patientsIDP"
},
{
"source": "^/hospital/sap/opu/odata/(.*)",
"target": "/sap/opu/odata$1",
"destination": "backend", "authenticationType": "xsuaa",
"identityProvider": "hospitalIDP"
}
]
A patient who tries to log into the system will be authenticated by patientsIDP, and a doctor who tries to log in
will be authenticated by hospitalIDP.
Note
If a user logs in as one identity and then wants to perform tasks for another identity type, the user must log
out and log back in to the system.
Identity provider configuration is only supported in the client side logon redirect flow.
3.1.6.2.4.5 login
A redirect to the application router at a specific endpoint takes place during OAuth2 authentication with the
User Account and Authentication service (UAA).
This endpoint can be configured in order to avoid possible collisions, as illustrated in the following example:
Sample Code
"login": {
"callbackEndpoint": "/custom/login/callback"
}
Tip
In this object you can define an application's central log out end point by using the logoutEndpoint property,
as illustrated in the following example:
Sample Code
"logout" {
"logoutEndpoint": "/my/logout"
}
Making a GET request to “/my/logout” triggers a client-initiated central log out. Central log out can also be
triggered by a session timeout.
You can use the logoutPage property to specify the Web page in one of the following ways:
● URL path
The UAA service redirects the user back to the application router, and the path is interpreted according to
the configured routes.
The logoutEndpoint can be called with query parameters. For example:
Sample Code
window.location.replace('/my/logout?siteId=3');
These parameters will be appended as is to the redirect URL set by the logoutPage property. For
example, if the logout section is:
Sample Code
"logout": {
"logoutEndpoint": "/logout",
"logoutPage": "/logoff.html"
},
then the request above results in a redirect to:
Sample Code
/logoff.html?siteId=3
The resource that matches the URL path specified in the property logoutPage should not require
authentication; for this route, the property authenticationType must be set to none.
In the following example, my-static-resources is a folder in the working directory of the application
router; the folder contains the file logout-page.html along with other static resources.
Sample Code
{
"authenticationMethod": "route",
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "/logout-page.html"
},
"routes": [
{
"source": "^/logout-page.html$",
"localDir": "my-static-resources",
"authenticationType": "none"
}
]
}
● Absolute URL
If logoutPage contains an absolute URL, the user is redirected to that external page after log out:
Sample Code
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "http://acme.com/employees.portal"
}
3.1.6.2.4.7 destinations
The destinations section in xs-app.json extends the destination configuration in the deployment manifest
(manifest.yml), for example, with some static properties such as a logout path.
Sample Code
{
"destinations": {
"node-backend": {
"logoutPath": "/ui5logout",
"logoutMethod": "GET"
}
}
}
● logoutPath (String, optional): The logout endpoint for your destination. The logoutPath is called when
central log out is triggered or a session is deleted due to a timeout. The request to logoutPath contains
additional headers, including the JWT token.
3.1.6.2.4.8 services
The services section in xs-app.json extends the services configuration in the deployment manifest
(manifest.yml), for example, with some static properties such as an endpoint.
Sample Code
{
"services": {
"com.sap.appbasic.country": {
"endpoint": "countryservice",
"logoutPath": "/countrieslogout",
"logoutMethod": "GET"
}
}
}
● logoutPath (String, optional): The logout endpoint for your service. The logoutPath is called when
central log out is triggered or a session is deleted due to a timeout. The request to logoutPath contains
additional headers, including the JWT token.
3.1.6.2.4.9 compression
The compression keyword enables you to define whether the application router compresses text resources before
sending them.
By default, resources larger than 1 KB are compressed. If you need to change the compression size threshold,
for example, to “2048 bytes”, you can add the optional property “minSize”: <size_in_bytes>, as illustrated in
the following example.
Sample Code
{
"compression": {
"minSize": 2048
}
}
Compression can be disabled in one of the following ways:
● Global
Within the compression section, add "enabled": false
● Front end
The client sends an “Accept-Encoding” header which does not include “gzip”.
● Back end
The application sends a “Cache-Control” header with the “no-transform” directive.
● minSize (Number, optional): Text resources larger than this size (in bytes) will be compressed.
If the <COMPRESSION> environment variable is set, it overrides any existing values.
3.1.6.2.4.10 pluginMetadataEndpoint
Adds an endpoint that serves a JSON string representing all configured plugins.
Sample Code
{
"pluginMetadataEndpoint": "/metadata"
}
Note
If you request the relative path /metadata of your application, a JSON string is returned with the
configured plug-ins.
3.1.6.2.4.11 whitelistService
Enabling the white-list service opens an endpoint accepting GET requests at the relative path configured in the
endpoint property, as illustrated in the following example:
Sample Code
{
"whitelistService": {
"endpoint": "/whitelist/service"
}
}
If the white-list service is enabled in the application router, each time an HTML page needs to be rendered in a
frame, the white-list service is used to check whether the parent frame is allowed to render the content in a frame.
Sample Code
[
{
"protocol": "http",
"host": "*.acme.com",
"port": 12345
},
{
"host": "hostname.acme.com"
}
]
Note
Matching is done against the properties provided. For example, if only the host name is provided, the white-list
service returns “framing: true” for all requests from that host, matching all schemata and ports.
Return Value
The white-list service accepts only GET requests, and returns a JSON object as the response. The white-list
service call uses the parent origin as URI parameter (URL encoded) as follows:
Sample Code
GET url/to/whitelist/service?parentOrigin=https://parent.domain.acme.com
The response is a JSON object with the following properties; property “active” has the value false only if
<CJ_PROTECT_WHITELIST> is not provided:
Sample Code
{
"version" : "1.0",
"active" : true | false,
"origin" : "<same as passed to service>",
"framing" : true | false
}
The “active” property enables framing control; the “framing” property specifies whether framing should be
allowed. By default, the application router (approuter.js) sends the X-Frame-Options header with the value
SAMEORIGIN.
If the white-list service is enabled, the header value probably needs to be changed; see the X-Frame-Options
header section for details about how to change it.
3.1.6.2.4.12 websockets
The application router can forward web-socket communication. Web-socket communication must be enabled
in the application router configuration.
If the back-end service requires authentication, the upgrade request should contain a valid session cookie. The
application router supports the destination schemata "ws", "wss", "http", and "https".
Sample Code
{
"websockets": {
"enabled": true
}
}
To use web-sockets when the application router is integrated with the HTML5 Application Repository, the
websockets property should be added to the xs-app.json of the deployed HTML5 application. When an
incoming request for an application in the repository goes through the application router, it retrieves the
application's configuration from the repository. If this flag is set, the application router creates a web-socket
connection to the back end (the target URL of the request) and acts as a proxy that delivers messages on top
of the ws protocol from the back end to the user, and vice versa.
3.1.6.2.4.13 errorPage
Errors originating in the application router show the HTTP status code of the error. It is possible to display a
custom error page using the errorPage property.
The property is an array of objects, each object can have the following properties:
In the following code example, errors with the status codes “400”, “401”, and “402” will show the content of
./custom-err-40x.html; errors with the status code “501” will display the content of
./http_resources/custom-err-501.html.
Sample Code
{ "errorPage" : [
{"status": [400,401,402], "file": "./custom-err-40x.html"},
{"status": 501, "file": "./http_resources/custom-err-501.html"}
]
}
Note
The contents of the errorPage configuration section have no effect on errors that are not generated by
the application router.
3.1.6.2.5 Headers
Forwarding Header
The application router sends the following x-forwarded-* headers to the route targets:
● x-forwarded-host: Contains the host header that is sent from the client to the application router.
● x-forwarded-for: Contains the address of the client that connects to the application router.
If a client performs path rewriting, it sends the x-forwarded-proto, x-forwarded-host, and
x-forwarded-path headers to the application router. The values of these headers are forwarded to the route
targets without modification instead of being generated from the application router request URL. The
x-forwarded-path header of a request does not impact the source pattern of routes in the xs-app.json.
Hop-by-hop headers are only for a single transport-level connection and are not forwarded by the application
router. The headers are:
● Connection
● Keep-Alive
● Public
● Proxy-Authenticate
● Transfer-Encoding
● Upgrade
Custom Header
x-custom-host is used to support the application router behind an external reverse proxy. The x-custom-host
header must contain the internal reverse proxy host.
Note
In a multi-tenancy landscape, application router can be called from multiple tenants. During the authentication
flow, application router uses the tenant ID to fetch the authentication token from XSUAA. Application router
extracts the tenant ID from the corresponding host using the tenant host pattern configuration.
In an external reverse proxy flow, the application router uses the x-custom-host to extract the tenant ID using
the tenant host pattern configuration.
If the x-custom-host is not provided, the application router uses the host header to extract the tenant ID.
The x-approuter-authorization header contains the JWT token to support the Service to Application Router
scenario. The application router can receive a consumer-service XSUAA JWT token and use it to access the UI
and the data. The JWT token is passed to the application router in the x-approuter-authorization header
of the request.
Note
The XSUAA JWT token is generated with the same xsuaa service instance that is bound to the application
router.
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors
or damages caused by the use of beta features.
The application router serves static content, propagates user information, and acts as a proxy that forwards
requests to other microservices. The routing configuration for an application is defined in one or more
destinations. The application router configuration is responsible for the following tasks:
Note
The application router doesn’t manage server caching. The server cache must be set (with e-tags) in the
server itself. The cache must contain not only static content but also other resources, for example,
OData metadata.
Destinations
A destination defines the back-end connectivity. In its simplest form, a destination is a URL to which requests
are forwarded. There has to be a destination for every single app (microservice) that is a part of the business
application.
The destinations configuration can be provided by the destinations environment variable or by the destination
service.
Sample Code
---
applications:
- name: node-hello-world
port: <approuter-port> #the port used for the approuter
memory: 100M
path: web
env:
destinations: >
[
{
"name":"backend",
"url":"https://mybackend.acme.com"
}
]
Note
The value of the destination "name":"backend" must match the value of the destinations property
configured for a route in the corresponding application-router configuration file (xs-app.json). It’s also
possible to define a logout path and method for the destinations property in the xs-app.json file.
Destination Service
Destination configurations can also be provided by the destination service. The destination service can be
consumed by the application router after binding a destination service instance to it.
The following guidelines apply to the configuration of the mandatory properties of a destination:
Note
● When using basic authentication, User and Password are mandatory.
● When using principal propagation, the proxy type is on-premise.
● When using OAuth2SAMLBearerAssertion, the uaa.user scope in the xs-security.json file is required.
The proxy type of a destination can be one of the following:
● on-premise
If set, binding to the SAP Cloud Platform connectivity service is required.
● internet
The following properties can be configured for a destination:
● HTML5.ForwardAuthToken: If true, the OAuth token is sent to the destination. The default value is false.
This token contains the user identity, scopes, and other attributes. It’s signed by the UAA so it can be used
for user authentication and authorization with back-end services.
Note
If the ProxyType is set to on-premise, don’t set the ForwardAuthToken property to true.
● HTML5.Timeout: Positive integer representing the maximum time to wait for a response (in milliseconds)
from the destination. The default value is 30000 ms.
Note
The timeout value specified also applies to the destination's logout path (if defined), which belongs to the
destination property.
● HTML5.PreserveHostHeader: If true, the application router preserves the host header in the back-end
request. This is expected by some back-end systems, like AS ABAP, which don’t process x-forwarded-*
headers.
● HTML5.DynamicDestination: If true, the application router allows this destination to be used dynamically on
the host or path level.
● HTML5.SetXForwardedHeaders: If true, the application router adds the X-Forwarded-(Host, Path, Proto)
headers to the back-end request. The default value is true.
● sap-client: If true, the application router propagates the sap-client and its value as a header in the
back-end request.
● If a destination with the same name is defined both in the environment destination and the destination
service, the destination configuration loads the settings from the environment.
● If the configuration of a destination is updated at runtime, the changes are reflected automatically in the
AppRouter. There’s no need to restart the AppRouter.
● The destination service is only available in the Cloud Foundry environment.
● You can define destinations at the service instance level. This destination type has a higher priority than the
same destination defined on the subaccount level. This enables exposing a specific destination to a
specific application instead of to the entire subaccount.
Application Router supports integration with the SAP Cloud Platform Connectivity service. The connectivity
service handles proxy access to the SAP Cloud Platform cloud connector, which tunnels connections to private
network systems. In order to use connectivity, a connectivity service instance must be created and bound to
the Application Router application. In addition, the relevant destination configurations must contain
proxyType=OnPremise and a valid XSUAA login token must be obtained from the login flow.
The application router is integrated with the HTML5 Application Repository service, to retrieve all the static
content and routes (xs-app.json) of the HTML5 applications stored in the repository.
To integrate HTML5 Application Repository with an application router, create an instance of the html5-apps-
repo service of the app-runtime plan, and bind it to the application router.
Model the xs-app.json routes that are used to retrieve static content from HTML5 Application Repository in
the following way:
Sample Code
{
"source": "^(/.*)",
"target": "$1",
"service": "html5-apps-repo-rt",
"authenticationType": "xsuaa"
}
In case the application router needs to serve HTML5 applications not stored in HTML5 Application Repository,
it is possible to model that in the xs-app.json file of the application router.
When the application router is bound to HTML5 Application Repository, the following restrictions apply:
At runtime, the application router tries to fetch the xs-app.json file from the HTML5 application in the
HTML5 Application Repository, and to use it for routing the request.
A valid request to an application router that uses HTML5 Application Repository, must have the following
format:
Sample Code
https://<tenantId>.<appRouterHost>.<domain>/<bsPrefix>.<appName-appVersion>/
<resourcePath>
For example:
Sample Code
https://tenant1.myapprouter.cf.sap.hana.com/comsapappcountry.countrylist/
index.html
● appName (mandatory): Used to uniquely identify the application in HTML5 Application Repository.
Note
Must not contain dots or special characters.
● resourcePath (mandatory): The path to the file as it was stored in HTML5 Application Repository.
● If no HTML5 application is found in HTML5 Application Repository for the current request, the central
application router xs-app.json will be used for routing.
● If the HTML5 application exists in HTML5 Application Repository but no xs-app.json file is returned, an
error message will be issued and the request processing will be stopped.
The application router supports integration with Business Services, which are a flavor of reuse services.
An SAP Business Service exposes, in its binding information, a set of attributes in the VCAP_SERVICES
credentials block that enable the application router to serve the Business Service UI and/or data.
● The Business Service UI must be stored in HTML5 Application Repository to be accessible from an
application router.
● The Business Service UI must be defined as "public" to be accessible from an application router in a
different space than the one from which the UI was uploaded.
● The Business Service data can be served using two grant types:
● user_token: The application router performs a token exchange between the login JWT token and the
Business Service token, and uses it to trigger a request to the Business Service endpoint.
To bind a Business Service instance to the application router, provide the following information in the
VCAP_SERVICES credentials:
● sap.cloud.service.alias (optional): Short service name alias for a user-friendly URL business service
prefix. Make sure the alias is unique in the context of the application router.
● grant_type (optional): The grant type that should be used to trigger requests to the Business Service.
Allowed values: user_token (default) or client_credentials.
The following example shows how to provide the required information. This information should be provided via
the onBind hook in the service-broker implementation:
Sample Code
"country": [
{
...
"credentials": {
"sap.cloud.service": "com.sap.appbasic.country",
"sap.cloud.service.alias": "country",
"endpoints": {
"countryservice": { "url": https://icf-countriesapp-test-
service.cfapps.sap.hana.ondemand.com/odata/v2/countryservice"},
"countryconfig": {
"url": https://icf-countriesapp-test-
service.cfapps.sap.hana.ondemand.com/rest/v1/countryconfig",
"timeout": 120000
}
},
"html5-apps-repo": {
"app_host_id": "1bd7c044-6cf4-4c5a-b904-2d3f44cd5569,
1cd7c044-6cf4-4c5a-b904-2d3f44cd54445"
},
"saasregistryenabled": true,
"grant_type": "user_token"
....
To access Business Service data, the xs-app.json file should have a route referencing a specific
sap.cloud.service or sap.cloud.service.alias via the service attribute. If an endpoint attribute is also
modeled, it will be used to get the service URL; otherwise the fallback URL or URI attribute will be used.
Sample Code
"routes": [
{
"source": "^/odata/v2/(.*)$",
"target": "$1",
"service": "com.sap.appbasic.country",
"endpoint": "countryservice"
},
In order to support JWT token exchange, the login JWT token should contain the uaa.user scope. This
requires that the xs-security configuration file contain a role template that references the uaa.user scope.
Sample Code
{
"xsappname" : "simple-approuter",
"tenant-mode" : "shared",
"scopes": [
{
"name": "uaa.user",
"description": "UAA"
},
{
"name": "$XSAPPNAME.simple-approuter.admin",
"description": "Simple approuter administrator"
}
],
"role-templates": [
{
"name": "Token_Exchange",
"description": "UAA",
"scope-references": [
"uaa.user"
]
},
{
"name": "simple-approuter-admin",
"description": "Simple approuter administrator",
"scope-references": [
"$XSAPPNAME.simple-approuter.admin"
]
}
]
}
This section provides information about accessing Business Services UIs that are stored in HTML5 Application
Repository.
Business Service UIs must be stored in HTML5 Application Repository and defined in their manifest.json
files as public: true in order to be accessible from an application router application that typically runs
in a different space than the Business Service space. In addition, dataSource URIs must be relative to the base
URL, which means there is no slash as the first character.
Sample Code
{
"sap.app": {
"id": "com.sap.appbasic.country.list",
"applicationVersion": {
"version": "1.0.0"
},
"dataSources": {
"mainService": {
"uri": "odata/v2/countryservice",
"type": "OData"
}
}
},
"sap.cloud": {
"public": true,
"service": "com.sap.appbasic.country"
}
}
A Business Service that exposes a UI must provide one or more app-host GUIDs in an html5-apps-repo
block in the VCAP_SERVICES credentials.
To access the Business Service UI, the URL request to the application router must contain a business service
prefix, as in the following example of a request URL:
Sample Code
https://tenant1.approuter-repo-examples.cfapps.sap.hana.ondemand.com/
comsapappbasiccountry.comsapappbasicscountrylist/test/flpSandbox.html
In this example, comsapappbasiccountry is the business service prefix which matches the
sap.cloud.service attribute in the country service VCAP_SERVICES credentials (without dots). The
comsapappbasicscountrylist is the name of the HTML5 application as defined in the app.id attribute in
the manifest.json file (without dots).
A list of environment variables that can be used to configure the application router.
The following table lists the environment variables that you can use to configure the application router. The
table also provides a short description of each variable and, where appropriate, an example of the configuration
data.
● httpHeaders: Configure the application router to return additional HTTP headers in its responses to
client requests.
● SESSION_TIMEOUT: Set the time to trigger an automatic central log out from the User Account and
Authentication (UAA) server.
● CJ_PROTECT_WHITELIST: A list of allowed server or domain origins to use when checking for click-jacking
attacks.
● WS_ALLOWED_ORIGINS: A list of the allowed server (or domain) origins that the application router uses to
verify requests.
● JWT_REFRESH: Configures the automatic refresh of the JSON Web Token (JWT) provided by the User
Account and Authentication (UAA) service to prevent expiry (default is 5 minutes).
● UAA_SERVICE_NAME: Specify the exact name of the UAA service to bind to an application.
● INCOMING_CONNECTION_TIMEOUT: Specify the maximum time (in milliseconds) for a client connection.
If the specified time is exceeded, the connection is closed.
● TENANT_HOST_PATTERN: Define a regular expression to use when resolving tenant host names in the
request’s host name.
● SECURE_SESSION_COOKIE: Configure the enforcement of the Secure flag of the application router's
session cookie.
● CORS: Provide support for cross-origin requests, for example, by allowing the modification of the request
header.
httpHeaders
If configured, the application router sends additional HTTP headers in its responses to a client request. You can
set the additional HTTP headers in the <httpHeaders> environment variable. The following example
configuration shows how to configure the application router to send two additional headers in the responses to
the client requests from the application <myApp>:
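A minimal sketch of such a configuration in the manifest.yml file (the application name myApp and the header names and values are illustrative, not taken from this document):
Sample Code
- name: myApp
  memory: 100M
  path: web
  env:
    httpHeaders: >
      [
        { "X-Frame-Options": "ALLOW-FROM http://acme.com" },
        { "Test-Additional-Header": "1" }
      ]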
Tip
To ensure better security of your application, set the Content-Security-Policy header. This is a
response header which informs browsers (capable of interpreting it) about the trusted sources from which
an application expects to load resources. This mechanism allows the client to detect and block malicious
scripts injected into the application. A value can be set via the <httpHeaders> environment variable in the
additional headers configuration. The value represents a security policy which contains directive-value
pairs. The value of a directive is a whitelist of trusted sources.
Usage of the Content-Security-Policy header is considered second line of defense. An application should
always provide proper input validation and output encoding.
destinations
The destinations configuration is an array of objects that is defined in the destinations environment
variable. A destination is required for every application (microservice) that is part of the business application.
The following table lists the properties that can be used to describe the destination:
● url (String, mandatory): The Uniform Resource Locator of the application (microservice).
● proxyHost (String, optional): The host of the proxy server used in case the request should go through a
proxy to reach the destination.
Tip
Mandatory if proxyPort is defined.
● proxyPort (String, optional): The port of the proxy server used in case the request should go through a
proxy to reach the destination.
Tip
Mandatory if proxyHost is defined.
● forwardAuthToken (Boolean, optional): If true, the OAuth token will be sent to the destination. The default
value is “false”. This token contains the user identity, scopes, and some other attributes. The token is
signed by the User Account and Authorization (UAA) service so that it can be used for user authentication
and authorization purposes by back-end services.
Caution
For testing purposes only. Do not use this property in production environments!
● timeout (Number, optional): The maximum time (in milliseconds) to wait for a response from the
destination.
Tip
The timeout specified here also applies to the logout path, logoutPath, if the logout path is defined, for
example, in the application's descriptor file xs-app.json.
Tip
In Cloud environments, if you set the application's destination proxyType to onPremise, a binding to the
SAP Cloud Platform connectivity service is required, and the forwardAuthToken property must not be set.
The following example shows a simple configuration for the <destinations> environment variable:
Sample Code
[
{
"name" : "ui5",
"url" : "https://sapui5.acme.com",
"proxyHost" : "proxy",
"proxyPort" : "8080",
"forwardAuthToken" : false,
"timeout" : 1200
}
]
It is also possible to include the destinations in the manifest.yml file, as illustrated in the following example:
Sample Code
- name: node-hello-world
memory: 100M
path: web
env:
  destinations: >
[
{"name":"ui5", "url":"https://sapui5.acme.com"}
]
SESSION_TIMEOUT
You can configure the triggering of an automatic central log-out from the User Account and Authentication
(UAA) service if an application session is inactive for a specified time. A session is considered to be inactive if
no requests have been received within the specified timeout period.
Note
You can also set the session timeout value in the application's manifest.yml file, as illustrated in the
following example:
Sample Code
- name: myApp1
memory: 100M
path: web
env:
SESSION_TIMEOUT: 40
Tip
If the authentication type for a route is set to "xsuaa" (for example, "authenticationType":
"xsuaa"), the application router depends on the UAA server for user authentication, and the UAA server
might have its own session timeout defined. To avoid problems caused by unexpected timeouts, it is
recommended that the session timeout values configured for the application router and the UAA are
identical.
SEND_XFRAMEOPTIONS
By default, the application router sends the X-Frame-Options header with the value “SAMEORIGIN”. You can
change this behavior either by disabling the sending of the default header value (for example, by setting
SEND_XFRAMEOPTIONS environment variable to false) or by overriding the value, for example, by configuring
additional headers (with the <httpHeaders> environment variable).
The following example shows how to disable the sending of the X-Frame-Options for a specific application,
myApp1:
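A sketch of such a configuration in the manifest.yml file (the application name and resource values are illustrative):
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    SEND_XFRAMEOPTIONS: false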
CJ_PROTECT_WHITELIST
The <CJ_PROTECT_WHITELIST> specifies a list of origins (for example, host or domain names) that do not
need to be protected against click-jacking attacks. This list of allowed host names and domains are used by the
application router's white-list service to protect XS advanced applications against click-jacking attacks. When
an HTML page needs to be rendered in a frame, the white-list service is called to validate whether the parent
frame is allowed to render the requested content in a frame.
The content is a JSON list of objects with the properties listed in the following table:
Note
Matching is done against the properties provided. For example, if only the host name is provided, the white-
list service matches all schemata and protocols.
WS_ALLOWED_ORIGINS
When the application router receives an upgrade request, it verifies that the origin header includes the URL
of the application router. If this is not the case, then an HTTP response with status 403 is returned to the client.
This origin verification can be further configured with the environment variable <WS_ALLOWED_ORIGINS>,
which defines a list of the allowed origins the application router uses in the verification process.
JWT_REFRESH
The JWT_REFRESH environment variable is used to configure the application router to refresh a JSON Web
Token (JWT) for an application, by default, 5 minutes before the JWT expires, if the session is active.
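A sketch of such a configuration in manifest.yml, assuming the value is specified in minutes (as suggested by the 5-minute default); the application name and value are illustrative:
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    JWT_REFRESH: 10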
UAA_SERVICE_NAME
The UAA_SERVICE_NAME environment variable enables you to configure an instance of the User Account and
Authorization service for a specific application, as illustrated in the following example:
Note
The details of the service configuration are defined in the <VCAP_SERVICES> environment variable, which
is not configured by the user.
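A sketch of such a configuration in manifest.yml (the instance name my-uaa-instance is a hypothetical example):
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    UAA_SERVICE_NAME: my-uaa-instance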
INCOMING_CONNECTION_TIMEOUT
The INCOMING_CONNECTION_TIMEOUT environment variable enables you to set the maximum time (in
milliseconds) allowed for a client connection, as illustrated in the following example:
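A sketch of such a configuration in manifest.yml (the value of 60000 ms is illustrative):
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    INCOMING_CONNECTION_TIMEOUT: 60000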
If the specified time is exceeded, the connection is closed. If INCOMING_CONNECTION_TIMEOUT is set to zero
(0), the connection-timeout feature is disabled. The default value for INCOMING_CONNECTION_TIMEOUT is
120,000 ms (2 min).
TENANT_HOST_PATTERN
The TENANT_HOST_PATTERN environment variable enables you to specify a string containing a regular
expression with a capturing group. The requested host is matched against this regular expression. The value of
the first capturing group is used as the tenant Id. as illustrated in the following example:
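A sketch of such a configuration in manifest.yml (the host pattern is illustrative); the first capturing group matches the tenant part of the requested host name:
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    TENANT_HOST_PATTERN: '^(.*)-approuter.cfapps.acme.com'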
COMPRESSION
The COMPRESSION environment variable enables you to configure the compression of resources before a
response to the client, as illustrated in the following example:
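A sketch of such a configuration in manifest.yml, assuming the variable accepts the same JSON object as the compression section of xs-app.json:
Sample Code
- name: myApp1
  memory: 100M
  path: web
  env:
    COMPRESSION: '{"minSize": 2048}'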
SECURE_SESSION_COOKIE
The SECURE_SESSION_COOKIE environment variable can be set to true or false. By default, the Secure flag of the session cookie is
set depending on the environment the application router runs in. For example, when the application router is
running behind a router that is configured to serve HTTPS traffic, then this flag will be present. During local
development the flag is not set.
Note
If the Secure flag is enforced, the application router will reject requests sent over unencrypted
connections.
The following example illustrates how to set the SECURE_SESSION_COOKIE environment variable:
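A minimal sketch of such a manifest entry:

```yaml
---
applications:
- name: <APP_NAME>
  env:
    SECURE_SESSION_COOKIE: true
```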
REQUEST_TRACE
You can enable additional traces of the incoming and outgoing requests by setting the environment variable
REQUEST_TRACE to "true". If enabled, basic information is logged for every incoming and outgoing request to
the application router.
The following example illustrates how to set the REQUEST_TRACE environment variable:
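A minimal sketch of such a manifest entry:

```yaml
---
applications:
- name: <APP_NAME>
  env:
    REQUEST_TRACE: true
```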
Tip
This is in addition to the information generated by the Node.js package @sap/logging that is used by the
XS advanced application router.
CORS
The CORS environment variable enables you to provide support for cross-origin requests, for example, by
allowing the modification of the request header. Cross-origin resource sharing (CORS) permits Web pages from
other domains to make HTTP requests to your application domain, where normally such requests would
automatically be refused by the Web browser's security policy.
Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources on a Web page to be
requested from another domain (protocol and port) outside the domain (protocol and port) from which the
first resource was served. The CORS configuration enables you to define details to control access to your
application resource from other Web browsers. For example, you can specify where requests can originate from
or what is allowed in the request and response headers. The following example illustrates a basic CORS
configuration:
[
  {
    "uriPattern": "^\route1$",
    ...
  }
]
The CORS configuration includes an array of objects with the following properties, some of which are
mandatory:
Note
Matching is case-sensitive. In addition, if no port or protocol is specified, the default is "*".
Tip
The specified methods must be upper-case, for example, GET. Matching of the method type is case-sensitive.
maxAge String No A single value specifying the length of time (in seconds) a preflight request should be cached. A negative value prevents the CORS filter from adding this response header to the preflight response. If maxAge is defined but no value is specified, the default time of "1800" seconds applies.
allowedCredentials Boolean No A Boolean flag that indicates whether the specified resource supports user credentials. The default setting is "true".
It is also possible to include the CORS configuration in either the manifest.yml or the manifest-op.yml
file. The code in the following example enables the CORS configuration for any route with a source URI pattern
that matches the RegExp “^\route1$”:
Sample Code
- name: node-hello-world
memory: 100M
path: web
env:
CORS: >
        [
          {
            "uriPattern": "^\route1$",
            "allowedOrigin": [
              {
                "host": "my_host",
                "protocol": "https"
              }
            ]
          }
        ]
Related Information
3.1.6.2.10 Multitenancy
Each multitenant application has to deploy its own application router, and the application router handles
requests of all tenants to the application. The application router is able to determine the tenant identifier out of
the URL and then forwards the authentication request to the tenant User Account and Authentication (UAA)
service and the related identity zone.
To use a multitenant application router, you must have a shared UAA service and the version of the application
router has to be greater than 2.3.1.
The application router must determine the tenant-specific subdomain for the UAA that in turn determines the
identity zone, used for authentication. This determination is done by using a regular expression defined in the
environment variable TENANT_HOST_PATTERN.
TENANT_HOST_PATTERN is a string containing a regular expression with a capturing group. The request host is
matched against this regular expression. The value of the first capturing group is used as the tenant
subdomain.
With this configuration, the application router extracts the tenant subdomain, which is used for authentication
against a multitenant UAA.
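The extraction described above can be sketched in plain Node.js; the pattern and host names below are hypothetical examples, not defaults of the application router:

```javascript
// Sketch of how a TENANT_HOST_PATTERN-style regular expression yields the
// tenant subdomain: the request host is matched against the pattern and the
// first capturing group is taken as the result.
const TENANT_HOST_PATTERN = '^(.*)-approuter.example.com';

function extractTenantSubdomain(requestHost, pattern) {
  const match = new RegExp(pattern).exec(requestHost);
  // The value of the first capturing group is used as the tenant subdomain.
  return match ? match[1] : null;
}

console.log(extractTenantSubdomain('tenant1-approuter.example.com', TENANT_HOST_PATTERN));
console.log(extractTenantSubdomain('other-host.example.org', TENANT_HOST_PATTERN));
```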
Create a service instance of the SaaS Provisioning service with a configuration JSON file in the Cloud Foundry
sub-account space where the multitenant business application is deployed.
The configuration JSON file must use the following format and set these properties:
Parameters Description
xsappname The xsappname configured in the security descriptor file used to create the XSUAA instance (see Develop the Multitenant Application).
getDependenciesPath (Optional) Any URL that the application exposes for GET dependencies. If the application doesn't have dependencies and the callback isn't implemented, it mustn't be declared.
onSubscriptionPath This parameter must end with /{tenantId}. The tenant for the subscription is passed to this callback as a path parameter. You must keep {tenantId} as a parameter in the URL so that it's replaced at runtime with the tenant calling the subscription. This callback URL is called when a subscription between a multitenant application and a consumer tenant is created (PUT) and when the subscription is removed (DELETE).
displayName (Optional) The display name of the application when viewed in the cockpit, for example, in the application's tile. If left empty, takes the application's technical name.
description (Optional) The description of the application when viewed in the cockpit, for example, in the application's tile. If left empty, takes the application's display name.
category (Optional) The category to which the application is grouped in the Subscriptions page in the cockpit. If this parameter is left empty, it's assigned to the default category.
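Based on the parameters in the table above, a configuration JSON file might look like the following; all URLs and names are hypothetical:

```json
{
  "xsappname": "my-multitenant-app",
  "getDependenciesPath": "https://my-app.example.com/callback/v1.0/dependencies",
  "onSubscriptionPath": "https://my-app.example.com/callback/v1.0/tenants/{tenantId}",
  "displayName": "My Multitenant App",
  "description": "A sample multitenant business application",
  "category": "Sample Applications"
}
```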
Instead of starting the application router directly, you can configure your XS advanced application to use its
own start script. You can also use the application router as a regular Node.js package.
The application router uses the connect framework; for more information, see Connect framework in the
Related Links below. You can reuse all injected "connect" middleware within the application router, for example,
directly in the application start script:
Tip
The path argument is optional. You can also chain use calls.
The application router defines the following slots where you can insert custom middleware:
● first - right after the connect application is created and before any application router middleware. At this
point security checks are not performed yet.
Tip
This is a good place for infrastructure logic, for example, logging and monitoring.
● beforeRequestHandler - before standard application router request handling, that is, static resource
serving or forwarding to destinations.
Sample Code
module.exports = {
  insertMiddleware: {
    first: [
      function logRequest(req, res, next) {
        console.log('Got request %s %s', req.method, req.url);
        next(); // pass control to the next middleware in the chain
      }
    ],
    beforeRequestHandler: [
      {
        path: '/my-ext',
        handler: function myMiddleware(req, res, next) {
          res.end('Request handled by my extension!');
        }
      }
    ]
  }
};
The extension configuration can be referenced in the corresponding application's start script, as illustrated in
the following example:
Sample Code
By default the application router handles its command-line parameters, but that can be customized as well.
An <approuter> instance provides the property cmdParser that is a commander instance. It is configured
with the standard application router command line options. You can add custom options like this:
Sample Code
To disable the handling of command-line options in the application router, reset the ar.cmdParser property to
“false”, as illustrated in the following example:
ar.cmdParser = false;
Related Information
A detailed list of the features and functions provided by the application router extension API.
The application router extension API enables you to create new instances of the application router, manage the
approuter instance, and insert middleware using the Node.js “connect” framework. This section contains
detailed information about the following areas:
The application router uses the “Connect” framework for the insertion of middleware components. You can
reuse all connect middleware within the application router directly.
Tip
This is a good place to insert infrastructure logic, for example, logging and monitoring.
Tip
This is a good place to handle custom REST API requests.
Tip
This is a good place to capture or customize error handling.
Middleware Slot
Related Information
This guide explains how much Application Runtime you must purchase to run HTML5 Applications.
HTML5 Applications are provided as part of the Application Runtime SKU. To develop and run HTML5
Applications on Cloud Foundry, customers must purchase the Application Runtime.
For each gigabyte of application runtime you purchase, you receive an entitlement of 5 MBs in the HTML5
Application Repository.
The two most important points that affect the required application runtime are:
1. Concurrent users – the number of expected users that log in to the system at the same time (during peak
times)
2. Total size of the HTML5 applications (in MBs).
The average expected size of an HTML5 application is 0.5 MBs. Therefore, you can also check the number
of expected applications. It’s important to consider development, quality, and productive environments
when calculating the total size.
Using the following two tables, estimate how much Application Runtime allocation you need. Based on the
t-shirt sizes of the total number of expected applications and the expected number of concurrent users, you
must purchase the larger of the two amounts. For more information, see Examples [page 1153].
Total size of HTML5 applications:
T-Shirt Size    Total Size of HTML5 Apps (MBs)    Expected No. of SAP Fiori Apps    Required AppRuntime (GBs)
S               5                                 10                                1
M               20                                40                                4
L               40                                80                                8
XL              80                                160                               16
Expected number of concurrent users:
T-Shirt Size    Expected No. of Concurrent Users    Required AppRuntime (GBs)
S               1,000                               1
M               10,000                              4
L               30,000                              8
XL              80,000                              16
Example A
If you have 160 SAP Fiori apps, then for a total of 80 MBs in the HTML5 Application Repository, you must have
16 GBs of application runtime.
If you’re expecting 30,000 concurrent users during peak time, then you must have 8 GBs of application
runtime.
Since you must purchase the larger of the two amounts, the total amount of required application runtime you
need to purchase is 16 GBs.
Example B
● Expecting 30,000 concurrent users during peak time (requires 8 GBs of application runtime)
● Purchasing 100 GBs of application runtime to run your backend application on Cloud Foundry and
automatically receive 500 MBs of entitlements in the HTML5 Application Repository
Then, the total amount of required application runtime you need to purchase is 108 GBs.
If you have questions or encounter an issue while working with HTML5 Application Repository, there are
various ways to address them.
Create an incident in the SAP Support Portal using one of the following components:
Service Component
● Version
● Working with or without HTML5 Apps Repo
● Upload your Application Router xs-app.json file
● Upload your Application Router xs-security.json file or configuration
● Application Router environment (cf env <approuterApp> or from cockpit).
Note
Related Information
3.1.6.5 Troubleshooting
Uploading Applications
Term Description
Solution Delete or do not deploy the old app-host instance, or use another app.id in the resources folder for a new app-host.
Solution Try again after the first HTML5 Application Repository deployer has completed its upload.
Note
The quota is in MBs, and the default size limit of an app-host instance is 2 MBs.
Solution Increase the quota for the global account. Assign more quota in the Entitlements screen to the subaccount, or create or update the service instance with a lower size limit. For example:
Cause The app-host service was not entitled to the subaccount associated with this organization. However, it was entitled to another subaccount under the same global account.
Issue When trying to upload content, it fails with error: "Error while
parsing request; Error: maximum file length exceeded".
Cause The zipped application file length exceeds the size limit of
the app-host service instance.
Solution Increase the app-host size limit using the update service. For example: cf update-service my-app-host -c '{"sizeLimit": 4}'.
Issue You received the following error message after increasing the size limit and checking that enough entitlements were provided for the app-host: "Error while parsing request; Error: maximum file length exceeded".
Note
The application must not be larger than several MBs.
Issue When trying to upload content, you receive the following error: "Uploading application content failed: application's size exceeds the service plan limit".
Solution Check whether one of the causes applies to your issue. For example, check the size of your html5-app-deployer resources folder, and check manifest.json. If you are not sure whether another service instance already uses your app.id, try making a small change to your app.id and deploy it again.
Failed to run the Application Router after subscription; route not found
Running Applications
Issue Serving content from the Application Router fails with error
"Application does not have xs-app.json".
Issue Serving content from the Application Router fails with error
"Error while retrieving xsApp configuration".
Issue Serving content from the Application Router fails with error
"Unauthorized. Please check with the business service UI
provider if the requested UI is defined as public".
Cause The routes in the xs-app.json file are processed top to bottom. If a route matches the pattern, it is picked even if a route below it is a better match. The route for the HTML5 Application Repository is typically a "catch all" route; if any routes are defined below it, they are never reached.
Issue Your application does not work properly after logging out, and when you try to log back in (for example, by clicking anywhere on the application screen), nothing happens.
Cause The main page of your application, which appears after you log in, is cached by the browser. Clicking links does not reach the backend (application router), and the login process does not work.
{
  "routes": [
    {
      "source": "^/ui/index.html",
      "target": "index.html",
      "service": "html5-apps-repo-rt",
      "authenticationType": "xsuaa",
      "cacheControl": "no-cache, no-store, must-revalidate"
    }
  ]
}
Here you can find selected information for Java development in the SAP Cloud Platform Cloud Foundry
environment, with references to more detailed sources.
The SAP Java buildpack is an SAP Cloud Platform Cloud Foundry buildpack for running JVM-based applications.
The buildpack provides the following runtimes: Tomcat [page 1165], TomEE [page 1167], TomEE 7 [page 1170],
and Java Main [page 1173].
Usage
To use this buildpack, specify its name when pushing an application to Cloud Foundry.
You can also use the buildpack attribute to specify it in the manifest.yml file:
---
applications:
- name: <APP_NAME>
buildpack: sap_java_buildpack
...
...
modules:
- name: <APP_NAME>
type: java.tomcat
path: <path_to_archive>
properties:
...
parameters:
...
memory: 512M
Versioning
The SAP Cloud Platform Cloud Foundry environment provides four versions of the SAP Java Buildpack as part
of its system buildpacks:
● sap_java_buildpack - Always holds the latest available version of the SAP Java Buildpack. All new features
and fixes are provided with this version.
● sap_java_buildpack_<version_latest> - Holds the latest available version of the SAP Java Buildpack;
available for a limited timeframe (four to six weeks).
● sap_java_buildpack_<version_previous> - This version used to be latest in the previous update of the Cloud
Foundry environment; available for a limited timeframe (four to six weeks).
● sap_java_buildpack_<version_before_previous> - This version used to be latest before two updates of the
Cloud Foundry environment; available for a limited timeframe (four to six weeks).
Important considerations about the usage of the different versions of SAP Java Buildpack
● If you always use sap_java_buildpack - this is the way to go in order to take advantage of any new features
and fixes in the SAP Java buildpack, and the buildpack is guaranteed to always be available. The drawback
is the limited time for any adoption that might be needed. In such a scenario, applications can temporarily
fall back to an older version to avoid any downtime.
● If you pin the version of the buildpack - be aware that this version exists only for a limited amount of time.
This may lead to a situation where a restage fails because the pinned version of the buildpack is no longer
available. To avoid this, follow the updates of the buildpack, test the application with the newest buildpack
so that it can be adopted in time if adoption is required, and update the version regularly. Never allow your
application to run on an outdated buildpack version.
For clarity, below is an example of how a version update takes place:
Provided that the latest version of the SAP Java Buildpack is 1.2.3, the output of the cf buildpacks command
would be:
When the SAP Java buildpack is updated on the Cloud Foundry environment from v.1.2.3 to v.1.2.4 the list will
change to:
Note
No fixes will be provided to the older versions of the buildpack. Fixes, including security fixes, will be part of
the latest version.
---
applications:
- name: application-name
memory: 128M
path: ./target/application-name.war
instances: 1
buildpack: sap_java_buildpack_<version_suffix>
...
modules:
- name: application-name
type: java.tomcat
path: ./target/application-name.war
properties:
...
parameters:
...
memory: 512M
buildpack: sap_java_buildpack_<version_suffix>
...
Components
The SAP Java buildpack provides the following components (containers, JREs, frameworks) in the application
container (<APP_ROOT_DIR>/app/META-INF/.sap_java_buildpack):
● Runtime (Tomcat [page 1165], TomEE [page 1167], TomEE 7 [page 1170], Java Main [page 1173])
● Memory Calculator V1 (SAP JVM Memory Calculator) [page 1180] - by default
● Memory Calculator V2 [page 1179] - optional
● SAP JVM
● Log Level Client
● JVMKill Agent
The following application containers are available for usage with the SAP Java Buildpack.
3.1.7.1.1 Tomcat
By default, web applications pushed with the SAP Java buildpack run in an Apache Tomcat container.
Applications can explicitly define the targeted application container by using the TARGET_RUNTIME
environment variable in the application's manifest.yml file.
Sample Code
manifest.yml
---
applications:
- name: <APP_NAME>
...
env:
TARGET_RUNTIME: tomcat
Provided APIs
The tomcat application runtime container provides the following standard APIs:
The SAP Java Buildpack provides some default configurations for the Tomcat application container, which can
be customized by the application with the Resource Configuration [page 1185] feature.
Below is a list of all the placeholders that can be customized by the application, along with their default
values:
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/conf/server.xml':
{'connector.maxHttpHeaderSize':1024}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/conf/server.xml':
{'connector.maxThreads':800}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/conf/server.xml':
{'connector.allowTrace':true}]"
The SAP Java Buildpack provides default configurations with unlimited sessions for the Tomcat application
container, which can be customized by the application with the Resource Configuration [page 1185] feature.
To limit the number of active sessions, set the maxActiveSessions attribute on a Manager element, for example:
<Context>
<Manager maxActiveSessions="500" />
</Context>
To set the session timeout of active sessions, set the session-config element in the application's web.xml:
<session-config>
<session-timeout>1</session-timeout>
</session-config>
The default value of context path in server.xml is "" (Empty String). You can override this default value using
app_context_root in the application manifest.yml file. For example:
...
env:
JBP_CONFIG_TOMCAT: "[tomcat:{app_context_root: test_context_path}]"
...
3.1.7.1.2 TomEE
By default, web applications pushed with the SAP Java buildpack run in an Apache Tomcat container.
Applications can explicitly define Apache TomEE as the targeted application container by using the
TARGET_RUNTIME environment variable in the application's manifest.yml file.
Sample Code
manifest.yml
---
applications:
- name: <APP_NAME>
...
env:
TARGET_RUNTIME: tomee
The tomee application runtime container provides the following standard APIs:
The SAP Java buildpack provides some default configurations for the Apache TomEE application container
which could be customized by the application with the Resource Configuration [page 1185] feature.
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/conf/server.xml':
{'connector.maxHttpHeaderSize':1024}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/conf/server.xml':
{'connector.maxThreads':800}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/conf/server.xml':
{'connector.allowTrace':true}]"
The SAP Java Buildpack provides default configurations with unlimited sessions for the TomEE application
container, which can be customized by the application with the Resource Configuration [page 1185] feature.
To limit the number of active sessions, set the maxActiveSessions attribute on a Manager element, for example:
<Context>
<Manager maxActiveSessions="500" />
</Context>
To set the session timeout of active sessions, set the session-config element in the application's web.xml:
<session-config>
<session-timeout>1</session-timeout>
</session-config>
The default value of context path in server.xml is "" (Empty String). You can override this default value using
app_context_root in the application manifest.yml file. For example:
...
env:
JBP_CONFIG_TOMCAT: "[tomee:{app_context_root: test_context_path}]"
...
3.1.7.1.3 TomEE 7
By default, web applications pushed with the SAP Java buildpack run in an Apache Tomcat container.
Applications can explicitly define Apache TomEE 7 as the targeted application container by using the
TARGET_RUNTIME environment variable in the application's manifest.yml file.
Sample Code
manifest.yml
---
applications:
- name: <APP_NAME>
...
env:
TARGET_RUNTIME: tomee7
The tomee7 application runtime container provides the following standard APIs:
WebSocket 1.1
Interceptors 1.2
The SAP Java Buildpack provides some default configurations for the Apache TomEE 7 application container
which could be customized by the application with the Resource Configuration [page 1185] feature.
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee7/conf/server.xml':
{'connector.maxHttpHeaderSize':1024}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee7/conf/server.xml':
{'connector.maxThreads':800}]"
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee7/conf/server.xml':
{'connector.allowTrace':true}]"
The SAP Java Buildpack provides default configurations with unlimited sessions for the TomEE 7 application
container, which can be customized by the application with the Resource Configuration [page 1185] feature.
To limit the number of active sessions, set the maxActiveSessions attribute on a Manager element, for example:
<Context>
<Manager maxActiveSessions="500" />
</Context>
To set the session timeout of active sessions, set the session-config element in the application's web.xml:
<session-config>
<session-timeout>1</session-timeout>
</session-config>
The default value of context path in server.xml is "" (Empty String). You can override this default value using
app_context_root in the application manifest.yml file. For example:
...
env:
JBP_CONFIG_TOMCAT: "[tomee7:{app_context_root: test_context_path}]"
...
You can create a Java application that starts its own runtime. This allows the usage of frameworks and Java
runtimes such as Spring Boot, Jetty, Undertow, or Netty.
Prerequisites
● You are not using the Resource Configuration [page 1185] feature of the buildpack.
Note
The resource configurations needed for the database connection are not applicable for the Java Main
applications. For more information about database connections, see Related Information.
Context
In this section, such applications are referred to as Java Main applications. The application container
provided by the SAP Java Buildpack for running Java Main applications is referred to as the Java Main container.
Regardless of the tool you use to build your Java application, you have to make sure that the following tasks
are performed:
Sample Code
Manifest.MF
Manifest-Version: 1.0
Archiver-Version: Plexus Archiver
Built-By: p123456
Created-By: Apache Maven 3.3.3
Build-Jdk: 1.8.0_45
Main-Class: com.companya.xs.java.main.Controller
c. You have packaged all your dependent libraries in the JAR file, also known as creating an uber JAR or a
fat JAR.
If you are using Maven as your build tool, you can use the maven-shade-plugin to perform the above
tasks.
Sample Code
<build>
<finalName>java-main-sample</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
Sample Code
manifest.yml
---
applications:
- name: java-main
memory: 128M
path: ./target/java-main-sample.jar
instances: 1
Example
Donna Moore would like to create a Java Main application that uses its own runtime. For that purpose, she
performs the following steps:
path: ./target/sample_main.jar
6. Finally, she pushes the Java Main application with the following command:
cf push sample_main
You can configure the Java properties by defining the JBP_CONFIG_JAVA_OPTS environment variable.
Defining the JBP_CONFIG_JAVA_OPTS environment variable in the manifest.yml file of the application.
Sample Code
manifest.yml
---
applications:
- name: <app-name>
memory: 512M
...
env:
JBP_CONFIG_JAVA_OPTS: '[from_environment: false, java_opts: ''-
DtestJBPConfig=^%PATH^% -DtestJBPConfig1="test test" -DtestJBPConfig2="%PATH
%"'']'
Defining the JBP_CONFIG_JAVA_OPTS environment variable by using the cf set-env command of the Cloud
Foundry Command Line Interface (cf CLI).
Escaping Strings
When a single quote (') is used, other single quotes in the string must be escaped by doubling them ('').
Sample Code
manifest.yml
---
applications:
- name: <app-name>
memory: 512M
...
env:
JBP_CONFIG_JAVA_OPTS: '[from_environment: false, java_opts: ''-
DtestJBPConfig=\$PATH -DtestJBPConfig1="test test" -
DtestJBPConfig2="$PATH"'']'
The SAP Java Buildpack prints a histogram of the heap to the logs when the JVM encounters a terminal failure.
In addition, if the application is bound to a volume service with the name or tag heap-dump, a heap dump file
is also generated and stored in the mounted volume.
For more information about the jvmkill agent, see the Cloud Foundry jvmkill documentation.
3.1.7.2.3 SapMachine
SapMachine is an alternative to SAP JVM. It provides a Java Runtime Environment (JRE) with Java 11, while
SAP JVM provides a JRE with Java 8. It works only with the Tomcat [page 1165] and Java Main [page 1173]
application containers.
Activation
To activate SapMachine (instead of using the default SAP JVM) using the SAP Java Buildpack, you have to add
the following environment variable:
---
applications:
- name: <app-name>
...
env:
JBP_CONFIG_COMPONENTS: "jres:
['com.sap.xs.java.buildpack.jdk.SAPMachineJDK']"
...
The memory calculator provides a mechanism to fine tune the Java Virtual Machine (JVM) memory for an
application. Its goal is to ensure that applications perform well while not exceeding a container's memory limit.
The SAP Java Buildpack provides two options for a memory calculator:
---
applications:
- name: <app-name>
...
env:
MEMORY_CALCULATOR_V1: true
...
Customization
---
applications:
- name: <app-name>
...
env:
JBP_CONFIG_JAVA_OPTS: "[java_opts: '-Xms144M -Xss3M -Xmx444444K -
XX:MetaspaceSize=66666K -XX:MaxMetaspaceSize=88888K']"
The memory_calculator_v2 section in the config/sapjvm.yml [page 1180] configuration file has two sizing options:
stack_threads (an estimate of the number of threads that will be used by the application) and class_count (an
estimate of the number of classes that will be loaded). You can customize those two memory options by using
the JBP_CONFIG_SAPJVM environment variable:
---
applications:
- name: <app-name>
...
env:
Related Information
General Information
When pushing applications to Cloud Foundry, application developers can specify the memory limit of the
application.
The main goal of the SAP JVM Memory Calculator is to provide a mechanism to fine-tune the Java Virtual
Machine (JVM) by restricting the JVM's memory so that it stays below this memory limit.
There are three memory types that can be sized: heap, metaspace, and stack. For each memory type there is
a command-line option that must be passed to the JVM:
● The initial and maximum size of the heap memory is controlled by the -Xms and -Xmx options respectively
● The initial and maximum size of the metaspace memory is controlled by the -XX:MetaspaceSize and -
XX:MaxMetaspaceSize options respectively
● The size of the stack is controlled by the -Xss option
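For example, with illustrative values (not buildpack defaults), the resulting options passed to the JVM might look like this:

```
-Xms400M -Xmx400M -XX:MetaspaceSize=100M -XX:MaxMetaspaceSize=100M -Xss1M
```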
Default Settings
The SAP Java Buildpack is delivered with a default built-in configuration of the memory sizing options in YML
format - config/sapjvm.yml (path relative to the buildpack archive). That configuration file is parsed during
application staging and the memory configuration specified in it is used for calculating the memory sizes of the
heap, metaspace and stack.
Sample Code
config/sapjvm.yml
---
repository_root: "{default.repository.root}/sapjvm/{platform}/{architecture}"
version: +
default_keystore_pass: changeit
memory_calculator:
version: 1.+
repository_root: "{default.repository.root}/memory-calculator/{platform}/
{architecture}"
The memory_calculator section encloses the input data for the memory calculation techniques utilized in
determining the JVM memory sizing options.
● heap - configure sizing options for the java heap. Affects -Xms,-Xmx JVM options
● metaspace - configure sizing options for the metaspace. Affects -XX:MetaspaceSize, -
XX:MaxMetaspaceSize JVM options
● stack - configure sizing options for the stack. Affects -Xss JVM option
● native - serves to represent the rest of the memory (different to heap, stack, metaspace) in the calculations
performed by the SAP JVM Memory Calculator. No JVM options are affected from this setting.
● memory_heuristics - this section defines the proportions between the memory regions addressed by the
memory calculator. The ratios above will result in a heap space which is about 7.5 times larger than the
metaspace and native; stack will be about half of the metaspace size.
● memory_sizes - this section defines sizes of the corresponding memory regions. The size of the memory
region could be specified in kilobytes (by using the K symbol), megabytes (by using the M symbol) and
gigabytes (by using the G symbol).
● memory_initials - this section defines initial and maximum values for heap and metaspace. By default, the
initial values for heap and metaspace are set to 100%, meaning that the memory calculation results in
-Xms=-Xmx and -XX:MetaspaceSize=-XX:MaxMetaspaceSize. If those values are set to 50%, then -Xmx = 2*-
Xms and -XX:MaxMetaspaceSize = 2*-XX:MetaspaceSize.
● memory_settings - for details see OutOfMemory error
● memory_calculator_v2 - for details see Memory Calculator V2
There are two possible ways to customize the default settings: during application staging and during application runtime.
---
applications:
- name: <app-name>
  memory: 512M
  ...
  env:
    JBP_CONFIG_SAPJVM: "[memory_calculator: {memory_heuristics: {heap: 85, stack: 10}}]"
---
applications:
- name: <app-name>
  memory: 512M
  ...
  env:
    JBP_CONFIG_SAPJVM_MEMORY_SIZES: 'heap:30m..400m,stack:2m..,metaspace:10m..12m'
    JBP_CONFIG_SAPJVM_MEMORY_WEIGHTS: 'heap:5,stack:1,metaspace:3,native:1'
    JBP_CONFIG_SAPJVM_MEMORY_INITIALS: "heap:50%,metaspace:50%"
There are several ways to customize the SAP JVM Memory Calculator's settings during the application runtime, and all of them involve using the set-env command, either on XS advanced or on Cloud Foundry. Below are some examples:
Given a certain memory limit, the memory calculator tries to calculate memory sizes for heap, metaspace, stack, and native that satisfy the proportions configured with memory_heuristics. The calculator then validates and adjusts those sizes against the configured memory_sizes. Say the calculated value for heap according to memory_heuristics is 120M, and the memory_sizes configuration for heap is 10M..100M. 120M goes beyond the configured range, so even though 120M could be allocated for the heap according to memory_heuristics, the chosen heap size is 100M. The calculation continues until sizes are calculated for all of the regions: heap, metaspace, stack, and native. Finally, memory_initials is applied, which affects the values for -Xms, -Xmx, -XX:MetaspaceSize, and -XX:MaxMetaspaceSize.
Example: 1G memory limit with the default configuration
First, the algorithm estimates the number of threads for the given memory_limit and memory_heuristics. With a stack weight of 5, the stack region gets (5/100) * 1024M = 51.2M. Assuming 1M per thread (which is the default -Xss size of the SAP JVM), we get estimated_number_of_threads = 51.2.
Example: 1G memory limit with the following configuration:
memory_sizes:
  metaspace: 64m..70m
memory_heuristics:
  heap: 75
  metaspace: 10
  stack: 5
  native: 10
memory_initials:
  heap: 100%
  metaspace: 50%
First, the algorithm estimates the number of threads for the given memory_limit and memory_heuristics. With a stack weight of 5, the stack region gets (5/100) * 1024M = 51.2M. Assuming 1M per thread (which is the default -Xss size of the SAP JVM), we get estimated_number_of_threads = 51.2.
Round 1:
According to the memory_heuristics, the metaspace would get (10/100) * 1024M = 102.4M. This exceeds the configured memory_sizes range of 64m..70m, so the metaspace size is set to 70M.
Round 2:
The metaspace is already calculated, so its size is subtracted from the memory limit; the memory available for distribution is now (1G - 70M) = 954M. The same calculation takes place for the remaining regions; this time the metaspace is not considered because it is already calculated.
Related Information
The following example shows how an application can override Tomcat's default server.xml, given the following structure in the application files:
META-INF/
- sap_java_buildpack/
  - resources/
    - tomcat/
      - conf/
        - server.xml
WEB-INF/
During staging, the buildpack copies the file to its respective place in the droplet:
META-INF/
- .sap_java_buildpack/
  - tomcat/
    - conf/
      - server.xml
WEB-INF/
This means that when the Tomcat container is started, the server.xml file used is the one coming from the application.
This mechanism also allows new resources to be added to their respective places inside the droplet, based on where the files are located under META-INF/sap_java_buildpack/resources/.
Both application containers, Tomcat [page 1165] and TomEE [page 1167], are configured through text configuration files, for example conf/server.xml, conf/tomee.xml, or conf/logging.properties (Apache Tomcat 8 Configuration Reference ). The resource configuration feature of the SAP Java buildpack provides a means of changing parameterized values, during staging, in any text file that is part of the application container or of the application files.
Example resource_configuration.yml:
---
filepath:
  key1: value1
  key2: value2
filepath2:
  key1: value1
The filepath must start with either tomcat or tomee to match the used application container, and then contain a subpath to a resource.
When changing parameterized values inside application files, the file path should look like this:
<application_container>/webapps/ROOT/<path_to_file_relative_to_the_root_of_the_application>
When changing parameterized values inside the application container's configuration, the file path should look like this:
<application_container>/conf/<path_to_config_file>
Example:
tomcat/webapps/ROOT/WEB-INF/mytextfile.txt:
  placeholder: "defaultValue"
tomcat/conf/server.xml:
  connector.maxThreads: 800
Example:
src/
  main/
    java/
    resources/
    webapp/
      META-INF/
        sap_java_buildpack/
          config/
            resource_configuration.yml
      WEB-INF/
        mytextfile.txt
My hometown is ${hometown}.
---
tomcat/webapps/ROOT/WEB-INF/mytextfile.txt:
hometown: "London"
My hometown is London.
The example below demonstrates how to provide values for placeholders during staging. Given the example above, add the following environment variable in the manifest.yml file of the application:
env:
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/webapps/ROOT/WEB-INF/mytextfile.txt': {'hometown': 'Paris'}]"
This makes it possible to provide values for placeholders that differ from the defaults provided in the example above. Staging the application with this environment variable modifies the content of mytextfile.txt to:
My hometown is Paris.
When you call a web application without adding its runtime (Tomcat [page 1165], TomEE [page 1167], or TomEE 7 [page 1170]) context path to the URL, the context path is automatically appended.
Example:
If you have configured /test_context_path as the context path and the web application is available on /test_app, then a call to:
<HOST>:<PORT>/test_app
is resolved to:
<HOST>:<PORT>/test_context_path/test_app
The default context path value for Tomcat, TomEE, and TomEE 7 is "" (empty string). For details on how to change this default value, see Tomcat [page 1165], TomEE [page 1167], and TomEE 7 [page 1170].
Context
In order to produce application logs and forward them to the Kibana (ELK) stack, the application must be bound to the application logging service.
2. Add the com.sap.hcp.cf.logging.servlet.filter.RequestLoggingFilter filter to the web.xml if request metrics should be generated for applications using the Tomcat/TomEE application containers.
<filter>
<filter-name>request-logging</filter-name>
<filter-class>com.sap.hcp.cf.logging.servlet.filter.RequestLoggingFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>request-logging</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Note
Applications can change the logging level of a specific location by setting the SET_LOGGING_LEVEL environment variable in the manifest.yml file of the application.
env:
  SET_LOGGING_LEVEL: '{com.sap.sample.java.LogInfo: INFO, com.sap.sample.java.LogWarn: WARN}'
Related Information
Configure the collection of log and trace messages generated by a Java application in the Cloud Foundry
Environment.
Prerequisites
Caution
The JARs for SLF4J and logback are included as part of the Tomcat and TomEE runtimes provided by the SAP Java buildpack. Packaging them in the application could cause problems during class loading.
Note
It is the application's responsibility to ensure that logback is configured in a secure way in cases where the application overrides the default logback configuration included in the SAP Java buildpack, for example the logback appenders.
Context
The recommended framework for logging is Simple Logging Facade for Java (SLF4J). To use this framework,
you can create an instance of the org.slf4j.Logger class. You can use it to configure your Java application to
generate logs and traces, and if appropriate set the logging or tracing level.
Procedure
1. Instruct Maven not to package the SLF4J dependency with the application.
This dependency is already provided by the runtime; use the provided scope so that the slf4j-api is not packed in your application.
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.12</version>
<scope>provided</scope>
</dependency>
2. Import the SLF4J classes into the application code (org.slf4j.Logger and org.slf4j.LoggerFactory) and
develop the log handling code.
Sample Code
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public void logging() {
    final Logger logger = LoggerFactory.getLogger(Log4JServlet.class);
    logger.debug("this is record logged through SLF4J param1={}, param2={}.", "value1", 1);
    logger.info("This is slf4j info");
}
Related Information
Debugging an application helps you detect and diagnose errors in your code. You can control the execution of
your program by setting breakpoints, suspending threads, stepping through the code, and examining the
contents of the variables.
You can debug an application running on a Cloud Foundry container that is using SAP JVM. By using SAP JVM,
you can enable debugging on-demand without having to restart the application or the JVM.
Prerequisites
● Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. See
Download and Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud
Foundry Environment Using the Cloud Foundry Command Line Interface [page 1770].
● Deploy your application using the SAP Java Buildpack. From the cf CLI, execute cf push <app name> -
p <my war file> -b sap_java_buildpack.
● Ensure that SSH is enabled for the application. See https://docs.cloudfoundry.org/devguide/deploy-apps/
ssh-apps.html
SAP JVM is included in the SAP Java Buildpack. With SAP JVM, you can enable debugging on-demand. You
do not need to set any debugging parameters.
Context
After enabling the debugging port, you need to open an SSH tunnel, which connects to that port.
Procedure
1. To enable debugging or to check the debugging state of your JVM, run jvmmon in your Cloud Foundry
container by executing cf ssh <app name> -c "app/META-INF/.sap_java_buildpack/
sapjvm/bin/jvmmon".
2. From the jvmmon command line window, execute start debugging.
3. (Optional) To confirm that debugging is enabled and see which port is open, execute print debugging
information.
Note
Prerequisites
Download and Install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. See
Download and Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Context
To debug an application you need to open a debugging port on your Cloud Foundry containter and open an SSH
tunnel that will connect to that port.
Procedure
1. To open the debugging port, you need to configure the JAVA_OPTS parameter of your JVM. By default, the agentlib that enables debugging is preconfigured. It can be turned off by setting the following property in the environment:
env:
  ...
  DISABLE_JDWP: true
You can also override the default configuration by specifying it manually. Set the following option in the application manifest file:
JBP_CONFIG_JAVA_OPTS: "[java_opts: '-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n,onjcmd=y']"
You may change the default port 8000 in the command.
2. To enable debugging or to check the debugging state of your JVM, run jcmd in your Cloud Foundry container by executing cf ssh <app name> -c "export JAVA_PID=$(ps -C java -o pid=) && /app/META-INF/.sap_java_buildpack/sap_machine_jdk/bin/jcmd $JAVA_PID VM.start_java_debugging". The following is an example of the information displayed by jcmd:
<pid>:
Debugging has been started.
Transport : dt_socket
Address : 8000
Note
Caution
The connection is active until you close the SSH tunnel. After you have finished debugging, close the
SSH tunnel by pressing Ctrl + C . Connect a Java debugger to your application. For example, use
the standard Java debugger provided by Eclipse IDE and connect to localhost:8000.
You can debug an application running on a Cloud Foundry container that is using the standard Java community
build pack.
Prerequisites
Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. For
more information, see Download and Install the Cloud Foundry Command Line Interface [page 1769] and Log
On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 1770].
Context
To debug an application you need to open a debugging port on your Cloud Foundry container and open an SSH
tunnel that will connect to that port.
Procedure
1. To open the debugging port, you need to configure the JAVA_OPTS parameter in your JVM. From the cf
CLI, execute cf set-env <app name> JAVA_OPTS '-
Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000'.
2. To enable SSH tunneling for your application, execute cf enable-ssh <app name>.
3. To activate the previous changes, restart the application by executing cf restart <app name>.
4. To open the SSH tunnel, execute cf ssh <app name> -N -T -L 8000:127.0.0.1:8000.
Your local port 8000 is connected to the debugging port 8000 of the JVM running in the Cloud Foundry
container.
Caution
The connection is active until you close the SSH tunnel. After you have finished debugging, close the
SSH tunnel by pressing Ctrl + C .
5. Connect a Java debugger to your application. For example, use the standard Java debugger provided by
Eclipse IDE and connect to localhost:8000.
Context
Dynatrace OneAgent is a Java agent that sends all captured monitoring data to your monitoring environment for analysis. For the Cloud Foundry environment, the monitoring environment resides in the Dynatrace cloud; see How Do I Monitor Cloud Foundry Applications .
Procedure
1. Generate a PaaS token by following the steps described in the Generate PaaS Token section in How Do I Monitor Cloud Foundry Applications .
2. Create a user-provided service.
The service name must contain the string dynatrace (for example, dynatrace-service). Provide the environmentid and apitoken generated in the previous step. The apiurl should be set to https://<YourDynatraceServerURL>/e/<environmentid>/api.
For more details on how to create the user-provided service, refer to Option 1: Create a user-provided service in the Create a Dynatrace service in your Cloud Foundry Environment section in How Do I Monitor Cloud Foundry Applications .
3. Bind the application to the service instance created in the previous step, using the manifest.yml file.
Sample Code
manifest.yml
---
applications:
- name: myapp
  memory: 1G
  instances: 1
You can configure your application to use a database connection so that the application can persist its data.
This configuration is applicable for the Tomcat [page 1165], TomEE [page 1167] and TomEE 7 [page 1170]
application containers.
● Configure a Database Connection for the Tomcat Application Container [page 1195]
● Configure a Database Connection for the TomEE Application Container [page 1196]
● Database Connection Configuration Details [page 1199]
● SAP HANA HDI Data Source Usage [page 1201]
Procedure
The context.xml file must be inside the META-INF/ folder of the application WAR file and must contain information about the data source to be used.
---
tomcat/webapps/ROOT/META-INF/context.xml:
  service_name_for_DefaultDB: di-core-hdi
env:
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/webapps/ROOT/META-INF/context.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}]"
Procedure
The context.xml file must be inside the META-INF/ folder of the application WAR file and must contain information about the data source to be used.
---
tomee/webapps/ROOT/META-INF/context.xml:
  service_name_for_DefaultDB: di-core-hdi
env:
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/webapps/ROOT/META-INF/context.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}]"
Results
Procedure
Note
If the data source is to be used from a Web application, you have to create the file inside the WEB-INF/
directory.
If the data source is to be used from Enterprise JavaBeans (EJBs), you have to create the file inside the
META-INF/ directory.
The resources.xml file has to be inside the application's WAR file and has to contain information about
the data source to be used.
Sample Code
---
tomee7/webapps/ROOT/WEB-INF/resources.xml:
  service_name_for_DefaultDB: di-core-hdi
Sample Code
---
tomee7/webapps/ROOT/META-INF/resources.xml:
  service_name_for_DefaultDB: di-core-hdi
Sample Code
Defining a new service for the look-up of the data source in a Web application
env:
  TARGET_RUNTIME: tomee7
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee7/webapps/ROOT/WEB-INF/resources.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}]"
Sample Code
Defining a new service for the look-up of the data source in an EJB
env:
  TARGET_RUNTIME: tomee7
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee7/webapps/ROOT/META-INF/resources.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}]"
Results
Related Information
Define the details of the database connection used by your Java web application running in the Cloud Foundry environment with the SAP Java Buildpack.
Configuration Files
To configure your application to establish a connection to the SAP HANA database, you must specify details of
the connection in a configuration file. For example, you must define the data source that the application will use
to discover and look up data. The application then uses a Java Naming and Directory Interface (JNDI) to look
up the specified data in the file.
The easiest way to define the required data source is to declare the keys for the data source in a resource file.
For the Tomcat Application Container, you can create a context.xml file in the META-INF/ directory with the
following content:
Sample Code
context.xml
For the TomEE Application Container, you can create a resources.xml file in the WEB-INF/ directory with the
following content:
Sample Code
resources.xml
The problem with this simple approach is that your WAR file cannot be signed, and any modifications can only be made in the WAR file itself. For this reason, it is recommended that you do not use this method in a production environment, but rather take advantage of the Resource Configuration [page 1185] feature of the SAP Java Buildpack. Use the modification settings in resource_configuration.yml and manifest.yml as illustrated in the following examples.
resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
service_name_for_DefaultDB: di-core-hdi
Specifying a default name for a service is useful (for example, for automation purposes) only if you are sure
there can be no conflict with other names. For this reason, it is recommended that you include a helpful and
descriptive error message instead of a default value. That way the error message will be part of an exception
triggered when the data source is initialized, which helps troubleshooting.
Sample resource_configuration.yml.
Sample Code
---
tomcat/webapps/ROOT/META-INF/context.xml:
  service_name_for_DefaultDB: Specify the service name for Default DB in manifest.yml via "JBP_CONFIG_RESOURCE_CONFIGURATION".
Placeholders
The generic mechanism JBP_CONFIG_RESOURCE_CONFIGURATION basically replaces the key values in the specified files. For this reason, if you use placeholders in the configuration files, it is important to ensure that you use unique names for the placeholders; see Resource Configuration [page 1185].
Sample context.xml.
Sample Code
context.xml
The names of the defined placeholders are also used in the other resource files.
Sample Code
resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
  service_name_for_DefaultDB: di-core-hdi
  max_Active_Connections_For_DefaultDB: 100
  service_name_for_DefaultXADB: di-core-hdi-xa
  max_Active_Connections_For_DefaultXADB: 100
Sample manifest.yml.
Sample Code
manifest.yml
env:
  JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/webapps/ROOT/META-INF/context.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi', 'max_Active_Connections_For_DefaultDB' : '30', 'service_name_for_DefaultXADB' : 'my-local-special-xa-di-core-hdi', 'max_Active_Connections_For_DefaultXADB' : '20'}]"
Procedure
1. Create a service instance for the HDI container, using the cf create-service command.
2. Specify the service instance in the Java application's deployment manifest file.
Sample Code
manifest.yml
services:
- my-hdi-container
3. Add the resource reference to the web.xml file; the resource reference name must match the name of the service instance.
web.xml
<resource-ref>
<res-ref-name>jdbc/my-hdi-container</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
You can find the reference to the data source in the following ways:
○ Annotations
@Resource(name = "jdbc/my-hdi-container")
private DataSource ds;
The SAP Java buildpack provides an option to use the SAP Java Connector.
Activation
The SAP Java buildpack provides an option to use the SAP Java Connector (SAP JCo) .
To activate SAP JCo in the SAP Java buildpack, the application needs to be bound to connectivity and
destination services, see Create and Bind a Connectivity Service Instance [page 98] and Create and Bind a
Destination Service Instance [page 101]:
Sample Code
manifest.yml
---
applications:
- name: <APP_NAME>
...
services:
- connection_service_instance
- destination_service_instance
The activation of SAP JCo provides all relevant libraries in the application container so that they can be used.
It is not possible to use the SAP Java Connector with Spring Boot applications.
Related Information
The SAP JVM Profiler is a tool that helps you analyze the resource consumption of a Java application running
on SAP Java Virtual Machine (JVM). You can use it to profile simple stand-alone Java programs or complex
enterprise applications.
Prerequisites
Install the SAP JVM Tools for Eclipse. For more information, see Install the SAP JVM Tools in Eclipse [page
1204].
Open a debugging connection using an SSH tunnel. For more information, see Debug an Application Running
on SAP JVM [page 1190]
Context
To measure resource consumption, the SAP JVM Profiler enables you to run different profiling analyses. Each
profiling analysis creates profiling events that focus on different aspects of resource consumption. For more
information about the available analyses, see the SAP JVM Profiler documentation in Eclipse. Go to Help
Help Contents SAP JVM Profiler .
Procedure
4. Choose the analysis you want to run and specify your profiling parameters.
Note
To use the thread annotation filters, complete the fields under the Analysis Options section. By default,
all filters are set to *, which means that all annotations pass the filter.
Procedure
2. In the Work with combo box enter https://tools.hana.ondemand.com/oxygen and choose SAP Cloud
Platform Tools SAP JVM Tools .
3. Choose Next and follow the installation wizard.
You can add your SAP JVM to the VM Explorer view of the SAP JVM Profiler.
Prerequisites
Install the SAP JVM Tools for Eclipse. For more information, see Install the SAP JVM Tools in Eclipse [page
1204].
Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. For
more information, see Download and Install the Cloud Foundry Command Line Interface [page 1769] and Log
On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 1770].
From the VM Explorer, you can debug, monitor or profile your SAP JVM. For more information about the VM
Explorer, see the SAP JVM Profiler documentation in Eclipse. Go to Help Help Contents SAP JVM
Profiler .
Note
You need to use the jvmmond tool, which is included in the SAP Java Buildpack.
Procedure
This starts the jvmmond service on your Cloud Foundry container, listening on port 50003. The command also specifies a port range of 50004-50005 in case additional ports need to be opened.
2. To enable an SSH tunnel for these ports, execute cf ssh <app name> -N -T -L
50003:127.0.0.1:50003 -L 50004:127.0.0.1:50004 -L 50005:127.0.0.1:50005.
3. In Eclipse, open the Profiling perspective and from the VM Explorer view, choose Manage Hosts.
4. Choose Add and enter the following host name and port number: localhost:50003.
The versions of the SAP Java buildpack dependencies, and of the APIs provided by the supported runtime containers, can be consumed through a Bill of Materials (BOM). The BOM can be used to control the versions of a project's dependencies.
Usage
There are three options for using a BOM, depending on the application runtime: Tomcat, TomEE, or TomEE 7.
Note
The versions of BOM artifacts are related to the versions of the SAP Java buildpack. For example, if your
application uses SAP Java buildpack version 1.9.1, then you have to use BOM version 1.9.1 as well.
● For Tomcat:
...
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.sap.cloud.sjb.cf</groupId>
<artifactId>cf-tomcat-bom</artifactId>
<version>1.9.1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
...
● For TomEE:
...
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.sap.cloud.sjb.cf</groupId>
<artifactId>cf-tomee-bom</artifactId>
<version>1.9.1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
...
● For TomEE 7:
...
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.sap.cloud.sjb.cf</groupId>
<artifactId>cf-tomee7-bom</artifactId>
<version>1.9.1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
...
Use groupId and artifactId to add the dependencies that you want.
...
<dependency>
<groupId>com.sap.cloud.sjb</groupId>
<artifactId>xs-env</artifactId>
</dependency>
<dependency>
<groupId>com.sap.cloud.sjb</groupId>
<artifactId>xs-user-holder</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jul-to-slf4j</artifactId>
This section offers selected information for Node.js development on the SAP Cloud Platform Cloud Foundry
environment and references to more detailed sources.
In this section about Node.js development, you get information about the buildpack supported by SAP and
about the Node.js packages, and how you consume them in your application.
There is also a small tutorial with an introduction to securing your application, and some tips and tricks for
developing and running Node.js applications on the Cloud Foundry environment.
Node.js Buildpack
SAP Cloud Platform uses the standard Node.js buildpack provided by Cloud Foundry to deploy Node.js
applications.
To get familiar with the buildpack and how to deploy applications with it, take a look at the Cloud Foundry
Node.js Buildpack documentation .
You can download and consume SAP-developed Node.js packages via the SAP NPM Registry. The SAP HANA Developer Guide for XS Advanced Model contains an overview of the Node.js packages developed by SAP, what they are meant for, and where they are included. As the Node.js packages used for development in Cloud Foundry are similar to, or the same as, those in SAP HANA XS advanced, the links guide you to the SAP HANA Developer Guide for XS Advanced Model.
Additional information:
The tutorial will guide you through creating a Node.js application, and setting up authentication and
authorization checks. This is by no means a setup for productive use, but you get to know the basics and links
to some further reading.
For selected tips and tricks for your Node.js development, see Tips and Tricks for Node.js Applications [page
1218].
This tutorial will guide you through creating and setting up a sample Node.js application by using the Cloud
Foundry command line interface (cf CLI).
Prerequisites
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page
1769].
● Node.js is installed and configured locally, see npm documentation .
● The SAP npm registry is configured, see npm registry.
Context
You will start by building and deploying a simple web application that returns some sample data.
Procedure
---
applications:
- name: myapp
  host: <host>
  path: myapp
  memory: 128M
Exchange the <host> with a unique name so that it does not clash with other deployed applications.
This configuration is used to describe the applications and how they will be deployed.
Tip
For information about the manifest.yml file, see Deploying with Application Manifests .
3. Create a new directory inside node-tutorial named myapp and change the current directory to myapp
with cd myapp.
4. Execute npm init.
This will walk you through creating a package.json file in the myapp directory.
5. Run npm install express --save.
This will add the express package as a dependency in the package.json file. After the installation is
complete the content of the package.json should look similar to this:
{
"name": "myapp",
"version": "1.0.0",
"description": "My App",
"main": "index.js",
"scripts": {
"test": "<test>"
},
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.15.3"
}
}
6. Add engines and update scripts sections to the package.json so it looks similar to this:
{
"name": "myapp",
"description": "My App",
"version": "0.0.1",
"private": true,
"dependencies": {
"express": "^4.15.3"
},
"engines": {
"node": "10.x.x"
},
"scripts": {
"start": "node start.js"
}
}
7. Inside the myapp directory create another file called start.js with the following content:
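A minimal start.js matching the description below might look like this. This is a sketch of an Express server answering every request with "Hello World"; the tutorial's exact original listing may differ:

```javascript
const express = require('express');
const app = express();

// Answer every request with the sample response.
app.get('/', (req, res) => {
  res.send('Hello World');
});

// Cloud Foundry injects the port to listen on via the PORT variable.
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log('myapp listening on port ' + port);
});
```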
This creates a very simple web application that returns "Hello World" as a response. Express is one of the most widely used Node.js modules (for serving web content), and it is the web server part of this application. After these steps are complete, note that the express package has been installed in the node_modules directory.
8. Deploy the application on Cloud Foundry. Execute the following command in the node-tutorial
directory:
cf push
Note
cf push is always executed in the same directory where the manifest.yml is located.
When the staging and deployment steps are complete you can check the state and URL of your application
by using the cf app command.
Results
Prerequisites
You have gone over the Create a Node.js Application [page 1208] tutorial and have the sample application
deployed on the Cloud Foundry environment.
Context
Authentication in the Cloud Foundry environment is provided by the UAA service. In this example, OAuth 2.0 is
used as the authentication mechanism. The simplest way to add authentication is to use the @sap/approuter
package, which is a component used to provide a central entry point for business applications. More details on
the security in SAP Cloud Platform can be found in Web Access in the Cloud Foundry Environment [page 2260]
documentation.
Procedure
1. Create file xs-security.json (see the related link) in node-tutorial directory with the following
content:
{
"xsappname" : "myapp",
"tenant-mode" : "dedicated"
}
2. Create a UAA service instance named myuaa via the following command:
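The command is not shown in this extract; with the standard xsuaa service offering and the application plan (both assumptions), it would typically be:

```shell
cf create-service xsuaa application myuaa -c xs-security.json
```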
3. Add the myuaa service to the manifest.yml file of the application:
---
applications:
- name: myapp
  host: <host>
  path: myapp
  memory: 128M
  services:
  - myuaa
In this case myuaa service instance will be bound to the myapp application during deployment.
4. Create a directory named web in the node-tutorial directory.
5. Inside the web directory, create a subdirectory named resources. This directory will be used to provide the static resources of the business application.
6. Inside resources, create an index.html file with the following content:
<html>
<head>
"scripts": {
"start": "node node_modules/@sap/approuter/approuter.js"
}
10. Add the following content at the end of the manifest.yml file in the node-tutorial directory:
- name: web
host: <host>
path: web
memory: 128M
env:
destinations: >
[
{
"name":"myapp",
"url":"<myapp url>",
"forwardAuthToken": true
}
]
services:
- myuaa
○ Exchange the <host> with a unique name, so it does not clash with other deployed applications.
○ The <destinations> environment variable defines the destinations to the microservices to which the
application router will forward requests.
Set the url property to the URL of the myapp application as displayed by the cf apps command, and
add the network protocol before the URL.
○ In the services section, specify the UAA service name that will be bound to the application.
11. Create the xs-app.json file in the web directory with the following content:
{
"routes": [
{
"source": "^/myapp/(.*)$",
"target": "$1",
"destination": "myapp"
}
]
}
With this configuration, the incoming request path is mapped to the destination to which the request
should be forwarded. By default, every route requires OAuth authentication, so requests to this
path will require an authenticated user.
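The path rewriting performed by this route can be illustrated with a plain regular-expression substitution. This is a sketch of the matching semantics only, not the approuter implementation:

```python
import re

# Route from xs-app.json: paths matching ^/myapp/(.*)$ are forwarded to the
# "myapp" destination, with the path rewritten to the captured group ($1).
ROUTE_SOURCE = r"^/myapp/(.*)$"
ROUTE_TARGET = r"\1"

def rewrite_path(incoming_path):
    """Return the rewritten path for a matching request, or None if no route matches."""
    if re.match(ROUTE_SOURCE, incoming_path) is None:
        return None  # no route matches; the application router would answer 404
    return re.sub(ROUTE_SOURCE, ROUTE_TARGET, incoming_path)

print(rewrite_path("/myapp/users"))  # the backend receives "users"
```

A request to /myapp/users on the web application thus reaches the myapp backend as a request for users.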
12. Execute the following commands in the myapp directory to download the @sap/xssec and passport
packages:
13. Verify that the request is authenticated by checking the JWT token in the request, using the JWTStrategy
provided by the @sap/xssec package. To do that, replace the content of the myapp/start.js file with the
following:
14. Go to the node-tutorial directory and execute:
cf push
This command updates the myapp application and deploys the web application.
Note
From this point on in the tutorial, the URL of the web application is requested instead of the myapp
URL. The web application then forwards the requests to the myapp application.
15. Find the URL of the web application via the cf apps command and open it in a Web browser.
16. Enter the credentials of a valid user.
17. Click the myapp link.
Both the myapp and web applications are bound to the same UAA service instance - myuaa. In this
scenario authentication is handled by the UAA via the application router.
Related Information
Prerequisites
You have gone over the Authentication Checks in Node.js Applications [page 1211] tutorial and have the sample
application deployed on the Cloud Foundry environment.
Context
Authorization in the Cloud Foundry environment is provided by the UAA service. In the previous example, the
@sap/approuter package was added to provide a central entry point for the business application and enable
authentication. Now, to extend the sample, authorization will be added. The authorization concept includes
elements such as roles, scopes, and attributes provided in the security descriptor file xs-security.json,
explained in detail in the What Is Authorization and Trust Management [page 2253] section.
In this tutorial, the sample will be extended by implementing the users REST service. Different authorization
checks will be introduced for the GET and CREATE operations to demonstrate how authorization works.
Note
Authorization checks can be configured in the application router; see the route's scope property in Routing
Configuration File [page 1101]. This tutorial focuses on authorization checks in the microservices
using the container security API for Node.js.
Procedure
1. To introduce application roles, open the xs-security.json in the node-tutorial directory, and add
scopes and role templates as follows:
{
"xsappname": "myapp",
Two roles are introduced: Viewer and Manager. Each role is a collection of OAuth 2.0 scopes, or
actions. The scopes will be used later in the microservice code for authorization checks.
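The full set of scopes and role templates is not reproduced above. A typical shape of such a security descriptor, reconstructed from the Display and Update scopes used later in this tutorial (the descriptions are illustrative assumptions, and $XSAPPNAME is the standard placeholder for the application name), looks like this:

```json
{
  "xsappname": "myapp",
  "tenant-mode": "dedicated",
  "scopes": [
    { "name": "$XSAPPNAME.Display", "description": "Display users" },
    { "name": "$XSAPPNAME.Update", "description": "Update users" }
  ],
  "role-templates": [
    {
      "name": "Viewer",
      "description": "View users",
      "scope-references": [ "$XSAPPNAME.Display" ]
    },
    {
      "name": "Manager",
      "description": "Manage users",
      "scope-references": [ "$XSAPPNAME.Display", "$XSAPPNAME.Update" ]
    }
  ]
}
```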
2. Update the UAA service.
3. Add a new file called users.json to the myapp directory with the following content:
[{
"id": 0,
"name": "John"
},
{
"id": 1,
"name": "Paul"
}]
This will be the initial list of users for the REST service.
4. Add a dependency to body-parser that will be used for JSON parsing in the add new user operation. In
the myapp directory execute:
5. Change the myapp/start.js by adding GET and POST operations for the users REST endpoint:
Enforcing authorization checks is done via the @sap/xssec package. Using passport and the xssec
JWTStrategy, a security context is attached as an authInfo object to the request object. This
object is initialized with the incoming JWT token. For a full list of methods and properties of the security
context, see Authentication for Node.js Applications [page 2273].
For HTTP GET requests, a check is performed to verify that the user has the Display scope.
HTTP POST requests, which create new users, require the Update scope to be assigned to the user.
6. Update the UI to be able to send POST requests. Change the content of web/resources/index.html
with the following code:
<html>
<head>
<title>JavaScript Tutorial</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/
jquery.min.js"></script>
<script>
function fetchCsrfToken(callback) {
jQuery.ajax({
url: '/myapp/users',
type: 'HEAD',
headers: { 'x-csrf-token': 'fetch' }
})
.done(function(message, text, jqXHR) {
callback(jqXHR.getResponseHeader('x-csrf-token'))
})
.fail(function(jqXHR, textStatus, errorThrown) {
alert('Error fetching CSRF token: ' + jqXHR.status + ' ' +
errorThrown);
});
}
function addNewUser(token) {
var name = jQuery('#name').val() || '--';
jQuery.ajax({
url: '/myapp/users',
In the UI there is a link to get all users, an input box to enter a user name, and a button to send the create-new-user request.
In the sample, jQuery is used to create a POST request and send a new user in JSON format to the REST
API. The code is more involved than you might expect because a CSRF token has to be fetched before
sending the POST request. This token is required by the application router for requests that change
state, so the call must provide it. In short, when the Add User button is clicked, the code fetches a
token and, on success, sends a POST request with the user's data as a JSON body.
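The same fetch-then-POST sequence can be sketched independently of jQuery. In the following illustration the transport function is a stub standing in for the application router; it is a sketch of the flow, not production code:

```python
def add_user_flow(transport, name):
    """Two-step CSRF flow: HEAD to fetch a token, then POST carrying it."""
    status, headers, _ = transport('HEAD', '/myapp/users',
                                   {'x-csrf-token': 'fetch'}, None)
    if status != 200:
        raise RuntimeError('Error fetching CSRF token: %d' % status)
    token = headers['x-csrf-token']
    # State-changing request: the application router rejects it without the token.
    return transport('POST', '/myapp/users',
                     {'x-csrf-token': token, 'Content-Type': 'application/json'},
                     {'name': name})

def fake_router(method, url, headers, body):
    """Stub that issues a token on HEAD and accepts only that token on POST."""
    if method == 'HEAD':
        return 200, {'x-csrf-token': 'abc123'}, None
    if headers.get('x-csrf-token') != 'abc123':
        return 403, {}, None
    return 201, {}, body
```

Any real HTTP client can take the place of the stub, as long as the token returned by the HEAD request is echoed back in the POST.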
7. Go to the node-tutorial directory and push both applications. Execute the following command:
cf push
8. Find the URL of the application router web application via the cf apps command and open it in a Web
browser.
9. Enter the credentials of a valid user.
10. Click the Show users link. This should result in a 403 Forbidden response, due to missing privileges.
11. Configure the roles collections and user groups assignment in the SAP Cloud Platform cockpit.
Note
The configuration is not part of this tutorial. See Set Up Security Artifacts [page 2335].
Initially, add only the Viewer role to the user. In a new browser window, open the application URL and ensure
you are logged in interactively. Click the Show users link in the UI; you should see a JSON
response from the REST service. Clicking Add User leads to a message box with the error text: 403
Forbidden.
12. Configure the user with the Manager role. Open the application URL in a new browser window and
ensure you are logged in interactively. This time, entering an example name in the edit box and clicking
the Add user button should result in a message box with the text: success.
Get to know the Node.js buildpack for the Cloud Foundry environment
Check the Tips and Tricks for Node.js applications in the Cloud Foundry Node.js Buildpack documentation.
Vendoring Node.js application dependencies is discussed in the documentation for the Cloud Foundry
environment. Even though SAP Cloud Platform is a connected environment, the recommendation for
productive applications is to vendor application dependencies.
There are various reasons for this. Productive applications usually undergo security scans. In addition,
npm does not guarantee a reproducible dependency fetch, so you might end up with different dependencies
if they are installed during deployment. Finally, npm downloads any missing dependencies from its
registry. If this registry is not accessible for some reason, the deployment may fail.
Note
When using dependencies that contain native code, be aware that you need to prebuild them in the same
environment as the Cloud Foundry container, or ensure that the package has built-in support for it.
To ensure that prepackaged dependencies are pushed to the Cloud Foundry environment and the on-premise
runtime, make sure that the node_modules directory is not listed in the .cfignore file. It is also preferable
that development dependencies are not deployed in productive deployments. To ensure that, run this command:
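Assuming the standard npm CLI, one way to ensure that only production dependencies end up in node_modules is:

```shell
npm install --production
```

Alternatively, npm prune --production removes already-installed devDependencies from an existing node_modules directory.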
For performance reasons, Node.js (V8) has lazy garbage collection. Even if there are no memory leaks in
your application, this might lead to occasional restarts, as explained in the Tips for Node.js Applications
section in the Node.js Buildpack documentation for the Cloud Foundry environment.
Force the garbage collector to run before the memory is consumed by limiting the V8 heap size of the
application to about 75% of the available memory.
You can either use the <OPTIMIZE_MEMORY> environment variable supported by the Node.js buildpack, or
specify the V8 heap size directly in your application start command (recommended). For example, for an
application started with 256M of RAM:
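Assuming the application from this tutorial is started with node start.js, the manifest.yml start command might look like this. Here 192M is roughly 75% of 256M, and --max_old_space_size is the standard V8 flag (in MB) for limiting the old-generation heap; treat the exact values as illustrative:

```yaml
command: node --max_old_space_size=192 start.js
```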
You can optimize V8 behavior using additional options. You can list them using the command:
node --v8-options
When deploying an application in the Cloud Foundry environment without specifying the application memory
requirements, the Cloud Foundry controller assigns the default (currently 1G of RAM) to
your application. Many Node.js applications require less memory, and assigning the default is a waste of
resources.
To save memory from your quota, specify the memory size in the deployment descriptor of the Cloud
Foundry environment, manifest.yml. Details on how to do that can be found in the Deploying with Application
Manifests topic in the documentation for the Cloud Foundry environment.
This section offers selected information for Python development on the SAP Cloud Platform Cloud Foundry
environment and references to more detailed sources.
In this section about Python development, you get information about the buildpack supported by SAP and
about the Python packages, and how you consume them in your application.
There is also a small tutorial with an introduction to securing your application, and some tips and tricks for
developing and running Python applications on the Cloud Foundry environment.
SAP Cloud Platform uses the standard Python buildpack provided by the Cloud Foundry environment to deploy
Python applications.
To get familiar with the buildpack and how to deploy applications with it, take a look at the Cloud Foundry
Python Buildpack documentation .
SAP includes a selection of Python packages, which are available for download and use, for customers and
partners who have the appropriate access authorization, from the SAP Service Marketplace (SMP). For more
Python packages
Package Description
hdbcli The SAP HANA Database Client, which provides means for database connectivity.
Python Tutorial
The tutorial will guide you through creating a Python application, consuming Cloud Foundry services, and
setting up authentication and authorization checks. This is by no means a setup for productive use, but you
will get to know the basics, with links to some further reading.
Selected tips and tricks for your Python development. See Tips and Tricks for Python Applications [page 1230].
Prerequisites
Before you start you will need to fulfill the following requirements:
● The Cloud Foundry command line interface is installed locally. See Download and Install the Cloud Foundry
Command Line Interface [page 1769].
● Python version 3.5 or higher is installed locally. See the installation guides for OS X, Windows, and
Linux.
● virtualenv is installed locally. See https://github.com/kennethreitz/python-guide/blob/master/docs/dev/
virtualenvs.rst .
Context
This tutorial will guide you through creating and setting up a simple Python application by using the Cloud
Foundry command line interface (cf CLI). The tutorial showcases some basic SAP-provided Python libraries
aimed at easing your application development. You will start by building and deploying a web application that
returns some sample data.
Procedure
1. Log on to Cloud Foundry. See Log On to the Cloud Foundry Environment Using the Cloud Foundry
Command Line Interface [page 1770].
2. Create a new directory named python-tutorial.
3. Create a manifest.yml file in the python-tutorial directory with the following content:
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
Exchange the <host> with a unique name, so it does not clash with other deployed applications. This file is
the configuration describing your application and how it should be deployed to the Cloud Foundry
environment. See Deploying with Application Manifests.
4. Specify the Python runtime version your application will run on by creating a runtime.txt file with the
following content:
runtime.txt
python-3.6.x
Note
The buildpack only supports the stable Python versions, which are listed in the Python buildpack
release notes .
5. The application will be a web server utilizing the Flask web framework. Specify Flask as an application
dependency, by creating a requirements.txt file with the following content:
Source Code
requirements.txt
Flask==0.12.2
6. Create a server.py file, which will contain the following application logic:
Source Code
server.py
import os
from flask import Flask
app = Flask(__name__)
port = int(os.environ.get('PORT', 3000))
@app.route('/')
def hello():
return "Hello World"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=port)
This is a simple server, which will return a “Hello World” when requested. Flask is one of the most widely
used Python web frameworks (for serving web content) and it is the web server part of this application.
7. Vendor dependencies.
The Python buildpack provides a mechanism for that: applications can vendor their dependencies by
creating a vendor folder in their root directory and executing the following command to download the
dependencies into it:
Note
You should always make sure you are vendoring dependencies for the correct platform, so if you are
developing on anything other than Ubuntu, use the --platform flag. See pip download .
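Assuming the dependencies are listed in requirements.txt, the download command typically takes this form (matching the command used in the authorization part of this tutorial):

```shell
pip download -d vendor -r requirements.txt
```

Add --find-links <directory> if some packages, such as the SAP-provided ones, are only available from a local directory.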
8. Deploy the application on Cloud Foundry. Execute the cf push command in the python-tutorial
directory.
cf push is always executed in the same directory where the manifest.yml is located.
When the staging and deployment steps are complete you can check the state and URL of your application by
using the cf app command.
9. Open a browser window and enter the URL of the application.
Prerequisites
You have gone over and completed the Create a Python Application [page 1221] part of the tutorial.
Context
In this part of the tutorial you will connect and consume a Cloud Foundry service in your application. For the
purpose of the tutorial the SAP HANA Service will be used.
You can view what services and plans are available for your application to consume, by executing cf
marketplace.
Procedure
1. Create an instance of the SAP HANA service with the following command:
This will create a service instance called myhana, from the service hana, with plan hdi-shared.
2. Bind this service instance to the application.
a. Modify the manifest.yml file:
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
b. To consume the service inside the application you need to read the service settings and credentials
from the application. To do that use the python module cfenv. Add the following line to the
requirements.txt file:
Source Code
requirements.txt
Flask==0.12.2
cfenv==0.5.3
hdbcli==2.3.119
c. Modify the server.py file to include the following lines of code, which are used to read the service
information from the environment:
Source Code
server.py
import os
from flask import Flask
from cfenv import AppEnv
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
@app.route('/')
def hello():
return "Hello World"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=port)
When you restage the application, the SAP HANA service instance will be bound to it and
the application will be able to connect to it.
3. Connect to SAP HANA using the SAP HANA database client or hdbcli module provided by SAP.
Note
You should always make sure you are vendoring dependencies for the correct platform, so if you are
developing on anything other than Ubuntu, use the --platform flag. See pip download .
4. Modify the server.py file to execute a query with the hdbcli driver:
server.py
import os
from flask import Flask
from cfenv import AppEnv
from hdbcli import dbapi
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
@app.route('/')
def hello():
conn = dbapi.connect(address=hana.credentials['host'],
port=int(hana.credentials['port']), user=hana.credentials['user'],
password=hana.credentials['password'])
cursor = conn.cursor()
cursor.execute("select CURRENT_UTCTIMESTAMP from DUMMY", {})
ro = cursor.fetchone()
cursor.close()
conn.close()
return "Current time is: " + str(ro["CURRENT_UTCTIMESTAMP"])
if __name__ == '__main__':
app.run(host='0.0.0.0', port=port)
Prerequisites
● You have gone over and completed, the Create a Python Application [page 1221] and Consume Cloud
Foundry Services [page 1223] parts of the tutorial.
● You have npm installed locally.
Context
Authentication in the Cloud Foundry environment is provided by the UAA service. In this example, OAuth 2.0 is
used as the authentication mechanism. The simplest way to add authentication is to use the @sap/approuter
Node.js package, which is a component used to provide a central entry point for business applications. More
details on the security in SAP Cloud Platform can be found in Web Access in the Cloud Foundry Environment
documentation. To use @sap/approuter we’ll create a separate Node.js micro-service to act as an entry point
for the application.
Procedure
1. Create an xs-security.json file for your application with the following content:
Source Code
xs-security.json
{
"xsappname" : "myapp",
"tenant-mode" : "dedicated"
}
2. Create a UAA service instance named myuaa via the following command:
3. Add the myuaa service to the manifest.yml file in the python-tutorial directory:
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
- myuaa
The myuaa service instance will be bound to the myapp application during deployment.
4. To create a second microservice, which will be the application router, create a directory called web in the
python-tutorial directory.
5. Inside the web directory, create a sub-directory named resources - this directory will be used to provide
the business application's static resources.
6. Inside resources, create an index.html file with the following content:
Source Code
index.html
<html>
<head>
<title>Python Tutorial</title>
</head>
<body>
<h1>Python Tutorial</h1>
<a href="/myapp/">myapp</a>
</body>
</html>
"scripts": {
"start": "node node_modules/@sap/approuter/approuter.js"
}
10. Modify the manifest.yml file in the python-tutorial directory so that it has the following content:
Source Code
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
- myuaa
- name: web
host: <host>
path: web
memory: 128M
env:
destinations: >
[
{
"name":"myapp",
"url":"<myapp url>",
"forwardAuthToken": true
}
]
services:
- myuaa
○ Exchange the <host> with a unique name, so it does not clash with other deployed applications.
○ The <destinations> environment variable defines the destinations to the microservices to which the
application router will forward requests.
○ Set the url property to the URL of the myapp application as displayed by the cf apps command, and
add the network protocol before the URL.
○ In the services section, specify the UAA service name that will be bound to the application.
11. Create the xs-app.json file in the web directory with the following content:
Source Code
xs-app.json
{
"routes": [
{
"source": "^/myapp/(.*)$",
"target": "$1",
"destination": "myapp"
}
]
}
Note
With this configuration, the incoming request path is mapped to the destination to which the request
should be forwarded. By default, every route requires OAuth authentication, so requests to this
path will require an authenticated user.
12. Navigate to the python-tutorial directory and execute cf push. This command updates the myapp
application and deploys the web application.
Note
From this point on in the tutorial, the URL of the web application is requested instead of the myapp
URL. The web application then forwards the requests to the myapp application.
13. Check the URL of the web application via the cf apps command and open it in a browser window.
You should be prompted to log in and then you should see the current HANA time displayed by the Python
application.
Prerequisites
You have completed Authentication Checks in Python Applications [page 1225] and have the sample
application deployed on the Cloud Foundry environment.
Context
Authorization in the Cloud Foundry environment is provided by the UAA service. In the previous example, the
@sap/approuter package was added to provide a central entry point for the business application and enable
authentication. Now to extend the sample, authorization will be added. The authorization concept includes
elements such as roles, scopes, and attributes provided in the security descriptor file xs-security.json,
explained in details in the What Is Authorization and Trust Management [page 2253] section.
Procedure
1. Add the xssec security library to the requirements.txt file, to place restrictions on the content you
serve.
requirements.txt
Flask==0.12.2
cfenv==0.5.3
hdbcli==2.3.14
xssec==1.0.0
2. Then vendor it inside the vendor folder by executing pip download -d vendor -r
requirements.txt --find-links sap_dependencies from the root of the application.
3. Modify server.py to use the security library:
Source Code
server.py
import os
from flask import Flask
from cfenv import AppEnv
from flask import request
from flask import abort
import xssec
from hdbcli import dbapi
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
uaa_service = env.get_service(name='myuaa').credentials
@app.route('/')
def hello():
if 'authorization' not in request.headers:
abort(403)
access_token = request.headers.get('authorization')[7:]
security_context = xssec.create_security_context(access_token,
uaa_service)
isAuthorized = security_context.check_scope('openid')
if not isAuthorized:
abort(403)
conn = dbapi.connect(address=hana.credentials['host'],
port=int(hana.credentials['port']), user=hana.credentials['user'],
password=hana.credentials['password'])
cursor = conn.cursor()
cursor.execute("select CURRENT_UTCTIMESTAMP from DUMMY", {})
ro = cursor.fetchone()
cursor.close()
conn.close()
return "Current time is: " + str(ro["CURRENT_UTCTIMESTAMP"])
if __name__ == '__main__':
app.run(host='0.0.0.0',port=port)
4. Try to access the application directly; you should see an HTTP 403 error. If, however, you access the
application through the application router, you should see the current HANA time, provided that the
scope openid is assigned to your user.
Since the OAuth 2.0 client is used, the scope openid is assigned to your user by default, and you are able to
access the application as usual.
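The authorization logic in server.py above boils down to two steps: strip the Bearer prefix from the Authorization header, then check whether the required scope was granted. A self-contained sketch of that logic follows; the real validation of the JWT signature and scopes is done by the xssec library, so this is an illustration only:

```python
def extract_bearer_token(auth_header):
    """Return the token from an 'Authorization: Bearer <token>' header, or None."""
    if not auth_header or not auth_header.lower().startswith('bearer '):
        return None
    return auth_header[7:]  # same slicing as in server.py above

def is_authorized(auth_header, granted_scopes, required_scope='openid'):
    """Mirror the tutorial's 403 logic: token present and required scope granted."""
    if extract_bearer_token(auth_header) is None:
        return False  # server.py calls abort(403) here
    # stands in for security_context.check_scope(required_scope) in the real code
    return required_scope in granted_scopes
```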
The functional authorization scopes for applications are declared in the xs-security.json, see Specify
the Security Descriptor Containing the Functional Authorization Scopes for Your Application [page 2338].
● See https://docs.cloudfoundry.org/buildpacks/python/index.html#vendoring
● The cfenv package provides access to Cloud Foundry application environment settings by parsing all the
relevant environment variables. The settings are returned as a class instance. See https://github.com/
jmcarp/py-cfenv .
● While developing Python applications (whether in the cloud or not), it is a very good idea to use virtual
environments. The most popular Python package providing such functionality is virtualenv. See
https://virtualenv.pypa.io/en/stable/
● The PEP 8 style guide for Python applications - https://www.python.org/dev/peps/pep-0008/ , will help
you improve your applications.
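What cfenv does can be approximated in a few lines: Cloud Foundry injects the credentials of bound services into the VCAP_SERVICES environment variable as JSON. The following is a rough, simplified illustration with made-up credential values, not the cfenv implementation:

```python
import json
import os

# Example payload in the shape Cloud Foundry injects it (values are made up).
os.environ['VCAP_SERVICES'] = json.dumps({
    "hana": [{
        "name": "myhana",
        "label": "hana",
        "credentials": {"host": "localhost", "port": "30015"}
    }]
})

def get_service(label):
    """Return the first bound service entry with the given label, or None."""
    services = json.loads(os.environ.get('VCAP_SERVICES', '{}'))
    entries = services.get(label, [])
    return entries[0] if entries else None

hana = get_service('hana')  # corresponds to env.get_service(label='hana') in the tutorial
```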
Get to know certain aspects of SAPUI5 development, to get up and running quickly.
If you are about to decide which UI technology to use, read everything you need to know about supported
library combinations, supported browsers and platforms, and so on, in the Read Me First section, which
contains the following topics and more:
Select the tiles to discover SAPUI5 development. The references guide you to the documentation of the
SAPUI5 Demo Kit. Besides the entry points we provide here on each tile, start exploring the Demo Kit on your
own whenever you feel comfortable enough.
1. Register for an SAP Cloud Platform trial account at https://account.hanatrial.ondemand.com/ and log on
afterwards.
2. Open SAP Web IDE Full-Stack
3. Setting Up Application Projects - Create a Project from Scratch & Select a Cloud Foundry Space
Tutorials
In an HTML5 module in SAP Web IDE Full-Stack, more files are created than described in these tutorials, but
you can run through them, and at the end you will have a running application on the Cloud Foundry environment.
For more information have a look at the Get Started: Setup and Tutorials section that contains the following
topics and more:
● “Hello World!”
● Data Binding
● Navigation and Routing
● Mock Server
● Worklist App
● Ice Cream Machine
Essentials
This is reference information that describes the development concepts of SAPUI5 , such as the Model View
Controller, data binding, and components.
Developing Apps
Create apps with rich user interfaces for modern Web business applications, responsive across browsers and
devices, based on HTML5.
The following topics are excerpts from the Developing Apps section:
A Multitarget Application (MTA) is logically a single application comprised of multiple parts created with
different technologies, which share the same lifecycle.
The developers of the MTA describe the desired result using the MTA model, which contains MTA modules, MTA
resources, and the interdependencies between them. Afterward, the MTA deployment service validates,
orchestrates, and automates the deployment of the MTA.
You can create and deploy a Multitarget Application in the Cloud Foundry environment by following different
approaches, described below, that can yield the same result:
● Using the SAP Web IDE for Full-Stack Development as described in Developing Multitarget Applications -
both the development descriptor mta.yaml and the deployment descriptor mtad.yaml are created
automatically. The mta.yaml is generated when you create the application project, and the mtad.yaml
file is created when you build the project.
Note
Development descriptors are used to generate MTA deployment descriptors, which define the required
deployment data. That is, the MTA development descriptor data specifies what you want to build and how to
build it, while the deployment descriptor data specifies what to deploy and how to deploy it.
● https://help.sap.com/viewer/825270ffffe74d9f988a0f0066ad59f0/CF/en-US/a71bf8281254489ea8be6e323199b304.html
● https://help.sap.com/viewer/825270ffffe74d9f988a0f0066ad59f0/CF/en-US/3b533e3723674fad90f94510b92f10af.html
● https://help.sap.com/viewer/825270ffffe74d9f988a0f0066ad59f0/CF/en-US/1b0a7a0938944c7fac978d4b8e23a63f.html
● Using the Cloud MTA Build Tool . Afterward, you deploy the MTA using the Cloud Foundry Command Line
Interface.
Note
● https://sap.github.io/cloud-mta-build-tool/
● https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/65ddb1b51a0642148c6b468a759a8a2e.html
● https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/f48880b0295d4e9d859658244da84201.html
Multitarget Application extension descriptor: Defining MTA Extension Descriptors [page 1276]
Multitarget Application module types and parameters: MTA Module Types, Resource Types, and Parameters for Applications in the Cloud Foundry Environment [page 1307]
How to deploy the Multitarget Application: Cloud Foundry Command Line Interface [page 1007]
Development descriptor: A YAML file named mta.yaml that contains a list of all entities, such as modules, resources, and properties that belong to an application or are used by it at runtime, and the dependencies between them. It is automatically generated when an MTA project is created or modified, or when a module is added or removed. The developer needs to edit the descriptor manually to define resources, properties, and dependencies, as well as fill in missing information.
Deployment descriptor: A YAML file named mtad.yaml that contains a list of all entities; it is created by SAP Web IDE, by the Multitarget Application Archive Builder tool, or manually. This file is similar to the development descriptor, but is used by the MTA deployer.
Module type: A type that defines the structure and the development technology of a module. You can see a list of the module types at Modules [page 1247].
MTA archive (MTAR): An archive containing a deployment descriptor, the module and resource binaries, and configuration files. The archive follows the JAR file specification.
You have to consider the following limits for the MTA artifacts, which can be handled by the Cloud Foundry
deploy service:
Related Information
The benefits of the MTA approach can be related to the three “A”s that the MTA offers: Abstraction,
Automation, and Assembly.
● Abstraction - The MTA model abstracts the underlying heterogeneity of application parts, which can grow
over time, and offers a standardized way to model dependencies and environment-specific configurations.
In the absence of such a common model, you would have to handle all this heterogeneity individually, using
custom scripts in a CI/CD tool to manage all underlying components, the service dependencies, and the
configurations. Achieving the needed automation through such custom scripts is not a trivial task, and
maintaining such scripts, as the technologies and services used by the application change, demands
higher effort. Modeling your application as an MTA greatly simplifies the lifecycle of such heterogeneous
applications.
Your business application may grow over time. When this happens, you should decide on the scope of
your MTA, or split it into multiple MTAs, by analyzing which pieces make sense to be managed together as a
separate lifecycle management unit.
A popular example is the case of “core” or “framework” application parts that could work on their own,
offering the basic business capabilities. Then there might be “extension” or “plugin” applications that
extend the basic business capabilities, but are not mandatory and can be released and updated separately.
Another possibility is to model just one single Cloud Foundry application in a separate MTA. This makes sense if the application can be considered a self-contained microservice. However, even modeling one Cloud Foundry application in one MTA still brings the benefits of dependency management (for example, backing service creation and external API lookup), content management (Fiori launchpad configurations, workflow definitions, and so on), and configuration management (utilizing system placeholders, default and deployment-specific configuration).
Other Benefits
● Parallel deployment - Enables asynchronous deployment of multiple applications, which makes the deployment of the archive faster.
● Asynchronous service instance creation - The MTA deployer creates services asynchronously, decreasing the deployment time.
● Parallel undeployment - Enables asynchronous undeployment of applications, decreasing the undeployment time.
This section lists example steps for deploying your first multitarget application. An MTAR archive and an optional extension descriptor are used to execute the deployment.
For more information about the extension descriptor, see Defining MTA Extension Descriptors [page 1276].
Example
_schema-version: "3.1"
ID: app
version: 1.0.0

modules:
  - name: my-first-app
    type: staticfile
    path: content.zip
    requires:
      - name: my-first-app-service

resources:
  - name: my-first-app-service
    type: org.cloudfoundry.managed-service
    parameters:
      service: application-logs
      service-plan: lite
Example
Output
Output Code
cf apps
Getting apps in org <ORG> / space <SPACE> as <USER>...
OK
name           requested state   instances   memory   disk   urls
my-first-app   started           1/1         1G       1G     deploy-service-<SPACE>-my-first-app.cfapps.sap.hana.ondemand.com
Output Code
cf services
Getting services in org <ORG> / space <SPACE> as <USER>...
name                   service            plan   bound apps     last operation
my-first-app-service   application-logs   lite   my-first-app   create succeeded
Multitarget Applications are defined in a development descriptor, which is required for design-time purposes.
The development descriptor (mta.yaml) defines the elements and dependencies of a Multitarget Application
(MTA) compliant with the Cloud Foundry environment.
Note
The MTA development descriptor (mta.yaml) is used to generate the deployment descriptor
(mtad.yaml), which is required for deploying an MTA to the target runtime. If you use command-line tools
to deploy an MTA, you do not need an mta.yaml file. However, in these cases you have to manually create
the mtad.yaml file.
For more information about the MTA development descriptor, see Inside an MTA Descriptor.
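As an illustration of the relationship between the two descriptors, a minimal sketch of a development descriptor and its deployment-descriptor counterpart might look as follows. The module name, source path, and generated artifact name are illustrative assumptions, not output of any specific build tool:

```yaml
# mta.yaml (development descriptor) - hypothetical sketch
_schema-version: "3.1"
ID: demo-app
version: 1.0.0

modules:
  - name: demo-backend
    type: javascript.nodejs
    path: backend                # source folder at design time

# mtad.yaml (deployment descriptor) - what a build might generate;
# the path now points at the built artifact packaged inside the MTA archive
# modules:
#   - name: demo-backend
#     type: javascript.nodejs
#     path: backend-content.zip
```

The design-time path references source code, while the deployment-time path references a deployable binary inside the archive.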
You package the MTA deployment descriptor and module binaries in an MTA archive. You can do so manually as described below, or alternatively use the Cloud MTA Build Tool.
Note
There could be more than one module of the same type in an MTA archive.
The Multitarget Application (MTA) archive is created in a way compatible with the JAR file specification. This allows you to use common tools for creating, modifying, and signing such archives.
Note
● The MTA extension descriptor is not part of the MTA archive. During deployment, you provide it as a separate file, or as parameters you enter manually when SAP Cloud Platform requests them.
● Using a resources directory as in some examples is not mandatory. You can store the necessary resource files at the root level of the MTA archive, or in another directory with a name of your choice.
The following example shows the basic structure of an MTA archive. It contains a Java application .war file and
a META-INF directory, which contains an MTA deployment descriptor with a module and a MANIFEST.MF file.
Example
/example.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
The MANIFEST.MF file has to contain a name section for each MTA module in the archive that has file content. In the name section, the following information has to be added:
● Name - the path within the MTA archive where the corresponding module is located. If it leads to a directory, add a forward slash (/) at the end.
● Content-Type - the type of the file that is used to deploy the corresponding module
● MTA-module - the name of the module as defined in the deployment descriptor
Note
● You can store one application in two or more application binaries contained in the MTA archive.
● According to the JAR specification, there must be an empty line at the end of the file.
Example
Manifest-Version: 1.0

Name: example.war
MTA-Module: example-java-app
Content-Type: application/zip

This instructs the MTA deployer to:
● Look for the example.war file within the root of the MTA archive when working with the module example-java-app
● Interpret the content of the example.war file as application/zip
Note
The example above is incomplete. To deploy a solution, you have to create an MTA deployment descriptor.
Then you have to create the MTA archive.
Tip
As an alternative to the procedure described above, you can also use the Cloud MTA Build Tool. See its official documentation at Cloud MTA Build Tool.
Related Information
https://sap.github.io/cloud-mta-build-tool/
The Multitarget Application Model
JAR File Specification
Defining MTA Deployment Descriptors for the Neo Environment [page 1612]
Defining MTA Extension Descriptors [page 1276]
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617]
The Multitarget Application (MTA) deployment descriptor is a YAML file that defines the relations between you, as a provider of deployable artifacts, and the MTA deployment service in SAP Cloud Platform, as a deployer tool.
Using the YAML data serialization language you describe the MTA in an MTA deployment descriptor
(mtad.yaml) file following the Multitarget Application Structure [page 1246].
Note
As there are technical similarities between SAP HANA XS Advanced and Cloud Foundry, you can adapt application parameters for operation on either platform. Note that each environment supports its own set of module types, resource types, and configuration parameters. For more information, see the official Multitarget Application Model specification.
Writing YAML descriptors in plain text is often hard, so we have contributed a schema to the public Schema Store. This provides almost out-of-the-box support for writing MTA deployment descriptors in some of the more popular IDEs. The schema provides auto-completion as well as suggestions and syntax checking.
Note
The format and available options in the MTA deployment descriptor can change with newer versions of the MTA specification. Always specify the schema version when defining an MTA deployment descriptor, so that SAP Cloud Platform knows which MTA specification version you are deploying against.
Example 1
Example
_schema-version: "3.1.0"
ID: simple-mta
version: 1.0.0

modules:
  - name: anatz
    type: javascript.nodejs
    requires:
      - name: hdi-service

resources:
  - name: hdi-service
    type: org.cloudfoundry.managed-service
    parameters:
      service: hana
      service-plan: hdi-shared
Example 2
Deployment descriptor with the keep-existing parameter (keeps the existing environment, service bindings, and routes)
Example
_schema-version: 3.1.0
ID: anatz-keep-existing
version: 4.0.0

modules:
  - name: anatz
    type: staticfile
    path: hello-world.zip
    parameters:
      memory: 64M
      route: new-custom-route-${space}
      keep-existing:
        env: true
        service-bindings: true
        routes: true
Example 3
Deployment descriptor with parallel deployment of modules enabled
Example
_schema-version: 3.1.0
ID: hello-world
version: 1.0.0

parameters:
  enable-parallel-deployments: true

modules:
  - name: hello-world
    type: staticfile
    path: content/hello_world.zip
  - name: hello-world-first
    type: staticfile
    path: content/hello_world.zip
  - name: hello-world-second
    type: staticfile
    path: content/hello_world.zip
  - name: hello-world-third
    type: staticfile
    path: content/hello_world.zip
Example 4
Deployment descriptor that deploys a Docker image
Example
_schema-version: "3.1.0"
ID: docker-mtar
version: 2.0.1

modules:
  - name: docker-image
    type: application
    parameters:
      docker:
        image: cloudfoundry/test-app
Example 5
Example
_schema-version: "3.1.0"
ID: ztana1
version: 1.1.0

modules:
  - name: ztana
    type: javascript.nodejs
    requires:
      - name: test-resource

resources:
  - name: test-resource
    type: org.cloudfoundry.existing-service
    optional: true
    parameters:
      service: non-required-service
Example 6
Example
_schema-version: "3.1"
ID: com.sap.xs2.samples.javahelloworld
version: 0.1.0

modules:
  - name: java-hello-world
    type: javascript.nodejs
    path: web/
    requires:
      - name: java-uaa

resources:
  - name: java-hdi-container
    type: com.sap.xs.hdi-container
  - name: java-uaa
    type: com.sap.xs.uaa-space
    parameters:
      config-path: xs-security.json
Note
The example above is incomplete. To deploy a solution, you have to create an MTA extension descriptor
with the user and password added there. You also have to create the MTA archive.
Example 7
Sample Code
_schema-version: "3.1"
ID: com.sap.xs2.samples.nodehelloworld
version: 0.1.0

modules:
  - name: node-hello-world
    type: javascript.nodejs
    path: web/
    requires:
      - name: nodejs-uaa
      - name: nodejs
        group: destinations
        properties:
          name: backend
          url: ~{url}

resources:
  - name: nodejs-uaa
    type: com.sap.xs.uaa
    parameters:
      config-path: xs-security.json
  - name: log
    type: application-logs
    optional: true
Example 8
A more complex example, which shows an MTA deployment description with the following modules:
● A database model
● An SAPUI5 application (hello world)
The UI5 application “hello-world” uses the environment variable <ui5_library> as a logical reference to a version of UI5 on a public website.
Sample Code
ID: com.acme.xs2.samples.javahelloworld
version: 0.1.0

modules:
  - name: hello-world
    type: javascript.nodejs
    requires:
      - name: uaa
      - name: java_details
        properties:
          backend_url: ~{url3}/
    properties:
      ui5_library: "https://sapui5.hana.acme.com/"
  - name: java-hello-world-backend
    type: java.tomee
    requires:
      - name: uaa
      - name: java-hello-world-db # deploy ordering
      - name: java-hdi-container
    provides:
      - name: java_details
        properties:
          url3: ${url} # the value of the placeholder ${url} will be made known to the deployer
  - name: java-hello-world-db
    type: com.sap.xs.hdi
    requires:
      - name: java-hdi-container

resources:
  - name: java-hdi-container
    type: com.sap.xs.hdi-container
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      name: java-uaa # the name of the actual service
● Global elements - an identifier and version that uniquely identify the MTA, including additional optional
information such as a description, the providing organization, and a copyright notice for the author.
● Modules [page 1247] - they contain the properties of module types, which represent Cloud Foundry
applications or content that form the MTA and are deployed.
● Resources [page 1265] - they contain properties of resource types, which are entities that are not part of the MTA but are required by the modules at runtime or at deployment time.
● Dependencies between modules and resources.
3.1.11.6.1 Modules
The modules section of the deployment descriptor lists the deployable parts contained in the MTA deployment
archive.
Optional module attributes include: path, type, description, properties, and parameters, plus
requires and provides lists.
Modify the following MTA module types by providing specific properties or parameters in the MTA deployment
descriptor (mtad.yaml).
● memory (256MB)
● health-check-type (none)
● execute-app (true)
After start and upon completion, the application sets a [success | failure]-marker in a log message.
● success-marker (deploy: done)
● failure-marker (deploy: failed)
● check-deploy-id (true)
Check the deployment (process) ID when checking the application execution status.
● dependency-type (hard)
In circular module dependencies, deploy modules with dependency type “hard” first.
com.sap.portal.site-content — This module type is deprecated. You have to use com.sap.portal.content instead. Parameters: none. Creates a CF application with SAP Fiori launchpad content.
com.sap.html5.application-content — Parameters:
● no-route (true). Defines if a route should be assigned to the application.
● memory (256M). Defines the memory allocated to the application.
● execute-app (true). Defines whether the application is executed. If yes, the application performs a certain amount of work and upon completion sets a success-marker or failure-marker by means of a log message.
● success-marker (STDOUT:The deployment of html5 application content done.*)
● failure-marker (STDERR:The deployment of html5 application content failed.*)
● stop-app (true). Defines if the application should be stopped after execution.
● check-deploy-id (true). Defines if the deployment (process) ID should also be checked when checking the application execution status.
● dependency-type (hard). Defines if this module should be deployed first if it takes part in circular module dependency cycles. hard means that this module is deployed first.
● health-check-type (none). Defines if the module should be monitored for availability.
business-logging — This module type is deprecated. You have to use com.sap.business-logging.content instead. Parameters: none. Deploys the Business Logging content for configuring text resources.
Sample Code
modules:
  - name: my-binary-app
    type: custom
    parameters:
Module parameters have platform-specific semantics. To reference a parameter value, use the placeholder
notation ${<parameter>}, for example, ${default-host}.
Tip
It is also possible to declare metadata for parameters and properties defined in the MTA deployment description; the mapping is based on the parameter or property keys. For example, you can specify if a parameter is required (optional: false) or can be modified (overwritable: true).
default-app-name (Read-only) — The name of the application in the Cloud Foundry environment to be deployed for this module, based on the module name with or without a name-space prefix. Example: node-hello-world or com.sap.xs2.samples.xsjshelloworld.node-hello-world
health-check-type: process
host
Note
The new application will start on a route comprised of the specified host and the default domain.
hosts
Note
The new application will start on routes comprised of the specified hosts and the default domain.
memory — The memory limit for all instances of an application. This parameter requires a unit of measurement: M, MB, G, or GB, in upper or lower case. Default: 256M, or as specified in the module type. Example: memory: 128M
user-provided: true
By default, every module has its own binary that is uploaded and deployed in a specific manner. It is possible for multiple MTA modules to reference a single deployable binary, for example, an application archive. This means that during deployment, the same application archive is executed separately in multiple applications or application instances, but with different parameters and properties. This results in multiple running applications based on the same source code that have different configurations and setup.
Sample Code
_schema-version: "3.2.0"
ID: hello
version: 0.1.0

modules:
  - name: hello-router
    type: java.tomee
    path: web/router.war
    requires:
      - name: backend
        properties:
          backend: ~{url}/content
          name: backend
          url: ~{url}
  - name: hello-backend
    type: java.tomee
    path: web/router.war
    provides:
      - name: backend
        properties:
          url: "${default-url}"
If deployment is based on an MTA archive, it is not necessary to duplicate the code to have two different
deployable modules; the specification for the MTA-module entry in MANIFEST.MF is extended, instead. The
following (incomplete) example of a MANIFEST.MF shows how to use a comma-separated list of module names
to associate one set of deployment artifacts with all listed modules:
Code Syntax
Name: web/router.war
MTA-Module: hello-router,hello-backend
Content-Type: application/zip
Note
The current behavior of this new feature might undergo additional development and is subject to change.
You can use hooks to change the typical deployment process, in this case to enable tasks to be executed at a specific moment of the application deployment. For example, hooks can be executed before or after the actual deployment steps for a module, depending on the application's needs.
Sample Code
_schema-version: "3.3"
ID: foo
version: 3.0.0

modules:
  - name: foo
    type: javascript.nodejs
    hooks:
      - name: hook
        type: task
        phases:
          - application.before-stop.live
          - application.before-stop.idle
        parameters:
          name: foo-task
          command: 'sleep 5m'
Example
In the example above, the application.before-stop.idle phase executes the hook when the new idle applications are redirected to the new live routes, and the application.before-stop.live phase executes the hook just before the deletion of the live application.
● parameters – defines the parameters of the hook. For the hooks of type task, the parameters must
define a one-off task.
The phase values you can use for the phases are:
Note
This phase is relevant for both
normal and blue-green
deployments.
Note
This phase is only relevant and
respected during blue-green
deployments.
Note
This phase is only relevant for and applied during the execution of blue-green deployments.
The table below contains the parameters of the supported module hook types:
Remember
Do not rely on the default value, as it will probably be much higher than you need and may not fit into the limitations of your quota.
You can also extend module hooks through the extension descriptor. To do so, add the code with your specific
parameters, similarly to the following example:
Sample Code
_schema-version: "3.3"
ID: foo-change-command
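The sample above is truncated after the ID. Based on the hook example shown earlier and the extension mechanism described in Defining MTA Extension Descriptors, a hedged sketch of how such an extension descriptor might continue — the extends value and the changed command are illustrative assumptions:

```yaml
_schema-version: "3.3"
ID: foo-change-command
extends: foo               # assumed: extends the deployment descriptor with ID "foo"

modules:
  - name: foo
    hooks:
      - name: hook
        parameters:
          command: 'sleep 10m'   # illustrative: overrides the hook's command at deployment time
```

Only the values supplied in the extension descriptor are overwritten; the rest of the hook definition comes from the deployment descriptor.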
3.1.11.6.3 Resources
The application modules defined in the “modules” section of the deployment descriptor may depend on
resources.
● type - the resource type is one of a reserved list of resource types supported by the MTA-aware deployment tools, for example: com.sap.xs.uaa, com.sap.xs.hdi-container, com.sap.xs.job-scheduler. The type indicates to the deployer how to discover, allocate, or provision the resource, for example, a managed service such as a database, or a user-provided service. When no type is defined, the resource is used only to provide configuration to other modules and resources.
● description
● properties
● parameters
● (Optional) active - its value can be true or false; the default value is true. If set to false, the resource is not processed and is ignored in the requires list of the application that requires it.
If you deploy an application with the resource attribute active set to false, the resource will not be provisioned and no binding will be created. If you have already deployed the application with the resource attribute active set to true, the binding to the resource will be removed.
● If the requires section of the module type expects a list, then the environment variable which is assigned
for this list will be empty and subscriptions between the modules will not be created
● If the requires section does not expect a list, then there will be no environment variable
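To make the behavior concrete, a minimal sketch of a resource declared with active set to false, reusing the managed-service resource type from the earlier examples — all names are illustrative:

```yaml
modules:
  - name: my-app
    type: javascript.nodejs
    requires:
      - name: my-logs        # ignored at deployment because the resource is inactive

resources:
  - name: my-logs
    type: org.cloudfoundry.managed-service
    active: false            # the resource is not provisioned and no binding is created
    parameters:
      service: application-logs
      service-plan: lite
```

Switching active back to true and redeploying would provision the service and restore the binding.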
Restriction
System-specific parameters for the deployment descriptor must be included in a so-called MTA
deployment extension descriptor.
Optional Resources
To describe resources that are not mission-critical for the operation of your Multitarget Application, proceed as
described below.
Note
You can declare some resources as optional, which mitigates cases where they are not listed, not available, or have failed to be created or updated. When the deployer cannot allocate the required resource for any of these reasons, it generates a warning and continues processing. If a resource is not declared as optional, the deployer generates an error and stops processing.
Sample Code
...
resources:
  ...
  - name: log
    type: com.sap.xs.auditlog
    optional: true
...
● If the log managed resource is not provided by the platform or landscape, a warning is logged and traced, and the MTA deployment continues, ignoring the error.
Modify the default MTA resource types by providing specific properties or parameters in the MTA deployment
descriptor.
Note
When using the free trial subaccount, modify the default service:
Sample Code
resources:
  - name: my-hdi-service
    type: com.sap.xs.hdi-container
    parameters:
      service: hanatrial
Restriction
Use only with the SAP Node.js module @sap/site-content-deployer
Restriction
Only for use with the SAP Node.js
module @sap/site-entry
Restriction
This resource type is now deprecated. Use rabbitmq instead.
Resource parameters have platform-specific semantics. To reference a parameter value, use the placeholder
notation ${<parameter>}, for example, ${default-host}.
It is also possible to declare metadata for parameters and properties defined in the MTA deployment description; the mapping is based on the parameter or property keys. For example, you can specify if a parameter is required (optional: false) or can be modified (overwritable: true).
default-service-name (Read-only) — The name of the service in the Cloud Foundry environment to be created for this resource, based on the resource name with or without a name-space prefix. Example: nodejs-hdi-container or com.sap.xs2.samples.xsjshelloworld.nodejs-hdi-container
service-alternatives — List of alternatives to a default service offering, defined in the deploy service configuration. If a default service offering does not exist for the current org/space, or creating a service with it fails (with a specific error), the service alternatives are used. The order of the service alternatives is considered. Default: empty, or as specified in the deploy service configuration (resource type). Example: for Cloud Foundry Trial, “hanatrial” is available instead of “hana”: service-alternatives: [“hanatrial”]
service-key-name (Write) — Used when consuming an existing service key. Specifies the name of the service key. Default: the name of the resource. Example: service-key-name: my-service-key. See Consumption of existing service keys for more information.
This section contains information about the parameters and properties of a Multitarget Application (MTA).
Parameters
Parameters are reserved variables that affect the behavior of the MTA-aware tools, such as the deployer.
Parameters can be “system”, “write-only”, or “read-write” (default value can be overwritten). Each tool
publishes a list of system parameters and their (default) values for its supported target environments. All
parameter values can be referenced as part of other property or parameter value strings. To reference a
parameter value, use the placeholder notation ${<parameter>}. The value of a system parameter cannot be
changed in descriptors. Only its value can be referenced using the placeholder notation.
Examples of common read-only parameters are user, default-host, default-uri. The value of a writable
parameter can be specified within a descriptor. For example, a module might need to specify a non-default
value for a target-specific parameter that configures the amount of memory for the module’s runtime.
Tip
It is also possible to declare metadata for parameters and properties defined in the MTA deployment description; the mapping is based on the parameter or property keys. For example, you can specify if a parameter is required (optional: false) or can be modified (overwritable: true).
Descriptors can contain so-called placeholders (also known as substitution variables), which can be used as sub-strings within property and parameter values. Placeholder names are enclosed in a dollar sign ($) and curly brackets, for example, ${user}.
Sample Code
modules:
  - name: node-hello-world
    type: javascript.nodejs
    path: web/
    parameters:
      host: ${user}-node-hello-world
Tip
Placeholders can also be used without any corresponding parameters; in this scenario, their value cannot
be overridden in a descriptor. Such placeholders are Read only.
Properties
The MTA deployment descriptor can contain two types of properties, which are very similar, and are intended
for use in the modules or resources configuration, respectively.
Properties can be declared in the deployment description both in the modules configuration (for example, to
define provides or requires dependencies), or in the resources configuration to specify requires
dependencies. Both kinds of properties (modules and requires) are injected into the module’s environment.
In the requires configuration, properties can reference other properties that are declared in the
corresponding provides configuration, for example, using the ~{} syntax.
env-var-name (Write, for a required dependency) — Used when consuming an existing service key. Specifies the name of the environment variable that will contain the service key's credentials. Default: the name of the service key. Example: env-var-name: SERVICE_KEY_CREDENTIALS. See Consumption of existing service keys for more information.
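Putting the service-key-name and env-var-name parameters together, a hedged sketch of consuming an existing service key might look as follows. The resource type org.cloudfoundry.existing-service-key, the service-name parameter, and all names are assumptions based on the parameter descriptions above, not confirmed syntax:

```yaml
modules:
  - name: my-app
    type: javascript.nodejs
    requires:
      - name: my-key
        parameters:
          env-var-name: SERVICE_KEY_CREDENTIALS   # env variable that receives the key's credentials

resources:
  - name: my-key
    type: org.cloudfoundry.existing-service-key   # assumed resource type for existing keys
    parameters:
      service-name: some-service                  # assumed: the service instance owning the key
      service-key-name: my-service-key            # the name of the existing service key
```

At deployment, the application's environment would then contain the key's credentials under the configured variable name.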
The values of properties can be specified at design time, in the deployment description (mtad.yaml). More often, however, a property value is determined during deployment, where the value is either explicitly set by the administrator, for example, in a deployment extension descriptor file (myDeployExtension.mtaext), or inferred by the MTA deployer from a target-platform configuration. When set, the deployer injects the property values into the module's environment. The deployment operation reports an error if it is not possible to determine a value for a property.
Tip
It is possible to declare metadata for properties defined in the MTA deployment description; the mapping is based on the parameter or property keys. For example, you can specify if a property is required (optional: false) or can be modified (overwritable: true).
Cross-References to Properties
To enable resource properties to resolve values from a property in another resource or module, a resource
must declare a dependency. However, these “requires” declarations do not affect the order of the application
deployment.
Restriction
It is not possible to reference list configuration entries either from resources or “subscription”
functionalities (deployment features that are available to subscriber applications).
Code Syntax
modules:
  - name: java
    ...
    provides:
      - name: backend
        properties:
          url: ${default-url}/foo

resources:
  - name: example
    type: example-type
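The resource in the example above does not yet declare the dependency. A hedged sketch of how the resource might require the backend dependency and resolve its url property — the property name backend-url is illustrative:

```yaml
resources:
  - name: example
    type: example-type
    requires:
      - name: backend            # declares the dependency on the module's provides section
    properties:
      backend-url: ~{backend/url}  # resolves to ${default-url}/foo provided by module 'java'
```

As noted above, such a requires declaration on a resource does not affect the order of the application deployment.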
It is possible to declare metadata for parameters and properties defined in the MTA deployment description,
for example, using the “parameters-metadata:” or “properties-metadata:” keys, respectively; the
mapping is based on the keys defined for a parameter or property.
You can specify if a property is required (optional: false) or can be modified (overwritable: true), as illustrated in the following (incomplete) example:
The overwritable: and optional keywords are intended for use in extension scenarios, for example, where
a value for a parameter or property is supplied at deployment time and declared in a deployment-extension
descriptor file (myMTADeploymentExtension.mtaext).
You can declare metadata for the parameters and properties that are already defined in the MTA deployment
description. However, any parameters or properties defined in the mtad.yaml deployment descriptor with the
metadata value overwritable: false cannot be overwritten by a value supplied from the extension
descriptor. In this case, an error would occur in the deployment.
Code Syntax
modules:
  - name: frontend
    type: javascript.nodejs
    parameters:
      memory: 128M
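The example above is incomplete. A hedged sketch of how the metadata keys described earlier might be attached to the memory parameter — the exact nesting under parameters-metadata: is an assumption based on the key names given in the text:

```yaml
modules:
  - name: frontend
    type: javascript.nodejs
    parameters:
      memory: 128M
    parameters-metadata:
      memory:
        optional: false      # a value must be determined for this parameter
        overwritable: true   # an extension descriptor may supply a different value
```

With overwritable: false instead, a value supplied from an extension descriptor would cause a deployment error, as described above.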
Note
Parameters or properties can be declared as sensitive. Information about properties or parameters flagged
as “sensitive” is not written as plain text in log files; it is masked, for example, using a string of asterisks
(********).
The deploy service supports the extension of the standard syntax for references in module properties; this
extension enables you to specify the name of the requires section inside the property reference.
You can use this syntax extension to declare implicit groups, as illustrated in the following example:
Sample Code
modules:
  - name: pricing-ui
    type: javascript.nodejs
    properties:
      API: # equivalent to a group, but defined in the module properties
        - key: internal1
          protocol: ~{price_opt/protocol} # reference to the value of 'protocol' provided in 'price_opt' by module 'pricing-backend'
        - key: external
          url: ~{competitor_data/url} # reference to the string value of property 'url' in required resource 'competitor_data'
          api_keys: ~{competitor_data/creds} # reference to the list value of property 'creds' in 'competitor_data'
    requires:
      - name: competitor_data
      - name: price_opt
  - name: pricing-backend
    type: java.tomcat
    provides:
      - name: price_opt
        properties:
          protocol: http ...

resources:
  - name: competitor_data
    properties:
      url: "https://marketwatch.acme.com/"
The Multitarget Application (MTA) extension descriptor is a YAML file that contains data complementary to the deployment descriptor. The data can be environment or deployment specific, for example, credentials depending on the user who performs the deployment. The MTA extension descriptor has a structure similar to the deployment descriptor, following the Multitarget Application Model structure with several limitations and differences. Normally, an extension descriptor extends the deployment descriptor, but it can also extend another extension descriptor, forming a chain of extension descriptors. It can add or overwrite existing data if necessary.
Several extension descriptors can be additionally used after the initial deployment.
Note
The format and available options within the extension descriptor may change with newer versions of the MTA specification. You must always specify the schema version option when defining an extension descriptor to inform SAP Cloud Platform which MTA specification version should be used. Furthermore, the schema version used within the extension descriptor and the deployment descriptor should always be the same.
In the examples below, we have a deployment descriptor, which has already been defined, and several
extension descriptors.
Deployment descriptor:
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.0'
ID: com.example.extension
version: 0.1.0

resources:
  - name: data-storage
    properties:
      existing-data: value
The following extension descriptor instructs the deploy service to:
● Validate the extension descriptor against the MTA specification version 3.1
● Extend the com.example.extension deployment descriptor
Example
_schema-version: '3.1'
ID: com.example.extension.first
extends: com.example.extension

resources:
  - name: data-storage
    properties:
      existing-data: new-value
      non-existing-data: value
The following is an example of another extension descriptor that extends the extension descriptor from the
previous example:
Example
_schema-version: '3.1'
ID: com.example.extension.second
extends: com.example.extension.first

resources:
  - name: data-storage
    properties:
      second-non-existing-data: value
● The examples above are incomplete. To deploy a solution, you have to create a deployment descriptor and
an MTA archive.
● Add new data in the modules, resources, parameters, properties, provides, and requires sections
● Overwrite existing data (in depth) in the modules, resources, parameters, properties, provides, and requires sections
● As of schema version 3.xx, parameters and properties are overwritable and optional by default. If you want to make a certain parameter or property non-overwritable or required, you need to add specific metadata. See Metadata for Properties and Parameters [page 1274].
Related Information
Defining MTA Deployment Descriptors for the Neo Environment [page 1612]
Defining Multitarget Application Archives [page 1239]
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617]
The Multitarget Application Model
This section contains information about how to manage standard Cloud Foundry entities with MTA modelling.
3.1.11.8.1 Applications
If you want to create an application in Cloud Foundry using the MTA deployment service, you must first
create a module. In the deployment descriptor, a module represents an application. These modules can have
different types and parameters that modify their behavior.
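A minimal deployment descriptor (mtad.yaml) that models a single application as a module might look like the following sketch; the ID, module name, and memory value are illustrative:

Sample Code
_schema-version: "3.1"
ID: com.example.sample
version: 1.0.0
modules:
- name: my-app
  type: javascript.nodejs
  parameters:
    memory: 256M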
The MTA deployment is an incremental process. This means that the state of the artifacts in the Cloud Foundry
environment is compared to the desired state described in the mtad.yaml, and then the set of operations
required to change the current state of the MTA to the desired state is computed. The following optimizations
are possible during the MTA redeployment:
Note
Currently, Cloud Foundry environment applications are restaged if they are bound to any service
with parameters, because it is not possible to get the current Cloud Foundry service parameters
and compare them with the desired ones.
○ The application is bound to service configuration parameters that have been changed. This requires an
update of the service instance and rebind, restage, and restart of the application.
○ The service binding parameters are changed. This requires an update of the service binding and
restage of the application.
○ The MTA version is changed, which requires a change of special application environment variables,
managed by the deploy service.
3.1.11.8.2 Tasks
Create one-off administration tasks or scripts.
During the deployment process, these tasks can be executed against staged applications. The platform creates
a new container for them, where they are performed for a specific period until they are completed, after which
the container is deleted.
Sample Code
_schema-version: "3.1"
ID: foo
version: 3.0.0
modules:
- name: foo
  type: javascript.nodejs
  parameters:
    no-route: true
    no-start: true
    disk-quota: 2GB
    tasks:
    - name: example_task
      command: npm start
      memory: 1GB
The values for application memory and task memory are independent of each other. The same applies to the
allowed disk quota.
Tip
For more information about one-off tasks, see MTA Deployment Descriptor Examples [page 1241].
Sample Code
_schema-version: "3.1"
ID: foo
version: 3.0.0
modules:
- name: foo
  type: javascript.nodejs
  parameters:
    no-route: true
    no-start: true
    memory: 1GB
    disk-quota: 2GB
    tasks:
    - name: db_migration
      command: "bin/rails db:migrate"
      memory: 256M # This parameter is optional.
      disk-quota: 128M # This parameter is optional.
Related Information
https://docs.cloudfoundry.org/devguide/using-tasks.html
3.1.11.8.3 Services
If you want to create a service in the Cloud Foundry environment using the MTA deployment service, you must
first define an MTA resource. These resources can have different types and parameters that modify their
behavior. There is a predefined set of supported resource types, each of which in most cases represents a
certain combination of service offering and service plan. See the subsection “MTA resource types” in
Resources [page 1265].
For a more flexible approach, use the resource types listed below.
Note
In most cases, MTA resources represent a Cloud Foundry service, but there are cases where they
represent a different platform entity, for example a service key. They can even serve only as a group of
configurations. See Resources [page 1265] for more information.
Sample Code
resources:
- name: my-postgre-service
  type: org.cloudfoundry.managed-service
  parameters:
    service: postgresql
    service-plan: v9.6-dev
Note
To choose a different service plan for a predefined MTA resource type, for example, to change the
service plan for PostgreSQL service, you define it with:
Sample Code
resources:
- name: my-postgre-service
  type: org.postgresql
  parameters:
    service-plan: v9.6-dev
● org.cloudfoundry.existing-service
To assume that the named service exists without managing its lifecycle, define it by using the
org.cloudfoundry.existing-service resource type with the following parameters:
○ service-name
Optional. Service instance name. Default value is the resource name.
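For example, an already created service instance can be consumed as follows (the resource and instance names are illustrative):

Sample Code
resources:
- name: my-existing-service
  type: org.cloudfoundry.existing-service
  parameters:
    service-name: some-existing-instance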
● org.cloudfoundry.existing-service-key
Existing service keys can be modeled as a resource of type org.cloudfoundry.existing-service-
key, which checks and uses their credentials. For more information, see Service Keys [page 1287].
● org.cloudfoundry.user-provided-service
Create or update a user-provided service configured with the following resource parameters:
○ service-name
Optional. Name of the service to create. Default value is the resource name.
○ config
Required. Map value containing the service creation configuration, for example, a url and user
credentials (user and password).
Example
resources:
- name: my-destination-service
  type: org.cloudfoundry.user-provided-service
  parameters:
    config:
      <credential1>: <value1>
      <credential2>: <value2>
● configuration
For more information, see Cross-MTA Dependencies [page 1289].
Some services support additional configuration parameters in the create-service request. These
parameters are passed in a valid JSON object containing the service-specific configuration parameters.
The deploy service supports the following methods for the specification of service creation parameters:
Note
If service-creation information is supplied both in the deployment (or extension) descriptor and in a
supporting JSON file, the parameters specified directly in the deployment (or extension) descriptor
override the parameters specified in the JSON file.
Sample Code
Additional entry in MANIFEST.MF

Name: xs-security.json
MTA-Resource: java-uaa
Content-Type: application/json
Using method 1, all parameters under the special config parameter are used for the service-creation request.
This parameter is optional.
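Method 1 can be sketched directly in the deployment descriptor as follows; the resource name and the xsappname entry are illustrative:

Sample Code
resources:
- name: java-uaa
  type: com.sap.xs.uaa
  parameters:
    config:
      xsappname: my-sample-app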
Using method 2, there are dependencies on further configuration entries in other configuration files. For
example, if you use this JSON method, an additional entry must be included in the MANIFEST.MF file which
defines the path to the JSON file containing the parameters as well as the name of the resource for which the
parameters should be used.
Some services support additional configuration parameters in the create-bind request; these parameters
are passed in a valid JSON object containing the service-specific configuration parameters.
The deployment service supports the following methods for the specification of service-binding parameters:
Note
If service-binding information is supplied both in the MTA's deployment (or extension) descriptor and in a
supporting JSON file, the parameters specified directly in the deployment (or extension) descriptor
override the parameters specified in the JSON file.
In the MTA deployment descriptor, the requires dependency between a module and a resource represents
the binding between the corresponding application and the service created from them (if the service has a
type). For this reason, the config parameter is nested in the requires dependency parameters, and a
distinction must be made between the config parameter in the modules section and the config parameter
used in the resources section (for example, when used for service-creation parameters).
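Method 1 for service-binding parameters can therefore be sketched as follows, with the config map nested in the requires dependency (the module and resource names are illustrative and match the xs-hdi.json example that follows):

Sample Code
modules:
- name: node-hello-world-backend
  type: javascript.nodejs
  requires:
  - name: node-hdi-container
    parameters:
      config:
        permissions: debugging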
Sample Code
Content of xs-hdi.json

{
  "permissions": "debugging"
}
Sample Code
Additional entry in MANIFEST.MF

Name: xs-hdi.json
MTA-Requires: node-hello-world-backend/node-hdi-container
Content-Type: application/json
Method 2 shows how to define the service-binding parameters for a service-bind request in a JSON file. Using
this method, there are dependencies on entries in other configuration files. For example, if you use this JSON
method, an additional entry must be included in the MANIFEST.MF file which defines the path to the JSON file
containing the parameters as well as the name of the resource for which the parameters should be used.
Note
To avoid ambiguities, the name of the module is added as a prefix to the name of the requires
dependency; the name of the manifest attribute uses the following format: <module-name>#<requires-
dependency-name>.
Some services provide a list of tags that are later added to the <VCAP_SERVICES> environment variable. These
tags provide a more generic way for applications to parse <VCAP_SERVICES> for credentials.
You can also provide custom tags when creating a service instance. To inform the deployment service about
custom tags, you can use the special service-tags parameter, which must be located in the resource
definitions that represent the managed services, as illustrated in the following example:
Sample Code
resources:
- name: nodejs-uaa
  type: com.sap.xs.uaa
  parameters:
    service-tags: ["custom-tag-A", "custom-tag-B"]
Note
Some service tags are inserted by default, for example, xsuaa, for the XS User Account and Authentication
(UAA) service.
Service brokers are applications that advertise a catalog of service offerings and service plans, as well as
interpreting calls for creation, binding, unbinding, and deletion. The deploy service supports automatic creation
(and update) of service brokers as part of an application deployment process.
An application can declare that a service broker should be created as part of its deployment process by using
the following parameters in its corresponding module in the MTA deployment (or extension) descriptor:
Tip
Sample Code
- name: jobscheduler-broker
  properties:
    user: ${generated-user}
    password: ${generated-password}
  parameters:
    create-service-broker: true
    service-broker-name: jobscheduler
    service-broker-user: ${generated-user}
    service-broker-password: ${generated-password}
    service-broker-url: ${default-url}
The create-service-broker parameter should be set to true if a service broker must be created for the
specified application module. You can specify the name of the service broker with the service-broker-name
parameter; the default value is ${app-name}. The service-broker-user and service-broker-password
parameters are the credentials that will be used by the controller to authenticate itself to the service broker in
order to make requests for creation, binding, unbinding, and deletion of services. The service-broker-url
parameter specifies the URL to which the controller will send these requests.
Note
During the creation of the service broker, the XS advanced controller makes a call to the service-broker API
to inquire about the services and plans the service broker provides. For this reason, an application that
declares itself as a service broker must implement the service-broker application-programming interface
(API). Failure to do so might cause the deployment process to fail.
Note
The consumption of existing service keys from applications is an alternative to service bindings. The
application can use the service key credentials to consume the service.
A service-keys parameter can be created or updated when a service instance is being created or updated. It
has to be defined under the resources that represent services which support service keys.
Sample Code
resources:
- name: my-service-instance
  type: org.cloudfoundry.managed-service
  parameters:
    service-keys: # Specifies the service keys that should be created for the respective service instance. Optional parameter.
    - name: tool-access-key # Specifies the service key name. Mandatory element.
      config: # Specifies the service key configuration. All entries under this map element are used for the service key creation request. Optional element.
        permissions: read-write
    - name: reader-endpoint
      config:
        permissions: read-only
As shown in the example above, every service key entry under the service-keys parameter supports
optional configuration parameters that can be defined under the config parameter.
To be consumed, existing service keys are modeled as resources of type org.cloudfoundry.existing-
service-key. MTA modules can be set to depend on these resources by using a configuration in the
requires section, which results in an injection of the service key credentials into the application environment.
Sample Code
modules:
- name: app123
  type: javascript.nodejs
  requires:
  - name: service-key-1
    parameters:
      env-var-name: keycredentials
...
resources:
- name: service-key-1
  type: org.cloudfoundry.existing-service-key
  parameters:
    service-name: service-a
As a result, the application app123 has the environment variable keycredentials, whose value contains the
credentials of the existing service key service-key-1 of the service service-a.
Note
Note that only the parameter service-name is mandatory. It defines which service is used by the
application.
● service-key-name - resource parameter that defines which service key of the defined service is
used. The default value is the name of the resource.
● env-var-name - requires dependency parameter that defines the name of the new
environment variable of the application. The default value is the service key name.
3.1.11.8.4 Routes
This section describes how developers or administrators configure application routes using MTA
modelling.
Routes are defined at module level by using the routes parameter. The parameter accepts a list of one or
more HTTP routes. It is a combination of the old parameters host, domain, port, and route-path, and
encompasses the full addresses to which to bind a module.
If both the new routes parameter and the old ones are present, the routes value is used and the values of the
old parameters are ignored. Each route for the application is created if it does not already exist.
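A basic module with the routes parameter can be sketched as follows; the module name and the hostnames are illustrative and follow the myhost.my.domain/path pattern described in the note below:

Sample Code
modules:
- name: my-app
  type: application
  parameters:
    routes:
    - route: myhost.my.domain/path
    - route: otherhost.my.domain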
Note
A routes parameter consists of one or many HTTP routes following the pattern myhost.my.domain/path.
Example
In order to reference a specific route inside a deployment descriptor (for example, in a provides
section), use the following syntax, which references the first route in the list:
provides:
- name: foo
  properties:
    url: "${protocol}://${routes/0/route}"
If you want to create a route with a wildcard hostname, use an asterisk in quotes ("*").
modules:
- name: my-app
  type: application
  parameters:
    routes:
    - route: "*foo.my.custom.domain/path"
    - route: "foo.my.custom.domain/path"
3.1.11.9 Features
This section describes MTA-specific features available in the SAP Cloud Platform Cloud Foundry environment.
Each feature has its own productive benefit. For example, blue-green deployment provides zero-downtime
updates of your MTA.
Note
The declaration method requires the addition of a resource in the deployment descriptor; the additional
resource defines the provided dependency from the other MTA.
This method can be used to access any entry that is present in the configuration registry. The parameters used
in this cross-MTA declaration method are provider-nid, provider-id, version, and target. The
parameters are all optional and are used to filter the entries in the configuration registry based on their
respective fields. If any of these parameters is not present, the entries will not be filtered based on their value
for that field. The version parameter can accept any valid Semver ranges.
When used for cross-MTA dependency resolution the provider-nid is always “mta”, the provider-id
follows the format <mta-id>:<mta-provides-dependency-name> and the version parameter is the
version of the provider MTA. In addition, as illustrated in the following example, the target parameter is
structured and contains the name of the organization and space in which the provider MTA is deployed. In the
following example, the placeholders ${org} and ${space} are used, which are resolved to the name of the
organization and space of the consumer MTA. In this way, the provider MTA is deployed in the same space as
the consumer MTA.
Note
As of version 3.0 of the MTA specification, the provided dependencies are no longer public by default. They
have to be explicitly declared public for cross-MTA dependencies to work.
Sample Code
_schema-version: "3.1"
ID: com.sap.sample.mta.consumer
version: 0.1.0
modules:
- name: consumer
  type: java.tomee
  requires:
  - name: message-provider
    properties:
      message: ~{message}
resources:
- name: message-provider
  type: configuration
  parameters:
    provider-nid: mta
    provider-id: com.sap.sample.mta.provider:message-provider
    version: ">=1.0.0"
    target:
      org: ${org} # Specifies the org of the provider MTA
      space: ${space} # Wildcard * searches in all spaces
Tip
If no target organization or space is specified by the consumer, the current organization and space are
used to locate the provider MTA. If you specify a wildcard value (*) for the organization or space of the provider
MTA, the provider is searched for in all organizations or spaces for which the wildcard value is provided.
The following example shows the dependency declaration in the deployment descriptor of the “provider” MTA:
Sample Code
_schema-version: "3.1"
ID: com.sap.sample.mta.provider
version: 2.3.0
modules:
- name: provider
  type: javascript.nodejs
  provides:
  - name: message-provider
    public: true
    properties:
      message: "Hello! This is a message provided by application \"${app-name}\", deployed in org \"${org}\" and space \"${space}\"!"
A “consumer” module must explicitly declare the organizations and spaces in which a “provider” is expected to
be deployed, unless it is the same space as the consumer. The “provider” can define a white list that specifies
where its configuration data is visible.
Note
Previously, registry entries were visible from all organizations by default. Now, the default visibility setting is
“visible within the current organization and all the organization's spaces”.
White lists can be defined on various levels. For example, a visibility white list could be used to ensure that a
provider's configuration data is visible in the local space only, in all organizations and spaces, in a defined list of
organizations, or in a specific list of organization and space combinations.
The options for white lists and the visibility of configuration data are similar to the options available for
managed services. However, for visibility white lists, space developers are authorized to extend the visibility of
configuration data beyond the space in which they work, without the involvement of an administrator. An
administrator is required to release service plans to certain organizations.
Visibility is declared on the provider side by setting the visibility: parameter (of type 'sequence'),
containing entries for the specified organization (org:) and space (space:). If no visibility: parameter is
set, the default value org: ${org}, space: '*' is used, which restricts visibility to consumers
deployed into any space of the provider's organization. Alternatively, the value org: '*' can be set, which
allows bindings from all organizations and spaces. The white list can contain entries at the organization level
only. This, however, releases configuration data for consumption from all spaces within the specified
organizations, as illustrated in the following (annotated) example.
Tip
Since applications deployed in the same space are always considered “friends”, visibility of configuration
data in the local space is always preserved, no matter which visibility conditions are set.
Sample Code
provides:
- name: backend
  public: true
  parameters:
    visibility: # a list of possible settings:
    - org: ${org} # for local org
      space: ${space} # and local space
    - org: org1 # for all spaces in org1
    - org: org2 # for the specified combination (org2, space2)
      space: space2
    - org: ${org} # default: all spaces in local org
    - org: '*' # all orgs and spaces
    - org: '*'
      space: space3 # every space3 in every org
● Only those users in the white-listed spaces can read or consume the provided configuration data.
● Only users with the role “SpaceDeveloper” in the configuration-data provider's space can modify (edit or
delete) configuration data.
The deployment service supports a method that allows an MTA to consume multiple configuration entries per
requires dependency.
The following is an example for multiple requires dependencies in the MTA Deployment Descriptor
(mtad.yaml):
Sample Code
_schema-version: "2.1"
ID: com.acme.framework
version: "1.0"
modules:
- name: framework
  type: javascript.nodejs
  requires:
  - name: plugins
    list: plugin_configs
    properties:
      plugin_name: ~{name}
      plugin_url: ~{url}/sources
    parameters:
      managed: true # true | false. Default is false
resources:
- name: plugins
  type: configuration
  parameters:
    target:
      org: ${org}
      space: ${space}
    filter:
      type: com.acme.plugin
The MTA deployment descriptor shown in the example above contains a module that specifies a requires
dependency to a configuration resource. Since the requires dependency has a list property, the deploy
service will attempt to find multiple configuration entries that match the criteria specified in the configuration
resource.
Tip
It is possible to create a subscription for a single configuration entry, for example, when no list:
element is defined in the requires dependency.
Note
The filter parameter can be used in combination with other configuration resource specific parameters, for
example: provider-nid, provider-id, target, and version.
The resource itself contains a filter parameter that is used to filter entries from the configuration registry
based on their content. In the example shown above, the filter only matches entries that are provided by an
MTA deployed in the current space, which have a type property in their content with a value of
com.acme.plugin.
If the “list” element is missing, the values matched by the resources filter are single configuration entries –
not the usual list of multiple configuration entries. In addition, if either no value or multiple values are found
during the deployment of the consuming (subscribing) MTA, the deployment operation fails. If a provider (plug-
The XML document in the following example shows some sample configuration entries, which would be
matched by the filter if they were present in the registry.
Sample Code
<configuration-entry>
  <id>8</id>
  <provider-nid>mta</provider-nid>
  <provider-id>com.sap.sample.mta.plugin-1:plugin-1</provider-id>
  <provider-version>0.1.0</provider-version>
  <target-space>2172121c-1d32-441b-b7e2-53ae30947ad5</target-space>
  <content>{"name":"plugin-1","type":"com.acme.plugin","url":"https://xxx.mo.sap.corp:51008"}</content>
</configuration-entry>
<configuration-entry>
  <id>10</id>
  <provider-nid>mta</provider-nid>
  <provider-id>com.sap.sample.mta.plugin-2:plugin-2</provider-id>
  <provider-version>0.1.0</provider-version>
  <target-space>2172121c-1d32-441b-b7e2-53ae30947ad5</target-space>
  <content>{"name":"plugin-2","type":"com.acme.plugin"}</content>
</configuration-entry>
The JSON document in the following example shows the environment variable that will be created from the
requires dependency defined in the example deployment descriptor above, assuming that the two
configuration entries shown in the XML document were matched by the filter specified in the configuration
resource.
Note
References to non-existing configuration entry content properties are resolved to “null”. In the example
above, the configuration entry published for plugin-2 does not contain a url property in its content. As a
result, the environment variable created from that entry is set to “null” for plugin_url.
Sample Code
plugin_configs: [
{
"plugin_name": "plugin-1",
"plugin_url": "https://xxx.mo.sap.corp:51008/sources"
},
{
"plugin_name": "plugin-2",
"plugin_url": null
}
]
Requires dependencies support a special parameter named “managed”, which registers the application
created from the module containing the requires dependency as a “subscriber”. One consequence of this
Tip
When starting the deployment of an MTA (with the xs deploy command), you can use the special option
--no-restart-subscribed-apps to specify that, if the publishing of configuration entries created for that MTA
results in the update of a subscribing application's environment, that application should not be
restarted.
By running two identical production environments called “blue” and “green”, you can perform a blue-green
deployment, which eliminates downtime and risk for your system.
Prerequisites
Context
Restriction
There is no blue-green deployment for bound services. Blue and green applications are bound to the same
service instances.
Procedure
1. Deploy your initial MTA (the blue version) by executing the cf bg-deploy <your-mta-archive-v1>
command.
This will:
○ create new applications
If there are already installed applications, “blue” will be added to the existing application names.
2. Deploy your updated MTA (the green version) by executing the cf bg-deploy <your-mta-archive-
v2> command.
Note
The first action is that all MTA services are updated. The changes between the old and the new versions
must be compatible, for example, changes to database tables, UAA configurations, and so on.
This will:
○ create new applications adding “green” to the existing application names
○ create temporary routes to the green applications
This will:
○ map the productive routes to your green versions
When performing a blue-green deployment, you can use the Zero-Downtime Maintenance (ZDM) parameter to
update an application that has database changes between the “blue” and the “green” versions.
Prerequisites
● The applications use HDI containers for persistence - com.sap.xs.hdi-container resource type in the
deployment descriptor
● ZDM is supported only with a blue-green deployment of a Multitarget Application (MTA)
● The application does not use a hard coded service name for the data source to the HDI service
● The database artifacts are modeled as described in Table 1: Modeling of HDI Artifacts [page 1299]
Context
Overview
Zero-downtime update is achieved by deploying database artifacts in separate schemas - data schema and
access schema.
Note
Applications are bound only to the access schema. Deployment and lifecycle management tools are bound
to both data and access schemas.
The update can run in one of the following modes:
● normal
● ZDM - the ZDM update of applications is enabled. When used, the deployment analyzes the application
artifacts and deploys them into the data and access schemas by providing the corresponding roles in the
data schema, granting the necessary permissions, and providing the corresponding interfaces to the
access schema.
Table 1: Modeling of HDI Artifacts [page 1299] contains a description of the modeling of the supported HDI
artifacts and the target schema where they should be deployed. Depending on where and how the HDI artifacts
are modeled in the DB module, there are two types of handling:
1. Default handling - when the artifacts are modeled in the default folders (src/, cfg/) of the DB module. In
this case, the deployment handles the artifacts by default and deploys them into the relevant default
schema according to artifact type.
This deployment handling has the following limitations:
1. Some access schema objects (for example, interfaces and logic) are deployed in the data schema,
which is unnecessary as this brings a performance reduction and violates the ZDM concept for
schema separation. For example, the CDS types should be separated and used from data schema
objects like CDS entities, or from the access schema objects (such as CDS views). If these types are
separated in two different CDS files - one used from CDS entity and the other from CDS view - all
artifacts except the CDS view are deployed to both schemas. It is not necessary to deploy access
schema objects like the CDS type used from the CDS view into the data schema.
2. The hdbtables are deployed into the data schema by default. In certain situations, it is acceptable to
deploy them into the access schema, for example if there are corresponding hdbtabledata files that
fill these tables, for example with language texts that are incompatible between versions. With the
default handling it is not possible to deploy .hdbtables into an access schema, because they are
deployed into the data schema by default.
2. Separated handling - when the artifacts are modeled in data/ and access/ folders within the src/ and
cfg/ folders of the database module. The data/ folder should contain the data schema related objects, which
have persistence data, like tables, sequences, and indexes. The access/ folder should contain access schema
related objects, like interface-to-database objects (for example, projection views and synonyms) and the
database logic (such as calculation views, database procedures, and functions).
In this case, the deployment deploys the artifacts from the data/ folder to the data schema and generates
the corresponding interface objects in the access schema. Artifacts from the access/ folder are deployed only
to the access schema. This type of handling resolves the limitations listed above for the following reasons:
1. Limitation 1 is resolved because the access schema objects like the CDS type used from CDS view are
modeled in the access/ folder and are thus deployed only to access schema.
2. Limitation 2 is resolved, because the hdbtables that should be deployed to access schema, are
modeled in access/ folder and are thus deployed only to the access schema.
This handling brings clarity to the database model, as the location of database objects is better defined,
and it also improves the separation of object types.
To ensure that your applications support the ZDM update, follow the adoption guidelines stated in Table 1:
Modeling of HDI Artifacts [page 1299] and model the HDI artifacts in the data/ and access/ folders
accordingly. ZDM is also supported with the default handling of HDI artifacts, when they are in the default
folders, but it has limitations with more complex data models.
Persistence in applications
ZDM deployment is possible for applications that use HDI containers for persistence. HDI containers are
services that use the hdi-shared service plan.
Note
Applications must not use a hard-coded service name for the data source to the HDI service, as during
ZDM deployment the applications are bound to a new access HDI service with generated name, which
could be different from the hard-coded name in the application code.
● Java applications
Java applications can use dynamic data source configuration in one of the following ways:
1. By using the SAP Java buildpack - Java applications can use a dynamic data source configuration
feature that allows bound services described in the manifest.yml to appear as data sources
available for JNDI lookup in the application. This is done using the environment variable
JBP_CONFIG_RESOURCE_CONFIGURATION, as shown in the example deployment descriptor below.
2. By using the Spring Cloud Spring Service Connector
● Node.js applications
Should use @sap/hana-client for connection to the database.
For more information, see Configure the Node.js Driver (Client Install).
.hdbcds - ZDM supported: Yes
Default target schema:
● data and access (both) - all .hdbcds artifacts that contain an entity, type, or table-type definition
● access (only) - all .hdbcds artifacts that do not contain an entity, type, or table-type definition
Adoption guidelines:
1. Put CDS (temporary) entities into the data/ folder. If put in the access schema, CDS (temporary) entities
produce only projection views.
2. Put CDS views only in the access/ folder.
3. CDS types and CDS table types should not be used by both entities and views or procedures. Separate CDS
types and CDS table types for the data/ and access/ folders respectively:
1. data/ folder - Define CDS types and CDS table types which are used only by CDS entities. Do not make
backward-incompatible changes on data types in next versions.
2. access/ folder - Define CDS types and CDS table types which are used by procedures, views, and/or
table types, but not by entities.
4. Associations defined in CDS entities can be used only by CDS views, but not by .hdbviews. In the data/
folder, do not model associations to objects from the access/ folder.
5. Put CDS files containing Data Control Language (DCL) objects only in the access/ folder.
Note
Default target schema (data/access) - the target schema into which the artifact is most frequently
deployed (by default). If the artifact is located in the default folder (src/, cfg/) of the db module, not in the
data/ or access/ folder, the HDI Deployer applies default handling to the artifact. If the artifact is
modeled in the data/ or access/ folder, it is deployed into the corresponding schema.
Procedure
1. To run the deployment in ZDM mode for the applications and the databases, you have to declare the
value zdm-mode: true as a parameter of all modules of the module type com.sap.xs.hdi.
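In a deployment descriptor, this could look like the following sketch (the module name and path are illustrative):

```yaml
modules:
  # database module deployed in ZDM mode
  - name: my-db
    type: com.sap.xs.hdi
    path: db
    parameters:
      zdm-mode: true
```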
Note
For a more detailed example, see the Cloud HDI ZDM Reference Application.
Sample Code
_schema-version: "3.1.0"
ID: cloud-hdi-zdm-ref-app
version: 0.0.1
modules:
- name: backend
type: java.tomee
parameters:
buildpack: sap_java_buildpack
disk-quota: 256M
properties:
SET_LOGGING_LEVEL: '{OpenEJB: DEBUG, OpenEJB.options: DEBUG,
OpenEJB.server: DEBUG, OpenEJB.startup: DEBUG, OpenEJB.startup.service:
DEBUG, OpenEJB.startup.config: DEBUG, OpenEJB.hsql: DEBUG, openjpa.Tool:
DEBUG, openjpa.Runtime: INFO, openjpa.Remote: DEBUG, openjpa.DataCache:
DEBUG, openjpa.MetaData: DEBUG, openjpa.Enhance: DEBUG, openjpa.Query:
DEBUG, openjpa.jdbc.SQL: DEBUG, openjpa.jdbc.SQLDiag: DEBUG,
openjpa.jdbc.JDBC: DEBUG, openjpa.jdbc.Schema: DEBUG}'
requires:
- name: hdi-container
properties:
Sample Code
resources/META-INF/persistence.xml
<persistence version="1.0"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="cloud-hdi-zdm-ref-app" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/hdi-container</jta-data-source>
<properties>
<property name="eclipselink.target-database"
value="org.eclipse.persistence.platform.database.HANAPlatform"/>
<property name="eclipselink.logging.level" value="FINE"/>
</properties>
</persistence-unit>
</persistence>
Sample Code
webapp/META-INF/java_xs_buildpack/config/resource_configuration.yml
---
tomee/webapps/ROOT/WEB-INF/resources.xml:
service_name_for_DefaultDB: hdi-container
Sample Code
webapp/WEB-INF/resources.xml
@PersistenceContext(name = "cloud-hdi-zdm-ref-app")
private EntityManager em;
2. To start the blue-green deployment process, follow the steps described in Blue-Green Deployment of
Multitarget Applications (MTA) [page 1294].
In some cases, it is crucial that modules and therefore applications are deployed in a predictable and
consistent order. To ensure this, the module-level attribute deployed-after has been introduced. It contains
a list of module names. If a module has this attribute, it will be deployed only after the modules specified in the
attribute have already been deployed.
The relations described through this attribute are transitive, so if module A should be deployed after module B,
and module B should be deployed after module C, then it means that module A should be deployed after
module C.
Sample Code
ID: com.sap.sample
version: 0.1.0
_schema-version: "3.2.0"
parameters:
enable-parallel-deployments: true
modules:
- name: ui
type: javascript
deployed-after: [ backend, metrics ]
- name: backend
type: java
deployed-after: [ hdi-content ]
requires:
- name: metrics
properties:
METRICS_URL: ~{url}
- name: metrics
type: javascript
deployed-after: [ hdi-content ]
provides:
- name: metrics
properties:
METRICS_URL: ~{url}
- name: hdi-content
type: hdi
In the example above, the deployed-after attributes guarantee that the ui module is deployed after the
backend and the metrics modules, and they in turn are deployed after the hdi-content module.
Parallel Deployment
In the example above, we have also specified the global MTA parameter enable-parallel-deployments
with the value true. It activates the parallel deployment of MTA modules that do not have any deployment
order dependencies between each other. If the parameter is missing or its value is false, the module
deployment is sequential.
Circular Dependencies
Due to a modelling error, the user can introduce direct or transitive circular deployment order dependencies
between modules. Such cases are reported as a deployment error.
There are many applications that still depend on the old deployment order algorithm. To support them
until they adapt to the new modeling, the new deployment order is introduced in a backward-compatible
manner. This means that if there are no deployed-after module-level elements in the MTA descriptor and
the global MTA parameter enable-parallel-deployments is set to false or is missing, the old ordering
algorithm is enabled by default.
This section contains information about the supported MTA modules, their default parameters, properties, and
supported resource types available in the Cloud Foundry environment.
Module, resource, and dependency parameters have platform-specific semantics. To reference a parameter
value, use the placeholder notation ${<parameter>}, for example, ${default-host}.
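For example, a module property could reference such a parameter (the module name and property key are illustrative):

```yaml
modules:
  - name: ui
    type: javascript
    properties:
      # ${default-host} is resolved by the deploy service at deployment time
      APP_HOST: ${default-host}
```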
Tip
It is also possible to declare metadata for parameters and properties defined in the MTA deployment
descriptor; the mapping is based on the parameter or property keys. For example, you can specify whether a
parameter is required (optional: false) or can be modified (overwritable: true).
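A sketch of such a metadata declaration, assuming the parameters-metadata element of the MTA specification (the parameter name and value are illustrative):

```yaml
modules:
  - name: backend
    type: java
    parameters:
      some-param: initial-value
    # metadata is mapped to parameters by key
    parameters-metadata:
      some-param:
        optional: false
        overwritable: true
```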
controller-url (scope: all; read-only: yes) - The URL of the cloud controller. Default: generated, as described in the description. Examples: https://api.cf.sap.hana.ondemand.com, https://localhost:30030
Note
○ The module-level variant of the parameter has priority over the global parameter.
○ This parameter is typically used when users want to keep the routes they have mapped manually by using the cf map-route command. We discourage this approach, as manual operations could lead to inconsistent deployment results and difficult troubleshooting. We recommend that you define all routes in the deployment and/or extension descriptors, which allows for their automatic management.
org (scope: all; read-only: yes) - Name of the target organization. Default: the current name of the target organization. Examples: initial, trial
protocol (scope: all; read-only: yes) - The protocol used by the Cloud Foundry environment. Default: http or https. Examples: http, https
space (scope: all; read-only: yes) - Name of the target organizational space. Default: generated, as described in the description. Examples: initial, a007007
Related Information
You can enable transport of SAP Cloud Platform applications and application content that is available as
Multitarget Applications (MTA) using the Enhanced Change and Transport System (CTS+).
Prerequisites
● You have configured your SAP Cloud Platform subaccounts for transport with CTS+ as described in How
To... Configure SAP Cloud Platform Cloud Foundry for CTS
You use the Change and Transport System (CTS) of ABAP to transport and deploy your applications running on
SAP Cloud Platform in the form of MTAs, for example, from development to a test or production subaccount.
Proceed as follows to be able to transport an SAP Cloud Platform application:
Procedure
1. Package the application in a Multitarget Application (MTA) archive using the Archive Builder Tool as
described in Defining Multitarget Application Archives [page 1239].
2. Attach the MTA archive to a CTS+ transport request as described in How To... Configure SAP Cloud
Platform Cloud Foundry for CTS.
3. Trigger the import of an SAP Cloud Platform application as described in How To... Configure SAP Cloud
Platform Cloud Foundry for CTS.
Related Information
Resources on CTS+
Setting up a CTS+ enabled transport landscape in SAP Cloud Platform
● Use the cf dmol -i <operation-id> command to download the logs of the deployment operation with
ID <operation-id>. The operation ID can be obtained using the cf mta-ops command. The downloaded
deployment logs contain an <application-name>.log file with the logs of the application.
● Use cf logs <application-name> --recent to retrieve the recent logs of the application.
Note
Use the cf dmol -i <operation-id> command to download the deployment operation logs as
described in the Deployment Failed [page 1315] section. After downloading the deployment logs, locate the file
named MAIN_LOG, which contains the complete deployment log. You will find a detailed error message
in it.
When an error occurs during deployment, a message indicates whether the problem is in one of
the components of the Cloud Foundry platform. In such cases, an incident can be created for the
corresponding component.
Example
“Controller operation failed: 400 Bad Request: Cannot bind application to service”
For more information, you can also check the Troubleshooting [page 1314].
Use the command cf mta-ops and find the ID of the operation related to the desired deployment. The
output contains information about the status of the operation.
Use the cf mtas command and locate the ID of the desired MTA. After locating the correct MTA ID, execute
the command cf mta <located-mta-id> to get detailed information about the MTA with the provided
ID.
You can abort the currently running deployment using the command cf <operation> -i <operation-
id> -a abort (example: cf deploy -i 12353 -a abort), or you can execute the command for starting
the deployment with the option -f, as described in the deploy [page 1773] section.
Example
cf deploy <path-to-mtar>.mtar -f
The applications and content deployment order is determined based on the deployed-after module
parameter. If deployed-after is not used and parallel deployments are not enabled for the MTA, the
requires/provides module sections define the order. For more information, see Order of Deployment [page
1306].
If you want both MTA deployments to manage the lifecycle of the service, use the command-line option --
skip-ownership-validation for the deploy and bg-deploy commands. For more information, see
Multitarget Application Commands for the Cloud Foundry Environment [page 1772]
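For example, assuming an archive named my-app.mtar:

```shell
cf deploy my-app.mtar --skip-ownership-validation
```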
How can I make sure that my archive is signed correctly by SAP and its content has not been
changed?
Use the deployment option --verify-archive-signature as described in Multitarget Application
Commands for the Cloud Foundry Environment [page 1772].
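For example, assuming an archive named my-app.mtar:

```shell
cf deploy my-app.mtar --verify-archive-signature
```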
3.1.11.13 Troubleshooting
This section contains information about the following problems that may occur during the Multitarget
Application deployment:
Deployment Failed
There might be different reasons for a deployment failure. This section describes the basic steps you should
perform in order to recover from a failed deployment. To interact with the failed deployment, you can execute
the following actions:
Note
This action will not roll back the applied changes. It allows the next deployments of that MTA to
proceed without confirmation, and the current process ends without any possibility to retry it from
the failed step.
Example
cf deploy -i 12345 -a abort
● Retry – retries the last failed step(s) of a deployment with the given operation id.
Command: cf <deployment-action> -i <process-id> -a retry
Example
cf deploy -i 12345 -a retry
● Download deployment operation logs – downloads the logs for the current deployment. The logs contain
the following structure of files:
○ MAIN_LOG - contains the whole deployment log
○ <application-name>.log – there is a separate file for each application. It holds the logs related to the
application during staging and starting
Command: cf dmol -i <process-id>
Example
cf dmol -i 12345
● Download or stream a single CF application log – to debug issues relevant only to a single CF
application that is part of an MTA, the following cf primitives can be used:
○ cf logs <application-name>: streams the logs of the application as it is handled by the
platform (staging/starting/jobs and so on)
<process-id> - the process ID of the failed deployment. It can be obtained from the output of the
cf mta-ops command.
If a service operation fails, an error message with the following format will be displayed:
Usually, such errors are produced when there is a problem with the service provisioning infrastructure. To
check if there is a problem with the service, perform the following manual steps:
Note
This may result in data loss in case the backing service persists state.
If a route could not be mapped to an application and the deployment fails, the following checks can be
performed:
1. cf routes – displays all the created routes. Verify that the route does not exist.
2. cf map-route <application-name> <domain> [ADDITIONAL OPTIONS] – maps a route to the
application with the given name. If the process finishes successfully, retry the deployment operation.
3. cf unmap-route <application-name> <domain> [ADDITIONAL OPTIONS] – removes a route for
the application with the given name. If this step is successful, execute step 2.
This error can occur when the services are created and the deployment is in the phase in which the
application is being bound to the services. The error has the following format:
Usually, this error happens when the service provider for the service fails to initiate the binding. The problem
might also occur when the Cloud Foundry Cloud Controller component has internal issues. The following steps
might help to investigate the issue and eventually resolve it:
1. Retry the deployment process – for more information about actions related to the deployment, see the
Deployment Failed [page 1315] section.
2. If the deployment fails after it is retried, you can try to unbind/bind the application to the service manually,
using the commands:
1. cf bind-service <application-name> <service-name> [ADDITIONAL OPTIONS] - binds
the application with the given name to the service.
2. cf unbind-service <application-name> <service-name> - unbinds the application with the
given name from the service.
● There is a problem with the Cloud Controller. The Cloud Controller is responsible for taking the application
binaries and storing them, so that it can stage the application and start it afterwards. The status
code and the response returned from the Cloud Controller for such errors have the following format:
● The application archive is bigger than 1 GB in size. This limitation is set by the Cloud Platform and cannot
be modified.
● There is a problem with the processing of the application content by the deployer. If this is the case, then
an incident to the following component should be created:
BC-XS-SL-DS
● Retry the deployment process – for more information about actions related to the deployment, see the
Deployment Failed [page 1315] section.
● Execute cf push instead of cf deploy using only the application with the problematic content.
These errors happen when one-off tasks are defined for an application and their execution fails.
Usually, they fail because:
Usually, this error happens when there is a problem in the application code. It should be resolved by the
developer of the MTA.
In order to locate the problem in the application start-up, the logs of the application should be checked.
You can download the logs as described in the Deployment Failed [page 1315] section.
Learn more about using services in the Cloud Foundry environment, how to create (user-provided) service
instances and bind them to applications, and how to create service keys.
In the Cloud Foundry environment, you usually enable services by creating a service instance using either the
SAP Cloud Platform cockpit or the Cloud Foundry command line interface (cf CLI), and binding that instance to
your application.
In a PaaS environment, all external dependencies, such as databases, messaging systems, file systems, and
so on, are services. In the Cloud Foundry environment, services are offered in a marketplace, from which users
can create service instances on demand. A service instance is a single instantiation of a service running on
SAP Cloud Platform. Service instances are created using a specific service plan. A service plan is a
configuration variant of a service, for example offering different resource sizes.
To integrate services with applications, the service credentials must be delivered to the application. To do so,
you can bind service instances to your application to automatically deliver these credentials to your
application. Or you can use service keys to generate credentials to communicate directly with a service
instance. As shown in the figure below, you can deploy an application first and then bind it to a service instance:
Alternatively, you can also bind the service instance to your application as part of the application push via the
application manifest. For more information, see https://docs.cloudfoundry.org/devguide/deploy-apps/
manifest.html#services-block .
Note
Keep in mind that you need to create the service instance before you integrate it into your application
manifest.
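A services block in a manifest.yml could look like the following sketch (the application and service instance names are illustrative):

```yaml
applications:
  - name: my-app
    memory: 256M
    services:
      # existing service instances to bind during cf push
      - my-hdi-container
      - my-xsuaa-instance
```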
The Cloud Foundry environment also allows you to work with user-provided service instances. User-provided
service instances enable developers to use services that are not available in the marketplace with their
applications running in the Cloud Foundry environment. Once created, user-provided service instances behave
in the same manner as service instances created through the marketplace. For more information, see Creating
User-Provided Service Instances [page 1322].
For more conceptual information about using services in the Cloud Foundry environment, see https://
docs.cloudfoundry.org/devguide/services/ .
Related Information
Use the Cloud Foundry Command Line Interface to create service instances:
● Create Service Instances Using the Cloud Foundry Command Line Interface [page 1321]
You can also create service instances by declaring them as part of your multitarget application (MTA). To learn
how to do that, have a look at the service creation parameters.
Prerequisites
If you are working in an enterprise account, you need to add quotas to the services you purchased in your
subaccount before they appear in the service marketplace. Otherwise, only default free-of-charge services are
listed. Quotas are automatically assigned to the resources available in trial accounts.
For more information, see Configure Entitlements and Quotas for Subaccounts [page 1756].
Procedure
For more information, see Navigate to Orgs and Spaces [page 1751].
You can use the Cloud Foundry Command Line Interface (cf CLI) to create service instances.
Prerequisites
Procedure
1. (Optional) Open a command line and enter the following string to list all services and service plans that are
available in your org:
cf marketplace
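The instance itself is then created with the cf create-service command, passing a service, a plan, and an instance name of your choice. For example (service, plan, and instance name are illustrative):

```shell
# create a service instance from a marketplace service and plan
cf create-service hana hdi-shared my-hdi-container
```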
Related Information
User-provided service instances enable you to use services that are not available in the marketplace with your
applications running in the Cloud Foundry environment.
You can create user-provided service instances using the Cloud Foundry Command Line Interface:
● Create User-Provided Service Instances Using the Cloud Foundry Command Line Interface [page 1323]
Use the cockpit to create user-provided service instances and bind them to applications in the Cloud Foundry
environment.
Prerequisites
Obtain what the application requires to access the service that is not available in the marketplace via the
network, for example a URL and a port. Also obtain the credentials required to authenticate the application
with the service, for example a user name and a password.
Procedure
1. Navigate to the space in which you want to create a user-provided service instance. For more information,
see Navigate to Orgs and Spaces [page 1751].
Next Steps
To bind your application to the user-provided service instance, follow the steps described in Bind Service
Instances to Applications Using the Cockpit [page 1324].
Use the Cloud Foundry Command Line Interface to make a user-provided service instance available to Cloud
Foundry applications.
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
● Obtain a URL, port, user name, and password for communicating with a service that is not available in the
marketplace.
Context
Procedure
Open a command line and enter the following string to create a user-provided service instance:
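The command follows the cf create-user-provided-service syntax, where -p passes the credentials as JSON (the instance name and credential values are illustrative):

```shell
cf create-user-provided-service my-ups \
  -p '{"url":"https://example.com:443","user":"myuser","password":"secret"}'
```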
Next Steps
To bind your application to the user-provided service instance, follow the steps described in Bind Service
Instances to Applications Using the Cloud Foundry Command Line Interface [page 1325].
Related Information
Use the Cloud Foundry Command Line Interface to bind service instances to applications:
● Bind Service Instances to Applications Using the Cloud Foundry Command Line Interface [page 1325]
You can also bind service instances by declaring them as part of your multitarget application (MTA). To learn
how to do that, have a look at the service binding parameters.
You can bind service instances to applications both at the application view, and at the service-instance view in
the cockpit.
Prerequisites
● Deploy an application in the same space in which you plan to create the service instance. For more
information, see Deploy Business Applications in the Cloud Foundry Environment [page 1075].
● Create a service instance. For more information, see Create Service Instances Using the Cockpit [page
1320].
Procedure
1. Navigate to the space in which you deployed your application and created the service instance. For more
information, see Navigate to Orgs and Spaces [page 1751].
2. Choose one of the following options:
Application 1. In the navigation area, choose Applications, then select the relevant applica
tion.
2. In the navigation area, choose Service Bindings.
3. Choose Bind Service.
4. Choose a service type, then choose Next.
5. Choose a service, then choose Next.
6. To create a new instance of the service, choose Create new instance and fol
low the steps required for creating a new instance. To reuse an existing in
stance, choose Re-use existing instance. Then choose Next
7. Choose Finish to save your changes.
You can bind service instances to applications using the Cloud Foundry Command Line Interface (cf CLI).
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
● Deploy an application in the same space in which you plan to create the service instance. For more
information, see Deploy Business Applications in the Cloud Foundry Environment [page 1075].
● Create a service instance. For more information, see Create Service Instances Using the Cloud Foundry
Command Line Interface [page 1321].
Procedure
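The binding is done with the cf bind-service command, followed by a restage so that the application picks up the new credentials (the application and instance names are illustrative):

```shell
cf bind-service my-app my-hdi-container
# restage the application so the binding takes effect
cf restage my-app
```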
Related Information
You can use service keys to generate credentials to communicate directly with a service instance. Once you
configure them for your service, local clients, apps in other spaces, or entities outside your deployment can
access your service with these keys.
You can use the Cloud Foundry Command Line Interface to create service keys:
● Create Service Keys Using the Cloud Foundry Command Line Interface [page 1327]
Prerequisites
Create a service instance. For more information, see Create Service Instances Using the Cockpit [page 1320].
Procedure
1. Navigate to the space in which you've created a service instance for which you want to create a service key.
For more information, see Navigate to Orgs and Spaces [page 1751].
Results
Local clients, apps in other spaces, or entities outside your deployment can now access your service instance
with this key.
Use the Cloud Foundry Command Line Interface to create a service key.
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
● Create a service instance. For more information, see Create Service Instances Using the Cloud Foundry
Command Line Interface [page 1321].
Procedure
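The key is created and displayed with the following commands (the instance and key names are illustrative):

```shell
cf create-service-key my-hdi-container my-key
# display the generated credentials
cf service-key my-hdi-container my-key
```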
Local clients, apps in other spaces, or entities outside your deployment can now access your service instance
with this key.
Related Information
Use the Cloud Foundry Command Line Interface to delete service instances:
Caution
Be aware that instances are deleted permanently. There is no way to revoke this step.
● Delete Service Instances Using the Cloud Foundry Command Line Interface [page 1329]
You can also delete service instances using the Multitarget Application plug-in for the Cloud Foundry command
line interface. This works with the commands deploy, bg-deploy, and undeploy when you specify the --delete-
services option. To learn how to do that, have a look at the multitarget application commands.
● Multitarget Application Commands for the Cloud Foundry Environment [page 1772]
Procedure
For more information, see Navigate to Orgs and Spaces [page 1751].
If your service instance is bound to an application, this step also removes the binding.
You can use the Cloud Foundry Command Line Interface (cf CLI) to delete service instances.
Prerequisites
Procedure
cf l -a <API endpoint>
cf services
cf delete-service SERVICE_INSTANCE
Related Information
Use the Cloud Foundry Command Line Interface to update service instances:
● Update Service Instances Using the Cloud Foundry Command Line Interface [page 1330]
You can also update service instances inside a multitarget application if the service broker supports updates.
Change the deployment descriptor file or a configuration file for a service instance inside your multitarget
application and deploy your application to trigger an update of the respective service instance.
You can use the Cloud Foundry Command Line Interface (cf CLI) to update service instances.
Prerequisites
Context
You are using a service instance, for which you want to change the plan or the service-specific configuration.
Procedure
1. (Optional) Open a command line and enter the following string to list all services in your space:
cf services
2. (Optional) Enter the following string to list the details of your service:
cf service SERVICE_INSTANCE
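The update itself is done with the cf update-service command, where -p changes the service plan and -c passes service-specific configuration as JSON (the names and values are illustrative):

```shell
cf update-service my-hdi-container -p new-plan
cf update-service my-hdi-container -c '{"some-setting":"new-value"}'
```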
3.1.12.8 Recipes
A recipe is a set of guided interactive steps which enable you to select, configure and consume services on SAP
Cloud Platform to achieve a specific technical goal.
You can access them directly from your desired global account in the SAP Cloud Platform cockpit, by choosing
Recipes in the navigation menu. This leads to a page where you can find an overview of all available recipes,
grouped by capability. From this overview page you can get quick information about a recipe, start a recipe or
choose a tile to access the recipe details.
Recipe Details
● Overview
Here you can get information about what the recipe does, its key features and how that particular recipe
can help you.
● Components
Here you can see all the components which are required for the recipe to run.
● Additional Resources
Here you find a list of additional information resources where you can learn more about the concepts and
components mentioned in the recipe.
Recipes automate processes and always achieve a technical goal, often in the form of an artifact. Artifacts are
entities that you develop which may consume technical components (for example, services). Examples of
artifacts include applications, content for integration and workflows, or even documents.
When you start a recipe, a wizard opens up which guides you through a set of steps. Following these steps
enables you to reach the end result described in the recipe overview.
Related Information
Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions]
Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions]
Create Spaces [page 1754]
Use Core Data & Services (CDS) to build data models and service definitions on a conceptual level. These CDS
models are used as inputs for the data, service, and UI layers. They're then translated to native artifacts, for
example SQL database schemas, and interpreted to automatically serve requests at runtime.
In summary, CDS is used as a business level data definition source, and to generate the artifacts at the
persistence layer. It’s used to define visual aspects relating to the data, with those definitions (annotations)
defining the UI layer. And it's used to generate the application service layer.
We provide seamless integration with the Cloud Foundry environment on SAP Cloud Platform. This makes it
easier for you to deploy your application and consume platform services.
The programming model is compatible with any development environment, but we recommend using SAP Web
IDE Full-Stack.
The following graphic shows the relationship between the application programming model, SAP Cloud
Platform, platform services, and development tools:
Core Data and Services (CDS) Language Reference Documentation [page 1333]
CDS is the backbone of the SAP Cloud Application Programming Model. It provides the means to
declaratively capture service definitions and data models, queries, and expressions in plain
(JavaScript) object notations. CDS provides features to parse models from a variety of source languages
and to compile them into various target languages.
Related Information
Core Data and Services (CDS) Language Reference Documentation [page 1333]
Developing Business Applications Using Java [page 1334]
Developing Business Applications Using Node.js [page 1335]
Best Practices [page 1335]
References [page 1336]
CDS is the backbone of the SAP Cloud Application Programming Model. It provides the means to declaratively
capture service definitions and data models, queries, and expressions in plain (JavaScript) object notations.
CDS provides features to parse models from a variety of source languages and to compile them into various target languages.
CDS models are plain JavaScript objects complying with the Core Schema Notation (CSN), an open specification
derived from JSON Schema. You can easily create or interpret these models, which fosters extensions by third-
party contributions. Models are processed dynamically at runtime and can also be created dynamically.
● Definition Language (CDL) - A reference and overview of all CDS concepts and features with compact
examples written in CDS’ definition language.
● Schema Notation (CSN) - Specification of CSN, CDS’ canonical format for representing CDS models as
plain JavaScript objects, similar to JSON Schema.
● Query Language (CQL) - Documents the CDS Query Language (aka CQL) which is an extension of the
standard SQL SELECT statement.
● Query Notation (CQN) - Specification of the Core Query Notation (CQN) format that is used to capture
queries as plain JavaScript objects.
● Core Expression Notation - Specification of the Core Expression Notation (CXN) used to capture
expressions as plain JavaScript objects.
● Common Types & Aspects - Introduces @sap/cds/common a prebuilt CDS model shipped with
@sap/cds that provides common types and aspects.
Related Information
The CAP Java SDK enables developing SAP Cloud Application Programming Model (CAP) applications in Java.
While the SAP Web IDE provides excellent support to develop CAP Java applications, you can also develop
locally with your tool of choice, for example Eclipse.
The CAP Java SDK supports lean application design through its modular architecture: you pick the
required features and add them to your application dependencies on demand.
It enables local development by supporting in-memory or file-based SQLite databases. At the same time, the
CAP Java SDK lets you switch to a productive environment, for example with SAP HANA as the database,
simply by changing the application deployment configuration.
The following sections describe how to set up a development environment to get you started.
● Overview
● Using Local Development
● Using Eclipse
● Starting a New Project
● Working in Eclipse
● Java Project Structure
Parent topic: Working with the SAP Cloud Application Programming Model [page 1332]
Related Information
Core Data and Services (CDS) Language Reference Documentation [page 1333]
Developing Business Applications Using Node.js [page 1335]
Best Practices [page 1335]
References [page 1336]
Develop a sample business service using Core Data &amp; Services (CDS), Node.js, and SQLite with the SAP
Cloud Application Programming Model (CAP). Develop in your local environment and deploy to the Cloud.
The following sections describe how to set up a development environment to get you started.
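As a sketch of what such a business service can look like, the following CDS source defines a simple entity and exposes it through a service. The names and the file layout are illustrative, not prescribed:

```
// db/schema.cds -- the domain model
namespace my.bookshop;

entity Books {
  key ID    : Integer;
      title : String;
      stock : Integer;
}

// srv/cat-service.cds -- the service exposing the entity
using my.bookshop as my from '../db/schema';

service CatalogService {
  entity Books as projection on my.Books;
}
```

The CAP Node.js tooling can serve such a model locally against an SQLite database and deploy the same model to the Cloud.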
Parent topic: Working with the SAP Cloud Application Programming Model [page 1332]
Related Information
Core Data and Services (CDS) Language Reference Documentation [page 1333]
Developing Business Applications Using Java [page 1334]
Best Practices [page 1335]
References [page 1336]
To help you create concise and comprehensible models with CDS, we have put together a list of best practices.
● Domain Modelling - An introduction to the basics of domain modelling with CDS,
complemented with recommended best practices.
● Defining Services - A guide on how to define, implement, deploy, and publish services to be consumed from
other applications, services, and UIs.
● Generic Providers - How the CAP runtimes serve many requests out of the box through generic handlers.
● Adding Custom Logic - How to add custom event handlers that implement your domain-specific logic.
● Consuming Services - This guide is about consuming services in general, including Local Services,
External Services and Databases.
● Authorization - About restricting access to data by adding respective declarations to CDS models,
which are then enforced in service implementations.
Parent topic: Working with the SAP Cloud Application Programming Model [page 1332]
Related Information
Core Data and Services (CDS) Language Reference Documentation [page 1333]
Developing Business Applications Using Java [page 1334]
Developing Business Applications Using Node.js [page 1335]
References [page 1336]
3.1.13.5 References
Parent topic: Working with the SAP Cloud Application Programming Model [page 1332]
Related Information
Core Data and Services (CDS) Language Reference Documentation [page 1333]
Developing Business Applications Using Java [page 1334]
Developing Business Applications Using Node.js [page 1335]
Best Practices [page 1335]
Get started with samples and reusable packages created based on the SAP Cloud Application Programming Model.
The SAP Cloud Application Programming Model enables you to quickly create business applications by
allowing you to focus on your domain logic. It offers a consistent end-to-end programming model that includes
languages, libraries, and APIs tailored for full-stack development on SAP Cloud Platform. The samples
provided can be run in a local setup on SQLite database.
Learn how to develop business applications using the SAP Cloud Application Programming Model.
● Learn how to reuse a CDS model: Build a Business App by Reusing a CDS Model .
● Learn how to create a business service with CDS using Node.js and deploy it to SAP Cloud Platform: Create
a Business Service with Node.js using Visual Studio Code .
● Learn how to use the application programming model and SAP Cloud SDK to extend SAP S/4HANA: Use
CAP and SAP Cloud SDK to Extend S/4HANA
● Find more tutorials using the SAP Cloud Application Programming Model in the Tutorial Navigator .
3.1.13.5.3 Troubleshooting
If you run into issues working with the SAP Cloud Application Programming Model samples or tutorials, refer
to the Troubleshooting guide, which contains a collection of frequently asked questions and their solutions.
3.1.13.5.4 Blogs
Learn more about the SAP Cloud Application Programming Model by visiting SAP Community.
Overview
The ABAP environment is a platform as a service that allows you to extend existing ABAP-based applications
and develop ABAP cloud apps decoupled from the digital core. You can leverage your ABAP know-how in the
cloud and reuse existing ABAP assets by writing your source code with ABAP Development Tools for Eclipse.
Development Resources
Related Information
This guide describes the functionality provided by the ABAP Development Tools (ADT) and how to use it.
It focuses on use cases for creating, editing, testing, debugging, and profiling development objects.
The ABAP RESTful programming model supports the development of all types of Fiori applications as well as
publishing Web APIs.
This guide describes the basic idea of managing SAP HANA procedures and their lifecycle inside the ABAP
server. To allow native consumption of SAP HANA features from within the ABAP layer, the SAP HANA
database procedure language SQLScript has been integrated into the ABAP stack.
The ABAP keyword documentation describes the syntax and meaning of the keywords of the ABAP language
and its object-oriented part – ABAP objects.
Context
The ABAP keyword documentation provides you with context-sensitive information for your ABAP source code.
To access the ABAP language help from the source code editor, position your cursor on an ABAP statement for
which you need help, and press F1. The language help is displayed in a separate window.
To view the entire ABAP keyword documentation, see ABAP - Keyword Documentation.
The ABAP RESTful programming model defines the architecture for efficient end-to-end development of
intrinsically SAP HANA-optimized OData services (such as Fiori apps) in the ABAP environment. It supports
the development of all types of Fiori applications as well as publishing Web APIs. It is based on technologies
and frameworks such as Core Data Services (CDS) for defining semantically rich data models and a service
model infrastructure based on OData.
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/3e0c82dea2654d80a31a3852d3381b10.html#loio3e0c82dea2654d80a31a3852d3381b10__SAP_Fiori_UI
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/3e0c82dea2654d80a31a3852d3381b10.html#loio3e0c82dea2654d80a31a3852d3381b10__Web_API
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/853fc213a8424f3f88c4cd623af4e367.html
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/ad3e961b6313465dbcf71f653ae52b56.html
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/6e7a10d30b74412a9482a80b0b88e005.html
● https://help.sap.com/viewer/923180ddb98240829d935862025004d6/Cloud/en-US/a3ff9dcdb25a4f1a9408422b8ba5fa00.html
Prerequisites
● You have downloaded and installed the front-end components of ABAP Development Tools (ADT). See
Video Tutorial: Configure Developer Tools.
● You are a member of the relevant space in the Cloud Foundry environment. See Add Space Members Using
the Cockpit [page 1765].
● You are assigned the developer role or you have a service key in JSON format. See Define a Developer Role
[page 1052], Assigning the ABAP Developer User to the ABAP Developer Role, and Creating Service Keys
[page 1326].
Procedure
1. Open Eclipse and select File &gt; New &gt; Project... from the menu.
A Select a wizard dialog box opens.
2. Open the ABAP folder, choose ABAP Cloud Project, and select Next.
3. To establish a service instance connection, you have the following options:
a. [Default] Service key provided by Cloud Foundry environment:
In the New ABAP Cloud Project wizard, on the System Connection to SAP Cloud Platform ABAP
Environment page, select the SAP Cloud Platform Cloud Foundry Environment radio button, and choose
Next.
On the Connection Settings page, select Europe (Frankfurt) as your region, enter your email address
and password, and confirm with Next.
b. Service key:
In the New ABAP Cloud Project wizard, on the System Connection to SAP Cloud Platform ABAP
Environment page, select the Service Key radio button, and choose Next.
On the System Connection Using a Service Key page, paste your existing service key from the clipboard
into the Service Key in JSON Format text box, or choose Import... to import a text file containing your
service key.
4. Select Next.
5. To log on to the service instance on the System Connection Using a Service Key page, you have the
following options:
a. Using the integrated browser: Enter your email address and password.
Note
Authentication is performed in the integrated browser. Single Sign-On is not supported. Tools such
as password managers are not supported for this logon option.
b. Using the default browser: Select the Log on with Browser button.
c. Using another browser: Choose the Copy Logon URL button.
6. On the Service Instance Connection page, the following connection settings are displayed:
○ Service Instance URL: URL of the server instance where the ABAP system is operated
○ Email: ID of the user who is authorized in the configured identity provider for accessing the ABAP
system
○ User ID: ID of the user who is assigned to the email
○ SAP System ID: Name of the ABAP system
○ Client: ID of the logon client
○ Language: Abbreviation of the logon language
7. Select Next.
The Project Name and Favorite Packages page is opened.
8. [Optional:] If you want to change the name of the project, enter a new name in the Project Name field.
Note
When you create the project, the ZLOCAL ABAP package is added by default to your Favorite Packages.
This ABAP package including all subpackages contains all local objects from every user of the ABAP
system.
To add further ABAP packages to your Favorite Packages, choose Add... and enter the name of the
package in the corresponding input field. Note that this package must be available in the system.
Results
You have created an ABAP cloud project that is added to the Project Explorer.
Note
To verify your result, expand the first level of the project structure. Make sure that the following nodes are
included:
● Favorite Packages: Contains the local packages and the packages that you have added to your
Favorites.
● Released Objects: Contains all released SAP objects that are available to (re)use.
Related Information
Tutorial: Create Console Application with SAP Cloud Platform ABAP Environment
Video Tutorial: Create Application
ABAP Cloud Projects
Prerequisites
● You have created an ABAP service instance. See Creating the Service Instance for the ABAP Environment.
● You have created a destination service instance in the same subaccount. See Creating a Destination
Service Instance (Optional).
● You have created a service key. See Creating a Service Key for the Destination Service Instance (Optional).
Procedure
1. To set up the destination service in the SAP Cloud Platform Cockpit, navigate to Global Accounts.
2. Select your global account and your subaccount.
3. In the menu, go to Spaces and select the space that contains the ABAP service instance.
4. Expand Services and select Service Instances.
5. To open the administration launchpad, you have the following options:
a. In the Actions column, choose the Open Dashboard icon.
The administration launchpad is opened.
b. In the Name column, choose your ABAP service instance and select the Open Dashboard button.
The administration launchpad is opened.
6. If necessary, provide your logon credentials to access the administration launchpad.
7. In the Communication Management section, select the Communication Arrangements tile.
8. On the Communication Arrangements page, select New.
9. In the New Communication Arrangement dialog, use the value help to select scenario SAP_COM_0276 and
give the arrangement a meaningful name (e.g. the name of the destination service instance).
10. Enter the service key of your destination service instance and select Create.
Results
You have set up the integration between your ABAP service instance and your destination service instance.
Prerequisites
● You have set up the communication arrangement for scenario SAP_COM_0267. See Creating a
Communication Arrangement for the Destination Service Instance in the ABAP Environment (Optional).
● You have an existing destination in your destination service instance.
● You have created an HTTP service or OData service. See Tutorial: Create an HTTP Service , Creating an
OData Service and Video Tutorial: Create OData Service .
Procedure
1. In Eclipse, navigate either to your HTTP service or your OData service implementation.
2. Create a destination object using class cl_http_destination_provider and method
create_by_cloud_destination with the following parameters:
○ i_service_instance_name: the value of the service instance name property of the communication
arrangement
○ i_name: the name of the destination
○ i_authn_mode: if_a4c_cp_service=>service_specific
Note
This parameter is used to call the destination service with OAuth2 client credential grant.
Prerequisites
● You have set up the communication arrangement for scenario SAP_COM_0267. See Creating a
Communication Arrangement for the Destination Service Instance in the ABAP Environment (Optional).
Procedure
1. In Eclipse, navigate either to your HTTP service or your OData service implementation.
2. Create a destination object using class cl_http_destination_provider and method
create_by_cloud_destination with the following parameters:
○ i_service_instance_name: the value of the service instance name property of the communication
arrangement
○ i_name: the name of the destination
○ [Optional] i_authn_mode: if_a4c_cp_service=>user_propagation
Note
This parameter is used to call the destination service with OAuth 2.0 SAML Bearer Assertion
Flow. By default, it is set to user_propagation.
Learn how to quickly create a communication user and communication arrangement for an inbound
communication scenario by using the basic service key.
Prerequisites
Procedure
1. Log on to the cockpit and go to the subaccount that contains the space you'd like to navigate to. See
Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
{
"scenario_id":"SAP_COM_XYZ",
"type":"basic"
}
Note
basic is the type of service key that is needed to generate a communication user and communication
arrangement for an inbound communication scenario.
Results
The communication user is generated and the credentials, depending on the selected type, are returned in the
service key. The communication user receives authorizations for all services included in the communication
scenario.
Use the HTTP client to enable HTTP communication from the SAP Cloud Platform ABAP environment.
Context
The HTTP client, which is whitelisted for the SAP Cloud Platform ABAP environment, is a wrapper of the well-
known (but not whitelisted) ABAP HTTP client.
It is used for communication with S/4HANA Cloud, on-premise systems, or any other HTTP service exposed to
the Internet.
The HTTP client provides integration with, for example, the Destination service residing in the SAP Cloud
Platform Cloud Foundry environment.
Sample Code
DATA(lo_url_destination)   = cl_http_destination_provider=>create_by_url( 'https://foo.bar' ).

DATA(lo_cloud_destination) = cl_http_destination_provider=>create_by_cloud_destination(
                               i_name                  = 'S4Demo'
                               i_service_instance_name = 'OutboundCommunication' ).
Use
The actual processing of an HTTP request and its response is shown in the following code example, where an
S/4HANA OData API is called:
Sample Code
TRY.
    " Create HTTP Destination by URL
    lo_http_destination =
      cl_http_destination_provider=>create_by_url(
        'https://my300098-api.s4hana.ondemand.com/sap/opu/odata/sap/API_BUSINESS_PARTNER' ). "/A_BusinessPartner' ).
Note
Instead of providing a hard-coded password, you can also use the Destination service to retrieve the
required information (recommended).
Related Information
With the ABAP environment, you can develop HTTP services in your ABAP Development Tools (ADT) in Eclipse.
For the implementation of your HTTP service, we provide the interface IF_HTTP_SERVICE_EXTENSION with
HTTP request/response parameters, giving you the full flexibility to build an HTTP service of your choice. See
Working with the HTTP Service Editor and Tutorial: Create an HTTP Service .
Related Information
abapGit is an open-source Git client for ABAP. In the ABAP environment, it is used to import existing code into
your cloud system.
Prerequisites
● You have signed up for a Git account of your choice, for example GitHub.
● You have access to an on-premise system with the required root CA of the Git server (STRUST).
● You have downloaded and installed the front-end components of ABAP Development Tools (ADT). See
Video Tutorial: Configure Developer Tools .
Restriction
Please be aware that abapGit is an open-source project owned by the community. Therefore, we do not
provide support for abapGit. We only support the Git integration in the ABAP environment.
Procedure
5. Log on to an on-premise system of your choice, create a new program, and paste the content from your
clipboard.
Note
abapGit is installed and launched. See also Video Tutorial: abapGit Installation .
7. Log on to ABAP Development Tools in Eclipse.
Next Steps
Create Content in an On-Premise System and Push it to abapGit Repository [page 1351]
Prerequisites
You have installed and set up abapGit. See Install and Set Up abapGit [page 1350].
Restriction
Please be aware that abapGit is an open-source project owned by the community. Therefore, we do not
provide support for abapGit. We only support the Git integration in the ABAP environment.
Procedure
1. After installing and launching abapGit, select Clone or download, and copy the URL of your repository.
2. Call up transaction ZABAPGIT, and select + Online.
3. Paste the URL of your repository.
4. Select Create package.
5. Add a package name and short description, and select Continue.
6. Confirm with OK.
Next Steps
Import Content from abapGit Repository into the ABAP Environment [page 1352]
Learn how to import content from your abapGit repository into your ABAP environment, and transfer it across
multiple instances.
Prerequisites
● You have installed the abapGit repositories ADT plug-in. See https://eclipse.abapgit.org/updatesite/ .
● You have created content in your on-premise system and pushed it to your abapGit repository. See Create
Content in an On-Premise System and Push it to abapGit Repository [page 1351].
● You have access to an ABAP cloud system. See Create an ABAP System [page 1050].
● You have defined a developer role and assigned a developer user in the ABAP environment. See Define a
Developer Role [page 1052] and Assigning the ABAP Developer User to the ABAP Developer Role.
Restriction
Please be aware that abapGit is an open-source project owned by the community. Therefore, we do not
provide support for abapGit. We only support the abapGit integration in the ABAP environment.
2. In the Project Explorer, select your cloud project system, and navigate to Window &gt; Show View &gt;
Other... to open the abapGit Repositories view.
3. Search for ABAP, choose abapGit Repositories, and select Open.
4. In the abapGit repositories view, select the clone button (green + icon).
5. Enter your abapGit repository URL, and select Next.
6. Select a Branch and Package, where you want your abapGit repository to be cloned, and confirm with Next.
Note
If there are no packages, you have to create a structure package and add a development package.
Note
The number of imported objects can differ from the number of exported objects because only released
ABAP object types are considered during the import.
The following table contains all the released ABAP object types that can be imported into your ABAP
environment tenant.
CLAS Class
DEVC Package
DOMA Domain
INTF Interface
XSLT Transformation
To enable communication from your ABAP environment to your on-premise systems using Remote Function
Calls (RFC) and HTTP calls, you need to enable Cloud Connector in your ABAP environment. To do so, you first
have to create a communication arrangement for communication scenario SAP_COM_0200 and set up the
Cloud Connector in your on-premise network. See Create a Communication Arrangement for Cloud Connector
Integration [page 1356].
The purpose of communication scenario SAP_COM_0200 is to associate the ABAP environment tenant with the
Neo subaccount that manages the Cloud Connector connection.
Note
Cloud Connector has to be connected to the Neo subaccount used in the communication arrangement for
communication scenario SAP_COM_0200.
You also have to set up a communication arrangement for communication scenario SAP_COM_0276 that
connects your ABAP environment tenant with an instance of the Cloud Foundry destination service. See
Creating a Communication Arrangement for the Destination Service Instance in the ABAP Environment
(Optional).
For communication with on-premise systems, you have to configure the destinations in the Cloud Foundry
destination service instance that are used in the communication arrangement for communication scenario
SAP_COM_0276.
After you have completed this setup, a connection from the ABAP environment tenant to an on-premise
system is established in the following order:
1. The ABAP environment tenant requests the Neo environment to open the tunnel connection.
2. SAP Cloud Platform Connectivity tells Cloud Connector to open the connection to this specific ABAP
environment tenant using the admin connection.
3. Cloud Connector opens the secure tunnel connection to the ABAP environment tenant using its public
tenant URL.
4. After the tunnel is established, it can be used for actual data connection using RFC and HTTP(S) protocols.
For more information about Cloud Connector, see SAP Cloud Platform Connectivity: Cloud Connector.
Note
The host name of Cloud Connector is not needed because Cloud Connector itself opens the connection to
the Cloud; it is never connected to from the Cloud.
To set up your actual data connections between your ABAP environment and on-premise systems, you have to
configure RFC and HTTP destinations. See Setting Up Destinations to Enable On-Premise Connectivity [page
1358].
Learn how to create a communication arrangement for communication scenario SAP_COM_0200 to integrate
the Cloud Connector.
Prerequisites
To set up your ABAP environment, you have to create a communication arrangement for communication
scenario SAP_COM_0200 - Cloud Connector Integration, that requires the following:
● An administrative user in the ABAP environment tenant. See Assigning the ABAP Environment
Administrator Role to the New Administrator User.
● Increased quota for the ABAP runtime. See Increasing the Quota for the ABAP Runtime.
● An ABAP service instance set up in Cloud Foundry. See Creating the Service Instance for the ABAP
Environment.
● Your SAP Cloud Platform Neo subaccount name. See Create a Subaccount in the Neo Environment [page
1892].
● The name of the region host of your Neo subaccount
Note
Please check out the supported Neo subaccounts for on-premise connectivity of the ABAP
environment. See SAP Note 2765161 .
● Installation of Cloud Connector version 2.11 or higher. See Cloud Connector: Installation
● Initial configuration for Cloud Connector and the Neo subaccount. See Cloud Connector: Initial
Configuration
Note
The Cloud Connector is connected to the Neo subaccount and is displayed in the Cloud Cockpit,
section Connectivity under Cloud Connectors.
● User credentials for the Cloud Connector admin role in the Neo subaccount
Procedure
1. To create a communication system that represents your Neo subaccount, open the Communication
Systems app from SAP Fiori Launchpad and select New.
2. Provide a meaningful system name and choose Create to proceed.
Make sure the Use Cloud Connector checkbox is not checked.
For more information on how to create communication systems, see How to Create Communication
Systems [page 1865].
Under Additional Properties, in the Account Name field, enter the name of your Neo subaccount.
For more information on how to create a communication arrangement, see How to Create a
Communication Arrangement [page 1863].
Results
You have associated the ABAP environment tenant with the Neo subaccount. This enables the ABAP
environment tenant to request the Neo environment to open a tunnel connection.
Next Steps
After completing the setup of communication scenario SAP_COM_0200, you are ready to set up your actual
data connections between your ABAP environment and on-premise systems. This requires the configuration of
HTTP and RFC destinations. See Setting Up Destinations to Enable On-Premise Connectivity [page 1358].
Create an HTTP and an RFC destination to enable communication from the ABAP environment to your on-premise
systems.
Prerequisites
● You have assigned the administrator role to the administrator user in the ABAP environment. See
Assigning the ABAP Environment Administrator Role to the New Administrator User.
● You have set up a destination service instance in Cloud Foundry. See Creating a Destination Service
Instance (Optional).
● You have created a communication arrangement for communication scenario SAP_COM_0200 to associate
the ABAP environment tenant with your Neo subaccount. See Create a Communication Arrangement for
Cloud Connector Integration [page 1356].
● You have configured the destination service instance in the ABAP service instance via communication
scenario SAP_COM_0276. See Creating a Communication Arrangement for the Destination Service
Instance in the ABAP Environment (Optional).
● You have defined a developer role and assigned a developer user in the ABAP environment. See Define a
Developer Role [page 1052] and Assigning the ABAP Developer User to the ABAP Developer Role.
● You have downloaded and installed the front-end components of ABAP Development Tools (ADT) version
2.96 or higher. See Video Tutorial: Configure Developer Tools .
● You have created an ABAP cloud project with ADT to connect to the ABAP system in the ABAP
environment. See Connect to the ABAP System [page 1341].
● If you use more than one Cloud Connector in your subaccount, you have assigned a location ID to each of
these Cloud Connectors. See Managing Subaccounts [page 392] (section Procedure, step 4).
Procedure
To enable on-premise connectivity, you must set up an HTTP destination as well as an RFC destination for the
destination service instance you created before using communication scenario SAP_COM_0276:
Set up on-premise HTTP connectivity for the SAP Cloud Platform ABAP environment by configuring an HTTP
destination of proxy type OnPremise.
To configure an HTTP destination in the SAP Cloud Platform cockpit, perform the following steps:
1. Navigate to the destination service instance that you have previously configured in the system using
communication scenario SAP_COM_0276.
2. In the menu, navigate to Destinations.
3. Select New Destination.
4. In the Destination Configuration section, use the value help to select HTTP as Type.
5. (Optional) If you are using more than one Cloud Connector in your subaccount, you must enter the
Location ID of the target Cloud Connector.
See also Managing Subaccounts [page 392] (section Procedure, step 4).
6. For Proxy Type, select OnPremise from the value help.
7. For Authentication, select BasicAuthentication.
8. Fill in the required fields and select Save.
9. Open Eclipse, and create and execute a runnable class.
10. To enable the HTTP communication, use, for example, the following API.
Sample Code
DATA(lo_destination) = cl_http_destination_provider=>create_by_cloud_destination(
                         i_service_instance_name = 'OnPrem'
                         i_name                  = 'ERP_HTTP'
                         i_authn_mode            = if_a4c_cp_service=>service_specific ).

DATA(lo_client)   = cl_web_http_client_manager=>create_by_http_destination( lo_destination ).
DATA(lo_request)  = lo_client->get_http_request( ).
DATA(lo_response) = lo_client->execute( i_method = if_web_http_client=>get ).

out->write( lo_response->get_text( ) ).
Note
i_name is the name of the destination that you have configured in the previous steps.
Note
Make sure that the called remote function module is exposed in Cloud Connector. See Configure Access
Control (HTTP) [page 425].
Set up on-premise RFC connectivity for the SAP Cloud Platform ABAP environment by configuring a
destination of type RFC.
To configure an RFC destination in the SAP Cloud Platform cockpit, perform the following steps:
1. Navigate to the destination service instance that you have previously configured in the system using
communication scenario SAP_COM_0276.
2. In the menu, navigate to Destinations.
3. Create a destination by selecting New Destination.
4. For Type, select RFC from the value help.
5. (Optional) If you are using more than one Cloud Connector in your subaccount, you must enter the
Location ID of the target Cloud Connector.
See also Managing Subaccounts [page 392] (section Procedure, step 4).
6. Set a user name and credentials for the destination.
7. To configure the RFC destination, choose either of the following options:
○ For a destination that uses load balancing (system ID and message server), proceed as follows:
1. Select New Property, choose jco.client.r3name from the value help and enter the three-letter
system ID of your backend system (as configured in Cloud Connector) in the property field.
2. Create another property, select jco.client.mshost, and enter the message server host (as
configured in Cloud Connector) in the property field.
3. Add another property, choose jco.client.group, and enter a logon group in the property field.
4. Create another property, select jco.client.client, and enter the three-digit ABAP client
number.
○ For a destination without load balancing (application server and instance number), perform these
steps:
1. Select New Property, choose jco.client.ashost from the value help and enter the application
server name of your backend system (as configured in the Cloud Connector) in the property field.
2. Add another property, choose jco.client.sysnr, and enter 00, the instance number of the
application server (as configured in Cloud Connector) in the property field.
3. Create another property, select jco.client.client, and enter the three-digit ABAP client
number.
8. Select Save.
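Put together, a load-balancing RFC destination configured this way would contain properties along the following lines. All values here are placeholders for your own landscape, not real systems:

```
Name=ERP_RFC
Type=RFC
ProxyType=OnPremise
jco.client.r3name=ERP        # three-letter system ID, as configured in Cloud Connector
jco.client.mshost=mymsghost  # message server host, as configured in Cloud Connector
jco.client.group=PUBLIC      # logon group
jco.client.client=100        # three-digit ABAP client
```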
Sample Code
DATA(lo_destination) = cl_rfc_destination_provider=>create_by_cloud_destination(
                         i_service_instance_name = 'OnPrem'
                         i_name                  = 'ERP_RFC' ).
Note
i_name is the name of the destination that you have configured in the previous steps.
Note
Make sure that the called remote function module is exposed in Cloud Connector. See Configure
Access Control (RFC) [page 432].
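Once saved, the destination can be consumed from ABAP code. The following sketch extends the sample above with a remote function call; the function module RFC_SYSTEM_INFO and the variable names are illustrative only:

```abap
" Sketch only: RFC_SYSTEM_INFO and the variable names are illustrative.
TRY.
    DATA(lo_destination) =
      cl_rfc_destination_provider=>create_by_cloud_destination(
        i_service_instance_name = 'OnPrem'
        i_name                  = 'ERP_RFC' ).
    " The runtime destination name is passed to the RFC call.
    DATA(lv_destination) = lo_destination->get_destination_name( ).
    CALL FUNCTION 'RFC_SYSTEM_INFO'
      DESTINATION lv_destination
      IMPORTING
        rfcsi_export = DATA(ls_system_info).
  CATCH cx_rfc_dest_provider_error INTO DATA(lx_dest).
    " Handle destination lookup errors here.
ENDTRY.
```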
You can document changes made to a commercial object, such as the time, the content and the way changes
are made, by logging these changes in a change document.
Example
You can use the change document to simplify the change history analysis for auditing in Financial
Accounting.
Every application object type has its own change document object type, which is called the Change Document
Object (which is an object class). To log changes to a commercial object in a change document, you must
define the Change Document Object for the commercial object type. The Change Document Object definition
contains the tables which represent a commercial object in the system.
Note
● Specify for each table whether a commercial object contains only one (single case) or several
(multiple case) records. For example, an order contains an order header and several order items. In
general, one record for the order header and several records for the order items are passed to the
change document creation when an order is changed.
● If a table contains fields with values referring to units and currency fields, the associated table,
containing these units and currencies, can also be specified.
● The object ID identifies a given commercial object. You can retrieve all changes made to a commercial
object using this key.
IV_OBJECT, IV_DEVCLASS, and IV_ACTIVITY are passed as import parameters. The return parameter
RV_IS_AUTHORIZED must be set to ABAP_TRUE if the check is successful.
Example
Sample Code
TRY.
    cl_chdo_object_tools_rel=>if_chdo_object_tools_rel~check_authorization(
      EXPORTING
        iv_object        = 'ZCHDO_TEST'
        iv_activity      = '03'
        iv_devclass      = 'ZLOCAL'
      RECEIVING
        rv_is_authorized = lv_is_authorized ).
ENDTRY.
IF lv_is_authorized IS INITIAL.
  out->write( |Exception occurred: authorization error.| ).
ELSE.
  out->write( |Activity can be performed on the change document object.| ).
ENDIF.
ENDMETHOD.
ENDCLASS.
When a change document object is generated, you receive the CL_<change document object
name>_CHDO class with method IF_CHDO_ENHANCEMENTS~AUTHORITY_CHECK without implementation. You
can create your own authority check for reading change documents written for this change document object.
The authority check for reading change documents is successful if the parameter RV_IS_AUTHORIZED = 'X' is
returned.
● For customer objects, start the name with a ‘Z’ or a ‘Y’. For more information, see SAP Note 16466.
● Enter the namespace separately from the object name. Once you have done so,
the namespace and the object name will always be displayed together. All generated objects for this
change document object will automatically be generated within the namespace.
● Keep in mind that the change document object name including the namespace has a maximum length of
15 characters. That means that if the namespace has 10 characters, only 5 characters are left for the
change document object name.
Process
The name of the object is assigned using the import parameter IV_OBJECT. Object details and generation
information are passed using the internal tables IT_CD_OBJECT_DEF (the object definition),
IT_CDOBJECT_TEXT (object texts) and IS_CD_OBJECT_GEN (generation information).
Once generated, a class (named automatically CL_<change document object name>_CHDO) with
methods WRITE and IF_CHDO_ENHANCEMENTS~AUTHORITY_CHECK is created. IV_CL_OVERWRITE can be
used to specify whether an existing class with the specified name can be overwritten. Changes are saved
in the transport request (IV_CORRNR).
Import Parameters
DOCUDEL = 'X'
DOCUINS = 'X'
Export Parameters
Parameter Name Field Name Value Help
ET_ERRORS
msgnr Message ID
v1 Variable to message
v2 Variable to message
v3 Variable to message
v4 Variable to message
TRY.
    cl_chdo_object_tools_rel=>if_chdo_object_tools_rel~create_and_generate_object(
      EXPORTING
        iv_object          = 'ZCHDO_TEST'          " change document object name
        it_cd_object_def   = p_it_tcdob            " change document object definition
        it_cd_object_text  = p_it_tcdobt           " change document object text
        is_cd_object_gen   = p_it_tcdrp            " change document object generation info
        iv_cl_overwrite    = 'X'                   " class overwrite flag
        iv_corrnr          = '<transport_request>' " transport request number
      IMPORTING
        et_errors          = rt_errors             " generation message table
*       et_synt_errors     =
*       et_synt_error_long =
    ).
  CATCH cx_chdo_generation_error INTO lr_err.
    out->write( |Exception occurred: { lr_err->get_text( ) }| ).
    ls_error = 'X'.
ENDTRY.
IF ls_error IS INITIAL.
  READ TABLE rt_errors WITH KEY kind = 'E'
       INTO lt_errors_err.
  IF sy-subrc IS INITIAL.
    out->write( |Exception occurred: { lt_errors_err-text } | ).
  ELSE.
    out->write( |Change document object created and generated | ).
  ENDIF.
ENDIF.
The name of the object is assigned using the import parameter IV_OBJECT. The object details and generation
information are passed using the internal tables IT_CD_OBJECT_DEF (the object definition),
IT_CD_OBJECT_TEXT (the object texts) and IS_CD_OBJECT_GEN (the generation information).
If the internal tables are not filled, the object definition is read directly from the database tables TCDOB and
TADIR and the generation information is read directly from database table TCDRP. If no generation information
exists in table TCDRP, it can also be passed using import parameter IS_CD_OBJECT_GEN. In this case, the
change document object will be newly generated without changing the change document object definition. Use
this method when the structure of a table that belongs to the change document object was changed.
Once generated, a class (named automatically CL_<change document object name>_CHDO) with
methods WRITE and IF_CHDO_ENHANCEMENTS~AUTHORITY_CHECK is created. You can use
IV_CL_OVERWRITE to specify whether an existing class with the specified name can be overwritten.
Changes made to the class are saved in the transport request (IV_CORRNR).
The export parameter ET_ERRORS is used to return all generation messages (in the message class CD). Any
syntax errors in the generated class are provided using ET_SYNT_ERROR (with a long text, if applicable, in
ET_SYNT_ERROR_LONG).
Import Parameters
DOCUDEL = 'X'
DOCUINS = 'X'
Export Parameters
Parameter Name Field Name Value Help
ET_ERRORS
msgnr Message ID
v1 Variable to message
v2 Variable to message
v3 Variable to message
v4 Variable to message
Sample Code
Furthermore, the import parameter IV_DEL_CL_WHEN_USED determines if the class of the change document
object is deleted (value ABAP_TRUE) or not (value ABAP_FALSE) when it is still being used. The changes made
to the class are saved in the transport request (IV_CORRNR).
Import Parameters
Parameter Name Field Name Value Help
Export Parameters
Parameter Name Field Name Value Help
ET_ERRORS
msgnr Message ID
v1 Variable to message
v2 Variable to message
v3 Variable to message
v4 Variable to message
Sample Code
The name of the object is assigned using the import parameter IV_OBJECT. The information is returned using
the export parameter ET_OBJECT_INFO.
Import Parameter
Parameter Name Field Name Value Help
Export Parameters
Parameter Name Field Name Value Help
ET_OBJECT_INFO
Sample Code
The WRITE method in the generated class CL_<change document object name>_CHDO creates change
documents from the object-specific update for an object ID.
Import Parameters
Parameter Name Value Help
● U - Change
● I - Insert
● D - Delete
ICDTXT_<change document object name> In this structure, the change document-relevant texts are
collected with the corresponding specifications:
Export Parameter
Parameter Name Value Help
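As a sketch of how the generated WRITE method might be invoked (the class name cl_zchdo_test_chdo, the logged table ZTAB, and all variable names below are hypothetical; the actual signature is generated from your change document object definition):

```abap
" Hypothetical sketch; the real parameter list is generated per change
" document object (one pair of old/new parameters per logged table).
cl_zchdo_test_chdo=>write(
  EXPORTING
    objectid          = lv_objectid     " ID of the changed commercial object
    utime             = lv_time         " time of the change
    udate             = lv_date         " date of the change
    username          = sy-uname        " user who made the change
    upd_ztab          = 'U'             " U = Change, I = Insert, D = Delete
    n_ztab            = ls_ztab_new     " new state of the logged table
    o_ztab            = ls_ztab_old     " old state of the logged table
    icdtxt_zchdo_test = lt_cdtxt ).     " change document-relevant texts
```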
Method CHANGEDOCUMENT_READ reads the change documents for one change document object. You can
restrict the search by various parameters (such as changed by, date, or time).
I_OBJECTCLASS
IT_OBJECTID
I_DATE_OF_CHANGE
I_TIME_OF_CHANGE
I_DATE_UNTIL
I_TIME_UNTIL
IT_USERNAME
IT_READ_OPTIONS
Export Parameter
Parameter Name Field Name Value Help
Sample Code
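A read call restricted by the import parameters listed above can be sketched as follows. The class name and the export parameter name in this sketch are placeholders; only the import parameter names are taken from this documentation:

```abap
" Placeholder sketch; class and export parameter names are assumptions.
TRY.
    cl_chdo_read=>changedocument_read(
      EXPORTING
        i_objectclass       = 'ZCHDO_TEST'   " change document object
        it_objectid         = lt_objectid    " IDs of the commercial objects
        i_date_of_change    = lv_from_date   " restrict by change date
        it_username         = lt_username    " restrict by user
      IMPORTING
        et_change_documents = lt_changes ).
  CATCH cx_root INTO DATA(lx_error).
    out->write( |Exception occurred: { lx_error->get_text( ) }| ).
ENDTRY.
```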
Many business applications require unique numbers, for example, to complete the keys of data records. To
get numbers from an interval, a number range object must be defined, which can have different
properties. In addition, intervals containing the numbers must be assigned to the number range object.
Numbers can be generated from existing number range intervals.
Note
Creation, change, and deletion of number range objects and intervals require developer role authorization.
Changes to objects and intervals can only be performed in the same software layer.
● For customer objects, the name must start with a ‘Z’ or a ‘Y’.
● The maximum length of a number range object is 10 characters.
NRCHECKASCII
LANGU Language.
Export Parameters
Parameter Name Field Name Value Help
ERRORS
TABLENAME Table
FIELDNAME Field
E: error
W: warning
Sample Code
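A CREATE call can be sketched as follows; it mirrors the structure of the UPDATE sample that follows, and the attribute values shown are examples only:

```abap
" Sketch of a CREATE call; attribute values are examples only and
" mirror the structure used by the UPDATE sample.
lv_object   = 'Z_TEST_03'.
lv_devclass = 'Z_SNUM'.
lv_corrnr   = 'SIDK123456'.

cl_numberrange_objects=>create(
  EXPORTING
    attributes = VALUE #( object     = lv_object
                          domlen     = 'CHAR8'
                          percentage = 9
                          buffer     = 'S'
                          noivbuffer = 12
                          devclass   = lv_devclass
                          corrnr     = lv_corrnr )
    obj_text   = VALUE #( object   = lv_object
                          langu    = 'E'
                          txt      = 'Create object'
                          txtshort = 'Test create' )
  IMPORTING
    errors     = DATA(lt_errors)
    returncode = DATA(lv_returncode)
).
```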
Import and export parameters are the same as in the CREATE method.
Sample Code
…
lv_object = 'Z_TEST_03'.
lv_devclass = 'Z_SNUM'.
lv_corrnr = 'SIDK123456'.
…
cl_numberrange_objects=>update(
EXPORTING
attributes = VALUE #( object = lv_object
domlen = 'CHAR8'
percentage = 9
buffer = 'S'
noivbuffer = 12
devclass = lv_devclass
corrnr = lv_corrnr )
obj_text = VALUE #( object = lv_object
langu = 'E'
txt = 'Update object'
txtshort = 'Test update' )
IMPORTING
errors = DATA(lt_errors)
returncode = DATA(lv_returncode)
).
…
Import Parameters
Parameter Name Field Name Value Help
…
lv_object = 'Z_TEST_03'.
lv_corrnr = 'SIDK123456'.
…
cl_numberrange_objects=>delete(
EXPORTING
object = lv_object
corrnr = lv_corrnr
).
…
Use the READ method to read the attributes of a number range object.
Import Parameters
Parameter Name Field Name Value Help
Export Parameters
Parameter Name Field Name Value Help
NRCHECKASCII
INTERVAL_EXISTS
LANGU Language.
Sample Code
…
lv_object = 'Z_TEST_03'.
…
cl_numberrange_objects=>read(
EXPORTING
language = sy-langu
object = lv_object
IMPORTING
attributes = DATA(ls_attributes)
interval_exists = DATA(lv_interval_exists)
obj_text = DATA(obj_text)
).
…
The class CL_NUMBERRANGE_INTERVALS provides methods for maintaining intervals of number range objects.
Import Parameters
Parameter Name Field Name Value Help
TONUMBER To Number
SUBOBJECT Sub-object
Export Parameters
Parameter Name Field Name Value Help
TONUMBER To number
Sample Code
…
CLASS zcl_nr_test_intervals_create DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
INTERFACES if_oo_adt_classrun.
PROTECTED SECTION.
PRIVATE SECTION.
ENDCLASS.
CLASS zcl_nr_test_intervals_create IMPLEMENTATION.
METHOD if_oo_adt_classrun~main.
DATA: lv_object TYPE cl_numberrange_objects=>nr_attributes-object,
lt_interval TYPE cl_numberrange_intervals=>nr_interval,
ls_interval TYPE cl_numberrange_intervals=>nr_nriv_line.
lv_object = 'Z_TEST_03'.
* intervals
ls_interval-nrrangenr = '01'.
ls_interval-fromnumber = '00000001'.
ls_interval-tonumber = '19999999'.
ls_interval-procind = 'I'.
APPEND ls_interval TO lt_interval.
ls_interval-nrrangenr = '02'.
ls_interval-fromnumber = '20000000'.
ls_interval-tonumber = '29999999'.
APPEND ls_interval TO lt_interval.
* create intervals
TRY.
out->write( |Create Intervals for Object: { lv_object } | ).
CALL METHOD cl_numberrange_intervals=>create
EXPORTING
interval = lt_interval
object = lv_object
subobject = ' '
IMPORTING
error = DATA(lv_error)
error_inf = DATA(ls_error)
error_iv = DATA(lt_error_iv)
warning = DATA(lv_warning).
ENDTRY.
ENDMETHOD.
Import and export parameters are the same as in the CREATE method.
Sample Code
…
lv_object = 'Z_TEST_03'.
* intervals
ls_interval-nrrangenr = '01'.
ls_interval-fromnumber = '00000002'.
ls_interval-tonumber = '19999998'.
ls_interval-procind = 'U'.
APPEND ls_interval TO lt_interval.
ls_interval-nrrangenr = '02'.
ls_interval-fromnumber = '20000002'.
ls_interval-tonumber = '29999997'.
APPEND ls_interval TO lt_interval.
…
CALL METHOD cl_numberrange_intervals=>update
EXPORTING
interval = lt_interval
object = lv_object
subobject = ' '
IMPORTING
error = DATA(lv_error)
error_inf = DATA(ls_error)
error_iv = DATA(lt_error_iv)
warning = DATA(lv_warning).
…
Import and export parameters are the same as in the CREATE method.
Sample Code
…
lv_object = 'Z_TEST_03'.
* intervals
ls_interval-nrrangenr = '01'.
ls_interval-fromnumber = '00000001'.
ls_interval-tonumber = '19999999'.
ls_interval-procind = 'D'.
APPEND ls_interval TO lt_interval.
ls_interval-nrrangenr = '02'.
Use the READ method to get the properties of number range intervals.
Import Parameters
Parameter Name Field Name Value Help
SUBOBJECT Sub-object
Sample Code
…
lv_object = 'Z_TEST_03'.
…
CALL METHOD cl_numberrange_intervals=>read
EXPORTING
object = lv_object
nr_range_nr1 = ' '
nr_range_nr2 = ' '
subobject = ' '
IMPORTING
interval = lt_interval.
…
The CL_NUMBERRANGE_RUNTIME class provides methods for getting numbers from an interval at runtime.
Use the NUMBER_CHECK method to check whether a number is within an external interval.
Import Parameters
Parameter Name Field Name Value Help
SUBOBJECT Sub-object
Export Parameter
Parameter Name Field Name Value Help
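A call of NUMBER_CHECK can be sketched as follows; the parameter names number and returncode are assumptions derived from the NUMBER_GET example below:

```abap
" Sketch; the number/returncode parameter names are assumptions.
CALL METHOD cl_numberrange_runtime=>number_check
  EXPORTING
    nr_range_nr = '01'          " interval to check against
    object      = lv_object     " number range object
    subobject   = ' '
    number      = lv_number     " externally assigned number to check
  IMPORTING
    returncode  = DATA(lv_rc).  " result of the check
```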
Use the NUMBER_GET method to determine the next number of a number range interval.
Import Parameters
Parameter Name Field Name Value Help
SUBOBJECT Sub-object
Export Parameters
Parameter Name Field Name Value Help
Sample Code
…
lv_object = 'Z_TEST_03'.
…
CALL METHOD cl_numberrange_runtime=>number_get
EXPORTING
nr_range_nr = '01'
object = lv_object
IMPORTING
number = DATA(lv_number)
returncode = DATA(lv_rcode).
…
Prerequisites
Context
You can use the communication scenario SAP_COM_0510 to pull Git repositories to an SAP Cloud Platform
ABAP Environment system.
Note
In SAP Cloud Platform ABAP Environment, Git repositories are wrapped in software components. These are
currently managed in the Manage Software Components app. The parameter passed to this API needs to
Procedure
1. (Authentication on the Server) The first step authenticates you on the server. The response
header contains an x-csrf-token, which is used as authentication for the POST request following in step 2.
Request
GET /sap/opu/odata/sap/MANAGE_GIT_REPOSITORY HTTP/1.1
Host: host.com
Authorization: Basic <credentials>
X-csrf-token: fetch
Response
HTTP/1.1 200 OK
X-csrf-token: xCsrfToken
2. (Pull a Git Repository) To trigger the pull of a Git repository, you have to insert the x-csrf-token that was
retrieved in the first request in the header parameters. The Git repository you want to pull is passed in the
body of the request.
Request
POST /sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull HTTP/1.1
Host: host.com
Authorization: Basic <credentials>
X-csrf-token: xCsrfToken
Content-Type: application/json
Accept: application/json
{
  "sc_name" : "/DMO/GIT_REPOSITORY"
}
Response
HTTP/1.1 200 OK
Content-Type: application/json
{
  "d": {
    "__metadata": {
      "id": "https://host.com/sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull(guid'UUID')",
      "uri": "https://host.com/sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull(guid'UUID')",
      "type": "cds_sd_a4c_a2g_gha_sc_web_api.PullType"
    },
    "uuid": "UUID",
    "sc_name": "/DMO/GIT_REPOSITORY",
    "namespace": "",
    "status": "R",
    "status_descr": "Running",
    "start_time": "/Date(1571227437000+0000)/",
    "change_time": "/Date(1571227472000+0000)/",
    "criticality": 2
  }
}
3. (Tracking the Status of the Pull) To track the status of the pull, you can make a GET request using the
uuid contained in the response. You can also read the URI directly from the “__metadata” of the response.
Request
GET /sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull(guid'UUID') HTTP/1.1
Host: host.com
Authorization: Basic <credentials>
Accept: application/json
The response contains the same entity as the second request. The pull is successful when the "status" in
the response body has the value S; the status description "status_descr" then returns Successful. In
case of an error, "status" has the value E and "status_descr" the value Error.
4. (Retrieving Logs) To get the Execution Log and the Transport Log after the Pull is finished, you can use the
following requests. Alternatively, you can use the URIs from the response of the POST request. You can also
check both logs in the Manage Software Components app for better readability.
Request
GET /sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull(guid'UUID')/
to_Execution_log HTTP/1.1
Host: host.com
Authorization: Basic <credentials>
Accept: application/json
Response
HTTP/1.1 200 OK
Content-Type: application/json
{
"d": {
"results": [ {
"__metadata": {
"id": "host.com/sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/
ExecutionLogs(index_no=1m,uuid=guid'UUID')", "uri": "host.com/sap/opu/
odata/sap/MANAGE_GIT_REPOSITORY/ExecutionLogs(index_no=1m,uuid=guid'UUID')",
"type": "cds_sd_a4c_a2g_gha_sc_web_api.ExecutionLogsType"
}
,
"index_no": "1",
"uuid": "UUID",
"type": "Information",
Request
GET /sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/Pull(guid'UUID')/
to_Transport_log HTTP/1.1
Host: host.com
Authorization: Basic <credentials>
Accept: application/json
Response
HTTP/1.1 200 OK
Content-Type: application/json
{
"d": {
"results": [ {
"__metadata": {
"id": "host.com/sap/opu/odata/sap/MANAGE_GIT_REPOSITORY/
ExecutionLogs(index_no=1m,uuid=guid'UUID')", "uri": "host.com/sap/opu/
odata/sap/MANAGE_GIT_REPOSITORY/ExecutionLogs(index_no=1m,uuid=guid'UUID')",
"type": "cds_sd_a4c_a2g_gha_sc_web_api.ExecutionLogsType"
}
,
"index_no": "1",
"uuid": "UUID",
"type": "Information",
"descr": "Step X: Message",
"timestamp": "/Date(1571227438000+0000)/",
"criticality": 0,
}
]
}
}
Application jobs can be used by key users or business users to schedule predefined business logic. The
execution can be scheduled once at a certain point in time or periodically.
A Job Catalog Entry represents the business logic to be executed. It also contains the parameter definition for
the job execution.
Consumption View
A key user or a business user can schedule a job via the Fiori app Application Jobs. The required business
catalog for this app is SAP_CORE_BC_APJ. The user needs to select a Job Template first; hence, each Job
Catalog Entry must have at least one Job Template, otherwise it cannot be executed. After the selection of a
Job Template, the respective parameters with their default values are shown. The user can adjust the default
values and schedule the job execution via various scheduling options.
Development View
Job Catalog Entry and Job Template are development entities. They need to be defined in the development
system and transported into subsequent systems. There is no editor available in ABAP Development Tools
(ADT), but you can use a released API to define these entities. The business logic of a Job Catalog Entry needs
to be implemented in a class. This class is assigned to the Job Catalog Entry.
Usually, reports and their selection parameters are used to define the business logic. As reports are not
supported in the SAP Cloud Platform ABAP environment, a class needs to be implemented instead of a report.
The Job Catalog Entry contains the definition of the selection parameters and the reference to the
implemented class. A Job Template corresponds to a system selection variant (starting with &SAP), which can
also be transported in S/4HANA.
Instead of scheduling a job as one option of report execution in S/4HANA, the Fiori app Application Jobs is
used to schedule jobs in S/4HANA Cloud and SAP Cloud Platform ABAP environment.
Related Information
Creating a Job Catalog Entry and a Job Template in ADT [page 1394]
Setting up the Authorizations [page 1398]
Scheduling an Application Job [page 1399]
Follow these steps to create a Job Catalog Entry and a Job Template in ADT.
Procedure
1. Implement the business logic. You need to create a class which implements certain interfaces so that it is
usable in an application job. For more information, see Implementing the Business Logic [page 1394].
2. Define the Job Catalog Entry. With a method of a certain framework class, you create a Job Catalog Entry
which refers to the class of step 1. For more information, see Defining the Job Catalog Entry [page 1396].
3. Define the Job Template. With another method of the framework class, you create a Job Template which
refers to the Job Catalog Entry of step 2. For more information, see Defining the Job Template [page 1396].
Related Information
The following development steps require a user with the development role.
● IF_APJ_DT_EXEC_OBJECT
● IF_APJ_RT_EXEC_OBJECT
This class is considered the main class of the application job. It needs to be part of a customer-defined
software component (not ZLOCAL) so that it can be transported into subsequent systems.
● The content of the table ET_PARAMETER_DEF determines the parameter section for the job catalog entry,
which will refer to this main class.
● The content of the table ET_PARAMETER_VAL determines the default values for these parameters in the
job template, which will refer to the job catalog entry mentioned above.
The second interface, IF_APJ_RT_EXEC_OBJECT, contains runtime-related methods. Method EXECUTE() is
called by the application jobs framework when a scheduled job is actually executed. It receives an internal table
as a parameter. This table is of the same type as the table ET_PARAMETER_VAL of method
Please refer to the example code for an application jobs main class. Literals are used instead of language-
dependent texts in order to simplify the example.
Sample Code
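A minimal sketch of such a main class is shown below. The field names used in the parameter tables (selname, kind, low, and so on) follow the common selection-option pattern and should be checked against the released interfaces in your system:

```abap
CLASS zcl_demo_job DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES: if_apj_dt_exec_object,
                if_apj_rt_exec_object.
ENDCLASS.

CLASS zcl_demo_job IMPLEMENTATION.
  METHOD if_apj_dt_exec_object~get_parameters.
    " Parameter definition shown in the job catalog entry
    et_parameter_def = VALUE #( ( selname        = 'P_DESCR'
                                  kind           = if_apj_dt_exec_object=>parameter
                                  datatype       = 'C'
                                  length         = 80
                                  param_text     = 'Description'
                                  changeable_ind = abap_true ) ).
    " Default values shown in the job template
    et_parameter_val = VALUE #( ( selname = 'P_DESCR'
                                  kind    = if_apj_dt_exec_object=>parameter
                                  sign    = 'I'
                                  option  = 'EQ'
                                  low     = 'Demo run' ) ).
  ENDMETHOD.

  METHOD if_apj_rt_exec_object~execute.
    " Business logic; it_parameters contains the scheduled values.
    LOOP AT it_parameters INTO DATA(ls_parameter).
      " ... evaluate ls_parameter-selname / ls_parameter-low ...
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```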
Related Information
Job Catalog Entries and Job Templates cannot be created in a dedicated ADT editor; instead, they are created
via a released API. This API is called from within a console application. The console application is only required
in the development system.
Job Catalog Entries and Job Templates are created with a package assignment and the objects are assigned to
a transport request. You need to make sure that package and transport request fit together (regarding
transport layer) and that possible naming conventions are obeyed.
The Job Catalog Entry mainly contains the reference to the implementation class of the business logic.
Related Information
The creation of a Job Template follows the same technical rules as a Job Catalog Entry as described in Defining
the Job Catalog Entry [page 1396].
The Job Template represents a set of default parameters for the assigned Job Catalog Entry. The Job Template
is mandatory for the Fiori app Application Jobs to choose a job definition to be executed. A Job Catalog Entry
can have more than one Job Template.
The following code example shows a console application that generates the minimal number of required
development objects: One Job Catalog Entry and one related Job Template.
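The overall shape of such a console application can be sketched as follows. The API class name and its parameter names in this sketch are assumptions; check the released API documentation in your system for the exact signature:

```abap
" Placeholder sketch; the API class and parameter names are assumptions.
CLASS zcl_create_job_artifacts DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_oo_adt_classrun.
ENDCLASS.

CLASS zcl_create_job_artifacts IMPLEMENTATION.
  METHOD if_oo_adt_classrun~main.
    TRY.
        DATA(lo_content) = cl_apj_dt_create_content=>get_instance( ). " assumed API
        " Create the Job Catalog Entry referring to the main class.
        lo_content->create_job_cat_entry(
          iv_catalog_name = 'ZDEMO_JOB_CATALOG'
          iv_class_name   = 'ZCL_DEMO_JOB'
          iv_text         = 'Demo job catalog entry'
          iv_package      = 'Z_DEMO_PACKAGE'
          iv_transport    = '<transport_request>' ).
        " Create one Job Template for the catalog entry.
        lo_content->create_job_template(
          iv_template_name = 'ZDEMO_JOB_TEMPLATE'
          iv_catalog_name  = 'ZDEMO_JOB_CATALOG'
          iv_text          = 'Demo job template'
          iv_package       = 'Z_DEMO_PACKAGE'
          iv_transport     = '<transport_request>' ).
        out->write( |Job catalog entry and template created.| ).
      CATCH cx_root INTO DATA(lx_error).
        out->write( |Error: { lx_error->get_text( ) }| ).
    ENDTRY.
  ENDMETHOD.
ENDCLASS.
```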
Related Information
Some further activities in ADT and in the administrator’s launchpad are necessary to be able to schedule the
Job Template in the Fiori app Application Jobs.
Currently, a business user who shall be able to schedule application jobs also needs a business
role to which the business catalog Application Jobs (SAP_CORE_BC_APJ) has been assigned. The
administrator role SAP_BR_ADMINISTRATOR is such a role, but it is recommended to add the
business catalog Application Jobs (SAP_CORE_BC_APJ) to the newly created role as well.
6. Save.
3. Assign the Business Role to a User.
For more information, see How to Maintain Business Users [page 1838].
Related Information
Open the Fiori app Application Jobs in the Fiori Launchpad and perform the following steps:
You can find the log of the executed jobs on the entry screen of the Fiori app Application Jobs.
Learn more about developing applications on the Neo environment of SAP Cloud Platform.
Overview
The Neo environment of SAP Cloud Platform enables you to develop and run cloud applications using
technologies such as Java EE, SAP HANA, HTML5, SAPUI5, and so on. The environment itself is SAP-
proprietary but it is designed to run applications based on community or SAP technologies.
To deploy business applications bundled in a Multi-Target Application (MTA) archive, use one of the following
options:
● The deploy-mta command for the Command Line Interface (CLI), as described in deploy-mta [page
2008].
● The Solutions view of the SAP Cloud Platform cockpit, as described in Deploying Solutions Using the
Cockpit [page 1681].
Related Information
SAP Cloud Platform enables you to develop, deploy and use Java applications in a cloud environment.
Applications run on a runtime container where they can use the platform services APIs and Java EE APIs
according to standard patterns.
The SAP Cloud Platform Runtime for Java enables the provisioning and running of applications on the platform.
The runtime is represented by the Java Virtual Machine, the Application Runtime Container, and Compute Units.
Cloud applications interact at runtime with the containers and services via the platform APIs.
Compute Unit
The Java development process is enabled by the SAP Cloud Platform Tools, which comprise the Eclipse IDE
and the SAP Cloud Platform SDK.
During and after development, you can configure and operate an application using the cockpit and the console
client.
Appropriate for
● Developing and running Java Web applications based on standard JSR APIs
● Executing Java Web applications which include third-party Java libraries and frameworks supporting
standard JSR APIs
● Supporting Apache Tomcat Java Web applications.
Set up your Java development environment and deploy your first application in the cloud.
Samples
A set of sample applications allows you to explore the core functionality of SAP Cloud Platform and shows how
this functionality can be used to develop complex Web applications. See: Using Samples [page 1421]
Tutorials
Before you can start developing your application, you need to download and set up the necessary tools, which
include Eclipse IDE for Java EE Developers, SAP Cloud Platform Tools, and SDK.
SAP Cloud Platform Tools, SAP Cloud Platform SDK for Neo environment, SAP JVM, and the Cloud Connector,
can be downloaded from the SAP Development Tools for Eclipse page.
Procedure
1. For Java applications, choose between three types of SAP Cloud Platform SDK for Neo environment.
For more information, see Install the SAP Cloud Platform SDK for Neo Environment [page 1403].
2. SAP JVM is the Java runtime used in SAP Cloud Platform. It can be set as a default JRE for your local
runtime.
For instructions on how to install it, see (Optional) Install SAP JVM [page 1404].
3. Download and set up Eclipse IDE for Java EE Developers.
See Install Eclipse IDE [page 1405].
4. Download and set up SAP Development Tools for Eclipse.
See Install SAP Development Tools for Eclipse [page 1406].
5. Configure the landscape host and SDK location on which you will be deploying your application.
See Set Up the Runtime Environment [page 1408].
6. Add Java Web, Java Web Tomcat 7, Java Web Tomcat 8, or Java EE 6 Web Profile, according to the SDK you
use. See Set Up the Runtime Environment [page 1408].
For more information on the different SDK versions and their corresponding runtime environments, see
Application Runtime Container [page 1430].
7. To set up SAP JVM as a default JRE for your local environment, see Set Up SAP JVM in Eclipse IDE [page
1410].
8. If you prefer working with the Console Client, see Set Up the Console Client [page 1412].
9. If you need to establish a connection between on-demand applications in SAP Cloud Platform and existing
on-premise systems, you can use the Cloud Connector.
For more information, see Cloud Connector.
Context
For more information, see section Application Runtime Container [page 1430].
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP Cloud Platform Neo Environment SDK section, download the relevant ZIP file and save it to
your local file system.
Your SDK is ready for use. To use the SAP Cloud Platform SDK for Neo environment with Eclipse, see Set Up the
Runtime Environment [page 1408]. To use the console client, see Using the Console Client [page 1928].
Related Information
Context
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - SAP Java
Virtual Machine (JVM).
SAP JVM is a certified Java Virtual Machine and Java Development Kit (JDK), compliant to Java Standard
Edition (SE) 8. Technology-wise it is based on the OpenJDK and has been enhanced with a strong focus on
supportability and reliability. One example of these enhancements is the SAP JVM Profiler. The SAP JVM
Profiler is a tool that helps you analyze the resource consumption of a Java application running on the SAP
Cloud Platform local runtime. You can use it to profile simple stand-alone Java programs or complex enterprise
applications.
Customer support is provided directly by SAP for the full maintenance period of SAP applications that use the
SAP JVM. For more information, see Java Virtual Machine [page 1428].
Note
This is an optional procedure. You can also run your local server for SAP Cloud Platform on a standard JDK
platform, that is, an Oracle JVM. SAP JVM, however, is a prerequisite for local profiling with the SAP JVM
Profiler.
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP JVM section, download the SAP JVM archive file compatible to your operating system and
save it to your local file system.
3. Extract the archive file.
Note
If you use Windows as your operating system, you need to install the Visual C++ 2013 Runtime prior to
using SAP JVM. The installation package for the Visual C++ 2013 Runtime can be obtained from Microsoft.
Related Information
Prerequisites
If you are not using SAP JVM, you need to have JDK installed in order to be able to run Eclipse.
Procedure
Caution
The support for Mars and Neon has reached the end of maintenance. We recommend that you use the
Oxygen release.
2. Find the ZIP file you have downloaded on your local file system and unpack the archive.
3. Go to the eclipse folder and run the eclipse executable file.
4. Specify a Workspace directory.
5. To open the Eclipse workbench, choose Workbench in the upper right corner.
Note
If the version of your previous Eclipse IDE is 32-bit based and your currently installed Eclipse IDE is 64-bit
based (or the other way round), you need to delete the Eclipse Secure Storage, where Eclipse stores, for
example, credentials for source code repositories and other login information. For more information, see
Eclipse Help: Secure Storage .
To use SAP Cloud Platform features, you first need to install the relevant toolkit. Follow the procedure below.
Prerequisites
You have installed an Eclipse IDE. For more information, see Install Eclipse IDE [page 1405].
Caution
The support for Mars and Neon has reached the end of maintenance. We recommend that you use the
Oxygen release.
Procedure
Note
4. Configure your proxy settings (in case you work behind a proxy or a firewall):
1. Go to General Network Connections .
2. In the Active Provider dropdown menu, choose Manual.
3. Configure your <HTTP> and <HTTPS> connections.
4. Choose Apply.
5. Choose OK to close the Preferences window.
6. In the main menu, choose Help Install New Software .
7. Enter in the Work with field the following URL:
○ For Eclipse Oxygen (4.7), add URL: https://tools.hana.ondemand.com/oxygen
8. Press the ENTER key.
9. The checkbox Contact all update sites during install to find required software is selected by default.
10. Select SAP Cloud Platform Tools to install the whole toolkit. If you do not need the complete package,
expand the node and only select the necessary components.
11. Choose Next.
12. In the Install Details window, review the items to be installed and choose Next.
13. Read and accept the Eclipse and SAP license agreements and choose Finish.
14. After the successful installation, you are prompted to restart the Eclipse IDE. Choose Yes.
If you want your SAP Cloud Platform Tools to be updated regularly and automatically, open the
Preferences window again and choose Install/Update Automatic Updates . Select Automatically find
new updates and notify me and choose Apply.
Prerequisites
You have installed the SAP Development Tools for Eclipse. See Install SAP Development Tools for Eclipse [page
1406]
Procedure
Note
○ If you have previously entered a subaccount and user name for your region host, these names are suggested in dropdown lists.
○ Previously entered region hosts are also available in a dropdown list.
7. Choose the Validate button to check whether the data on this preference page is valid.
8. Choose OK.
In your Eclipse IDE, set up the runtime environment for Java applications. Use the same runtime environment
as the one you will be using to run the applications on the cloud.
Context
There are different runtime environments for Java applications available. For a complete list, see Application
Runtime Container [page 1430].
Prerequisites
You have downloaded an SDK archive and installed it in your Eclipse IDE. For more information, see Install the
SAP Cloud Platform SDK for Neo Environment [page 1403].
Procedure
Note
Choose the steps relevant for the runtime you have downloaded and installed. See Install the SAP Cloud
Platform SDK for Neo Environment [page 1403].
Java Web
Note
When deploying your application on SAP Cloud Platform, you can change your server runtime even during deployment. If you manually set a server runtime different from the currently loaded one, you need to republish the application. For more information, see Deploy on the Cloud from Eclipse IDE [page 1469].
Related Information
Context
Once you have installed SAP JVM, you can set it as the default JRE for your local runtime. Follow the steps below.
Prerequisites
You have downloaded and installed SAP JVM, version 7.1.054 or higher.
You can set SAP JVM as default or assign it to a specific SAP Cloud Platform runtime.
● To use SAP JVM as default for your Eclipse IDE, follow the steps:
1. Open the Preferences window again.
2. Select sapjvm<n> as default.
3. Choose OK.
● To use SAP JVM for launching local servers only, follow the steps:
1. Double-click on the local server you have created (Java Web Server, Java Web Tomcat 7
Server or Java Web Tomcat 8 Server).
2. Open the Overview tab and choose Open launch configuration.
3. Select the JRE tab.
4. Choose the Alternative JRE option.
5. From the dropdown menu, select the SAP JVM version you have just added.
6. Choose OK.
Related Information
Prerequisites
You have downloaded and extracted the SAP Cloud Platform SDK for Neo environment. For more information,
see Install the SAP Cloud Platform SDK for Neo Environment [page 1403].
Context
The SAP Cloud Platform console client is part of the SAP Cloud Platform SDK for Neo environment. You can find it in the tools folder of your SDK installation. Before using the tool, you need to configure it to work with the platform.
Procedure
cd C:\HCP\SDK
cd tools
3. In case you use a proxy server, specify the proxy settings by using environment variables. You can find
sample proxy settings in the readme.txt file in the \tools folder of your SDK location.
○ Microsoft Windows
Note
○ For the new variables to be effective every time you open the console, define them using Advanced System Settings > Environment Variables and restart the console.
○ For the new variables to be valid only for the currently open console, define them in the console
itself.
For example, if your proxy host is proxy and proxy port is 8080, specify the following environment
variables:
set HTTP_PROXY_HOST=proxy
set HTTP_PROXY_PORT=8080
set HTTPS_PROXY_HOST=proxy
set HTTPS_PROXY_PORT=8080
set HTTP_NON_PROXY_HOSTS="localhost"
If you need basic proxy authentication, enter your user name and password:
set HTTP_PROXY_USER=<user_name>
set HTTP_PROXY_PASSWORD=<password>
set HTTPS_PROXY_USER=<user_name>
set HTTPS_PROXY_PASSWORD=<password>
○ Linux
export http_proxy=http://proxy:8080
export https_proxy=https://proxy:8080
export no_proxy="localhost"
If you need basic proxy authentication, enter your user name and password:
export http_proxy=http://user:password@proxy:8080
export https_proxy=https://user:password@proxy:8080
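These variables are consumed by the console client itself, but the mapping is simple to mirror in plain Java. The following sketch is an illustration only (the class and helper names are not part of the SDK): it builds a java.net.Proxy from HTTPS_PROXY_HOST/HTTPS_PROXY_PORT-style values, falling back to a direct connection when no host is set.

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class ProxyFromEnv {
    // Builds a java.net.Proxy from HTTPS_PROXY_HOST / HTTPS_PROXY_PORT
    // style values. The variable names match the readme.txt sample above;
    // how the console client consumes them internally is not shown here.
    static Proxy httpsProxy(String host, String port) {
        if (host == null || host.isEmpty()) {
            return Proxy.NO_PROXY; // no proxy configured
        }
        int portNumber = (port == null) ? 8080 : Integer.parseInt(port);
        // InetSocketAddress tolerates unresolvable hosts (marked unresolved)
        return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(host, portNumber));
    }

    public static void main(String[] args) {
        System.out.println(httpsProxy(System.getenv("HTTPS_PROXY_HOST"),
                                      System.getenv("HTTPS_PROXY_PORT")));
    }
}
```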
Related Information
If you have already installed and used the SAP Cloud Platform Tools, SAP Cloud Platform SDK for Neo
environment and SAP JVM, you only need to keep them up to date.
Context
If you have already installed an SAP Cloud Platform SDK for Neo environment package, you only need to
update it regularly. To update your SDK, follow the steps below.
1. Download the new SAP Cloud Platform SDK for Neo environment version from https://tools.hana.ondemand.com/#cloud.
2. Unzip the SDK to a new directory on your local file system. Do not install the new SDK version to a directory that already contains an SDK.
3. Go to the Servers tab view.
4. Stop and delete all local servers.
5. Choose Window > Preferences > Server > Runtime Environment.
For each previously added local runtime:
1. Select the corresponding entry in the table.
2. Choose the Edit button.
3. Locate the new SDK version:
○ For Java Web: Select option Use Java Web SDK from the following location and then choose the
Browse button and find the folder where you have unpacked the SDK ZIP file.
○ For Java Web Tomcat 7: Choose the Browse button and find the folder where you have
unpacked the SDK ZIP file or use the Download and Install button to get the latest version.
○ For Java Web Tomcat 8: Choose the Browse button and find the folder where you have
unpacked the SDK ZIP file or use the Download and Install button to get the latest version.
○ For Java EE 6 Web Profile: Select option Use Java EE 6 Web Profile SDK from the following
location and then choose the Browse button and find the folder where you have unpacked the SDK
ZIP file.
Note
Again, if the SAP Cloud Platform SDK for Neo environment version is newer than the versions supported by your SAP Cloud Platform Tools for Java, a message appears prompting you to update your SAP Cloud Platform Tools for Java. You can check for updates (recommended) or ignore the message.
4. Choose Finish.
6. After editing all local runtimes, choose OK.
Related Information
Install the SAP Cloud Platform SDK for Neo Environment [page 1403]
Application Runtime Container [page 1430]
sdk-upgrade [page 2118]
Context
If you have already installed an SAP Java Virtual Machine, you only need to update it. To update your JVM,
follow the steps below.
Procedure
Note
Do not install the new SAP JVM version to a directory that already contains SAP JVM.
3. In the Eclipse IDE main menu, choose Window > Preferences > Java > Installed JREs and select the JRE configuration entry of the old SAP JVM version.
4. Choose the Edit... button.
5. Use the Directory... button to select the directory of the new SAP JVM version.
6. Choose Finish.
7. In the Preferences window, choose OK.
Related Information
Context
If you have already installed SAP Cloud Platform Tools, you only need to update them. To do so, follow the steps
below.
1. Ensure that the SAP Cloud Platform Tools software site is checked for updates:
1. Find out whether you are using an Oxygen or Neon release of Eclipse. The name of the release is shown on the welcome screen when the Eclipse IDE starts.
Caution
Support for the Mars release has entered end of maintenance. We recommend that you use the Oxygen or Neon release.
2. In the main menu, choose Window > Preferences > Install/Update > Available Software Sites.
3. Make sure there is an entry https://tools.hana.ondemand.com/oxygen or https://tools.hana.ondemand.com/neon and that this entry is selected.
4. Choose OK to close the Preferences dialog box.
2. Choose Help > Check for Updates.
3. Choose Finish to start installing the updates.
Note
If you want to have your SAP Cloud Platform Tools updated regularly and automatically, open the Preferences window again and choose Install/Update > Automatic Updates. Select Automatically find new updates and notify me and choose Apply.
Related Information
This document describes how to create a simple Hello World Web application, which you can use for testing on
SAP Cloud Platform.
First, you create a dynamic Web project and then you add a simple Hello World servlet to it.
After you have created the Web application, you can test it on the local runtime and then deploy it on the cloud.
Prerequisites
You have installed the SAP Cloud Platform Tools. For more information, see Setting Up the Development
Environment [page 1402].
If you work in a proxy environment, set the proxy host and port correctly.
1. Open your Eclipse IDE for Java EE Developers and switch to the Workbench screen.
2. From the Eclipse IDE main menu, choose File > New > Dynamic Web Project.
3. In the Project name field, enter HelloWorld.
4. In the Target Runtime pane, select the runtime you want to use to deploy the Hello World application. In this
tutorial, we use Java Web.
5. In the Configuration pane, use the default configuration.
Note
The application will be provisioned with a JRE version matching the Web project's Java facet. If that JRE version is not supported by SAP Cloud Platform, the default JRE for the selected SDK is used (JRE 7 for the Java Web and Java EE 6 Web Profile SDKs).
1. On the HelloWorld project node, open the context menu and choose New > Servlet. The Create Servlet window opens.
2. Enter hello as Java package and HelloWorldServlet as class name.
6. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
7. Replace the body content of the doGet(…) method with the following line:
response.getWriter().println("Hello World!");
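If you want to check the greeting logic outside the container, the doGet(...) body above can be exercised with a plain PrintWriter standing in for response.getWriter(). The class and method names in this standalone sketch are illustrative only; in the real servlet the line lives inside doGet(...).

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class HelloWorldGreeting {
    // Mirrors the doGet(...) body from the tutorial. A plain PrintWriter
    // stands in for response.getWriter(), so this compiles and runs
    // without the servlet API on the classpath.
    static void writeGreeting(PrintWriter out) {
        out.println("Hello World!");
    }

    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        writeGreeting(new PrintWriter(buffer, true));
        System.out.print(buffer); // prints "Hello World!"
    }
}
```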
Next Steps
Test your Hello World application locally and deploy it to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1453].
The sample applications allow you to explore the core functionality of SAP Cloud Platform and show how this
functionality can be used to develop more complex Web applications. The samples are included in the SAP
Cloud Platform SDK for Neo environment or presented as blogs in the SAP Community.
SDK Samples
The samples provided as part of the SAP Cloud Platform SDK for Neo environment introduce important
concepts and application features of the SAP Cloud Platform and show how common development tasks can
be automated using build and test tools.
The samples are located in the <sdk>/samples folder. The following samples are currently available:
● hello-world: a simple HelloWorld Web application. See Creating a HelloWorld Application [page 1416].
● connectivity: consumption of Internet services. See Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 268].
● persistence-with-ejb: container-managed persistence with JPA. See Tutorial: Adding Container-Managed Persistence with JPA (SDK for Java EE 6 Web Profile) [page 1503].
● persistence-with-jdbc: relational persistence with JDBC. See Tutorial: Adding Persistence with JDBC (SDK for Java Web) [page 1551].
● document-store: document storage in the repository. See Using the Document Service in a Web Application [page 575].
● SAP_Jam_OData_HCP: accessing data in SAP Jam via OData. See the source code for using the SAP Jam API.
All samples can be imported as Eclipse or Maven projects. While the focus has been placed on the Eclipse and
Apache Maven tools due to their wide adoption, the principles apply equally to other IDEs and build systems.
For more information about using the samples, see Import Samples as Eclipse Projects [page 1423], Import
Samples as Maven Projects [page 1424], and Building Samples with Maven [page 1425].
The Web application "Paul the Octopus" is part of a community blog and shows how the SAP Cloud Platform
services and capabilities can be combined to build more complex Web applications, which can be deployed on
the SAP Cloud Platform.
● It is intended for anyone who would like to gain hands-on experience with the SAP Cloud Platform.
● It involves the following platform services: identity, connectivity, SAP HANA and SAP ASE, and document.
● Its user interface is developed via SAPUI5 and is based on the Model-View-Controller concept. SAPUI5 is
based on HTML5 and can be used for building applications with sophisticated UI. Other technologies that
you can see in action in "Paul the Octopus" are REST services and job scheduling.
For more information, see the SAP Community blog: Get Ready for Your Paul Position .
The Web application "SAP Library" is presented in a community blog as another example of demonstrating the
usage of several SAP Cloud Platform services in one integrated scenario, closely following the product
documentation. You can import it as a Maven project, play around with your own library, and have a look at how
it is implemented. It allows you to reserve and return books, edit details of existing ones, add new titles,
maintain library users' profiles and so on.
● The library users authenticate using the identity service. It supports Single Sign-On (SSO).
● The books’ status and features are persisted using the SAP HANA and SAP ASE service.
● Book details are retrieved using a public Internet Web service, demonstrating usage of the connectivity service.
● The e-mails you receive when reserving and returning books are implemented using a Mail destination.
● When you upload your profile image, it is persisted using the document service.
For more information, see the SAP Community blog: Welcome to the Library!
Related Information
To get a sample application up and running, import it as an Eclipse project into your Eclipse IDE and then
deploy it on the local runtime and SAP Cloud Platform.
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP Cloud Platform server runtime environment as described in Setting Up the Development Environment [page 1402].
Procedure
1. From the main menu of the Eclipse IDE, choose File > Import… > General > Existing Projects into Workspace and then choose Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
3. Under Projects select the project (or projects) you want to import.
4. Choose Finish to start the import.
The project is imported into your workspace and appears in the Project Explorer view.
Tip
Note
If you have not yet set up a server runtime environment, the following error will be reported: "Faceted
Project Problem: Target runtime SAP Cloud Platform is not defined". To set up the runtime
environment, complete the steps as described in Set Up Default Region Host in Eclipse [page 1407] and
Set Up the Runtime Environment [page 1408].
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploy Locally from Eclipse
IDE [page 1468] and Deploy on the Cloud from Eclipse IDE [page 1469].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the
respective readme.txt.
When you import samples as Eclipse projects, the tests provided with the samples are not imported. To be
able to run automated tests, you need to import the samples as Maven projects.
To import the tests provided with the SDK samples, import the samples as Maven projects.
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP Cloud Platform server runtime environment as described in Setting Up the Development Environment [page 1402].
Procedure
Note
To configure the Maven settings.xml file, choose Window > Preferences > Maven > User Settings. This configuration is required if you need to provide your proxy settings. For more information, see http://maven.apache.org/settings.html.
Procedure
1. From the Eclipse main menu, choose File > Import… > Maven > Existing Maven Projects and then choose Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
Tip
5. If necessary, update the project to remove any errors after the import. To do this, select the project and from the context menu choose Maven > Update Project, and then choose OK.
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploy Locally from Eclipse
IDE [page 1468] and Deploy on the Cloud from Eclipse IDE [page 1469].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the
respective readme.txt.
All samples provided can be built with Apache Maven. The Maven build shows how a headless build and test
can be completely automated.
Context
The Maven build of each sample does the following:
● Builds a Java Web application based on the SAP Cloud Platform API
● Demonstrates how to run rudimentary unit tests (not available in all samples)
● Installs, starts, waits for, and stops the local server runtime
● Deploys the application to the local server runtime and runs the integration test
● Starts, waits for, and stops the cloud server runtime
● Deploys the application to the cloud server runtime and runs the integration test
Related Information
You can use the Apache Maven command line tool to run local and cloud integration tests for any of the SDK
samples.
Prerequisites
● You have downloaded the Apache Maven command line tool. For more information, see the detailed Maven
documentation at http://maven.apache.org .
● You are familiar with the Maven build lifecycle. For more information, see http://maven.apache.org/guides/
introduction/introduction-to-the-lifecycle.html .
Procedure
1. Open the folder of the relevant project, for example, <sdk>/samples/hello-world, and then open the
command prompt.
2. Enter the verify command with the following profile in order to activate the local integration test:
If you are using a proxy, you need to define additional Maven properties as described below in step 4 (see
proxy details).
3. Press ENTER to start the build process.
All phases of the default lifecycle are executed up to and including the verify phase, with the resulting build
status shown on completion.
4. To activate the cloud integration test, which involves deploying the built Web application on a landscape in
the cloud, enter the following profile with the additional Maven properties given below:
○ Landscape host
The landscape host (default: hana.ondemand.com) is predefined in the parent pom.xml file (<sdk>/
samples/pom.xml) and can be overwritten, as necessary. If you have a developer account, for
example, and are therefore using the trial landscape, enter the following:
○ Account details
○ Proxy details
If you use a proxy for HTTPS Internet access, provide your proxy host (https.proxyHost) and if
necessary your proxy port (https.proxyPort):
Tip
If your proxy requires authentication, you might want to use the Authenticator class to pass the
proxy user name and password. For more information, see Authenticator . Note that for the sake
of simplicity this feature has not been included in the samples.
Tip
To avoid having to repeatedly enter the Maven properties as described above, you can add them
directly to the pom.xml file, as shown in the example below:
<sap.cloud.username>p0123456789</sap.cloud.username>
You might also want to use environment variables to set the property values dynamically, in particular
when handling sensitive information such as passwords, which should not be stored as plain text:
<sap.cloud.password>${env.SAP_CLOUD_PASSWORD}</sap.cloud.password>
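For reference, the two properties above can sit together in the properties section of the sample's pom.xml, roughly as follows. This is a sketch: the surrounding pom structure is assumed, and only the property names shown in this guide are used.

```xml
<properties>
    <!-- Plain-text value; acceptable for a user name -->
    <sap.cloud.username>p0123456789</sap.cloud.username>
    <!-- Resolved from the SAP_CLOUD_PASSWORD environment variable,
         so the password never appears in the file -->
    <sap.cloud.password>${env.SAP_CLOUD_PASSWORD}</sap.cloud.password>
</properties>
```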
Related Information
Regions and Hosts Available for the Neo Environment [page 14]
The SAP Cloud Platform Runtime for Java comprises the components that create the environment for provisioning and running applications on SAP Cloud Platform. The runtime consists of the Java Virtual Machine, the application runtime container, and compute units. Cloud applications can interact at runtime with the containers and services via the platform APIs.
Components
Related Information
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - SAP Java
Virtual Machine (JVM).
SAP JVM is a certified Java Virtual Machine and Java Development Kit (JDK), compliant with Java Standard Edition (SE) 8. Technology-wise, it is based on OpenJDK and has been enhanced with a strong focus on supportability and reliability. One example of these enhancements is the SAP JVM Profiler, a tool that helps you analyze the resource consumption of a Java application running on the SAP Cloud Platform local runtime. You can use it to profile simple stand-alone Java programs or complex enterprise applications.
SAP JVM
The SAP JVM is a standard compliant certified JDK, supplemented by additional supportability and developer
features and extensive monitoring and tracing information. All these features are designed as interactive, on-
demand facilities of the JVM with minimal performance impact. They can be switched on and off without
having to restart the JVM (or the application server that uses the JVM).
Debugging on Demand
With SAP JVM debugging on demand, Java developers can activate and deactivate Java debugging directly –
there is no need to start the SAP JVM (or the application server on top of it) in a special mode. Java debugging
in the SAP JVM can be activated and deactivated using the jvmmon tool, which is part of the SAP JVM delivery.
This feature does not lower performance if debugging is turned off. The SAP JVM JDK is delivered with full
source code providing debugging information, making Java debugging even more convenient.
Profiling
To address the root cause of all performance and memory problems, the SAP JVM comes with the SAP JVM
Profiler, a powerful tool that supports the developer in identifying runtime bottlenecks and reducing the
memory footprint. Profiling can be enabled on-demand without VM configuration changes and works reliably
even for very large Java applications.
The user interface – the SAP JVM Profiler – can be easily integrated into any Eclipse-based environment by
using the established plug-in installation system of the Eclipse platform. It allows you to connect to a running
SAP JVM and analyze collected profiling data in a graphical manner. The profiler plug-in provides a new
perspective similar to the debug and Java perspective.
A number of profiling traces can be enabled or disabled at any point in time, resulting in snapshots of profiling
information for the exact points of interest. The SAP JVM Profiler helps with the analysis of this information and
provides views of the collected data with comprehensive filtering and navigation facilities.
● Memory Allocation Analysis – investigates the memory consumption of your Java application and finds
allocation hotspots
● Performance Analysis – investigates the runtime performance of your application and finds expensive Java
methods
● Network Trace - analyzes the network traffic
● File I/O Trace - provides information about file operations
● Synchronization Trace - detects synchronization issues within your application
● Method Parameter Trace – yields detailed information about individual method calls including parameter
values and invocation counts
● Profiling Lifecycle Information – a lightweight monitoring trace for memory consumption, CPU load, and
GC events.
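As a concrete illustration of what Memory Allocation Analysis surfaces, consider the deliberately allocation-heavy method below. This is a made-up example, not SAP code: string concatenation in a loop allocates a new String (and an intermediate builder) on every iteration, which the profiler would report as an allocation hotspot; a StringBuilder reused across iterations avoids it.

```java
public class AllocationHotspot {
    // Allocation hotspot: each iteration creates a fresh String via
    // concatenation. Memory Allocation Analysis would attribute the
    // allocations to this method.
    static String concatInLoop(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i; // allocates a new String every pass
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(concatInLoop(5)); // prints "01234"
    }
}
```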
The SAP JVM provides comprehensive statistics about threads, memory consumption, garbage collection, and I/O activities. For solving issues with the SAP JVM, a number of traces can be enabled on demand. They provide additional information and insight into integral VM parts such as the class loading system and the garbage collector.
Further Information
Thread dumps not only contain a Java execution stack trace, but also information about monitors or locks,
consumed CPU and memory resources, I/O activities, and a description of communication partners (in the
case of network communication).
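A plain JDK approximation of the stack-trace part of such a dump can be produced in-process. This sketch shows only thread names, states, and frames; the CPU, memory, I/O, and communication-partner details described above are SAP JVM additions and are not available through this standard API.

```java
import java.util.Map;

public class ThreadDumpSketch {
    // Prints a minimal in-process "thread dump": for every live thread,
    // its name, state, and stack frames, similar in spirit to the Java
    // execution stack trace portion of an SAP JVM thread dump.
    public static void main(String[] args) {
        for (Map.Entry<Thread, StackTraceElement[]> entry
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            System.out.printf("\"%s\" state=%s%n", t.getName(), t.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```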
For more information about Java SE Technologies in SAP Products, see 2700275 .
Related Information
SAP Cloud Platform applications run on a modular and lightweight application runtime container where they
can use the platform services APIs and Java EE APIs according to standard patterns.
Depending on the runtime type and corresponding SDK you are using, SAP Cloud Platform provides the
following profiles of the application runtime container:
Profile: Java Web Tomcat 8 [page 1434]
● Provides support for: some of the standard Java EE 7 APIs (Servlet, JSP, EL, WebSocket)
● Supported Java versions: 8 (default); 7
● Use: if you need a simplified Java Web application runtime container based on Apache Tomcat 8.
Tip
We recommend using this runtime with Java 8. Java 7 for this runtime is deprecated.
Profile: Java EE 7 Web Profile TomEE 7 [page 1436]
● Provides support for: Java EE 7 Web Profile APIs
● Supported Java versions: 8 (default); 7
● Use: if you need an application runtime container together with all containers defined by the Java EE 7 Web Profile specification.
For the complete list of supported APIs, see Supported Java APIs [page 1439].
Restriction
Support of Java 6 in the Neo environment is discontinued. You cannot deploy or start applications on Java 6.
● If you redeploy your application or deploy a new one, you cannot use Java 6; use Java 7 instead.
● If you have a running application on Java 6, you have to redeploy it with Java 7.
Tip
You can still run applications compiled with Java 6 on Java 7, as Java 7 is backward compatible.
Java Web is a minimalistic application runtime container in SAP Cloud Platform that offers a subset of Java EE
standard APIs typical for a standalone Java Web Container.
Restriction
This runtime is deprecated. We recommend migrating to Java Web Tomcat 8. For more information, see
below.
In the general case, applications running on the Java Web runtime are compatible with the Java Web Tomcat 8 runtime and can be ported there without change. An exception is applications using the HTTP Destination API [page 253] (com.sap.core.connectivity.api.http.HttpDestination). This API is not available in the Java Web Tomcat 8 runtime. If you use that API in your applications, you need to migrate to the
ConnectivityConfiguration API [page 255]
(com.sap.core.connectivity.api.configuration.ConnectivityConfiguration).
Note
The basic difference between these APIs is that HttpDestination provides a pre-configured Apache
HttpClient (fixed, old version of the library), while ConnectivityConfiguration only provides means
to access the destination configuration, delegating the responsibility to configure the HttpClient to the
application developer. This allows you to use ConnectivityConfiguration with any Apache
HttpClient version or any other HTTP client API (such as HttpURLConnection, for example), while
HttpDestination can be used only with a single, outdated version of the Apache library.
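To make the division of responsibility concrete, here is a minimal sketch of the application-side pattern. A plain Map stands in for the properties that ConnectivityConfiguration would return for a destination (so the snippet compiles without the SAP API on the classpath), and the application configures its own HttpURLConnection from them. All names and property keys here are illustrative.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

public class DestinationClient {
    // In a real application the properties would come from the
    // ConnectivityConfiguration API; a plain Map stands in here so the
    // sketch is self-contained.
    static HttpURLConnection open(Map<String, String> destination) throws Exception {
        URL url = new URL(destination.get("URL"));
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // The application, not the API, owns the HTTP client settings:
        connection.setConnectTimeout(5000);
        connection.setRequestProperty("Accept", "application/json");
        return connection;
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not perform network I/O yet
        HttpURLConnection connection =
                open(Map.of("URL", "https://example.com/api"));
        System.out.println(connection.getURL());
    }
}
```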
Use (Deprecated)
This runtime container is suitable for SAP Cloud Platform applications that need a small, low memory
consuming container. The default supported Java version for Java Web is 7.
The current version 1 of the Java Web application runtime container (neo-java-web 1.x) provides
implementation for the following set of Java Specification Requests (JSRs):
Java Web enables you to create applications for SAP Cloud Platform using standard APIs suitable for a Web container, in addition to the SAP Cloud Platform services APIs.
For more information, see SAP Cloud Platform SDK Java API Documentation.
Related Information
Java Web Tomcat 7 (deprecated runtime) is a simplified edition of Java Web application runtime container
providing optimized performance particularly in the area of startup time and memory footprint.
Note
The Java Web Tomcat 7 runtime is deprecated. We recommend using Java Web Tomcat 8 instead (see Java Web Tomcat 8 [page 1434]). If you have applications running on Java Web Tomcat 7, you can migrate to
Java Web Tomcat 8 by simply redeploying (see deploy [page 2002]) and restarting them (see restart [page
2108]) using the console commands or the application deploy options in the cloud cockpit (see Deploy
Business Applications in the Neo Environment [page 1400]). You can check the application runtime by
using the status console command (see status [page 2133]) or by exploring the application information in
the cloud cockpit.
This container leverages Apache Tomcat 7 without modifications and adds a subset of the SAP Cloud Platform services client APIs. Applications running in the Apache Tomcat 7 container are portable to Java Web Tomcat 7. Existing applications running on the first edition of the Java Web application runtime container can run unmodified on Java Web Tomcat 7, provided they share the same set of enabled APIs.
The default supported Java version for Java Web Tomcat 7 is 7; you can also use Java version 8.
The current version of Java Web Tomcat 7 application runtime container (neo-java-web 2.x) provides
implementation for the following set of Java Specification Requests (JSRs) defined specifications:
Java Web Apache Tomcat 8 (Java Web Tomcat 8) is the next edition of the Java Web application runtime
container that has all characteristics and features of its predecessor Java Web Tomcat 7.
This container leverages the Apache Tomcat 8.5 Web container without modifications and also adds the already established set of SAP Cloud Platform services client APIs. Applications running in the Apache Tomcat 8.5 Web container are portable to Java Web Tomcat 8. Existing applications running in the Java Web and Java Web Tomcat 7 application runtime containers can run unmodified in Java Web Tomcat 8, provided they share the same set of enabled APIs.
Restriction
The default supported Java version for Java Web Tomcat 8 is 8; you can also use Java version 7.
The current version of Java Web Tomcat 8 application runtime container (neo-java-web 3.x) provides
implementation for the following set of Java Specification Requests (JSRs) defined specifications:
The following subset of APIs of SAP Cloud Platform services are available within Java Web Tomcat 8: document
service APIs, mail service APIs, connectivity service APIs (destination configuration and authentication header
provider), SAP HANA service and SAP ASE service JDBC APIs, and security APIs.
Deprecated. The Java EE 6 Web Profile application runtime container of SAP Cloud Platform is Java EE 6 Web
Profile certified.
Restriction
● If you are using web profile features, migrate to Java EE 7 Web Profile TomEE 7
In the general case, you can migrate your applications to the Java Web Tomcat 8 or Java EE 7 Web Profile TomEE 7 runtime without change. An exception is applications using the HTTP Destination API [page 253] (com.sap.core.connectivity.api.http.HttpDestination). This API is not available in the Java Web Tomcat 8 or Java EE 7 Web Profile TomEE 7 runtime. If you use that API in your applications, you need to
migrate to the ConnectivityConfiguration API [page 255]
(com.sap.core.connectivity.api.configuration.ConnectivityConfiguration).
Note
The basic difference between these APIs is that HttpDestination provides a pre-configured Apache
HttpClient (fixed, old version of the library), while ConnectivityConfiguration only provides means
to access the destination configuration, delegating the responsibility to configure the HttpClient to the
application developer. This allows you to use ConnectivityConfiguration with any Apache
HttpClient version or any other HTTP client API (such as HttpURLConnection, for example), while
HttpDestination can be used only with a single, outdated version of the Apache library.
Use (Deprecated)
The lightweight Web Profile of Java EE 6 is targeted at next-generation Web applications. Developers benefit
from productivity improvements with more annotations and less XML configuration, more Plain Old Java
Objects (POJOs), and simplified packaging.
The current version 2 of Java EE 6 Web Profile application runtime container (neo-javaee6-wp 2.x) provides
implementation for the following Java Specification Requests (JSRs):
Note
EJB Timer Service is also supported (although not part of the EJB Lite specification).
Contexts and Dependency Injection for Java EE platform 1.0 JSR - 299
For more information about the differences between EJB 3.1 and EJB 3.1 Lite, see the Java EE 6 specification,
JSR 318: Enterprise JavaBeans, section 21.1.
Development Process
The Java EE 6 Web Profile enables you to easily create your applications for SAP Cloud Platform.
For more information, see Using Java EE Web Profile Runtimes [page 1445].
Related Information
Java EE at a Glance
The Java EE 7 Web Profile TomEE 7 provides implementation of the Java EE 7 Web Profile specification.
The default supported Java version for Java EE 7 Web Profile TomEE is 8; you can also use Java version 7.
Java API for RESTful Web Services (JAX-RS) 2.0 JSR - 339
Note
EJB Timer Service is also supported (although not part of the EJB Lite specification).
Contexts and Dependency Injection for Java EE platform 1.1 JSR - 346
For more information about the differences between EJB 3.2 and EJB 3.2 Lite, see the Java EE 7 specification,
JSR 345: Enterprise JavaBeans, section 21.1.
Restriction
Java 7 for the Java EE 7 Web Profile TomEE 7 runtime is deprecated. If you are running Java applications on that
runtime with that Java version, migrate to Java 8 on the same runtime.
To do this, redeploy the applications or update their Java version. If you redeploy the applications, explicitly
specify Java 8 as the version (see Deploying and Updating Applications [page 1453]). In both cases,
restart the applications afterward (see restart command [page 2108]).
The Java EE 7 Web Profile TomEE 7 enables you to easily create your applications for SAP Cloud Platform.
For more information, see Using Java EE Web Profile Runtimes [page 1445].
Related Information
Java EE at a Glance
A compute unit is the virtualized hardware resources used by an SAP Cloud Platform application.
After being deployed to the cloud, the application is hosted on a compute unit with certain central processing
unit (CPU), main memory, disk space, and an installed OS.
SAP Cloud Platform offers four standard sizes of compute units according to the provided resources.
Depending on their needs, customers can choose from the following compute unit configurations:
The third column in the table shows what value of the -z or --size parameter you need to use for a console
command.
Note
For customer accounts, all sizes of compute units are available. During deployment, customers can specify the
compute unit on which they want their application to run.
Related Information
The basic tools of the SAP Cloud Platform development environment, the SAP Cloud Platform Tools, comprise
the SAP Cloud Platform Tools for Java and the SAP Cloud Platform SDK for Neo environment.
The focus of the SAP Cloud Platform Tools for Java is on the development process and enabling the use of the
Eclipse IDE for all necessary tasks: creating development projects, deploying applications locally and in the
cloud, and local debugging. It makes development for the platform convenient and straightforward and allows
short development turn-around times.
The SAP Cloud Platform SDK for Neo environment, on the other hand, contains everything you need to work
with the platform, including a local server runtime and a set of command line tools. The command line
capabilities enable development outside of the Eclipse IDE and allow modern build tools, such as Apache
Maven, to be used to professionally produce Web applications for the cloud. The command line is particularly
important for setting up and automating a headless continuous build and test process.
Related Information
When you develop applications that run on SAP Cloud Platform, you can rely on certain Java EE standard APIs.
These APIs are provided with the runtime of the platform. They are based on standards and are backward
compatible as defined in the Java EE specifications. Currently, you can make use of the APIs listed below:
● javax.activation
● javax.annotation
● javax.el
● javax.mail
● javax.persistence
● org.slf4j.Logger
● org.slf4j.LoggerFactory
If you are using the SAP Cloud Platform SDK for Java EE 6 Web Profile, you also have access to the following
Java EE APIs:
● javax.faces
● javax.validation
● javax.inject
● javax.ejb
● javax.interceptor
● javax.transaction
● javax.enterprise
● javax.decorator
The table below summarizes the Java Specification Requests (JSRs) supported in the two SAP Cloud Platform
SDKs for Java.
Supported Java EE 6 Specification SDK for Java Web SDK for Java EE 6 WebProfile
The table below summarizes the Java Specification Requests (JSRs) supported in the SAP Cloud Platform SDK
for Java Web Tomcat 8.
In addition to the standard APIs, SAP Cloud Platform offers platform-specific services that define their own
APIs that can be used from the SAP Cloud Platform SDK. The APIs of the platform-specific services are listed
in the table below.
The SAP Cloud Platform SDK contains a platform API folder for compiling your Web applications. It contains
the above content, that is, all standard and third-party API JARs (for legal reasons provided "as is"; they also
contain non-API content on which you should not rely) and the platform APIs of the SAP Cloud Platform
services.
You can add additional (pure Java) application programming frameworks or libraries and use them in your
applications. For example, you can include the Spring Framework in the application archive and use it in the
application. In such cases, the application must handle all dependencies on these additional frameworks or
libraries, and you are responsible for the whole assembly of these frameworks or libraries inside the
application itself.
SAP Cloud Platform also provides numerous other capabilities and APIs that might be accessible for
applications. However, you should rely only on the APIs listed above.
Related Information
You can develop applications for SAP Cloud Platform just like for any application server. SAP Cloud Platform
applications can be based on the Java EE Web application model. You can use programming logic that is well-
known to you, and benefit from the advantages of Java EE, which defines the application frontend. Inside, you
can embed the usage of the services provided by the platform.
Development Environment
SAP Cloud Platform development environment is designed and built to optimize the process of development
and deployment.
● SDK for Java Web - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL, Websocket)
● SDK for Java Web Tomcat 7 - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL,
Websocket)
● SDK for Java EE 6 Web Profile - certified to support Java EE 6 Web Profile APIs
● SDK for Java Web Tomcat 8 - provides support for some of the standard Java EE 7 APIs (Servlet, JSP, EL,
Websocket)
In the Eclipse IDE, create a simple HelloWorld application with basic functional logic wrapped in a Dynamic Web
Project and a Servlet. You can do this with both SDKs.
For more information, see Creating a Hello World Application [page 1416] or watch the Creating a HelloWorld
application video tutorial.
To learn how to enhance the HelloWorld application with role management, see the Managing Roles in SAP
Cloud Platform video tutorial.
SAP Cloud Platform is Java EE 6 Web Profile certified so you can extend the basic functionality of your
application with Java EE 6 Web Profile technologies. If you are working with the SDK for Java EE 6 Web Profile,
you can equip the basic application with additional Java EE features, such as EJB, CDI, JTA.
For more information, see Using Java EE Web Profile Runtimes [page 1445]
Create a fully-fledged application benefiting from the capabilities and services provided by SAP Cloud Platform.
In your application, you can choose to use:
● Authentication [page 2362] - by default, SAP Cloud Platform is configured to use SAP ID service as identity
provider (IdP), as specified in SAML 2.0. You can configure trust to your custom IdP, to provide access to
the cloud using your own user database.
● UI development toolkit for HTML5 (SAPUI5) - use the platform's official UI framework.
● Persistence Service [page 849] - provide relational persistence with JPA and JDBC via our persistence
service.
● Connectivity Service [page 251] - use it to connect Web applications to the Internet, make on-demand to on-
premise connections to Java and ABAP on-premise systems, and configure destinations to send and fetch
e-mail.
Deploy
First, deploy and test the ready application on the local runtime and then make it available on SAP Cloud
Platform.
For more information, see Deploying and Updating Applications [page 1453]
You can speed up your development by applying and activating new changes on the already running
application. Use the hot-update command.
Manage
Manage all applications deployed in your account from a single dedicated user interface - SAP Cloud Platform
cockpit.
For more information, see SAP Cloud Platform Cockpit [page 1006]
Monitor
This tutorial demonstrates creating a simple Hello World Java application with a Java bean using the Java EE 6
Web Profile or Java EE 7 Web Profile TomEE 7.
Prerequisites
● You have installed SAP Cloud Platform tools. Make sure you also download the SDK for Java EE 6 Web
Profile or SDK for Java EE 7 Web Profile TomEE 7. For more information, see Setting Up the Tools and SDK
[page 1402].
● If you have a previously installed version of SAP Cloud Platform Tools, make sure you update them to the
latest version. For more information, see Updating the Tools and SDK [page 1413].
● The SDK provides all required libraries. If you get an error when importing a library, make sure you have set
up the SAP Cloud Platform Tools and the Web project correctly.
Procedure
The following screenshot illustrates the required project configuration for Java EE 6 Web Profile runtime:
5. Choose Finish.
For more information, see Creating a Hello World Application [page 1416].
1. On the HelloWorld project node, open the context menu and choose New Servlet . Window Create
Servlet opens.
2. Enter hello as the Java package and HelloWorldServlet as the class name. Choose Next.
3. In the URL mappings field, select /HelloWorldServlet and choose Edit.
4. In the Pattern field, replace the current value with just "/" and choose OK. In this way, the servlet will be
mapped as a welcome page for the application.
5. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
6. Change the doGet(…) method so that it contains:
response.getWriter().println("Hello World!");
For more information, see Creating a Hello World Application [page 1416].
Create a JSP
1. On the HelloWorld project node, open the context menu and choose New JSP file . Window New JSP
file opens.
2. Enter the name of your JSP file and choose Finish.
1. On the HelloWorld project node, choose File New Other EJB Session Bean . Choose Next.
2. In the Create EJB session bean wizard, enter test as the Java package and HelloWorldBean as the name of
your new class. Choose Finish.
3. Implement a simple public method sayHello that returns a greeting string. Save the project.
package test;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;

/**
 * Session Bean implementation class HelloWorldBean
 */
@Stateless
@LocalBean
public class HelloWorldBean {

    public String sayHello() {
        return "Hello World from HelloWorldBean!";
    }
}
4. To make the bean reachable under java:comp/env, inject it in the HelloWorldServlet:
@EJB
private HelloWorldBean helloWorldBean;
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<%@ page import="javax.naming.InitialContext"%>
<%@ page import="test.HelloWorldBean"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://
www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
</head>
<body>
<%
    try {
        InitialContext ic = new InitialContext();
        HelloWorldBean h = (HelloWorldBean) ic.lookup("java:comp/env/hello.HelloWorldServlet/helloWorldBean");
        out.println(h.sayHello());
    } catch (Exception e) {
        out.println(e.getMessage());
    }
%>
</body>
</html>
You can test the application on the local runtime and then deploy it on SAP Cloud Platform.
For more information, see Deploying an Application on SAP HANA Cloud [page 1453].
You can now use JPA together with EJB to persist data in your application.
For more information, see Tutorial: Adding Container-Managed Persistence with JPA (SDK for Java EE 6 Web
Profile) [page 1503]
Overview
SAP Cloud Platform runtime sets several system environment variables that identify the runtime environment
of the application. Using them, an application can get information about its application name, subaccount and
URL, as well as information about the region host it is deployed on and region-specific parameters. All SAP
Cloud Platform-specific environment variable names start with the common prefix HC_.
The following SAP Cloud Platform environment variables are set to the runtime environment of the application:
● HC_LANDSCAPE: production / trial. Type of the region host where the application is deployed.
SAP Cloud Platform environment variables are accessed as standard system environment variables of the Java
process - for example via System.getenv("...").
Note
Environment variables are not set when deploying locally with the console client or Eclipse IDE.
Example
<html>
<head>
<title>Display SAP Cloud Platform Environment Platform variables</title>
</head>
<body>
<p>Application Name: <%= System.getenv("HC_APPLICATION") %></p>
</body>
</html>
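The same variables can also be read in plain Java code through System.getenv. Because they are not set when running locally (see the note above), a fallback value keeps local tests working. A minimal sketch:

```java
public class LandscapeInfo {

    // HC_* variables are set by the SAP Cloud Platform runtime; in a local
    // server they are absent, so a fallback is returned instead of null.
    static String env(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        System.out.println("Application: " + env("HC_APPLICATION", "local"));
        System.out.println("Landscape:   " + env("HC_LANDSCAPE", "local"));
    }
}
```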
Prerequisites
In the Eclipse IDE you have developed or imported a Java application that is running on a cloud server.
Context
In the Server editor of your local Eclipse IDE, you can use the Advanced tab and the Environment Variables
table to add, edit, select and remove environment variables for the cloud virtual machine.
Procedure
1. In the Eclipse IDE go to the Servers view and select the cloud server you want to configure.
2. Double click on it to open the Server Editor.
3. Open the Advanced tab.
4. (Optional) Add an environment variable.
Note
The changes made by someone else will be loaded once you reopen the editor.
Deploying Applications
After you have created your Java application, you need to deploy and run it on SAP Cloud Platform. We
recommend that you first deploy and test your application on the local runtime before deploying it on the cloud.
Use the tool that best fits your scenario:
● Eclipse IDE: Deploy Locally from Eclipse IDE [page 1468]. You have developed your application using SAP Cloud Platform Tools in the Eclipse IDE.
● Cockpit: Deploy on the Cloud with the Cockpit [page 1477]. You want to deploy an application in the form of a WAR file.
● Lifecycle Management API: Deploy an Application [page 1456]. You want to deploy an application in the form of one or more WAR files.
Application properties are configured during deployment with a set of parameters. To update these properties,
use one of the following approaches:
● Console Client: deploy [page 2002]. Deploy the application with new WAR file(s) and make changes to the configuration parameters. Command: deploy
● Console Client: set-application-property [page 2121]. Change some of the application properties you defined during deployment without redeploying the application binaries. Command: set-application-property
● Cockpit: Deploy on the Cloud with the Cockpit [page 1477]. Update the application with a new WAR file or make changes to the configuration parameters.
If you want to quickly see your changes while developing an application, use the following approaches:
● Eclipse IDE: Deploy on the Cloud from Eclipse IDE [page 1469]. Republish the application. The cloud server is not restarted, and only the application binaries are updated.
● Console Client: hot-update [page 2052]. Apply and activate changes. Use the command to speed up development and not for updating productive applications. Command: hot-update
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches:
● Zero Downtime: Update Applications with Zero Downtime [page 2188], rolling-update [page 2115]. Use when the new application version is backward compatible with the old version. Deploy a new version of the application and disable and enable processes in a rolling manner, or do it in one go with the rolling-update command.
● Planned Downtime (Maintenance Mode): Enable Maintenance Mode for Planned Downtimes [page 2191]. Use when the new application version is backward incompatible. Enable maintenance mode for the time of the planned downtime.
● Soft Shutdown: Perform Soft Shutdown [page 2193]. Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut down the application or processes gracefully.
Related Information
The lifecycle REST API provides functionality for Java application lifecycle management.
This tutorial provides information about the most common use cases for Java applications and the operations
that are included in each one:
● Basic authentication
You provide username and password.
● OAuth authentication and authorization
The REST API is protected with OAuth 2.0 client credentials.
Prerequisites
● For basic authentication, assign the manageJavaApplications scope to the platform role used in the
subaccount. See Platform Scopes [page 1910].
● For OAuth authentication and authorization, create an OAuth client and obtain an access token to call the
API methods. See Using Platform APIs [page 1737] as you add the Lifecycle Management scopes for the
Platform API OAuth client.
Prerequisites
Context
For the purposes of this tutorial, we will deploy three WAR files: app.war, example.war, and demo.war.
Procedure
Client Request:
GET https://api.hana.ondemand.com/lifecycle/v1/csrf
Request Headers:
X-CSRF-Token: Fetch
Authorization: Basic <base64-encoded user:password>
For OAuth Platform API authentication and authorization, the last line looks like Authorization:
Bearer a9cd683534471f499b630bb97b3d3fc, where a9cd683534471f499b630bb97b3d3fc has
been retrieved with POST <host>/oauth2/apitoken/v1?grant_type=client_credentials.
For more information, see Using Platform APIs [page 1737].
Server Response:
Response Status: 200
Response Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Note
After a while the CSRF token expires. If you are using an invalid CSRF token, you will receive an
error message similar to this one: HTTP Status 403 - CSRF token validation failed! If
this happens, get a new token.
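The token-expiry handling described in this note can be factored into a small retry helper. This is a sketch with stand-ins for the real HTTP calls: the token supplier and the send function below are placeholders, not SAP APIs.

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class CsrfRetry {

    // Sends a request with the current CSRF token. On a 403 (token validation
    // failed, typically because the token expired), fetches a fresh token once
    // and retries. fetchToken stands in for GET /lifecycle/v1/csrf; send stands
    // in for the actual modifying request.
    static int sendWithCsrfRetry(Supplier<String> fetchToken, Function<String, Integer> send) {
        String token = fetchToken.get();
        int status = send.apply(token);
        if (status == 403) {
            token = fetchToken.get();  // token expired: get a new one
            status = send.apply(token);
        }
        return status;
    }

    public static void main(String[] args) {
        // Simulated server that rejects the first (expired) token and accepts the next.
        final int[] calls = {0};
        int status = sendWithCsrfRetry(
            () -> "token-" + (++calls[0]),              // each fetch returns a new token
            token -> token.equals("token-1") ? 403 : 200);
        System.out.println(status); // 200
    }
}
```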
2. Create an application.
Send a POST Applications request:
Client Request:
POST: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps
Request Body:
{
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
Server Response:
Response Status: 201
Response Body:
{
"metadata": {
"url": "/lifecycle/v1/accounts/test/apps/myapp"
},
"entity": {
"accountName": "test",
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
}
Tip
You can add other properties to the body of the request. The properties in this example are the
minimum requirements that let you execute the request successfully.
Client Request:
Server Response:
Response Status: 201
Response Body:
{
"metadata": {
"url": "/lifecycle/v1/accounts/test/apps/myapp/
binaries"
},
"entity": {
"totalSize": 0,
"status": "UPLOADING",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"status": "UNAVAILABLE",
"entries": []
}, {
"path": "example.war",
"pathGuid": "ZXhhbXBsZS53YXI
\u003d",
"status": "UNAVAILABLE",
"entries": []
}, {
"path": "demo.war",
"pathGuid": "ZGVtby53YXI\u003d",
"status": "UNAVAILABLE",
"entries": []
}]
}
}
This request describes the metadata of the binaries and prepares them for their upload.
4. Upload the binaries.
Note
You must start uploading the binaries within the next 2 minutes. Otherwise, the operation is canceled, you
have to deploy the application again, and you receive the following response:
Server Response:
Response Status: 404
Response Body:
{
"code": "98a59939-0e9a-430c-9ec3-c094a4d8d78d",
"description": "Application operation is not found"
}
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/YXBwLndhcg==
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select app.war.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/ZXhhbXBsZS53YXI=
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select example.war.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/ZGVtby53YXI=
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select demo.war.
If the operation is successful, the response for all three requests should return 200 without a body.
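As the values in this tutorial show, the last URL segment of each PUT request is the pathGuid from step 3, which is the Base64 encoding of the file name. A small sketch to derive it for your own files:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PathGuid {

    // The pathGuid in the binaries response is the Base64 encoding of the
    // file name; it is used as the last URL segment of the upload request.
    static String pathGuid(String fileName) {
        return Base64.getEncoder()
            .encodeToString(fileName.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(pathGuid("app.war"));      // YXBwLndhcg==
        System.out.println(pathGuid("example.war"));  // ZXhhbXBsZS53YXI=
        System.out.println(pathGuid("demo.war"));     // ZGVtby53YXI=
    }
}
```

Note that the JSON responses escape the trailing "=" padding characters as \u003d.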
5. List the binaries.
Send a GET Binaries request every 5-10 seconds:
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries
Repeat the process and observe the overall status until it changes to FAILED or DEPLOYED.
The DEPLOYED status shows that the deployment operation was successful and you can now start
your application:
Server Response:
Response Status: 200
Response Body:
{
"metadata": {},
"entity": {
"totalSize": 57857,
"status": "DEPLOYED",
"warnings": "Warning: No compute unit size was
specified for the application so size was set automatically to \u0027lite
\u0027.",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"size": 17194,
"status": "AVAILABLE",
"hash":
"6c8b99a72d5b42db31cc576273260f9c2f316c1ac7dcc4a8c845412e51d420f0dcf53f4035745
e303cdd43bf73974fada19839920d845010013bf422ae5bc4dd",
The binaries are now officially DEPLOYED. You can also see that each binary has status AVAILABLE.
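The poll-until-terminal loop from step 5 can be sketched as follows. The status source is a stand-in for a real GET Binaries call, simulated here with canned responses:

```java
import java.util.List;
import java.util.function.Supplier;

public class DeployPoller {

    // Polls a status source until it reports DEPLOYED or FAILED, or the
    // attempts run out. statusSource stands in for sending a GET Binaries
    // request and extracting the "status" field from the response.
    static String waitForTerminalStatus(Supplier<String> statusSource, int maxAttempts) {
        String status = "UNKNOWN";
        for (int i = 0; i < maxAttempts; i++) {
            status = statusSource.get();
            if ("DEPLOYED".equals(status) || "FAILED".equals(status)) {
                return status;
            }
            // In a real client, sleep 5-10 seconds between GET Binaries calls here.
        }
        return status;
    }

    public static void main(String[] args) {
        var canned = List.of("UPLOADING", "UPLOADING", "DEPLOYED").iterator();
        System.out.println(waitForTerminalStatus(canned::next, 10)); // DEPLOYED
    }
}
```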
Prerequisites
Context
In this tutorial, you will deploy an application from an existing application by specifying the source account and
application as query parameters.
Note
With platform OAuth, the copy operation is applicable only to applications in the same account.
Client Request:
GET https://api.hana.ondemand.com/lifecycle/v1/csrf
Request Headers:
X-CSRF-Token: Fetch
Authorization: Basic UDE5NDE3OTM5NDg6RnJhZ28jNjQ3Ng==
For OAuth Platform API authentication and authorization, the last line looks like Authorization:
Bearer a9cd683534471f499b630bb97b3d3fc, where a9cd683534471f499b630bb97b3d3fc has
been retrieved with POST <host>/oauth2/apitoken/v1?grant_type=client_credentials.
For more information, see Using Platform APIs [page 1737].
Server Response:
Response Status: 200
Response Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Note
After a while the CSRF token expires. If you are using an invalid CSRF token, you will receive an
error message similar to this one: HTTP Status 403 - CSRF token validation failed! If
this happens, get a new token.
Client Request:
POST: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps?
operation=copy&sourceAccount=sourcesubaccount&sourceApplication=sourceapp
Request Body:
{
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
Server Response:
Response Status: 201
Response Body:
{
"metadata": {
"url": "/lifecycle/v1/accounts/test/apps/myapp"
},
"entity": {
"accountName": "test",
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
}
Tip
The body is optional for this request. If you do not specify a body, the REST API takes the parameters
from the source application.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries
Repeat the process and observe the overall status until it changes to FAILED or DEPLOYED.
The DEPLOYED status shows that the copy operation was successful and you can now start your
application:
Server Response:
Response Status: 200
Response Body:
{
"metadata": {},
"entity": {
"totalSize": 57857,
"status": "DEPLOYED",
"warnings": "Warning: No compute unit size was
specified for the application so size was set automatically to \u0027lite
\u0027.",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"size": 17194,
"status": "AVAILABLE",
"hash":
"6c8b99a72d5b42db31cc576273260f9c2f316c1ac7dcc4a8c845412e51d420f0dcf53f4035745
e303cdd43bf73974fada19839920d845010013bf422ae5bc4dd",
"entries": [{...}]
}, {
"path": "example.war",
"pathGuid": "ZXhhbXBsZS53YXI
\u003d",
"size": 37615,
"status": "AVAILABLE",
"hash":
"7b2a80771f79d0740f629bdaaf019c550b10df55eec8789447ec02fa93e7fdb1f6f47f4864769
f4a4f027a4bca8bfa1ea45a83c5fb38ae539b397abe9fe66be1",
"entries": [{...}]
}, {
"path": "demo.war",
"pathGuid": "ZGVtby53YXI\u003d",
"size": 3048,
"status": "AVAILABLE",
"hash":
"8c4b39bfe3a034d64e8592e7cf638ac4b5985c5f9a4f691270d040b8f15dc8edbb6284bd5431f
1a240abaad3b2288411563b784b691c35ca677ae5e9ced565a9",
"entries": [{...}]
}]
}
}
Prerequisites
Context
You can validate the content of an application by verifying the hash values in a binaries response. For example,
you verify changes to an application by comparing hash values of deployed binaries with the hash values of
modified binaries. You can use this verification to be sure that you have the correct binaries for deploy or
update in a copy operation.
Procedure
1. Get a CSRF token. If you try to start your application long after its deployment, the token has most
probably expired.
Send a GET CSRF Protection request.
Note
If your session is still actively running, you do not have to request a new CSRF token. In this case, we
will use the CSRF token generated during the deployment scenario.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries
Repeat the process and observe the overall status until it changes to FAILED or DEPLOYED.
The DEPLOYED status shows that the deployment operation was successful and you can now start
your application:
Server Response:
Response Status: 200
Response Body:
{
The binaries are now officially DEPLOYED. You can also see that each binary has status AVAILABLE.
3. Use the hash values of the binaries to compare with those of previous binaries before you start another
operation.
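The 128-character hex value in the hash field has the length of a SHA-512 digest. Assuming that algorithm (this section does not name it, so treat this as an assumption), a local hash to compare against the response can be computed like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BinaryHash {

    // Hex-encoded SHA-512 digest of the given content. SHA-512 is an
    // assumption based on the 128-character hash length in the response.
    static String sha512Hex(byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(content);
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Compare this value against the "hash" field of the binaries response.
        String local = sha512Hex("abc".getBytes(StandardCharsets.UTF_8));
        System.out.println(local.length()); // 128
    }
}
```

In practice you would read the WAR file bytes (for example with java.nio.file.Files.readAllBytes) and compare the result with the hash of the deployed binary.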
Procedure
1. Get a CSRF token. If you try to start your application long after its deployment, the token has most
probably expired.
Send a GET CSRF Protection request.
Note
If your session is still actively running, you do not have to request a new CSRF token. In this case, we
will use the CSRF token generated during the deployment scenario.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/json
Request Body:
{
"applicationState": "STARTED"
}
Server Response:
Response Status: 200
Response Body:
{
"metadata": {
"message": "Triggered start of application
process.",
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1501825923105,
"updatedAt": 1501827428000
},
"entity": {
"applicationState": "STARTING",
"processes": [{
"processId":
"dc1460001710d282b42b7331f1831ec5ad9c1924",
"status": "PENDING",
"lastStatusChange": 0,
"availabilityZone": "",
"computeUnitSize": "LITE"
}],
"warningMessage": "Triggered start of
application process."
}
}
The applicationState value will change from STARTING (or PENDING) to STARTED.
3. Make sure the application is working properly.
Send a GET Application State request to verify whether your application is started. Send this request every
5-10 seconds and check the applicationState property in the response. If that property shows the STARTED
value, then you have successfully started your application:
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Server Response:
Response Body:
{
"metadata": {
"domain": "hana.ondemand.com",
"aliases": "[\"/DemoApp\",\"example\",\"/\"]",
"accessPoints": ["https://
myapptest.int.hana.ondemand.com", "https://myapptest.hana.ondemand.com"],
"runtime": {
"id": "neo-java-web",
"state": "recommended",
"expDate": "1541203200000",
"displayName": "Java Web",
"relDate": "1501718400000",
Procedure
1. Get a CSRF token. If you try to start your application long after its deployment, the token has most
probably expired.
Send a GET CSRF Protection request.
Note
If your session is still actively running, you don't have to request a new CSRF token. In this case, we will
use the CSRF token generated during the deployment scenario.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Server Response:
Response Status: 200
Response Body:
{
"metadata": {
"message": "Triggered stop of application
process.",
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1501825923105,
"updatedAt": 1501827428000
},
"entity": {
"applicationState": "STOPPING",
"processes": [{
"processId":
"dc1460001710d282b42b7331f1831ec5ad9c1924",
"status": "PENDING",
"lastStatusChange": 0,
"availabilityZone": "",
"computeUnitSize": "LITE"
}],
"warningMessage": "Triggered stop of
application process."
}
}
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Server Response:
Response Body:
{
"metadata": {
"aliases": "[]",
"runtime": {
"id": "neo-java-web",
"state": "recommended",
"expDate": "1541203200000",
"displayName": "Java Web",
"relDate": "1501718400000",
"version": "1.133.3"
},
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1502274734263,
"updatedAt": 1502274835000
},
"entity": {
"applicationState": "STOPPED",
"processes": []
}
}
Follow the steps below to deploy your application on a local SAP Cloud Platform server.
Prerequisites
● You have set up your runtime environment in Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1408].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see
Developing Java Applications [page 1442] or Import Samples as Eclipse Projects [page 1423].
Procedure
1. Open the servlet in the Java Editor and from the context menu, choose Run As Run on Server .
2. Window Run On Server opens. Make sure that the Manually define a new server option is selected.
3. Expand the SAP node and, as a server type, choose between:
○ Java Web Server
○ Java Web Tomcat 7 Server
○ Java Web Tomcat 8 Server
○ Java EE 6 Web Profile Server
4. Choose Finish.
5. The local runtime starts up in the background and your application is installed, started and ready to serve
requests.
Note
If this is the first server you run in your IDE workspace, a folder Servers is created and appears in the
Project Explorer navigation tree. It contains configurable folders and files you can use, for example, to
change your HTTP or JMX port.
6. The Internal Web Browser opens in the editor area and shows the application output.
7. Optional: If you try to delete a server with an application running on it, a dialog appears allowing you to
choose whether to only undeploy the application, or to completely delete it together with its configuration.
After you have deployed your application, you can additionally check your server information. In the Servers
view, double-click on the local server and open the Overview tab. Depending on your local runtime, the following
data is available:
● If you have run your application in Java Web or Java EE 6 Web Profile runtime, you see the standard
server data (General Info, Publishing, Timeouts, Ports).
● If you have run your application in Java Web Tomcat 7 or Java Web Tomcat 8 runtime, you see some
additional Tomcat sections, default Tomcat ports, and an extra Modules page, which shows a list of all
applications deployed by you.
Related Information
Prerequisites
● You have set up your runtime environment in the Eclipse IDE. For more information, see Set Up the
Runtime Environment [page 1408].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1442] or Import Samples as Eclipse Projects [page 1423].
● You have an active subaccount. For more information, see Get a Trial Account.
Procedure
1. Open the servlet in the Java editor and, from the context menu, choose Run As > Run on Server.
2. The Run On Server dialog box appears. Make sure that the Manually define a new server option is selected.
Note
○ If you have previously entered a subaccount and user name for your region host, these names are suggested in dropdown lists.
○ A dropdown list is also displayed for previously entered region hosts.
○ If you select the Save password box, the entered password for a given user name will be
remembered and kept in the secure store.
9. Choose Finish. This triggers the publishing of the application on SAP Cloud Platform.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. If you want to
deploy several applications, deploy each of them on a separate application process.
Next Steps
● If you need to redeploy your application during development, choosing Run on Server or Publish does not restart the cloud server; only the binaries of the application are updated.
● If you try to delete a server with an application running on it, a dialog appears allowing you to choose
whether to only undeploy the application, or to completely delete it together with its configuration.
● If you have made changes in your deployed application and you want them to be applied faster, without uploading the entire set of files to the cloud, proceed as follows:
1. In the Servers view, double-click on the cloud server.
2. Open the Overview tab.
3. In the Publishing section, select Publish changes only (delta deploy).
You can see all applications deployed in your subaccount within the Eclipse Tools, or change the current
runtime. For more information, see Configuring Advanced Configurations [page 1471].
Related Information
SAP Cloud Platform Tools provide options for advanced server and application configurations from the Eclipse
IDE, as well as direct reference to the cockpit UI.
You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1442] or Import Samples as Eclipse Projects [page 1423].
Alternatives
There are alternative ways to open the cockpit (1) and the application URLs (2).
1. In the Servers view, open the context menu and choose Show In > Cockpit.
2. In the Servers view, expand the cloud server node and, from the context menu of the relevant application, choose Application URL > Open. The application opens in a new browser tab.
Tip
● If the application is published on the cloud server, besides the Open option you can also choose Copy to
Clipboard, which only copies the application URL.
● If the application has not been published but only added to the server, Copy to Clipboard will be
disabled. The Open option though will display a dialog which allows you to publish and then open the
application in a browser.
● If the cloud server is not in Started status, both Application URL options will be disabled.
After you have deployed your application, you can check and also change the server runtime. Proceed as
follows:
Note
When you change the Runtime value so that it differs from the one in Runtime in use, after saving your
change, a link appears prompting you to republish the server.
From the server editor, you can configure additional application parameters, such as compute unit size, JVM
arguments, and others.
Note
If you make your configurations on a started server, the changes take effect after a server restart. You can use the Restart link to apply the changes.
Related Information
The console client allows you to install a server runtime in a local folder and use it to deploy your application.
Procedure
neo install-local
This installs a server runtime in the default local server directory <SDK installation folder>/
server. To use an alternative directory, enter the command together with the following optional command
argument:
3. To start the local server, enter the following command and press ENTER :
neo start-local
This starts a local server instance in the default local server directory <SDK installation folder>/
server. Again, use the following optional command argument to specify another directory:
4. To deploy your application, enter the following command as shown in the example below and press ENTER :
This deploys the WAR file on the local server instance. If necessary, specify another directory as in step 3.
5. To check that your application is running, open a browser and enter the URL, for example:
http://localhost:8080/hello-world
Note
The HTTP port is normally 8080. However, the exact port configurations used for your local server,
including the HTTP port, are displayed on the console screen when you install and start the local server.
6. To stop the local server instance, enter the following command from the <SDK installation folder>/
tools folder and press ENTER :
neo stop-local
Related Information
Deploying an application publishes it to SAP Cloud Platform. During deployment, you can define various specifics of the deployed application using the optional parameters of the deploy command.
Prerequisites
● You have downloaded and configured the SAP Cloud Platform console client. For more information, see Set Up the Console Client [page 1412].
● Depending on your subaccount type, deploy the application on the respective region host. For more information, see Regions [page 11].
Procedure
1. In the opened command line console, execute the neo deploy command with the appropriate parameters. You can define the parameters of commands directly in the command line, as in the example below, or in the properties file. For more information, see Using the Console Client [page 1928].
2. Enter your password if requested.
3. Press ENTER and deployment of your application will start. If deployment fails, check if you have defined
the parameters correctly.
Note
The size of an application deployed on SAP Cloud Platform can be up to 1.5 GB. If the application is
packaged as a WAR file, the size of the unzipped content is taken into account.
Example
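The original example content is not preserved here. A hypothetical invocation might look as follows; every value (host, subaccount, application name, WAR path, user) is a placeholder you would replace with your own:

```shell
# Hypothetical invocation -- all values are placeholders.
# Parameters can be passed directly on the command line:
neo deploy --host hana.ondemand.com --account mysubaccount \
    --application myapp --source samples/deploy_war/example.war --user p1234567

# ...or collected in a properties file and passed as a single argument:
#   example.properties:
#     host=hana.ondemand.com
#     account=mysubaccount
#     application=myapp
#     source=samples/deploy_war/example.war
#     user=p1234567
neo deploy example.properties
```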
To make your deployed application available for requests, you need to start it by executing the neo start
command.
Then, you can manage the application lifecycle (check the status; stop; restart; undeploy) using dedicated
console client commands.
Related Information
By using the delta deployment option, you can apply changes in a deployed application faster, without uploading the entire set of files to SAP Cloud Platform.
Context
The delta parameter allows you to deploy only the changes between the provided source and the previously deployed content: new content is added, missing content is deleted, and existing content is updated if there are changes. The delta parameter is available in two commands: deploy and hot-update.
Note
Use it to save time for development purposes only. For updating productive applications, deploy the whole
application.
To upload only the changed files from the application WARs, use one of the two approaches:
Note
With the source parameter, provide the whole set of files of your application, not only the changed ones.
Related Information
The cockpit allows you to deploy Java applications as WAR files and supports a number of deployment options
for configuring the application.
Procedure
○ Start: Start the application to activate its URL and make the application available to your end users.
○ Close: Simply close the dialog box if you do not want to start the application immediately.
Results
You can update or redeploy the application whenever required. To do this, choose Update application, which opens the same dialog box in update mode. You can update the application with a new WAR file or change the configuration parameters.
To change the name of a deployed application, deploy a new application under the desired name, and delete
the application whose name you want to change.
Related Information
After you have created a Web application and tested it locally, you may want to inspect its runtime behavior and
state by debugging the application in SAP Cloud Platform. The local and the cloud scenarios are analogous.
Context
The debugger enables you to detect and diagnose errors in your application. It allows you to control the
execution of your program by setting breakpoints, suspending threads, stepping through the code, and
examining the contents of the variables. You can debug a servlet or a JSP file on an SAP Cloud Platform server
without losing the state of your application.
Currently, it is only possible to debug Web applications in SAP Cloud Platform that have exactly one
application process (node).
Tasks
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform local runtime in the
Eclipse IDE.
Prerequisites
You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1442].
Procedure
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform depending on whether
you have deployed it in the Eclipse IDE or in the console client.
Prerequisites
● You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1442].
● You have deployed your Web application either using the Eclipse IDE or via the console client. For more
information, see Deploying and Updating Applications [page 1453].
Note
Debugging can be enabled if there is only one VM started for the requested account or application.
Procedure
Note
Since cloud servers are running on SAP JVM, switching modes does not require restart and happens in
real time.
1. Deploy your Web application in the console client and start it.
2. Go to the Eclipse IDE, open the Servers view, and choose New > Server.
3. Choose SAP > SAP Cloud Platform.
4. Enter the correct region host, according to your location. (For more information, see Regions [page 11].)
5. Edit the server name, if necessary, and choose Next.
6. On the SAP Cloud Platform Application page of the wizard, provide the same application data that you previously entered in the console client.
7. Choose Finish.
8. A new server is created and attached to your application. It should be in Started mode if your application
is started.
9. From the server's context menu, choose Restart in Debug. (This should not restart the application.)
Note
● If you have deployed an application on a running server, we recommend that you do not use Debug on Server or Run on Server, as this will republish (redeploy) your application.
● Also, bear in mind that if you have deployed two or more WAR files, only the debugged one will remain after that.
● If the sources are not attached (for example, the application is deployed from the console client, or you need to attach additional sources), you may attach them as described here.
Related Information
In the Neo environment of SAP Cloud Platform, you can develop and run multitenant (tenant-aware)
applications. These applications run on a shared compute unit that can be used by multiple consumers
(tenants). Each consumer accesses the application through a dedicated URL.
You can read about the specifics of each platform service with regard to multitenancy in the respective section
below:
● Isolate data
● Save resources by sharing them among tenants
● Perform updates efficiently, that is, in one step
Currently, you can trigger the subscription via the console client. For more information, see Providing Java
Multitenant Applications to Tenants for Testing [page 1064].
When an application is accessed via a consumer specific URL, the application environment is able to identify
the current consumer. The application developer can use the tenant context API to retrieve and distinguish the
tenant ID, which is the unique ID of the consumer. When developing tenant-aware applications, data isolation
for different consumers is essential. It can be achieved by distinguishing the requests based on the tenant ID.
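The isolation pattern described above can be sketched in plain Java. The TenantStore class below is purely illustrative (it is not part of any SAP API): it keys all data by a tenant ID that, in a real application, you would obtain from com.sap.cloud.account.TenantContext.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: partition in-memory data by tenant ID so that one
// consumer can never see another consumer's entries. In a real application,
// the tenant ID would come from com.sap.cloud.account.TenantContext.
public class TenantStore {
    // One inner map per tenant; the outer key is the tenant ID.
    private final Map<String, Map<String, String>> dataByTenant = new ConcurrentHashMap<>();

    public void put(String tenantId, String key, String value) {
        dataByTenant.computeIfAbsent(tenantId, id -> new ConcurrentHashMap<>()).put(key, value);
    }

    public String get(String tenantId, String key) {
        Map<String, String> tenantData = dataByTenant.get(tenantId);
        return tenantData == null ? null : tenantData.get(key);
    }

    public static void main(String[] args) {
        TenantStore store = new TenantStore();
        store.put("tenant-a", "greeting", "Hello A");
        store.put("tenant-b", "greeting", "Hello B");
        // Each tenant only sees its own entry.
        System.out.println(store.get("tenant-a", "greeting")); // Hello A
        System.out.println(store.get("tenant-b", "greeting")); // Hello B
    }
}
```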
There are also some specifics in the usage of different services when you develop your multitenant application.
● Shared in-memory data such as Java static fields will be available to all tenants.
● Avoid any possibility that an application user can execute custom code in the application JVM, as this may
give them access to other tenants' data.
● Avoid any possibility that an application user can access a file system, as this may give them access to
other tenants' data.
Connectivity Service
For more information, see Multitenancy in the Connectivity Service [page 248].
Multitenant applications on SAP Cloud Platform have two approaches available to separate the data of the
different consumers:
Document Service
The document service automatically separates the documents according to the current consumer of the
application. When an application connects to a document repository, the document service client
automatically propagates the current consumer of the application to the document service. The document
service uses this information to separate the documents within the repository. If an application wants to
connect to the data of a dedicated consumer instead of the current consumer (for example in a background
process), the application can specify the tenant ID of the corresponding consumer when connecting to the
document repository.
Keystore Service
The Keystore Service provides a repository for cryptographic keys and certificates to tenant-aware applications
hosted on SAP Cloud Platform. Because the tenant defines a specific configuration of an application, you can
configure an application to use different keys and certificates for different tenants.
For more information about the Keystore Service, see Keys and Certificates [page 2461].
Access rights for tenant-aware applications are usually maintained by the application consumer, not by the
application provider. An application provider may predefine roles in the web.xml when developing the
application. By default, predefined roles are shared with all application consumers, but could also be made
visible only to the provider subaccount. Once a consumer is subscribed to this application, shared predefined
roles become visible in the cockpit of the application consumer. Then, the application consumer can assign
users to these roles to give them access to the provider application. In addition, application consumer
subaccounts can add their own custom roles to the subscribed application. Custom roles are visible only within
the application consumer subaccount where they are created.
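As a sketch, a provider-predefined role of the kind described above is declared in the web.xml with the standard Java EE security-role element; the role name used here is a placeholder, not taken from any sample:

```xml
<!-- Hypothetical predefined role; the role name is a placeholder. -->
<security-role>
    <role-name>Employee</role-name>
</security-role>
```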
For more information about managing application roles, see Managing Roles [page 2397].
Trust configuration regarding authentication with SAML2.0 protocol is maintained by the application consumer.
For more information about configuring trust, see Application Identity Provider [page 2407].
Related Information
Context
● Application Provider - an organizational unit that uses SAP Cloud Platform to build, run and sell
applications to customers, that is, the application consumers.
● Application Consumer - an organizational unit, typically a customer or a department inside a customer’s organization, which uses an SAP Cloud Platform application for a certain purpose. The application is, in fact, used by end users, who might be employees of the organization (for instance, in the case of an HR application) or arbitrary internal or external users (for instance, in the case of a collaborative supplier application).
To use SAP Cloud Platform, both the application provider and the application consumer need to have a subaccount. The subaccount is the central organizational unit in SAP Cloud Platform.
Subaccount members are users who must be registered via the SAP ID service. Subaccount members may
have different privileges regarding the operations which are possible for a subaccount (for example,
subaccount administration, deploy/start/stop applications). Note that the subaccount belongs to an
organization and not to an individual. Nevertheless, the interaction with the subaccount is performed by
individuals, the members of the subaccount. The subaccount-specific configuration allows application
providers and application consumers to adapt their subaccount to their specific environment and needs.
An application resides in exactly one subaccount, the hosting subaccount. It is uniquely identified by the
subaccount name and the application name. Applications consume SAP Cloud Platform resources, for
instance, compute units, structured and unstructured storage and outgoing bandwidth. Costs for consumed
resources are billed to the owner of the hosting subaccount, who can be an application provider, an application
consumer, or both.
Related Information
Overview
In a provider-managed application scenario, each application consumer gets its own access URL for the
provider application. To be able to use an application with a consumer-specific URL, the consumer must be
subscribed to the provider application. When an application is launched via a consumer-specific URL, the
tenant runtime is able to identify the current consumer of the application. The tenant runtime provides an API
to retrieve the current application consumer. Each application consumer is identified by a unique ID which is
called tenantId.
Since the information about the current consumer is extracted from the request URL, the tenant runtime can only provide a tenant ID if the current thread was started via an HTTP request. If the current thread was not started via an HTTP request (for example, in a background process), the tenant context API only returns a tenant if the current application instance was started for a dedicated consumer. If the current application instance is shared between multiple consumers and the thread was not started via an HTTP request, the tenant runtime throws an exception.
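The tenant runtime performs this resolution for you. Purely as an illustration, and assuming the consumer-specific URL pattern <application_name><provider_subaccount>-<consumer_subaccount>.<host> described in this documentation, the consumer part of a host name could be parsed like this (all names are hypothetical):

```java
// Illustrative only: the tenant runtime resolves the current consumer for you.
// Assumes the consumer-specific URL pattern
//   <application_name><provider_subaccount>-<consumer_subaccount>.<host>
public class ConsumerHostParser {

    // Extracts the consumer subaccount from the first host label, given the
    // "application name + provider subaccount" prefix. Returns null if the
    // host does not match the consumer-specific pattern.
    public static String consumerOf(String host, String appAndProviderPrefix) {
        String firstLabel = host.split("\\.", 2)[0];
        String expectedStart = appAndProviderPrefix + "-";
        if (!firstLabel.startsWith(expectedStart)) {
            return null; // provider URL or unrelated host
        }
        return firstLabel.substring(expectedStart.length());
    }

    public static void main(String[] args) {
        // Hypothetical names: app "myapp", provider "prov", consumer "cons".
        System.out.println(consumerOf("myappprov-cons.hana.ondemand.com", "myappprov")); // cons
        System.out.println(consumerOf("myappprov.hana.ondemand.com", "myappprov"));      // null
    }
}
```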
Note
API
com.sap.cloud.account.TenantContext
To use the TenantContext API, declare the following resource reference in the web.xml:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
To get an instance of the TenantContext API, use resource injection in the following way:
@Resource
private TenantContext tenantContext;
Note
When you use WebSockets, the TenantId and AccountName parameters provided by the TenantContext API are correct only while the WebSocket handshake request is processed, because the traffic that follows the handshake does not conform to the HTTP protocol. If TenantId and AccountName are needed during subsequent WebSocket requests, store them in the HTTP session and, if needed, use TenantContext.execute(...) to operate on behalf of the relevant tenant.
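The note above can be sketched in plain Java, with a map standing in for the HTTP session (real code would use HttpSession and the SAP TenantContext; everything here is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the note above: capture the tenant ID once, during
// the WebSocket handshake (the only point where it is reliably available),
// and read it back from the session for later WebSocket messages.
// A HashMap stands in for javax.servlet.http.HttpSession.
public class HandshakeTenantCache {
    private static final String TENANT_KEY = "tenantId";

    // Called during the handshake, where TenantContext would return a valid ID.
    static void onHandshake(Map<String, Object> session, String tenantIdFromContext) {
        session.put(TENANT_KEY, tenantIdFromContext);
    }

    // Called for subsequent WebSocket messages, where TenantContext is no longer reliable.
    static String onMessage(Map<String, Object> session) {
        return (String) session.get(TENANT_KEY);
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        onHandshake(session, "tenant-42"); // hypothetical tenant ID
        System.out.println(onMessage(session)); // tenant-42
    }
}
```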
Account API
The Account API provides methods to get subaccount ID, subaccount display name, and attributes. For more
information, see the Javadoc.
Sample Code
Related Information
The tutorials below describe end-to-end scenarios with multitenant demo applications:
Create a general demo application (servlet) Create an Exemplary Provider Application (Servlet) [page
1490]
Create a general demo application (JSP file) Create an Exemplary Provider Application (JSP) [page 1493]
Create a connectivity demo application Create a Multitenant Connectivity Application [page 1495]
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SAP Cloud Platform SDK for Neo environment. For more information, see Setting Up the Development Environment [page 1402].
● You are an application provider. For more information, see Multitenancy Roles [page 1485].
Procedure
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
9. Replace the entire servlet class with the following sample code:
package tenantcontext.demo;

import java.io.IOException;
import java.io.PrintWriter;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.sap.cloud.account.TenantContext;

/**
 * Servlet implementation class TenantContextServlet
 */
public class TenantContextServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#HttpServlet()
     */
    public TenantContextServlet() {
        super();
    }

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            InitialContext ctx = new InitialContext();
            Context envCtx = (Context) ctx.lookup("java:comp/env");
            TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");
            response.setContentType("text/html");
            PrintWriter writer = response.getWriter();
            writer.println("<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\">");
            writer.println("<html>");
            writer.println("<head>");
            writer.println("<title>SAP Cloud Platform - Tenant Context Demo Application</title>");
            writer.println("</head>");
            writer.println("<body>");
            writer.println("<h2> Welcome to the SAP Cloud Platform Tenant Context demo application</h2>");
            writer.println("<br></br>");
            String currentTenantId = tenantContext.getTenant().getId();
            writer.println("<p><font size=\"5\"> The application was accessed on behalf of a tenant with an ID: <b>" + currentTenantId + "</b></font></p>");
            writer.println("</body>");
            writer.println("</html>");
        } catch (Exception e) {
            throw new ServletException(e.getCause());
        }
    }
}
10. Save the Java editor. The project compiles without errors.
You have successfully created a Web application containing a sample servlet and tenant context functionality.
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1469].
Result
You have created a sample application that can be requested in a browser. Its output depends on the tenant
context.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount. Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow
the steps in page: Consume a Multitenant Connectivity Application [page 1498]
Related Information
This tutorial explains how to create a sample application which makes use of the multitenancy concept. That is,
you can enable your application to be consumed by users, members of a tenant which is subscribed to this
application in a multitenant flavor.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SAP Cloud Platform SDK for Neo environment. For more information, see Setting Up the Development Environment [page 1402].
● You are an application provider. For more information, see Multitenancy Roles [page 1485].
Procedure
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
1. Under the TenantContextApp project node, choose New > JSP File from the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.cloud.account.TenantContext" %>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>SAP Cloud Platform - Tenant Context Demo Application</title>
</head>
<body>
<h2> Welcome to the SAP Cloud Platform Tenant Context demo application</h2>
<br></br>
<%
    try {
        InitialContext ctx = new InitialContext();
        Context envCtx = (Context) ctx.lookup("java:comp/env");
        TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");
        String currentTenantId = tenantContext.getTenant().getId();
        out.println("<p><font size=\"5\"> The application was accessed on behalf of a tenant with an ID: <b>"
                + currentTenantId + "</b></font></p>");
    } catch (Exception e) {
        out.println("error at client");
    }
%>
</body>
</html>
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1469].
Result
You have successfully created a Web application containing a JSP file and tenant context functionality.
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount. Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow
the steps in page: Consume a Multitenant Connectivity Application [page 1498]
Related Information
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java and SAP Cloud
Platform SDK for Neo environment. For more information, see Setting Up the Development Environment
[page 1402].
● You are an application provider. For more information, see Multitenancy Roles [page 1485].
Context
This tutorial explains how you can create a sample application which is based on the multitenancy concept, makes use of the connectivity service, and can later be consumed by other users. That means you can enable your application to be consumed by users, members of a tenant which is subscribed to this application in a multitenant flavor. The output of the application you are about to create displays a welcome page showing the URI of the tenant-specific destination configuration. This means that the administrator of the consumer subaccount may have previously set a tenant-specific configuration for this application. If no such configuration has been set, the application uses the default one, set by the administrator of the provider subaccount.
The application code is the same as for a standard HelloWorld consuming the connectivity service as the
latter manages the multitenancy with no additional actions required by you. The users of the consumer
subaccount, which is subscribed to this application, can access the application using a tenant-specific URL.
This would lead the application to use a tenant-specific destination configuration. For more information, see
Multitenancy in the Connectivity Service [page 248].
As a provider, you can set your destination configuration on application and subaccount level. These serve as the default destination configurations in case a consumer has not configured a tenant-specific destination configuration (on subscription level).
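The fallback behavior described above (a tenant-specific configuration wins; otherwise the provider default applies) can be sketched in plain Java. The destination name and URLs mirror the tutorial's search_engine_destination example; everything else is illustrative, and the connectivity service performs this resolution for you:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the lookup order described above: a consumer's
// tenant-specific destination configuration takes precedence; otherwise the
// provider's default configuration is used.
public class DestinationResolver {
    private final Map<String, String> providerDefaults = new HashMap<>();
    private final Map<String, Map<String, String>> tenantOverrides = new HashMap<>();

    void setDefault(String name, String url) {
        providerDefaults.put(name, url);
    }

    void setForTenant(String tenantId, String name, String url) {
        tenantOverrides.computeIfAbsent(tenantId, id -> new HashMap<>()).put(name, url);
    }

    String resolve(String tenantId, String name) {
        Map<String, String> overrides = tenantOverrides.get(tenantId);
        if (overrides != null && overrides.containsKey(name)) {
            return overrides.get(name); // tenant-specific configuration
        }
        return providerDefaults.get(name); // provider default
    }

    public static void main(String[] args) {
        DestinationResolver r = new DestinationResolver();
        r.setDefault("search_engine_destination", "https://www.google.com");
        r.setForTenant("consumer-1", "search_engine_destination", "http://www.yahoo.com");
        System.out.println(r.resolve("consumer-1", "search_engine_destination")); // http://www.yahoo.com
        System.out.println(r.resolve("consumer-2", "search_engine_destination")); // https://www.google.com
    }
}
```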
Procedure
<resource-ref>
<res-ref-name>search_engine_destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
1. Under the MultitenantConnectivity project node, choose New > JSP File from the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.core.connectivity.api.http.HttpDestination,java.util.Arrays"%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
You have successfully created a Web application containing a sample JSP file and consuming the connectivity
service via looking up a destination configuration.
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1469].
You, as application provider, can configure a default destination, which is then used at runtime when the
application is requested in the context of the provider subaccount. In this case, the URL used to access the
application is not tenant-specific.
Example:
Name=search_engine_destination
URL=https://www.google.com
For more information on how to define a destination for provider subaccount, see:
Result
You have created a sample application which can be requested in a browser. Its output depends on the tenant
name.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount. Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow
the steps in page: Consume a Multitenant Connectivity Application [page 1498]
Related Information
Prerequisites
This tutorial assumes that your subaccount is subscribed to the following exemplary application (deployed
in a provider subaccount): Create a Multitenant Connectivity Application [page 1495]
Context
This tutorial explains how you can consume a sample connectivity application based on the multitenancy concept. That is, you are a member of a subaccount which is subscribed to applications provided by other subaccounts. The output of the application you are about to consume displays a welcome page showing the URI of the tenant-specific destination configuration. This means that the administrator of your consumer subaccount may have previously set a tenant-specific configuration for this application. If no such configuration has been set, the application uses a default one, set by the administrator of the provider subaccount.
Users of a consumer subaccount, which is subscribed to an application, can access the application using a
tenant-specific URL. This would lead the application to use a tenant-specific destination configuration. For
more information, see Multitenancy in the Connectivity Service [page 248].
Note
Procedure
You can consume a provider application if your subaccount is subscribed to it. In this case, administrators of
your consumer subaccount can configure a tenant-specific destination configuration, which can later be used
by the provider application.
To illustrate the tenant-specific consumption, the URL used in this example is different from the one in the exemplary provider application tutorial.
Example:
Name=search_engine_destination
URL=http://www.yahoo.com
Type=HTTP
ProxyType=Internet
Authentication=NoAuthentication
TrustAll=true
Tip
Go to a browser and request the application on behalf of your subaccount. Use the following URL pattern: https://<application_name><provider_subaccount>-<consumer_subaccount>.<host>/<application_path>
Result
The application is requested in a browser. Its output is relevant to your tenant-specific destination
configuration.
Related Information
An overview of the options you have when programming with the SAP HANA and SAP ASE databases.
Related Information
Access remote database instances through a database tunnel, which provides a secure connection from your
local machine and bypasses any firewalls.
Program with JPA in the Neo environment. Container-managed persistence and application-managed
persistence differ in how the entity manager is managed and in its life cycle.
The main features of each scenario are shown in the table below. We recommend that you use container-
managed persistence (Java EE 6 Web Profile runtime), which is the model most commonly used by Web
applications.
JPA Scenario SDK for Java Web SDK for Java EE 6 Web Profile
Download the latest version of EclipseLink. EclipseLink versions 2.5 and later contain the SAP HANA database
platform.
For details about importing the files into your Web application project and specifying the JPA implementation
library EclipseLink, see the tutorial Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java
Web) [page 1516].
Related Information
Special Settings for EclipseLink Versions Earlier than 2.5 [page 1530]
Persistence Units [page 1531]
Using Container-Managed Persistence [page 1532]
Using Application-Managed Persistence [page 1535]
Entity Classes [page 1541]
Use JPA together with EJB to apply container-managed persistence in a simple Java EE web application that
manages a list of persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SDK for Java EE 6 Web
Profile. For more information, see Setting Up the Development Environment [page 1402].
● Set up your runtime environment in the Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1408].
● Develop or import a Java Web application in Eclipse IDE. For more information, see Developing Java
Applications [page 1442] or Import Samples as Eclipse Projects [page 1423].
The application is also available as a sample in the SAP Cloud Platform SDK for Neo environment for Java
EE 6 Web Profile:
○ Sample name: persistence-with-ejb
○ Location: <sdk>/samples folder
For more information, see Using Samples [page 1421].
Context
Create a dynamic web project using the JPA project facet and add a servlet.
Procedure
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-ejb.
3. In the Target Runtime pane, select Java EE 6 Web Profile as the runtime you want to use to deploy the
application.
4. In the Dynamic web module version section, select 3.0.
5. In the Configuration section, choose Modify and select JPA in the Project Facets screen.
6. Choose OK and return to the Dynamic Web Project screen.
7. Choose Next.
14. To add a servlet to your project, choose File New Servlet from the Eclipse main menu.
15. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceEJBServlet.
16. To generate the servlet, choose Finish.
Procedure
2. From the Eclipse main menu, choose File New Other Class and choose Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name Person and choose Finish. Replace the entire class with the following content:
package com.sap.cloud.sample.persistence;

import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

/**
 * Class holding information on a person.
 */
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;

    public long getId() {
        return id;
    }

    public void setId(long newId) {
        this.id = newId;
    }

    public String getFirstName() {
        return this.firstName;
    }

    public void setFirstName(String newFirstName) {
        this.firstName = newFirstName;
    }

    public String getLastName() {
        return this.lastName;
    }

    public void setLastName(String newLastName) {
        this.lastName = newLastName;
    }
}
Procedure
1. Select persistence.xml, and from the context menu choose Open With Persistence XML Editor .
2. On the General tab, make sure that org.eclipse.persistence.jpa.PersistenceProvider is
entered in the Persistence provider field.
3. On the Options tab, make sure that the DDL generation type Create Tables is selected.
4. On the Connection tab, select the transaction type JTA.
5. Save the file.
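These settings correspond to a persistence.xml roughly like the following sketch (the unit name persistence-with-ejb and the Person class come from this tutorial; other details in the generated file may differ):

```xml
<persistence-unit name="persistence-with-ejb" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.sap.cloud.sample.persistence.Person</class>
    <properties>
        <property name="eclipselink.ddl-generation" value="create-tables"/>
    </properties>
</persistence-unit>
```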
Procedure
2. From the Eclipse main menu, choose File New Other EJB Session Bean (EJB 3.x) and choose
Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name PersonBean and choose Next.
5. Leave the default setting Stateless and choose Finish.
6. Replace the entire class with the following content:
package com.sap.cloud.sample.persistence;

import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

/**
 * Session Bean implementation class PersonBean
 */
@Stateless
@LocalBean
public class PersonBean {
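    // Sketch of the bean body, inferred from how the servlet uses the bean
    // (personBean.getAllPersons() backs the person table); the addPerson
    // method name is an assumption, not confirmed by this document.
    @PersistenceContext
    private EntityManager em;

    /** Returns all persons via the "AllPersons" named query. */
    public List<Person> getAllPersons() {
        return em.createNamedQuery("AllPersons", Person.class).getResultList();
    }

    /** Persists the given person in the current JTA transaction. */
    public void addPerson(Person person) {
        em.persist(person);
    }
}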
Procedure
2. From the context menu, choose Import General File System and choose Next.
3. Browse to the local directory where you downloaded and unpacked the SDK for Java EE 6 Web Profile,
select the repository/plugins directory, and choose OK.
4. Select com.sap.security.core.server.csi_1.x.y.jar and choose Finish.
Extend the servlet to use the Person entity and EJB session bean.
Context
The servlet adds Person entity objects to the database, retrieves their details, and shows them on the screen.
Procedure
2. Select PersistenceEJBServlet.java, and from the context menu choose Open With Java
Editor .
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementation class PersistenceEJBServlet
*/
@WebServlet("/")
public class PersistenceEJBServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER = LoggerFactory
.getLogger(PersistenceEJBServlet.class);
@EJB
PersonBean personBean;
/** {@inheritDoc} */
@Override
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException,
IOException {
response.getWriter().println("<p>Persistence with JPA!</p>");
try {
appendPersonTable(response);
appendAddForm(response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
/** {@inheritDoc} */
@Override
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException,
IOException {
try {
doAdd(request);
doGet(request, response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
private void appendPersonTable(HttpServletResponse response)
throws SQLException, IOException {
// Append table that lists all persons
List<Person> resultList = personBean.getAllPersons();
response.getWriter().println(
"<p><table border=\"1\"><tr><th colspan=\"3\">"
+ (resultList.isEmpty() ? "" : resultList.size()
+ " ")
+ "Entries in the Database</th></tr>");
if (resultList.isEmpty()) {
Results
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1468].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically
starts it with the effect that it will fail because no data source binding exists. You will add an application
in a later step.
9. In the Servers view, open the context menu for the server you just created and choose Show In
Cockpit .
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. To deploy several
applications, deploy each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, add the application to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Use JPA to apply application-managed persistence in a simple Java EE web application that manages a list of
persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SDK for Java Web.
For more information, see Setting Up the Development Environment [page 1402].
● Download the JPA provider, EclipseLink:
1. Download the latest 2.5.x version of EclipseLink from: http://www.eclipse.org/eclipselink/downloads
. Select the EclipseLink 2.5.x Installer Zip (intended for use in Java EE environments).
Note
Context
1. Create a Dynamic Web Project and Servlet with JPA [page 1517]
2. Create the JPA Persistence Entity [page 1520]
3. Maintain Metadata for the Person Entity [page 1521]
4. Prepare the Web Application Project for JPA [page 1522]
5. Extend the Servlet to Use Persistence [page 1523]
6. Test the Web Application on the Local Server [page 1526]
Create a dynamic web project using the JPA project facet and a servlet.
Procedure
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-jpa.
3. In the Target Runtime pane, select Java Web as the runtime to deploy the application.
4. In the Dynamic web module version section, select 2.5.
5. In the Configuration section, choose Modify, then select JPA in the Project Facets screen.
6. Choose OK and return to the Dynamic Web Project screen.
14. To add a servlet to the project you have just created, choose File New Servlet from the Eclipse
main menu.
15. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceWithJPAServlet.
16. To generate the servlet, choose Finish.
Context
Create a JPA persistence entity class named Person. Add an auto-incremented ID to the database table as the
primary key and person attributes. You must also define a query method that retrieves a Person object from
the database table. Each person stored in the database is represented by a Person entity object.
Procedure
2. From the Eclipse main menu, choose File New Other Class and choose Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name Person and choose Finish.
5. In the editor, replace the entire class with the following content:
package com.sap.cloud.sample.persistence;

import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

/**
 * Class holding information on a person.
 */
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;

    public long getId() {
        return id;
    }

    public void setId(long newId) {
        this.id = newId;
    }

    public String getFirstName() {
        return this.firstName;
    }

    public void setFirstName(String newFirstName) {
        this.firstName = newFirstName;
    }
To maintain metadata for your entity class, define additional settings in the persistence.xml file.
Context
Procedure
1. Select persistence.xml and from the context menu choose Open With Persistence XML Editor .
2. Choose the General tab.
3. Make sure that org.eclipse.persistence.jpa.PersistenceProvider is entered in the Persistence
provider field.
4. In the Managed Class section, choose Add..., enter Person, then choose OK.
5. On the Connection tab, make sure that the transaction type Resource Local is selected.
6. On the Schema Generation tab, make sure the DDL generation type Create Tables in the EclipseLink
Schema Generation section is selected.
7. Save the file.
Prepare the web application project by adding EclipseLink executables and the XSS Protection Library,
adapting the Java build path order, and adding the resource reference description to the web.xml file.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<servlet-mapping>
<servlet-name>PersistenceWithJPAServlet</servlet-name>
<url-pattern>/</url-pattern>
Note
An application's URL path contains the context root followed by the optional URL pattern ("/<URL
pattern>"). The servlet URL pattern that is automatically generated by Eclipse uses the servlet’s class
name as part of the pattern. Since the cockpit shows only the context root, this means that you cannot
directly open the application in the cockpit without adding the servlet name. To call the application by
only the context root, use "/" as the URL mapping, then you will no longer have to correct the URL in
the browser.
Context
The servlet adds Person entity objects to the database, retrieves their details, and displays them on the
screen.
Procedure
2. Select PersistenceWithJPAServlet.java and from the context menu choose Open With Java
Editor .
3. In the opened editor, replace the entire servlet class with the following content:
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
4. Save the servlet. The project should compile without any errors.
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1468].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically
starts it with the effect that it will fail because no data source binding exists. You will add an application
in a later step.
9. In the Servers view, open the context menu for the server you just created and choose Show In
Cockpit .
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. To deploy several
applications, deploy each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, add the application to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Container-Managed Persistence
<properties>
<property name="eclipselink.target-database"
value="com.sap.persistence.platform.database.HDBPlatform"/>
</properties>
Application-Managed Persistence
Specify the target database as shown above or directly in the servlet code, as shown in the example below:
ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
connection = ds.getConnection();
Map properties = new HashMap();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
properties.put("eclipselink.target-database",
"com.sap.persistence.platform.database.HDBPlatform");
General Points
Set the target database property before you deploy the application on the SAP HANA database. If you don't,
you'll get an error; in that case, you need to re-create the table with the correct definitions, setting the
target database property first. When testing the application locally, remove the DDL generation type altogether.
A JPA model contains a persistence configuration file, persistence.xml, which describes the defined
persistence units. A persistence unit in turn defines all entity classes managed by the entity managers in your
application and includes the metadata for mapping the entity classes to the database entities.
JPA Provider
The persistence.xml file is located in the META-INF folder within the persistence unit src folder. The JPA
persistence provider used is org.eclipse.persistence.jpa.PersistenceProvider.
Example
In the persistence.xml file in the tutorial Adding Container-Managed Persistence with JPA (SDK for Java EE 6
Web Profile), the persistence unit is named persistence-with-ejb, the transaction type is JTA (default
setting), and the DDL generation type has been set to Create Tables, as shown below:
The DDL generation type determines how EclipseLink generates database tables. The following values are
valid for generating the DDL for the entity specified in the persistence.xml file:
Note
Drop-and-create tables are often used during the development phase, when there are frequent
changes to the schema or data needs to be deleted. Don't forget to change it to create-tables
before you deploy the application; all data is lost when you drop a table.
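In persistence.xml, the type is set through the eclipselink.ddl-generation property; for example (both values shown here are standard EclipseLink options):

```xml
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
```

Change the value to create-tables before deploying, as noted above.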
Transaction Type
JTA transactions are used for container-managed persistence, and resource-local transactions for application-
managed persistence. The SDK for Java Web supports resource-local transactions only.
Container-managed entity managers are the model most commonly used by Web applications. Container-
managed entity managers require JTA transactions and are generally used with stateless session beans and
transaction-scoped persistence contexts, which are threadsafe.
Context
The scenario described in this section is based on the Java EE 6 Web Profile runtime. You use a stateless EJB
session bean into which the entity manager is injected using the @PersistenceContext annotation.
Procedure
1. Configure the persistence units in the persistence.xml file to use JTA data sources and JTA
transactions.
2. Inject the entity manager into an EJB session bean using the @PersistenceContext annotation.
Related Information
To use container-managed entity managers, configure JTA data sources in the persistence.xml file. JTA
data sources are managed data sources and are associated with JTA transactions.
Context
To configure JTA data sources, set the transaction type attribute (transaction-type) to JTA and specify the
names of the JTA data sources (jta-data-source), unless the application is using the default data source.
Procedure
The example below shows the persistence units defined for two data sources, where each data source is
associated with a different database:
<persistence>
<persistence-unit name="hanadb" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/hanaDB</jta-data-source>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables" />
</properties>
</persistence-unit>
<persistence-unit name="maxdb" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/maxDB</jta-data-source>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables" />
</properties>
</persistence-unit>
</persistence>
EJB session beans, which typically perform the database operations, can use the @PersistenceContext
annotation to directly inject the entity manager. The corresponding entity manager factory is created
transparently by the container.
Procedure
1. In the EJB session bean, inject the entity manager as follows. A persistence context type has not been
explicitly specified in the example below and is therefore, by default, transaction-scoped:
@PersistenceContext
private EntityManager em;
To use an extended persistence context, set the value of the persistence context type to EXTENDED
(@PersistenceContext(type=PersistenceContextType.EXTENDED)), and declare the session bean as
stateful. An extended persistence context allows a session bean to maintain its state across multiple JTA
transactions. An extended persistence context is not threadsafe.
2. If you have more than one persistence unit, inject the required number of entity managers by specifying
the persistence unit name as defined in the persistence.xml file:
@PersistenceContext(unitName="hanadb")
private EntityManager em1;
...
@PersistenceContext(unitName="maxdb")
private EntityManager em2;
3. Inject an instance of the EJB session bean class into, for example, the servlet of the web application with an
annotation in the following form, where PersonBean is an example session bean class:
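A sketch of such an injection (the field name follows the earlier container-managed tutorial):

```java
@EJB
PersonBean personBean;
```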
The persistence context made available is based on JTA and provides automatic transaction management.
Each EJB business method automatically has a managed transaction, unless specified otherwise. The
entity manager life cycle, such as instantiation and closing, is controlled by the container. Therefore, do not
use methods designed for resource-local transactions, such as em.getTransaction().begin(),
em.getTransaction().commit(), and em.close().
Related Information
Application-managed entity managers are created manually using the EntityManagerFactory interface.
Application-managed entity managers require resource-local transactions and non-JTA data sources, which
you must declare as JNDI resource references.
Context
The scenario described in this section is based on the Java Web runtime, which supports only manual creation
of the entity manager factory.
Procedure
Related Information
An application can use one or more data sources. A data source can be a default data source or an explicitly
named data source. Before a data source can be used, you must declare it as a JNDI resource reference in the
web.xml deployment descriptor.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
○ The data source name is the JNDI name used for the lookup.
○ The same name must be used for the schema binding.
Note
If you declare the data source reference in a jdbc subcontext, you must use the same pattern for
the name of the schema binding (jdbc/NAME).
Related Information
Context
To use resource-local transactions, the transaction type attribute has to be set to RESOURCE_LOCAL,
indicating that the entity manager factory should provide resource-local entity managers. When you work with
a non-JTA data source, the non-JTA data source element also has to be set in the persistence unit properties in
the application code.
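A persistence unit configured this way might look like the following sketch (the unit and class names are taken from the application-managed tutorial and are assumptions for illustration):

```xml
<persistence-unit name="persistence-with-jpa" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.sap.cloud.sample.persistence.Person</class>
</persistence-unit>
```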
Procedure
In the application code, obtain an initial JNDI context by creating a javax.naming.InitialContext object,
then retrieve the data source by looking up the naming environment through the InitialContext.
Alternatively, you can directly inject the data source.
Procedure
1. To create an initial JNDI context and look up the data source, add the following code to your application and
make sure that the JNDI name matches the one specified in the web.xml file:
According to the Java EE Specification, you should add the prefix java:comp/env to the JNDI resource
name (as specified in the web.xml) to form the lookup name. For more information about defining and
referencing resources according to the Java EE standard, see the Java EE Specification.
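A lookup along these lines (a sketch; the JNDI name jdbc/DefaultDB matches the web.xml example shown earlier):

```java
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
```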
@Resource
private javax.sql.DataSource ds;
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
Related Information
Java EE Specification
Use the EntityManagerFactory interface to manually create and manage entity managers in your Web
application.
Procedure
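A sketch of the factory creation described below (ds is the non-JTA data source obtained from the JNDI lookup; the unit name is illustrative):

```java
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("persistence-with-jpa", properties);
```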
In the code above, the non-JTA data source element has been set in the persistence unit properties, and
the persistence unit name is the name of the persistence unit as it is declared in the persistence.xml
file.
Note
Include the above code in the servlet init() method, as illustrated in the tutorial Adding Application-
Managed Persistence with JPA (SDK for Java Web), since this method is called only once during
initialization when the servlet instance is loaded.
EntityManager em = emf.createEntityManager();
Next Steps
Application-managed entity managers are always extended and therefore retain the entities beyond the scope
of a transaction. You should therefore close an entity manager when it is no longer needed by calling
EntityManager.close(), or alternatively EntityManager.clear() wherever appropriate, such as at the
end of a transaction. An entity manager cannot be used concurrently by multiple threads, so design your entity
manager handling to avoid doing this.
Related Information
When working with a resource-local entity manager, use the EntityTransaction API to manually set the
transaction boundaries in your application code. You can obtain the entity transaction attached to the entity
manager by calling EntityManager.getTransaction().
To create and update data in the database, you need an active transaction. The EntityTransaction API provides
the begin() method for starting a transaction, and the commit() and rollback() methods for ending a
transaction. When a commit is executed, all changes are synchronized with the database.
Example
The tutorial code (Adding Application-Managed Persistence with JPA (SDK for Java Web)) shows how to create
and persist an entity:
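A sketch consistent with that description (emf and the input values are assumed to exist in the surrounding servlet code):

```java
EntityManager em = emf.createEntityManager();
try {
    // An active transaction is required to create data in the database.
    em.getTransaction().begin();
    Person person = new Person();
    person.setFirstName(firstName);
    person.setLastName(lastName);
    em.persist(person);
    // commit() synchronizes all changes with the database.
    em.getTransaction().commit();
} finally {
    em.close();
}
```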
The EntityManager.persist() method makes an entity persistent by associating it with an entity manager.
It is inserted into the database when the commit() method is called. The persist() method can be called
only on new entities.
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 1516]
The data source is determined dynamically at runtime and does not need to be defined in the web.xml or
persistence.xml file. This allows you to bind additional schemas to an application and obtain the
corresponding data source, without having to modify the application code or redeploy the application.
Context
A dynamic JNDI lookup is applied as follows, depending on whether you are using an unmanaged or a managed
data source:
● Managed - supported in Java EE runtimes (Java EE 6 Web Profile and Java EE 7 Web Profile TomEE 7)
The steps described below are based on JPA application-managed persistence using a Java runtime.
Procedure
1. Create the persistence unit to be used for the dynamic data source lookup:
a. In the Project Explorer view, select <project>/Java Resources/src/META-INF/
persistence.xml, and from the context menu choose Open With Persistence XML Editor .
b. Switch to the Source tab of the persistence.xml file and create a persistence unit, as shown in the
example below. The corresponding data source is not defined in either the persistence.xml or
web.xml file:
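A sketch of such a persistence unit (the name matches the later createEntityManagerFactory call; note that no jta-data-source or non-jta-data-source element is declared):

```xml
<persistence-unit name="mypersistenceunit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.sap.cloud.sample.persistence.Person</class>
</persistence-unit>
```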
ds = (DataSource) context.lookup("unmanageddatasource:mydatasource");
3. Create an entity manager factory in the normal manner. In the example below, the persistence unit is
named "mypersistenceunit", as defined in the persistence.xml file:
4. Use the console client to create a schema binding with the same data source name. To do this, open the
command window in the <SDK>/tools folder and enter the bind-schema [page 1951] command, using
the data source name you defined in step 2:
Note
Note that you need to use the same data source name you have defined in step 2.
To declare a class as an entity and define how that entity maps to the relevant database table, you can either
decorate the Java object with metadata using Java annotations or denote it as an entity in the XML descriptor.
The Dali Java Persistence Tools, which are provided as part of the Eclipse IDE for Java EE Developers, allow you
to use a JPA diagram editor to create, edit, and display entities and their relationships (your application’s data
model) in a graphical environment.
Example
The tutorial Adding Application-Managed Persistence with JPA (SDK for Java Web) defines the entity class
Person, as shown in the following:
package com.sap.cloud.sample.persistence;

import javax.persistence.*;

@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {

    @Id
    @GeneratedValue
    private long id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;
Related Information
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 1516]
Dali Java Persistence Tools User Guide
The SAP HANA database lets you create tables with row-based storage or column-based storage. By default, tables are created with row-based storage, but you can change a table's storage type later if necessary.
The example below shows the SQL syntax used by the SAP HANA database to create different table types. The
first two SQL statements both create row-store tables, the third a column-store table, and the fourth changes
the table type from row-store to column-store:
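The statements might look like the following sketch (table and column names are illustrative; the ALTER syntax matches the one referenced later in this section):

```sql
CREATE TABLE T_EXAMPLE1 (ID INTEGER);        -- row-store table (the default)
CREATE ROW TABLE T_EXAMPLE2 (ID INTEGER);    -- row-store table, declared explicitly
CREATE COLUMN TABLE T_EXAMPLE3 (ID INTEGER); -- column-store table
ALTER TABLE T_EXAMPLE1 COLUMN;               -- change the table from row store to column store
```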
EclipseLink JPA
When using EclipseLink JPA for data persistence, the table type applied by default in the SAP HANA database is
row-store. To create a column-store table or alter an existing row-store table, you can manually modify your
database using SQL DDL statements, or you can use open source tools, such as Liquibase (with plain SQL
statements), to handle automated database migrations.
Due to the limitations of the EclipseLink schema generation feature, you'll need to use one of the above options
to handle the life cycle management of your database objects.
You can use the ALTER TABLE statement to change a row-store table in the SAP HANA database to a column-
store table. The example is based on the Adding Application-Managed Persistence with JPA (SDK for Java Web)
tutorial and has been designed specifically for this tutorial and use case.
The example allows you to take advantage of the automatic table generation feature provided by JPA
EclipseLink. You merely alter the existing table at an appropriate point, when the schema containing the
relevant table has just been created. The applicable code snippet is added to the init() method of the servlet
(PersistenceWithJPAServlet). The main changes to the servlet code are outlined below:
1. Since the table must already exist when the ALTER statement is called, a small workaround has been
introduced in the init() method. An entity manager is created at an earlier stage than in the original
version of the tutorial to trigger the generation of the schema:
2. The SAP HANA database table SYS.M_TABLES contains information about all row and column tables in the
current schema. A new method, which uses this table to check that T_PERSON is not already a column-
store table, has been added to the servlet.
3. Another new method alters the table using the SQL statement ALTER TABLE <table name> COLUMN.
To apply the solution, replace the entire servlet class PersistenceWithJPAServlet with the following
content:
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
 * Servlet implementing a simple JPA based persistence sample application
 * for SAP Cloud Platform.
 */
public class PersistenceWithJPAServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private static final Logger LOGGER =
            LoggerFactory.getLogger(PersistenceWithJPAServlet.class);
    private static final String SQL_GET_TABLE_TYPE =
            "SELECT TABLE_NAME, TABLE_TYPE FROM SYS.M_TABLES WHERE TABLE_NAME = ?";
    private static final String PERSON_TABLE_NAME = "T_PERSON";
    private DataSource ds;
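The two helper methods described in steps 2 and 3 above might be sketched as follows (the method names are illustrative; SQL_GET_TABLE_TYPE and PERSON_TABLE_NAME are the constants declared in the servlet):

```java
/**
 * Returns true if T_PERSON is already a column-store table, by checking
 * the TABLE_TYPE column of the SYS.M_TABLES monitoring view.
 */
private boolean isColumnTable(Connection connection) throws SQLException {
    PreparedStatement stmt = connection.prepareStatement(SQL_GET_TABLE_TYPE);
    stmt.setString(1, PERSON_TABLE_NAME);
    ResultSet rs = stmt.executeQuery();
    return rs.next() && "COLUMN".equals(rs.getString("TABLE_TYPE"));
}

/**
 * Changes the row-store table to a column-store table using
 * ALTER TABLE <table name> COLUMN.
 */
private void alterToColumnTable(Connection connection) throws SQLException {
    PreparedStatement stmt = connection.prepareStatement(
            "ALTER TABLE " + PERSON_TABLE_NAME + " COLUMN");
    stmt.execute();
}
```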
Related Information
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 1516]
EclipseLink provides weaving as a means of enhancing JPA entities and classes for performance optimization.
At present, SAP Cloud Platform supports static weaving only. Static weaving occurs at compile time and is
available in both the Java Web and Java EE 6 Web Profile environments.
Prerequisites
● For static weaving to work, the entity classes must be listed in the persistence.xml file.
● EclipseLink Library:
To use the EclipseLink weaving options in your web applications, add the EclipseLink library to the
classpath:
○ SDK for Java Web
The EclipseLink library has already been added to the WebContent/WEB-INF/lib folder, since it is
required for the JPA persistence scenario.
○ SDK for Java EE 6 Web Profile
The EclipseLink library is already part of the SDK for Java EE 6 Web Profile, allowing you to run JPA
scenarios without any additional steps. To use the weaving options, however, you add the EclipseLink
library to the classpath, as described below.
SDK for Java EE 6 Web Profile: Adding the EclipseLink Library to the
Classpath
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu
choose Properties.
2. In the tree, select JPA.
3. In the Platform section, select the correct EclipseLink version, which should match the version available in
the SDK.
4. In the JPA implementation section, select the type User Library.
5. To the right of the user library list box, choose Download library.
6. Select the correct version of the EclipseLink library (currently EclipseLink 2.5.2) and choose Next.
7. Accept the EclipseLink license and choose Finish.
8. The new user library now appears; make sure it is selected.
9. Unselect Include libraries with this application and choose OK.
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu
choose Properties.
Note
If you change the target class settings, make sure you deploy these classes.
Your web application project is rebuilt so that the JPA entity class files contain weaving information. This also occurs on each (incremental) project build. The woven entity classes are used whenever you publish the web application to the cloud.
More Information
For information about using an ant task or the command line to perform static weaving, see the EclipseLink
User Guide .
Use JDBC in the Neo environment when its low-level control is more appropriate than JPA. Working with JDBC entails manually writing SQL statements to read and write objects from and to the database.
An application can use one or more data sources. A data source can be either the default one or an explicitly named one. Either way, before a data source can be used, you must declare it as a JNDI resource reference.
Declare a JNDI resource reference to a JDBC data source in the web.xml deployment descriptor located in the
WebContent/WEB-INF directory as shown below. The resource reference name is only an example:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
● <res-ref-name>: The JNDI name of the resource. The Java EE Specification recommends that you
declare the data source reference in the jdbc subcontext (jdbc/NAME).
● <res-type>: The type of resource that is returned during the lookup.
Add the <resource-ref> elements after the <servlet-mapping> elements in the deployment descriptor.
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
You can obtain an initial JNDI context from Tomcat by creating a javax.naming.InitialContext object.
Then consume the data source by looking up the naming environment through the InitialContext, as
follows:
According to the Java EE Specification, you should add the prefix java:comp/env to the JNDI resource name
(as specified in web.xml) to form the lookup name.
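Following this rule, a lookup of the default data source declared above might look like this sketch (the same pattern appears in the servlet code later in this guide):

```java
InitialContext ctx = new InitialContext();
// "jdbc/DefaultDB" is the <res-ref-name> declared in web.xml, prefixed
// with java:comp/env as recommended by the Java EE Specification.
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
```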
If the application uses multiple data sources, construct the lookup in a similar manner to the following:
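For the two named data sources declared in the web.xml example above, the lookups might be sketched as:

```java
InitialContext ctx = new InitialContext();
// Resource reference names as declared in web.xml.
DataSource ds1 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource1");
DataSource ds2 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource2");
```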
You can directly inject the data source using annotations as shown below.
@Resource
private javax.sql.DataSource ds;
● If the application uses explicitly named data sources, you must first declare them in the web.xml file. Inject
them as shown in the following example:
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
The data source lets you create a JDBC connection to the database. You can use the resulting Connection object to instantiate a Statement object and execute SQL statements, as shown in the following example:
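A sketch of this pattern, assuming a T_PERSON table like the one used in the tutorial below (the table and column names are illustrative):

```java
Connection connection = ds.getConnection();
try {
    PreparedStatement stmt = connection.prepareStatement(
            "SELECT FIRST_NAME, LAST_NAME FROM T_PERSON");
    ResultSet resultSet = stmt.executeQuery();
    while (resultSet.next()) {
        System.out.println(resultSet.getString("FIRST_NAME") + " "
                + resultSet.getString("LAST_NAME"));
    }
} finally {
    connection.close(); // return the connection to the pool
}
```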
Database Tables
Use plain SQL statements to create the tables you require. Since there is currently no tool support available,
you have to manually maintain the table life cycles. The exact syntax you'll use may differ, depending on the
underlying database. The Connection object provides metadata about the underlying database and its tables
and fields, which can be accessed as shown in the code below:
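One common use of this metadata, also found in the tutorial below, is checking whether a table already exists (the table name is illustrative):

```java
// Query the database metadata for a table named T_PERSON.
DatabaseMetaData meta = connection.getMetaData();
ResultSet tables = meta.getTables(null, null, "T_PERSON", null);
boolean tableExists = tables.next();
```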
To create a table in the Apache Derby database, you could use the following SQL statement executed with a
PreparedStatement object:
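For the Person entity used in this guide, such a statement might look as follows (the column sizes are assumptions):

```sql
CREATE TABLE T_PERSON (
    ID VARCHAR(255) NOT NULL PRIMARY KEY,
    FIRST_NAME VARCHAR(255),
    LAST_NAME VARCHAR(255)
)
```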
See the tutorial Adding Persistence Using JDBC for information about executing SQL statements and applying
the Data Access Object (DAO) design pattern in your Web application.
Related Information
Tutorial: Adding Persistence with JDBC (SDK for Java Web) [page 1551]
Java EE Specification
Use JDBC to persist data in a simple Java EE web application that manages a list of persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK for Java Web. For more
information, see Setting Up the Development Environment [page 1402].
● Create a database. For more information, see Creating Databases [page 921].
● Set up your runtime environment in the Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1408].
● Develop or import a Java Web application in Eclipse IDE. For more information, see Developing Java
Applications [page 1442] or Import Samples as Eclipse Projects [page 1423].
The application is also available as a sample in the SAP Cloud Platform SDK for Neo environment for Java
Web:
○ Sample name: persistence-with-jdbc
○ Location: <sdk>/samples folder
For more information, see Using Samples [page 1421].
Context
1. Create a Dynamic Web Project and Servlet with JDBC [page 1552]
2. Create the Person Entity [page 1552]
3. Create the Person DAO [page 1553]
4. Prepare the Web Application Project for JDBC [page 1555]
5. Extend the Servlet to Use Persistence [page 1556]
6. Test the Web Application on the Local Server [page 1559]
7. Deploy Applications Using Persistence on the Cloud from Eclipse [page 1560]
8. Configure Applications Using the Cockpit [page 1562]
9. Start Applications Using Eclipse [page 1562]
Create a dynamic web project and add a servlet, which you'll extend later in the tutorial.
Procedure
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-jdbc.
3. In the Target Runtime pane, select Java Web as the runtime to use to deploy the application.
4. Leave the default values for the other project settings and choose Next.
5. On the Java screen, leave the default settings and choose Next.
6. In the Web Module configuration settings, select Generate web.xml deployment descriptor and choose
Finish.
7. To add a servlet to the project you have just created, choose File New Web Servlet from the
Eclipse main menu.
8. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceWithJDBCServlet.
9. Choose Finish to generate the servlet.
Procedure
2. From the context menu, choose New Class , verify that the package entered is
com.sap.cloud.sample.persistence, enter the class name Person, and choose Finish.
3. Open the file in the text editor and insert the following content:
package com.sap.cloud.sample.persistence;
/**
* Class holding information on a person.
*/
public class Person {
    private String id;
    private String firstName;
    private String lastName;
    public String getId() {
        return id;
    }
    // The setters and the remaining getters follow the same pattern.
}
Create a DAO class, PersonDAO, in which you encapsulate the access to the persistence layer.
Procedure
2. From the context menu, choose New Class, verify that the package entered is com.sap.cloud.sample.persistence, enter the class name PersonDAO, and choose Finish.
3. Open the file in the text editor and insert the following content:
package com.sap.cloud.sample.persistence;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import javax.sql.DataSource;
/**
* Data access object encapsulating all JDBC operations for a person.
*/
public class PersonDAO {
private DataSource dataSource;
/**
* Create new data access object with data source.
*/
public PersonDAO(DataSource newDataSource) throws SQLException {
setDataSource(newDataSource);
}
/**
* Get data source which is used for the database operations.
Prepare the web application project by adding the XSS Protection Library, adapting the Java build path order,
and adding the resource reference description to the web.xml file.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<servlet-mapping>
<servlet-name>PersistenceWithJDBCServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
Note
If your servlet version is 3.0 or higher, simply change the WebServlet annotation in the
PersistenceWithJDBCServlet.java class to @WebServlet("/").
Note
An application's URL path contains the context root followed by the optional URL pattern ("/<URL
pattern>"). The servlet URL pattern that is automatically generated by Eclipse uses the servlet’s class
name as part of the pattern. Since the cockpit shows only the context root, this means that you cannot
directly open the application in the cockpit without adding the servlet name. To call the application by
only the context root, use "/" as the URL mapping, then you will no longer have to correct the URL in
the browser.
Procedure
2. Select PersistenceWithJDBCServlet.java, and from the context menu choose Open With Java
Editor .
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementing a simple JDBC based persistence sample application for
* SAP Cloud Platform.
*/
public class PersistenceWithJDBCServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER = LoggerFactory
.getLogger(PersistenceWithJDBCServlet.class);
private PersonDAO personDAO;
/** {@inheritDoc} */
@Override
public void init() throws ServletException {
try {
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx
.lookup("java:comp/env/jdbc/DefaultDB");
personDAO = new PersonDAO(ds);
} catch (SQLException e) {
throw new ServletException(e);
} catch (NamingException e) {
throw new ServletException(e);
}
}
/** {@inheritDoc} */
@Override
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException,
IOException {
response.getWriter().println("<p>Persistence with JDBC!</p>");
try {
appendPersonTable(response);
appendAddForm(response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
/** {@inheritDoc} */
@Override
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException,
IOException {
try {
doAdd(request);
doGet(request, response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
// The helper methods doAdd, appendPersonTable, and appendAddForm are
// omitted in this excerpt.
}
4. Save the servlet. The project should compile without any errors.
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1468].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically starts it, and the application will fail because no data source binding exists yet. You will add the application in a later step.
9. In the Servers view, open the context menu for the server you just created and choose Show In
Cockpit .
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. To deploy several
applications, deploy each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, add the application to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Test an application in the Neo environment that uses the default data source and runs on Apache Derby on the local runtime.
If an application uses the default data source and runs locally on Apache Derby, provided as standard for local
development, you can test it on the local runtime without any further configuration. To use explicitly named
data sources or a different database, you'll need to configure the connection.properties file appropriately.
To test an application on the local server, define any data sources the application uses as connection properties
for the local database. You don't need to do this if the application uses the default data source.
Prerequisites
Start the local server at least once (with or without the application) to create the relevant folder.
Procedure
1. In the Project Explorer view, open the folder Servers/SAP Cloud Platform local runtime/
config_master/connection_data and select connection.properties.
2. From the context menu, choose Open With Properties File Editor .
3. Add the connection parameter com.sap.cloud.persistence.dsname to the block of connection
parameters for the local database you are using, as shown in the example below:
com.sap.cloud.persistence.dsname=jdbc/datasource1
javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
javax.persistence.jdbc.user=demo
javax.persistence.jdbc.password=demo
eclipselink.target-database=Derby
If the application has been bound to the data source based on an explicitly named data source instead of
using the default data source, ensure the following:
○ Provide a data source name in the connection properties that matches the name used in the data
source binding definition.
○ Add prefixes before each property in a property group for each data source binding you define. If an
application is bound only to the default data source, this configuration is considered the default no
matter which name you specified in the connection properties. The application can address the data
source by any name.
4. Repeat this step for all data sources that the application uses.
com.sap.cloud.persistence.dsname=jdbc/defaultManagedDataSource
com.sap.cloud.persistence.dsname=jdbc/defaultUnmanagedDataSource
6. To indicate that a block of parameters belongs together, add a prefix to the parameters, as shown in the example below. The prefix is freely definable; the dot isn't required:
1.com.sap.cloud.persistence.dsname=jdbc/datasource1
1.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
1.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
1.javax.persistence.jdbc.user=demo
1.javax.persistence.jdbc.password=demo
1.eclipselink.target-database=Derby
2.com.sap.cloud.persistence.dsname=jdbc/defaultManagedDataSource
2.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
2.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
2.javax.persistence.jdbc.user=demo
2.javax.persistence.jdbc.password=demo
2.eclipselink.target-database=Derby
3.com.sap.cloud.persistence.dsname=jdbc/defaultUnmanagedDataSource
3.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
3.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
3.javax.persistence.jdbc.user=demo
3.javax.persistence.jdbc.password=demo
3.eclipselink.target-database=Derby
Identify inefficient SQL statements in your applications in the Neo environment and investigate performance
issues.
Context
The SQL trace provides a log of selected SQL statements with details about when a statement was executed
and its duration, allowing you to identify inefficient SQL statements in your applications and investigate
performance issues. SQL trace records are integrated in the standard trace log files written at runtime.
By default, the SQL trace is disabled. Generally, you enable it when you require SQL trace information for a
particular application and disable it again once you have completed your investigation. It is not intended for
general performance monitoring.
You can use the cockpit to enable the SQL trace by setting the log level of the logger
com.sap.core.persistence.sql.trace to the log level DEBUG in the application’s log configuration. Once
you've changed this setting, you can view trace information in the log files.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Note
You can set log levels only when an application is running. Loggers are not listed if the relevant
application code has not been executed.
The new log setting takes effect immediately. Log settings are saved permanently and do not revert to their
initial values when an application is restarted.
Procedure
1. See the application's trace logs, which contain the SQL trace records, either in the Most Recent Logging
panel on the application dashboard or on the Logging page by navigating to Monitoring Logging in
the navigation area.
2. To display the contents of a particular log file, choose (Show). You can also download the file by
choosing (Download).
Example
The SQL-specific information from the default trace is shown below in plain text format:
Related Information
In addition to using the cockpit, you can also enable the SQL trace from the Eclipse IDE, and using the console
client. Whichever tool you use, you need to set the log level of the logger
com.sap.core.persistence.sql.trace to the log level DEBUG.
Eclipse
You can set the log level for applications deployed locally or in the cloud.
Console Client
You can use the console client to set the log level as a logging property for one or more loggers. To do so, use
the command neo set-log-level with the log parameters logger <logger_name> and level
<log_level>.
With SAP Cloud Platform, you can use the SAP HANA development tools to create comprehensive analytical
models and build applications with SAP HANA's programmatic interfaces and integrated development
environment.
● Automatic backups
● Creation of SAP HANA schemas and repository packages. Your SAP HANA instances and XS applications
are visualized in the cockpit.
● Eclipse-based tools for connecting to your SAP HANA instances on SAP Cloud Platform
● Eclipse-based tools for data modeling
Appropriate for
Related Information
Set up your SAP HANA development environment and run your first application in the cloud.
Note
To determine the most suitable tool for your development scenario, see SAP HANA Developer
Information by Scenario.
4. Monitor
Monitor SAP HANA XS applications.
Add Features
Use calculation views and visualize the data with SAPUI5. See: 8 Easy Steps to Develop an XS application on
the SAP Cloud Platform
Enable SHINE
Enable the demo application SAP HANA Interactive Education (SHINE) [page 1581] and learn how to build
native SAP HANA applications.
Before developing your SAP HANA XS application, you need to download and set up the necessary tools.
Prerequisites
● You have downloaded and installed a 32-bit or 64-bit version of Eclipse IDE, version Neon. For more
information, see Install Eclipse IDE [page 1405].
● You have configured your proxy settings (in case you work behind a proxy or a firewall). For more
information, see Install SAP Development Tools for Eclipse [page 1406] → step 3.
Procedure
Note
In case you need to develop with SAPUI5, install also SAP Cloud Platform Tools UI development
toolkit for HTML5 (Developer Edition) .
5. Choose Next.
6. On the next wizard page, you get an overview of the features to be installed. Choose Next.
7. Confirm the license agreements.
8. Choose Finish to start the installation.
9. After the successful installation, you will be prompted to restart your Eclipse IDE.
Next Steps
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench
[page 1571]
Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 1574]
Create and test a simple SAP HANA XS application that displays the "Hello World" message using the SAP
HANA Web-Based Development Workbench in a trial or enterprise account.
● Install an SAP HANA tenant database system (MDC). See Install Database Systems [page 884].
● You are assigned the Administrator role for the subaccount.
In your subaccount in the SAP Cloud Platform cockpit, you create a database on an SAP HANA tenant
database system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. From the Databases & Schemas page, choose New.
4. Enter the required data:
Property Value
Note
mdc1 corresponds to the database system on
which you create the database.
SYSTEM User Password The password for the SYSTEM user of the database.
Note
The SYSTEM user is a preconfigured database super
user with irrevocable system privileges, such as the
ability to create other database users, access system
tables, and so on. A database-specific SYSTEM user
exists in every database of a tenant database system.
5. Choose Create.
6. The Events page shows the progress of the database creation. Wait until the tenant database is in state
Started.
7. (Optional) To view the details of the new database, choose Overview in the navigation area and select the
database in the list. Verify that the status STARTED is displayed.
Context
You've specified a password for the SYSTEM user when you created an SAP HANA tenant database. You now
use the SYSTEM user to log on to SAP HANA cockpit and create your own database administration user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
a. In the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases
& Schemas .
A message appears, telling you that you do not have the required roles.
e. Choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
f. Choose Continue.
2. Choose Manage Roles and Users.
3. Expand the Security node.
4. Open the context menu for the Users node and choose New User.
5. On the User tab, provide a name for the new user.
The password must start with a letter and only contain uppercase and lowercase letters ('a' – 'z', 'A' – 'Z'),
and numbers ('0' – '9').
7. Save your changes.
8. In the Granted Roles section, choose + (Add Role).
9. Type ide in the search field and select all roles in the result list.
Caution
At this point, you are still logged on with the SYSTEM user. To work with the SAP HANA Web-based Development Workbench as your new database user, you must first log out of the SAP HANA cockpit; otherwise, you are automatically logged on to the SAP HANA Web-based Development Workbench with the SYSTEM user instead. Therefore, choose Logout before you continue, and then log on to the SAP HANA Web-based Development Workbench with the new database user.
13. (Optional) Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page
927].
This step is not necessary to complete this tutorial, but you shouldn't forget to disable the password
lifetime handling in productive scenarios.
Create an SAP HANA XS Hello World program using the SAP HANA Web-based Development Workbench.
Procedure
1. In the navigation area of the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases &
Schemas .
2. Select the relevant database.
3. In the database overview, open the SAP HANA Web-based Development Workbench link under
Development Tools.
4. Log on to the SAP HANA Web-based Development Workbench with your new database user and password.
5. Select the Editor.
Tip
The editor header shows details for your user and database. Hover over the entry for the SID to view
the details.
6. To create a new package, choose New Package from the context menu of the Content folder.
7. Enter a package name.
The program is deployed and appears in the browser: Hello World from User <Your User>.
Create and test a simple SAP HANA XS application that displays the "Hello World" message.
Prerequisites
Make sure the database you want to use is deployed in your account before you begin this tutorial. You can create SAP HANA XS applications using one of the following databases:
Note
Learn more about the required steps in Create SAP HANA Tenant Databases. For more information on purchasing a larger SAP HANA database for development or productive purposes, see SAP Cloud Platform Pricing and Packaging.
You also need to install the tools as described in Install SAP HANA Tools for Eclipse [page 1569] to follow the
steps described in this tutorial.
Context
Context
You will perform all subsequent activities with this new user.
Procedure
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected account are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform
further actions, for example, delete the database.
If you want to create your application using... Do the following:

A productive SAP HANA XS database
Follow the steps described in Create a Database Administrator User [page 1589].

A productive or trial SAP HANA tenant database
1. Select the relevant SAP HANA tenant database in the list.
2. In the overview that is shown in the lower part of the screen, open the SAP HANA cockpit link
under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the
SYSTEM user in the Enter Password field.
A message is displayed to inform you that at that point, you lack the roles that you need to
open the SAP HANA cockpit.
4. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
5. Choose Continue.
You are now logged on to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new user.
The user name always appears in upper case letters.
10. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
The password must start with a letter and only contain uppercase and lowercase letters
('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9').
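The rule in the note above can be expressed as a small check. The helper below is illustrative, not part of SAP HANA, and it encodes only the character rule quoted here; the platform may enforce additional rules, such as a minimum length:

```javascript
// Hypothetical helper encoding the stated rule: the password must start
// with a letter and may contain only letters and digits.
function isValidXsPassword(pw) {
  return /^[A-Za-z][A-Za-z0-9]*$/.test(pw);
}

console.log(isValidXsPassword("Abc123"));  // true
console.log(isValidXsPassword("1abc"));    // false: starts with a digit
console.log(isValidXsPassword("Abc_123")); // false: underscore not allowed
```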
Note
For more information on the CONTENT_ADMIN role, see Predefined Database Roles.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to
work with the SAP HANA Web-based Development Workbench by logging out from the SAP HANA cockpit
first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench
with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before you
continue to work with the SAP HANA Web-based Development Workbench, where you need to log on again
with the new database user.
Context
After you add the SAP HANA system hosting the repository that stores your application-development files, you
must specify a repository workspace, which is the location in your file system where you save and work on the
development files.
Procedure
Results
In the Repositories view, you see your workspace, which enables you to browse the repository of the system
tied to this workspace. The repository packages are displayed as folders.
At the same time, a folder will be added to your file system to hold all your development files.
Context
After you set up a development environment for the chosen SAP HANA system, you can add a project to
contain all the development objects you want to create as part of the application-development process. There
are a variety of project types for different types of development objects. Generally, a project type ensures that
only the necessary libraries are imported to enable you to work with development objects that are specific to a
project type. In this tutorial, you create an XS Project.
1. In the SAP HANA Development perspective in the Eclipse IDE, choose File New XS Project .
2. Make sure the Share project in SAP repository option is selected and enter a project name.
3. Choose Next.
4. Select the repository workspace you created in the previous step and choose Next.
5. Choose Finish without doing any further changes.
Results
The Project Explorer view in the SAP HANA Development perspective in Eclipse displays the new project. The
system information in brackets to the right of the project node name in the Project Explorer view indicates that
the project has been shared; shared projects are regularly synchronized with the Repository hosted on the SAP
HANA system you are connected to.
Context
SAP HANA Extended Application Services (SAP HANA XS) supports server-side application programming in
JavaScript. In this step, you add some simple JavaScript code that generates a page displaying the
words Hello, World!
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, right-click your XS
project, and choose New Other in the context-sensitive popup menu.
2. In the Select a wizard dialog, choose SAP HANA Application Development XS JavaScript File .
3. In the New XS JavaScript File dialog, enter MyFirstSourceFile.xsjs in the File name text box and
choose Next.
4. Choose Finish.
5. In the MyFirstSourceFile.xsjs file, enter the following code and save the file:
$.response.contentType = "text/html";
$.response.setBody( "Hello, World !");
Note
By default, saving the file automatically commits the saved version of the file to the repository.
The application descriptors are mandatory and describe the framework in which an SAP HANA XS
application runs. The .xsapp file indicates the root point in the package hierarchy where content is to be
served to client requests; the .xsaccess file defines who has access to the exposed content and how.
Note
By default, the project-creation wizard creates the application descriptors automatically. If they are not
present, you will see a 404 error message in the Web browser when you call the XS JavaScript service.
In this case, you will need to create the application descriptors manually. See the SAP HANA Developer
Guide for SAP HANA Studio for instructions.
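For orientation, minimal descriptor contents might look as follows; treat these as illustrative defaults rather than the exact files the wizard generates. The .xsapp file marks the package root and is typically empty, or an empty JSON object:

```json
{}
```

A minimal .xsaccess file that exposes the package content:

```json
{
  "exposed": true
}
```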
7. Open the context menu for the new files (or the folder/package containing the files) and select Team
Activate All . The activate operation publishes your work and creates the corresponding catalog objects;
you can now test it.
Context
Check if your application is working and if the Hello, World! message is displayed.
Procedure
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run As 1 XS Service .
Note
You might need to enter the credentials of the database user you created in this tutorial again.
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also
launch your application from the SAP Cloud Platform cockpit by choosing the application URL after
navigating to Applications HANA XS Applications . For more information, see Launch SAP HANA XS
Applications [page 1583].
Hello, World !
Context
To extract data from the database, you use your JavaScript code to open a connection to the database and
then prepare and run an SQL statement. The results are added to the Hello, World! response. You use the
following SQL statement to extract data from the database:
select * from DUMMY
The SQL statement returns one row with one field called DUMMY, whose value is X.
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, open the
MyFirstSourceFile.xsjs file in the embedded JavaScript editor.
2. In the MyFirstSourceFile.xsjs file, replace your existing code with the following code:
$.response.contentType = "text/html";
var output = "Hello, World !";
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("select * from DUMMY");
var rs = pstmt.executeQuery();
if (!rs.next()) {
$.response.setBody("Failed to retrieve data");
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
output = output + " This is the response from my SQL: " + rs.getString(1);
$.response.setBody(output);
}
rs.close();
pstmt.close();
conn.close();
4. Open the context menu of the MyFirstSourceFile.xsjs file and choose Team Activate All .
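For readers without an SAP HANA system at hand, the connection-handling pattern above can be exercised locally. The following sketch stubs the `$.db` and `$.response` APIs with just enough behavior to emulate the one-row DUMMY result; only the lower half mirrors the tutorial code, and the stub itself is purely illustrative:

```javascript
// Minimal stub of the XSJS "$" API (illustration only; the XS runtime
// provides the real object). DUMMY always returns one row with value "X".
const $ = {
  response: { contentType: null, status: null, body: null,
              setBody(b) { this.body = b; } },
  net: { http: { INTERNAL_SERVER_ERROR: 500 } },
  db: {
    getConnection() {
      return {
        prepareStatement(sql) {
          return {
            executeQuery() {
              let read = false;
              return {
                next() { if (read) return false; read = true; return true; },
                getString(i) { return "X"; },
                close() {}
              };
            },
            close() {}
          };
        },
        close() {}
      };
    }
  }
};

// The tutorial's open/prepare/execute/close pattern:
$.response.contentType = "text/html";
var output = "Hello, World !";
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("select * from DUMMY");
var rs = pstmt.executeQuery();
if (!rs.next()) {
  $.response.setBody("Failed to retrieve data");
  $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
  output = output + " This is the response from my SQL: " + rs.getString(1);
  $.response.setBody(output);
}
rs.close();
pstmt.close();
conn.close();

console.log($.response.body);
// prints: Hello, World ! This is the response from my SQL: X
```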
Context
Check if your application is retrieving data from your SAP HANA database.
Procedure
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run as XS Service .
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also
launch your application from the SAP Cloud Platform cockpit by choosing the application URL after
navigating to Applications HANA XS Applications . For more information, see Launch SAP HANA XS
Applications [page 1583].
Results
You can enable the SAP HANA Interactive Education (SHINE) demo application for a new or existing SAP HANA
tenant database in your trial account.
Context
SAP HANA Interactive Education (SHINE) demonstrates how to build native SAP HANA applications. The demo
application comes with sample data and design-time developer objects for the application's database tables,
data views, stored procedures, OData, and user interface. For more information, see the SAP HANA Interactive
Education (SHINE) documentation.
By default, SHINE is available for all SAP HANA tenant databases in trial accounts in the Neo environment.
1. Log in to the SAP Cloud Platform cockpit and navigate to a subaccount. For more information, see Navigate to
Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Restriction
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. To enable SHINE for an SAP HANA tenant database, you must first create a SHINE user. If you are enabling
SHINE for a new SAP HANA tenant database, a SHINE user can be automatically created during the
database creation. If you are enabling SHINE for an existing SAP HANA tenant database, you must
manually create the SHINE user.
Enable SHINE for a new SAP HANA tenant database:
1. Follow the steps described in Create SAP HANA Tenant Databases.
2. From the list of all databases and schemas, choose the SAP HANA tenant database you just created.
3. In the overview in the lower part of the screen, choose the SAP HANA Interactive Education
(SHINE) link under Education Tools.
Enable SHINE for an existing SAP HANA tenant database:
1. From the list of all databases and schemas, choose the SAP HANA tenant database for which you
want to enable SHINE.
2. In the overview in the lower part of the screen, open the SAP HANA Cockpit link under
Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for
the SYSTEM user.
The first time you log in to the SAP HANA Cockpit, you are informed that you don't have
the roles that you need to open the SAP HANA cockpit.
4. Choose OK. The required roles are assigned to you automatically.
5. Choose Continue.
You are now logged in to the SAP HANA Cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new SHINE user.
Note
The user name can contain only uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'),
numbers ('0' - '9'), and underscores ('_').
Note
The password must contain at least one uppercase and one lowercase letter ('a' - 'z', 'A'
- 'Z') and one number ('0' - '9'). It can also contain special characters (except ", ' and
\).
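The two notes above can be encoded as small checks. These helper names are illustrative, not part of SHINE or SAP HANA, and they cover only the rules quoted here; the platform may enforce further constraints, such as a minimum length:

```javascript
// Hypothetical helpers encoding the quoted rules.
function isValidShineUsername(name) {
  // uppercase/lowercase letters, digits, and underscores only
  return /^[A-Za-z0-9_]+$/.test(name);
}

function isValidShinePassword(pw) {
  // at least one lowercase letter, one uppercase letter, and one digit;
  // the characters ", ' and \ are not allowed
  return /[a-z]/.test(pw) && /[A-Z]/.test(pw) && /[0-9]/.test(pw)
      && !/["'\\]/.test(pw);
}

console.log(isValidShineUsername("SHINE_USER1")); // true
console.log(isValidShineUsername("shine-user"));  // false: hyphen
console.log(isValidShinePassword("Shine123!"));   // true
console.log(isValidShinePassword('Shine123"'));   // false: quote character
```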
A login screen for the SHINE demo application is shown in a new browser window.
4. Enter the credentials of the SHINE user you created and choose Login.
Note
The first time you log in to the SHINE demo application, you are prompted to change your initial
password.
Results
You see the SHINE demo application for your SAP HANA tenant database. Consult the SAP HANA Interactive
Education (SHINE) documentation for detailed information about using the application.
You can open your SAP HANA XS applications in a Web browser directly from the cockpit.
Context
Note
This feature is only available for SAP HANA XS applications in single container SAP HANA systems. For SAP
HANA XS applications in SAP HANA tenant database systems, use SAP Cloud Platform Web IDE or SAP
HANA cockpit to manage your applications.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Note
If an HTTP status 404 (not found) error is shown, bear in mind that the cockpit displays only the root of
an application’s URL path. This means that you might have to do one of the following:
○ Add the application name to the URL address in the browser, for example, hello.xsjs.
○ Use an index.html file, which is the default setting for the file displayed when the package is
accessed without specifying a file name in the URL.
○ Override the above default setting by specifying the default_file keyword in the .xsaccess file, for
example:
{
"exposed" : true,
"default_file": "hello.xsjs"
}
Related Information
Use SAP HANA single-container database systems designed for developing with SAP HANA in a productive
environment.
Prerequisites
An SAP HANA XS database system is deployed in a subaccount in your enterprise account. For more
information, see Install Database Systems [page 884].
Note
To find out the latest SAP HANA revision supported by SAP Cloud Platform in the Neo environment, see
What's New for SAP HANA Service.
Before going live with an application for which a significant number of users and/or significant load is expected,
you should do a performance load test. This is best practice in the industry and we strongly recommend it for
HANA XS applications.
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform. For more information, see Create a
Database Administrator User [page 1589].
Caution
Each SAP HANA XS database system has a technical database user NEO_<guid>, which is created
automatically when the database system is assigned to a subaccount. A technical database user is not the
same as a normal database user and is provided purely as a mechanism for enabling schema access.
Caution
Do not delete or change the technical database user in any way (password, roles, permissions, and so on).
Features
Feature Description
Connectivity destinations:
● Connectivity for SAP HANA XS (Enterprise Version) [page 330]
● Maintaining HTTP Destinations
Monitoring:
● Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1592]
● Configure Availability Checks for SAP HANA XS Applications from the Console Client [page 1594]
● View Monitoring Metrics of a Database System [page 898]
● View Monitoring Metrics of an SAP HANA XS Application
Launch SAP HANA XS applications:
● Launch SAP HANA XS Applications [page 1583]
Note
The support for database schemas on shared SAP HANA databases in trial accounts has ended. We
recommend that you create an SAP HANA tenant database on a shared SAP HANA tenant database system.
See Create SAP HANA Tenant Databases.
SAP Cloud Platform supports the following Web-based tools: SAP HANA Web-based Development Workbench,
SAP HANA Cockpit, and SAP HANA XS Administration Tool.
Prerequisites
● You have a database user. See Creating Database Users [page 1589].
● Your database user is assigned the roles required for the relevant tool. See Roles Required for Web-based
Tools [page 1592].
Context
You can access the SAP HANA Web-based tools using the cockpit or the tool URLs. The following table
summarizes what each supported tool does and how to access it.
SAP HANA Web-based Development Workbench
Includes an all-purpose editor tool that enables you to maintain and run design-time objects in the SAP
HANA repository. It does not support modeling activities.
Access: Development Tools section: SAP HANA Web-based Development Workbench, or
https://<database instance><subaccount>.<host>/sap/hana/xs/ide/

SAP HANA Cockpit
Provides you with a single point of access to a range of Web-based applications for the online
administration of SAP HANA. For more information, see the SAP HANA Administration Guide.
Access: Administration Tools section: SAP HANA Cockpit, or
https://<database instance><subaccount>.<host>/sap/hana/xs/admin/cockpit
Note
It is not possible to use the SAP HANA database lifecycle manager (HDBLCM) with the cockpit.

SAP HANA XS Administration Tool
Allows you, for example, to configure security options and HTTP destinations. For more information, see
the SAP HANA Administration Guide.
Access: Administration Tools section: SAP HANA XS Administration, or
https://<database instance><subaccount>.<host>/sap/hana/xs/admin/
Remember
When using the tools, log on with your database user (not your SAP Cloud Platform user). If this is your
initial logon, you will be prompted to change your password. You are responsible for choosing a strong
password and keeping it secure.
Related Information
Use the database user feature in the SAP Cloud Platform cockpit to create a database administration user for
SAP HANA XS databases, and set up database users in SAP HANA for the members of your development
team.
To create database users for SAP HANA XS databases, perform the following steps:
Related Information
As a subaccount administrator, you can use the database user feature provided in the cockpit to create your
own database user for your SAP HANA database.
Context
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform.
Caution
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Choose SAP HANA / SAP ASE Databases and Schemas in the navigation area.
You see all databases that are available in the subaccount, along with their details, including the database
type, version, memory size, state, and the number of associated databases.
3. Select the relevant SAP HANA XS database.
4. In the Development Tools section, click Database User.
A message confirms that you do not yet have a database user.
5. Choose Create User.
Your user and initial password are displayed. Change the initial password when you first log on to an SAP
HANA system, for example the SAP HANA Web-based Development Workbench.
Note
○ Your database user is assigned a set of permissions for administering the SAP HANA database
system, which includes HCP_PUBLIC and HCP_SYSTEM. The HCP_SYSTEM role contains, for
example, privileges that allow you to create database users and grant additional roles to your own
and other database users.
○ You also require specific roles to use the SAP HANA Web-based tools. For security reasons, only
the role that provides access to the SAP HANA Web-based Development Workbench is assigned as
default.
6. To log on to the SAP HANA Web-based Development Workbench and change your initial password now
(recommended), copy your initial password and then close the dialog box.
You do not have to change your initial password immediately. You can open the dialog box again later to
display both your database user and initial password. Since this poses a potential security risk, however,
you are strongly advised to change your password as soon as possible.
7. In the Development Tools section, click SAP HANA Web-based Development Workbench.
8. On the SAP HANA logon screen, enter your database user and initial password.
9. Change your password when prompted.
Caution
You are responsible for choosing a strong password and keeping it secure. If your user is blocked or if
you've forgotten the password of your user, another database administration user with USER_ADMIN
privileges can unlock your user.
Next Steps
● Tip
There may be some roles that you cannot assign to your own database user. In this case, we
recommend that you create a second database user (for example, ROLE_GRANTOR) and assign it the
● In the SAP HANA system, you can now create database users for the members of your subaccount and
assign them the required developer roles.
● To use other SAP HANA tools, such as the SAP HANA cockpit or the SAP HANA XS Administration Tool,
you must first assign yourself access to them. See Assign Roles Required for the SAP HANA XS
Administration Tool [page 1591].
Related Information
To work with the SAP HANA XS Administration Tool, add the required roles to your database user.
Context
The initial set of roles of your database user also contains the sap.hana.xs.ide.roles::Developer role, allowing you
to work with the SAP HANA Web-based Development Workbench, but not the SAP HANA XS Administration
tool.
Procedure
○ Use the SAP HANA studio in the Eclipse IDE. For more information, see Connect to SAP
HANA Databases via the Eclipse IDE (Neon) [page 971].
○ Use the SAP HANA Web-based Development Workbench. For more information, see Supported SAP
HANA Web-based Tools [page 1587].
To use the SAP HANA Web-based tools, you require specific roles.
Role: sap.hana.xs.ide.roles::EditorDeveloper (or parent role sap.hana.xs.ide.roles::Developer)
Description: Use the Editor component of the SAP HANA Web-based Development Workbench.

Role: sap.hana.xs.admin.roles::TrustStoreViewer
Description: Read-only access to the trust store, which contains the server's root certificate or the
certificate of the certification authority that signed the server’s certificate.

Role: sap.hana.xs.admin.roles::TrustStoreAdministrator
Description: Full access to the SAP HANA XS application trust store to manage the certificates required
to start SAP HANA XS applications.
Related Information
In the cockpit, you can configure availability checks for the SAP HANA XS applications running on your
productive SAP HANA database system.
Prerequisites
● The manageMonitoringConfiguration scope is assigned to the used platform role for the subaccount. For
more information, see Platform Scopes [page 1910].
Procedure
For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the cockpit, choose Applications HANA XS Applications in the navigation area of the subaccount.
3. Select an application from the list, and in the Application Details panel, configure the check.
When your availability check is created, you can view your application's latest HTTP response code and
response time as well as a status icon showing whether your application is up or down. If you want to
receive alerts when your application is down, you have to configure alert recipients from the console client.
For more information, see the Subscribe recipients to notification alerts. step in Configure Availability
Checks for SAP HANA XS Applications from the Console Client [page 1594].
Related Information
In the console client, you can configure an availability check for your SAP HANA XS application and subscribe
recipients to receive alert e-mail notifications when it is down or responds slowly.
Prerequisites
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create the availability check.
Execute:
○ Replace "mysubaccount", "myhana:myhanaxsapp" and "myuser" with the technical name of your
subaccount, and the names of the productive SAP HANA database, application, and user respectively.
○ The availability URL (/heartbeat.xsjs in this case) is not provided by default by the platform. Replace it
with a suitable URL that is already exposed by your SAP HANA XS application or create it. Keep in mind
the limitations for availability URLs. For more information, see Availability Checks.
Note
In case you want to create an availability check for a protected SAP HANA XS application, you need
to create a subpackage, in which to create an .xsaccess file with the following content:
{
"exposed": true,
"authentication": null,
"authorization": null
}
○ The check will trigger warnings "-W 4" if the response time is above 4 seconds and critical alerts "-C 6"
if the response time is above 6 seconds or the application is not available.
○ Use the respective host for your subaccount type.
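The exact command is not shown in this extract. Based on the placeholders described above and the create-availability-check command listed under Related Information, an invocation might look roughly like this; the flag names are assumptions and should be checked against the command reference:

```shell
# Hedged sketch - verify flag names against the create-availability-check
# command documentation before use. <host> is the host for your
# subaccount type.
neo create-availability-check -a mysubaccount -b myhana:myhanaxsapp \
    -u myuser -h <host> -U /heartbeat.xsjs -W 4 -C 6
```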
3. Subscribe recipients to notification alerts.
Execute:
○ Replace "mysubaccount", "myhana" and "myuser" with the technical name of your subaccount, and
the names of the productive SAP HANA database and user respectively.
○ Replace "[email protected]" with the e-mail addresses that you want to receive alerts.
Separate e-mail addresses with commas. We recommend that you use distribution lists rather than
personal e-mail addresses. Keep in mind that you remain responsible for handling personal e-mail
addresses in accordance with applicable data privacy regulations.
○ Use the respective host for your subaccount type.
Note
Setting an alert recipient for an application will trigger sending all alerts for this application to the
configured e-mail(s). Once the recipients are subscribed, you do not need to subscribe them again
after every new check you configure. You can also set the recipients on subaccount level if you skip the
-b parameter so that they receive alerts for all applications and for all the metrics you are monitoring.
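As with the previous step, the exact command is not shown here. Based on the set-alert-recipients command listed under Related Information, a sketch might look like this; the flag names and the e-mail address are illustrative assumptions:

```shell
# Hedged sketch - verify flag names against the set-alert-recipients
# command documentation before use. Skip -b to subscribe the recipients
# at subaccount level instead of for one application.
neo set-alert-recipients -a mysubaccount -b myhana -u myuser -h <host> \
    -e monitoring-dl@example.com
```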
Related Information
Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1592]
Regions and Hosts Available for the Neo Environment [page 14]
Availability Checks Commands
list-availability-check [page 2056]
create-availability-check [page 1962]
delete-availability-check [page 1980]
Alert Recipients Commands
list-alert-recipients [page 2058]
set-alert-recipients [page 2119]
clear-alert-recipients [page 1954]
In the cockpit, you can view the current metrics of a selected database system to get information about its
health state. You can also view the metrics history of a productive database to examine the performance trends.

CPU Load
The percentage of the CPU that is used on average over the last minute.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

Disk I/O
The number of bytes per second that are currently being read or written to the disc.
This metric is updated every minute. An alert is triggered when 5 consecutive checks with an interval of 1 minute are not in an OK state.

Network Ping
The percentage of packets that are lost to the database host.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

OS Memory Usage
The percentage of the operating system memory that is currently being used.
This metric is updated every minute. An alert is triggered when 2 consecutive checks with an interval of 1 minute are not in an OK state.

Used Disc Space
The percentage of the local discs of the operating system that is currently being used.
This metric is updated every minute. An alert is triggered when 5 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Availability
● OK - the database is reachable from our central admin component via JDBC.
● Critical - either the database is down or overloaded, or there's a network issue.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Alerting Availability
● OK - alerts can be retrieved from the SAP HANA system.
● Critical - alerts cannot be retrieved as there is no connection to the database. This also implies that any other visible metric may be outdated.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Compile Server
● OK - the compile server is running on the SAP HANA system.
● Critical - the compile server crashed or was otherwise stopped. The service should recover automatically. If this does not work, a restart of the system might be necessary.
This metric is updated every 10 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Backup Volumes Availability
● OK - the backup volumes are available.
● Critical - the backup volumes are not available.
This metric is updated every 15 minutes.

HANA DB Data Backup Age
● OK - the age of the last data backup is below the critical threshold.
● Critical - the age of the last data backup is above the critical threshold.
This metric is updated every 60 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA DB Data Backup Exists
● OK - the data backup exists.
● Critical - no data backup exists.
This metric is updated every 60 minutes.

HANA DB Data Backup Successful
● OK - the last data backup was successful.
● Critical - the last data backup was not successful.
This metric is updated every 60 minutes.

HANA DB Log Backup Successful
● OK - the last log backup was successful.
● Critical - the last log backup failed.
This metric is updated every 10 minutes.

HANA DB Service Memory Usage
● OK - no server is running out of memory.
● Critical - a service is causing an out of memory error. See SAP Note 1900257.
This metric is updated every 5 minutes. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

HANA XS Availability
● OK - XSEngine accepts HTTPS connections.
● Critical - XSEngine does not accept HTTPS connections.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

Sybase ASE Availability
● OK - the database is reachable from our central admin component via JDBC.
● Critical - either the database is down or overloaded, or there's a network issue.
This metric is updated every minute. An alert is triggered when 3 consecutive checks with an interval of 1 minute are not in an OK state.

Sybase ASE Long Running Trans
● OK - a transaction is running for up to an hour.
● Warning - a transaction is running for more than an hour.
● Critical - a transaction is running for more than 13 hours.
This metric is updated every 2 minutes. An alert is triggered when a consecutive check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Fm State
FaultManager is a component for highly available (HA) SAP ASE systems that triggers a failover in case the primary node is not working.
● OK - FaultManager for a system that is set up as an HA system is running properly.
● Critical - FaultManager is not working properly and the failover doesn’t work.
This metric is updated every 2 minutes. An alert is triggered when a consecutive check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Latency
● OK - the latency for the HA replication path is less than or equal to 10 minutes.
● Warning - the latency is greater than 10 minutes.
● Critical - the latency is greater than 20 minutes. A high latency might lead to data loss if there is a failover.
This metric is updated every 2 minutes. An alert is triggered when a consecutive check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Primary State
● OK - the primary host of a system that is set up as an HA system is running fine.
● Critical - the primary host is not running properly.
This metric is updated every 2 minutes. An alert is triggered when a consecutive check with an interval of 1 minute is not in an OK state.

Sybase ASE HADR Standby State
● OK - the secondary or standby host of a system that is set up as an HA system is running properly.
● Critical - the secondary or standby host is not running properly.
This metric is updated every 2 minutes. An alert is triggered when a consecutive check with an interval of 1 minute is not in an OK state.
Prerequisites
The readMonitoringData scope is assigned to the used platform role for the subaccount. For more information,
see Platform Scopes [page 1910].
Procedure
For more information, see Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. Navigate to the Database Systems page either by choosing SAP HANA / SAP ASE Database Systems
from the navigation area or from the Overview page.
All database systems available in the selected subaccount are listed with their details, including the
database version and state, and the number of associated databases.
3. Choose the entry for the relevant database system in the list.
4. Choose Monitoring from the navigation area to get detailed information about the current state and the
history of metrics for the selected database system.
When you open the checks history, you can view graphic representations for each of the different checks,
and zoom in to see additional details. If you zoom in a graphic horizontally, all other graphics also zoom in
to the same level of detail. Press Shift and drag to pan a graphic. Zoom out to the initial size by double-
clicking.
You can select different periods for each check. Depending on the interval you select, data is aggregated as
follows:
○ Last 12 or 24 hours - data is collected every minute.
○ Last 7 days - data is aggregated from the average values for each 10 minutes.
○ Last 30 days - data is aggregated from the average values for each hour.
You can also select a custom time interval for viewing check history.
Related Information
You can only debug SAP HANA server-side JavaScript with the SAP HANA Tools plugin for Eclipse as of release
7.4. If you are working with lower plugin versions, use the SAP HANA Web-based Development Workbench to
perform your debugging tasks.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [AWS, Azure, or GCP Regions].
3. In the HANA XS Applications table, select the application to display its details.
4. In the Application Details section, click Open in Web-based Development Workbench. Note that the SAP
HANA Web-based Development Workbench can also be opened directly at the following URL: https://
<database instance><subaccount>.<host>/sap/hana/xs/ide/
5. Depending on whether you want to debug a .xsjs file or a more complex scenario (set a breakpoint in
a .xsjs file and run another file), do the following:
○ .xsjs file:
1. Set the breakpoints and then choose the Run on server (F8) button.
○ Complex scenario:
1. Set the breakpoint in the .xsjs file you want to debug.
2. Open a new tab in the browser and then open the other file on this tab by entering its URL
(https://<database instance><subaccount>.<host>/<package>/<file>).
If you synchronously call the .xsjs file in which you have set a breakpoint and then open the other
file in the SAP HANA Web-based Development Workbench and execute it by choosing the Run on
server (F8) button, you will block your debugging session. You will then need to terminate the
session by closing the SAP HANA Web-based Development Workbench tab.
Note
If you leave your debugging session idle for some time once you have started debugging, your session
will time out. An error in the WebSocket connection to the backend will be reported and your
WebSocket connection for debugging will be closed. If this occurs, reopen the SAP HANA Web-based
Development Workbench and start another debugging session.
Valid for SAP HANA instances running SP8 or lower only. Use this procedure to configure your HANA XS
applications to use Security Assertion Markup Language (SAML) 2.0 authentication. This is necessary if you
want to implement identity federation with your corporate identity providers.
Prerequisites
● You have the SAP HANA Tools installed in your Eclipse IDE. See Install SAP HANA Tools for Eclipse [page
1569].
● You have a user on the productive landscape of SAP Cloud Platform. See Purchase a Customer Account.
● You have an SAP HANA database user on the productive landscape of SAP Cloud Platform. See Create a
Database Administrator User [page 1589].
● You have a corporate identity provider (IdP) configured with its own trust settings (key pair and
certificates). See the identity provider vendor’s documentation for more information.
Note
To establish successful trust with SAP HANA XS Engine on SAP Cloud Platform, the identity provider
must have the following features:
○ Supports unsigned SAML requests
○ Sends its signing certificate when sending a SAML response
● You have an SAP HANA XS engine configured with its key pair and certificates. See the SAP HANA
Administration Guide.
Restriction
This procedure is valid for productive HANA instances running SAP HANA SP8 or lower. For SAP HANA SP9
instances, see the Configure SSO with SAML Authentication for SAP HANA XS Applications section in the
SAP HANA Administration Guide.
Use this procedure to configure your HANA XS applications to use Security Assertion Markup Language
(SAML) 2.0 authentication. This is necessary if you want to implement identity federation with your corporate
identity providers. See Authorization and Trust Management in the Neo Environment [page 2358].
Procedure
1. Download the identity provider metadata. See the identity provider vendor’s documentation for more
information.
2. Store the IdP signing certificate in a valid PEM or DER file, enclosing the certificate content in -----BEGIN
CERTIFICATE----- and -----END CERTIFICATE-----.
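A minimal sketch of such a PEM file follows; the Base64 payload shown here is a placeholder, not a real certificate:

```
-----BEGIN CERTIFICATE-----
MIIC...Base64-encoded signing certificate content...
-----END CERTIFICATE-----
```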
3. Upload the PEM or DER file to SAP Cloud Platform using the upload-hanaxs-certificates command.
Tip
If you get an error message while uploading the certificates, try to fix the problem using the
reconcile-hanaxs-certificates command. See reconcile-hanaxs-certificates [page 2100].
4. Restart the SAP HANA XS service so the upload takes effect. This is done using the restart-hana console
command.
Procedure
○ sap.hana.xs.admin.roles::HTTPDestAdministrator
○ sap.hana.xs.admin.roles::HTTPDestViewer
○ sap.hana.xs.admin.roles::RuntimeConfAdministrator
○ sap.hana.xs.admin.roles::RuntimeConfViewer
○ sap.hana.xs.admin.roles::SAMLAdministrator
○ sap.hana.xs.admin.roles::SAMLViewer
4. Open the SQL Console and execute the following set of statements:
a. To create the SAML 2.0 identity provider:
Tip
Get the certificate subject and issuer from the IdP certificate. If you don’t have direct access to the
certificate, use a proper file viewer tool to view the certificate contents from the PEM or DER file.
Note
With this statement, you also enable the automatic creation of a corresponding SAP HANA
database user at first login. Otherwise, you will have to create the user manually if it does not exist. See
the SAP HANA Administration Guide.
b. To create a destination:
<uppercase idp name> Create a short name for this IdP in uppercase.
Note
You need to configure all four endpoints, executing all four statements.
5. Open the SAP HANA XS Administration tool (see the SAP HANA Administration Guide). For the required
applications, configure SAML authentication to use this identity provider:
a. Select the application.
b. Go to the SAML section.
c. Choose Identity Provider and set this identity provider as value.
Procedure
1. Download the SAP HANA service provider metadata from the following URL:
https://<SAP HANA url>/sap/hana/xs/saml/info.xscfunc
Tip
You can get the SAP HANA URL from the HANA XS Applications section in the cockpit.
2. Import the SAP HANA service provider metadata in the identity provider. See the identity provider vendor’s
documentation for more information.
Open the required application and check if SAML authentication with the required identity provider works. You
should be redirected to the identity provider and prompted to log in. After successful login, you are shown the
application.
To be able to call SAP Cloud Platform services from SAP HANA XS applications, you need to assign a
predefined trust store to the HTTP destination that defines the connection details for a specific service. The
trust store contains the certificate required to authenticate the calling application.
Prerequisites
In the SAP HANA repository, you have created the HTTP destination (.xshttpdest file) to the service to be
called. The file must have the .xshttpdest extension and be located in the same package as the application
that uses it or in one of the application's subpackages.
Procedure
Related Information
A Multitarget Application (MTA) is a package comprised of multiple application and resource modules, which
have been created using different technologies and deployed to different runtimes, but have a common lifecycle.
Complex business applications are composed of multiple parts developed with focus on micro-service design
principles, API-management, usage of the OData protocol, increased usage of application modules developed
with different languages, IDEs, and build methodologies. Thus, development, deployment, and configuration of
separate elements introduce a variety of lifecycle and orchestration challenges. To address these challenges,
SAP introduces the Multitarget Application (MTA) concept. It addresses the complexity of continuous
deployment by employing a formal target-independent application model.
An MTA comprises multiple modules created with different technologies and deployed to different target
runtimes, but having a common lifecycle. Initially, developers describe the modules of the application, their
interdependencies to other modules and services, and their required and exposed interfaces. Afterward, the SAP
Cloud Platform validates, orchestrates, and automates the deployment of the MTA.
For more information about the Multitarget Application model, see the official specification, The Multitarget
Application Model.
● Multitarget Application deployment descriptor: Defining MTA Deployment Descriptors for the Neo Environment [page 1612]
● Multitarget Application development descriptor: Defining MTA Development Descriptors [page 1611]
● Multitarget Application extension descriptor: Defining MTA Extension Descriptors [page 1276]
● Multitarget Application module types and parameters: MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617]
● How to use transport management tools for moving MTA archives among subaccounts: Integration with Transport Management Tools [page 1645]
Related Information
● A Multitarget Application (MTA) archive that bundles all the deployable modules and configurations
together with the accompanying MTA deployment descriptor, which describes the content of the MTA
archive, the module interdependencies, and required and exposed interfaces
Prerequisites
Procedure
Note
Strictly adhere to the correct indentations when working with YAML files, and do not use the
tabulator character.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.demo.basic
version: 0.1.0
modules:
  - name: example-java-app
    type: com.sap.java
    requires:
      - name: db-binding
    parameters:
      name: example
      jvm-arguments: -server
      java-version: JRE 7
      runtime: neo-java-web
      runtime-version: 1
Example
resources:
  - name: db-binding
    type: com.sap.hcp.persistence
    parameters:
      id:
The example above instructs the SAP Cloud Platform to create a database binding during the
deployment process.
At this point of the procedure, no database ID or credentials for your database binding have been added. The
reason for this is that all the content of the mtad.yaml so far is target-platform independent, meaning that
the same mtad.yaml could be deployed to multiple SAP Cloud Platform subaccounts. The information about
your database ID and credentials is, however, subaccount-specific. To keep the mtad.yaml target-platform
independent, you have to create an MTA extension descriptor. This file is used in addition to your primary
descriptor file, and contains data that is account-specific.
Note
Security-sensitive data, for example database credentials, should be always deployed using an MTA
extension descriptor, so that this data is encrypted.
Example
_schema-version: '3.1'
ID: com.example.demo.basic.config
extends: com.example.demo.basic
parameters:
  title: Basic Solution
  description: This is a sample of a basic Solution.
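The extension descriptor above supplies only the solution title and description. Following the same pattern, the subaccount-specific database binding data mentioned earlier could also be provided through an extension descriptor; the following is a sketch, where the id, user-id, and password values are placeholders you would replace with your own:

```yaml
_schema-version: '3.1'
ID: com.example.demo.basic.config
extends: com.example.demo.basic
resources:
  - name: db-binding
    parameters:
      id: <database-id>            # placeholder: your subaccount-specific database ID
      user-id: <database-user>     # placeholder: database user
      password: <database-password> # placeholder: database password
```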
Example
Manifest-Version: 1.0
Created-By: example.com
Name: example.war
Content-Type: application/zip
MTA-Module: example-java-app
The example above instructs the SAP Cloud Platform to link the module example-java-app to the
archive example.war.
Caution
Make sure that the MANIFEST.MF file complies with the JAR file specification.
The MTA extension descriptor file is deployed separately from the MTA archive.
Example
/example.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
d. Archive the content of the root directory in an .mtar format using an archiving tool capable of
producing a JAR archive.
Results
After you have created your Multitarget Application archive, you are ready to deploy it into the SAP Cloud
Platform as a solution. To deploy the archive, proceed as described in Deploy a Standard Solution [page 1682].
Multitarget Applications are defined in a development descriptor required for design-time and build purposes.
The development descriptor (mta.yaml) defines the elements and dependencies of a Multitarget Application
(MTA) compliant with the Neo environment.
Note
The MTA development descriptor (mta.yaml) is used to generate the deployment descriptor
(mtad.yaml), which is required for deploying an MTA to the target runtime. The MTA Archive Builder uses
the MTA development descriptor in order to create an MTA archive, including the mtad.yaml and the
MANIFEST.MF file.
An MTA development descriptor contains the following main elements, in addition to the deployment
descriptor elements:
● path
● build-parameters
Restriction
The SAP Web IDE currently does not support creating MTA development descriptors for the Neo environment.
You have to create the descriptor manually, using a text editor of your choice that supports the YAML serialization language.
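For illustration only, a minimal mta.yaml for the Java module used in the examples of this section might add the path and build-parameters elements on top of the deployment-descriptor content; the path value and the empty build-parameters section here are hypothetical placeholders, not prescribed values:

```yaml
_schema-version: '3.1'
ID: com.example.demo.basic
version: '0.1.0'
modules:
  - name: example-java-app
    type: com.sap.java
    path: example.war        # design-time location of the module binary (illustrative)
    build-parameters: {}     # build settings consumed by the MTA Archive Builder (placeholder)
```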
The Multitarget Application (MTA) deployment descriptor is a YAML file that defines the relations between you
as a provider of a deployable artifact and the SAP Cloud Platform as a deployer tool.
Using the YAML data serialization language, you describe the MTA in an MTA deployment descriptor
(mtad.yaml) file containing the following:
● Modules and module types that represent Neo environment applications and content, which form the MTA
and are deployed on the platform
● Resources and resource types that are not part of an MTA, but are required by the modules at runtime or at
deployment time
● Dependencies between modules and resources
● Technical configuration parameters, such as URLs, and application configuration parameters, such as
environment variables.
See the following examples of a basic MTA deployment descriptor that is defined in an mtad.yaml file:
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.descriptor
version: 0.1.0
modules:
  - name: example-java-app
    type: com.sap.java
    requires:
      - name: db-binding
    parameters:
      name: examplejavaapp
      jvm-arguments: -server
      java-version: JRE 7
      runtime: neo-java-web
      runtime-version: 1
resources:
  - name: db-binding
    type: com.sap.hcp.persistence
    parameters:
      id: fx7
      user-id:
      password:
Note
● The format and available options in the MTA deployment descriptor could change with newer
versions of the MTA specification. Always specify the schema version when defining an MTA
deployment descriptor, so that the SAP Cloud Platform knows against which specific MTA
specification version you are deploying.
Since the Neo environment supports a different set of module types, resource types, and configuration
parameters, the deployment of an MTA archive can be further configured by using MTA extension descriptors.
This allows administrators to adapt a deployment to a target or use case specific requirements, like setting
URLs, memory allocation parameters, and so on. For more information, see the official Multitarget Application
Model specification.
Related Information
You package the MTA deployment descriptor and module binaries in an MTA archive. You can manually do so as
described below, or alternatively use the Cloud MTA Build tool.
Note
There could be more than one module of the same type in an MTA archive.
The Multitarget Application (MTA) archive is created in a way compatible with the JAR file specification. This
allows us to use common tools for creating, modifying, and signing such types of archives.
● The MTA extension descriptor is not part of the MTA archive. During deployment you provide it as a
separate file, or as parameters you enter manually when the SAP Cloud Platform requests them.
● Using a resources directory as in some examples is not mandatory. You can store the necessary
resource files at the root level of the MTA archive, or in another directory with a name of your choice.
The following example shows the basic structure of an MTA archive. It contains a Java application .war file and
a META-INF directory, which contains an MTA deployment descriptor with a module and a MANIFEST.MF file.
Example
/example.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
The MANIFEST.MF file has to contain a name section for each MTA module that is part of the archive and has
file content. In the name section, the following information has to be added:
● Name - the path within the MTA archive, where the corresponding module is located. If it leads to a
directory, add a forward slash (/) at the end.
● Content-Type - the type of the file that is used to deploy the corresponding module
● MTA-module - the name of the module as it has been defined in the deployment descriptor
Note
● You can store one application in two or more application binaries contained in the MTA archive.
● According to the JAR specification, there must be an empty line at the end of the file.
Example
Manifest-Version: 1.0
Created-By: example.com
Name: example.war
Content-Type: application/zip
MTA-Module: example-java-app
The example above instructs the SAP Cloud Platform to:
● Look for the example.war file within the root of the MTA archive when working with module example-
java-app
● Interpret the content of the example.war file as an application/zip
Note
The example above is incomplete. To deploy a solution, you have to create an MTA deployment descriptor.
Then you have to create the MTA archive.
As an alternative to the procedure described above, you can also use the Cloud MTA Build Tool. See its
official documentation at Cloud MTA Build Tool.
Related Information
https://sap.github.io/cloud-mta-build-tool/
The Multitarget Application Model
JAR File Specification
Defining MTA Deployment Descriptors for the Neo Environment [page 1612]
Defining MTA Extension Descriptors [page 1276]
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617]
The Multitarget Application (MTA) extension descriptor is a YAML file that contains data complementary to the
deployment descriptor. The data can be environment or deployment specific, for example, credentials
depending on the user who performs the deployment. The MTA extension descriptor has a structure similar to
that of the deployment descriptor, following the Multitarget Application Model structure with several
limitations and differences. Normally, an extension descriptor extends the deployment descriptor, but it is also
possible for it to extend another extension descriptor, forming a chain of extension descriptors. It can add new
data or overwrite existing data if necessary.
Several extension descriptors can be additionally used after the initial deployment.
Note
The format and available options within the extension descriptor may change with newer versions of the
MTA specification. You must always specify the schema version option when defining an extension
descriptor to inform the SAP Cloud Platform which MTA specification version should be used. Furthermore,
the schema version used within the extension descriptor and the deployment descriptor should always be
the same.
In the examples below, we have a deployment descriptor, which has already been defined, and several
extension descriptors.
Deployment descriptor (only the beginning is shown here):
Example
_schema-version: '3.1'
ID: com.example.extension
The following is a basic example of an extension descriptor that adds data to, and overwrites data in, the
deployment descriptor. It instructs the SAP Cloud Platform to:
● Validate the extension descriptor against the MTA specification version 3.1
● Extend the com.example.extension deployment descriptor
Example
_schema-version: '3.1'
ID: com.example.extension.first
extends: com.example.extension
resources:
  - name: data-storage
    properties:
      existing-data: new-value
      non-existing-data: value
The following is an example of another extension descriptor that extends the extension descriptor from the
previous example:
Example
_schema-version: '3.1'
ID: com.example.extension.second
extends: com.example.extension.first
resources:
  - name: data-storage
    properties:
      second-non-existing-data: value
● The examples above are incomplete. To deploy a solution, you have to create a deployment descriptor and
an MTA archive.
● You can add new data in the modules, resources, parameters, properties, provides, and requires sections.
● You can overwrite existing data (in depth) in the modules, resources, parameters, properties, provides, and
requires sections.
● As of schema version 3.xx, by default parameters and properties are overwritable and optional. If you want
to make a certain parameter or property non-overwritable or required, you need to add specific metadata.
See Metadata for Properties and Parameters [page 1274].
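To illustrate the chain, assuming the data-storage resource in the deployment descriptor originally defined only existing-data, the effective set of properties after applying both extension descriptors above would look roughly like this sketch:

```yaml
resources:
  - name: data-storage
    properties:
      existing-data: new-value            # overwritten by com.example.extension.first
      non-existing-data: value            # added by com.example.extension.first
      second-non-existing-data: value     # added by com.example.extension.second
```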
Related Information
Defining MTA Deployment Descriptors for the Neo Environment [page 1612]
Defining Multitarget Application Archives [page 1239]
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617]
The Multitarget Application Model
Tip
This section contains collapsible subsections. By clicking on the arrow-shaped icon next to a subsection,
you can expand it to see additional information.
This section contains the parameters and options that can be used to compose the structure of an MTA
deployment descriptor or an MTA extension descriptor.
Note
As both descriptor types use the YAML file format, strictly adhere to the following syntax practices:
The supported target platform options describe general behavior and information about the deployed
Multitarget Application. The corresponding options are placed within the primary part of the MTA deployment
descriptor or MTA extension descriptor. That is, they are not placed within any modules or resources.
Note
● Note that any sensitive data should be placed within the MTA extension descriptor.
● To ensure that numeric values, such as product version and IDs, are not automatically interpreted as
numbers, always wrap them in single quotes.
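A short sketch of the quoting practice described in the note above; without the single quotes, a value such as 1.1.0 or 0.1.0 could be misinterpreted as a number:

```yaml
_schema-version: '3.1'     # quoted: interpreted as a string, not a number
version: '0.1.0'
parameters:
  hcp-deployer-version: '1.1.0'
```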
Option: _schema-version (Type: Enclosed String, use single quotes; Default Value: n/a; Mandatory: yes)
Version of the MTA specification to which the MTA deployment descriptor complies. The versions currently
supported by the SAP Cloud Platform are:
● 2.1
● 3.1
Option: ID (Type: String; Default Value: n/a; Mandatory: yes)
The identifier of the deployed artifact. The ID should follow the convention for a reverse-URL dot-notation, and
it has to be unique within a particular subaccount.
Option: extends (Type: String; Default Value: the ID of the deployment descriptor that is to be extended;
Mandatory: yes)
Used in an MTA extension descriptor to denote which MTA deployment descriptor should be extended.
Applicable only in extension descriptors.
Option: version (Type: Enclosed String, use single quotes; Default Value: n/a; Mandatory: yes)
Version of the current Multitarget Application. The format of the version is a numeric string of
<major>.<minor>.<micro>.
Note
The value must not exceed 64 symbols.
Option: parameters: hcp-deployer-version (Type: Enclosed String, use single quotes; Default Value: n/a;
Mandatory: yes)
Version of the deploy service of the SAP Cloud Platform. This version differs from the schema version. The
versions currently supported are:
● 1.0.0
● 1.1.0
● 1.2.0
Note
● Deployer version 1.0.0 is going to be deprecated. Use version 1.1.0 and higher.
● During a solution update, a different technical approach is employed. For more information, see General
Information About Solution Updates [page 1693].
The supported logotype formats are:
● png
● jpeg
● gif
The following syntax is for a .png logotype that has been encoded in Base64:
Example
logo: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFoAAABaCAMAAAAPdrEwAAAAnFBMVEX///..."
This section contains the modules that are supported by the SAP Cloud Platform and their parameters and
properties.
● The relation between a module and the entities created in the SAP Cloud Platform is not one-to-one,
that is, it is possible for one module to contain several SAP Cloud Platform entities and vice versa.
● Any security-sensitive data, such as user credentials and passwords, has to be placed in the MTA
extension descriptor.
Tip
Expand the following subsections by clicking on the arrow-shaped element to see the available parameters
and values.
name (Type: String; Default Value: n/a; Mandatory: yes)
HTML5 application name, which has to be unique within the current subaccount.
Note
The display-name and name parameters belong to an application level that is different from the one of the
application versions. If another application version is defined in the MTA deployment descriptor, then its
display name has to be identical to the display names of other already defined versions of the application,
or it has to be omitted.
version (Type: String; Default Value: n/a; Mandatory: yes)
Application version to be used in the HTML5 runtime. HTML5 modules with the same version can be deployed
only once. In the version parameter, the usage of a <timestamp> read-only variable is supported. Thus, a new
version string is generated with every deploy. For example, version: '0.1.0-${timestamp}'
active (Type: Boolean; Default Value: true; Mandatory: no)
This flag indicates whether the related version of the application should be activated or not. The default value
is true.
subscribe (Type: Boolean; Default Value: true; Mandatory: no)
When a provided solution is consumed, a subscription and designated entities might be created in the
consumer subaccount, unless the parameter is set to false.
sfsf-access-point (Type: Boolean; Default Value: false; Mandatory: no)
If true, the application is activated for the SAP SuccessFactors system. The default value is false.
sfsf-idp-access (Type: Boolean; Default Value: false; Mandatory: no)
If true, the extension application is registered as an authorized assertion consumer service for the SAP
SuccessFactors system to enable the application to use the SAP SuccessFactors identity provider (IdP)
for authentication.
sfsf-home-page-tiles (Type: Binary; Default Value: n/a; Mandatory: no)
Registers SAP SuccessFactors Employee Central (EC) home page tiles in the SAP SuccessFactors company
instance. For more information, see Home Page Tiles JSON File. Ensure that each tile name is unique within
the current subaccount.
com.sap.java and java.tomcat - used for deploying Java applications, either with the
proprietary SAP Java Web or the Java Web Tomcat runtime containers.
For more information about runtime containers, see Application Runtime Container [page 1430].
Note
You can deploy these application types using two or more war files contained in the MTA archive.
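As a sketch of the distinction described above, the following shows one module per container type; the module and application names are illustrative, and the parameter values follow the parameter tables later in this section (java.tomcat modules do not define the runtime parameter):

```yaml
modules:
  - name: example-tomcat-app
    type: java.tomcat
    parameters:
      name: exampletomcatapp
      runtime-version: '3'     # Tomcat 8
  - name: example-java-app
    type: com.sap.java
    parameters:
      name: examplejavaapp
      runtime: neo-java-web    # SAP Java Web runtime container
```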
name (Type: String; Default Value: n/a; Mandatory: yes)
Java application name, which has to be unique within the current subaccount. The name value length has to
be between 1 and 255 symbols.
runtime (Type: String; Default Value: neo-java-web; Mandatory: yes)
Depending on the module and its used runtime, use one of the following:
● For com.sap.java:
○ neo-java-web
○ neo-javaee6-wp
○ neo-javaee7-wp
● For java.tomcat - do not define this parameter
runtime-version (Type: Enclosed String, use single quotes; Default Value: for com.sap.java with runtime
neo-java-web - 1, with runtime neo-javaee6-wp - 2; for java.tomcat - 2; Mandatory: no)
If defining a specific runtime version is required, use one of the following:
● For com.sap.java - for example, 1 or 2
● For java.tomcat - for example, 2 or 3. The major supported runtime versions are 2 (with Tomcat 7) and 3
(with Tomcat 8).
java-version (Type: String; Default Value: JRE 7; Mandatory: no)
The JVM major version, for example JRE 7 or JRE 8.
compute-unit-size (Type: String; Default Value: LITE; Mandatory: no)
The virtual machine computing unit size. The available sizes are LITE, PRO, PREMIUM, PREMIUM_PLUS. For
more information, see Compute Units [page 1438].
minimum-processes (Type: Integer; Default Value: 1; Mandatory: no)
Minimum number of process instances. The allowed range is from 1 to 99.
Note
You either have to use both the minimum-processes and maximum-processes parameters, or neither.
maximum-processes (Type: Integer; Default Value: 1; Mandatory: no)
Maximum number of process instances. The allowed range is from 1 to 99.
Note
● You either have to use both the minimum-processes and maximum-processes parameters, or neither.
● The maximum-processes value should be equal to or higher than the minimum-processes value.
rolling-update (Type: Boolean; Default Value: false; Mandatory: no)
Performs an update of an application without downtime in one go.
Note
At least hcp-deployer-version 1.2.0 is required.
rolling-update-timeout (Type: Integer; Default Value: 60; Mandatory: no)
Defines how long the old process will be disabled before it is stopped.
Note
At least hcp-deployer-version 1.2.0 is required.
running-processes (Type: Integer; Default Value: n/a; Mandatory: no)
Specifies how many processes will run after the Java application is started. If not specified, the minimum
number is used.
jvm-arguments (Type: String; Default Value: n/a; Mandatory: no)
The relevant JVM arguments employed by the customer application.
connection-timeout (Type: Integer; Default Value: 20000; Mandatory: no)
The maximum timeout period for the connection, in milliseconds.
encoding (Type: String; Default Value: ISO-8859-1; Mandatory: no)
The used Uniform Resource Identifier (URI) encoding standard.
compression (Type: String; Default Value: "off"; Mandatory: no)
The use of gzip compression for optimizing HTTP response time between the Web server and its clients. The
available values are "on", "off", forced.
Note
● Always wrap the "on" or "off" values in quotation marks.
● Explicitly specify the compression-mime-types and compression-min-size parameters only when you
use the value "on".
compression-mime-types (Type: String; Default Value: n/a; Mandatory: no)
The used compression MIME types, for example text/json, text/xml, text/html.
compression-min-size (Type: Integer; Default Value: n/a; Mandatory: no)
The threshold size above which an HTTP response package is compressed to reduce traffic.
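Putting the compression-related parameters together, a module's parameters section could look like this sketch; the MIME type and threshold values are illustrative:

```yaml
parameters:
  compression: "on"                  # wrapped in quotation marks, as required
  compression-mime-types: text/json  # illustrative MIME type
  compression-min-size: 2048         # illustrative threshold: compress larger responses
```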
role-provider (Type: String; Default Value: n/a; Mandatory: no)
Defines the application that provides the role for the Java application. Use one of the following:
● sfsf
● hcp
roles (Type: List; Default Value: n/a; Mandatory: no)
Maps predefined Java application roles to the groups they have to be assigned to. It has to specify the
following parameters:
subscribe (Type: Boolean; Default Value: true; Mandatory: no)
When a provided solution is consumed, a subscription and designated entities might be created in the
consumer subaccount, unless the parameter is set to false.
sfsf-access-point (Type: Boolean; Default Value: false; Mandatory: no)
If true, the application is activated for the SAP SuccessFactors system. The default value is false.
sfsf-idp-access (Type: Boolean; Default Value: false; Mandatory: no)
If true, the extension application is registered as an authorized assertion consumer service for the SAP
SuccessFactors system to enable the application to use the SAP SuccessFactors identity provider (IdP) for
authentication.
sfsf-connections (Type: List; Default Value: n/a; Mandatory: no)
Use this to configure the connectivity of a Java extension application to an SAP SuccessFactors system. It
creates the required HTTP destination and registers an OAuth client for the Java application in SAP
SuccessFactors.
Note
SFSF connections can only be created after the corresponding Java application has been deployed and
started. This means that an sfsf-connections module depends on a com.sap.java module.
sfsf-outbound-connections (Type: List; Default Value: n/a; Mandatory: no)
Configures the connectivity from an SAP SuccessFactors system to the Java application. It creates the
required OAuth client to the Java application and, if required, the application identity provider configuration.
The sfsf-outbound-connections parameter is a YAML list comprised of entries with the following attributes:
sfsf-home-page-tiles (Type: Binary; Default Value: n/a; Mandatory: no)
Registers SAP SuccessFactors Employee Central (EC) home page tiles in the SAP SuccessFactors company
instance. For more information, see Home Page Tiles JSON File. Ensure that each tile name is unique within
the current subaccount.
destinations (Type: List; Default Value: n/a; Mandatory: no)
This parameter is a YAML list comprised of one or more connectivity destinations. To see the available
parameters and values, see the table “Destination Parameters” below.
Note
● If you have sensitive data, all destination parameters have to be moved to the MTA extension descriptor.
● When you redeploy a destination, any parameter changes performed after deployment of the destination are removed. Your custom changes have to be performed again.
owner — Indicates in which subaccount the content should be imported. The possible values are provider or consumer. (Type: String; Default: provider; Mandatory: no)
Note
● To reduce the risk of being out of sync, we recommend that you use YAML anchors.
● The value must not exceed 64 symbols.
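The YAML-anchor recommendation can be sketched as follows: declare the owner value once with an anchor and reuse it by alias, so the entries cannot drift out of sync. The surrounding module structure is illustrative:

```yaml
- name: first-content-module
  parameters:
    owner: &content-owner consumer
- name: second-content-module
  parameters:
    owner: *content-owner   # reuses the anchored value "consumer"
```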
target-site-id — Specifies the target site in which the content will be deployed. (Type: String; Default: n/a; Mandatory: no)
minimum-sapui5-version — Version of the minimum required SAPUI5 Runtime. The format of the version is a numeric string of <major>.<minor> or <major>.<minor>.<micro>. (Type: Enclosed String; Default: n/a; Mandatory: no)
Note
Use single quotes.
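Since the value is an enclosed string, quote it so that YAML does not parse it as a number. The version shown is illustrative:

```yaml
parameters:
  minimum-sapui5-version: '1.71'   # single quotes keep <major>.<minor> a string
```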
Note
You have to ensure that the back-end-*-id parameter values are numeric strings of exactly 20 digits.
html5-app-name — SAP Fiori application name, which has to be unique within the current subaccount. (Type: String; Default: n/a; Mandatory: yes)
html5-app-active — This flag indicates whether the related version of the application should be activated. The default value is true. (Type: Boolean; Default: true; Mandatory: no)
name — SAP Fiori custom role name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. (Type: String; Default: n/a; Mandatory: yes)
groups — List of group names to which the role has to be assigned. (Type: List; Default: n/a; Mandatory: no)
For more information, see Role Assignment of Fiori Roles to Security Groups [page 1677].
name — HTML5 application custom role name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. (Type: String; Default: n/a; Mandatory: yes)
groups — List of group names to which the role has to be assigned. (Type: List; Default: n/a; Mandatory: no)
For more information, see Role Assignment of HTML 5 Roles to Security Groups [page 1678].
Remember
The use of this module type with parameters valid for hcp-deployer-version: '1.0.0' will soon be
de-supported. We recommend that you use the parameters valid for hcp-deployer-version:
'1.1.0', or adapt your module type accordingly.
Remember
This deployer version will soon be de-supported. We recommend you use 1.1.0.
metadata-validation-setting — Enable or disable metadata validation, for example true. (Type: Boolean; Default: n/a; Mandatory: yes)
metadata-cache-setting — Enable or disable metadata cache, for example false. (Type: Boolean; Default: n/a; Mandatory: yes)
services — List of OData services. Parameters required for an OData service are: (Type: List; Default: n/a; Mandatory: yes)
Note
If a service with the same name/namespace/version combination already exists but has a different description, a different model-id, or a different default destination, the import will fail.
com.sap.hcp.sfsf-roles - used for uploading and importing SAP SuccessFactors HCM Suite roles from the SAP Cloud Platform system repository into the SAP SuccessFactors customer instance.
The role definitions must be described in a JSON file. For more information about creating the roles.json file, see Create the Resource File with Role Definitions.
com.sap.hcp.group - used for modeling the SAP Cloud Platform authorization groups.
name — Group name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. (Type: String; Default: n/a; Mandatory: yes)
To see the available parameters and values, see the table “Destination Parameters” below.
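A minimal sketch of a group module, assuming only the documented name parameter; the module and group names are illustrative:

```yaml
modules:
- name: example-group
  type: com.sap.hcp.group
  parameters:
    name: ExampleGroup   # unique within the subaccount, 1-255 symbols
```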
com.sap.integration - used for modeling the content for the SAP Cloud Platform Integration service.
technical-name — Technical name of the com.sap.integration module type. (Type: String; Default: n/a; Mandatory: yes)
Note
● Enable the Solutions Lifecycle Management service that you will use, in a subaccount that supports SAP Cloud Platform Integration. For more information, see Content Transport in the SAP Cloud Platform Integration documentation.
● Create a destination named CloudIntegration with the following properties:
○ Type - HTTP
○ URL - URL pointing to the /itspaces of the TMN node for the SAP Cloud Platform Integration tenant in the current subaccount
○ Proxy Type - Internet
○ Authentication - BasicAuthentication
○ User and password - credentials of a user that has the AuthGroup.IntegrationDeveloper role for the above-mentioned TMN node
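A minimal sketch of an integration content module, assuming only the documented technical-name parameter; the names are illustrative:

```yaml
modules:
- name: example-integration-content
  type: com.sap.integration
  parameters:
    technical-name: ExampleIntegrationContent
```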
This section contains the resource types and their parameters that are supported by the SAP Cloud Platform.
Note
● The relation between a module and the entities created in the SAP Cloud Platform is not one-to-one; that is, it is possible for one module to contain several SAP Cloud Platform entities.
● Any security-sensitive data, such as user credentials and passwords, has to be placed in the MTA extension descriptor.
<untyped> — Used for adding any properties that you might require and which you define. It does not have a lifecycle.
Note
The untyped resource is unclassified; that is, it does not have a type.
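An untyped resource is simply a resource declared without a type attribute; a minimal sketch, with illustrative names and values:

```yaml
resources:
- name: custom-settings   # no type attribute: untyped, no lifecycle
  properties:
    example-property: example-value
```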
Resource type: com.sap.hcp.persistence
id — Identifier of the database that will be bound to a deployed Java application. (Type: String; Default: n/a; Mandatory: yes)
Note
If you want to use a <DEFAULT> database binding, the standard data source jdbc/DefaultDB has to be set up at the stage of the Java application development.
Note
We recommend that you place this parameter in the MTA extension descriptor, if you are using one.
password — (Type: String; Default: n/a; Mandatory: no)
Note
We recommend that you place this parameter in the MTA extension descriptor, if you are using one.
Note
The provider subaccount must meet the following criteria:
You can model a named data source by using the binding-name parameter, which is added to the database binding resource required in the requires section of the com.sap.java and java.tomcat module types.
The MTA specification _schema-version 3.1 introduces the notion of metadata, which can be added to a certain property or parameter.
consumer-optional — Used when you want to provide your Multitarget Application for consumption by other subaccounts. You can add the consumer-optional metadata to a property to indicate that it should be populated with an MTA extension descriptor when your subscribers consume the Multitarget Application. If you do not provide the consumer-optional metadata, the deployment of the MTA deployment descriptor within your subaccount will fail due to missing data. (Type: Boolean; Default: true; Mandatory: no)
Example
resources:
- name: example-resource
  properties:
    user:
    password:
  properties-metadata:
    user:
      optional: true
      consumer-optional: false
    password:
      optional: true
      consumer-optional: false
...
Note
● The optional parameter has to be explicitly defined and set to true if you want to use the option consumer-optional. See the MTA specification for additional information.
● This option is available for Multitarget Application schema 3.1.0 and higher.
Example
resources:
- name: example-resource
  properties:
    user:
  properties-metadata:
    user:
      description: Example resource user name
...
Example
resources:
- name: example-resource
  properties:
    password:
  properties-metadata:
    password:
      sensitive: true
...
Example
resources:
- name: example-resource
  properties:
    description:
  properties-metadata:
    description:
      complex: true
...
Note
This parameter is not taken into account if you use it in conjunction with the sensitive parameter. The Password input field is used instead.
Example
resources:
- name: example-resource
  properties:
    user:
  properties-metadata:
    user:
      default-value: John Doe
...
Depending on the type of the destination that you wish to create (subaccount-level, application-level, subscription destination, and so on), the destination can be modeled as a com.sap.hcp.destination module, or as a parameter of the com.sap.java or java.tomcat modules. However, the options available when you create a destination are the same for all of the destination types.
description — (Type: String)
url — Use when the parameter type has the HTTP or LDAP values. Mandatory only for these types. (Type: URL; Mandatory: yes)
user — Mandatory if BasicAuthentication is the Authentication type, or if MAIL or RFC is the destination type. (Type: String)
password — Mandatory if BasicAuthentication is the Authentication type, or if MAIL or RFC is the destination type. (Type: String)
client — Use with the RFC parameter type. Mandatory only for this type. (Type: String; Mandatory: yes; Possible values: 3 digits, in single quotes)
client-ashost — Use with the RFC parameter type. Either this or client-mshost must be specified. (Type: String; Possible values: 00-99, in single quotes)
client-r3name — Use with the RFC parameter type, if client-mshost is specified. (Type: String; Possible values: 3 letters or digits)
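For instance, an RFC destination entry could quote its numeric client value as required above; the destination name is illustrative:

```yaml
destinations:
- name: ExampleRFC   # illustrative name
  type: RFC
  client: '001'      # 3 digits, in single quotes
```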
Example
ldap.
mail.
jco.client.
jco.destination.
The additional-properties values are not strictly verified during deployment, since they may vary widely. For example, such values might depend on the destination type or authentication type. If you use such additional values, ensure after deployment that the required elements have been properly created and operate as expected.
When modeling destinations, SAP Cloud Platform offers several keyword properties that you can use to refine your declaration when deploying a destination. You can have the following destination types:
application-url — This keyword can be placed only within the properties category of the provides section of the com.sap.java and java.tomcat module types. It is used when you want to extract the URL of the Java application and link it to a destination that you have modeled.
The following example contains a Java application that has a destination that leads to itself. Note that this example uses the MTA placeholder concept. For more information, see “Destination with Specific Target Platform Data Options” below.
Example
modules:
- name: java-module
  type: com.sap.java
  provides:
  - name: java-module
    properties:
      application-url: ${default-url}
  requires:
  - name: java-module
  parameters:
    name: exampleapp
    destinations:
    - name: ExampleWebsite
      type: HTTP
      url: ${java-module/application-url}
...
When modeling destinations, SAP Cloud Platform offers several keyword properties that allow you to express your intention more clearly when deploying a destination. There might be cases in which some of the destination data is not known to you prior to deploying the MTA archive, for example, the URL of a Java application that you want your destination to point to. To address these cases, SAP Cloud Platform provides several placeholders that you can use when you model your MTA.
Currently, all types of destinations support the following placeholders, which are automatically resolved with their valid values during deployment.
${default-url} — Instructs SAP Cloud Platform to resolve the placeholder value to the default Java application URL when deploying the destination. This placeholder can be part only of the property named application-url, which serves as a provided dependency of the com.sap.java and java.tomcat module types.
This example shows the usage of the ${default-url} placeholder. The modeled java-module provides the application-url dependency:
Example
modules:
- name: java-module
  type: com.sap.java
  provides:
  - name: java-module
    properties:
      application-url: ${default-url}
  parameters:
    name: exampleapp
...
Note
● This placeholder can be used only with destination types that have a URL within their properties, that is, destination types such as an HTTP destination.
● This URL can be automatically resolved only if the Java application has only one URL.
${account-name} — Instructs SAP Cloud Platform to resolve the placeholder value to your subaccount name when deploying the destination. This placeholder can be used only in the url parameter for a destination, the token-service-url parameter, and in the application-url property, which serves as a provided dependency of the com.sap.java and java.tomcat module types.
Example
modules:
- name: java-module
  type: com.sap.java
  provides:
  - name: java-module
    properties:
      application-url: ${default-url}/accounts/${account-name}/example
  parameters:
    name: exampleapp
...
- name: abc-java
  type: com.sap.java
  parameters:
    destinations:
    - name: ExampleWebsite
      type: HTTP
      url: http://abc.example.com/accounts/${account-name}
....
${provider-account-name} — Instructs SAP Cloud Platform to resolve the placeholder value to the subaccount name of the provider when the destination is being deployed. This placeholder can be used only in the url parameter for a destination and the token-service-url parameter. You can use it if you want to employ a model where a destination is created within your subscriber's subaccount and you want it to point to a URL in your provider subaccount.
Example
modules:
- name: abc-java
  type: com.sap.java
  parameters:
    destinations:
    - name: ExampleWebsite
      type: HTTP
      url: http://abc.example.com/accounts/${provider-account-name}
      owner: consumer
....
Note
● In the example, the subscriber subaccount is consuming a Solution that is provided by you.
● The consumer value of the destination's owner property indicates that this destination is going to be deployed into the subaccount of the consumer.
${landscape-url} — Instructs SAP Cloud Platform to resolve the placeholder value to the current landscape URL when deploying the destination. This placeholder can be used only in the url property for a destination, the token-service-url parameter, and in the application-url property that serves as a provided dependency of the com.sap.java and java.tomcat module types.
Example
modules:
- name: java-module
  type: com.sap.java
  provides:
  - name: java-module
    properties:
      application-url: myjava.${landscape-url}/
  parameters:
    name: exampleapp
...
- name: abc-java
  type: com.sap.java
  parameters:
    destinations:
    - name: ExampleWebsite
      type: HTTP
      url: abc.${landscape-url}/
....
To transport an application or application content to other subaccounts, you use the Enhanced Change and Transport System (CTS+) or the cloud-based Transport Management Service.
● Transport Management using the Enhanced Change and Transport System (CTS+)
Use this option if you already have CTS+ in use for other applications, or if you have a hybrid landscape in which you want the ABAP system to be leading the transport environment.
How to use CTS+ to transport SAP Cloud Platform applications from one subaccount to another: Transporting Multitarget Applications with CTS+ [page 1646]
What you need to do to enable the direct upload of MTA archives to a CTS+ transport request: Set Up a Direct Upload of MTA Archives to a CTS+ Transport Request [page 1647]
How to configure destinations to the target end points of deployment provided by the Solutions Lifecycle Management Service that are required as part of the setup of your transport landscapes in CTS+ and Transport Management Service: Configuring the Access to the Solutions Lifecycle Management Service [page 1650]
How to use the Transport Management Service (BETA) in general: Introduction to the Transport Management Service
What you need to do to enable the direct upload of MTA archives to a transport request that will be used by the Transport Management Service (BETA): Set Up Direct Uploads of MTA Archives Using the Transport Management Service [page 1649]
How to configure destinations to the target end points of deployment provided by the Solutions Lifecycle Management Service that are required as part of the setup of your transport landscapes in CTS+ and Transport Management Service: Configuring the Access to the Solutions Lifecycle Management Service [page 1650]
You can enable transport of SAP Cloud Platform applications and application content that is available as
Multitarget Applications (MTA) using the Enhanced Change and Transport System (CTS+).
Prerequisites
● You have configured your SAP Cloud Platform subaccounts for transport with CTS+ as described in How
To... Configure SAP Cloud Platform for CTS
● The content that you want to transport can be made available as a Multitarget Application (MTA) archive as
described in Multitarget Applications for the Neo Environment [page 1606].
Context
You use the Change and Transport System (CTS) of ABAP to transport and deploy your applications running on
SAP Cloud Platform in the form of MTAs, for example, from development to a test or production subaccount.
Proceed as follows to be able to transport an SAP Cloud Platform application:
Procedure
1. Package the application in a Multitarget Application (MTA) archive. To do this, you have the following options:
○ Use the Cloud MTA Build Tool.
○ Use the Solution Export Wizard as described in Exporting Solutions [page 1689].
2. Attach the MTA archive to a CTS+ transport request as described in How To... Configure SAP Cloud Platform for CTS.
Related Information
Resources on CTS+
Setting up a CTS+ enabled transport landscape in SAP Cloud Platform
Use the CTS+ Export Web Service to perform a transport of an MTA from one subaccount to another.
Prerequisites
● You have activated and configured the CTS+ Export Web Service as described in Activating and Configuring
CTS Export Web Service.
Note
Make sure that you select Transport Channel Authentication and User ID / Password as the provider security settings of the web service binding.
Note the Calculated Access URL of the web service, which can be found in the transport settings.
The Calculated Access URL follows the pattern /sap/bc/srt/rfc/sap/export_cts_ws/<ABAP
Client ID>/export_cts_ws/export_cts_ws.
● You have to define a user that is going to call the CTS+ Export Web Service. This user needs to have the
following user roles:
○ SAP_BC_WEBSERVICE_CONSUMER
○ SAP_CTS_PLUS
Note
● You have installed and configured the Cloud Connector, which is used to connect on-premise systems with
the SAP Cloud Platform. For more information, see Cloud Connector.
Note
If you maintain a list of trusted applications and a principal propagation trust configuration, you have to
authorize the application services:slservice.
● Define the transport systems and route corresponding to your SAP Cloud Platform subaccounts. For more
information, see How To... Configure HCP for CTS
Procedure
1. Define the destinations leading to the on-premise systems. In the SAP Cloud Platform cockpit, navigate to Services > Solution Lifecycle Management > Configure Destinations > New Destination.
2. For the new destination configuration, enter the required parameters:
○ Name: TransportSystemCTS
Note
○ Type: HTTP
○ URL: <Exposed URL of the system, taken from the Cloud Connector section, following the convention: https://<virtual host name>:<virtual port, such as 443>/<Calculated Access URL>>
<System ID of the source system in the transport route, which is defined above>
Example
https://myctsplushost:443/sap/bc/srt/rfc/sap/export_cts_ws/001/
export_cts_ws/export_cts_ws
Note
You have to manually enter the attributes names, as they are not available in the drop-down list.
For this feature to be consumed by the Cloud Platform Integration, see Content Transport.
Related Information
Create the required configurations for using the Transport Management Service, in order to transport MTA
archives from one subaccount to another.
Prerequisites
● You are subscribed and have access to the Transport Management Service, and have set up the
environment to transport MTA archives directly in an application. For more information, see Set Up the
Environment to Transport Content Archives Directly in an Application.
● You have a service key, which contains parameters that you need to refer to in the required destinations.
Context
To perform transports of MTA archives using the Transport Management Service, you have to create and set up
destinations defining the source transport node for transporting MTA archives. Proceed as follows:
Procedure
1. In the SAP Cloud Platform cockpit, navigate to Services > Solution Lifecycle Management > Configure Destinations.
2. Choose New Destination to create the destination directed at the Transport Management Service URL and defining the source transport node. Enter the following parameters:
○ Name: TransportManagementService
Note
Example
https://tmsdemo123.authentication.sap.hana.ondemand.com/oauth/token
○ Choose New Property, and from the drop-down list select sourceSystemId. As a value, enter the ID of
the source node of the transport route, for example, DEV_NODE.
3. Save the destination.
Results
You can use the Transport Management Service to transport MTA archives.
Related Information
Get Access
Set Up the Environment to Transport MTAs Directly in an Application
Creating Service Keys [page 1326]
Content Transport
To deploy Multitarget Applications from other tools, such as CTS+ or the Transport Management Service, you have to connect to the Solutions Lifecycle Management service by using its dedicated service endpoint https://slservice.<landscape-host>/slservice/. Two authentication methods are available: Basic authentication, and OAuth Platform API Client.
Note
The default option for both CTS+ and the Transport Management Service is Basic authentication.
● https://slservice.<landscape-host>/slservice/slp/basic/<account-id>/slp/ - authentication using username and password
● https://slservice.<landscape-host>/slservice/slp/oauth/<account-id>/slp/ - authentication using an OAuth token created using the OAuth client
● Basic authentication:
1. Ensure the user has an assigned platform role that contains the following scopes:
○ Manage Multitarget Applications
○ Read Multitarget Applications
For more information, see section Managing Member Authorizations in the Neo Environment [page
1904]
● Authentication using an OAuth Client:
1. Create a new OAuth client as described in Using Platform APIs [page 1737].
2. During the process, assign the following scopes from the Solution Lifecycle Management API:
○ Manage Multitarget Applications
○ Read Multitarget Applications
In the context of SAP Cloud Platform, a solution is comprised of various application types and configurations, designed to serve a certain scenario or task flow. Typically, the parts that comprise the solution are interconnected and have a common lifecycle. They are explicitly deployed, updated, deleted, configured, and monitored together.
A solution allows you to easily manage complex deployable artifacts. You can compose a solution yourself, or you can acquire one from a third-party vendor. Furthermore, you can use solutions to deploy artifacts that comprise entities external to SAP Cloud Platform, such as SAP SuccessFactors entities. This allows you to have common management and a common lifecycle for artifacts spread across various SAP platforms and systems.
● A Multitarget Application (MTA) archive, which contains all required application types and configurations as well as a deployment descriptor file. It is intended to be used as a generic artifact that can be deployed and managed on several SAP Cloud Platform subaccounts. For example, you can reuse one MTA archive on your development and productive subaccounts.
● (Optional) An MTA extension descriptor file that contains deployment-specific data. It is intended to be used as a specific data source for a given SAP Cloud Platform subaccount. For example, you can have different extension descriptors for your development and productive subaccounts. Alternatively, you can also provide this data manually during the solution deployment.
You model the supported entities according to the MTA specification so that they can be deployed as a
solution.
SAP Cloud Platform allows you to deploy Java applications that run either on the proprietary SAP Java Web or the Java Web Tomcat runtime container. The corresponding module types are com.sap.java and java.tomcat.
When you model a Java application in the MTA deployment descriptor, you can specify a set of properties related to this application. For a complete list of the supported properties, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
If a Java application is a part of your solution, the following rules apply during deployment:
● The Java application is deployed and started at the end of the deployment.
● If a Java application with the same name already exists in your subaccount, it is replaced with the newer Java application.
● An existing Java application is updated only if its binaries or configuration in the MTA deployment descriptor have been changed.
● When updating an already existing application, parameters defined in the new MTA deployment descriptor override the existing parameters in the already deployed application. Parameters not defined in the descriptor are copied from the already existing application.
● You can also update a Java application using a rolling update. For more information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
● You can deploy one Java application that is distributed in two or more WAR files in the MTA archive. They have to be described accordingly in the MANIFEST.MF file, and the archive names should differ.
Note
The Java applications are modeled as Multitarget Application (MTA) specification modules.
For the specification of the Java application module, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.javaapp
version: 0.1.0
modules:
- name: example-java
  type: com.sap.java
  parameters:
    name: exampleapp
    jvm-arguments: -server
    java-version: JRE 7
    runtime-version: 1
  requires:
  - name: dbbinding
resources:
- name: dbbinding
  type: com.sap.hcp.persistence
  parameters:
    id: tst
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.javaapp
version: 0.1.0
modules:
- name: example-java
  type: java.tomcat
  parameters:
    name: exampleapp
    jvm-arguments: -server
    java-version: JRE 8
    runtime-version: 3
  requires:
  - name: dbbinding
resources:
- name: dbbinding
  type: com.sap.hcp.persistence
  parameters:
    id: tst
The examples above show the required application module properties, such as the required Java version, runtime, description, and so on.
You also have to create an MTA extension descriptor that will hold sensitive data, such as credentials. Always enter the security-sensitive data of your solution in an MTA extension descriptor.
Example
_schema-version: '3.1'
ID: com.example.basic.javaapp.config
extends: com.example.basic.javaapp
parameters:
  title: Java Application Example
  description: This is an example of the Java Application module
resources:
- name: dbbinding
  parameters:
    user-id: myuser
    password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource,
which is modeled in the deployment descriptor.
After you deploy your solution, you can open its tile in the cockpit and check if the Java application is deployed.
Related Information
You can deploy HTML5 applications to SAP Cloud Platform by modeling them as part of a Multitarget Application.
When you model an application in the MTA deployment descriptor, you have to specify a set of properties related to the application. For a complete list of the supported properties, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
The following rules apply when you deploy a solution that contains an HTML5 application:
● If an application with an identical name but a different version already exists in your subaccount, the newly added version exists in parallel to the earlier application. Depending on the value of the active parameter, the new version is activated.
● If an application with an identical name and an identical version already exists in your subaccount, the application in the solution to be deployed is skipped.
● If there is no version specified in the MTA deployment descriptor, the application is deployed with its current timestamp as version.
● When you delete a solution containing an HTML5 application, the application itself and all of its versions are deleted.
Example
Sample Code
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.sap.example.html5
version: 0.1.0
modules:
- name: examplehtml5
  type: com.sap.hcp.html5
  parameters:
    name: example
    version: '0.1.0'
    active: true
    display-name: Example HTML5
To always create a new version of the HTML5 application, you can also use ${timestamp} as a suffix to your version.
Example
- name: examplehtml5
  type: com.sap.hcp.html5
  parameters:
    name: example1
    version: '0.1.0-${timestamp}'
Related Information
By using a database binding, a Java application connects to a database set up in your current subaccount or provided by another subaccount that is part of the same global account. This connection is modeled within your solution and set up during the deployment operation.
Note
● You have a database that is set up in your subaccount, or there is a database provided to you by another subaccount.
● You have valid credentials for that database. In case you do not have valid credentials for the database, default credentials will be generated for you.
Note
You cannot have a database binding to the <DEFAULT> data source together with a database binding to a
named data source, but you can have more than one database binding to named data sources.
Each database binding is modeled as a Multitarget Application (MTA) resource, which is required by a Java application module. For the specification of the database binding resource, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
First, model the deployment descriptor that will contain the Java application module and the database binding resource. Then create an extension descriptor that will hold sensitive data, such as credentials. Make sure that you always use an extension descriptor when you have sensitive data within your solution.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
  type: com.sap.java
  parameters:
    name: exampleapp
    jvm-arguments: -server
    java-version: JRE 7
    runtime: neo-java-web
    runtime-version: 1
  requires:
  - name: dbbinding
resources:
- name: dbbinding
  type: com.sap.hcp.persistence
  parameters:
    id: tst
Example
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
  title: Database binding example
  description: This is an example of the database binding resource
resources:
- name: dbbinding
  parameters:
    user-id: myuser
    password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource
dbbinding, which is modeled in the deployment descriptor.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
  type: com.sap.java
  parameters:
    name: exampleapp
    jvm-arguments: -server
    java-version: JRE 7
    runtime: neo-java-web
    runtime-version: 1
  requires:
  - name: firstdbbinding
    parameters:
      binding-name: tstbinding
  - name: seconddbbinding
    parameters:
      binding-name: abcbinding
resources:
- name: firstdbbinding
  type: com.sap.hcp.persistence
  parameters:
    id: tst
- name: seconddbbinding
  type: com.sap.hcp.persistence
  parameters:
    id: abc
Example
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
In the example above, the extension descriptor adds the user-id and password parameters to each resource
modeled in the deployment descriptor.
Note
The provider subaccount must belong to the same global account to which your subaccount belongs.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
  - name: example-java
    type: com.sap.java
    parameters:
      name: exampleapp
      jvm-arguments: -server
      java-version: JRE 7
      runtime: neo-java-web
      runtime-version: 1
    requires:
      - name: dbbinding
resources:
  - name: dbbinding
    type: com.sap.hcp.persistence
    parameters:
      id: tst
      account: abcd
Example
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
  title: Database binding example
  description: This is an example of the database binding resource
resources:
  - name: dbbinding
    parameters:
      user-id: myuser
      password: mypassword
In the example above, the MTA extension descriptor adds the user-id and password parameters to the
resource dbbinding, which is modeled in the MTA deployment descriptor. After you deploy your solution, you
can open its tile in the cockpit and check if the database binding is created.
● Database aliases tst and abc, which are provided by another subaccount
Note
The provider subaccount must belong to the same global account to which your subaccount belongs.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
  - name: example-java
    type: com.sap.java
    parameters:
      name: exampleapp
      jvm-arguments: -server
      java-version: JRE 7
      runtime: neo-java-web
      runtime-version: 1
    requires:
      - name: firstdbbinding
        parameters:
          binding-name: tstbinding
Example
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
  title: Named database bindings example
  description: This is an example of the database binding resources
resources:
  - name: firstdbbinding
    parameters:
      user-id: myuser
      password: mypassword
  - name: seconddbbinding
    parameters:
      user-id: myuser
      password: mypassword
In the example above, the MTA extension descriptor adds the user-id and password parameters to each
resource, which is modeled in the MTA deployment descriptor. After you deploy your solution, you can open its
tile in the cockpit and check if the database bindings are created.
Note
When you delete a database binding, the credentials that you used are not removed from the database.
Delete them manually, if you want to do so.
Related Information
You can connect your applications to another source by describing the source connection properties in a
destination. Later on, you can access that destination from your application.
Depending on whether the destination source is located within the SAP Cloud Platform or not, the destinations
are classified as internal or external. You can also provide a Solution for consumption to another SAP Cloud
Platform subaccount and define a destination as deployable to all subscriber subaccounts.
The supported destination levels you can model within a Solution are:
Related Information
Subaccount-level destinations are not linked to a particular application, but instead can be used by all
applications. For example, the subaccount-level destination can be used by an HTML5 application to connect to
a source Java application.
Note
If you modify a subaccount-level destination, you will affect all applications that use it. The
subaccount-level destination has a lifecycle that is independent from the applications that use it.
Destinations to external resources lead to services or applications that are not part of the current Multitarget
Application (MTA) archive and you do not have direct access to them. For example, it might be an application
running in another subaccount or outside SAP Cloud Platform.
For a list of the available destination parameters, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1617] and The Multitarget Application Model design document.
Remember
● If you need more than one destination, you have to model each subaccount-level destination in a
separate module.
● Use the property force-overwrite to choose a redeployment approach for an existing destination.
Place the property at destination level, and use the value true to have the destination forcefully
overwritten, or false to leave it unchanged, which is also the default behavior.
● If you want a destination to be created before the application that requires it, use a required
dependency to instruct SAP Cloud Platform to define the deploy order. For example, if you have a Java
application with a destination defined in its web.xml file, you have to use the required dependency
so that the destination is resolved correctly when the Java application starts.
Example
modules:
  - name: nwl
    type: com.sap.java
    requires:
      - name: examplewebsite-connection
    parameters:
      name: networkinglunch
    ...
  - name: examplewebsite-connection
    type: com.sap.hcp.destination
    parameters:
      name: ExampleWebsite
      type: HTTP
      description: Connection to ExampleWebsite
      url: http://www.examplewebsite.com
      proxy-type: Internet
      authentication: BasicAuthentication
      user: myuser
      password: mypassword
...
In the example above, the module type com.sap.hcp.destination is used to define the subaccount-level
destination, and the Java module nwl requires it so that the destination is created prior to starting the
Java application. The requires section ensures the proper ordering.
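The deploy ordering that a requires section enforces can be sketched as a topological sort over the module dependencies. The module names match the example above, but this sorting code is only an illustration and not part of any SAP tooling.

```python
# Sketch: deriving a deploy order from the requires sections, so that a
# destination module is created before the Java module that needs it.
# Each module is addressable by its own name; the mapping goes from a
# module to its predecessors (the modules it requires).
from graphlib import TopologicalSorter

modules = {
    "nwl": ["examplewebsite-connection"],  # Java app requires the destination
    "examplewebsite-connection": [],       # destination has no dependencies
}

order = list(TopologicalSorter(modules).static_order())
print(order)  # ['examplewebsite-connection', 'nwl']
```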
The example above results in a subaccount-level destination created within the consumer subaccount, with
credentials that are still placed in the MTA extension descriptor. If you are providing your solution for
consumption by another subaccount, you can create that destination in the subscriber subaccount by using
the owner option:
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount
version: 0.1.0
modules:
  - name: nwl
    type: com.sap.java
    requires:
      - name: examplewebsite-connection
    parameters:
      name: networkinglunch
  - name: examplewebsite-connection
    type: com.sap.hcp.destination
    requires:
      - name: data-storage
    parameters:
      name: ExampleWebsite
      type: HTTP
      description: Connection to ExampleWebsite
      url: http://www.examplewebsite.com
      proxy-type: Internet
      authentication: BasicAuthentication
      user: ~{data-storage/user}
      password: ~{data-storage/password}
      owner: consumer
resources:
  - name: data-storage
    properties:
      user:
      password:
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount.config
extends: com.example.basic.destination.subaccount
version: 0.1.0
modules:
resources:
  - name: data-storage
    properties:
      user: myuser
      password: mypassword
Note
● The reference syntax ~{source/value} is used to link the destination's user and password options.
● The data-storage resource is of untyped type.
In the example above, you create the destination within the subscriber subaccount, but the credentials for that
destination are still provided by you. If the consumer of your solution has to provide the credentials for the
destination, you have to use the consumer-optional metadata element.
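How a ~{resource/property} reference could be resolved against the properties of a required resource can be sketched as follows. The regular expression and the resolve helper are illustrative assumptions, not platform code; the dict stands in for the parsed resource section.

```python
# Sketch: substituting ~{resource/property} references with the values
# declared in the resources section of the (merged) descriptor.
import re

resources = {"data-storage": {"user": "myuser", "password": "mypassword"}}

def resolve(value, resources):
    def repl(match):
        res, prop = match.group(1), match.group(2)
        return resources[res][prop]
    return re.sub(r"~\{([\w-]+)/([\w-]+)\}", repl, value)

print(resolve("~{data-storage/user}", resources))      # myuser
print(resolve("~{data-storage/password}", resources))  # mypassword
```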
Note
Metadata is available in MTA archives with schema version 3.1 and higher.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount
version: 0.1.0
modules:
  - name: nwl
    type: com.sap.java
    requires:
      - name: examplewebsite-connection
    parameters:
      name: networkinglunch
  - name: examplewebsite-connection
    type: com.sap.hcp.destination
    requires:
      - name: data-storage
    parameters:
      name: ExampleWebsite
      type: HTTP
      description: Connection to ExampleWebsite
      url: http://www.examplewebsite.com
      proxy-type: Internet
      authentication: BasicAuthentication
      user: ~{data-storage/user}
      password: ~{data-storage/password}
      owner: consumer
resources:
  - name: data-storage
    properties:
      user:
      password:
    properties-metadata:
      user:
        optional: true
        consumer-optional: false
      password:
        optional: true
        consumer-optional: false
_schema-version: '3.1'
ID: com.example.basic.destination.subaccount.config
extends: com.example.basic.destination.subaccount
parameters:
  title: Subaccount Destination Example
  description: This is an example of the sample Subaccount Destination

_schema-version: '3.1'
ID: com.example.basic.destination.subaccount.config.subscriber
extends: com.example.basic.destination.subaccount.config
resources:
  - name: data-storage
    properties:
      user: subscriberuser
      password: subscriberpassword
In the example above, the consumer-optional metadata is used to require the subscriber to provide the
credentials. The credentials are provided by the consumer's extension descriptor, not by the provider's.
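The effect of the consumer-optional metadata can be sketched as a validation step. The check_consumer_properties helper is hypothetical; it only illustrates that a property marked consumer-optional: false must receive a value from the consumer's extension descriptor before the subscription can succeed.

```python
# Sketch: a property that is optional for the provider but mandatory for
# the consumer must be present once the consumer's extension descriptor
# is applied. Dicts stand in for the parsed descriptors.
def check_consumer_properties(resource, consumer_values):
    missing = []
    meta = resource.get("properties-metadata", {})
    for prop, value in resource.get("properties", {}).items():
        required_from_consumer = meta.get(prop, {}).get("consumer-optional") is False
        final = consumer_values.get(prop, value)
        if required_from_consumer and not final:
            missing.append(prop)
    return missing

resource = {
    "properties": {"user": None, "password": None},
    "properties-metadata": {
        "user": {"optional": True, "consumer-optional": False},
        "password": {"optional": True, "consumer-optional": False},
    },
}

print(check_consumer_properties(resource, {}))
# ['user', 'password'] -> subscription fails without credentials
print(check_consumer_properties(
    resource, {"user": "subscriberuser", "password": "subscriberpassword"}))
# [] -> the subscriber provided everything
```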
Note
The subaccount-level destination to an internal application is a destination of type HTTP that points to a Java
application, which in turn is part of the current MTA and will be deployed with the same solution. It is modeled
as a com.sap.hcp.destination module type.
Note
● If you need more than one destination, you have to model each subaccount-level destination in a
separate module.
● To overwrite an already existing destination, you have to use the force-overwrite option. For more
information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1617].
In the following example, an HTML5 application, which uses a subaccount-level destination to an internal
resource, connects to a Java application as a backend. The destination uses the URL of the Java application as
its URL target:
Example
- name: abc
  type: com.sap.java
  provides:
    - name: abc
      properties:
        application-url: ${default-url}
  parameters:
    name: networkinglunch
  ...
- name: abc-destination
  type: com.sap.hcp.destination
  requires:
    - name: abc
- name: abc-ui
  type: com.sap.hcp.html5
  requires:
    - name: abc-destination
  parameters:
    name: networkingui
● The Java module abc provides the application-url property, whose value is a placeholder. For more
information about the ${default-url} placeholder, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1617].
● The destination abc-destination is a module of type com.sap.hcp.destination and requires the
Java application module.
By requiring the Java application module, the destination can access all provided properties of that module
(namely the application-url property). Later on, the destination module can use a reference to read
the provided properties.
● The destination module uses the reference to read the value of the application-url property.
● The HTML5 module requires the destination module, because the destination has to be created prior to
deploying the HTML5 application. The required section will ensure the proper ordering.
For more information about the relation between HTML5 applications and destinations, see Assign
Destinations for HTML5 Applications [page 2215].
Note
Ensure that your applications do not have circular dependencies, that is, one Java application module must
not refer to the application-url property of another Java application module that in turn refers back to
the application-url property of the first one.
Application-level destinations apply only within a given application, compared to subaccount-level destinations
that apply to the whole subaccount. You can use them to connect your application to resources outside SAP
Cloud Platform, to applications that are not part of your subaccount, to applications from your subaccount,
and even to your own application.
Destinations to external resources lead to services or applications external to and not accessible by the current
Multitarget Application (MTA) archive. For example, it can be an application running in another subaccount.
For a list of the available destination parameters, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1617] and The Multitarget Application Model design document.
Remember
● If you need more than one destination, you have to model each subaccount-level destination in a
separate module.
● Use the property force-overwrite to choose a redeployment approach for an existing destination.
Place the property at destination level, and use the value true to have the destination forcefully
overwritten, or false to leave it unchanged, which is also the default behavior.
● You cannot define a destination in the web.xml file of the Java application, if that specific destination
points to the application itself, and has a URL automatically resolved by the SAP Cloud Platform. You
have to manually resolve the destination in the application code.
Example
modules:
  - name: abc
    type: com.sap.java
    parameters:
      name: networking
      ...
      destinations:
        - name: ExampleHttpDestination
          type: HTTP
          url: http://www.examplewebsite.com
          proxy-type: Internet
          authentication: BasicAuthentication
          user: myuser
          password: mypassword
● The com.sap.java module defines the Java application that has to be deployed.
● The destinations parameter defines the destinations that have to be created.
As a result, the example above creates an application-level destination within your subaccount with credentials,
which are still located in the MTA deployment descriptor.
If you want to provide your solution for consumption by another subaccount, you can create that destination
in the subscriber subaccount. To do this, you can use the owner option.
Example
Deployment Descriptor
modules:
  - name: abc
    type: com.sap.java
    parameters:

Extension Descriptor

...
resources:
  - name: data-storage
    properties:
      user: myuser
      password: mypassword
...
Note
The owner option indicates that the destination has to be deployed to the subscriber subaccount.
● The untyped resource data-storage contains the sensitive parameters, which are deployed using the
MTA extension descriptor.
Note
For more information, see MTA Module Types, Resource Types, and Parameters for Applications in the
Neo Environment [page 1617]
The example above creates the destination within the subscriber's subaccount, but the credentials for that
destination are still provided separately by you. If the consumer of your solution has to provide the
credentials for the destination, you have to use consumer-optional metadata.
Note
Metadata is available in MTA archives with schema version 3.1 and higher.
Deployment Descriptor
modules:
  - name: abc
    type: com.sap.java
    parameters:
      name: networking
    requires:
      - name: data-storage
    ...
    destinations:
      - name: ExampleDestination
        type: HTTP
        url: http://www.examplewebsite.com
        proxy-type: Internet
        authentication: BasicAuthentication
        user: ~{data-storage/user}
        password: ~{data-storage/password}
        owner: consumer
    ...
resources:
  - name: data-storage
    properties:
      user:
      password:
    properties-metadata:
      user:
        optional: true
        consumer-optional: false
      password:
        optional: true
        consumer-optional: false
...

...
# no credentials provided
...

...
resources:
  - name: data-storage
    properties:
      user: subscriberuser
      password: subscriberpassword
...
In the example above, the consumer-optional metadata is used to require the consumer of your solution to
provide the required credentials. In this case, the consumer's MTA extension descriptor provides the
required credentials instead of the provider's MTA extension descriptor.
Note
The application-level destination to an internal application is an HTTP type destination, which can point to
the same or a different Java application deployed with the same solution. It is modeled as a com.sap.java or
java.tomcat module.
Note
● If you need more than one destination, you have to model each subaccount-level destination in a
separate module.
● To overwrite an already existing destination, you have to use the force-overwrite option. For more
information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1617].
The following example creates a frontend Java application that has a destination directed at a backend Java
application, which is deployed within the same solution. In this case, the URL of the source Java application is
not described, but instead left to be resolved to its default value during deployment by the SAP Cloud Platform.
For additional options of what can be resolved automatically by the SAP Cloud Platform during deployment see
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
Example
- name: abc
  type: com.sap.java
  provides:
    - name: abc
      properties:
        application-url: ${default-url}
  parameters:
    name: javaapp1
  ...
- name: abc-ui
  type: com.sap.java
  requires:
    - name: abc
  parameters:
    name: javaapp2
    ...
    destinations:
      - name: JavaAppBackend
        type: HTTP
        url: ~{abc/application-url}
        proxy-type: Internet
        authentication: AppToAppSSO
The naming of the application-url property is mandatory. For more details, see MTA Module
Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
● The value of the application-url property is a placeholder. For more details about the ${default-
url} placeholder, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1617]
● The destination JavaAppBackend is an entry of the destinations parameter of the com.sap.java
module.
● The module abc-ui requires the module abc. By requiring the abc module, the abc-ui module gains
access to all provided properties of that module, namely the application-url property. Later on, the
destination module can use a reference to read the provided properties. For more details, see MTA Module
Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1617].
● The destination is using the reference to read the value of the application-url property.
Note
Ensure that your applications do not have circular dependencies, that is, one Java application module must
not refer to the application-url property of another Java application module that in turn refers back to
the application-url property of the first one.
You can connect your SAP SuccessFactors system to your SAP Cloud Platform subaccount. After you do so,
you can define a solution that extends it. In more complex scenarios, you can even provide a solution that can
be consumed by another SAP Cloud Platform subaccount and extend the subscriber's SAP SuccessFactors
system.
Note
● You have onboarded an SAP SuccessFactors company in your SAP Cloud Platform subaccount. If you
are providing a solution that is consumed by another subaccount in the SAP Cloud Platform, the
subscriber subaccount is responsible for onboarding the SAP SuccessFactors company. For more
information, see Configuring the SAP Cloud Platform Subaccount for SAP SuccessFactors.
● You have a database and valid credentials.
In the example below, you will create a standard SAP SuccessFactors extension. The “Benefits” sample Java
application provided by SAP is used. It is located at https://github.com/SAP/cloud-sfsf-benefits-ext .
Note
● The sample “Benefits” Java Application will be deployed to your subaccount, but the SAP
SuccessFactors artifacts will be deployed to the subscriber subaccounts and their SAP SuccessFactors
systems.
You have to model the sample “Benefits” Java application as a module in the MTA deployment descriptor.
You also have to define an SAP SuccessFactors Role module and a database binding resource.
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.basic.sfsf
version: 0.1.0
modules:
  - name: benefits-app
    type: com.sap.java
    parameters:
      name: benefits
      jvm-arguments: -server
      java-version: JRE 7
      runtime: neo-java-web
      runtime-version: 1
      sfsf-idp-access: true
      sfsf-connections:
        - type: default
          additional-properties:
            nameIdFormat: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
        - type: technical-user
          technical-user-id: SFSFAdmin
      sfsf-outbound-connections:
        - type: OAuth2SAMLBearerAssertion
          name: BenefitsOutboundConnection
          subject-name-id: mail
          subject-name-id-format: EMAIL_ADDRESS
          assertion-attributes-mapping:
            firstname: firstname
            lastname: lastname
            email: email
      role-provider: sfsf
      sfsf-home-page-tiles:
        resource: resources/benefits-tiles.json
    requires:
      - name: dbbinding
      - name: benefits-roles
  - name: benefits-roles
    type: com.sap.hcp.sfsf-roles
● The sample “Benefits” Java application module requires both database binding resource and the SAP
SuccessFactors Roles module.
● The SAP SuccessFactors tile is defined as a parameter of the sample “Benefits” Java application and
points to a JSON file within the Multitarget Application archive
● SAP SuccessFactors provider is defined as a parameter of the sample “Benefits” Java application
● The sample “Benefits” Java application is going to use the SAP SuccessFactors IDP for authentication
● The sample “Benefits” Java application is going to use the default connectivity options when accessing the
SAP SuccessFactors system
Both the SAP SuccessFactors roles and tiles require additional files to be added to the Multitarget Application
archive. The deployment descriptor contains only the modeling of those entities, but their actual content is
external to the MTA deployment descriptor, in the same way as the sample “Benefits” Java application .war
archive.
You also have to create a JSON file benefits-tiles.json that contains the SAP SuccessFactors tiles.
Example
[
  {
    "name": "SAP Corporate Benefits",
    "path": "com.sap.hana.cloud.samples.benefits",
    "size": 3,
    "padding": false,
    "roles": ["Corporate Benefits Admin"],
    "metadata": [
      {
        "title": "SAP Corporate Benefits",
        "description": "SAP Corporate Benefits home page tile",
        "locale": "en_US"
      }
    ]
  }
]
In the example above, you can see an example of an SAP SuccessFactors tile for the sample “Benefits” Java
application.
Next you have to create a JSON file benefits-roles.json that contains the SAP SuccessFactors roles.
Example
[
  {
    "roleDesc": "SAP Corporate Benefits Administrator",
    "roleName": "Corporate Benefits Admin",
    "permissions": []
  }
]
Afterward, you have to create your MANIFEST.MF file and define the Java application, roles, and tiles.
Example
Manifest-Version: 1.0
Created-By: SAP SE

Name: resources/benefits-roles.json
Content-Type: application/json
MTA-Module: benefits-roles

Name: com.sap.hana.cloud.samples.benefits.war
Content-Type: application/zip
MTA-Module: benefits-app
● Entry that links your SAP SuccessFactors roles with the MTA deployment descriptor
● Entry that links your “Benefits” sample Java application with the MTA deployment descriptor
Now you can create your Multitarget Application archive by following the JAR file specification. The archive
structure has to be as follows:
Example
/com.sap.hana.cloud.samples.benefits.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
/resources/benefits-roles.json
/resources/benefits-tiles.json
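Assembling this layout can be sketched with the Python standard library, since an MTA archive following the JAR file specification is an ordinary ZIP file with a META-INF/MANIFEST.MF entry. The file contents below are placeholders; in a real build the .war and JSON files come from your project.

```python
# Sketch: building an MTA archive with the layout shown above.
# writestr lets us add placeholder contents; a real build would add the
# actual project files instead.
import io
import zipfile

manifest = (
    "Manifest-Version: 1.0\r\n"
    "Created-By: SAP SE\r\n"
    "\r\n"
    "Name: resources/benefits-roles.json\r\n"
    "Content-Type: application/json\r\n"
    "MTA-Module: benefits-roles\r\n"
    "\r\n"
    "Name: com.sap.hana.cloud.samples.benefits.war\r\n"
    "Content-Type: application/zip\r\n"
    "MTA-Module: benefits-app\r\n"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as mtar:
    mtar.writestr("META-INF/MANIFEST.MF", manifest)
    mtar.writestr("META-INF/mtad.yaml", "# deployment descriptor goes here\n")
    mtar.writestr("com.sap.hana.cloud.samples.benefits.war", b"placeholder war bytes")
    mtar.writestr("resources/benefits-roles.json", "[]")
    mtar.writestr("resources/benefits-tiles.json", "[]")

with zipfile.ZipFile(buf) as mtar:
    names = sorted(mtar.namelist())
print(names)
```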
Start by creating an MTA extension descriptor that holds the security-sensitive data, such as credentials.
Note
Make sure that you always use an extension descriptor when you have sensitive data within your solution.
Example
_schema-version: '3.1'
ID: com.example.basic.sfsf.config
extends: com.example.basic.sfsf
parameters:
  title: SuccessFactors example
  description: This is an example of the sample Benefits Java Application for SuccessFactors
resources:
  - name: dbbinding
    parameters:
      user-id: myuser
      password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource
dbbinding, which is modeled in the deployment descriptor.
After you deploy your solution, you can open its tile in the cockpit and check if the SuccessFactors extension
solution is deployed.
To organize application security roles and to manage user access, you create authorization groups in SAP
Cloud Platform.
You model security groups in the MTA deployment descriptor using the module type com.sap.hcp.group.
You can also assign any roles defined in a Java application to these authorization groups.
The following rules apply when you deploy a solution containing authorization groups:
● If the group already exists, it is updated with the new roles assignment defined in the MTA deployment
descriptor.
● If you delete a solution, a group is not deleted, as it might be used by other applications.
Example
We assume that you have defined a set of security roles in the web.xml of your Java application, as follows.
<web-app>
  <display-name>My Java Web Application</display-name>
  <security-role>
    <role-name>administrator</role-name>
  </security-role>
</web-app>
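Reading the security-role names back out of such a web.xml can be sketched with the standard library XML parser. This is an illustration for cross-checking roles against the groups modeled in the deployment descriptor, not part of the deployment process itself.

```python
# Sketch: extracting the role names from a web.xml fragment so they can
# be compared with the groups in the MTA deployment descriptor.
import xml.etree.ElementTree as ET

web_xml = """<web-app>
  <display-name>My Java Web Application</display-name>
  <security-role>
    <role-name>administrator</role-name>
  </security-role>
</web-app>"""

root = ET.fromstring(web_xml)
roles = [r.findtext("role-name") for r in root.iter("security-role")]
print(roles)  # ['administrator']
```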
For a complete list of the supported properties, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1617].
The security roles can be assigned to a group modeled in the MTA deployment descriptor.
Example
ID: com.sap.mta.demo
_schema-version: '2.1'
parameters:
  hcp-deployer-version: '1.1.0'
modules:
  - name: administratorGroup
    parameters:
      name: &adminGroup AdministratorGroup
    type: com.sap.hcp.group
  - name: demowebapp
    parameters:
      name: demowebapp
      title: Demo MTA Application
When you deploy the above example, a new authorization group named “AdministratorGroup” is created, and
the “administrator” application security role from the “demowebapp” application is assigned to this group.
If the group already exists, only the application security role is assigned to the existing group.
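The create-or-update rule above can be sketched as an idempotent upsert. The assign_roles helper is hypothetical; it only illustrates the semantics that a redeployment updates the role assignment of an existing group instead of recreating it, and that undeploying never deletes the group.

```python
# Sketch: authorization groups as an in-memory store. If the group
# exists, only the role assignment changes; the group itself is never
# recreated, and deleting a solution would leave groups untouched.
def assign_roles(groups, group_name, roles):
    group = groups.setdefault(group_name, {"roles": set()})
    group["roles"].update(roles)
    return groups

groups = {}
assign_roles(groups, "AdministratorGroup", {"administrator"})
# A redeployment with an extra role updates the existing group in place.
assign_roles(groups, "AdministratorGroup", {"auditor"})
print(sorted(groups["AdministratorGroup"]["roles"]))
# ['administrator', 'auditor']
```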
Related Information
You can assign security roles on subscription level for use with SAP Fiori applications.
These roles are assigned to authorization groups when designed as modules in a descriptor file, as shown in
the following example:
Sample Code
ID: com.sap.mta.demo
_schema-version: '3.1'
modules:
  - name: administratorGroup
    parameters:
      name: &adminGroup AdministratorGroup
    type: com.sap.hcp.group
  - name: fiori-role
    type: com.sap.fiori.role
    parameters:
      name: HRManager
      groups:
        - *adminGroup
You can assign security roles on subscription level for use with HTML5 applications.
These roles are assigned to authorization groups when designed as modules in a descriptor file, as shown in
the following example:
Sample Code
ID: com.sap.mta.demo
_schema-version: '3.1'
modules:
  - name: administratorGroup
    parameters:
      name: &adminGroup AdministratorGroup
    type: com.sap.hcp.group
  - name: html5-role
    type: com.sap.hcp.html5.role
    parameters:
      name: HRManager
      groups:
        - *adminGroup
    requires:
      - name: administratorGroup
Related Information
To operate a solution you require at least one of the following roles in your subaccount:
Note
Currently you can operate SAP SuccessFactors extensions using only the Administrator or Developer roles.
Depending on the type of the solution, you can operate it using the cockpit, CTS+ and the SAP Cloud Platform
console client for the Neo environment:
Standard Solution
This solution is deployed and can be used only in the current SAP Cloud Platform subaccount; subscription
to it is not possible. All entities that are part of the solution are deployed and managed within this
subaccount.
Note
The CTS+ cannot be used for providing a solution for subscription, or for subscribing to a solution that
is provided by another subaccount.
Provided Solution
This is a solution that is deployed to the current subaccount, but provided for subscription to another SAP
Cloud Platform subaccount. Before the deployment of your solution, you have to set it as a provided solution.
After that you have to grant entitlements to a given SAP Cloud Platform global account that will allow its
subaccounts to subscribe to the solutions.
When providing a solution for subscription, you can define which parts of it will be deployed to your
subaccount, and which parts will be deployed to the subscriber's subaccount. Note that the parts deployed to
your subaccount will consume resources from your quotas. All parts deployed to the subaccount of the
subscriber will consume resources from its own quotas.
Available Solutions
This is a solution that is available for subscription. It has been provided by another SAP Cloud Platform
subaccount, and you have been granted entitlements to subscribe to it. After subscribing to the solution,
you can use it.
Subscribed Solution
This is a solution that has been provided by another SAP Cloud Platform subaccount. You have subscribed to it,
and thus have a limited set of management operations.
When providing a solution for subscription, the provider defines which parts of it are deployed to your
subaccount, and which parts are deployed to the provider subaccount. Note that the parts deployed to your
subaccount consume resources from your quotas. All parts deployed to the provider subaccount consume
resources from its own quotas.
You can list the solutions that are available for subscription using the:
Monitoring Solutions
Deleting Solutions
Related Information
Using the cockpit, you can provision a solution in one of the following ways:
● Deploy a Standard Solution [page 1682] - The solution is deployed in the current subaccount and
subscription to it is not possible.
● Deploy a Provided Solution [page 1684] - The solution is deployed in the current subaccount, but is provided
for subscription to another subaccount.
You can deploy a solution that can be consumed only within your subaccount.
Prerequisites
● The MTA archive containing your solution is created according to the information in Multitarget
Applications for the Neo Environment [page 1606].
● Optionally, you have created an extension descriptor as described in Defining MTA Extension Descriptors
[page 1276].
● You have a valid role for your subaccount as described in Operating Solutions [page 1678].
● You have sufficient resources available in your subaccount to deploy the content of the Multitarget
Application.
Note
If you are performing a redeployment of an MTA, the existing components are first deleted, which
means that you do not need additional available resources.
Procedure
Alternatively, as of _schema version 3.1, if you do not provide an extension descriptor and your solution
has missing data required for productive use, you can enter that data manually in the dialog subsection
that appears. Keep in mind that you have to enter complex parameters such as lists and maps in JSON format.
For example, an account-level destination parameter additional-properties should be a map that has a value
similar to {"additional.property.1": "1", "additional.property.2": "2"}.
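A map-valued parameter entered in the dialog is plain JSON, so it can be checked locally before deployment. The snippet below only illustrates that the value shown above parses to a map of string keys to string values.

```python
# Sketch: a complex dialog parameter such as additional-properties is
# entered as JSON and parses to a dict.
import json

raw = '{"additional.property.1": "1", "additional.property.2": "2"}'
additional_properties = json.loads(raw)
print(additional_properties["additional.property.1"])  # 1
```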
Note
Make sure that you do not select the Provider deploy checkbox. If you select it, you will provide your
solution for a subscription. For more information, see Deploy a Provided Solution [page 1684].
Note
If you experience issues during the process, see Troubleshooting [page 1691].
7. (Optional) When deploying against _schema version 3.1, if you have manually entered parameters
during deployment, at the end of the process you can download an extension descriptor containing only
those parameters.
Note
Parameters marked as security sensitive, either by default or as set in the mtad.yaml, are not saved to
this extension descriptor.
Results
Your newly deployed solution appears in the Standard Solutions category in the Solutions page in the cockpit.
Each solution component originates from a certain MTA module or resource, which in turn can result in several
solution components. That is, one MTA module or resource corresponds to given solution components.
Related Information
Using the Solutions view of the cockpit, you can deploy a solution locally to your subaccount and provide it for a
subscription to another subaccount or you can subscribe to a solution that has been provided for subscription
by another subaccount in the cockpit.
You can deploy a solution locally to your subaccount and provide it for a subscription to another subaccount.
Prerequisites
● Ensure that the MTA archive containing your solution is created as described in Multitarget Applications in
the Cloud Foundry Environment [page 1232].
● Optionally, you have created an extension description as described in Defining MTA Extension Descriptors
[page 1276].
Note
Several extension descriptors can be used additionally after the initial deployment; that is, you can extend
one extension descriptor with another an unlimited number of times. You can use this approach if you want
your subscribers to define their own data.
● You have a valid role for your subaccount as described in Operating Solutions [page 1678].
● You have sufficient resources available in your subaccount to deploy the content of the Multi-Target
Application.
Note
○ If you are performing a redeployment, the already deployed parts of the Multi-Target Application are
deleted first, so you are not required to have additional resources available in your subaccount.
○ If parts of your solution have to be deployed to the subscribers' subaccounts, note that those parts
consume the resources of those subaccounts.
Procedure
Note
If you experience issues during the deployment process, see Troubleshooting [page 1691].
8. (Optional) When deploying against _schema-version 3.1, if you have manually entered parameters
during deployment, at the end of the process you can use the option to download an extension descriptor
containing only those parameters.
Note
Parameters marked as security sensitive, either by default or as set in the mtad.yaml, are not saved to
this extension descriptor.
Results
Your newly deployed solution appears in the Solutions Provided for Subscription category in the Solutions page
in the cockpit. Each solution component originates from a certain MTA module or resource, which in turn can
result in several solution components. That is, one MTA module or resource corresponds to one or more
solution components.
Note
If you want to create an MTA extension descriptor, you have to use the value of the Extension ID parameter
as the extension ID. You can find this value on the page of the solution you have just deployed.
Related Information
After the deployment of a solution that is going to be provided for subscription, create the entitlements that
are going to be granted to the subscribers' subaccounts.
Prerequisites
Procedure
Note
○ Granted Entitlements - the number of subaccounts within the global account that can subscribe to the
provided solution
Note
Currently it is not possible to decrease the number of granted entitlements per particular global
account.
Results
Prerequisites
Procedure
Note
Currently it is not possible to decrease the number of granted entitlements per particular global
account.
Results
You have edited the number of granted entitlements for a particular global account.
Related Information
Prerequisites
● You have a valid role for your subaccount as described in Operating Solutions [page 1678].
● There is a solution available for subscription in your subaccount. That is, you have been granted an
entitlement by the provider of the solution.
● You have sufficient resources available in your subaccount to deploy the content of the Multi-Target
Application.
Note
Typically, parts of a solution provided for subscription are deployed to the provider's subaccount and
parts of it to your subaccount. The parts of the solution deployed to your subaccount consume
your resources, while the parts of the solution deployed to the provider's subaccount consume the
resources of the provider's subaccount.
Procedure
Alternatively, as of _schema-version 3.1, if you do not provide it and your solution is missing data
required for productive use, you can enter that data manually in the dialog subsection that appears. Keep
in mind that you have to enter complex parameters such as lists and maps in JSON format. For example, an
account-level destination parameter additional-properties should be a map that has a value similar
to {"additional.property.1": "1", "additional.property.2": "2"}.
Note
Ensure that your extension descriptor file correctly extends the solution you are subscribing to. To do
so, check the Extension ID of the solution in the Additional Details field of the solution overview page
in the cockpit, and enter it in the extends section of your extension descriptor.
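A minimal extension descriptor for this purpose might look like the following sketch. The ID, module name, and parameter are placeholders; the extends value must be the solution's actual Extension ID taken from the cockpit as described above:

```yaml
_schema-version: '3.1'
ID: com.example.solution.config
# Use the value of the solution's Extension ID from the cockpit here:
extends: com.example.solution.extension-id
modules:
  - name: example-module
    parameters:
      # subscriber-specific data required for productive use
      some-parameter: example-value
```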
5. Choose Subscribe to subscribe to the provided solution, and deploy the optional MTA extension descriptor
to the SAP Cloud Platform.
The Subscribe to a Solution dialog remains on the screen while the deployment is in progress. When the
deployment is completed, a confirmation appears that the solution has been successfully deployed. If you
close the dialog while the process is running, you can open it again by choosing Check Progress of the
corresponding operation in the Ongoing Operations table on the solution overview page.
Note
Parameters marked as security sensitive, either by default or as set in the mtad.yaml, are not saved to
this extension descriptor.
Remember
Any resources created in the subscriber account can be updated to the provided state by resubscribing
to the provided solution. You can do this either by using the Update option in the Solutions view of the
SAP Cloud Platform cockpit, or the subscribe-mta command of the command line interface.
Results
The solution to which you are now subscribed appears in the Subscribed Solutions category in the Solutions
page in the cockpit. Each solution component originates from a certain MTA module or resource, which in turn
can result in several solution components. That is, one MTA module or resource corresponds to specific
solution components.
Related Information
Prerequisites
● You have deployed or created the application components, configurations, and content that you want to
export as an MTA in an SAP Cloud Platform Neo subaccount.
● Optionally, you have configured the connectivity to a transport service such as CTS+ and the Transport
Management Service.
You have the option to export subaccount components as solutions. There are two possible scenarios:
● You can generate an MTA development descriptor, an MTA deployment descriptor template, and an MTA
extension descriptor template.
● You can export an MTA archive and, optionally, upload it to a transport service such as CTS+ or the
Transport Management Service.
Remember
Exporting MTA archives containing Java modules is not supported. You can, however, export descriptor
files that contain information about a Java software module.
● Java applications, including application destinations, data-source bindings, and role group assignments
● HTML5 applications, including permission role assignments
● Subaccount destinations
● HTML5 roles
● Cloud Portal sites
● Cloud Portal roles
● Cloud Portal destinations
● OData Provisioning service configurations
● OData Provisioning destinations
● Security groups
● Cloud Platform Integration content packages
Note
If a destination leads to a Java application and both are chosen for export, the destination url parameter is
automatically replaced with a placeholder leading to the Java application default-url.
Proceed as follows:
Procedure
1. Log on to the cockpit, select a subaccount, and choose Solutions in the navigation area.
2. Choose Export. Wait until the subaccount components are discovered.
3. Choose the subaccount-level components that you want to export from the list. You can also use the search
field to locate components by name. Afterward, choose Next.
○ The option Automatically select dependent components is enabled by default, so that any related
components and configurations are automatically selected for you. Disable this option if you want to
choose components individually.
○ The checkbox for selecting all components operates only for visible items. If you want to select all
discovered items, first expand the full list by choosing More at the bottom of the list.
4. Deselect the subcomponents that you do not want to export. Afterward, choose Next.
Tip
You can use the MTA development descriptor template in combination with the MTA archive builder
tool to create your MTA archive. The template contains build-parameters and path sections with all
possible build options for the corresponding module type. Note that for your particular build
environment you need to manually remove unnecessary parameters. For more information, see
Configuring Builders.
Related Information
3.3.4.9.3 Troubleshooting
While transporting SAP Cloud Platform applications using the CTS+ tool, or while deploying solutions using the
cockpit, you might encounter one of the following issues. This section provides troubleshooting information
about correcting them.
Technical error [Invalid MTA archive [<mtar archive>]. MTA deployment descriptor (META-INF/mtad.yaml)
could not be parsed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors.
Technical details: <…>]
This error can occur if the MTA archive is not consistent. There are several different reasons for this:
● The MTA deployment descriptor META-INF/mtad.yaml cannot be parsed, because it is syntactically
incorrect according to the YAML specification. For more information, see the publicly available YAML
specification. Make sure that the descriptor is compliant with the specification. Validate the descriptor
syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
● The MTA deployment descriptor might contain data that is not compatible with SAP Cloud Platform. Make
sure the MTA deployment descriptor complies with the specification at Multitarget Applications for the
Neo Environment [page 1606].
● The archive might not be suitable for deployment to the SAP Cloud Platform. This might happen if, for
example, you attempt to deploy an archive built for XSA to the SAP Cloud Platform. The technical details
might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""
Technical error [Invalid MTA archive [<MTA name>]: Missing MTA manifest entry for module
[<module name>]]
The archive is inconsistent, for example, when a module referenced in the META-INF/mtad.yaml is not
present in the MTA archive or is not referenced correctly. Make sure that the archive is compliant with the
MTA specification available at The Multitarget Application Model.
Technical error [MTA extension descriptor(s) could not be parsed. Check the troubleshooting guide for
guidelines on how to resolve descriptor errors. Technical details: <…>]
This error can occur if one or more extension descriptors are not consistent. There are several different
reasons for this:
● One or more extension descriptors might not be syntactically compliant with the YAML specification.
Validate the descriptor syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
Technical error [MTA deployment descriptor (META-INF/mtad.yaml) from archive [<mtar archive>] and some
of extension descriptors [<extension descriptor>] could not be processed. Check the troubleshooting guide
for guidelines on how to resolve descriptor errors. Technical details: <…>]
This error can occur if the MTA archive, or one or more extension descriptors, are not consistent. There are
several different reasons for this:
● The MTA deployment descriptor or an extension descriptor might contain data that is not compatible
with SAP Cloud Platform. Make sure the MTA deployment descriptor and all extension descriptors comply
with the specification at Multitarget Applications for the Neo Environment [page 1606].
● The archive might not be suitable for deployment to SAP Cloud Platform. This might happen if, for
example, you attempt to deploy an archive built for XSA to the SAP Cloud Platform. The technical details
might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""
Process [<process-name>] has failed with [Your user is not authorized to perform the requested operation.
Forbidden (403)]. Contact SAP Support.
Ensure that you have the permissions or roles that are required to list or manage Multitarget Applications.
For more information, see Operating Solutions [page 1678].
To enhance your solution with new capabilities or technical improvements, you can update it using the cockpit.
Depending on the deployer version (hcp-deployer-version) described in the MTA deployment descriptor,
SAP Cloud Platform uses one of the following technical approaches, where several distinctions apply.
Redeployment
When you update your solution against deployer version 1.0 or 1.1.0, the update is treated as a
redeployment, which means:
● Any new components that are now described in the MTA deployment descriptor are deployed as usual.
● Any already existing components are redeployed or updated, depending on their current runtime state in
the SAP Cloud Platform.
● Only the relations to components that are no longer present in the MTA deployment descriptor of the new
solution version are removed. The component artifacts themselves are not removed.
When you update your solution against deployer version 1.2 or 1.2.0, the update is treated as an update with
full semantics, which means:
● Any new components that are now described in the MTA deployment descriptor are deployed as usual.
● Any already existing components are redeployed or updated, depending on their current runtime state in
the SAP Cloud Platform.
● Components that are no longer present in the MTA deployment descriptor are removed.
Note
The version of the MTA has to follow the Semantic Versioning ("semver") specification, for example, 1.1.2.
Related Information
Context
Procedure
1. Log on to the cockpit and select the subaccount containing the solution you want to update.
2. Choose Solutions in the navigation area.
3. Choose the tile of the solution you want to update.
4. On the solution overview page that appears, choose Update.
5. Only for standard and provided solutions: provide the location of the MTA archive you want to use.
Note
When you update a solution as a solution provider, ensure that the solution ID of the newly deployed
archive matches the ID of the existing solution.
6. (Optional) You can provide the location of an MTA extension descriptor file.
Note
○ As an alternative to the Update option, you can also perform the update operation using the Deploy
option.
○ As an alternative to the cockpit procedure, you can update a solution using the following command
line command:
Sample Code
Results
Related Information
Note
For the examples below we assume that you have an already deployed MTA with a deployment descriptor
containing data similar to Version 1, and you want to update it to Version 2.
Version 1 Version 2
parameters:
  hcp-deployer-version: '1.2.0'
description: The application demonstrates some of the main MTA features on SAP CP NEO.
title: Demo MTA Application
version: 0.1.4
In the example above, the previously missing module demohtml5app is added. As a result, the corresponding
HTML5 application is deployed.
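Under the assumptions of this example, the Version 2 descriptor would add a module entry similar to the following sketch. The module name and path are illustrative, and the com.sap.hcp.html5 module type is assumed here for an HTML5 application in the Neo environment:

```yaml
modules:
  - name: demohtml5app
    type: com.sap.hcp.html5
    path: demohtml5app.zip
```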
Related Information
When deployed to your SAP Cloud Platform subaccount, a solution consists of various solution components.
Each solution component originates from a certain MTA module, which in turn can result in several solution
components. That is, one MTA module corresponds to one or more solution components.
Prerequisites
You have a valid role for your subaccount as described in Operating Solutions [page 1678]
Procedure
To see a status overview of an individual solution or solution components in your subaccount, proceed as
follows:
1. Log on to the SAP Cloud Platform and select a subaccount.
2. In the cockpit, choose Solutions in the navigation area.
You can monitor the overall status of the deployed solutions and of those available for subscription.
Note
The overall status of a solution is a combination of the statuses of all its internal parts and the statuses
of any ongoing operations for that particular solution.
3. In the solution list, select the tile of the solution for which you want to see details.
If you have selected a solution that is available for subscription but not yet subscribed to, you can
monitor only a limited set of its properties.
○ Overview - displays the solution name and status. For more information about the solution states,
see the Solutions page help in the cockpit.
○ Description - a short descriptive text about the solution, typically stating what it does.
○ Additional Information - contains information about the provisioning type, the provider's subaccount,
and the organization.
○ Ongoing Operations - the ongoing operations for the solution.
○ Solution Components - a list of the components that are part of the solution, the states of these
components, and their types.
For more information about the possible states of a solution component and what they mean, see your
Solution page help in the cockpit.
4. If you have provided a solution that is available for subscription to another subaccount, you can monitor
the licenses and subscribers of a provided solution as follows:
a. In the solution list under the Solutions Provided for Subscription category, select the tile of the solution
for which you want to see details.
b. Choose Entitlement in the navigation area of the cockpit.
You can monitor the granted entitlements for that solution as well as the parts that were deployed to
the subscribers' subaccounts.
Note
Monitoring granted licenses is only available for you if you have the subaccount administrator role.
Solution Components
Results in One or More of the Following Solution Components
Related Information
Delete a solution from your subaccount following the steps for the corresponding solution types.
Prerequisites
You have a valid role for your subaccount as described in Operating Solutions [page 1678]
Context
Note
● Some parts of the solution might be shared and used by other entities within the SAP Cloud Platform.
Such parts of the solution have to be removed manually.
● SAP SuccessFactors roles are not deleted.
● Custom application destinations and subaccount destinations are also deleted.
● Deleted solutions and their components cannot be restored.
● Currently, deleting a solution that has been provided for subscription is not automated. Subaccounts
consuming your provided solutions have to delete their subscribed solutions before you can delete that
solution from your subaccount.
● If the solution has been provided to you for subscription from a provider subaccount, your entitlement
is not deleted. You can subscribe to the provided solution again.
Procedure
Note
If the Delete data source checkbox is selected, any deployed database binding is deleted. Note
that your database credentials are not removed from your database and can be used again.
If the Clean-up on error checkbox is selected, any errors during deletion that are external to SAP Cloud
Platform (for example, from a SuccessFactors system) are ignored.
A typical use case is deleting a solution that is linked to an external system that no longer exists.
If the Clean-up on error checkbox is not selected, the deletion process fails with an error; if it is
selected, the deletion process ignores the error and continues.
Note
If the Clean-up on error checkbox is selected and an error that originates from an external to SAP
Cloud Platform instance occurs, it will be ignored. As a result all the data stored in the SAP Cloud
Platform for that solution will be deleted. However, external systems might still contain some data
that is not deleted.
The solution deletion dialog remains on the screen during the process. A confirmation appears when the
deletion is completed.
If you close the dialog while the process is running, you can open it again by choosing Check Progress of the
corresponding operation, located in the Ongoing Operations table in the solution overview page.
Results
Related Information
SAP Cloud Platform enables you to easily develop and run HTML5 applications in a cloud environment.
HTML5 applications on SAP Cloud Platform consist of static resources and can connect to any existing on-
premise or on-demand REST services. Compared to a Java application, there is no need to start a dedicated
process for an HTML5 application. Instead the static resources and REST calls are served using a shared
dispatcher service provided by the SAP Cloud Platform.
The static content of the HTML5 applications is stored and versioned in Git repositories. Each HTML5
application has its own Git repository assigned. For offline editing, developers can directly interact with the Git
repository.
Lifecycle operations, for example, creating new HTML5 applications, creating new versions, activating, starting
and stopping, or testing applications, can be performed using the SAP Cloud Platform cockpit. Because the static
resources are stored in a versioned Git repository, not only the latest version of an application can be tested;
the complete version history of the application is always available for testing. The version that is delivered
to the end users of that application is called the "active version". Each application can have only one active
version.
Related Information
Set up your HTML5 development environment and run your first application in the cloud.
For more information about building applications in SAP Web IDE, see the SAP Web IDE documentation. There,
you will also find information on building your project first and then pushing your app to the cockpit.
Related Information
This tutorial illustrates how to build a simple HTML5 application using SAP Web IDE.
Prerequisites
Context
Context
For each new application, a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications > HTML5
Applications in the navigation area and then Versioning.
Note
To create the HTML5 application in more than one region, create the application in each region separately
and copy the content to the new Git repository.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
If you have already created applications using this subaccount, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
Note
4. Choose Save.
5. Clone the repository to your development environment.
Results
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1707]
Related Information
A project is needed to create files and to make them available in the cockpit.
Procedure
1. In SAP Web IDE, choose Development (</>), and then select the project of the application you created in
the cockpit.
2. To create a project and to clone your app to the development environment, right-click the project, and
choose New Project from Template .
3. Choose the SAPUI5 Application button, and choose Next.
4. In the Project Name field, enter a name for your project, and choose Next.
Note
6. Choose Finish.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1707]
SAP Web IDE already created an HTML page for your project. You now adapt this page.
Procedure
1. In SAP Web IDE, expand the project node in the navigation tree and open HelloWorld.view.js by
double-clicking it.
2. In the HelloWorld.view.js view, in the line title: "{i18n>title}", replace the title with the title of
your application, Hello World.
4. To test your Hello World application, select the index.html file and choose Run.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1707]
Next task: Deploy Your App to SAP Cloud Platform [page 1711]
With this step you create a new active version of your app that is started on SAP Cloud Platform.
Procedure
1. In SAP Web IDE, select the project node in the navigation tree.
2. To deploy the project, right-click it and choose Deploy > Deploy to SAP Cloud Platform.
3. On the Login to SAP Cloud Platform screen, enter your password and choose Login.
4. On the Deploy Application to SAP Cloud Platform screen, increment the version number and choose
Deploy.
Note
If you leave the Activate option checked, the new version is activated directly.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1707]
The developer’s guide introduces the development environment for HTML5 applications, a procedure on how
to create applications, and supplies details on the descriptor file that specifies how dedicated application URLs
are handled by the platform.
Related Information
The development workflow is initiated from the SAP Cloud Platform cockpit.
The cockpit provides access to all lifecycle operations for HTML5 applications, for example, creating new
applications, creating new versions, activating a version, and starting or stopping an application.
The SAP Cloud Platform Git service stores the sources of an HTML5 application in a Git repository.
For each HTML5 application there is one Git repository. You can use any Git client to connect to the Git service.
On your development machine you may, for example, use Native Git or Eclipse/EGit. The SAP Web IDE has a
built-in Git client.
Git URL
With this URL, you can access the Git repository using any Git client.
The URL of the Git repository is displayed under Source Location on the detail page of the repository. You can
also view this URL together with other detailed information on the Git repository, including the repository URL
and the latest commits, by choosing HTML5 Applications in the navigation area and then Versioning.
Authentication
Access to the Git service is only granted to authenticated users. Any user who is a member of the subaccount
that contains the HTML5 application and who has the Administrator, Developer, or Support User role has
access to the Git service.
Permissions
The permitted actions depend on the subaccount member role of the user:
Any authenticated user with the Administrator, Developer, or Support User role can read the Git repository.
They have permission to:
Write access is granted to users with the Administrator or Developer role. They have permission to:
Related Information
Context
For each new application, a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications > HTML5
Applications in the navigation area and then Versioning.
Note
To create the HTML5 application in more than one region, create the application in each region separately
and copy the content to the new Git repository.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
If you have already created applications using this subaccount, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
Note
4. Choose Save.
5. Clone the repository to your development environment.
a. To start SAP Web IDE and automatically clone the repository of your app, choose Edit Online at
the end of the table row of your application.
b. On the Clone Repository screen, if prompted, enter your user and password (SCN user and SCN
password), and choose Clone.
Results
Context
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Results
You can now activate this version to make the application available to the end users.
Related Information
As end users can only access the active version of an application, you must create and activate a version of
your application.
Context
The developer can activate a single version of an application to make it available to end users.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Results
You can now distribute the URL of your application to the end users.
Related Information
Using the application descriptor file you can configure the behavior of your HTML5 application.
This descriptor file is named neo-app.json. The file must be created in the root folder of the HTML5
application repository and must have a valid JSON format.
With the descriptor file you can set the options listed under Related Links.
{
"authenticationMethod": "saml"|"none",
"welcomeFile": "<path to welcome file>",
"logoutPage": "<path to logout page>",
"sendWelcomeFileRedirect": true|false,
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "destination | service | application",
"name": "<name of the destination> | <name of the service> |
<name of the application or subscription>",
"entryPath": "<path prepended to the request path>",
"version": "<version to be referenced. Default is active
version.>"
},
"description": "<description>"
}
],
"securityConstraints": [
{
"permission": "<permission name>",
"description": "<permission description>",
"protectedPaths": [
"<path to be secured>",
...
],
"excludedPaths": [
"<path to be excluded>",
...
]
}
],
"cacheControl": [
{
"path": "<optional path of resources to be cached>",
"directive": "none | public | private",
"maxAge": <lifetime in seconds>
}
],
"headerWhiteList": [
"<header1>",
"<header2>",
...
]
}
All paths in the neo-app.json must be specified as plain paths, that is, paths with blanks or other special
characters must include these characters literally. These special characters must be URI-encoded in HTTP
requests.
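To make the template above concrete, a minimal neo-app.json that defines a welcome file and routes one path to a connectivity destination might look like the following sketch. The destination name backend-api is an assumption and must exist in your subaccount:

```json
{
  "welcomeFile": "index.html",
  "routes": [
    {
      "path": "/api",
      "target": {
        "type": "destination",
        "name": "backend-api"
      },
      "description": "Forward /api calls to the backend-api destination"
    }
  ]
}
```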
Related Information
3.3.5.2.5.1 Authentication
Authentication is the process of establishing and verifying the identity of a user as a prerequisite for accessing
an application.
By default, an HTML5 application is protected with SAML2 authentication, which authenticates the user against
the configured IdP. For more information, see Application Identity Provider [page 2407]. For public applications,
authentication can be switched off using the following syntax:
Example
"authenticationMethod": "none"
Note
Even if authentication is disabled, authentication is still required for accessing inactive application versions.
To protect only parts of your application, set the authenticationMethod to "none" and define a security
constraint for the paths you want to protect. If you want to enforce only authentication, but no additional
authorization, define a security constraint without a permission (see Authorization [page 1719]).
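For example, a security constraint that enforces authentication but no specific permission could look like the following sketch (the path and description are illustrative):

```json
"securityConstraints": [
  {
    "description": "Require logon for user-specific content",
    "protectedPaths": [
      "/profile/"
    ]
  }
]
```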
After 20 minutes of inactivity, user sessions are invalidated. If the user tries to access an invalidated session,
SAP Cloud Platform returns a logon page, where the user must log on again. If you are using SAML as a logon
method, you cannot rely on the response code to find out whether the session has expired, because it is either
200 or 302. To check whether the response requires a new logon, check for the com.sap.cloud.security.login
HTTP response header and, if it is present, reload the page. For example:
jQuery(document).ajaxComplete(function(e, jqXHR) {
  if (jqXHR.getResponseHeader("com.sap.cloud.security.login")) {
    alert("Session is expired, page shall be reloaded.");
    window.location.reload();
  }
});
To enforce authorization for an HTML5 application, permissions can be added to application paths.
In the cockpit, you can create custom roles and assign them to the defined permissions. If a user accesses an
application path that starts with a path defined for a permission, the system checks whether the current user is
a member of the assigned role. If no role is assigned to a defined permission, only subaccount members with
developer permission or administrator permission have access to the protected resource.
Permissions are only effective for the active application version. To protect non-active application versions, the
default permission NonActiveApplicationPermission is defined by the system for every HTML5
application. This default permission does not need to be defined in the neo-app.json file; it is available
automatically for each HTML5 application.
If only authentication is required for a path, but no authorization, a security constraint can be added without a
permission.
A security constraint applies to the directory defined in the protectedPaths field and its sub-directories,
except for paths that are explicitly excluded in the excludedPaths field. The excludedPaths field supports
pattern matching: if a specified path ends with a slash character (/), all resources in the given directory and its
sub-directories are excluded. You can also exclude paths using wildcards; for example, the path **.html
excludes all resources ending with .html from the security constraint.
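The matching rules described above can be sketched as follows. This is a minimal illustration, not platform code; matchesPath and isProtected are invented helper names:

```javascript
// Sketch of the path-matching rules described above (illustrative only):
// a "/**" suffix matches everything below a directory, a "**" prefix matches
// on a resource suffix such as "**.html", and a trailing "/" matches the
// directory and all of its sub-directories.
function matchesPath(pattern, requestPath) {
  if (pattern.endsWith("/**")) {
    // e.g. "/logout/**" matches everything under /logout/
    return requestPath.startsWith(pattern.slice(0, -2));
  }
  if (pattern.startsWith("**")) {
    // e.g. "**.html" matches every resource ending with .html
    return requestPath.endsWith(pattern.slice(2));
  }
  if (pattern.endsWith("/")) {
    // directory pattern: the directory and all sub-directories
    return requestPath.startsWith(pattern);
  }
  return requestPath === pattern || requestPath.startsWith(pattern + "/");
}

// A request is protected if it matches a protected path and no excluded path.
function isProtected(constraint, requestPath) {
  const excluded = (constraint.excludedPaths || [])
    .some((p) => matchesPath(p, requestPath));
  return !excluded &&
    constraint.protectedPaths.some((p) => matchesPath(p, requestPath));
}
```

With the accessUserData example below (protect "/", exclude "/logout/**"), a request to /app/data would be protected while /logout/done.html would not.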
To define a security constraint, use the following format in the neo-app.json file:
...
"securityConstraints": [
  {
    "permission": "<permission name>",
    "description": "<permission description>",
    "protectedPaths": [
      "<path to be secured>"
    ],
    "excludedPaths": [
      "<path to be excluded>",
      ...
    ]
  }
]
...
Example
An example configuration that restricts a complete application to the accessUserData permission, with
the exception of all paths starting with "/logout", looks like this:
...
"securityConstraints": [
  {
    "permission": "accessUserData",
    "description": "Access User Data",
    "protectedPaths": [
      "/"
    ],
    "excludedPaths": [
      "/logout/**"
    ]
  }
]
...
By default, end users can access the application descriptor file of an HTML5 application.
To do so, they enter the URL of the application followed by the filename of the application descriptor in the
browser.
Tip
For security reasons we recommend that you use a permission to protect the application descriptor from
being accessed by end users.
A permission for the application descriptor can be defined by adding the following security constraint to the
application descriptor:
...
"securityConstraints": [
  {
    "permission": "AccessApplicationDescriptor",
    "description": "Access application descriptor",
    "protectedPaths": [
      "/neo-app.json"
    ]
  }
]
...
After activating the application, a role can be assigned to the new permission in the cockpit to give users with
that role access to the application descriptor via the browser. For more information about how to define
permissions for an HTML5 application, see Authorization [page 1719].
To access SAPUI5 resources in your HTML5 application, configure the SAPUI5 service routing in the application
descriptor file.
To configure the SAPUI5 service routing for your application, map a URL path that your application uses to
access SAPUI5 resources to the SAPUI5 service:
...
"routes": [
  {
    "path": "<application path to be mapped>",
    "target": {
      "type": "service",
      "name": "sapui5",
      "version": "<version>",
      "entryPath": "/resources"
    },
    "description": "<description>"
  }
]
...
Example
This configuration example maps all paths starting with /resources to the /resources path of the
SAPUI5 library.
...
"routes": [
  {
    "path": "/resources",
    "target": {
      "type": "service",
      "name": "sapui5",
      "entryPath": "/resources"
    },
    "description": "SAPUI5"
  }
]
...
For more information about using SAPUI5 for your application, see SAPUI5: UI Development Toolkit for HTML5.
Example
This configuration example shows how to reference the SAPUI5 version 1.26.6 using the neo-app.json
file.
...
"routes": [
  {
    "path": "/resources",
    "target": {
      "type": "service",
      "name": "sapui5",
      "version": "1.26.6",
      "entryPath": "/resources"
    },
    "description": "SAPUI5"
  }
]
...
To connect your application to a REST service, configure routing to an HTTP destination in the application
descriptor file.
A route defines which requests to the application are forwarded to the destination. Routes are matched with
the path from a request. All requests with paths that start with the path from the route are forwarded to the
destination.
If you define multiple routes in the application descriptor file, the route for the first matching path is selected.
The HTTP destination must be created in the subaccount where the application is running. For more
information on HTTP destinations, see Create HTTP Destinations [page 206] and Assign Destinations for
HTML5 Applications [page 2215].
...
"routes": [
  {
    "path": "<application path to be forwarded>",
    "target": {
      "type": "destination",
      "name": "<destination name>"
    },
    "description": "<description>"
  }
]
...
Example
With this configuration, all requests with paths starting with /gateway are forwarded to the gateway
destination.
...
"routes": [
  {
    "path": "/gateway",
    "target": {
      "type": "destination",
      "name": "gateway"
    },
    "description": "Gateway System"
  }
]
...
The browser sends a request to your HTML5 application to the path /gateway/resource (1). This request
is forwarded by the HTML5 application to the service behind the destination gateway (2). The path is
shortened to /resource. The response returned by the service is then routed back through the HTML5
application so that the browser receives the response (3).
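The selection and forwarding rules above (the first route whose path prefixes the request path is chosen, and that prefix is stripped before forwarding) can be sketched as follows. resolveRoute is an invented helper name, not a platform API:

```javascript
// Sketch of the route selection described above (illustrative only):
// the first route whose path prefixes the request path is selected,
// and the route's path is removed before forwarding to the destination.
function resolveRoute(routes, requestPath) {
  for (const route of routes) {
    if (requestPath.startsWith(route.path)) {
      return {
        destination: route.target.name,
        // "/gateway/resource" with route path "/gateway" becomes "/resource"
        forwardedPath: requestPath.slice(route.path.length) || "/",
      };
    }
  }
  return null; // no route matched; the request is served by the app itself
}
```

With the gateway example above, resolveRoute for the request path /gateway/resource selects the gateway destination and forwards the shortened path /resource.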
Destination Properties
In addition to the application-specific setup in the application descriptor, you can configure the behavior of
routes at the destination level. For information on how to set destination properties, see You can enter
additional properties (step 9) [page 206].
Timeout Handling
A request to a REST service can time out when the network or backend is overloaded or unreachable. Different
timeouts apply for initially establishing the TCP connection (HTML5.ConnectionTimeoutInSeconds) and
reading a response to an HTTP request from the socket (HTML5.SocketReadTimeoutInSeconds). When a
timeout occurs, the HTML5 application returns a gateway timeout response (HTTP status code 504) to the
client.
While some long-running requests may require increasing the socket timeout, we do not recommend changing
the default values. Timeouts that are too high may impact the overall performance of the application by
blocking other requests in the browser or blocking back-end resources.
Redirect Handling
By default, all HTML5 applications follow HTTP redirects of REST services internally. This means that whenever
your REST service responds with a 301, 302, 303, or 307 HTTP status code, a new request is issued to the
redirect target, and only the response to this second request reaches the browser of the end user. This behavior
can be changed with a destination property, and we recommend that you set this property to false. This helps
improve the performance of your HTML5 application because the browser stores redirects and thus avoids
round trips. If you use relative links, the automatic handling of redirects might break your HTML5 application on
the browser side. However, certain service types may not run with a value of false.
Example
Prerequisites:
● Your application descriptor contains a route that forwards requests starting with the path /gateway, to
the destination named gateway as in the example above.
● The service redirects requests from /resource to the path ./servicePath/resource.
When the browser requests the path /gateway/resource (1), the HTML5 application forwards it to the
path /resource of the service (2). As the service responds with a redirect (3), the HTML5 application
sends another request to the new path /servicePath/resource (4). This second response contains the
required resource and is forwarded back to the browser (5).
If automatic redirect handling is disabled, for the same request to the path /gateway/resource (1), the HTML5
application again forwards the request to the path /resource of the service (2). Now the redirect is directly
forwarded back to the browser (3). In this case it is the browser that sends another request to the path
/gateway/servicePath/resource (4), which the HTML5 application forwards to the service path /servicePath/
resource (5). The requested resource is then forwarded back to the browser (6).
Deprecated Properties
The following destination properties have been deprecated and replaced by new properties. If the new and the
old properties are both set, the new property overrules the old one.
Security Considerations
When accessing a REST service from an HTML5 application, a new connection is initiated by the HTML5
application to the URL that is defined in the HTTP destination.
To prevent security-relevant headers or cookies from being returned from the REST service to the client, only
whitelisted headers are returned. While some headers are whitelisted by default, additional headers can be
whitelisted in the application descriptor file. For more information about how to whitelist additional headers,
see Header Whitelisting [page 1732].
Cookies that are retrieved from a REST service response are stored by the HTML5 application in an HTTP
session that is bound to the client request. The cookies are not returned to the client. If a subsequent request is
initiated to the same REST service, the cookies are added to the request by the application. Only those cookies
are added that are valid for the request in the sense of correct domain and expiration date. When the client
session is terminated, all associated cookies are removed from the HTML5 application.
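The cookie handling described above can be sketched as follows. The helper names are invented, and the domain and expiration checks that the platform performs are omitted for brevity:

```javascript
// Sketch of the cookie handling described above (illustrative only):
// cookies from a backend response are kept in a server-side session store,
// never forwarded to the client, and re-attached to later backend requests.
function storeBackendCookies(session, setCookieHeaders) {
  for (const header of setCookieHeaders) {
    const [pair] = header.split(";");        // drop cookie attributes for brevity
    const [name, value] = pair.split("=");
    session.cookies[name.trim()] = value;    // stored server-side only
  }
}

// Build the Cookie header for the next request to the same REST service.
function cookieHeaderForBackend(session) {
  return Object.entries(session.cookies)
    .map(([name, value]) => `${name}=${value}`)
    .join("; ");
}
```

For example, a Set-Cookie header such as JSESSIONID=abc123 from the backend would be stripped from the client response and re-sent as a Cookie header on the next backend call.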
To access resources from another HTML5 application or a subscription to an HTML5 application, you can map
an application path to the corresponding application or subscription.
If the given path matches a request path, the resource is loaded from the mapped application or subscription.
This feature may be used to separate re-usable resources in a dedicated application.
If multiple routes are defined in the application descriptor, the route for the first matching path in the
application descriptor is selected.
...
"routes": [
  {
    "path": "<application path to be mapped>",
    "target": {
      "type": "application",
      "name": "<name of the application or subscription>",
      "version": "<version to be referenced. Default is active version>"
    },
    "description": "<description>"
  }
]
...
Example
This configuration example maps all paths starting with /icons to the active version of the application
named iconlibrary.
...
"routes": [
  {
    "path": "/icons",
    "target": {
      "type": "application",
      "name": "iconlibrary"
    },
    "description": "Icon Library"
  }
]
...
The User API service provides an API to query the details of the user that is currently logged on to the HTML5
application.
If you use a corporate identity provider (IdP), some features of the API do not work as described here. The
corporate IdP requires you to configure a mapping from your IdP’s assertion attributes to the principal
attributes usable in SAP Cloud Platform. See Configure User Attribute Mappings [page 2415].
...
"routes": [
  {
    "path": "<application path to be forwarded>",
    "target": {
      "type": "service",
      "name": "userapi"
    }
  }
]
...
The route defines which requests to the application are forwarded to the API. The route is matched with the
path from a request. All GET requests with paths that start with the path from the route are forwarded to the
API.
Example
With the following configuration, all GET requests with paths starting with /services/userapi are
forwarded to the user API.
...
"routes": [
  {
    "path": "/services/userapi",
    "target": {
      "type": "service",
      "name": "userapi"
    }
  }
]
...
The User API service provides the following endpoints:
● /currentUser
● /attributes
The User API requires authentication. The user is logged on automatically even if the authenticationMethod
property is set to "none" in the neo-app.json file.
Calling the /currentUser endpoint returns a JSON object that provides the user ID and additional
information of the logged-on user. The table below describes the properties contained in the JSON object and
specifies the principal attribute used to compute this information.
The /currentUser endpoint maps a default set of attributes. To retrieve all attributes, use the /attributes
endpoint as described in User Attributes.
Example
A sample URL for the route defined above would look like this: /services/userapi/currentUser.
{
  "name": "p12345678",
  "firstName": "John",
  "lastName": "Doe",
  "email": "[email protected]",
  "displayName": "John Doe (p12345678)"
}
User Attributes
The /attributes endpoint returns the principal attributes of the current user as a JSON object. These
attributes are received as SAML assertion attributes when the user logs on. To make them visible, define a
mapping within the trust settings of the SAP Cloud Platform cockpit, see Configure User Attribute Mappings
[page 2415].
Example
A sample URL for the route defined above would look like this: /services/userapi/attributes.
If the principal attributes firstname, lastname, companyname, and organization are present, an
example response may return the following user data:
{
  "firstname": "John",
  "lastname": "Doe",
  "companyname": "Doe Enterprise",
  "organization": "Customer sales and marketing"
}
For some endpoints, you can use query parameters to influence the output behavior of the endpoint. The
following table shows which parameters exist for the /attributes endpoint and how they impact the outputs.
URL Parameter: multiValuesAsArrays
Type/Unit: Boolean
Default Value: false
Recommended Value: true
Behavior: If set to true, multivalued attributes are formatted as JSON arrays. If set to false, only the first value
of the entire value range of the specific attribute is returned and formatted as a simple string.
Note
If set to true for an attribute that is not multivalued, the value of the attribute is formatted as a simple
string and not a JSON array.
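As an illustration of the difference, a multivalued attribute (the attribute name groups and its values are invented for this example) would be returned like this when calling /services/userapi/attributes?multiValuesAsArrays=true:

```json
{
  "firstname": "John",
  "groups": ["Employees", "Administrators"]
}
```

With the default setting (multiValuesAsArrays=false), the same attribute would be returned as the simple string "Employees", that is, only the first value.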
You can either display the default Welcome file or specify a different file as the Welcome file.
If the application is accessed only with the domain name in the URL, that is without any additional path
information, then the index.html file that is located in the root folder of your repository is delivered by
default. If you want to deliver a different file, configure it in the neo-app.json file using the welcomeFile
parameter. With the additional sendWelcomeFileRedirect parameter, you specify whether a redirect is sent to
the Welcome file or whether the Welcome file is delivered without a redirect. If this option is set, then instead of
serving the Welcome file directly under /, the HTML5 application sends a redirect to the welcomeFile location.
With that, relative links in a Welcome file that is not located in the root directory will work.
To configure the Welcome file, add a JSON string with the following format to the neo-app.json file:
...
"welcomeFile": "<path to Welcome file>",
"sendWelcomeFileRedirect": <true | false>
...
Example
An example configuration, which forwards requests without any path information to an index.html file in
the /resources folder would look like this:
"welcomeFile": "/resources/index.html",
"sendWelcomeFileRedirect": true
To trigger a logout of the logged-in user, you can configure a logout page in the application descriptor.
When executing a request to the configured logout page, the server triggers a logout. This results in a response
containing a logout request that is sent to the identity provider (IdP) to invalidate the user's session on the IdP.
After the user is logged out from the IdP, the configured logout page is called again. Now, the content of the
logout page is served. The logout page is always unprotected, independent of the authentication method of the
application and independent of additional security constraints. In case additional resources, for example,
SAPUI5, are referenced from the logout page, those resources have to be unprotected as well.
For information on how to configure certain paths as unprotected, see Authentication [page 1718] and
Authorization [page 1719].
Because non-active application versions always require authentication, a logout is only triggered for the active
application version. For non-active application versions the logout page is served without triggering a logout.
To configure a logout page for your application, use the following format in the neo-app.json file:
...
"logoutPage": "<path to logout page>"
...
Example
...
"logoutPage": "/logout.html"
...
To improve the performance of your application you can control the Cache-Control headers, which are
returned together with the static resource of your application.
You can configure caching for the complete application, for dedicated paths, or resources of the application. If
the path you specify ends with a slash character (/), all resources in the given directory and its sub-directories
are matched. You can also specify the path using wildcards, for example, the path **.html matches all
resources ending with .html. Only the first caching directive that matches an incoming request is applied. For
example, a directive for the path **.css hides later directives for more specific paths such as
/resources/custom.css.
With the directive property, you specify whether public proxies can cache the resources. The possible values
for the directive property are:
● public
The resource can be cached regardless of your response headers.
● private
Your resource is stored by end-user caches, for example, the browser's internal cache only.
● none
This is the default value; no additional directive is sent.
...
"cacheControl": [
  {
    "path": "<optional path of resources to be cached>",
    "directive": "none | public | private",
    "maxAge": <lifetime in seconds>
  }
]
...
Example
An example configuration that caches all static resources for 24 hours looks like this:
...
"cacheControl": [
  {
    "maxAge": 86400
  }
]
...
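Because only the first matching directive is applied, list more specific paths before broader wildcard patterns. A sketch of this ordering (the paths, directives, and lifetimes are examples only, not defaults):

```json
"cacheControl": [
  {
    "path": "/resources/custom.css",
    "directive": "private",
    "maxAge": 3600
  },
  {
    "path": "**.css",
    "directive": "public",
    "maxAge": 86400
  }
]
```

If the **.css entry were listed first, it would hide the /resources/custom.css entry, as described above.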
For security reasons, not all HTTP headers are forwarded from the application to a backend or from the
backend to the application.
The following HTTP headers are forwarded automatically without any additional configuration because they are
part of the HTTP standard:
● Accept
● Accept-Charset
● Accept-Language
● Accept-Range
● Age
● Allow
● Authorization
● Cache-Control
● Content-Language
● Content-Location
● Content-Range
● Content-Type
● Date
Additionally, the following HTTP headers are transferred automatically because they are frequently used by
Web applications and (SAP) servers:
● Content-Disposition
● Content-MD5
● DataServiceVersion
● DNT
● MaxDataServiceVersion
● Origin
● RequestID
● Sap-ContextId
● Sap-Message
● Sap-Messages
● Sap-Metadata-Last-Modified
● SAP-PASSPORT
● Slug: For more information, see Atom Publishing Protocol .
● X-CorrelationID
● X-CSRF-TOKEN
● X-dynaTrace
● X-Forwarded-For
● X-HTTP-Method
● X-Requested-With
If you need additional HTTP headers to be forwarded to or from a backend request or backend response, add
the header names in the following format to the neo-app.json file:
Example
An example configuration that forwards the additional headers X-Custom1 and X-Custom2 looks like this:
Excluded Headers
● Cookie
● Cookie2
● Content-Length
● Accept-Encoding
Cookies are used for user session identification and therefore should not be shared. The system stores cookies
sent by a backend in the session and removes them from the response before forwarding to the user. With the
next request to the backend the stored cookies are added again.
The Content-Length header cannot be whitelisted as the value is recalculated on demand matching the
content of the given request or response.
Custom response headers are added to an application, for example, to comply with security standards.
To set default HTTP response headers that are returned with responses, add the header names and values in
the following format to the neo-app.json file:
Note
If backend responses already contain the same response headers as defined in the neo-app.json file, those
values are not overridden.
Custom Response headers are not supported when running your application from SAP Web IDE.
"responseHeaders": [
  {
    "headers": [
      {
        "name": "header name",
        "value": "header value"
      }
    ]
  }
],
Sample Code
"responseHeaders": [
  {
    "headers": [
      {
        "name": "Content-Security-Policy",
        "value": "default-src 'self'"
      }
    ]
  }
],
Note
The Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain
types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for
everything from data theft to site defacement or distribution of malware.
This document contains references to API documentation to be used for development with SAP Cloud
Platform.
The Java API documentation for the Neo environment is provided as part of the downloadable SDK archives. To
get to it, do the following:
1. Install the SDK for Neo environment of your choice . See Install the SAP Cloud Platform SDK for Neo
Environment [page 1403].
REST APIs
Monitoring API v2
Keystore API
Platform APIs of SAP Cloud Platform are protected with OAuth 2.0 client credentials. Create an OAuth client
and obtain an access token to call the platform API methods.
Context
For a description of the OAuth 2.0 client credentials grant, see the OAuth 2.0 specification.
For a detailed description of the available methods, see the respective API documentation.
Tip
Do not request a new OAuth access token for each platform API call. Instead, reuse the same access token
throughout its validity period, until you get a response indicating that the access token needs to be
re-issued.
Context
The OAuth client is identified by a client ID and protected with a client secret. In a later step, those are used to
obtain the OAuth API access token from the OAuth access token endpoint.
Procedure
1. In your Web browser, open the Cockpit. See SAP Cloud Platform Cockpit [page 1006].
Caution
Make sure you save the generated client credentials. Once you close the confirmation dialog, you
cannot retrieve the generated client credentials from SAP Cloud Platform.
Context
To obtain the OAuth API access token, call the OAuth access token endpoint and use the client ID and client
secret as user and password for HTTP Basic Authentication. You will receive the access token as a response.
By default, the access token received in this way is valid for 1500 seconds (25 minutes). You cannot configure
its validity length.
If you want to revoke the access token before its validity ends, delete the respective OAuth client. The access
token remains valid up to 2 minutes after the client is deleted.
Procedure
1. Send a POST request to the OAuth access token endpoint. The URL is landscape specific, and looks like
this:
The parameter grant_type=client_credentials notifies the endpoint that the Client Credentials flow is used.
2. Get and save the access token from the received response from the endpoint.
The response is a JSON object whose access_token parameter contains the access token. It is valid for the
time (in seconds) specified in the expires_in parameter (default value: 1500 seconds).
Example
Retrieving an access token on the trial landscape will look like this:
POST https://api.hanatrial.ondemand.com/oauth2/apitoken/v1?grant_type=client_credentials
Headers:
Authorization: Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ
{
  "access_token": "51ddd94b15ec85b4d54315b5546abf93",
  "token_type": "Bearer",
  "expires_in": 1500,
  "scope": "hcp.manageAuthorizationSettings hcp.readAuthorizationSettings"
}
// Sketch: urlConnection is an HttpURLConnection that was opened on the
// OAuth access token endpoint URL
urlConnection.setRequestMethod("POST");
urlConnection.setRequestProperty("Authorization",
    "Basic <Base64 encoded representation of {clientId}:{clientSecret}>");
urlConnection.connect();
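The Authorization header in the example above is simply the Base64 encoding of clientId:clientSecret. A minimal Node.js sketch using the placeholder credentials decoded from the example header (not real credentials):

```javascript
// Build the HTTP Basic Authorization header for the token request.
// "yourClientID" / "yourClientSecret" are the placeholder values encoded in
// the example header above, not real credentials.
const clientId = "yourClientID";
const clientSecret = "yourClientSecret";

const authorizationHeader =
  "Basic " + Buffer.from(`${clientId}:${clientSecret}`).toString("base64");

console.log(authorizationHeader);
// Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ= (note the trailing "=" padding)
```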
Procedure
In the requests to the required platform API, include the access token as a header with name Authorization and
value Bearer <token value>.
Example
GET https://api.hanatrial.ondemand.com/authorization/v1/accounts/p1234567trial/users/roles/?userId=myUser
Headers:
Authorization: Bearer 51ddd94b15ec85b4d54315b5546abf93
In the Neo environment, you enable services in the SAP Cloud Platform cockpit.
The cockpit lists all services grouped by service category. Some of the services are basic services, which are
provided with SAP Cloud Platform and are ready to use. In addition, extended services are available. A label on
the tile for a service indicates whether the service is enabled.
An administrator must first enable the service and apply the service-specific configuration (for example,
configure the corresponding roles and destinations) before any subaccount members can use it.
Note
Some services are exposed only for trial accounts. That means the services are not, or not yet, released for
use with a customer or partner account.
Some services are exposed only if your organization has purchased a license.
Remember
You can access most of the links only after the service has been enabled.
The configuration options for a service may look like the following example for the Portal service:
● To configure connection parameters to other systems (by creating connectivity destinations), choose
Configure <Portal Service> Destinations .
This option is available only if the service is enabled.
● To create custom roles and assign custom or predefined roles to individual users and groups, choose
Configure <Portal Service> Roles .
This option is available only if the service is enabled.
In the Neo environment, you might need to enable services before subaccount members can integrate them
with applications. Note that free services are always enabled.
Prerequisites
Procedure
1. Navigate to the subaccount in which you'd like to enable a service. For more information, see Navigate to
Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose Services.
3. Select the service and choose Enable.
In the Neo environment, you might need to disable services so that they are not available to subaccount
members.
Prerequisites
Procedure
1. Navigate to the subaccount in which you'd like to disable a service. For more information, see Navigate to
Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. In the navigation area, choose Services.
3. Select the service and choose Disable.
Note
○ If other services use the service, they may be negatively impacted when you disable it. Your service
documentation may provide information about services that are dependent on your service.
SAP Cloud Platform is the extension platform from SAP. It enables developers to implement loosely coupled
extension applications securely, adding workflows or modules on top of the existing SAP cloud solution they
already have.
● SAP Cloud Platform, Cloud Foundry environment | SAP S/4HANA Cloud | configured automatically* | Extending SAP S/4HANA Cloud Using SAP Cloud Platform Extension Factory
● SAP Cloud Platform, Cloud Foundry environment | SAP SuccessFactors | configured automatically* | Extending SAP SuccessFactors Using SAP Cloud Platform Extension Factory
● SAP C/4HANA Foundation | SAP C/4HANA Cloud | configured automatically* | Extending SAP C/4HANA Products Using SAP Cloud Platform Extension Factory
● SAP Cloud Platform, Cloud Foundry environment | SAP Cloud for Customer | configured manually | Extending SAP Cloud for Customer on SAP Cloud Platform Cloud Foundry Environment Manually
● SAP Cloud Platform, Neo environment | SAP Cloud for Customer | configured manually* | Extending SAP Cloud for Customer on SAP Cloud Platform Neo Environment
● SAP Cloud Platform, Neo environment | SAP Ariba | configured manually* | Extending SAP Ariba on SAP Cloud Platform Neo Environment
All standard SAP solutions are offered with customizing capabilities. Additionally, customers often have their
own requirements for innovative or industry-specific extensions and the SAP Cloud Platform extension
capability can help them build, deploy, and operate their new functionalities easily and securely.
If you use SAP Cloud Platform, Cloud Foundry environment, you have these options to extend your SAP
solution:
● SAP Cloud Platform Extension Factory: applicable for SAP S/4HANA Cloud and SAP SuccessFactors
● Extensions with manual configurations: applicable for SAP Cloud for Customer, SAP S/4HANA Cloud, and
SAP SuccessFactors
● Extensibility framework: simplified, standardized, and unified extensibility and configuration for the SAP
solutions.
● Central catalog: a central repository per customer for all connected SAP systems, storing data such as
APIs, events, and credentials. You have business services and actionable events across end-to-end
business processes.
There is an automated configuration for extending these SAP solutions on SAP Cloud Platform, Neo
environment:
Extending SAP Cloud for Customer on SAP Cloud Platform allows you to implement additional workflows or
modules on top of the SAP Cloud for Customer benefiting from out-of-the-box security, inherited data access
governance, user interface embedding, and others.
In the SAP Cloud for Customer extensions scenarios, these are the important aspects:
● The Extension Applications for SAP Cloud for Customer are hosted or subscribed in a dedicated SAP Cloud
Platform subaccount to ensure the consistency in the integration configuration between the two solutions.
The purpose of the subaccount is to hold the common integration configurations for all extension
applications.
● The single sign-on configuration between the SAP Cloud for Customer and the SAP Cloud Platform
ensures the secure and consistent data access for the extension application.
● OAuth connectivity configuration for enabling the use of SAP Cloud for Customer OData APIs.
● Configuration of the HTML mashups in SAP Cloud for Customer helps with embedding the extension
application UI directly in the SAP Cloud for Customer screens and offers the same look and feel to the end
users.
SAP Cloud Platform enables you to create SAP S/4HANA Cloud side-by-side extensions: they extend the SAP
S/4HANA Cloud functionality but reside on the cloud platform.
To do that, you need to own an SAP S/4HANA Cloud tenant. The authentication against the SAP S/4HANA
Cloud tenant is based on the SAP Cloud Platform Identity Authentication tenant that is provided together with
the SAP S/4HANA Cloud tenant. Typically, you will configure this Identity Authentication tenant to forward
authentication requests to your corporate identity provider.
Extending SAP SuccessFactors on SAP Cloud Platform allows you to broaden the SAP SuccessFactors scope
with applications running on the platform. This makes it quick and easy for companies to adapt and integrate
SAP SuccessFactors cloud applications to their existing business processes, thus helping them maintain a
competitive advantage, engage their workforce, and improve their bottom line.
Note
You can integrate an extension subaccount that is part of a customer or partner SAP Cloud Platform global
account only. This functionality is not available for trial accounts.
Learn how to manage and configure global accounts and subaccounts, as well as how to operate your
applications in the different environments.
Learn about the different account administration and application operation tasks which you can perform in the
Cloud Foundry environment.
Learn about frequent administrative tasks you can perform using the SAP Cloud Platform cockpit.
Related Information
Account Administration in the Cloud Foundry Command Line Interface [page 1768]
Account Administration in the Neo Console Client [page 1926]
Your SAP Cloud Platform global account is the entry point for managing the resources, landscape, and
entitlements for your departments and projects in a self-service manner.
Set up your account model by creating subaccounts in your enterprise account. You can create any number of
subaccounts in any environment (Neo, Cloud Foundry, and ABAP) and region.
Change the display name for the global account using the SAP Cloud Platform cockpit.
Prerequisites
You are a member of the global account that you'd like to edit.
Context
The overview of global accounts available to you is your starting point for viewing and changing global account
details in the cockpit. To view or change the display name for a global account, trigger the intended action
directly from the tile for the relevant global account.
Procedure
1. Choose the global account for which you'd like to change the display name and choose the edit action on its tile.
A dialog opens with the mandatory Display Name field that is to be changed.
2. Enter the new human-readable name for the global account.
3. Save your changes.
Context
You edit a subaccount by choosing the relevant action on its tile. It's available in the global account view, which
shows all its subaccounts.
The subaccount technical name is a unique identifier of the subaccount on SAP Cloud Platform that is
generated when the subaccount is created.
Procedure
1. Choose the subaccount for which you'd like to make changes and choose the edit action on its tile.
You can view more details about the subaccount such as its description and additional attributes by
choosing Show More.
2. Make your changes and save them.
Prerequisites
You cannot delete the last remaining subaccount from the global account.
Procedure
Note
For any SAP Cloud Platform instance not running on AWS, Azure, or GCP regions, all references to SAP in
this topic are the responsibility of the respective cloud operator.
When the contract of an SAP Cloud Platform customer ends, SAP is legally obligated to delete all the data in
the customer’s accounts. The data is physically and irreversibly deleted, so that it cannot be restored or
recovered by reuse of resources.
The termination process is triggered when a customer contract expires or a customer notifies SAP that they
wish to terminate their contract.
1. SAP sends a notification e-mail of the expiring contract, and the termination date on which the account will
be closed.
2. A banner is displayed in the cockpit during the 30-day validation period before the global account is closed.
During this period, the customer can export their data, or they can open an incident to download their data
before the termination date.
The customer can cancel the termination during this period and renew their contract with SAP.
3. After the termination date, a grace period of 30 days begins.
4. During the grace period:
○ Access is blocked to the account, and to deployed and subscribed applications.
○ No data is deleted, and backups are ongoing.
○ The global account tile is displayed on the Global Accounts page of the cockpit with the label Expired.
Clicking the tile displays the number of days left in the grace period.
Related Information
In the Cloud Foundry environment, manage orgs and spaces using the SAP Cloud Platform cockpit.
To administer your Cloud Foundry environment, navigate to orgs and spaces in the SAP Cloud Platform cockpit.
Prerequisites
● Sign up for an enterprise or a trial account and receive your logon data.
For more information, see Get a Free Trial or Purchase a Customer Account.
● Create the org or space to which you want to navigate.
For more information, see Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP
Regions] and Create Spaces [page 1754].
Procedure
Org Navigate to the subaccount that contains the Cloud Foundry org by following the
steps described in Navigate to Global Accounts and Subaccounts [AWS, Azure, or
GCP Regions]. If you've already enabled the Cloud Foundry environment in your
subaccount, you see the name of your organization, the number of its spaces and
members, and its API endpoint on the Overview page of your subaccount. If you
haven't enabled Cloud Foundry yet, choose Enable Cloud Foundry to create a
Cloud Foundry org.
Note
Note that your subaccount and your org have a 1:1 relationship. They have the
same name and therefore also the same navigation level in the cockpit.
Space 1. Navigate to the subaccount that contains the space you'd like to navigate to
by following the steps described in Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
2. On the subaccount Overview page, you see a table with all your spaces.
Choose the space you want to navigate to.
Once a subaccount is created in the Cloud Foundry environment, you must create an organization in order to use it.
Procedure
Administrators of a global account who are members of a subaccount in the Cloud Foundry environment can
delete an organization assigned to this subaccount using the cockpit. Once the organization is deleted, you can
create a new organization.
Prerequisites
You are a global account administrator, as well as a member of the subaccount containing the organization you
want to delete.
Note
You can only delete an organization using the cockpit. You cannot use the Cloud Foundry CLI to perform
this task.
Procedure
Results
The organization is deleted. All data in the organization, including spaces, applications, service instances, and member information, is lost. You can now choose Enable Cloud Foundry to create a new organization.
Related Information
Create spaces in your Cloud Foundry organization using the SAP Cloud Platform cockpit.
Prerequisites
You have a Cloud Foundry organization in your Cloud Foundry subaccount, and you have the Org Manager role
in that organization.
Procedure
1. Navigate to the subaccount that contains the Cloud Foundry organization in which you'd like to create a
space.
2. On the subaccount Overview page, you have a Spaces table. Choose Create Space in the top right-hand corner of that table.
3. Enter a space name and choose the permissions you'd like to assign to your ID, then choose Save.
Next Steps
To assign quota to spaces, see Change Space Quota Plans [page 1759].
You can change the name of a space by going to the Spaces page from the left-hand navigation and choosing (Edit) on the tile of that space. You can also create additional spaces from there.
Related Information
Delete spaces in your Cloud Foundry organization using the SAP Cloud Platform cockpit.
Prerequisites
● You have the Org Manager role in the organization from which you want to delete a space. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● You have ensured that the data in the space you are going to delete is no longer needed.
Procedure
1. Navigate to the Spaces page in your Cloud Foundry subaccount and choose (Delete) on the tile of the
space you want to delete.
2. Confirm your change.
Related Information
When you purchase an enterprise account, you are entitled to use a specific set of resources, such as the
amount of memory that can be allocated to your applications.
An entitlement equals your right to provision and consume a resource. A quota represents the numeric
quantity that defines the maximum allowed consumption of that resource.
Entitlements and quotas are managed at the global account level, distributed to subaccounts, and consumed
by the subaccounts. When quota is freed at the subaccount level, it becomes available again at the global
account level.
Note
Only global account administrators can configure entitlements and quotas for subaccounts.
There are two places in the cockpit where you can view and configure entitlements and quotas for
subaccounts:
Depending on your permissions, you may only have access to one of these pages. You can find more details in
the table below:
In the Cloud Foundry environment, you can further distribute the quotas that are allocated to a subaccount.
This is done by creating space quota plans and assigning them to the spaces. For more information on space
quota plans in the Cloud Foundry environment, see https://docs.cloudfoundry.org/adminguide/quota-
plans.html .
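As a sketch of the CLI alternative, space quota plans can also be created and assigned with the standard Cloud Foundry quota commands; the plan and space names below are examples, not from this guide.

```shell
# Sketch: create a space quota plan, assign it to a space, and list plans.
# "dev-quota" and "dev" are example names.
cf create-space-quota dev-quota -m 2G -r 20 -s 10   # 2 GB memory, 20 routes, 10 service instances
cf set-space-quota dev dev-quota                    # apply the plan to the "dev" space
cf space-quotas                                     # list the space quota plans in the current org
```

The dedicated sections on managing quota plans via the command line interface are referenced later in this guide.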
Assign entitlements to subaccounts by adding service plans and distribute the quotas available in your global
account to your subaccounts using the SAP Cloud Platform cockpit.
Prerequisites
Context
You can distribute entitlements and quotas across subaccounts within a global account from two places in the
cockpit:
● The Entitlements Subaccount Assignments page at global account level (only visible to global
account administrators)
● The Entitlements page at subaccount level (visible to all subaccount members, but only editable by global
account administrators)
For more information, see Managing Entitlements and Quotas Using the Cockpit [page 1755].
Procedure
Tip
If your global account contains more than 20 subaccounts, open the value help dialog. There you can filter subaccounts by role, environment, and region to make your selection easier and faster. You can select a maximum of 50 subaccounts at once.
A table is displayed for each of the subaccounts you selected, showing the current entitlement and quota assignments.
5. Choose Configure Entitlements to start editing entitlements for a particular subaccount.
Note
Action Steps
Add new service plans to the subaccount Choose Add Service Plans and from the dialog select the
services and the plans from each service that you would
like to add to the subaccount.
Edit the quota for one or more service plans Use the + and - buttons to increase or decrease the quota for each service plan.
Delete a service plan and its quota from the subaccount Choose (Delete) from the Actions column.
7. Once you're done, choose Save to save the changes and exit edit mode for that subaccount.
8. Optional: Repeat steps 5 to 7 to configure entitlements for the other subaccounts selected.
Prerequisites
You have the Org Manager role for the org in which you want to create a space quota plan.
Procedure
1. Navigate to the subaccount that contains the spaces to which you want to add quotas.
Note
The org quota limit is applicable for a resource if you do not enter a space quota limit. If the space
quota limit for a resource exceeds the org quota limit, the org quota limit applies.
You can use the SAP Cloud Platform cockpit to assign quota plans to spaces.
Prerequisites
● You have the Org Manager role for the org in which you want to create a space quota plan.
● You have at least one space quota plan. For more information, see Create Space Quota Plans [page 1757].
Procedure
1. Navigate to the subaccount that contains the spaces to which you want to add quotas.
2. In the navigation menu, choose Quota Plans.
3. In the Plan Assignment section, select a quota plan for your spaces.
Manage space quota plans in the Cloud Foundry environment using the SAP Cloud Platform cockpit.
Prerequisites
You have the Org Manager role for the organization in which you want to change space quota plans.
Procedure
1. Choose the subaccount that contains the spaces for which you'd like to change the quota.
2. In the navigation menu, choose Quota Plans.
Related Information
Change Space Quota Plans Using the Command Line Interface [page 1802]
Create Space Quota Plans [page 1757]
Assign Quota Plans to Spaces [page 1758]
Change Space Quota Plans [page 1759]
Create Space Quota Plans Using the Cloud Foundry Command Line Interface [page 1801]
Assign Quota Plans to Spaces Using the Cloud Foundry Command Line Interface [page 1801]
Change Space Quota Plans Using the Command Line Interface [page 1802]
Configure Entitlements and Quotas for Subaccounts [page 1756]
You can add members to your global account, orgs, and spaces and assign different roles to them:
Related Information
A member is a user who is assigned to an SAP Cloud Platform global account or subaccount. A member
automatically has the permissions required to use the SAP Cloud Platform functionality within the scope of the
respective accounts and as permitted by their account member roles.
You manage users at global account level. For more information, see Add Members to Your Global Account
[AWS, Azure, or GCP Regions].
In the Cloud Foundry environment, you also manage members at subaccount, org, and space level.
Related Information
Roles determine which features in the cockpit users can view and access, and which actions they can initiate.
SAP Cloud Platform includes predefined roles that are specific to the navigation level in the cloud cockpit; for
example, the roles at the level of the organization differ from the ones for the space. A user can be assigned one
or more roles, where each role comes with a set of permissions. Roles apply to all operations that are
associated with the global account, the organization, or the space, irrespective of the tool used (Eclipse-based
tools, cockpit, and cf CLI).
The following roles can be assigned to users in the Cloud Foundry environment on SAP Cloud Platform:
Global account Global Account Administrator A Global Account Administrator has permission to add members to the global account.
Note
The Administrator role is automatically assigned to the user who
has started a trial account or who has purchased resources for
an enterprise account.
Restriction
You can add members to global accounts only in enterprise accounts, that is, not in trial accounts.
Space Developer
Space Auditor
Related Information
The default platform identity provider and application identity provider of SAP Cloud Platform is SAP ID
service.
Context
Trust to SAP ID service in your subaccount is pre-configured in both the Neo and the Cloud Foundry
environment of SAP Cloud Platform by default, so you can start using it without further configuration.
Optionally, you can add additional trust settings or set the default trust to inactive, for example if you prefer to
use another SAML 2.0 identity provider. Using the SAP Cloud Platform cockpit, you can make changes by navigating to your respective subaccount and choosing Security > Trust Configuration for Cloud Foundry, or Security > Authorization for Neo.
If you want to add new users to a subscribed app, or if you want to add users to a service, such as Web IDE, you
can add those users to SAP ID service in your subaccount. See Add Users to SAP ID Service in the Cloud
Foundry Environment [page 1763].
Note
If you want to use a custom IdP, you must establish trust to your custom SAML 2.0 identity provider. We
describe a custom trust configuration using the example of SAP Cloud Platform Identity Authentication
service.
For more information, see Trust and Federation with SAML 2.0 Identity Providers [page 2281].
Related Information
Before you can assign roles or role collections to a user in SAP ID service, you have to ensure that this user is assigned to SAP ID service.
Prerequisites
The user you want to add to SAP ID service must have an SAP user account (for example, an S-user or P-user).
For more information, see Create SAP User Accounts [page 1764].
Context
When you create your own trial account, your SAP user is automatically created and assigned to SAP ID
service. But when you onboard new members to your subscribed app, you must add them to your subaccount
and ensure that they are also added to SAP ID service. Then you can assign a role collection to a user in SAP ID
service.
Procedure
Remember
If the user identifier you entered does not have an SAP user account or has never logged on to an
application in this subaccount, SAP Cloud Platform cannot automatically verify the user name. To avoid
mistakes, you must ensure that the user name is correct and that it exists in SAP ID service.
Related Information
If you want to add users to SAP ID service in your subaccount, you must ensure that they have an SAP user
account.
Context
If you register for an SAP Cloud Platform trial account at https://cockpit.hanatrial.ondemand.com, you
automatically get a user in SAP ID service. But if you want to add other users to SAP ID service in your
subaccount, you must ensure that they have an SAP user account (for example, an S-user or P-user). This
could be the case if you wanted to add new users to a subscribed app in your subaccount.
Procedure
You can add organization members and assign roles to them at the subaccount level in the cockpit.
Prerequisites
You have the Org Manager role for the org in question.
Note
You automatically have the Org Manager role in a subaccount that you created.
Procedure
1. Choose the subaccount that contains the org to which you'd like to add members.
2. In the navigation area, choose Members.
All members currently assigned to the organization are shown in a list.
Next Steps
● To select or unselect roles for a member, choose (Edit). The changes you make to the roles of a member
take effect immediately.
● To remove all the roles of a member, choose (Delete). This removes the member from the organization.
Note
You can only remove members from the organization that are no longer assigned to any space in the
organization.
Related Information
You can add space members and assign roles to them at the space level in the cockpit.
Prerequisites
Note
You must add e-mail addresses of registered members who have an S-user or a P-user (normally used
for trial accounts). Administrators can request S-user IDs on the SAP ONE Support Launchpad User
Management application: 1271482 .
If users do not have a registered S-user ID, they can register for a P-user on sap.com .
Space members are organization members who have specific roles in a space.
Organization members can only be added by an Organization Manager. This means that if you only have the
Space Manager role, you cannot add space members that are not known to the organization. To do that, you
must first ask your Organization Manager to add the users as organization members with no roles. Once this is
done, you can add them as space members following the steps below.
If you are the Organization Manager, you do not need to first add the users as organization members with no
roles. Since you have the permissions necessary to add users to the organization, when you add a new user as
a space member, that user automatically becomes part of the organization as well.
Procedure
1. Navigate to the space to which you'd like to add members. For more information, see Navigate to Orgs and
Spaces [page 1751].
2. In the navigation area, choose Members.
All members currently assigned to the space are shown in a list.
3. Choose Add Members.
4. Enter one or more e-mail addresses.
You can use commas, spaces, semicolons, or line breaks to separate members.
5. Select the roles for the users and save your changes.
Next Steps
● To select or unselect roles for a member, choose (Edit). The changes you make to the roles of a member
take effect immediately.
● To remove all the roles of a member, choose (Delete). This removes the member from the space.
Related Information
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
You can explore, compare, and analyze all your usage data for the services and applications that are available in
your subaccount.
To monitor and track usage in a subaccount, open the subaccount in the cockpit and choose Usage Analytics in
the navigation area.
The Usage Analytics page contains views that display usage at different levels of detail.
View Description
Subaccount Displays high-level usage information for your subaccount relating to services, business application subscriptions, and Cloud Foundry org spaces.
Some of the information displayed in this view is shown only if you are a global account administrator. The information you see varies depending on the environment of the subaccount. For example, information about spaces is displayed only for subaccounts in the Cloud Foundry environment.
Services Displays usage per service plan for the region of the subaccount, and the selected metric and
period. Information is shown for all services whose metered consumption in the subaccount is
greater than zero.
Spaces (Cloud Foundry environment only) Displays service plan usage per space for the services shown
in the Services view.
In the Services and Spaces views, the usage information is displayed in both tabular and chart formats.
● The tables present accumulated usage values based on the aggregation logic of each service plan and its
metric over the selected time period. The usage values are broken down by resource.
● The charts present usage values for one or more service plans or spaces that you select in the adjacent
tables. The resolution of the charts is automatically set to days, weeks, or months, depending on the range
of the selected time period.
● In the Services and Spaces views, select a table row to display the usage information in the chart. You can
also select multiple rows to compare usage in the following ways:
○ In the Services view, you can compare usage between service plans in the same service. Multi-row
selection in the table is possible only when you have selected a single service in the Service filter and
the row items share the same metric.
○ In the Spaces view, you can compare usage between spaces for the same service and service plan.
Multi-row selection in the table is possible only when you have selected a single service plan in the
Service Plan filter.
● In the charts, you can view the data as a column chart or as a line chart.
Use the filters in the Services and Spaces views to choose which usage information to display.
The Spaces view inherits the filter settings, except for the Period, from the Services view above it. In other
words, when you modify filters in the Services view, it affects the information that is displayed in the Spaces
view. You can apply the Period filter independently to each view.
In the Services view, you can apply the Metric filter only when you have selected a service with more than one
metric.
Click the (Reset) icon in each view to reset the filters to their default settings. If you reset the filters in the Services view, the filters in the Spaces view are also reset.
Related Information
Use the Cloud Foundry command line interface (CF CLI) for managing subaccounts in the Cloud Foundry
environment, such as creating orgs and spaces, or managing quota.
● Download and Install the Cloud Foundry Command Line Interface [page 1769]
● CF CLI: Plug-ins [page 1771]
● Create Spaces Using the Cloud Foundry Command Line Interface [page 1798]
Find out how to get and use the Cloud Foundry command line interface.
Related Information
Download and Install the Cloud Foundry Command Line Interface [page 1769]
Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 1770]
CF CLI: Plug-ins [page 1771]
Download and set up the Cloud Foundry Command Line Interface (cf CLI) to start working with the Cloud
Foundry environment.
Procedure
1. Download the latest version of the cf CLI from GitHub at the following URL: https://github.com/cloudfoundry/cli#downloads
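Once the installer has finished, you can confirm that the client is available on your PATH; this is a generic check, not an SAP-specific step.

```shell
cf version   # prints the installed cf CLI client version
cf help -a   # lists all available commands, confirming the installation works
```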
Use the Cloud Foundry Command Line Interface (cf CLI) to log on to the Cloud Foundry space.
Prerequisites
● (Enterprise accounts only) You have created at least one subaccount and enabled the Cloud Foundry
environment in this subaccount. For more information, see Create a Subaccount in the Cloud Foundry
Environment [AWS, Azure, or GCP Regions].
Note
In a Cloud Foundry trial account, the Cloud Foundry environment has been activated for you
automatically and a first space "dev" has been created for you.
● You have downloaded and installed the cf CLI. For more information, see Download and Install the Cloud
Foundry Command Line Interface [page 1769].
Procedure
Note
There is no specific endpoint for trial accounts. Both enterprise and trial accounts use the same API
endpoints.
cf login
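A typical first logon might look as follows; the API endpoint shown is an example for one region (copy the actual endpoint from your subaccount's Overview page in the cockpit), and the org and space names are placeholders.

```shell
# Point the CLI at your region's API endpoint (example shown: eu10).
cf api https://api.cf.eu10.hana.ondemand.com
cf login                      # prompts for your e-mail address and password
cf target -o my-org -s dev    # select the org and space to work in
```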
A list of additional commands that have been implemented as plug-ins to extend the base CF CLI client.
Multitarget Application Commands for the Cloud Foundry Environment [page 1772]
Use the Multitarget application plug-in for the Cloud Foundry command line interface to deploy, remove, and
view MTAs, among other possible operations.
Note
Before using the extended commands, you need to install the MTA plug-in in the Cloud Foundry environment.
Related Information
The MultiApps plug-in (formerly known as the MTA plug-in) for the Cloud Foundry command line interface lets
you deploy, remove, and view MTAs, among other possible operations, by extending Cloud Foundry commands.
Prerequisites
You have installed the Cloud Foundry command line interface version 6.20 or higher.
Procedure
Output Code
cf install-plugin multiapps -f
Note
Alternatively, if you want to install a specific version of the plug-in, proceed as follows:
1. Download the preferred version of the plug-in that is compatible with your operating system.
2. Untar or unzip the downloaded archive if required.
3. To install the plug-in, enter the following command:
You see a list of plug-ins that now includes the MTA plug-in for the Cloud Foundry command line interface.
The output also displays commands that are specific to this plug-in.
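The manual installation path for a specific plug-in version, sketched with an example file name (the actual binary name depends on your operating system and the version you downloaded):

```shell
# Install a locally downloaded plug-in binary and verify the result.
cf install-plugin ./multiapps-plugin.linux64 -f
cf plugins   # the list should now include the MultiApps commands (deploy, bg-deploy, mta, ...)
```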
Related Information
https://tools.hana.ondemand.com/#cloud
Download and Install the Cloud Foundry Command Line Interface [page 1769]
A list of additional commands to install archives and deploy Multitarget Applications (MTA) to the Cloud
Foundry environment.
Note
The expiration time for all Cloud Foundry operations is 3 days. If an operation is still active when the time limit is reached, it is automatically aborted.
download-mta-op-logs dmol Download the log files for one or more operations concerning Multitarget Applications
purge-mta-config Purge all configuration entries and subscriptions that are no longer valid
deploy
Usage
Interact with an active MTA deploy operation, for example, by performing an action:
Arguments
<MTA_ARCHIVE> The path to (and name of) the archive or the directory containing the Multitarget Application to deploy; the application archive must have the format (and file extension) mtar, for example, MTApp1.mtar.
-t <TIMEOUT> Specify the maximum amount of time (in seconds) that the deployment service waits before starting the deployed application
-v <VERSION_RULE> Specify the rule to apply to determine how the application version number is used to trigger an application-update deployment operation, for example: “HIGHER”, “SAME_HIGHER”, or “ALL”.
-i <OPERATION_ID> Specify the ID of the deploy operation that you want to perform an
action on
-a <ACTION> Specify the action to perform on the deploy operation, for example, “abort”, “retry”, “monitor”, or “resume”
--delete-service-keys Delete the existing service keys and apply the new ones
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the
deployment
Example
Two MTAs have one identical module that creates an application named backend when the MTA is deployed. If this common module has different parameters in the second MTA, upon deployment it overrides the backend application generated earlier by the first MTA. Thus, the second module might corrupt the application functionality, database schema, or declared resources, or even render the application inoperable.
Note
In cases where users might explicitly want two MTAs to reuse the same service instance, for example due to cost or quota restrictions, you can use the command line option --skip-ownership-validation to skip the validation. However, we strongly discourage using this option in production systems.
Example
cf deploy my.mtar --skip-ownership-validation
-m <module name> Deploy only the module with the specified name.
Note
Can be used multiple times.
--all-modules Deploy all modules in the MTA.
Note
Any -m options are ignored.
-r <resource name> Deploy only the resource with the specified name.
Note
Can be used multiple times.
--all-resources Deploy all resources in the MTA.
Note
Any -r options are ignored.
Example
cf deploy <mtar_name> --verify-archive-signature
Note
● If any of the options -m, --all-modules, -r, --all-resources are used, only the specified modules and resources are deployed. Otherwise, everything is deployed by default.
● If the options for module deployment (-m, --all-modules) are used, the modules need to contain a path element at the MTA module level, which points to the content of the module. If the module has a requires dependency section referring to a resource that needs a configuration file, the requires section should have a path parameter that points to the configuration file.
● If the options for resource deployment (-r, --all-resources) are used, any resources that have configuration files need to contain a path parameter in their parameters section, which points to the configuration file.
● The path element or parameter value should be relative to the MTA directory.
An example of an MTA deployment descriptor containing all variants of the path elements and parameters can be found at Defining Multitarget Application Deployment Descriptors for Cloud Foundry [page 1240].
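To illustrate the path rules above, here is a minimal, hypothetical descriptor fragment; the module, resource, type, and file names are invented for the example and are not from this guide.

```yaml
# Hypothetical mtad.yaml fragment illustrating path elements and parameters.
_schema-version: "3.1"
ID: com.example.demo
version: 1.0.0
modules:
  - name: backend
    type: java
    path: backend.war                  # module content, relative to the MTA directory
    requires:
      - name: backend-config
        parameters:
          path: config/backend.json    # configuration file for the required resource
resources:
  - name: backend-config
    type: configuration
    parameters:
      path: config/backend.json        # configuration file, relative to the MTA directory
```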
bg-deploy
“Blue-green” deployment is a release technique that helps to reduce application downtime and the resulting
risk by running two identical target deployment environments called “blue” and “green”. Only one of the two
target environments is “live” at any point in time, which makes it much easier to roll back to a previous version after a failed (or undesired) deployment.
cf bg-deploy <MTA_ARCHIVE>
[-e <EXT_DESCRIPTOR_1>[,<EXT_DESCRIPTOR_2>]]
[-u <URL>] [-t <TIMEOUT>] [-v <VERSION_RULE>]
[--no-start] [--use-namespaces] [--no-namespaces-for-services]
[--delete-services] [--delete-service-keys] [--delete-service-brokers]
[--keep-files] [--no-restart-subscribed-apps] [--no-confirm] [--do-not-fail-on-missing-permissions]
Interact with an active MTA deploy operation, for example, by performing an action:
Arguments
Command Arguments Overview
Argument Description
<MTA_ARCHIVE> The path to (and name of) the archive or the path to the directory containing the Multitarget Application to deploy. The application archive must have the format (and file extension) mtar, for example, MTApp1.mtar; the directory can be specified as a path (for example, myApp/) or as . (the current directory).
Options
Command Options Overview
Option Description
-t <TIMEOUT> Specify the maximum amount of time (in seconds) that the deployment service waits before starting the deployed application
-v <VERSION_RULE> Specify the rule to apply to determine how the application version number is used to trigger an application-update deployment operation, for example: “HIGHER”, “SAME_HIGHER”, or “ALL”.
-i <OPERATION_ID> Specify the ID of the deploy operation that you want to perform an
action on
-a <ACTION> Specify the action to perform on the deploy operation, for example, “abort”, “retry”, “monitor”, or “resume”
--delete-service-keys Delete the existing service keys and apply the new ones
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the
deployment
--no-confirm Use this option to turn off the manual confirmation for deleting the
older version of the MTA applications. Thus the deployment process
is performed from start to finish uninterrupted, and you are not
prompted to confirm the switch of routes to the new version of the
MTA applications.
Example
Two MTAs have one identical module that creates an application named backend when the MTA is deployed. If this common module has different parameters in the second MTA, upon deployment it overrides the backend application generated earlier by the first MTA. Thus, the second module might corrupt the application functionality, database schema, or declared resources, or even render the application inoperable.
Note
In cases where users might explicitly want two MTAs to reuse the same service instance, for example due to cost or quota restrictions, you can use the command line option --skip-ownership-validation to skip the validation. However, we strongly discourage using this option in production systems.
Example
cf deploy my.mtar --skip-ownership-validation
-m <module name> Deploy only the module with the specified name.
Note
Can be used multiple times.
--all-modules Deploy all modules in the MTA.
Note
Any -m options are ignored.
-r <resource name> Deploy only the resource with the specified name.
Note
Can be used multiple times.
--all-resources Deploy all resources in the MTA.
Note
Any -r options are ignored.
Example
cf bg-deploy <mtar_name> --verify-archive-signature
Note
● If any of the options -m, --all-modules, -r, --all-resources is used, only the specified modules and resources are deployed. Otherwise, everything is deployed by default.
● If the options for module deployment (-m, --all-modules) are used, each module needs to contain a path element on MTA module level, which points to the content of the module. If the module has a requires dependency section referring to a resource that needs a configuration file, then the requires section should have a path parameter, which points to the configuration file.
● If the options for resource deployment (-r, --all-resources) are used, then any resources that have configuration files need to contain a path parameter in their parameters section, which points to the configuration file.
● The path element or parameter value should be relative to the MTA directory.
An example of an MTA deployment descriptor containing all variants of the path elements and parameters can be found at Defining Multitarget Application Deployment Descriptors for Cloud Foundry [page 1240].
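The rules above can be illustrated with a hypothetical descriptor fragment; the module, resource, and file names are invented examples, not values from the product documentation:

```yaml
modules:
  - name: my-module
    type: javascript.nodejs
    path: my-module/                       # module content, relative to the MTA directory
    requires:
      - name: my-config-resource
        parameters:
          path: config/module-config.json  # configuration file for the required resource

resources:
  - name: my-config-resource
    parameters:
      path: config/resource-config.json    # used when deploying with -r / --all-resources
```

With such a layout, `cf deploy -m my-module` and `cf deploy -r my-config-resource` can each resolve the content they need from the MTA directory.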
undeploy
Usage
Undeploy an MTA.
cf undeploy <MTA_ID>
[-u <URL>] [-f]
[--delete-services] [--delete-service-brokers] [--no-restart-subscribed-apps]
[--do-not-fail-on-missing-permissions]
Arguments
Command Arguments Overview
Argument Description
MTA_ID The ID of the deployed MTA that you want to undeploy
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the service end-point that is to be used for the undeployment operation
-i <OPERATION_ID> Specify the ID of the undeploy operation that you want to perform an action on
-a <ACTION> Specify the action to perform on the undeploy operation, for example, “abort”, “retry”, or “monitor”
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the undeployment
mta
Display information about a Multitarget Application (MTA). The information displayed includes the requested
state, the number of instances, information about allocated memory and disk space, as well as details
regarding the bound service (and service plan).
cf mta MTA_ID
[-u <URL>]
Arguments
Command Arguments Overview
Argument Description
MTA_ID The ID of the MTA for which to display information
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain details of the selected MTA
mtas
Display information about all Multitarget Applications (MTAs) deployed in the current space.
Usage
cf mtas
[-u <URL>]
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain details of the deployed MTAs
mta-ops
Display information about all active operations for Multitarget Applications (MTA). The information includes
the ID, type, status, the time the MTA-related operation started, as well as the name of the user that started the
operation.
Usage
cf mta-ops
[-u <URL>]
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain details of the selected MTA operations
download-mta-op-logs
Download the log files for one or more operations concerning Multitarget Applications.
Usage
cf download-mta-op-logs
[-u <URL>]
[-i <OPERATION_ID>] [-d <DIRECTORY>]
Tip
You can use the alias dmol in place of the download-mta-op-logs command.
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain details of the selected MTA operations
-i <OPERATION_ID> Specify the ID of the MTA operation whose logs you want to download
-d <DIRECTORY> Specify the path to the location where you want to save the downloaded MTA operation logs; by default, the location is ./mta-op-<OPERATION_ID>/
purge-mta-config
Purge all configuration entries and subscriptions, which are no longer valid.
cf purge-mta-config
[-u <URL>]
Invalid configuration entries are often produced when the application that provides configuration entries is deleted without using the deploy service, for example, with the cf delete command. In this case, the configuration remains in the deploy-service database even though the corresponding application is no longer available. This can lead to a failure during subsequent attempts to resolve the configuration entries.
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to be used for the purge operation
5.1.2.1.3.1.2.1 Namespaces
To prevent name clashes for applications and services contained in different MTAs, but deployed in the same
space, the deployment service provides an option that enables you to add the MTA IDs in front of the names of
the applications and services contained in those MTAs.
Sample Code
ID: com.sap.xs2.sample
version: 0.1.0
modules:
- name: app
type: java.tomcat
resources:
- name: service
type: com.sap.xs.uaa
By default, the “use namespaces” feature is disabled. However, specifying the --use-namespaces option as part of the deploy command enables it. If you want namespaces to be used for applications, but not for services, use the --no-namespaces-for-services option in combination with --use-namespaces. For example, if you use the --use-namespaces and --no-namespaces-for-services options when deploying an MTA with the deployment descriptor shown in the sample code above, the deployment operation creates an application named “com.sap.xs2.sample.app” and a service named “service”.
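As a rough sketch (not the deploy service's actual implementation), the effect of the two options on the generated names can be expressed like this, with the flag variables standing in for --use-namespaces and --no-namespaces-for-services:

```shell
#!/bin/sh
# Sketch: how the namespace options affect generated application and service names.
# mta_id and the module/resource names come from the sample descriptor above.
mta_id="com.sap.xs2.sample"
use_namespaces=true              # --use-namespaces given
no_namespaces_for_services=true  # --no-namespaces-for-services given

app_name() {
  if [ "$use_namespaces" = true ]; then echo "${mta_id}.$1"; else echo "$1"; fi
}

service_name() {
  # Services get the prefix only if namespaces are on AND not disabled for services.
  if [ "$use_namespaces" = true ] && [ "$no_namespaces_for_services" != true ]; then
    echo "${mta_id}.$1"
  else
    echo "$1"
  fi
}

app_name app          # prints com.sap.xs2.sample.app
service_name service  # prints service
```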
The deployment service follows the rules specified in the Semantic Versioning Specification (SemVer) when comparing the version of a deployed MTA with the version that is to be deployed. If an MTA submitted for deployment has a version that is lower than or equal to an already deployed version of the same MTA, the deployment might fail if there is a conflict with the version rule specified in the deployment operation. The version rule is a parameter of the deployment process that specifies which relationships between the existing and the new version of an MTA are acceptable before proceeding with the deployment operation. The version rules supported by the deploy service are as follows:
● HIGHER - Only MTAs with versions that are higher than the version of the currently deployed MTA are accepted for deployment.
● SAME_HIGHER - Only MTAs with versions that are higher than (or the same as) the version of the currently deployed MTA are accepted for deployment. This is the default setting for the version rule.
● ALL - All versions are accepted for deployment.
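The three rules can be sketched as a small shell check; this is an illustration using sort -V for version comparison, not the deploy service's actual code:

```shell
#!/bin/sh
# Sketch of the version-rule check: exit status 0 = deployment accepted, 1 = rejected.
version_allowed() {
  rule=$1; deployed=$2; new=$3
  case $rule in
    ALL) return 0 ;;
    SAME_HIGHER)  # accept if new >= deployed
      [ "$(printf '%s\n' "$deployed" "$new" | sort -V | tail -n1)" = "$new" ] ;;
    HIGHER)       # accept only if new > deployed (strictly)
      [ "$new" != "$deployed" ] &&
      [ "$(printf '%s\n' "$deployed" "$new" | sort -V | tail -n1)" = "$new" ] ;;
    *) return 1 ;;
  esac
}

version_allowed HIGHER 1.2.0 1.2.1 && echo "deploy accepted"
version_allowed HIGHER 1.2.0 1.2.0 || echo "deploy rejected"
```

Note that sort -V performs a numeric version sort, which matches SemVer precedence for plain x.y.z versions but not for pre-release suffixes.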
The Service Fabrik CF CLI plugin is used for performing various backup and restore operations on service instances in Cloud Foundry, such as starting or aborting a backup, listing all backups, removing backups, and starting or aborting a restore.
This CF CLI plugin is only available for the Service Fabrik broker, so it can only be used with CF installations in which this service broker is available. You can list all available commands and their usage with cf backup. You can manage backups of service instances and restore them using the backup and restore functionality. To use the functionality, use the SAP Cloud Platform cockpit or the extended Cloud Foundry commands in the command line interface.
The Service Fabrik plugin lets you manage backups of a service instance by extending Cloud Foundry commands.
Prerequisites
You need to have the Cloud Foundry Command Line Interface (CF CLI) installed for the plugin to work, since it is built on the CF CLI. For CF CLI installation instructions, see the Related Information section. The minimum CF CLI version on which the plugin has been tested successfully is v6.20.
1. Download the latest version of the plugin that is compatible with your operating system. On the Web page,
you will find the plugin under the SAP Cloud Platform Cloud Foundry CLI Plugins section with the name
Service Fabrik based B&R.
2. Untar or unzip the downloaded archive.
3. Open the command line interface or terminal.
4. To install the plugin, enter the following command, replacing <path_to_plugin> with the path to the extracted plugin binary:
cf install-plugin <path_to_plugin>
Note
If you are reinstalling the plugin, first uninstall the previous version using: cf uninstall-plugin ServiceFabrikPlugin
5. Verify that the plugin has been installed successfully by entering cf plugins.
You see a list of plugins that now includes Service Fabrik. The output also displays commands that are
specific to the Service Fabrik plugin.
Related Information
https://tools.hana.ondemand.com/#cloud
Download and Install the Cloud Foundry Command Line Interface [page 1769]
The Service Fabrik plugin provides commands that support backup and restore operations.
The commands described below facilitate these operations:
Any of these commands can fail with Unauthorized Access if you do not have permission to access the space containing the service instance or the service instance itself; verify that you have the required permission.
● cf list-backup - Shows a list of all the service instance backups. These backups are specific to the space you are logged on to. If you have permission to access multiple spaces and need to view backups for a specific space, log on to that space in Cloud Foundry.
● cf list-backup <service_instance_name> - Shows a list of backups that are specific to the service instance within a space.
● cf list-backup --guid <service_instance_guid> - Shows the list of all backups for the given Service Fabrik service instance. The argument has to be the GUID of the service instance. (Works even for a deleted instance.)
● cf list-backup <service_instance_name> --deleted - Shows the list of all backups for a deleted Service Fabrik service instance. (Works only for a deleted service instance.)
● cf start-restore <service_instance_name> <backup_id> - Restores a service instance from the specified instance name and backup ID. Before providing a restore command, ensure that the backup is available for the service instance. You can verify the state of the restore process using cf restore <service_instance_name>. This command can additionally fail with Another concurrent access if another operation is already in progress for the service instance; try again after the current operation is completed.
● cf instance-events --delete - Lists all delete service instance events in the space.
The Custom Domain CLI plugin provides functions for creating private keys and certificate signing requests, as
well as additional commands for managing your custom domains.
To use the functionality of the Custom Domain plugin, use the extended Cloud Foundry commands in the
command-line interface.
Related Information
Use the Custom Domain CLI plugin to configure and manage your custom domains.
Prerequisites
Install the Cloud Foundry command-line interface (CLI) for the plugin to work. You can find the installation
instructions here: Download and Install the Cloud Foundry Command Line Interface [page 1769].
Procedure
1. Download the latest version of the plugin that is compatible with your operating system. On the Web page,
you'll find the plugin under the SAP Cloud Platform Cloud Foundry CLI plugins section with the name
Custom Domain.
2. Untar or unzip the downloaded archive.
3. Open the command-line interface or terminal.
4. To install the plugin, enter the following command, replacing <path_to_plugin> with the path to the extracted plugin binary:
cf install-plugin <path_to_plugin>
Note
If you are reinstalling the plugin, first uninstall the previous version using: cf uninstall-plugin "Custom Domain"
5. Verify that the plugin has been installed successfully by entering cf plugins.
You see a list of plugins that now includes Custom Domain. The output also displays commands that are specific to the Custom Domain plugin.
Results
You have installed the Custom Domain plugin for the Cloud Foundry CLI and can now use the extended
commands that are available from the Custom Domain service.
Related Information
Download and Install the Cloud Foundry Command Line Interface [page 1769]
Configuring Custom Domains [page 1817]
The Custom Domain plugin includes commands that you can use to configure and manage your custom
domains.
Most commands in this reference accept the following recurring options:
● --verbose - Show verbose information.
● --force - Force the operation (for example, key creation or deletion, uploading the certificate chain, or deactivation of client authentication) without confirmation.
● --skip-ssl-validation - Do not attempt to validate the SSL certificate.
One entry from the reference:
cf custom-domain-delete-key (alias: cddk)
Usage: cf custom-domain-delete-key KEY [options]
Delete the private key and its certificates.
Related Information
Use the cf create-space command to create spaces in your Cloud Foundry organization using the Cloud
Foundry Command Line Interface (cf CLI).
Prerequisites
● (Enterprise accounts only) Create at least one subaccount and enable the Cloud Foundry environment in
this subaccount. For more information, see Create a Subaccount in the Cloud Foundry Environment [AWS,
Azure, or GCP Regions].
Note
In a trial account, the Cloud Foundry environment is automatically activated, and a first space named
dev is created.
● You must be assigned the Org Manager role in the organization in which you'll create a space. For more information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
1. Enter the following string, specifying a name for the new space, the name of the organization, and the quota you'd like to assign to it:
cf create-space <space name> -o <org name> -q <quota name>
Note
If you are assigned to only one Cloud Foundry organization and space, the system automatically targets
you to the relevant Cloud Foundry organization and space when you log on.
You can use the Cloud Foundry Command Line Interface (cf CLI) to add organization members and assign roles
to them.
Prerequisites
Note
You automatically have the Org Manager role in a subaccount that you created.
Procedure
Enter the following string, specifying the user name, the name of the organization, and the role:
cf set-org-role <user name> <org name> <role>
Next Steps
To remove an org role from a user, enter the following string, specifying the user name, the name of the organization, and the role:
cf unset-org-role <user name> <org name> <role>
You can use the Cloud Foundry Command Line Interface (cf CLI) to add space members and assign roles to
them.
Prerequisites
Procedure
Enter the following string, specifying the user name, the name of the organization, the name of the space, and the role:
cf set-space-role <user name> <org name> <space name> <role>
Next Steps
To remove a space role from a user, enter the following string, specifying the user name, the name of the organization, the name of the space, and the role:
cf unset-space-role <user name> <org name> <space name> <role>
You can use the Cloud Foundry Command Line Interface to create space quota plans.
Prerequisites
● The Org Manager role for the org that contains the spaces to which you want to assign quotas.
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
Open a command line and enter the following string, replacing QUOTA with the name for your space quota plan:
cf create-space-quota QUOTA
You use the Cloud Foundry Command Line Interface to assign the quotas available in your global account to
your subaccounts.
Prerequisites
● The Org Manager role for the org that contains the spaces to which you want to assign quotas.
Procedure
Change space quota plans in the Cloud Foundry environment using the Cloud Foundry command line interface
(cf CLI).
Prerequisites
● You have the Org Manager role in the organization in which you'd like to change space quota plans. For
more information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to your Cloud Foundry instance. For more information, see
Download and Install Cloud Foundry Command Line Interface and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
1. (Optional) Enter the following string to identify the names of all space quota plans available in your org:
cf space-quotas
For more information about managing space quota plans, see https://docs.cloudfoundry.org/adminguide/quota-plans.html#space .
Related Information
Use the cf create-org command to create organizations in your Cloud Foundry subaccount using the Cloud
Foundry Command Line Interface (cf CLI).
Prerequisites
● (Enterprise accounts only) Create at least one subaccount. For more information, see Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
Note
In a trial account, the Cloud Foundry environment is automatically activated, and a first space named
dev is created.
● You must be assigned the Org Manager role in the organization in which you'll create an organization. For
more information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
Enter the following string, specifying a name for the organization and the quota you'd like to assign to it:
cf create-org <org name> -q <quota name>
Use the cf delete-org command to delete organizations in your Cloud Foundry subaccount using the Cloud
Foundry Command Line Interface (cf CLI).
Prerequisites
● You must be assigned the Admin role in the organization in which you'll delete an organization. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
Enter the following string, specifying the name of the organization you want to delete:
cf delete-org <org name>
Use the cf delete-space command to delete spaces in your Cloud Foundry organization using the Cloud
Foundry Command Line Interface (cf CLI).
Prerequisites
● You must be assigned the Org Manager role in the organization in which you'll delete a space. For more information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and
Install the Cloud Foundry Command Line Interface [page 1769] and Log On to the Cloud Foundry
Environment Using the Cloud Foundry Command Line Interface [page 1770].
Procedure
Enter the following string, specifying the name of the space and the name of the organization:
cf delete-space <space name> -o <org name>
Learn more about the different application operations that you can perform in the Cloud Foundry environment.
Related Information
You can use the cockpit to deploy a new application in the Cloud Foundry environment.
Procedure
1. In your Cloud Foundry subaccount, navigate to the space where you would like to deploy your application.
2. On the Applications page, choose Deploy Application.
3. Choose the location of the file which contains your application.
4. (Optional) If you would like to use a manifest, choose the location of your manifest.yml file.
5. (Optional) If you don't want to use a manifest, untick the Use Manifest box and enter the application details.
a. Enter a name for your application.
b. (Optional) Edit the amount of memory and disk space available to each instance of your app, as well as
the number of instances.
Note
By default, each instance of a new app is assigned 1024 MB of memory and 512 MB of disk quota
and each app starts with 1 instance. If you require more or less resources, edit the prefilled fields in
the form to suit your needs.
c. (Optional) If you don't need a route for your app, tick the No Route box.
Note
When you enter an app name, the Host field is automatically filled with the same name. You can
make changes to it or leave it as is. After deciding on a host and domain, you can see a preview of
your final application route at the bottom of the form.
6. Choose Deploy.
Results
The file containing your new application is uploaded and your application is deployed.
Related Information
You can start and stop applications in the Cloud Foundry environment to control whether they can be accessed
by end users.
Prerequisites
Context
A user can only reach your application if the application is started. The application routes do not work when the
application is stopped.
The first start of the application occurs when you deploy the app, if enough quota is available in the space.
Page Actions
Applications page: For the application that you'd like to start or stop, choose the respective icon from the Actions column:
○ (Start) - Starts the first instance of your application and makes it available to users.
○ (Stop) - Stops all running instances of the application.
Note
When you start or stop an application from this page, the state shown in the table changes. However, this is not the actual state of the application, but the state that has been requested through your action. To check the actual state of the application, navigate into it by choosing the application name from the table.
Note
The Cloud Foundry restage action cannot be performed from the cockpit. If you'd like to restage your application, you must use the CF CLI.
To learn more about restarting and restaging applications in Cloud Foundry, see https://docs.cloudfoundry.org/devguide/deploy-apps/start-restart-restage.html .
Related Information
To increase the availability of your Cloud Foundry application, you can run multiple instances of it.
Prerequisites
You must have one application deployed in your Cloud Foundry space.
Procedure
Related Information
By default, all applications running on SAP Cloud Platform are accessed on the default landscape domain.
According to your needs, you can change the default application URL by configuring additional application
domains.
There’s no default URL available in China; therefore, you can’t deploy an application without configuring a custom domain first. Refer to the related information link on how to use custom domains.
Custom Domains
Use custom domains to reach your applications on a domain that's different from the default, for example,
https://subdomain.mydomain.com instead of https://myapp.cfapps.eu10.hana.ondemand.com.
You can configure custom domains using the Cloud Foundry command-line interface with the plugin for
custom domains.
Application Routes
If you want to make your application reachable on another route, you can add additional routes using the Cloud Foundry CLI. If your application is available under the route https://myapp.cfapps.eu10.hana.ondemand.com, you could add a route such as https://expenses.cfapps.eu10.hana.ondemand.com that leads to the same application.
Routes are the URLs that enable your end users to reach your application.
The Router component in Cloud Foundry is responsible for routing requests. It maintains a list of mapped applications and compares each request with the list to find the best match. Based on this comparison, it then routes requests to the appropriate application instance.
Routes belong to a space, and therefore are managed at space level. Currently, you can create, map, and delete
HTTP routes. An HTTP route includes a domain, a host name (or subdomain), and an optional path. You can
only map a route to an application that belongs to the same space.
It is possible to map a single route to multiple applications, as well as multiple routes to a single application.
The number of routes you can create in a space depends on your subaccount entitlements and quotas, or on the quota plan assigned to that space (if such a quota plan exists). The maximum number of routes is determined through the Application Runtime quota: 1 GB of Application Runtime comes with 10 routes.
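The relationship between the Application Runtime quota and the route limit is simple arithmetic, sketched here for illustration:

```shell
#!/bin/sh
# Maximum number of routes in a space, given the Application Runtime quota in GB:
# each GB of Application Runtime comes with 10 routes.
max_routes() { echo $(( $1 * 10 )); }

max_routes 1   # prints 10
max_routes 4   # prints 40
```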
Related Information
In the SAP Cloud Platform cockpit you can configure the URLs through which end users can reach your
applications.
Context
Routes belong to a space but they are globally unique, regardless of the organization that controls a space. If a
route with a URL exists, you cannot create a route with the same URL.
Procedure
Parameter Details
Host Name: This is your desired subdomain. In the URL, it is added before the selected domain, as follows:
https://<host name>.<domain>
Path: In addition to the domain and subdomain, you can also add a path. You can use paths if you want to create routes for multiple applications available for the same host name and domain. The path becomes part of the URL as follows:
https://<host name>.<domain>/<path>
You can see the preview of your route at the bottom of the dialog.
Next Steps
Once you have created a route, you must map it to your application. Additionally, you also have the option to
bind it to a route service instance, by choosing (Bind Route Service) from the Actions column.
Once a route has been created, you can map it to an application to make this application reachable for end
users.
Prerequisites
You have at least one route and one deployed application in the same Cloud Foundry space.
Procedure
Results
Your application can now be accessed via the route mapped to it. You can launch your mapped route from two different places in the cockpit:
● Routes page:
Choose (Launch Route) from the Actions column.
● Overview page of the mapped application:
Choose the URL from the Application Routes section.
Related Information
SAP Cloud Platform allows subaccount owners to make their SAP Cloud Platform applications reachable and
secure via a custom domain that is different from the default domain – for example,
subdomain.mydomain.com.
See the following use cases for more information about configuring and managing custom domains:
The SAP Cloud Platform Custom Domain service lets you configure your own custom domain to publicly expose your SAP Cloud Platform application instead of using the default hana.ondemand.com domain.
Environment
Overview Graphic
The following graphic illustrates the process of obtaining a custom domain certificate.
Use the Custom Domain service if you want to securely expose your own developed application or a SaaS application under a different domain than the default one.
To learn more about the prerequisites, see the following sections.
Tools
You need the Cloud Foundry CLI and the Custom Domain CLI plugin to use the Custom Domain service. For
more information, see Custom Domain Plugin for the Cloud Foundry Environment [page 1789]. Always keep the
Custom Domain CLI plugin up-to-date.
Restrictions
You won't receive a warning from SAP Cloud Platform or the Custom Domain service if one of your certificates is about to expire. To make sure that a secure connection to your applications is maintained, use a certificate life cycle management tool to monitor your certificates.
This section provides information on the initial setup of the Custom Domain service in the Cloud Foundry
environment.
To use the Custom Domain service, you have to complete some initial steps. Follow these steps before using the service, and review the prerequisites section.
1. Make sure that the Custom Domain service is entitled to your subaccount, see the related link for more
information.
2. Install the Custom Domain plugin for the Cloud Foundry CLI, see the related link for more information.
3. Create a service instance of the Custom Domain service.
1. Create the Service Instance with the Cloud Foundry CLI [page 1815]
2. Running on the AWS, Azure, or GCP regions:
Create the Service Instance with the SAP Cloud Platform Cockpit [AWS, Azure, or GCP Regions]
4. Check the prerequisites section, see the related link for more information.
Related Information
To use the custom domain service for your applications, use the Cloud Foundry CLI to create a service instance
for your Cloud Foundry organization.
Prerequisites
● Download and install the Cloud Foundry command-line interface (CLI), see Download and Install the Cloud
Foundry Command Line Interface [page 1769].
● Configure the entitlement for the Custom Domain service and assign it to the subaccount that will use the service, see Configure Entitlements and Quotas for Subaccounts [page 1756].
Context
If you want to use a service from the SAP Cloud Platform, you have to first create a service instance for that
service.
Procedure
1. Log on to your Cloud Foundry organization and space:
cf login
2. Create the service instance:
cf create-service INFRA custom_domains custom-domain-service
Note
INFRA is a fixed value for the service, custom_domains is a fixed value for the service plan, and custom-domain-service is an individual name that you can choose for the service instance. Assigning it a descriptive name helps you to distinguish it from your other service instances.
Note
You have to create the service instance only once for every organization. You don't have to bind this service
to an application.
Related Information
5.1.3.4.4.3 Prerequisites
Before configuring SAP Cloud Platform custom domains, you have to complete some preliminary steps and fulfill a number of prerequisites.
You have to come up with a list of custom domains and applications that you want to be served through them.
For example, you may decide to have three custom domains: test.mydomain.com, preview.mydomain.com
and production.mydomain.com – for test, preview and production versions of your SAP Cloud Platform
application.
After configuring your custom domains, you can securely reach your application, for example "myapp" under
the three domains. The result would be myapp.test.mydomain.com, myapp.preview.mydomain.com and
myapp.production.mydomain.com.
You can also choose a domain for different countries, for example: production.mydomain.com,
production.mydomain.jp, and production.mydomain.de.
Domain names are owned by the customer, not by SAP Cloud Platform. Therefore, you have to buy any custom
domain names from a registrar who sells domain names.
To make sure that your domain is trusted and all your application data is protected, you have to get an
appropriate TLS/SSL certificate from a Certificate Authority (CA). Determine the domains you want to be
protected by this certificate. One certificate can be valid for a number of domains and subdomains.
To make sure that a secure connection to your applications is maintained, use a certificate life cycle management tool to monitor your certificates. You won't receive a warning from SAP Cloud Platform or the custom domain service if one of your certificates is about to expire.
Ensure that you have the necessary authorizations within your Cloud Foundry account
To perform tasks like installing the custom domain CLI and creating custom domains within your Cloud
Foundry organization, you have to have Admin or Org Manager rights. To create certificate signing requests
and manage your custom domains, you must also have the Space Developer role. For an overview of roles and
permissions in the Cloud Foundry environment, visit: https://docs.cloudfoundry.org/concepts/roles.html#roles .
To make sure that your domain is trusted and all application data is protected, you must first set up secure
TLS/SSL communication. Then, make your application reachable via the custom domain and route traffic to it.
Prerequisites
● Download and install the Cloud Foundry command-line interface (CLI), see Download and Install the Cloud
Foundry Command Line Interface [page 1769].
Procedure
Using custom domains with server authentication lets you establish secure communication between clients
and your application.
Prerequisites
There are several prerequisites for creating custom domains. See Prerequisites [page 1816] for additional
information.
Procedure
To make your applications reachable and secure under your own domain, use the Cloud Foundry CLI to create
your custom domains.
Procedure
cf login
Note
You must create each custom domain separately; that is, you cannot create multiple domains with a
single command.
cf domains
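The procedure above can be sketched as one CLI session. This is only an illustration: the organization name and domains are placeholders, and cf create-domain (the command this guide later names as the one that creates a domain) is assumed to be the creation step, run once per domain as the note requires:

```
# Log on to your Cloud Foundry organization and space.
cf login

# Create each custom domain separately; multiple domains cannot be
# created with a single command.
cf create-domain <myorg> test.mydomain.com
cf create-domain <myorg> preview.mydomain.com
cf create-domain <myorg> production.mydomain.com

# Verify that the new domains are listed.
cf domains
```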
Use the Cloud Foundry CLI to create a private key and a certificate signing request (CSR) to obtain a certificate
for your custom domains from a trusted certificate authority.
Context
Depending on what kind of Subject Alternative Name (SAN) you want to add to the CSR, you should either use
a wildcard (*.<subdomain.mydomain.com>) or the application hostname
(<myapp.subdomain.mydomain.com>) in your commands.
Procedure
1. Create a private key and certificate signing request (CSR) for one or multiple custom domains:
Caution
Every custom domain that uses that key must be added here. There is no option to add a custom
domain to an existing key at a later point in time.
Restriction
○ Only the following parameters are supported. Only the CN parameter of the command is required;
the other parameters are optional.
○ CN = Common Name - a subject for the certificate request, in this example: mycloud.
○ OU = Organizational Unit, for example: IT Department.
○ O = Organization - company name, for example: SAP.
○ L = Locality - city full name, for example: Portsmouth.
○ ST = State - state or province name, for example: Hampshire.
○ C = Country - two-letter code, for example: GB.
Note
○ The domains that you specify in the command are added as a Subject Alternative Name to the
certificate signing request.
○ The name of the key has to be unique and is later used to activate your custom domains.
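For comparison only, a CSR with the same subject fields and Subject Alternative Names can be produced with plain OpenSSL (version 1.1.1 or later for the -addext option). The key and file names and the domains are placeholders; for the custom domain service itself, use its own CLI command as described above:

```shell
# Create a private key (mykey.key) and a CSR (mycsr.pem) whose subject uses
# the parameters listed above (only CN is required) and whose Subject
# Alternative Names carry the custom domains.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout mykey.key -out mycsr.pem \
  -subj "/CN=mycloud/OU=IT Department/O=SAP/L=Portsmouth/ST=Hampshire/C=GB" \
  -addext "subjectAltName=DNS:*.subdomain.mydomain.com,DNS:myapp.subdomain.mydomain.com"

# Inspect the request to confirm subject and SANs before sending it to the CA.
openssl req -in mycsr.pem -noout -text
```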
Results
You have now successfully created a private key and a certificate signing request. You can find the .pem file in
the folder where you executed the command. Send this file to a trusted certificate authority of your choice to
get it signed.
Use your signed certificate to activate secure communication between your application and clients.
Prerequisites
Create a certificate signing request, and verify that you have a signed certificate from a trusted certificate
authority.
Procedure
Note
○ mykey refers to the key that you created for your custom domain in the previous steps.
○ If your certificate authority sends you a PKCS#7 file in the PEM format (file content begins with
"-----BEGIN PKCS7-----"), run the following command to convert it to the required PEM-encoded
X.509 certificate list: openssl pkcs7 -in <certificatechain.pem> -inform PEM -out
<certificatechain.pem> -outform PEM -print_certs.
○ If you do not have OpenSSL, you must download and install it. You can find a download link at:
https://www.openssl.org/source/
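The conversion in the note can be tried end to end. In this sketch the PKCS#7 input is generated locally from a throwaway self-signed certificate, since only the last command is the one from the note; the output also goes to a separate file (certlist.pem) rather than in place:

```shell
# Setup only: create a self-signed certificate and wrap it in a PKCS#7
# structure, standing in for the file a certificate authority would send.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=demo"
openssl crl2pkcs7 -nocrl -certfile demo.crt -out certificatechain.pem

# The command from the note: convert the PKCS#7 file to a PEM-encoded
# X.509 certificate list.
openssl pkcs7 -in certificatechain.pem -inform PEM \
  -out certlist.pem -outform PEM -print_certs
```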
Restriction
○ We do not support uploading existing certificates including a private key (PFX, PKCS#12).
2. Verify that the certificate has been uploaded and assigned to your key:
cf custom-domain-show-certificates <mykey>
cf custom-domain-list
Note
After the custom domain status is listed as activated, it can still take until the next day for the
activation to be processed. The timezone depends on the region of your Cloud Foundry environment.
To route traffic to an application on your custom domain, you must also configure the Domain Name System
(DNS) that you use.
Context
For each custom domain you want to use, you must create a CNAME mapping from the custom domain to its
Cloud Foundry domain. This mapping is specific for each domain name provider you are using. Usually, you can
modify CNAME records using the administration tools available from your domain name registrar.
Procedure
1. Sign in to the administrative tool of your domain name registrar and find the place where you can update
the domain DNS records.
2. Locate and update the CNAME records for your custom domain to point to your Cloud Foundry API. You can
look up the API by executing the command:
cf api
If your custom domain is, for example, subdomain.mydomain.com, create a CNAME record with the name
*.subdomain and your Cloud Foundry API endpoint as the alias. After this configuration is active, you can
route applications to subdomain.mydomain.com.
Note
If your provider doesn't allow creating a CNAME record with a wildcard, you have to create a single
entry for each application under your custom domain, for example
myapp.subdomain.mydomain.com, myapp2.subdomain.mydomain.com and so on.
○ On Linux or macOS, open a terminal and type:
For a wildcard entry:
dig <*.subdomain.mydomain.com>
For an application hostname:
dig <myapp.subdomain.mydomain.com>
If you see your custom domain pointing to your Cloud Foundry domain under ANSWER SECTION, the
DNS is configured.
○ On Windows, open the command-line tool and type:
For a wildcard entry:
nslookup <*.subdomain.mydomain.com>
For an application hostname:
nslookup <myapp.subdomain.mydomain.com>
If you see your custom domain under Aliases, the DNS is configured.
Note
Depending on your provider, it may take several hours for the changes to take effect. For further details,
consult your domain name registrar documentation.
Depending on what kind of application you want to map to a custom domain, there are different steps to take.
Procedure
Scenario: You have deployed your own application on Cloud Foundry and want to run this application under a
custom domain.
Instructions: Map Your Own Application to a Custom Domain [page 1823].
Scenario: You have subscribed to a SaaS application and want to run this SaaS application under a custom
domain.
Instructions: Map a SaaS Application to a Custom Domain [page 1824].
To make your application reachable from your custom domain, use the Cloud Foundry CLI to map a route to
your application.
Prerequisites
Note
○ If you have already deployed your application, you probably specified a host name for it, either in
the manifest file or while pushing the app. By default, the name of the app serves as its hostname;
however, you can change it using the command map-route.
○ For more information about mapping and routes in the Cloud Foundry environment, visit:
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html
cf routes
To make your SaaS application reachable from a custom domain, use the custom domain CLI plugin.
Prerequisites
● Make sure that you’ve done the previous steps of the Creating Custom Domains with TLS/SSL Server
Authentication [page 1818] process.
● Subscribe to a SaaS application and write down the default URL of the SaaS application.
● Be an administrator in your global account.
● Be an administrator in the subaccount that subscribed to the SaaS application.
● Be a security administrator in the subaccount that subscribed to the SaaS application.
● Be a space developer in the subaccount that subscribed to the SaaS application.
Note
Please refer to the customer guide of your SaaS application to check if you must request an additional
SaaS route configuration. In any case, the following steps must be performed.
Note
○ It can take until the next day to activate the route. The timezone depends on the region of your
Cloud Foundry environment.
○ The default SaaS URL consists of the following elements: <subaccount-name>.<saas-app-
name>.cfapps.<region>.hana.ondemand.com, for example
https://subaccount.enterprise-messaging.cfapps.eu10.hana.ondemand.com
Restriction
You can't map a route for a domain that has only been shared with your organization. In this case, the
mapping has to be done in the organization where the domain has been created.
cf custom-domain-routes
Context
Whether your application can be securely reached on its custom domain depends on different factors, such as
the activation time of the custom domain. Therefore, it's important that you check your custom domain after
setting it up. Remember that certain steps during the setup process of your custom domain can take up to 24
hours, and that the DNS configuration can take several hours, depending on your provider.
Procedure
Open your browser and enter the custom domain address of your application. In this example, the address has
the following form: https://myapp.subdomain.mydomain.com.
Note
You can look up the hostname of your application and your custom domain by typing: cf routes.
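The same check can be scripted instead of using a browser; a minimal sketch, assuming your custom domain setup is already active (the address is a placeholder):

```
# -v prints the TLS handshake, so you can see which certificate is served;
# -I requests headers only, without downloading the application response body.
curl -vI https://myapp.subdomain.mydomain.com
```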
Use client authentication to make your application more secure and grant individual access to your custom
domains.
Prerequisites
Create a custom domain with TLS/SSL server authentication, see Creating Custom Domains with TLS/SSL
Server Authentication [page 1818].
Procedure
cf login
cf custom-domain-enable-client-authentication <trustedcertificates.pem>
"*.<subdomain.mydomain.com>" "<myapp.subdomain.mydomain2.com>" ...
cf custom-domain-show-trusted-certificates "*.<subdomain.mydomain.com>"
cf custom-domain-list
Note
After client authentication is listed as activated, it can still take until the next day for the
activation to be processed. The timezone depends on the region of your Cloud Foundry environment.
Use the Cloud Foundry command-line interface to share your custom domains across multiple Cloud Foundry
organizations, to make them available for other users.
Prerequisites
You must have either Admin or Org Manager rights for both your Cloud Foundry organization and
the Cloud Foundry organization that you want to share your custom domain with.
Context
When you create a domain with cf create-domain, the domain is private. If you want to make domains
available to users in other Cloud Foundry organizations, you have to share them.
Note
When you share a custom domain, you also share its configuration, for example, certificate-based server or
client authentication configuration. Although the configuration is shared, it can only be changed within the
organization in which it was created.
Procedure
cf login
cf share-private-domain <othercloudfoundryorganization>
<subdomain.mydomain.com>
In this case, othercloudfoundryorganization is a Cloud Foundry organization that you have Org
Manager rights to. If you don't have the necessary permissions, you get an error message saying
"FAILED Organization othercloudfoundryorganization not found".
Results
Users in the organization othercloudfoundryorganization are now able to map applications to the custom
domain subdomain.mydomain.com.
Use the Cloud Foundry CLI to manage your custom domains and complete tasks like updating a certificate.
Context
Use the Cloud Foundry CLI to deactivate custom domains that you temporarily no longer want to use. As a
result, a secure connection to applications that use those custom domains can be established only on their
default domain.
Procedure
cf login
cf custom-domain-deactivate "*.<subdomain.mydomain.com>"
"<myapp.subdomain.mydomain2.com>" ...
Caution
○ If you deactivate a custom domain with client authentication, make sure to download the list of
trusted certificates before you deactivate it. To do so, use the command: cf custom-domain-
show-trusted-certificates "*.<subdomain.mydomain.com>"
<trustedcertificates.pem>.
○ Deactivating custom domains also deactivates client authentication for those domains. When
reactivating a custom domain, client authentication is not automatically reactivated. To activate
client authentication, see Adding Trusted Certificates for Client Authentication to a Custom
Domain [page 1826].
cf custom-domain-list
Note
After the custom domain status is listed as deactivated, it can still take until the next day for the
deactivation to be processed. The timezone depends on the region of your Cloud Foundry environment.
Use the Cloud Foundry CLI to remove custom domains and certificates that you no longer want to use. As a
result, any applications that use those custom domains can be reached only on their default domain.
Prerequisites
Deactivate the custom domains that you want to remove: Deactivate Custom Domains [page 1829].
Procedure
cf login
cf custom-domain-delete-key <mykey>
Note
You must wait until the deactivation process has finished before you try to delete the key; otherwise,
you'll see the message: "Key cannot be deleted because it is still in use."
cf custom-domain-list
cf delete-domain <subdomain.mydomain.com>
Caution
○ Deactivate the custom domains that you want to delete before removing them. You must
delete each domain separately.
○ Deleting the custom domain will automatically delete the routes from applications to that domain.
If you used a custom domain for a SaaS application, you have to manually remove the route. To do
this, use the command: cf custom-domain-unmap-route <my-saas-
app.subdomain.mydomain.com>
Note
If you delete a domain that is used alongside other domains under one certificate, you can still use the
certificate for the remaining domains. If you wish to create a new certificate for the remaining domains,
cf domains
If a certificate for your custom domains expires or you want to replace a certificate, use the Cloud Foundry CLI
to upload and activate a new one that's based on a new certificate signing request.
Procedure
1. Create a new certificate signing request by following the steps in Create a Certificate Signing Request
[page 1819].
Note
To maintain high security standards, you must create a new key when updating a certificate.
The parameter mynewkey represents the new key that you created in the first step Create a Certificate
Signing Request [page 1819].
Note
○ If your certificate authority sends you a PKCS#7 file in the PEM format (file content begins with
"-----BEGIN PKCS7-----"), run the following command to convert it to the required PEM-encoded
X.509 certificate list: openssl pkcs7 -in <certificatechain.pem> -inform PEM -out
<certificatechain.pem> -outform PEM -print_certs.
○ If you don't have OpenSSL, you must download and install it. You can find a download link at:
https://www.openssl.org/source/
Note
Activating the domain with the new certificate and key automatically deactivates the old key and
certificate. The old key is not deleted.
cf custom-domain-list
Note
After the custom domain status is listed as activated, it can still take until the next day for the
activation to be processed. The timezone depends on the region of your Cloud Foundry environment.
cf custom-domain-delete-key <mykey>
cf custom-domain-list
Use the Cloud Foundry CLI to deactivate client authentication for specific custom domains. As a result, your
applications under those domains can be accessed without the need of client authentication.
Procedure
cf login
cf custom-domain-disable-client-authentication "*.<subdomain.mydomain.com>"
Note
You can't disable client authentication for more than one custom domain at a time using this
command.
cf custom-domain-list
Note
After client authentication is listed as disabled, it can still take until the next day for the
deactivation to be processed. The timezone depends on the region of your Cloud Foundry environment.
To manage client authentication, use the Cloud Foundry CLI to add or remove certificates to or from the list of
trusted certificates.
Procedure
cf login
cf custom-domain-show-trusted-certificates "*.<subdomain.mydomain.com>"
<trustedcertificates.pem>
Note
The file is downloaded to the folder where you executed the command, for example:
C:\Users\YourUsername.
cf custom-domain-enable-client-authentication
<updatedlistoftrustedcertificates.pem> "*.<subdomain.mydomain.com>"
Caution
This command overwrites the existing list of trusted certificates. The lists are not merged.
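Because the enable command replaces the whole list rather than merging, build the updated list locally before uploading it. A minimal sketch with placeholder file contents standing in for the previously downloaded list and the new certificate:

```shell
# Placeholder inputs: the downloaded trusted list and a new CA certificate
# to add (the real files come from the steps above and from your CA).
printf -- "-----BEGIN CERTIFICATE-----\nexisting\n-----END CERTIFICATE-----\n" > trustedcertificates.pem
printf -- "-----BEGIN CERTIFICATE-----\nnew\n-----END CERTIFICATE-----\n" > newclientca.pem

# Merge locally; the merged file is what you pass to
# cf custom-domain-enable-client-authentication.
cat trustedcertificates.pem newclientca.pem > updatedlistoftrustedcertificates.pem
```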
cf custom-domain-show-trusted-certificates "*.<subdomain.mydomain.com>"
cf custom-domain-list
Note
After client authentication is listed as activated, it can still take until the next day for the
activation to be processed. The timezone depends on the region of your Cloud Foundry environment.
Use the Cloud Foundry command-line interface to remove the custom domain service instance from your
Cloud Foundry organization.
Procedure
cf delete-service <custom-domain-service>
Caution
This command deletes your certificates, keys and client authentication configuration from your Cloud
Foundry organization. As a result, a secure connection to your applications can only be established
through their default domains.
Restriction
You can't perform any commands from the custom domain service during the deletion process.
Note
custom-domain-service is an example of the name that you gave your service instance.
Type cf services to look up the name that you gave the custom domain service instance.
cf services
Note
It can take until the next day to complete the deletion process. During the deletion process, the column
last operation shows "delete in progress". The timezone depends on the region of your Cloud
Foundry environment.
If the service instance is no longer listed under services, you have successfully deleted it
from your Cloud Foundry organization.
Related Information
Create the Service Instance with the Cloud Foundry CLI [page 1815]
This section provides information on troubleshooting-related activities for the Custom Domain service in the
Cloud Foundry environment.
Getting Support
If you encounter an issue with this service, we recommend that you follow the procedure below:
For more information about selected platform incidents, see Root Cause Analysis.
We also recommend that you regularly check the SAP Notes and Knowledge Base for component
BC-CP-CF-SEC-DOM in the SAP Support Portal. These contain information about program corrections and
provide additional details.
Here you can find a list of error messages that will lead you to a solution in the guided answers format:
Find out what functions of the Custom Domain service have been added or changed.
Learn about the different account administration and application operation tasks which you can perform in the
ABAP environment.
If you use the ABAP environment, you must perform the account administration tasks in the Cloud Foundry
environment. See:
The Identity and Access Management apps secure the access to your solution for your business users.
You use this app to provide business users with access rights and to maintain business user settings.
Business users gain access to Fiori apps through business roles. A business role can comprise one or several
business catalogs which in turn comprise several apps.
Key Features
● Desktop
● Tablet
Related Information
Business Catalogs for Identity and Access Management Apps [page 1851]
Context
You can change user data (for example, the user name) and regional settings (for example, the date and time
format).
Procedure
You can navigate from the Maintain Business Users app to the Maintain Business Roles app by
choosing a business role ID in the list of assigned business roles.
The user then sees the respective app tiles in the tile catalogs.
● Removing Role Assignments from Business Users
To remove a role assignment from a user, proceed as follows:
a. Select the business user and then the assigned business role.
b. Choose Remove above the list or Remove Business Roles at the bottom of the screen.
● Updating User Role Assignments
To update the assignments of roles to a user individually, edit the affected business user. To mass update
user role assignments, upload a CSV file.
Note
The CSV file needs to be UTF-8-encoded and must comply with the following pattern:
In the app, you are provided with a CSV template that you can download and use for filling in the user
role assignments. When uploading the CSV file, you can decide if you want to add the role assignments
to a user, or if you want to overwrite them completely.
Make sure that the CSV file is not open in another program before the upload.
You can use this app to create and edit business roles, add business catalogs to the roles, and maintain access
restrictions.
With the Maintain Business Roles app you define business roles by combining predefined business catalogs
and, if necessary, define value help, read and write access by maintaining the allowed values for fields. You use
business roles to control the access to your applications. The predefined catalogs contain the actual
authorizations that allow users to access apps and allow you to define instance-based restrictions where
necessary. Business catalogs bundle authorizations for a specific business area. Once you have created a
business role, you can assign it to multiple business users who perform similar business tasks.
Key Features
● Desktop
● Tablet
Context
You use business roles to control the access to your applications. To create a business role, you add one or
multiple business catalogs to it. These predefined catalogs contain the actual authorizations that allow users to
access apps and allow you to define instance-based restrictions where necessary. Business catalogs bundle
authorizations for a specific business area. Once you have created a business role, you can assign it to multiple
business users who perform similar business tasks.
Process Steps
Procedure
1. Select the tile of the Maintain Business Roles app on the SAP Fiori Launchpad to open the app. On the
initial screen, select New.
2. Add general role details, such as business role name, ID and description.
3. On the Assigned Business Catalogs tab, select Add to add the required business catalogs. Select the
catalogs according to the business activities that the users with this role need to perform. Select Apply.
Note
Some business catalogs require additional dependent catalogs to be assigned to enable access to
associated master data information (for example, for business partners or customers). These
additionally required catalogs are listed in the business catalog description and need to be selected as
well to ensure access to all business objects used with the SAP Fiori apps of the main catalog.
4. By default, the value help and read access for each business catalog is set to unrestricted and there is no
write access. If you want to change these restrictions, select Maintain Restrictions.
5. Maintain instance-based restrictions for all required business objects (following the requirements of your
local authorization concept).
6. On the Assigned Business Users tab, you can assign the business users to your new business role. These
users will receive the authorizations as defined in the business role.
7. Save the business role to activate it.
Note
If you go back to the business roles overview without saving the business role, the business role will
automatically be saved in a draft status. You can access it again and edit it from the business roles
overview.
Context
Instead of creating a business role from scratch, you can also create it from a business role template. A
business role template is defined by SAP to make it easier for you to find the business catalogs that might be
relevant for the corresponding role in your company. Usually, the business role templates have a very broad job
profile to show all options.
Note
SAP does not recommend using them in their full scope, as the business catalogs might even conflict from
a business perspective. Instead, adjust them to the tasks of the role in your company by choosing only the
relevant business catalogs.
If changes to the template were included in an upgrade, the Business Role Templates app informs you about
these changes and how they affect your business roles. For more information, see Business Role Templates
[page 1851].
When you create a role based on a template, by default the read and value help access are unrestricted, and
write access is not granted. You can change this and define for each field to which a catalog provides access
what a user is allowed to see and to edit. This allows you to shape your business roles in a very detailed way. For
example, you can create two roles that comprise the same catalogs, but role 1 is restricted to work with data for
the US, and role 2 is restricted to work only with data for Germany.
Process Steps
Procedure
1. Select the tile of the Maintain Business Roles app on the SAP Fiori Launchpad to open the app. On the
initial screen select Create From Template.
2. Select the required template. Define the ID and the name of the new business role and click OK.
3. A template already contains one or more business catalogs that will be assigned to the role. Adjust the
displayed template to your requirements, for example, change the general role details, and add or delete
catalogs.
4. By default the value help and read access for each business catalog is set to unrestricted and there is no
write access. If you want to change these restrictions, select Maintain Restrictions.
5. Maintain instance-based restrictions for all required business objects (following the requirements of your
local authorization concept).
6. On the Assigned Business Users tab you can assign the business users to your new business role. These
users will receive the authorizations as defined in the business role.
Note
If you go back to the business roles overview without saving the business role, the business role will
automatically be saved in a draft status. You can access it again and edit it from the business roles
overview.
Get an overview of how to create a business role for the Key User if you haven't used the system before.
Context
If you as a Key User haven't used the Fiori apps before, you have to create a Key User business role first.
Otherwise the app tiles are not visible on the Fiori Launchpad. With this role, you can create all other business
roles according to the needs of your company. To create a business role for the Key User, proceed as follows:
Process Steps
Procedure
1. Use the initial credentials that you have received separately, for example via email, to log in to the system.
On the SAP Fiori Launchpad, select the tile Maintain Business Roles to open the app.
2. Select Create From Template. The Create Business Role from Template window opens. In the field Template,
search for SAP_BR_ADMINISTRATOR. Define the ID and the name of the new business role.
3. A predelivered template contains a number of catalogs that in this case a Key User could need to perform
tasks. These catalogs are displayed on the Assigned Business Catalogs tab. By default for each catalog the
read access is unrestricted and there is no write access. If you want to change the restrictions, for example,
to allow unrestricted write access for all catalogs, proceed as follows:
a. Select Maintain Restrictions.
b. On the new screen, select the restriction you would like to change.
Under Write, select Unrestricted.
c. Select Back to Main Page to save the changes.
4. On the Assigned Business Users tab, all users that are assigned to the selected catalogs are listed. To add a
business user, proceed as follows:
a. Select Add.
Note
If you go back to the business roles overview without saving the business role, the business role will
automatically be saved in a draft status. You can access it again and edit it from the business roles
overview.
You can now log out from the system, log in with your own credentials, and proceed with creating further
business roles according to the needs of your company.
Context
The editing of an active business role takes place in a draft version of this business role. This draft version is
created by the system once you enter the edit mode of the active business role. Proceed as follows to edit an
active business role:
Procedure
A draft version of the business role is created in which the editing takes place. The active business role
remains available.
3. Make the required changes to the business role.
4. Save the business role.
Once the changes are saved, they are written into the active business role and the draft version disappears.
If the dialog for editing a business role is left without saving the business role, the active business role
as well as the draft version of the business role will be available in the business roles overview. In this case,
you can continue editing the draft version later.
Context
By maintaining restrictions you can define the subset of all existing business objects a user can view or edit
when working with this business role. The access restrictions allow you to differentiate your business roles on a
fine-grained level. When you specify for a business role that a certain object should not be visible or editable,
this applies to all apps included in this role.
If you assign multiple business roles that contain the same business catalogs, but different access restrictions,
all restrictions are aggregated to the business user.
Using the Maintain Business Roles app, you have the following options for maintaining restrictions:
Procedure
● On the Maintain Restrictions tab, you can maintain restrictions for business roles.
The business catalog defines which access categories are available for maintaining and for which fields
restriction values can be maintained. The following access categories can be available:
○ Value Help (value help access)
○ Read, Value Help (read access)
○ Write, Read, Value Help (write access)
The available restriction fields represent the authorization-relevant attributes of the business objects that
are used in a role and for which instance-based access restrictions for value help, read, and write access
can be granted (for example, for a particular sales area).
Depending on the required authorization concept for the business role, you can restrict data access for the
following separately:
○ Value Help
○ Read, Value Help
○ Write, Read, Value Help
Note
When you set an access category to Restricted and it is not included implicitly according to the rule
above, you either need to maintain a particular field value or you need to grant unrestricted access
("*") for this restriction field.
● Value Help
You can define access restrictions for value helps that are used in a business role. Leaving fields empty will
typically mean No Access for this restriction field.
These value help restrictions will not influence the defined restrictions for read access.
Switching the read or write access to Restricted allows you to define which data can be seen and edited by
the users assigned to this business role.
● In the Restrictions and Values section, you can define the instance-based restrictions for the desired
restriction fields used for value helps.
Example
For example, you have created the business roles 1 and 2 that both include the business catalog A. In business
role 1, you restricted the access rights for sales organization to A. In business role 2, you allowed to work with
data for all sales organizations. The business user to whom you assign both roles will then have full access to
the data for all sales organizations.
● Leading Restriction
For general organizational fields that are used in several different restriction types, you can define a
restriction as leading. That means the value in this field is automatically passed on to other restriction
types the field is used in as well.
Select the required field under Restrictions and Values > General and click the pencil icon. In the dialog
box that opens, select the values you want to define as leading and switch Leading Restriction on. These
values are then automatically transferred to the other restriction types the field is used in.
You want to, for example, define that the values for the country templates for Australia and Switzerland are
applied in all restriction types the Company Code field is used in. So you select the values AU01 (Country
Template AU) and CH01 (Country Template CH) and switch Leading Restriction on. Then these values are
automatically distributed to all occurrences of the Company Code field.
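A minimal sketch of this distribution mechanism, with hypothetical restriction-type and field names:

```python
def apply_leading_restriction(restriction_types, field, leading_values):
    """Distribute the leading values of a general organizational field
    to every restriction type in which that field occurs."""
    for rtype in restriction_types.values():
        if field in rtype:
            rtype[field] = set(leading_values)
    return restriction_types

# Company Code is used in two of three (hypothetical) restriction types;
# AU01 and CH01 are defined as leading values:
types = {
    "Pricing":  {"CompanyCode": set()},
    "Billing":  {"CompanyCode": {"DE01"}},
    "Shipping": {"Plant": {"0001"}},   # no Company Code field here
}
apply_leading_restriction(types, "CompanyCode", {"AU01", "CH01"})
```

Only the restriction types containing the Company Code field receive the leading values; others are untouched.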
Download business roles from the test system and then upload them to the productive system to make them
available there.
Context
You can download one or more business roles from the test system and then upload them to the productive
system using an XML file.
1. To download the required business roles, go to your test system and select them.
2. Click Download and save the XML file on your hard drive.
3. To upload the required business roles, go to your productive system and click Upload.
4. Browse for the XML file and click OK.
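If you want to inspect a downloaded file before uploading it, a small script can list the contained roles. The element and attribute names below are illustrative only and do not reflect the actual SAP export schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of a downloaded business-role file:
SAMPLE = """<BusinessRoles>
  <BusinessRole id="Z_SALES_REP"/>
  <BusinessRole id="Z_ACCOUNTANT"/>
</BusinessRoles>"""

def list_role_ids(xml_text):
    """Return the role IDs contained in an exported XML file, e.g. to
    verify the download before uploading it to the productive system."""
    root = ET.fromstring(xml_text)
    return [node.get("id") for node in root.iter("BusinessRole")]

print(list_role_ids(SAMPLE))  # ['Z_SALES_REP', 'Z_ACCOUNTANT']
```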
You can use this app to display restriction types and their validity.
With this app you can display restriction types, the assigned fields, and in which business catalog the
restriction type is used.
Key Features
Restriction types bundle one or more restriction fields. The restriction type Organizational Area, for example,
contains the following fields: Purchasing Group, Purchasing Organization, Division, Sales Organization,
Distribution Channel.
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-IAM.
This app shows all technical users that exist in the system.
With this app you can display technical users, such as services that are used to automate technical tasks in
the system, for example, a print queue user who pulls print jobs remotely. In addition, the service and support
users of the software provider or hosting provider are technical users.
Key Features
● Lock or unlock the following types of technical users: Print users, communication users, and initial user
(SAP_CUST_INI) that comes with the new system
● Change user name and password of some types of technical users
● Desktop
● Tablet
With this app you can get an overview of the business catalogs, their status (for example Deprecated), and their
usage within business roles. You use this application to see which applications and business catalogs delivered
by SAP have changed, for example after an upgrade. They may also be deprecated. As a key user, you need to
have an overview of these changes to adapt the affected business roles accordingly.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-IAM.
Context
Due to ongoing development of new features and new apps, we need to periodically revise existing business
catalogs. This means that some business catalogs are deprecated and replaced by new ones, and you need to
assign business roles and business users to these new catalogs.
Rather than disappearing, deprecated business catalogs are identified as being obsolete, which allows you to
identify them at a glance. You can also check how many deprecated business catalogs you still have in use with
the Business Catalogs app. This app lets you change assignments from the old, deprecated business catalogs
to the new, active catalogs quickly and easily.
Note
Some business catalogs might be redesigned in each release. Please check the assignments for your
business roles and business users and make the necessary changes to the assignments as soon as
possible.
Procedure
1. In the Business Catalogs app, check how many deprecated business catalogs you still have in use.
You can filter the list of catalogs for the deprecated ones, but the deprecated business catalogs are also
marked with the suffix (obsolete) in the list of all catalogs in use.
Example: A business catalog is deprecated with a Cloud 1811 release. The business catalog will then be
deleted with the Cloud 1905 release.
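The filtering described in step 1 can be sketched as follows; the list of catalogs in use is a made-up example, and the "(obsolete)" suffix is the marker described above:

```python
def deprecated_catalogs(catalogs_in_use):
    """Deprecated catalogs carry the suffix '(obsolete)' in the list of
    all catalogs in use, so they can be identified at a glance."""
    return [c for c in catalogs_in_use if c.endswith("(obsolete)")]

in_use = ["SAP_CORE_BC_IAM (obsolete)", "SAP_CORE_BC_IAM_UM",
          "SAP_CORE_BC_IAM_RM", "SAP_CORE_BC_IAM_RA"]
print(deprecated_catalogs(in_use))  # ['SAP_CORE_BC_IAM (obsolete)']
```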
With this app you can get an overview of business users in your system and what roles and restrictions are
assigned to them.
With this app you can display information about the usage of business roles, business catalogs, business users
and restrictions, and how they are related. For example, you can use this app to check if a business user is
using a particular app and to check which authorizations he or she has.
If you want to look up more information about a business role, business user, business catalog, business role
template or restriction, you can jump directly to the respective app by clicking the entity.
You can use this app as part of your daily work as SAP S/4HANA Cloud administrator.
Key Features
● Check the usage of the following entities and how they are related: Business role, business role template,
application, business user, business catalog, restriction
● Check, for example, which business roles are assigned to a business user, which business catalogs and
restrictions are therefore assigned to the business user and to which applications a user has access. You
can also download a list of business users and business catalogs.
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-IAM.
You can use this app to get an overview of the business role templates delivered by SAP.
With this app you can get an overview of the delivered business role templates, any changes included in
upgrade, and whether you need to adapt your business roles to these changes. For example, after an upgrade,
you can check if the business role templates have changed and are therefore different from the existing
business role - a new business catalog might have been added or an existing catalog replaced by another. You
can see which business roles were affected by the changes and adapt them if required.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-IAM.
You assign business catalogs to business roles that are assigned to business users. Business catalogs contain
authorizations that define what a business user with a certain business role is allowed to do. Currently, four
business catalogs are supported.
● Identity and Access Management (SAP_CORE_BC_IAM): All authorizations for the IAM apps. This business
catalog was set to obsolete due to potential SoD (segregation of duties) conflicts. Please use the other three
catalogs listed below.
● User Management (SAP_CORE_BC_IAM_UM): Maintain Business Users. Not possible to assign business
roles to the business users.
● Role Management (SAP_CORE_BC_IAM_RM): Maintain Business Roles. Not possible to assign business
roles to the business users.
● Role Assignment (SAP_CORE_BC_IAM_RA): Maintain Business Users. Only possible to assign business
roles to the business users. Not possible to change the user data.
Terminology Overview
Business User Management: Enables and supports the lifecycle maintenance of business users.
For information about the blocking of business partner data, see the Business Partner documentation on SAP
Help Portal at SAP S/4HANA Cloud Master Data Business Partner/Customer/Supplier Scheduling
Block Business Partners .
Prerequisites
The role SAP_BCR_HCM_EMPLOYEE_MD_PC is assigned to the relevant user ID. The assignment enables the
Maintain Employees application in the launchpad. Use the Maintain Business Users application to assign the
role to a user ID.
With this app you can create employees and modify employee information.
Key Features
● Create employees
● Modify employee information such as personal data (Last Name, First Name, and E-Mail), and employee
data (Employee ID, Valid From, Valid To)
● Display the changes that have been made to an employee
● Search for employee details providing the employee ID
● Mass upload employees
● Desktop
● Tablet
● Smartphone
If you need support or experience issues, please report an incident under component CA-GTF-BUM.
Context
If you delete an employee in Maintain Employees, it is not physically deleted from the database but is marked
for archiving. This is in contrast to the Fiori app Maintain Business Users, where the user is physically deleted.
An employee cannot be deleted immediately for business process reasons. It is also not possible to delete an
employee in the Fiori app Maintain Employees, if the employee has a user assigned to it. In this case, you have
to delete the user in the Fiori app Maintain Business Users first.
By default, all employees that are marked for archiving (deleted) are hidden. You can unhide them and also
undo the mark (deletion) to reuse the employee. To do so, follow these steps:
Procedure
You may have to change the settings of the table to add the optional column Deleted.
5. Choose a deleted employee to get to the Display Employee view.
6. Choose Undo Deletion.
You can now use the employee again, for example, to create a new user for it.
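The archiving-style deletion described above can be sketched as a minimal model (hypothetical data structures, not an SAP API):

```python
class Employee:
    """Minimal model: deleting marks the employee for archiving
    instead of physically removing the record."""
    def __init__(self, emp_id):
        self.emp_id = emp_id
        self.deleted = False   # 'marked for archiving' flag; hidden by default
        self.user = None       # assigned business user, if any

def delete_employee(emp):
    """An employee with an assigned user cannot be deleted; the user
    must be deleted first (in Maintain Business Users)."""
    if emp.user is not None:
        raise ValueError("delete the assigned business user first")
    emp.deleted = True

def undo_deletion(emp):
    """Undo the mark so the employee can be reused."""
    emp.deleted = False
```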
5.2.1.3 Security
With this app you can maintain a list of trusted certificates. If certificates of communication partners are
classified as trusted, outbound communication among these partners can be enabled.
With this app you can monitor all available trusted certificates.
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
With this app you can define secure applications by adding trusted hosts to the clickjacking protection
whitelist. By default, clickjacking protection is active to protect your systems against malicious clickjacking. If
the system is embedded into another application, the check determines whether the other application is
secure. To add trusted hosts, you have to enter specific details, such as schema and port, for each trusted host
to make sure that malicious hosts are identified and prevented from calling your system.
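A sketch of the kind of check the whitelist enables, assuming hypothetical trusted-host entries with schema, host, and port:

```python
from urllib.parse import urlsplit

# Hypothetical trusted-host entries, mirroring the details (schema,
# host, port) that the whitelist app asks for:
TRUSTED_HOSTS = {("https", "portal.example.com", 443),
                 ("https", "launchpad.example.com", 8443)}

def is_trusted_framing_host(origin_url):
    """Allow embedding only if schema, host, and port all match an entry."""
    parts = urlsplit(origin_url)
    port = parts.port or {"https": 443, "http": 80}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port) in TRUSTED_HOSTS

print(is_trusted_framing_host("https://portal.example.com/page"))  # True
print(is_trusted_framing_host("http://evil.example.net"))          # False
```

Requiring an exact match on all three details is what prevents a malicious host from calling your system.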
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
With this app you can display the list of apps that are available for download. This helps you to better
integrate your apps with other programs you need for your daily business.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-CCM-PRN-OM-PQ.
Context
To better integrate your apps with other programs you need for your daily business, proceed as follows to
download additional software.
Procedure
● To install additional software, click the relevant icon and follow the instructions on the download site.
By selecting the required link from the list you will be redirected to a download service (SAP ONE Support
Launchpad).
Note
The communication management apps allow you to integrate your system or solution with other systems to
enable data exchange.
Prerequisites
● Predefined communication scenarios are available for different use cases, for example the integration for
employee data. Decide which scenario you are going to use to create a communication arrangement.
Process Overview
The communication management apps allow you to establish secure communication between your solution
and other systems. The best practice to organize efficient data exchange is to proceed as follows:
Related Information
You can use this app to get an overview of available communication scenarios.
With this app you can display communication scenarios used for integrations, and the scenario status.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
You can use this app to create and edit communication users. Communication users are used by solutions to
authenticate themselves to be able to post data.
With this app you can manage communication users for different integration scenarios with other solutions. A
communication user enables the integration with other solutions. To be able to post data, the solutions have to
authenticate themselves with the user and password you create here. The communication users are assigned
to the communication system you want to use.
Key Features
● Create a user
● Edit a user
● Lock or unlock a user
● Delete a user
● Display communication systems that use the selected communication user
● Display communication arrangements for the systems that use the selected communication user
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
Context
Procedure
Note
You can now assign the created user to a communication system. Use the Communication Systems app for
this purpose.
Related Information
You can use this app to create and edit communication arrangements. Communication arrangements help you
to configure the electronic data exchange between the system and a communication partner.
With this app you can create and edit communication arrangements that your company has set up with a
communication partner. The system provides communication scenarios for inbound and outbound
communication that you can use to create communication arrangements. Inbound communication defines
how business documents are received from a communication partner, whereas outbound communication
defines how business documents are sent to a communication partner. The communication scenario
determines the authorizations, the inbound and outbound services, and the supported authentication methods
that are required for the communication.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
Prerequisites
● You have already created a communication system with inbound and outbound users. For more
information, see section Maintain Communication Systems.
● You have already created a communication user with a supported authentication type that is defined in the
selected communication scenario. For more information, see section Maintain Communication Users.
Context
Process Steps
Procedure
1. Open the Maintain Communication Arrangements app from the SAP Fiori Launchpad. Already existing
communication arrangements are listed on the initial screen.
2. Select New. In the New Communication Arrangement window, select a communication scenario and define
the arrangement name. Select Create.
3. The Communication Arrangements screen opens. Under Common Data, in the Communication System
field, select a communication system that you want your system to connect to.
4. If the selected communication system already has a communication user for inbound communication, the
user and required authentication type will be displayed in the field User Name under Inbound
Communication. In the Inbound Services section, the URLs to the service endpoints are displayed.
5. If the selected communication system already has a communication user for outbound communication,
the user and required authentication type will be displayed in the field User Name under Outbound
Communication.
6. Save the arrangement.
You can use this app to create communication systems. Communication systems are created to enable the
communication among different systems.
With this app you can create new communication systems that you can later use to establish communication
arrangements. To enable communication between different systems, you have to register these systems in the
Communication Systems app. The communication system represents the communication partner within a
communication. For inbound communication, this is the system that calls services provided by your system.
For outbound communication, this is the system that provides services called by your system.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
Related Information
Context
Procedure
○ General Data
○ Technical Data
Here you can define alternative IDs for the communication system: the logical system ID that is
required for IDoc communication and the business system ID that identifies the communication
system in your system landscape.
Note
○ Destination Service
Because SAP Cloud Platform ABAP Environment uses destination services of SAP Cloud Platform for
outbound communication, you need to enter the name of the required destination service. For more
information, see Set Up the Destination Service. You also need to select the instance the destination
service is based on. For more information, see Creating the Service Instance for the ABAP
Environment.
○ Contact Information
4. Add technical users for inbound and/or outbound communication. You can either select a user from the list
or create a new user. If you decide to create a new user, you will be redirected to the app Maintain
Communication User.
Note
Inbound users are communication users provided by your system and are used by the communication
system to call the inbound services. Outbound users are communication users provided by the external
communication system to call the outbound services.
You can now establish a communication arrangement with the created system. Use the Maintain
Communication Arrangements app for this purpose.
You can use this app to create SAP Cloud Platform extensions to build extension applications.
With this app you can create SAP Cloud Platform Extensions for your SAP S/4HANA system. These
automatically connect the two systems and enable you to build extension applications for SAP S/4HANA on
SAP Cloud Platform.
Key Features
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-SRV-APS-COM.
Related Information
Triggering an Automated Integration Between SAP S/4HANA Cloud and SAP Cloud Platform
Terminology Overview
Term Description
Purpose
The Custom Code Migration app enables you to analyze custom code that needs to be migrated from an
SAP Business Suite system to SAP S/4HANA (on-premise). To evaluate the custom objects to be adopted, it
performs the SAP S/4HANA custom code checks.
In addition, this app supports you with identifying unused custom code based on your collected usage data.
This enables you to remove unused custom code during a system conversion to SAP S/4HANA.
Key Features
Scoping:
● As a user, you define which ABAP custom code is to be taken over to SAP S/4HANA
(based on usage data)
● This app creates a deletion transport in order to delete unused ABAP source code during the system
conversion to SAP S/4HANA
Access Information
● For information about how to provide access to users and how to implement this app, see Enable Usage of
the Custom Code Migration App [page 1868]
● For information about how to access this app, see the SAP Fiori Apps reference library
If you need support or experience issues, please report an incident under component BC-DWB-CEX.
● Desktop
You can use this app to perform SAP S/4HANA custom code checks in the on-premise system in which the
custom code to be analyzed is stored.
Overview
Learn how to enable and implement the Custom Code Migration (CCM) app.
1. In the SAP Cloud Connector Administration, you have to configure the access control (RFC) [page 1869] for
remote connection.
2. In the SAP Cloud Platform Cockpit, setting up the destinations to enable the connectivity to on-premise
systems [page 1872] is required.
3. In the SAP Fiori launchpad of your ABAP environment, you have to administrate user assignments and
maintain communication arrangements [page 1872].
As an SAP administrator, you define the access authorization by using the accessible resources, also known as
the access control list (ACL), for the connection to the checked system as follows:
Note
1. In the SAP Cloud Connector Administration, configure the access control (RFC).
2. Define the permitted function modules (resources) for a particular back-end system.
Note
BAPI_USER_GET_DETAIL
FUNCTION_EXISTS
REPOSITORY_ENVIRONMENT_ALL
RFC_GET_NAMETAB
RFC_GET_SYSTEM_INFO
RFC_PING
RFC_SYSTEM_INFO
RFCPING
RS_ABAP_CHECK_PROGRAM_E
RS_ABAP_EXPAND_DDL_OBJ_LIST_E
RS_ABAP_EXPORT_COMP_PROCS_E
RS_ABAP_GET_ALL_DYNPROS_E
RS_ABAP_GET_CODE_RANGES_E
RS_ABAP_GET_DDIF_TABL_E
RS_ABAP_GET_DDIF_VIEW_E
RS_ABAP_GET_DDL_DEPENDENCIES_E
RS_ABAP_GET_INSP_PROGRAMS_E
RS_ABAP_GET_OBJECT_CLASS_E
RS_ABAP_GET_OBJECT_E
RS_ABAP_GET_OBJECTS_E
RS_ABAP_GET_TYPE_INFO_E
RS_ABAP_GET_TYPES_E
RS_TOOL_ACCESS_REMOTE
SCA_REMOTE_DATA_ACCESS
SCA_REMOTE_DATA_VERSION
SCI_ANALYZE_SQL_STMNTS
SCI_DCTHDB_DOWNLOAD
SCI_F4_EXIT_CHK
SCI_F4_EXIT_CHKV
SCI_F4_EXIT_ERRTY
SCI_F4_EXIT_INSP
SCI_F4_EXIT_OBJS
SCI_GET_INSPECTION_CODE_STAT
SCI_GET_INSPECTION_PLAIN_LIST
SCI_GET_INSPECTION_RESULT
SCI_GET_INSPECTION_RESULTS
SCI_GET_OBJECTS_PACKAGES
SCI_INSPECT_LIST
SCI_PERF_CHKLST_COMPLIANCE
SCI_QUERY_SET_GUI_STATUS
SCI_REMOTE_DELETE
SCI_REMOTE_OBJLIST_CHECK
SCI_REMOTE_OBJLIST_RESULT_SHOW
SCI_REMOTE_RESULT_DISPLAY
SCI_REMOTE_SQL_STMNTS_ANALYZE
SCI_REMOTE_SQLT_RESULT_SHOW
SCI_RUN
SCI_SHOW_RESULTS
SLINRMT_RUN
SUSG_API_ADMIN_ITERATOR
SUSG_API_DATA_ITERATOR
SUSG_API_ODATA_ITERATOR
SUSG_API_PROC_ITERATOR
SUSG_API_PROG_ITERATOR
SUSG_API_RDATA_ITERATOR
SUSG_API_SUB_ITERATOR
SYCM_CALCULATE_URLS
SYCM_CREATE_DELETION_TRREQ
SYCM_GET_TADIR_INFO
SYCM_HEALTH_CHECK
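Conceptually, the access control acts as an allow-list for function modules. A minimal sketch, using a small subset of the list above (the rejected module name is made up):

```python
# Subset of the permitted function modules (resources) listed above:
PERMITTED_FUNCTION_MODULES = {"RFC_PING", "RFC_SYSTEM_INFO",
                              "SYCM_GET_TADIR_INFO"}

def rfc_call_allowed(function_module):
    """An RFC call from the ABAP environment passes the access control
    only if the function module is on the permitted-resources list
    defined for the back-end system."""
    return function_module in PERMITTED_FUNCTION_MODULES

print(rfc_call_allowed("RFC_PING"))         # True
print(rfc_call_allowed("ZSOME_OTHER_FM"))   # False (hypothetical module)
```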
Create an instance for the destination service to define the applications in the ABAP environment that are
needed for outbound connectivity.
Set up on-premise RFC connectivity for the ABAP environment by configuring a destination of type RFC.
To enable business users to access the Custom Code Migration app, an SAP administrator has to assign
the Project Manager – IT business role to one or more business users as follows:
1. From the SAP Fiori launchpad of your ABAP environment, open the Maintain Business Users app.
2. To open the details page of a business user, select a business user.
3. Choose the Add Business Role button and add the Project Manager – IT business role template (ID
SAP_BR_IT_PROJECT_MANAGER) to the business user.
Note
To enable write access for this business role, ensure that it is set to Unrestricted. To do this, open the
business role in the Maintain Business Role app and choose the Maintain Restrictions
button from the header.
For more information, see How to Create a Business Role from a Template.
To enable communication from your ABAP environment to your on-premise systems using Remote Function
Calls (RFC), you need to create the following communication arrangements in your ABAP environment:
Ensure that you create a service instance containing the destination to your on-premise system in the
SAP Cloud Platform Cockpit.
Note
To create these communication arrangements, you need the Administrator and the Project
Manager – IT role.
To set up the RFC connection from the ABAP system in the ABAP environment to your on-premise system,
an SAP administrator has to proceed as follows:
1. In the SAP Cloud Platform Cockpit, create a destination to your on-premise system in the service
instance.
2. Log on to the SAP Fiori launchpad of your ABAP environment.
3. In the Communication Management section, select the Communication Arrangement tile.
4. Create a new communication arrangement using the SAP_COM_0464 scenario.
5. If not yet available or defined, create a communication system for the communication arrangement to
define an endpoint for your checked system.
1. Use the System ID and System Name of the checked system.
2. Choose Create.
3. In the General tab, switch on the slider for the Destination Service.
4. Select the service instance that is defined in communication arrangement SAP_COM_0276 as
Instance.
Enter the corresponding destination to your on-premise system that you have defined in the
service instance in your SAP Cloud Cockpit as Name.
Note
You have to enter the exact name of the destination for Name.
Now, you can use the communication arrangement as Destination in the Custom Code Migration app to
establish the connection to your on-premise system.
To analyze your custom code in your checked system using the Custom Code Migration app, see SAP Note
2599695 Custom Code Migration Fiori App: Remote Stubs for the Checked System.
With this app you can view and repair the CDS views that are in an inconsistent state.
Key Features
● View the details of the inconsistent CDS views such as the Application Component, Error Category, Person
responsible and Package details.
● Search for a specific view based on the view name or filter the views based on the error category.
● Repair the inconsistent views
In addition, the app allows you to export the CDS views to spreadsheets.
● Desktop
If you need support or experience issues, please report an incident under component BC-DWB-DIC.
With this app you can enable an authorization trace for a business user. This helps you to analyze if any
authorizations are missing or are insufficient.
A maximum of 10,000 data sets is possible; we therefore recommend considering this when defining the
selection criteria, especially the date range.
Status Meaning
If an authorization check resulted in a Filtered status, you can check which business roles expose the affected
restriction type. One potential cause is that the checked business user is not assigned to the required
business role or that the required value has not been maintained yet.
● Desktop
● Tablet
● Smartphone
If you need support or experience issues, please report an incident under component BC-SRV-APS-IAM.
The Output Management apps comprise activities related to the output of business documents in print and
email. You can choose the channel for your output directly in your app. This version supports only print format.
Purpose
Using this app, you can set up print queues to manage the printing of documents and monitor the print jobs in
each queue.
This helps you to identify and analyze errors and points you in the right direction for troubleshooting them.
In a cloud environment, the back-end system does not have a connection to local printers in the customer's
network (no virtual private network access is available, for example). To establish this connection, you need to
create a print queue in the cloud system representing an output channel to a local printer. To do so, you have to
install SAP Cloud Print Manager in the customer's network from the Install Additional Software app so that it
can regularly check, via a connection to the back-end system, whether new print items are in the print queue. If
this is the case, SAP Cloud Print Manager retrieves these items and sends them to the locally configured
printer.
Alternatively, you can directly print out the PDF document in the Maintain Print Queues app in preview mode.
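The polling pattern described above can be sketched as follows; the callbacks are hypothetical stand-ins, not the actual SAP Cloud Print Manager interface:

```python
import time

def poll_print_queue(fetch_new_items, send_to_printer, interval_s=30.0,
                     cycles=None):
    """Regularly check the print queue in the cloud system and forward
    any retrieved items to the locally configured printer."""
    done = 0
    while cycles is None or done < cycles:
        for item in fetch_new_items():   # new print items, if any
            send_to_printer(item)        # hand over to the local printer
        done += 1
        if cycles is None or done < cycles:
            time.sleep(interval_s)       # wait before checking again

# Usage with in-memory stand-ins for the queue and the printer:
queue = [["doc1.pdf", "doc2.pdf"]]
printed = []
poll_print_queue(lambda: queue.pop() if queue else [],
                 printed.append, interval_s=0, cycles=2)
print(printed)  # ['doc1.pdf', 'doc2.pdf']
```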
Key Features
If you need support or experience issues, please report an incident under component BC-CCM-PRN-OM-PQ or
BC-CCM-PRN-OM-PM.
If you want to use this app and print documents, you need to install Cloud Print Manager. For more information,
see Install Additional Software [page 1879]
General Information
In a cloud environment, the back-end system does not have a connection to local printers in the customer's
network (no virtual private network access is available, for example). To establish this connection, you need to
create a print queue in the cloud system representing an output channel to a local printer. To do so, you have to
install SAP Cloud Print Manager in the customer's network from the Install Additional Software app so that it
can regularly check, via a connection to the back-end system, whether new print items are in the print queue. If
this is the case, SAP Cloud Print Manager retrieves these items and sends them to the locally configured
printer.
Alternatively, you can directly print out the PDF document in the Maintain Print Queues app in preview mode.
Context
A print user is a technical user used by the SAP Cloud Print Manager to log on to the system and retrieve the
print queue information.
Note
Results
Procedure
You have created a print queue you can use in the SAP Cloud Print Manager.
Use method CREATE_QUEUE_ITEM_BY_DATA to create a queue item (job) with its print data and attachments
within a queue. This method has the following parameters:
Once a queue item has been created, the method returns the ID of the item (parameter RV_ITEMID). If an error
occurs, the error messages are returned in the exporting parameter EV_ERR_MSG.
Example
Sample Code
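The actual sample code is ABAP; as an illustration of the documented contract only, a Python stand-in might look like this (everything except the parameter names RV_ITEMID and EV_ERR_MSG is hypothetical):

```python
class PrintQueue:
    """In-memory stand-in for a print queue (illustration only)."""
    def __init__(self):
        self.items = {}

    def add(self, print_data, attachments):
        item_id = len(self.items) + 1
        self.items[item_id] = (print_data, attachments)
        return item_id

def create_queue_item_by_data(queue, print_data, attachments=()):
    """Models the documented contract: on success the new item ID is
    returned (RV_ITEMID); on error the message lands in EV_ERR_MSG."""
    try:
        item_id = queue.add(print_data, attachments)
        return {"RV_ITEMID": item_id, "EV_ERR_MSG": ""}
    except Exception as exc:
        return {"RV_ITEMID": None, "EV_ERR_MSG": str(exc)}

result = create_queue_item_by_data(PrintQueue(), b"%PDF-1.7 ...")
print(result["RV_ITEMID"], repr(result["EV_ERR_MSG"]))  # 1 ''
```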
With this app you can display the list of apps that are available for download. This helps you to better
integrate your apps with other programs you need for your daily business.
Tip
For more information, see the SAP Cloud Print Manager Quick Guide that you can call up directly in the
application by clicking Help. This document is only available if you have installed SAP Cloud Print
Manager.
● Desktop
● Tablet
If you need support or experience issues, please report an incident under component BC-CCM-PRN-OM.
The software component lifecycle management allows you to manage the lifecycle of software components
that are available in your SAP Cloud Platform ABAP Environment landscape.
Purpose
For the lifecycle management of software components you can use the app Manage Software Components in
your launchpad that displays available software components, and imports them to service instances in your
ABAP environment landscape. With the app, you can create new software components and delete them. Due to
its similarity to the Git protocol and its workflows, the following documentation uses the term pull as a synonym
for import.
One software component is comparable to a repository in Git. These "repositories" are centrally stored in
separate units and cannot be referenced by other software components, which means the transport of
development objects between the components is not possible. All software components are managed by SAP.
Once you have pulled a new software component to a service instance, a new structure package is created. The
structure package name corresponds to the software component name. A software component itself is
Related Information
You can use this app to create, display, pull and delete software components in your ABAP environment
landscape.
Purpose
This app allows you to create a software component on a service instance and make it locally available on other
instances within the landscape by pulling it.
Key Features
● Desktop
● Tablet
● Smartphone
Context
You want to see which software components already exist in the environment and are locally available. To
display available software components, perform the following steps:
Procedure
1. After opening the app, choose Go on the Manage Software Component Lifecycle screen to display the list of
available software components. The list provides you with the following information:
Tip
We recommend that you use a namespace for the software component (Z or Y). The namespace is required
to check whether the software component can be imported to a service instance or not.
<Created On> Date and time when the software component was created.
<Created By> Email address of the technical user who created the software component.
<Changed On> Date and time when the last changes were made to the software component.
<Changed By> Email address of the technical user who performed the last changes on the software component.
Context
Procedure
1. On the Manage Software Component Lifecycle screen, select Create Object (plus icon).
2. In the Software Component dialog, enter a name of the software component (max. 18 characters).
Optionally you can also enter a description (max. 60 characters), which explains the functions of the
component and is displayed in the app. Choose a type for your software component. Select either
Development or Business Configuration from the drop-down.
Note
You can create multiple business configuration software components. You can only import one business
configuration software component to a system. You can only release customizing transports to a business
configuration software component.
Note
Note that you have to create a development package first. Afterward, create ABAP development
objects in this newly created development package. The development package is a sub-package of
the structure package dependent on the software component.
4. After the transport of the structure package, the software component becomes available for all
instances and can be pulled.
As soon as the structure package is transported, for example, to provide a new version of the software component, the date and time of the transport as well as the user who triggered the transport are displayed in the list of available software components (<Changed On> and <Changed By>).
Tip
We recommend that you add all your pulled software components (structure packages) to your favorite packages to have easy and quick access to them.
Context
You can pull new as well as already pulled software components to the service instance you are currently on. To
do the pull, perform the following steps:
Procedure
1. Select a software component (radio button) from the list and choose Pull to pull new or already pulled
software components. Alternatively, you can navigate to the detail page of a software component and
choose Pull in the page header.
In both cases, you pull the latest version of the software component. If the pull is finished and the software
component is locally available on the instance, the value in the Pulled column turns to Yes.
2. Display the list of all pulls by choosing Pull History.
To display pulls that have been triggered for a particular software component, first select the component
from the list and then choose Pull History. In both cases, you will be navigated to the Software Component
Pulls screen.
3. To display or refresh the list of pulls, choose Go.
Field Description
<Software Component> (Column initially hidden) Name of the software component, which has to be unique per service instance. The maximum length of the name is restricted to 18 characters.
<Pull Identifier> (Column initially hidden) A unique key (UUID) of the software component.
Note
Only one pull can be in status Running. If you trigger several pulls, the other software components will switch to status Error.
<Started By> E-mail address of the user who started the pull.
<Start Time> Date and time when the pull was triggered.
<Change Time> Date and time of the changes during the pull procedure.
4. You can display the pull details by choosing the entry from the list.
The execution log and transport log are displayed as tables. You can find these tables on the detail page of each pull entry. Additionally, every log line is classified with a criticality level. A log line can have the criticality level "Information", "Success", "Warning", or "Error". Each level is represented with a colored criticality icon.
Furthermore, you can also download the logs as an Excel file. This may be helpful when communicating with SAP in the context of error analysis.
Column Name Description
<Index> (Initially hidden) The index column is initially hidden but can be enabled by clicking the Settings button in the table toolbar.
Context
Procedure
1. On the Manage Software Components screen, select a software component you want to delete.
Alternatively, you can delete a software component from the details page.
2. Choose the Delete icon in the control panel.
3. You will be prompted to confirm the action.
Caution
Result
If this software component has already been pulled to a service instance, objects are not deleted in the ABAP instances, but the status of the software component and all objects belonging to it is changed to read-only. You are not able to make any changes to the dedicated structure package and the development package contained in it. If you want to delete the objects, you must delete them and import the deletion into the other systems before you delete the software component.
Note
Currently you cannot restore software components that have been deleted.
Learn about the different account administration and application operation tasks that you can perform in the Neo environment.
Learn about frequent administrative tasks you can perform using the SAP Cloud Platform cockpit, including managing subaccounts, entitlements, and members.
● Managing Global Accounts and Subaccounts Using the Cockpit [page 1748]
● Managing Entitlements and Quotas Using the Cockpit [page 1755]
Related Information
Your SAP Cloud Platform global account is the entry point for managing the resources, landscape, and
entitlements for your departments and projects in a self-service manner.
Set up your account model by creating subaccounts in your enterprise account. You can create any number of
subaccounts in any environment (Neo, Cloud Foundry, and ABAP) and region.
Use the cockpit to log on to your global account and start working in SAP Cloud Platform.
Prerequisites
You have received a Welcome e-mail from SAP for your global account.
Context
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
When your organization signs a contract for SAP Cloud Platform services, an e-mail is sent to the e-mail address specified in the contract. The e-mail message contains the link to log on to the system and the SAP Cloud Identity credentials (user ID) for the specified user. These credentials can also be used for sites such as the SAP Store or the SAP Community.
Procedure
For more information, see Regions and Hosts Available for the Neo Environment [page 14].
Note
If single sign-on has not been configured for you, you will have to enter your credentials. You’ll find your
logon ID in your Welcome e-mail.
Tip
In the Global Accounts page, you can filter the display of global accounts:
○ Filter by role to display global accounts according to your role in the global account, administrator
or member.
○ Filter by region to display global accounts that contain subaccounts in the selected region.
○ Filter by environment to display global accounts that contain subaccounts in the selected
environment.
Results
The Overview page opens, displaying global account information, including the number of subaccounts and regions in your global account, and service usage information.
Note
The information in the Overview page is only available to global account administrators.
Next Steps
Learn how to navigate to your global accounts and subaccounts in the SAP Cloud Platform cockpit.
Prerequisites
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
● Sign up for an enterprise or a trial account and receive your logon data. For more information, see Get a
Trial Account or Purchase a Customer Account.
● Create the subaccount to which you want to navigate. For more information, see Create a Subaccount in
the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
Procedure
Result: Home / <global_account>
Subaccount
1. Select the global account that contains the subaccount you'd like to navigate to by following the steps described above.
2. Select the subaccount. For more information about creating subaccounts, see Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
Result:
Change the display name for the global account using the SAP Cloud Platform cockpit.
Prerequisites
You are a member of the global account that you'd like to edit.
Context
The overview of global accounts available to you is your starting point for viewing and changing global account
details in the cockpit. To view or change the display name for a global account, trigger the intended action
directly from the tile for the relevant global account.
Procedure
1. Choose the global account for which you'd like to change the display name and choose (Edit) on its tile.
A dialog opens with the mandatory Display Name field that is to be changed.
2. Enter the new human-readable name for the global account.
3. Save your changes.
Related Information
Create subaccounts in your global account using the SAP Cloud Platform cockpit.
Prerequisites
Before creating your subaccounts, we recommend you learn more about Setting Up Your Account Model.
Context
You create subaccounts in your global account. Once you create a new subaccount, you see a tile for it in the
global account view, and you are automatically assigned to it as an administrator.
When you create a new subaccount in the Neo environment, you can choose to copy settings from an existing
Neo subaccount in the same region.
Subaccounts are created in the background. Some details, including the subaccount display name and
description are available immediately. Settings that you copy are initially created only in the background, and
there may be some delay before you can see them.
Running on the AWS, Azure, or GCP regions: You can enable subaccounts to use beta features, including services and applications, which are occasionally made available by SAP for SAP Cloud Platform. This option is not selected by default and is available only to administrators of your enterprise account.
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors
or damages caused by the use of beta features.
Procedure
Since Neo runs solely on SAP infrastructure, when you select Neo Environment the provider is
automatically filled in as SAP.
5. (Optional) To use beta features in the subaccount, select Enable beta features.
6. Save your changes.
Results
A new tile appears in the global account page with the subaccount details.
Context
You edit a subaccount by choosing the relevant action on its tile. It's available in the global account view, which
shows all its subaccounts.
The subaccount technical name is a unique identifier of the subaccount on SAP Cloud Platform that is
generated when the subaccount is created.
Procedure
1. Choose the subaccount for which you'd like to make changes and choose (Edit) on its tile.
You can view more details about the subaccount such as its description and additional attributes by
choosing Show More.
2. Make your changes and save them.
Prerequisites
Context
You cannot delete the last remaining subaccount from the global account.
Note
For any SAP Cloud Platform instance not running on AWS, Azure, or GCP regions, all references to SAP in
this topic are the responsibility of the respective cloud operator.
When the contract of an SAP Cloud Platform customer ends, SAP is legally obligated to delete all the data in
the customer’s accounts. The data is physically and irreversibly deleted, so that it cannot be restored or
recovered by reuse of resources.
The termination process is triggered when a customer contract expires or a customer notifies SAP that they
wish to terminate their contract.
1. SAP sends a notification e-mail about the expiring contract, stating the termination date on which the account will be closed.
2. A banner is displayed in the cockpit during the 30-day validation period before the global account is closed.
During this period, the customer can export their data, or they can open an incident to download their data
before the termination date.
The customer can cancel the termination during this period and renew their contract with SAP.
3. After the termination date, a grace period of 30 days begins.
4. During the grace period:
○ Access is blocked to the account, and to deployed and subscribed applications.
○ No data is deleted, and backups are ongoing.
○ The global account tile is displayed in the Global Accounts page of the cockpit with the label Expired. Clicking the tile displays the number of days left in the grace period.
The customer can contact SAP to restore their account to a fully active account without data loss.
5. After the end of the grace period, all customer-related data for the account and services is deleted and
cannot be restored. The global account tile is removed from the cockpit.
When you purchase an enterprise account, you are entitled to use a specific set of resources, such as the
amount of memory that can be allocated to your applications.
An entitlement equals your right to provision and consume a resource. A quota represents the numeric
quantity that defines the maximum allowed consumption of that resource.
Entitlements and quotas are managed at the global account level, distributed to subaccounts, and consumed
by the subaccounts. When quota is freed at the subaccount level, it becomes available again at the global
account level.
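As an illustration only (this is a conceptual sketch with invented names, not an SAP API), the allocation model described above can be expressed as a small program: the global account owns a fixed quota, subaccounts draw from it, and quota freed at subaccount level becomes available again at global account level.

```python
class GlobalAccount:
    """Toy model of entitlement distribution; not an SAP API."""

    def __init__(self, total_quota):
        self.total_quota = total_quota  # maximum allowed consumption
        self.assigned = {}              # subaccount name -> assigned quota

    @property
    def available(self):
        # Quota not yet distributed to any subaccount.
        return self.total_quota - sum(self.assigned.values())

    def assign(self, subaccount, amount):
        # Distribution is capped by what is still free at global account level.
        if amount > self.available:
            raise ValueError("not enough quota left at global account level")
        self.assigned[subaccount] = self.assigned.get(subaccount, 0) + amount

    def free(self, subaccount, amount):
        # Freed quota returns to the global pool automatically.
        current = self.assigned.get(subaccount, 0)
        self.assigned[subaccount] = max(0, current - amount)


ga = GlobalAccount(total_quota=10)
ga.assign("dev", 4)
ga.assign("prod", 5)
ga.free("dev", 2)
print(ga.available)  # 3
```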
Note
Only global account administrators can configure entitlements and quotas for subaccounts.
There are two places in the cockpit where you can view and configure entitlements and quotas for
subaccounts:
● The Entitlements Subaccount Assignments page in the left-hand navigation of the global account
● The Entitlements page in the left-hand navigation of a subaccount
Depending on your permissions, you may only have access to one of these pages. You can find more details in
the table below:
In the Cloud Foundry environment, you can further distribute the quotas that are allocated to a subaccount.
This is done by creating space quota plans and assigning them to the spaces. For more information on space
quota plans in the Cloud Foundry environment, see https://docs.cloudfoundry.org/adminguide/quota-
plans.html .
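The space quota plans mentioned above can also be created with the standard Cloud Foundry CLI. A minimal sketch (the quota and space names dev-quota and dev are placeholders; it assumes you are already logged on with cf login and targeting the org that corresponds to your subaccount):

```shell
# Create a space quota plan in the targeted org:
# 4 GB total memory, at most 20 routes, at most 10 service instances.
cf create-space-quota dev-quota -m 4G -r 20 -s 10

# Assign the plan to a space; all apps in that space then share these limits.
cf set-space-quota dev dev-quota

# List the space quota plans defined in the org.
cf space-quotas
```

Quota freed by unassigning a plan from a space becomes available again within the subaccount's overall allocation, mirroring the global-account-to-subaccount model described earlier.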
Assign entitlements to subaccounts by adding service plans and distribute the quotas available in your global
account to your subaccounts using the SAP Cloud Platform cockpit.
Prerequisites
Context
You can distribute entitlements and quotas across subaccounts within a global account from two places in the
cockpit:
● The Entitlements Subaccount Assignments page at global account level (only visible to global account administrators)
● The Entitlements page at subaccount level (visible to all subaccount members, but only editable by global account administrators)
For more information, see Managing Entitlements and Quotas Using the Cockpit [page 1755].
Procedure
3. At the top of the page, select all the subaccounts for which you would like to configure or display
entitlements.
Tip
If your global account contains more than 20 subaccounts, open the value help dialog. There you can filter subaccounts by role, environment, and region to make your selection easier and faster. You can only select a maximum of 50 subaccounts at once.
You will get a table for each of the subaccounts you selected, displaying the current entitlement and quota
assignments.
Note
Action Steps
Add new service plans to the subaccount: Choose Add Service Plans and, in the dialog, select the services and the plans of each service that you would like to add to the subaccount.
Edit the quota for one or more service plans: Use the plus and minus buttons to increase or decrease the quota for each service plan.
Delete a service plan and its quota from the subaccount: Choose the delete icon from the Actions column.
7. Once you're done, choose Save to save the changes and exit edit mode for that subaccount.
8. Optional: Repeat steps 5 to 7 to configure entitlements for the other subaccounts selected.
You can add members to your global accounts and subaccounts and assign different roles to them:
A member is a user who is assigned to an SAP Cloud Platform global account or subaccount. A member
automatically has the permissions required to use the SAP Cloud Platform functionality within the scope of the
respective accounts and as permitted by their account member roles.
Roles
Roles determine which functions in the cockpit users can view and access, and which actions they can initiate.
Roles support typical tasks performed by users when interacting with the cloud platform, for example, adding
and removing users. A user can be assigned one or more roles, where each role comes with a set of permissions. The set of assigned roles defines what functionality is available to the user and what activities they can perform.
The Administrator role in a global account is automatically assigned to the user who has started a trial account
or who has purchased resources for an enterprise account. A global account administrator has permissions to
● Users can be assigned to one or more subaccounts and to one or more roles in the relevant subaccount.
● If the user is assigned to more than one subaccount, an administrator must assign the roles to the user for
each subaccount.
● Roles apply to all operations associated with the subaccount, irrespective of the tool used (Eclipse-based
tools, cockpit, and console client).
● As an administrator in the Neo environment, you cannot remove your own administrator role. You can
remove any member except yourself.
The default platform identity provider and application identity provider of SAP Cloud Platform is SAP ID
service.
Trust to SAP ID service in your subaccount is preconfigured in SAP Cloud Platform by default, so you can start using it without further configuration. Optionally, you can add additional trust settings or set the default trust to inactive, for example, if you prefer to use another SAML 2.0 identity provider. Using the SAP Cloud Platform cockpit, you can make changes by navigating to your respective subaccount and choosing Security > Authorization.
If you want to add new users to a subscribed app, or if you want to add users to a service, such as Web IDE, you
can add those users to SAP ID service in your subaccount. See Add Users to SAP ID Service in the Neo
Environment [page 1900].
Note
If you want to use a custom IdP, you must establish trust to your custom SAML 2.0 identity provider. We
describe a custom trust configuration using the example of SAP Cloud Platform Identity Authentication
service.
For more information, see Trust and Federation with SAML 2.0 Identity Providers [page 2281].
Before you can assign roles or role collections to a user in SAP ID service, you have to ensure that this user is assigned to SAP ID service.
Prerequisites
The user you want to add to SAP ID service must have an SAP user account (for example, an S-user or P-user).
For more information, see Create SAP User Accounts [page 1901].
Context
When you create your own trial account, your SAP user is automatically created and assigned to SAP ID
service. But when you onboard new members to your subscribed app, you must add them to your subaccount
and ensure that they are also added to SAP ID service. Then you can assign a role or group to a user in SAP ID
service.
Procedure
Remember
If the user identifier you entered does not have an SAP user account or has never logged on to an
application in this subaccount, SAP Cloud Platform cannot automatically verify the user name. To avoid
mistakes, you must ensure that the user name is correct and that it exists in SAP ID service.
Related Information
If you want to add users to SAP ID service in your subaccount, you must ensure that they have an SAP user
account.
Context
If you register for an SAP Cloud Platform trial account at https://cockpit.hanatrial.ondemand.com, you automatically get a user in SAP ID service. But if you want to add other users to SAP ID service in your subaccount, you must ensure that they have an SAP user account (for example, an S-user or P-user). This could be the case if you wanted to add new users to a subscribed app in your subaccount.
Procedure
Add users as global account members using the SAP Cloud Platform cockpit.
Prerequisites
Context
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
● View all the subaccounts in the global account, meaning all the subaccount tiles in the global account's
Subaccounts page.
● Edit general properties of the subaccounts in the global account from the Edit icon in the subaccount tile.
● Create a new subaccount in the global account.
● View, add, and remove global account members.
● Manage entitlements for the global account.
Restriction
Adding members to global accounts is only possible in enterprise accounts, not in trial accounts.
By default, the cockpit and console client are configured to use SAP ID Service, which uses the SAP user base,
as an identity provider for user authentication. If you want to use a custom user base and custom Identity
Authentication tenant settings, see Platform Identity Provider [page 2431].
Procedure
If you want to use a custom user base, choose Other for User Base and then enter the corresponding
Identity Authentication tenant name. For more information, see Platform Identity Provider [page 2431].
4. Enter one or more e-mail addresses, separated by commas, spaces, semicolons, or line breaks, and choose Add Members.
The users you add as members at global account level are automatically assigned the Administrator role.
Next Steps
To delete members at global account level, choose (Delete) next to the user's ID.
Add users as members to a subaccount in the Neo environment and assign roles to them using the SAP Cloud
Platform cockpit.
Prerequisites
Restriction
In the Neo environment, adding members to subaccounts is only possible in enterprise accounts, not in
trial accounts.
Tip
Administrators can request S-user IDs on the SAP ONE Support Launchpad User Management
application: 1271482 .
Context
In the Neo environment, you can assign predefined roles to subaccount members, but you can also create
custom platform roles. For more information, see Managing Member Authorizations in the Neo Environment
[page 1904].
Procedure
Note
The name of a member is shown only after they visit the subaccount for the first time.
● To select or unselect roles for a member, choose (Edit). The changes you make to the member's roles
take effect immediately.
● You can enter a comment when editing user roles. This lets you track the reasons for subaccount
membership and other important data. The comments are visible to all members.
● You can send an e-mail to a member. This option appears only after the recipient visits the subaccount for
the first time.
● To remove all the roles of a member, choose Delete (trashcan). This also removes the member from the
subaccount.
● Choose the History button to view a list of changes to members (for example, added or removed members,
or changed role assignments).
● Use the filter to show only the members with the role you've selected.
Related Information
SAP Cloud Platform includes predefined platform roles that support the typical tasks performed by users when
interacting with the platform. In addition, subaccount administrators can combine various scopes into a
custom platform role that addresses their individual requirements.
A platform role is a set of permissions, or scopes, managed by the platform. Scopes are the building blocks for
platform roles. They represent a set of permissions that define what members can do and what platform
resources they can access (for example, configuration settings such as destinations or quotas). Most scopes
follow a “Manage” and “Read” pattern. For example, manageXYZ comprises the actions create, update, and
delete on platform resource XYZ. However, some areas use a different pattern, for example, Application
Lifecycle Management.
Predefined platform roles cannot be changed. However, global account administrators can copy from
predefined roles, and then modify the copies.
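The "copy a predefined role, then modify the copy" rule can be sketched as follows. This is illustrative code, not a platform API: the scope names follow the manageXYZ/readXYZ pattern described above, but the Developer scope set and the composition logic are assumptions made for the example.

```python
# Toy model of platform roles as sets of scopes; not an SAP API.
# The Developer scope set below is an assumed, simplified example.
PREDEFINED = {
    "Developer": frozenset({"readJavaApplications", "readLogs", "manageLogs"}),
}

def copy_role(name, add=(), remove=()):
    """Predefined roles are immutable; a custom role starts as a copy."""
    scopes = set(PREDEFINED[name])
    scopes.update(add)
    scopes.difference_update(remove)
    return scopes

def is_allowed(assigned_roles, required_scope):
    # A member may perform an action if any assigned role carries the scope.
    return any(required_scope in role for role in assigned_roles)


custom = copy_role("Developer", add={"readMonitoringData"}, remove={"manageLogs"})
print(is_allowed([custom], "readMonitoringData"))  # True
print(is_allowed([custom], "manageLogs"))          # False
```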
You can also manage subscriptions, trust, authorizations, and OAuth settings, and restart SAP HANA services on HANA databases. Furthermore, you can view heap dumps and download a heap dump file. In addition, you have all permissions granted by the developer role, except the debug permission.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Cloud Connector Admin Open secure tunnels via Cloud Connector from on-premise networks to your subaccounts.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Developer Supports typical development tasks, such as deploying, starting, stopping, and debugging applications. You can also change loggers and perform monitoring tasks, such as creating availability checks for your applications and executing MBean operations.
Note
By default, this role is assigned to a newly created user.
Support User Designed for technical support engineers, this role enables you to read almost all data
related to a subaccount, including its metadata, configuration settings, and log files. For
you to read database content, a database administrator must assign the appropriate
database permissions to you.
Application User Admin Assigned by the subaccount administrator to a subaccount member. Manage user permissions on application level to access Java applications, HTML5 applications, and subscriptions. You can control permissions directly by assigning users to specific application roles or indirectly by assigning users to groups, which you then assign to application roles. You can also unassign users from the roles or groups.
Note
This role does not let you manage subaccount roles and perform actions at the
subaccount level (for example, stopping or deleting applications).
The following graphic illustrates the predefined Administrator, Developer, and Support User roles and their respective sets of scopes:
The Administrator role includes all platform scopes available on SAP Cloud Platform. The Developer and Support User roles are subsets of the Administrator role.
Administrators of a subaccount can define custom platform roles based on their needs by assembling the
different scopes they want their custom platform role to include. Custom platform roles are managed at
subaccount level and can be changed at any time.
Subaccount administrators can combine various scopes into a custom platform role that addresses their
individual requirements. Scopes are the building blocks for platform roles. They represent a set of permissions
that define what members can do and what platform resources they can access (for example, configuration
settings such as destinations or quotas).
The following example illustrates what custom platform roles in SAP Cloud Platform typically look like with regard to their scopes:
Related Information
If your scenario requires it, you can add application providers as members to your SAP Cloud Platform
subaccount in the Neo environment and assign them the administrator role so that they can deploy and
administer the applications you have purchased.
Prerequisites
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user
SAP Service Marketplace users are automatically registered with the SAP ID service, which controls
user access to SAP Cloud Platform.
Context
As an administrator of a subaccount, you can add members to it and make them administrators of the
subaccount using the SAP Cloud Platform cockpit. For example, if you have purchased an application from an
SAP implementation partner, you may need to enable the partner to deploy and administer the application.
Procedure
User IDs are case-insensitive and can contain alphanumeric characters only. Currently, there is no user
validation.
5. Select the Administrator checkbox.
7. Notify your application provider that they now have the necessary permissions to access the subaccount.
Related Information
Subaccount administrators can define custom platform roles and assign them to the members of their subaccounts.
Prerequisites
Procedure
1. Choose Platform Roles in the navigation area for the subaccount for which you'd like to manage custom
platform roles.
All custom and predefined platform roles available for the subaccount are shown in a list.
2. You have the following options:
Note
You cannot change or delete a predefined platform role, but you can copy from it and make
changes to the copy.
Related Information
Account Management
readAccount (View Accounts): Enables you to view a list of all subaccounts available to you and access them.
readCustomPlatformRoles (View Custom Platform Roles): Enables you to list self-defined platform roles.
manageCustomPlatformRoles (Manage Custom Platform Roles): Enables you to define your own platform roles.
Agent Activation for Dynatrace Service
readDynatraceIntegration (Read Dynatrace Integration): Enables you to view the configuration settings of the Agent Activation for Dynatrace service in your subaccount.
manageDynatraceIntegration (Manage Dynatrace Integration): Enables you to update and delete the configuration settings of the Agent Activation for Dynatrace service in your subaccount.
Authorization Management
readApplicationRoles (View Application Roles): Enables you to list all user roles available for a Java application.
manageApplicationRoles (Manage Application Roles): Enables you to assign user roles for a Java application and create new roles.
readAuthorizationSettings (View Authorization Settings): Enables you to view all kinds of role, group, and user mappings on account and subscription level.
Note
Assigning this scope to a role requires assigning the readAuthorizationSettings and readSubscriptions scopes as well.
Note
Assigning this scope to a role requires assigning the readSubscriptions scope as well.
Connectivity Service
readDestinations (View Destinations): Enables you to view destinations required for communication outside SAP Cloud Platform.
readSCCTunnels (View SCC Tunnels): Enables you to view the data transmission tunnels used by the Cloud Connector to communicate with back-end systems.
manageSCCTunnels (Manage SCC Tunnels): Enables you to operate the data transmission tunnels used by the Cloud Connector.
Document Service
listECMRepositories (List Document Service Repositories): Enables you to list the document service repositories.
Enterprise Messaging
readMessagingService (Read Messaging Service): Enables you to view details of messaging hosts, queues, and applications' bindings to messaging hosts.
manageMessagingHosts (Manage Messaging Hosts): Enables you to create, edit, and delete messaging hosts.
Extension Integration Management
readExtensionIntegration (Read Extension Integration): Enables you to read integration tokens.
manageExtensionIntegration (Manage Extension Integration): Enables you to create and delete integration tokens.
readApplicationRoleProvider (Read Application Role Provider): Enables you to read the Java applications' role provider configuration using the SAP Cloud Platform cockpit.
Git Service
accessGit (Access Git Repositories): Enables you to create repositories, push commits, push tags, create new remote branches, and push commits authored by other users (forge author identity).
Note
Assigning this scope to a role requires assigning the readHTML5Applications scope as well.
HTML5 Application Management
readHTML5Applications (List HTML5 Applications): Enables you to list HTML5 applications and review their status.
Note
Assigning this scope to a role requires assigning the readAccount scope as well.
Java Application Lifecycle Management
readJavaApplications (List Java Applications): Enables you to list Java applications, get their status, and list Java application processes.
Note
Assigning this scope to a role requires assigning the readMonitoringData scope as well.
manageJavaProcesses (Manage Java Processes): Enables you to start or stop Java application processes.
Note
Assigning this scope to a role requires assigning the readMonitoringData scope as well.
Keystore Service
manageKeystores (Manage Keystores): Enables you to manage (create, delete, list) keystores (using the console client).
Logging Service
readLogs (View Application Logs): Enables you to view all logs available for a Java application.
manageLogs (Manage Application Logs): Enables you to change the log level for Java application logs.
Note
Assigning this scope to a role requires assigning the readLogs scope as well.
Member Management readAccountMem View Account Members Enables you to view a list of members
bers for an individual subaccount.
Note
Assigning this scope to a role requires assigning the readCustomPlatformRoles scope as well.
manageAccountMembers | Manage Account Members
Enables you to add and remove members for an individual subaccount and to assign user roles to them.
Metering Service | readMeteringData | Read Metering Data
Enables you to access data related to your application's resource consumption, e.g. network data volume or database size.
Monitoring Service | readMonitoringConfiguration | Read Monitoring Configuration
Enables you to list JMX checks, availability checks, and alert recipients.
manageMonitoringConfiguration | Manage Monitoring Configuration
Enables you to set and update JMX checks, availability checks, and alert recipients. It also allows you to manage custom HTTP checks.
Multi-Target Application Management | readMultiTargetApplication | Browse Solutions Inventory
Enables you to list solutions, get their status, and list solution operations.
OAuth Client Management | readOAuthSettings | View OAuth Settings
Enables you to view OAuth Application Client settings.
Note
This scope is used for viewing the OAuth clients for OAuth-protected applications, not for the platform APIs. Each platform API requires its own scopes, described in its documentation.
Note
Assigning this scope to a role requires assigning the readSubscriptions scope as well.
Note
This scope is used for managing the OAuth clients for OAuth-protected applications, not for the platform APIs. Each platform API requires its own scopes, described in its documentation.
Note
Assigning this scope to a role requires assigning the readSubscriptions scope as well.
Password Storage | managePasswords | Manage Passwords
Enables you to set and delete passwords for a given application in the password storage (using the console client).
SAP HANA / SAP ASE Service | readDatabaseInformation | View Database Information
Enables you to view lists of SAP HANA and SAP ASE database systems, databases, and database-related service requests. You can also view information such as the assigned database type, the database version, and data source bindings.
Service Management | listServices | List Services
Enables you to browse through the list of services and review their status in your subaccount.
Note
For applying any service-specific configuration, additional scopes (e.g. manageDestinations) may be required.
Note
Assigning this scope to a role requires assigning the manageHTML5Applications scope as well. It is planned to remove this requirement in a future release.
Note
Assigning this scope to a role requires assigning the readHTML5Applications scope as well. It is planned to remove this requirement in a future release.
Trust Management | readTrustSettings | View Trust Settings
Enables you to read trust configurations.
Note
Assigning this scope to a role requires assigning the readAccount, readAuthorizationSettings, and readSubscriptions scopes as well.
Note
Assigning this scope to a role requires assigning the readTrustSettings, readAccount, readAuthorizationSettings, and readSubscriptions scopes as well.
Virtual Machines | readVirtualMachines | List Virtual Machines
Enables you to list virtual machines, volumes, volume snapshots, and security group rules, to get their status, and to list virtual machine processes.
manageVirtualMachines | Manage Virtual Machines
Enables you to create and delete virtual machines, volumes, volume snapshots, security group rules, and access points.
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
In a global account that uses the consumption-based commercial model, you can monitor the usage of billed
services and your consumption costs.
The Global Account Overview page provides information about the usage of cloud credits (if your account has a
cloud credits balance) and costs for a global account, and the usage of services in the global account and its
subaccounts.
To monitor and track usage in the views of the Global Account - Overview page, open the global account in the
cockpit and choose Overview in the navigation area.
● When there is a cloud credits balance for the global account, the cloud credits usage information is
displayed for the current contract period. The total contract duration is split into contract periods (usually
of one year each) and the total cloud credits are divided between these periods.
Note
If your global account has received a cloud credits refund at any time during the current contract phase, you may see a difference between your total usage and monthly usage in the chart.
● When there is no cloud credits balance for the global account, the monthly total costs for the global
account are displayed from the contract start date.
Note
If your global account has received a refund at any time during the current contract phase, you may see a difference between the displayed monthly costs and your billing document.
● In the resource usage views, use the filters to specify which information to display. The Period filter applies
only to the chart display.
● Some rounding or shortening is applied to large values. Mouse over values in the table to view the exact
values in the tooltips.
● Choose a row in the table to view its historic information in the Monthly Usage chart.
You can export the displayed usage and cost data to a Microsoft Excel file:
● To export usage and cost data from all the views, choose Export All Data.
● To export only cloud credits usage data, choose Export Cloud Credits Usage.
Note
Usage data is processed according to accounting formulas that generate a billing document that
aggregates all usage, from all spaces, so that it is favorable to customers. There is no unified method to
calculate costs based on actual usage.
Global Account Info
Displays general information about the global account, including the number of subaccounts it contains and the number of regions in which it has these subaccounts.
Use the Subaccounts link to navigate directly to the Subaccounts page.
Cloud Credits Usage (displayed when there is a cloud credits balance for the global account)
Displays the current balance and monthly usage of cloud credits as a percentage of the cloud credits allocated per contract period.
● Cloud credits information is based on the monthly balance statements for the global account.
● For the time period between the last balance statement and the current date, the chart uses estimated values, which are displayed as striped bars. These estimates are based on resource usage values before computation for billing, and might change after the next balance statement is issued. The estimated values are not projected or forecast values.
Global Account Costs (displayed when there is no cloud credits balance for the global account)
Displays the total cost per month for usage of services in the global account from the contract start date. All cost information is for all regions in the global account.
Global Account Overview
Displays usage and costs of services in the global account. The information is broken down according to service plans. All the regions in which a service plan is available are displayed; however, actual usage is only in the regions of your subaccounts.
The charts display information for each month for all service plans in the table or only for the selected row. Provisional data for the current month is displayed as striped bars.
Note
Usage and cost information is updated every 24 hours. Data for the first of the month might not appear until the following day.
Note
Some service plans have differential pricing depending on the region. Usage and costs of a service plan in multiple regions with differential pricing are displayed as separate items.
Use the View By options to switch the view between service plan usage and costs for the global account. Use the filter to select the environment and service for which you want to view information. For the chart display, select a row, and filter by period. If no row is selected, the chart displays information for all the service plans in the table.
Service Usage
Displays resource usage according to service. Some service plans may display usage according to multiple metrics.
Use the filter to select the service and subaccount for which to display usage values.
Subaccount Usage
Displays resource usage according to subaccount. Some service plans may display usage according to multiple metrics.
Use the filter to select the subaccount for which to display usage values. For the chart display, select a row and filter by period. If no row is selected, the chart displays information for all the service plans in the table.
You can also monitor and track usage of services in a subaccount in the Subaccount Overview page. In the
Service Usage view, the table displays the subaccount usage in the current month for service plans of the
selected service. The chart displays monthly usage for the selected service plan in the selected period. Use the
filters to specify which usage information to display in the table and monthly usage charts. The Period filter is
applied to the chart display.
The spreadsheet file that is exported when you use the Export All Data option contains several sheets (tabs). The sheets that are included in the export depend on the commercial model that is used by your global account.
Global Account Info
Provides general information about your global account. If there is a cloud credits balance for the global account, then its cloud credit usage per month, as a percentage of your total cloud credits for the current contract period, is also shown.
This sheet is provided for global accounts that use either the consumption-based or subscription-based commercial model.
Global Account Costs
Allows you to view total monthly usage data and costs for all billable services and plans consumed at the level of your global account. The items listed are all the billed items that are created in the accounting system and deducted from the cloud credit balance of your global account.
This sheet is provided only for global accounts that use the consumption-based commercial model.
Subaccount Costs by Service
Allows you to view monthly usage data and costs of all billable services consumed by plan and subaccount. All subaccount calculations are estimated and proportionate to the total global account usage:
[Subaccount usage / Global account usage x Rate plan per SKU]
This sheet is provided only for global accounts that use the consumption-based commercial model.
Actual Usage
Allows you to view the actual monthly usage data of all consumed services by plan, subaccount, and space. Actual or raw usage data is different from the billed usage that is used in your billing document, and includes non-billable services. The aggregated usage for each service is based on a formula that is specific to each service, for example, MIN, MAX, or AVG.
This sheet is provided for global accounts that use either the consumption-based or subscription-based commercial model.
Example
A global account that uses the consumption-based commercial model has the following usage data in a given month, where SAP HANA and Application Runtime are billable services and Bandwidth is a non-billable service:
● Subaccount 1, Space A: 1x SAP HANA 256 GB, 200 MB of Application Runtime, and 300 MB of Bandwidth
● Subaccount 2, Space C: 1x SAP HANA 512 GB, 600 MB of Application Runtime, and 300 MB of Bandwidth
Based on these usage measurements, you would see the following data in the spreadsheet. Each listed item represents one row in the spreadsheet. Cost prices shown are for illustration purposes only and do not reflect the actual rates for the mentioned services.
Subaccount Costs by Service
● Subaccount 1: 1 instance of SAP HANA 256 = EUR 1024
● Subaccount 2: 1 instance of SAP HANA 512 = EUR 2048
● Subaccount 1: 400 MB of Application Runtime = EUR 40 (400 MB / 1 GB x EUR 100)
● Subaccount 2: 600 MB of Application Runtime = EUR 60 (600 MB / 1 GB x EUR 100)
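The Application Runtime rows apply a per-GB rate to the metered MB. That arithmetic can be sketched as follows (the EUR 100 per GB rate is the illustrative figure from the example above, not a real price):

```python
# Sketch of the Application Runtime cost rows in the example above.
# EUR 100 per GB is the illustrative rate from the documentation,
# not an actual SAP Cloud Platform price.
RATE_PER_GB_EUR = 100

def runtime_cost_eur(usage_mb):
    # 400 MB at EUR 100 per GB -> 400 x 100 / 1000 = EUR 40
    return usage_mb * RATE_PER_GB_EUR / 1000

print(runtime_cost_eur(400))  # Subaccount 1: 40.0
print(runtime_cost_eur(600))  # Subaccount 2: 60.0
```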
Related Information
Administrators can define legal links per enterprise global account in the SAP Cloud Platform cockpit.
Prerequisites
You have the Administrator role for the global account for which you'd like to define legal links.
Context
Note
The content in this section is only relevant for AWS, Azure, or GCP regions.
You can define the legal information relevant for a global account so the members of this global account can
view this information.
Procedure
1. Choose the global account for which you'd like to make changes.
The links you configured are available in the Legal Information menu.
Related Information
Use the SAP Cloud Platform console client for subaccount management in the Neo environment.
● Downloading and setting up the console client: Set Up the Console Client [page 1412]
● Opening the tool and working with the commands and parameters: Using the Console Client [page 1928]
● Console Client Video Tutorial
● Verbose mode of output: Verbose Mode of the Console Commands Output [page 1931]
You can create subaccounts using the console client in the Neo environment.
Prerequisites
Restriction
● You must be a member of the global account that contains the subaccount.
● You work in the Neo environment.
● You set up the console client. See Set Up the Console Client [page 1412].
Recommendation
Before creating your subaccounts, we recommend you learn more about Setting Up Your Account Model.
Example:
Note
For more information on creating new subaccounts and cloning existing subaccounts using the console
client, see create-account [page 1958].
You can use the Neo console client to add quotas to subaccounts.
Prerequisites
● An enterprise account.
Restriction
In the Neo environment, adding quotas to subaccounts is not possible in trial accounts.
● You must be a member of the global account that contains the subaccount.
● Set up the console client. See Set Up the Console Client [page 1412].
Procedure
Example:
For more information on adding quotas to subaccounts using the console client, see set-quota [page
2129].
The SAP Cloud Platform console client for the Neo environment enables development, deployment, and configuration of an application outside the Eclipse IDE, as well as continuous integration and automation tasks.
The tool is part of the SAP Cloud Platform SDK for the Neo environment. You can find it in the tools folder of your SDK location.
Note
The console client is relevant only for the Neo environment. For the Cloud Foundry environment, use the Cloud Foundry command line interface. See Download and Install the Cloud Foundry Command Line Interface [page 1769].
● Downloading and setting up the console client: Set Up the Console Client [page 1412]
● Opening the tool and working with the commands and parameters: Using the Console Client [page 1928]
● Console Client Video Tutorial
● Verbose mode of output: Verbose Mode of the Console Commands Output [page 1931]
You execute a console client command by entering neo <command name> with the appropriate parameters. To
list all parameters available for the respective command, execute neo help <command name>.
The console client is part of the SAP Cloud Platform SDK for Neo environment. You can find it in the tools folder
of your SDK installation.
To start it, open the command prompt and change the current directory to the <SDK_installation_folder>\tools
location, which contains the neo.bat and neo.sh files.
Command Line
You can deploy the same application as in the example above by executing the following command directly in
the command line:
Properties File
Within the tools folder, a file example_war.properties can be found in the samples/deploy_war folder.
In the file, enter your own user and subaccount name:
################################################
# General settings - relevant for all commands #
################################################
# Your subaccount technical name
account=<your subaccount>
# Application name
application=<your application name>
# User for login to hana.ondemand.com.
user=<email or user name>
# Host of the admin server. Optional. Defaults to hana.ondemand.com.
host=hana.ondemand.com
#################################################################
# Deployment descriptor settings - relevant only for deployment #
#################################################################
# List of file system paths to *.war files and folders containing them
source=samples/deploy_war/example.war
Parameter Priority
Argument values specified in the command line override the values specified in the properties file. For example,
if you have specified account=a in the properties file and then enter account=b in the command line, the
operation will take effect in account b.
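The precedence rule can be sketched as a simple merge in which command-line values override properties-file values (a minimal illustration of the behavior described above, not the client's actual implementation):

```python
def effective_args(properties_file_args, command_line_args):
    # Start with values from the properties file, then let any value
    # given on the command line override it.
    merged = dict(properties_file_args)
    merged.update(command_line_args)
    return merged

props = {"account": "a", "host": "hana.ondemand.com"}
cli = {"account": "b"}
print(effective_args(props, cli)["account"])  # "b": the command line wins
```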
Parameter Values
Since the client is executed in a console environment, not all characters can be used in arguments. Special characters must be quoted and escaped.
Consult your console/shell user guide on how to use special characters as command line arguments.
For example, to use an argument with the value abc&()[]{}^=;!'+,`~123 on Windows 7, you should quote the value and escape the ! character. Therefore, you should use "abc&()[]{}^=;^!'+,`~123".
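On POSIX shells, the equivalent quoting can be produced with the Python standard library (shlex.quote is a generic helper for POSIX shells, not part of the console client; the Windows example above still applies to Windows consoles):

```python
import shlex

value = "abc&()[]{}^=;!'+,`~123"
# shlex.quote wraps the value so a POSIX shell passes it through unchanged.
quoted = shlex.quote(value)
print(quoted)

# Splitting the quoted form recovers the original value exactly.
assert shlex.split(quoted) == [value]
```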
User
Password
Do not specify your password in the properties file or as a command line argument. Enter a password only
when prompted by SAP Cloud Platform console client.
Restriction
Proxy Settings
If you work in a proxy environment, before you execute commands, you need to configure the proxy.
For more information, see Set Up the Console Client [page 1412]
You can configure the console to print detailed output during command execution.
For more information, see Verbose Mode of the Console Commands Output [page 1931]
Related Information
● Local code - executed inside a local JVM, which is started when the command is started.
● Remote code - executed at the back end (generally, the REST API that was called by the local code), which is started in a separate JVM on the cloud.
Note
For local code execution, a LOG4J library is used. It is easy to configure and, by default, there is a configuration file located inside the commands class path, that is .../tools/lib/cmd.
For each command execution, two appenders are defined: one for the session and one for the console. They both define different files for all messages that are logged by the SAP infrastructure and by apache.http. By default, the console commands output is written to a number of log files. However, you can change the log4j.properties file and define additional appenders or change the existing ones. If you want, for example, the full output to be printed in the console (verbose mode), or you want to see details from the execution of specific libraries (partially verbose mode), adjust the LOG4J configuration file.
To adjust the level of a specific logger, add log4j.logger.<package>=<level> to the log4j.properties file.
In the file defined for the session, only loggers with level ERROR are logged. If you want, for example, to log
debug information about the apache.http library, you have to change
log4j.category.org.apache.http=ERROR, session to
log4j.category.org.apache.http=DEBUG, session.
This example demonstrates how you can change the output of command execution so that it is printed in the
console instead of collecting the information within log files. To do this, open your SDK folder and go to
directory /tools/lib/cmd. Then, open the log4j.properties file and replace its content with the code
below.
Tip
We recommend that you save the original content of the log4j.properties file. To switch back to the
default settings, just revert the changes you did in the log4j.properties file.
##########
# Log levels
##########
log4j.rootLogger=INFO, console
log4j.additivity.rootLogger=false
##########
# System out console appender
##########
log4j.appender.console.Threshold=ALL
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %C: %m%n
log4j.appender.console.filter.1=org.apache.log4j.varia.StringMatchFilter
log4j.appender.console.filter.1.StringToMatch=>> Authorization: Basic
log4j.appender.console.filter.1.AcceptOnMatch=false
Related Information
Context
The console commands can return structured, machine-readable output. When you use the optional --output parameter in a command, the command returns values and objects in a format that a machine can easily parse. The currently supported output format is JSON.
Cases
When --output json is specified, the console client prints out a single JSON object containing information
about the command execution and the result, if available.
Example
Here is a full example of a command (neo start) that supports structured output and displays result values:
{
  "command": "start",
  "argLine": "-a myaccount -b myapplication -h hana.ondemand.com -u myuser -p ******* -y",
  "pid": 6523,
  "exitCode": 0,
  "errorMsg": null,
  "commandOutput": "Requesting start for:
    application : myapplication
    account     : myaccount
    host        : https://hana.ondemand.com
    synchronous : true
    SDK version : 1.48.99
    user        : myuser
  web: STARTED
  URL: https://myapplicationmyaccount.hana.ondemand.com
  Access points:
    https://myapplicationmyaccount.hana.ondemand.com"
}
Note
The shown command result is only an example and may look different in the real or future implementation.
The output is similar for commands that do not support structured result values but the result property is
then null.
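A script consuming --output json could check the command outcome before using the rest of the payload. A minimal sketch (the field names follow the example above and, as noted there, may differ in a real or future implementation):

```python
import json

def command_succeeded(output_text):
    # --output json prints a single JSON object; exitCode 0 and an empty
    # errorMsg indicate a successful command execution.
    data = json.loads(output_text)
    return data.get("exitCode") == 0 and data.get("errorMsg") is None

raw = '{"command": "start", "pid": 6523, "exitCode": 0, "errorMsg": null, "result": null}'
print(command_succeeded(raw))  # True
```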
Related Information
Prerequisites
1. Download and install the SAP Cloud Platform SDK for Neo environment and set up the console client. See
Set Up the Console Client [page 1412].
2. Open the console client. See Using the Console Client [page 1928].
Note
You may need admin permissions in the cloud cockpit to run some of the commands listed below.
● Local Server: install-local [page 2053]; deploy-local [page 2007]; start-local [page 2137]; stop-local [page 2142]
● Deployment: deploy [page 2002]; start [page 2135]; status [page 2133]
● SAP HANA / SAP ASE: list-application-datasources [page 2054]; list-dbms [page 2066]; list-dbs [page 2067]; list-schemas [page 2082]
● Subaccounts and Entitlements: create-account [page 1958]; delete-account [page 1979]; list-accounts [page 2057]; set-quota [page 2129]
● Custom SSL: add-ca [page 1940]; list-cas [page 2062]; remove-ca [page 2102]; create-ssl-host [page 1974]; delete-ssl-host [page 1992]; list-ssl-hosts [page 2085]; set-ssl-host [page 2130]
● Virtual Machines: create-vm [page 1976]; delete-vm [page 1998]; list-vms [page 2089]; reboot-vm [page 2098]
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
Type: string
Type: string
Type: string
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users and/or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.3.2.3.4.2 add-ca
Uploads a trusted CA certificate and adds it to a certificate authority (CA) bundle. If you don't have a CA bundle
yet, it will be created automatically.
To configure a CA bundle, run set-ssl-host using the --ca-bundle parameter. For more information, see
set-ssl-host [page 2130].
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14].
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--bundle Name of a new or existing bundle to which CAs will be added. You can have several CA bundles, but you can assign only one bundle to one SSL host. One bundle can hold up to 150 certificates.
Type: string
When creating a new bundle, the bundle name must start with a letter and can only contain lowercase letters (a-z), uppercase letters (A-Z), numbers (0-9), underscores (_), and hyphens (-).
-l, --location Path to a file that contains one or more certificates of trusted CAs in PEM format.
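Read as a pattern, the bundle naming rule above can be checked with a simple expression (a sketch based on our reading of the rule, not an official validation routine):

```python
import re

# First character must be a letter; the rest may be letters, digits,
# underscores, or hyphens (per the bundle naming rule described above).
BUNDLE_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_-]*$")

def is_valid_bundle_name(name):
    return BUNDLE_NAME.fullmatch(name) is not None

print(is_valid_bundle_name("my-bundle_01"))  # True
print(is_valid_bundle_name("1bundle"))       # False: must start with a letter
```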
Example
Related Information
5.3.2.3.4.3 add-custom-domain
Use this command to add a custom domain to an application URL. This routes the traffic for the custom domain to your application on SAP Cloud Platform.
Parameters
To list all parameters available for this command, execute neo help add-custom-domain in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-i, --application-url The access point of the application on SAP Cloud Platform default domains (hana.ondemand.com, etc.)
Query strings are not supported in the --application-url parameter and are ignored. For example, if you specify “mysubaccountmyapp.hana.ondemand.com/sites?idp=example”, the “?idp=example” part will be ignored.
Note
For SAP Cloud Platform Integration applications, the application URL is formed differently. For more information, see Configuring Custom Domains for SAP Cloud Platform Integration.
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Optional
--disable-application-url Allows you to disable access to the platform URL, for example hana.ondemand.com, for subscribed applications with a URL of type https://<application_name><provider_subaccount>-<consumer_subaccount>.<domain>. The <domain> is the respective region host, for example, us1.hana.ondemand.com.
Note
It may take up to one hour for this change to take effect.
If you do not explicitly use this parameter, your subscribed application will continue to be accessible via the default URL hana.ondemand.com.
Example
Related Information
5.3.2.3.4.4 add-platform-domain
Adds a platform domain (under hana.ondemand.com) on which the application will be accessed.
Parameters
To list all parameters available for this command, execute neo help add-platform-domain in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The chosen platform domain will be the parent domain in the absolute application domain.
Acceptable values:
● svc.hana.ondemand.com
● cert.hana.ondemand.com
Example
Related Information
5.3.2.3.4.5 bind-db
This command binds an SAP HANA tenant database or SAP ASE user database to a Java application using a
data source.
You can only bind an application to an SAP HANA tenant database or SAP ASE user database if the application
is deployed.
Note
To bind your application to a database that is owned by another subaccount of your global account, you
need permission to use it. See Sharing Databases in the Same Global Account [page 936].
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14]
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: <DEFAULT>
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last characters of the data source name.)
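Read as a pattern, the data source naming rule could be checked like this (a sketch based on our reading of the rule above, not an official validator):

```python
import re

# Letters, digits, and the special characters / _ - @ are allowed; the
# first and last characters must not be special (per the rule above).
DS_NAME = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9/_@-]*[A-Za-z0-9])?$")

def is_valid_data_source_name(name):
    return DS_NAME.fullmatch(name) is not None

print(is_valid_data_source_name("my-ds_01"))  # True
print(is_valid_data_source_name("_myds"))     # False: starts with a special character
```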
Related Information
5.3.2.3.4.6 bind-domain-certificate
Parameters
To list all parameters available for this command, execute neo help bind-domain-certificate in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--certificate Name of the certificate that you set to the SSL host
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
Related Information
5.3.2.3.4.7 bind-hana-dbms
This command binds a Java application to an SAP HANA single-container database system (XS) via a data
source.
You can only bind an application to an SAP HANA single-container database system (XS) if the application is
deployed.
Note
To bind your application to a database that is owned by another subaccount of your global account, see
bind-db [page 1945].
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14]
Note
You cannot use this command in trial accounts.
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the SAP HANA single-container database system (XS)
--db-user Name of the database user used to access the SAP HANA single-container database system (XS)
Optional
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last characters of the data source name.)
Example
Related Information
5.3.2.3.4.8 bind-schema
This command binds a schema to a Java application via a data source. If a data source name is not specified, the schema will be automatically bound to the default data source of the application.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14]
Type: string
--access-token Identifies a schema access grant. The access token and schema ID parameters are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
The application will be able to access the schema via the specified data source.
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last characters of the data source name.)
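As a sketch, binding the schema 'myschema' to the application 'myapp' might look as follows (placeholder values; the schema is bound to the application's default data source unless a data source name is supplied):

```shell
# Bind a schema to a Java application; the password is entered at the prompt.
neo bind-schema --host hana.ondemand.com --account mysubaccount --user myuser \
  --application myapp --id myschema
```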
Related Information
5.3.2.3.4.9 change-domain-certificate
Parameters
To list all parameters available for this command, execute neo help change-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14].
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--certificate Name of the certificate that you set to the SSL host
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
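A sketch using the documented --certificate and --ssl-host parameters, with placeholder host, subaccount, and name values:

```shell
# Swap the certificate of an existing SSL host in one step.
neo change-domain-certificate --host hana.ondemand.com --account mysubaccount \
  --user myuser --certificate mynewcert --ssl-host mysslhostname
```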
The change-domain-certificate command lets you change the domain certificate of a custom domain in
one step instead of executing both the unbind-domain-certificate and bind-domain-certificate
commands.
If your current SAP Cloud Platform SDK version for Neo environment does not support this command, update
your SDK or use the unbind-domain-certificate and bind-domain-certificate commands instead.
Note
The first SAP Cloud Platform SDK versions for the Neo environment to support the change-domain-certificate command are:
For more information, see Update the SAP Cloud Platform SDK for Neo Environment [page 1413].
Related Information
Parameter
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
Optional
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Example
5.3.2.3.4.11 clear-downtime-app
The command deregisters a previously configured downtime page for an application. After you execute the
command, the default HTTP error will be shown to the user in the event of unplanned downtime.
Parameters
To list all parameters available for this command, execute neo help clear-downtime-app in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
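A sketch of deregistering the downtime page for an application, with placeholder host, subaccount, and application values:

```shell
# After this call, the default HTTP error is shown on unplanned downtime.
neo clear-downtime-app --host hana.ondemand.com --account mysubaccount \
  --user myuser --application myapp
```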
Related Information
5.3.2.3.4.12 close-db-tunnel
This command closes one or all database tunnel sessions that have been opened in a background process
using the open-db-tunnel --background command.
A tunnel opened in a background process is automatically closed when the last session using the tunnel is
closed. The background process terminates after the last tunnel has been closed.
Parameters
Required
--all Closes all tunnel sessions that have been opened in the background
--session-id Tunnel session to be closed. Cannot be used together with the parameter --all.
Example
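Sketches of the two documented forms; the session ID below is a placeholder:

```shell
# Close a single background tunnel session...
neo close-db-tunnel --session-id 1

# ...or close all background tunnel sessions at once.
neo close-db-tunnel --all
```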
Related Information
Closes the ssh-tunnel to the specified virtual machine. If no virtual machine ID is specified, closes all tunnels.
or
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL. For acceptable values, see Regions and Hosts Available for the Neo Environment [page 14].
Type: string
● -h, --host
● -u, --user
● -a, --account
● -p, --password
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-r, --port Port on which you want to close the SSH tunnel
Example
or
Related Information
5.3.2.3.4.14 create-account
Creates a new subaccount with an automatically generated unique ID as subaccount technical name and the
specified display name and assigns the user as a subaccount owner. The user is authorized against the existing
subaccount passed as --account parameter. Optionally, you can clone an existing subaccount configuration to
save time and effort.
If you clone an existing extension subaccount, the new subaccount will not be an extension subaccount but
a regular one. The new subaccount will not have the trust and destination settings typical for extension
subaccounts.
Parameters
To list all parameters available for this command, execute neo help create-account in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
If you want to create a subaccount whose display name contains spaces, use quotes when executing the command. For example: neo ... --display-name "Display Name with Intervals"
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: URL. For acceptable values see Regions and Hosts Available for the Neo Environment [page 14]
--clone (Optional) List of settings that will be copied (re-created) from the existing subaccount into the new subaccount. A comma-separated list of values, which are as follows:
● trust
● members
● destinations
● all
Tip
We recommend listing the required cloning options explicitly instead of using --clone all in automated scripts. This ensures backward compatibility in case the set of cloning options covered by all changes in future releases.
Example
all All settings (trust, members, and destinations) from the existing subaccount will be copied into the new one.
Caution
The list of cloned configurations might be extended in
the future.
trust The following trust settings will be re-created in the new subaccount similarly to the relevant settings in the existing subaccount:
Note
SAP Cloud Platform will generate a new pair of key
and certificate on behalf of the new subaccount.
Remember to replace them with your proprietary
key and certificate when using the subaccount for
productive purposes.
Note
If you do not have any trusted Identity Authentication tenants in the existing subaccount, cloning the trust settings will result in trust with SAP ID Service (as default identity provider) in the new subaccount.
members All members with their roles from the existing subaccount
will be copied into the new one.
Example of cloning an existing subaccount to create a new subaccount with the same trust settings and
existing destinations:
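A sketch of such a call with placeholder values, cloning the trust settings and destinations from the existing subaccount into the new one:

```shell
# --account names the existing subaccount used for authorization;
# the new subaccount gets a generated technical name.
neo create-account --host hana.ondemand.com --account myexistingsubaccount \
  --user myuser --display-name "My New Subaccount" --clone trust,destinations
```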
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Default: 50
Type: string
Default: 60
Type: string
-w, --overwrite Should be used only if there is an existing alert that needs to be updated.
Default: false
Type: boolean
Related Information
5.3.2.3.4.16 create-db-ase
This command creates an ASE database with the specified ID and settings on an ASE database system.
Parameters
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional, queried at the command prompt if omitted)
Note
This parameter sets the maximum database size. The minimum database size is 24 MB. You receive an error if you enter a database size that exceeds the quota for this database system.
The size of the transaction log will be at least 25% of the database size you specify.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases. For more information on user database limits, see Creating Databases
[page 921].
Example
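A sketch with placeholder values; the --id and --dbsystem parameter names are assumptions by analogy with the other database commands in this reference:

```shell
# Create an ASE database 'mydb' on the database system 'myasesystem'.
neo create-db-ase --host hana.ondemand.com --account mysubaccount --user myuser \
  --id mydb --dbsystem myasesystem --db-user mydbuser
```

The console client prompts for your password and for the database user's password.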
Related Information
This command creates an SAP HANA database with the specified ID and settings on an SAP HANA database system enabled for multitenant database containers.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Note
To create a tenant database on trial, use -trial- instead of the ID of a productive
HANA database system.
--db-password Password of the SYSTEM user used to access the HANA database (optional, queried at
the command prompt if omitted)
Optional
--dp-server Enables or disables the data processing server of the HANA database: 'enabled', 'disabled' (default).
--script-server Enables or disables the script server of the HANA database: 'enabled', 'disabled' (default).
--web-access Enables or disables access to the HANA database from the Internet: 'enabled' (default),
'disabled'
--xsengine-mode Specifies how the XS engine should run: 'embedded' (default), 'standalone'.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases. For more information on tenant database limits, see Creating
Databases [page 921].
Example
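A sketch with placeholder values; the --dbsystem parameter name is an assumption (on trial, -trial- replaces the productive database system ID, as noted above), while --web-access and --xsengine-mode are as documented:

```shell
# Create a tenant database 'mydb' with web access and an embedded XS engine.
neo create-db-hana --host hana.ondemand.com --account mysubaccount --user myuser \
  --id mydb --dbsystem mydbsystem --web-access enabled --xsengine-mode embedded
```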
Related Information
5.3.2.3.4.18 create-db-user-ase
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional, queried at
the command prompt if omitted)
Example
5.3.2.3.4.19 create-ecm-repository
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Optional
-d, --display-name Can be used to provide a more readable name for the repository. Equals the --name value if left blank. You cannot change the display name later on.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-e, --description Description of the repository. You cannot change the description later on.
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users and/or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
5.3.2.3.4.20 create-jmx-check
Parameters
Note
The JMX check settings support the JMX specification. For more information, see Java Management
Extensions (JMX) Specification .
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
The name must be up to 99 characters long and must not contain the following symbols: ` ~ ! $ % ^ & * | ' " < > ? , ( ) =
Type: string
-O, --object-name Object name of the MBean that you want to call
Type: string
-A, --attribute Name of the attribute inside the class with the specified object name.
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, the JMX check will be on subaccount level for all running applications in the subaccount.
It is needed only if the attribute is a composite data structure. This key defines the item in the composite data structure. For more information about the composite data structure, see Class CompositeDataSupport .
Type: string
-o, --operation Operation that has to be called on the MBean after checking the attribute value.
It is useful for resetting statistical counters to restart an operation on the same MBean.
Type: string
Type: string
The threshold can be a regular expression in case of string values, or compliant with the official Nagios threshold/ranges format. For more information about the format in case it is a number, see the official Nagios documentation .
The threshold can be a regular expression in case of string values, or compliant with the official Nagios threshold/ranges format. For more information about the format in case it is a number, see the official Nagios documentation .
Default: false
Type: boolean
Note
When you use this parameter, a new JMX check is not created when the one you
specify does not exist.
For a typical example of how to configure a JMX check for your application and subscribe recipients to receive notification alerts, see Configure a JMX Check to Monitor Your Application.
The following example creates a JMX check that returns a warning state of the metric if the value is between 10
and 100 bytes, and returns a critical state if the value is greater than 100 bytes. If the value is less than 10
bytes, the returned state is OK.
Related Information
JMX Checks
Monitoring Java Applications
5.3.2.3.4.21 create-schema
This command creates a HANA database or schema with the specified ID on a shared or dedicated database.
Caution
This command is not supported for productive SAP HANA database systems. For more information about
how to create schemas on productive SAP HANA database systems, see Binding SAP HANA Databases to
Java Applications [page 926].
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --dbtype Creates the HANA database or schema on a shared database system. Syntax: 'type:version'. Version is optional.
Type: string
--dbsystem Creates the schema on a dedicated database system. To see the available dedicated
database systems, execute the list-dbms command.
Type: string
Caution
The list-dbms command lists different database types, including productive SAP HANA database systems. Do not use the create-schema command for productive SAP HANA database systems. For more information about how to create schemas on productive SAP HANA database systems, see Binding SAP HANA Databases to Java Applications [page 926].
It must start with a letter and can contain lowercase letters ('a' - 'z') and numbers ('0' - '9'). For schema IDs, uppercase letters ('A' - 'Z') and the special characters '.' and '-' are also allowed.
Note that the actual ID assigned in the database will be different from the one you specify.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
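A sketch of creating a schema on a shared database system using the documented --id and --dbtype parameters; the 'hana' type value below is an assumption, since the available types depend on your landscape:

```shell
# Create the schema 'myschema' on a shared HANA database system.
neo create-schema --host hana.ondemand.com --account mysubaccount --user myuser \
  --id myschema --dbtype hana
```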
Related Information
5.3.2.3.4.22 create-security-rule
This console client command creates a security group rule for a virtual machine.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or
equal to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or
equal to the <from_port> value.
--source-id The name of the system that you want to connect from.
For an SAP HANA system, the --source-id is the SAP HANA database system
name. For a Java application, it is the application name.
Example
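A sketch with placeholder values; --name standing for the target virtual machine is an assumption, while the port and source parameters are as documented above:

```shell
# Allow the Java application 'myapp' to reach port 8080 on the VM 'myvm'.
neo create-security-rule --host hana.ondemand.com --account mysubaccount \
  --user myuser --name myvm --from-port 8080 --to-port 8080 --source-id myapp
```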
Related Information
5.3.2.3.4.23 create-ssl-host
Creates an SSL host for configuration of custom domains. This SSL host will be serving your custom domain.
To list all parameters available for this command, execute neo help create-ssl-host in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-n, --name Unique identifier of the SSL host. If not specified, 'default' value is set.
Type: string
When creating a new SSL host, the SSL host name must start with a letter and can only
contain lowercase letters (a-z), uppercase letters (A-Z), numbers (0-9), underscores
( _ ), and hyphens (-).
Example
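A sketch using the documented --name parameter, with placeholder host and subaccount values:

```shell
# Create the SSL host 'mysslhostname'; the output reports the
# *.ssl.ondemand.com host needed later for the DNS configuration.
neo create-ssl-host --host hana.ondemand.com --account mysubaccount \
  --user myuser --name mysslhostname
```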
Note
In the command output, you get the SSL host. For example, "A new SSL host [mysslhostname] was
created and is now accessible on host [123456.ssl.ondemand.com]". Write down the
123456.ssl.ondemand.com host as you will later need it for the DNS configuration.
Related Information
5.3.2.3.4.24 create-vm
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: off
Note
The passphrase should contain at least five characters in total: lowercase letters (a-z), uppercase letters (A-Z), numbers (0-9), and special characters. For production scenarios, make sure to protect your key pair with a passphrase that consists of at least fifteen characters.
If you do not provide -pkp as a parameter in the command line, you will be prompted
to enter a passphrase.
If you do not enter a passphrase, the command will be executed but the private key will
not be encrypted.
-l, --ssh-key-location The path to a public key or certificate that will be uploaded and used to log in to the newly created virtual machine.
Type: string
-k, --ssh-key-name The name of an already existing public key to be used to log in to the newly created virtual machine.
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore
(_), and hyphen (-). The allowed name length is between 1 and 128 characters.
-v, --volume-id Unique identifier of the volume from which the virtual machine will be created.
Type: string
Condition: Use when you want to create a new virtual machine from a volume.
Type: string
Condition: Use when you want to create a new virtual machine from a volume snapshot.
Default: off
Related Information
5.3.2.3.4.25 create-volume-snapshot
Takes a snapshot of the file system of the specified virtual machine volume. The operation is asynchronous.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore
(_), and hyphen (-). The allowed name length is between 1 and 128 characters.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --volume-id Unique identifier of the volume from which the snapshot will be taken
Type: string
Example
Related Information
5.3.2.3.4.26 delete-account
Deletes a particular subaccount. Only the user who has created the subaccount is allowed to delete it.
Note
You cannot delete a subaccount if it still has associated services, subscriptions, non-shared database
systems, database schemas, deployed applications, HTML5 applications, or document service
repositories. You need to disable services and delete the other subaccount entities before you proceed with
the subaccount deletion. For information on how to disable services and delete subaccount entities, see
Related Information. Make sure also that there are no running virtual machines in the subaccount.
Parameters
To list all parameters available for this command, execute neo help delete-account in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.27 delete-availability-check
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
Related Information
5.3.2.3.4.28 delete-db-ase
This command deletes the ASE database with the specified ID.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--force or -f Forcefully deletes the ASE database, including all application bindings
Example
Related Information
5.3.2.3.4.29 delete-db-hana
This command deletes the SAP HANA database with the specified ID on a SAP HANA database system
enabled for multitenant database container support.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--force or -f Forcefully deletes the HANA database, including all application bindings
Example
5.3.2.3.4.30 delete-db-user-ase
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
5.3.2.3.4.31 delete-destination
This command deletes destination configuration properties files and JDK files. You can delete them on
subaccount, application or subscribed application level.
To list all parameters available for this command, execute neo help delete-destination in the command
line.
Required
-a, --account Your subaccount. The subaccount for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you delete a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Examples
5.3.2.3.4.32 delete-ecm-repository
This command deletes a repository including the data of any tenants in the repository, unless you restrict the
command to a specific tenant.
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data
cannot be recovered.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Deletes the repository for the given tenant only instead of for all tenants. If no tenant
name is provided, the repositories for all tenants are deleted.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.3.2.3.4.33 delete-domain-certificate
Deletes a certificate.
Note
Cannot be undone. If the certificate is mapped to an SSL host, the certificate will be removed from the SSL
host too.
Parameters
To list all parameters available for this command, execute neo help delete-domain-certificate in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate that you set to the SSL host
Example
Related Information
5.3.2.3.4.34 delete-hanaxs-certificates
This command deletes certificates that contain a specified string in the Subject CN.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
After executing this command, you need to restart the SAP HANA XS services for it to take effect. See restart-hana [page 2109].
Parameters
To list all parameters available for this command, execute neo help delete-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-cn-string, --contained-string A part of the certificate CN. All certificates that contain this string will be deleted.
Default: none
Example
To delete all certificates containing John Doe in their Subject DN, execute:
or
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-n, --name or -A, --all Name of the JMX check to be deleted, or, with --all, all JMX checks configured for the given subaccount and application are deleted.
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
JMX Checks
Monitoring Java Applications
Deletes a solution resource file from the system repository of a specified extension subaccount.
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about
the beta features, see Using Beta Features in Subaccounts.
Parameters
To list all parameters available for this command, execute neo help delete-resource in the command line.
Required
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
To delete a solution resource from the system repository for your extension subaccount, execute:
5.3.2.3.4.37 delete-ssl-host
Parameters
To list all parameters available for this command, execute neo help delete-ssl-host in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
5.3.2.3.4.38 delete-keystore
This command is used to delete a keystore by deleting the keystore file. You can delete keystores on
subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help delete-keystore in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
On Subscription Level
On Application Level
On Subaccount Level
Related Information
5.3.2.3.4.39 delete-mta
This command deletes Multitarget Application (MTA) archives that are deployed to your subaccount.
Parameters
To list all parameters available for this command, execute neo help delete-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Optional
-y, --synchronous Instructs the console to wait for the operation to finish. It takes no value.
Example
To delete MTA archives with IDs <MTA_ID1> and <MTA_ID2> that have been deployed to your subaccount,
execute:
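A sketch of such a call; the comma-separated --id list is an assumption, and the MTA IDs stay as the placeholders used above:

```shell
# Delete two deployed MTA archives and wait for the operation to finish.
neo delete-mta --host hana.ondemand.com --account mysubaccount --user myuser \
  --id <MTA_ID1>,<MTA_ID2> --synchronous
```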
5.3.2.3.4.40 delete-schema
This command deletes the specified schema, including all data it contains. A schema cannot be deleted if it is
still bound to an application. To enforce the deletion, use the force parameter but bear in mind that this will also
delete all bindings that still exist.
Schema backups are kept for 14 days and may be used to restore mistakenly deleted data (available by special
request only).
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-f, --force Forcefully deletes the schema, including all application bindings
Default: off
Default: off
Example
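A sketch of forcefully deleting a schema together with its remaining application bindings (placeholder values):

```shell
# --force also removes all bindings that still exist; use with care.
neo delete-schema --host hana.ondemand.com --account mysubaccount --user myuser \
  --id myschema --force
```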
Related Information
5.3.2.3.4.41 delete-security-rule
This console client command deletes a security group rule configured for a virtual machine.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or
equal to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or
equal to the <from_port> value.
--source-id The name of the system that you want to connect from.
For an SAP HANA system, the --source-id is the SAP HANA database system name. For a Java application, it is the application name.
Example
5.3.2.3.4.42 delete-vm
Note
By default, deleting a virtual machine doesn't delete its volume and volume snapshots. This gives you the option to create a new virtual machine from the remaining volume and volume snapshots, and ensures that you do not lose any data that was installed on the file system. For more information, see Manage Volumes and Manage Volume Snapshots.
However, if you want to delete the virtual machine along with its volume and volume snapshots, you can use the
--delete-volume and --delete-volume-snapshots parameters.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Optional
-s, --delete-volume-snapshots Deletes all volume snapshots referenced by the virtual machine.
-f, --force You won't be asked to confirm the deletion of the virtual machine.
Default: off
Example
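The following sketch shows possible invocations with placeholder values; the --name parameter for the virtual machine is an assumption, while --delete-volume, --delete-volume-snapshots, and --force are documented above:

```shell
# Delete a virtual machine, keeping its volume and snapshots (default behavior)
neo delete-vm --account mysubaccount --host hana.ondemand.com \
    --user myuser --name myvm

# Delete the virtual machine together with its volume and all volume snapshots,
# skipping the confirmation prompt
neo delete-vm -a mysubaccount -h hana.ondemand.com -u myuser --name myvm \
    --delete-volume --delete-volume-snapshots --force
```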
5.3.2.3.4.43 delete-volume
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --id Unique identifier of the volume that you want to delete
Type: string
Type: string
Optional
-f, --force You won't be asked to confirm the deletion of the virtual machine volume.
Example
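A sketch of a possible invocation with placeholder values, using the -v, --id and -f, --force parameters documented above:

```shell
# Delete a volume by its unique identifier, without a confirmation prompt
neo delete-volume --account mysubaccount --host hana.ondemand.com \
    --user myuser --id myvolume --force
```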
5.3.2.3.4.44 delete-volume-snapshot
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --id Unique identifier of the volume snapshot that you want to delete
Type: string
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore
(_), and hyphen (-). The allowed name length is between 1 and 128 characters.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-f, --force You won't be asked to confirm the deletion of the virtual machine volume snapshot.
Example
Related Information
Deploying an application publishes it to SAP Cloud Platform. Use the optional parameters to make some
specific configurations of the deployed application.
If you use enhanced disaster recovery, the application is deployed first on the specified region and then on the
disaster recovery region.
Parameters
To list all parameters available for this command, execute neo help deploy in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations, pointing to WAR files, or folders containing
them
Note
The size of an application can be up to 1.5 GB. If the application is packaged as a
WAR file, the size of the unzipped content is taken into account.
If you want to deploy more than one application on the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
To deploy an application in more than one region, execute the deploy separately for
each region host.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Command-specific parameters
Default: 2
Type: integer
--delta Deploys only the changes between the provided source and the deployed content. New
content will be added; missing content will be deleted. Recommended for development
use to speed up the deployment.
Note
The deployment to the disaster recovery region is not supported with this parameter.
--ev Environment variables for configuring the environment in which the application runs.
Note
For security reasons, do not specify any confidential information in plain text format, such as usernames and passwords. You can either encrypt such data, or store it in a secure manner. For more information, see Keystore Service [page 2459].
Sets one environment variable by removing the previously set value; can be used multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the –ev parameter is ignored.
Default: depends on the SAP Cloud Platform SDK for Neo environment
-m, --minimum-processes Minimum number of application processes on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes on which the application can be started
Default: 1
System properties (-D<name>=<value>), separated by spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters:
-Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary and note that this may impact the application performance or its ability to start.
Use the parameter if you want to choose an application runtime container different from the one coming with your SDK. To view all available runtime containers, use list-runtimes [page 2080].
--runtime-version SAP Cloud Platform runtime version on which the application will be started and will
run on the same version after a restart. Otherwise, by default, the application is started
on the latest minor version (of the same major version) which is backward compatible
and includes the latest corrections (including security patches), enhancements, and
updates. Note that choosing this option does not affect already started application
processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan updating
to a new version regularly.
For more information, see Choose Application Runtime Version [page 2173].
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page 2175].
--compressible-mime-type A comma-separated list of MIME types for which compression is used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after
accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
Example
Here are examples of some additional configurations. If your application is already started, stop it and start it
again for the changes to take effect.
You can deploy an application on a host different from the default one by specifying the host parameter. For
example, to use the region (host) located in the United States, execute:
To specify the compute unit size on which you want the application to run, use the --size parameter with one
of the following values:
Available sizes depend on your subaccount type and what options you have purchased. For trial accounts, only
the Lite edition is available.
For example, if you have an enterprise account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing the following command:
When deploying an application, name the WAR file with the desired context root.
For example, if you want to deploy your WAR in context root "/hello" then rename your WAR to hello.war.
If you want to deploy it in the "/" context root then rename your WAR to ROOT.war.
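The configurations described above can be sketched as follows. All names are placeholders; us1.hana.ondemand.com is assumed as the US East region host, and the exact --size value token depends on the editions available to your account:

```shell
# Deploy on a specific region host (United States, US East)
neo deploy --account mysubaccount --application myapp --user myuser \
    --host us1.hana.ondemand.com --source example.war

# Deploy on a premium compute unit size (value token assumed; check neo help deploy)
neo deploy -a mysubaccount -b myapp -u myuser -h us1.hana.ondemand.com \
    -s example.war --size premium

# Deploy under the "/hello" context root by naming the WAR file accordingly
neo deploy -a mysubaccount -b myapp -u myuser -h us1.hana.ondemand.com -s hello.war
```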
Related Information
5.3.2.3.4.46 deploy-local
Parameters
Required
-s, --source Source for deployment (a comma-separated list of WAR files or folders containing one or more WAR files)
Optional
Related Information
5.3.2.3.4.47 deploy-mta
This command deploys Multitarget Application (MTA) archives. You can deploy one or more MTA archives to your subaccount in one go.
Parameters
To list all parameters available for this command, execute neo help deploy-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
-s, --source A comma-separated list of file locations, pointing to the MTA archive files, or the folders
containing them.
Optional
Command-specific parameters
-y, --synchronous Triggers the deployment and waits until the deployment operation finishes. The command without the --synchronous parameter triggers deployment and exits immediately without waiting for the operation to finish. Takes no value.
-e, --extensions Defines one or more extensions to the deployment descriptor. A comma-separated list
of file locations, pointing to the extension descriptor files, or the folders containing
them. For more information, see Defining MTA Extension Descriptors [page 1276].
--mode Defines whether the deployment method is a standard deployment, or provider deployment. The available values are import (default value), or providerImport.
Example
You can deploy an MTA archive on a host different from the default one by specifying the host parameter. For
example, to use the region (host) located in the United States, execute:
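A sketch of a possible invocation with placeholder values; us1.hana.ondemand.com is assumed as the US East region host, and --source and --synchronous are documented above:

```shell
# Deploy an MTA archive to the US East region host and wait for the operation to finish
neo deploy-mta --account mysubaccount --user myuser --host us1.hana.ondemand.com \
    --source example.mtar --synchronous
```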
Related Information
5.3.2.3.4.48 disable
This command stops the creation of new connections to an application or application process, but keeps the
already running sessions alive. You can check if an application or application process has been disabled by
executing the status command.
Parameters
To list all parameters available for this command, execute neo help disable in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-i, --application-process-id Unique ID of a single application process. Use it to disable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify subaccount and application parameters. You can list the application process ID by using the <status> command.
Default: none
Example
To disable a single application process, first identify the application process you want to disable by executing neo status:
From the generated list of application process IDs, copy the ID you need and execute neo disable for it:
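The two steps above can be sketched as follows, with placeholder values for the subaccount, application, user, host, and process ID:

```shell
# 1. List the application processes and their IDs
neo status --account mysubaccount --application myapp \
    --user myuser --host hana.ondemand.com

# 2. Disable a single process by its unique ID (no subaccount/application needed)
neo disable --user myuser --host hana.ondemand.com \
    --application-process-id myprocessid
```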
5.3.2.3.4.49 display-application-properties
The command displays the set of properties of a deployed application, such as runtime version, minimum and
maximum processes, Java version.
Parameters
To list all parameters available for this command, execute the neo help display-application-
properties in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
5.3.2.3.4.50 display-csr
Parameters
To list all parameters available for this command, execute neo help display-csr in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-f, --file name Name of the local file where the CSR is stored
Example
Related Information
5.3.2.3.4.51 display-ecm-repository
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.3.2.3.4.52 display-db-info
This command displays detailed information about the selected database. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.3.2.3.4.53 display-mta
This command displays a Multitarget Application (MTA) archive that is deployed to your subaccount.
Parameters
To list all parameters available for this command, execute neo help display-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not ex
plicitly as a parameter in a properties file or the command line.
Example
To display an MTA archive with an ID <MTA_ID> that has been deployed to your subaccount, execute:
5.3.2.3.4.54 display-schema-info
This command displays detailed information about the selected schema. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.3.2.3.4.55 display-volume-snapshot
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore
(_), and hyphen (-). The allowed name length is between 1 and 128 characters.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.3.2.3.4.56 download-keystore
This command is used to download a keystore by downloading the keystore file. You can download keystores on
subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help download-keystore in the command
line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-l, --location Local directory where the keystore will be saved. If it is not specified, the current directory is used.
Type: string
-w, --overwrite Overwrites a file with the same name if such already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked if you want to overwrite the file.
Example
On Subscription Level
On Application Level
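A sketch of a possible invocation with placeholder values; the --name parameter for the keystore file is an assumption, while -l, --location and -w, --overwrite are documented above:

```shell
# Download a keystore file into a local directory,
# overwriting an existing file with the same name
neo download-keystore --account mysubaccount --host hana.ondemand.com \
    --user myuser --name example.jks --location /tmp/keystores --overwrite
```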
Related Information
5.3.2.3.4.57 edit-ecm-repository
Changes the name, key, or virus scan settings of a repository. You cannot change the display name or the
description.
Note
With this command, you can also change your current repository key to a different one. If you forgot your
current key, request a new one using the reset-ecm-repository command.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Optional
Caution
If not used, the virus scan setting of the whole repository changes.
Type: string
Type: string
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.58 enable
This command enables new connection requests to a disabled application or application process. The enable
command cannot be used for an application that is in maintenance mode.
Parameters
To list all parameters available for this command, execute neo help enable in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-i, --application-process-id Unique ID of a single application process. Use it to enable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify subaccount and application parameters. You can list the application process ID by using the <status> command.
Default: none
Example
To enable a single application process, first identify the application process you want to enable by executing neo status:
From the generated list of application process IDs, copy the ID you need and execute neo enable for it:
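The two steps above can be sketched as follows, with placeholder values for the subaccount, application, user, host, and process ID:

```shell
# 1. Identify the application process ID
neo status --account mysubaccount --application myapp \
    --user myuser --host hana.ondemand.com

# 2. Enable the single process by its unique ID
neo enable --user myuser --host hana.ondemand.com \
    --application-process-id myprocessid
```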
Related Information
5.3.2.3.4.59 get-destination
This command downloads (reads) destination configuration properties files and destination certificate files.
You can download them on subaccount, application or subscribed application level.
To list all parameters available for this command, execute neo help get-destination in the command line.
Required
-a, --account Your subaccount. The subaccount for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you download a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--localpath The path on your local file system where a destination or a JKS file will be downloaded.
If not set, no files will be downloaded.
Type: string
--name The name of the destination or JKS file to be downloaded. If not set, the names of all
destination or JKS files for the service will be listed.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Note
If you download a destination configuration file that contains a password field, the password value will not be visible. Instead, after Password =..., you will only see an empty space. You must obtain the password in another way.
Type: string
Related Information
5.3.2.3.4.60 generate-csr
Parameters
To list all parameters available for this command, execute neo help generate-csr in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
When generating a CSR, the certificate name must start with a letter and can only contain lowercase letters (a-z), uppercase letters (A-Z), numbers (0-9), underscores (_), and hyphens (-).
Allowed attributes:
Optional
-s, --subject-alternative-name A comma-separated list of all domain names to be protected with this certificate, used as value for the Subject Alternative Name field of the generated certificate.
Type: string
Example
Related Information
5.3.2.3.4.61 get-log
Parameters
To list all parameters available for this command, execute neo help get-log in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --directory Local folder location under which the file will be downloaded. If the directory you have
specified does not exist, it will be created.
Type: string
Type: string
Note
To find out the name of the log file to download, use the list-logs command to
see the available log files of your application. For more information, see list-logs
[page 2076].
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-w, --overwrite Overwrites a file with the same name if such already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked if you want to overwrite the file.
Default: true
Type: boolean
Example
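A sketch of a possible invocation with placeholder values; the log file name parameter (--file) is an assumption, while -d, --directory is documented above and list-logs is referenced in the Note:

```shell
# List the available log files of the application
neo list-logs --account mysubaccount --application myapp \
    --user myuser --host hana.ondemand.com

# Download one log file into a local directory (created if it does not exist)
neo get-log -a mysubaccount -b myapp -u myuser -h hana.ondemand.com \
    --directory /tmp/logs --file mylogfile.log
```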
5.3.2.3.4.62 grant-db-access
This command gives another subaccount permission to access a database. The subaccount providing the
permission and the subaccount receiving the permission must be part of the same global account.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Optional
-to-account The subaccount to receive access permission. The subaccount providing the permission and the subaccount receiving the permission must be part of the same global account.
-permissions Comma-separated list of access permissions to the database. Acceptable values: 'TUNNEL', 'BINDING'.
Related Information
5.3.2.3.4.63 grant-db-tunnel-access
This command generates a token, which allows the members of another subaccount to access a database
using a database tunnel.
Parameters
Required
Type: string
The subaccount to be granted database tunnel access, based on the access token
Type: string
Example
5.3.2.3.4.64 grant-schema-access
This command gives an application in another subaccount access to a schema based on a one-time access
token. The access token is used to bind the schema to the application.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
5.3.2.3.4.65 hcmcloud-create-connection
Use this command to configure the connectivity of an extension application to an SAP SuccessFactors system
associated with a specified SAP Cloud Platform subaccount, or to configure the connectivity of a specified SAP
Cloud Platform subaccount to an SAP SuccessFactors system associated with this subaccount. The command
creates the required HTTP destination and registers an OAuth client for the extension application in SAP
SuccessFactors.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
● manageDestinations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-create-connection in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-w, --overwrite If a connection with the same name already exists, overwrites it. If you do not explicitly specify the --overwrite parameter and a connection with the same name already exists, the command fails to execute.
Type: string
-b, --application The name of the extension application for which you are creating the connection.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Default:
Condition: If you have not specified the name parameter, the default value <sap_hcmcloud_core_odata> is assumed for connections without a technical user, and the default value <sap_hcmcloud_core_odata_technical_user> is assumed for connections with a technical user.
Note
If the connection is on a subaccount level, the name is required and must be different from <sap_hcmcloud_core_odata> and <sap_hcmcloud_core_odata_technical_user>.
Type: string (up to 200 characters; uppercase and lowercase letters, numbers, and the special characters en dash (-) and underscore (_))
Example
To configure a connection of type OData with technical user and with a specific name for an extension
application in a subaccount located in the United States (US East) region, execute:
Result
After executing this command without specifying a name for the connection, you have one of the following
destinations in your subaccount:
● sap_hcmcloud_core_odata
● sap_hcmcloud_core_odata_technical_user
After executing the command with a specific name for the connection, the required destination is created in
your subaccount.
You can consume this destination in your application using one of these APIs:
5.3.2.3.4.66 hcmcloud-delete-connection
This command removes the specified connection configured between an extension application and an SAP SuccessFactors system associated with the specified SAP Cloud Platform subaccount, or between a specified SAP Cloud Platform subaccount and the SAP SuccessFactors system associated with it.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
● manageDestinations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-delete-connection in the
command line.
Required
Type: string (up to 200 characters; uppercase and lowercase letters, numbers, and the special characters en dash (-) and underscore (_))
Type: string (up to 200 characters; uppercase and lowercase letters, numbers, and the special characters en dash (-) and underscore (_))
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
To delete an OData connection for an extension application running in an extension subaccount in the US East
region, execute:
5.3.2.3.4.67 hcmcloud-disable-application-access
This command removes an extension application from the list of authorized assertion consumer services for
the SAP SuccessFactors system associated with the specified subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
To list all parameters available for this command, execute neo help hcmcloud-disable-application-
access in the command line.
Required
-b, --application The name of the extension application for which you are deleting the connection.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are deleting the connection
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To remove a Java extension application from the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with a subaccount located in the United States (US East), execute:
5.3.2.3.4.68 hcmcloud-display-application-access-status
This command displays the status of an extension application entry in the list of assertion consumer services
for the SAP SuccessFactors system associated with the specified subaccount. The returned results contain the
extension application URL.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-display-application-
access-status in the command line.
Required
-b, --application The name of the extension application for which you are displaying the status in the list of assertion consumer services. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To display the status of an application entry in the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with a subaccount in the region located in the United States (US East),
execute:
5.3.2.3.4.69 hcmcloud-enable-application-access
This command registers an extension application as an authorized assertion consumer service for the SAP
SuccessFactors system associated with the specified subaccount to enable the application to use the SAP
SuccessFactors identity provider (IdP) for authentication.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-application-
access in the command line.
Required
-b, --application The name of the extension application for which you are creating the connection.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a let
ter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
To register an extension application as an authorized assertion consumer service for the SAP SuccessFactors
system associated with a subaccount located in the United States (US East) region, execute:
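For example (placeholder values; the US East host is assumed to be us1.hana.ondemand.com):
neo hcmcloud-enable-application-access -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser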
The command creates an entry for the application in the list of the authorized service provider assertion consumer services for the SAP SuccessFactors system associated with the specified subaccount. The entry contains the main URL of the extension application, the service provider audience URL, and the service provider logout URL. If an entry for the given extension application already exists, it is overwritten.
5.3.2.3.4.70 hcmcloud-enable-role-provider
This command enables the SAP SuccessFactors role provider for the specified Java application.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
● readDestinations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-role-provider in
the command line.
-b, --application The name of the extension application for which you are enabling the role provider.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
--connection-name The name of the destination for connecting to the SAP SuccessFactors system OData API.
Default: <sap_hcmcloud_core_odata>
Type: string (up to 200 characters; uppercase and lowercase letters, numbers, and the special characters hyphen (-) and underscore (_))
Example
To enable the SAP SuccessFactors role provider for your Java application in an extension subaccount located in
the United States (US East) region, execute:
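For example (placeholder values; the US East host is assumed to be us1.hana.ondemand.com):
neo hcmcloud-enable-role-provider -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser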
5.3.2.3.4.71 hcmcloud-get-registered-home-page-tiles
This command lists the SAP SuccessFactors Employee Central (EC) home page tiles registered in the SAP SuccessFactors company instance associated with the extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-get-registered-home-
page-tiles in the command line.
Required
-b, --application The name of the extension application for which you are listing the home page tiles.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If you do not specify the application parameter, the command lists all tiles registered in the SAP SuccessFactors company instance associated with the specified extension subaccount.
--application-type The type of the extension application for which you are listing the home page tiles
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
To list the home page tiles registered for a Java extension application running in your subaccount in the US East
region, execute:
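A sketch with placeholder values (us1.hana.ondemand.com is assumed to be the US East host):
neo hcmcloud-get-registered-home-page-tiles -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser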
There is no lifecycle dependency between the tiles and the application, so the tiles can remain registered even if the application is stopped or no longer deployed.
5.3.2.3.4.72 hcmcloud-import-roles
This command imports SAP SuccessFactors HCM suite roles into the SAP SuccessFactors customer instance linked to an extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-import-roles in the
command line.
Required
Type: string
Note
The file size must not exceed 500 KB.
Type: string
Type: string
Example
To import the role definitions for an extension application from the system repository for your extension
subaccount into the SuccessFactors customer instance connected to this subaccount, execute:
If any of the roles that you are importing already exists in the target system, the command fails to execute.
Related Information
5.3.2.3.4.73 hcmcloud-list-connections
This command lists the connections configured for a specified extension application or for a specified
subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● readDestinations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
To list all parameters available for this command, execute neo help hcmcloud-list-connections in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
To list the connections for an extension application running in an extension subaccount in the US East region,
execute:
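For example (placeholder values; the US East host is assumed to be us1.hana.ondemand.com):
neo hcmcloud-list-connections -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser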
This command registers the SAP SuccessFactors Employee Central (EC) home page tiles in the SAP
SuccessFactors company instance associated with the extension subaccount. The home page tiles must be
described in a tile descriptor file for the extension application in JSON format.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-register-home-page-
tiles in the command line.
Required
Type: string
Note
The file size must not exceed 100 KB.
-b, --application The name of the extension application for which you are registering the home page tiles.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are registering the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
To register a home page tile for a Java extension application running in your subaccount in the US East region, execute:
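A sketch with placeholder values (the tile descriptor file is supplied through the command's file parameter; run neo help hcmcloud-register-home-page-tiles for its exact name):
neo hcmcloud-register-home-page-tiles -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser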
5.3.2.3.4.75 hcmcloud-unregister-home-page-tiles
This command removes the SAP SuccessFactors EC home page tiles registered for the extension application in
the SAP SuccessFactors company instance associated with the specified extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom
platform role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-unregister-home-page-
tiles in the command line.
-b, --application The name of the extension application for which you are removing the home page tiles.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You must use the same application name that you have specified when registering
the tiles.
--application-type The type of the extension application for which you are removing the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
To remove the home page tiles registered for a Java extension application running in your subaccount in the US East region, execute:
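For example (placeholder values; the US East host is assumed to be us1.hana.ondemand.com):
neo hcmcloud-unregister-home-page-tiles -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser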
5.3.2.3.4.76 hot-update
The hot-update command enables a developer to redeploy and update the binaries of an application started on one process, faster than a normal deploy and restart. Use it to apply and activate your changes during development, not for updating productive applications.
There are three options for hot-update specified with the --strategy parameter:
Limitations:
Parameters
To list all parameters available for this command, execute neo help hot-update in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-s, --source A comma-separated list of file locations, pointing to WAR files or folders that contain them.
--strategy The update strategy to use.
Acceptable values:
● replace-binaries
● restart-runtime
● reprovision-runtime
Optional
Default: 2
Type: integer
--delta Uploads only the changes between the provided source and the deployed content. New
content will be added; missing content will be deleted. Recommended for development
use to speed up the deployment.
Example
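A sketch with placeholder values, using the replace-binaries strategy to upload a new WAR file (the host and names are assumptions):
neo hot-update -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser -s example.war --strategy replace-binaries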
5.3.2.3.4.77 install-local
This command installs a server runtime in a local folder, by default <SDK installation folder>/server.
neo install-local
Optional
Default: 8009
Default: 8080
Default: 8443
Default: 1717
Related Information
5.3.2.3.4.78 list-application-datasources
This command lists all schemas and productive database instances bound to an application.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-R, --recursively Lists availability checks recursively starting from the specified level. For example, if only
'account' is passed as an argument, it starts from the subaccount level and then lists all
checks configured on application level.
Default: false
Type: boolean
Example
Example for listing availability checks recursively starting on subaccount level and listing the checks configured
for Java and SAP HANA XS applications:
Related Information
5.3.2.3.4.80 list-accounts
Lists all subaccounts that a customer has. Authorization is performed against the subaccount passed as the --account parameter.
Parameters
To list all parameters available for this command, execute neo help list-accounts in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.3.2.3.4.81 list-alert-recipients
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-R, --recursively Lists alert recipients recursively, starting from the specified level. For example, if only 'subaccount' is passed as an argument, it starts from the subaccount level and then lists all recipients configured on application level.
Default: false
Type: boolean
Example
Sample output:
application : demo1
[email protected]
application : demo2
[email protected], [email protected]
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
Parameters
To list all parameters available for this command, execute neo help list-application-domains in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Example
Related Information
Lists trusted CA certificates in a bundle or bundles that are set to an SSL host or hosts.
Note
The command lists all bundles in a subaccount only if you use the --all parameter.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
--all Lists the names of all bundles in the subaccount. Takes no value.
Type: string
Default: The CA certificates are saved in the current folder in a file named after the
CA bundle.
Note
If a file with the same name already exists in the specified directory, you will be
asked if you want to overwrite the file.
Example
Related Information
5.3.2.3.4.85 list-custom-domain-mappings
To list all parameters available for this command, execute neo help list-custom-domain-mappings in
the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.3.2.3.4.86 list-db-access-permissions
This command lists the permissions that other subaccounts have for accessing databases in the specified
subaccount.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Optional
-i, --id Specify a database to view the permissions only for that database.
--to-account Specify a subaccount to view the permissions only for that subaccount.
--permissions Filter the result by permission. Acceptable values: a comma-separated list of 'TUNNEL' and 'BINDING'.
Example
Related Information
This command lists the dedicated and shared database management systems available for the specified
subaccount with the following details: database system (for dedicated databases), database type, and
database version.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--verbose Displays additional information about each database: database type and database version
Default: off
Example
5.3.2.3.4.89 list-domain-certificates
To list all parameters available for this command, execute neo help list-domain-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.3.2.3.4.90 list-db-tunnel-access-grants
This command lists all current database access permissions for databases in other subaccounts.
Note
The list does not include access permissions that have been revoked.
Optional
Type: string
Example
The table below shows the currently active database tunnel access permissions:
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
ExampleRepository
Display name : Example Repository
Description : This is an example repository with Virus Scan enabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : on
ExampleRepositoryNoVS
Display name : Example Repository without Virus Scan
Description : This is an example repository with Virus Scan disabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : off
Number of Repositories: 2
5.3.2.3.4.92 list-hanaxs-certificates
This command lists identity provider certificates available to productive SAP HANA instances. Optionally, you can include a part of the certificate <Subject CN> as a filter.
Note
Use this command for SAP HANA version SPS09 or lower SPs only.
Parameters
To list all parameters available for this command, execute neo help list-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-cn-string, --contained-string A part of the certificate CN. If more than one certificate contains this string, all of them are listed.
Default: none
To list all identity provider certificates that contain <John Smith> in their <Subject CN>, execute:
5.3.2.3.4.93 list-jmx-checks
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, all JMX checks used for this subaccount will be listed.
-R, --recursively Lists JMX checks recursively, starting from the specified level. For example, if only 'subaccount' is passed as an argument, it starts from the subaccount level and then lists all checks configured on application level.
Default: false
Type: boolean
Sample output:
application : demo
check-name : JVM Heap Memory Used
object-name : java.lang:type=Memory
attribute : HeapMemoryUsage
attribute key : used
warning : 600000000
critical : 850000000
unit : B
Related Information
JMX Checks
Monitoring Java Applications
5.3.2.3.4.94 list-keystores
This command is used to list the available keystores. You can list keystores on subaccount, application, and
subscription levels.
Parameters
To list all parameters available for this command, execute neo help list-keystores in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
On Subscription Level
On Application Level
On Subaccount Level
Related Information
5.3.2.3.4.95 list-loggers
This command lists all available loggers with their log levels for your application.
Parameters
To list all parameters available for this command, execute neo help list-loggers in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
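For example (placeholder values; the host is an assumption):
neo list-loggers -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser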
5.3.2.3.4.96 list-logs
This command lists all log files of your application sorted by date in a table format, starting with the latest
modified.
Parameters
To list all parameters available for this command, execute neo help list-logs in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
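For example (placeholder values; the host is an assumption):
neo list-logs -a mysubaccount -b myapp -h us1.hana.ondemand.com -u myuser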
5.3.2.3.4.97 list-mtas
This command lists the Multitarget Application (MTA) archives that are deployed to your subaccount or
provided by another subaccount.
Parameters
To list all parameters available for this command, execute neo help list-mtas in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Optional
Command-specific parameters
--available-for-subscription If you use this parameter, the command will list only the MTAs that are available for subscription to the corresponding subaccount. The MTAs that are deployed by the subaccount will not be listed.
Example
To check the MTAs that are available for subscription to a given subaccount, execute:
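A sketch with placeholder values (the host is an assumption):
neo list-mtas -a mysubaccount -h us1.hana.ondemand.com -u myuser --available-for-subscription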
5.3.2.3.4.98 list-mta-operations
This command shows the status of the MTA operation with a given ID.
Parameters
To list all parameters available for this command, execute neo help list-mta-operations in the
command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Note
This parameter is optional. If you do not use this parameter, all operations that
have not been cleaned up within the last 24 hours will be listed.
Example
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Optional
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Example
5.3.2.3.4.100 list-runtimes
Parameters
To list all parameters available for this command, execute neo help list-runtimes in the command line.
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.101 list-runtime-versions
The command displays the supported application runtime container versions for your SAP Cloud Platform SDK for Neo environment. Only recommended versions are shown by default. You can also list the supported versions for a particular runtime container.
Parameters
To list all parameters available for this command, execute neo help list-runtime-versions in the
command line.
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Optional
--all Lists all supported application runtime container versions. Using a previously released
runtime version is not recommended.
--runtime Lists supported version only for the specified runtime container.
Example
Related Information
5.3.2.3.4.102 list-schemas
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--verbose Displays additional information about each schema: database type and database version
Default: off
Example
5.3.2.3.4.103 list-schema-access-grants
This command lists all current schema access grants for a specified subaccount.
Note that the list does not include grants that have been revoked.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
Example
5.3.2.3.4.104 list-security-rules
This console client command lists the security group rules configured for a virtual machine.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
As an output of the list-security-rules command, you may receive the HANA or JAVA source types
previously created with the create-security-rule command, or an internally managed security group rule
of type CIDR for a registered access point. The security group rule of type CIDR allows communication between
the load balancer of the SAP Cloud Platform and the virtual machine.
Related Information
5.3.2.3.4.105 list-ssh-tunnels
neo list-ssh-tunnels
Related Information
5.3.2.3.4.106 list-ssl-hosts
Parameters
To list all parameters available for this command, execute neo help list-ssl-hosts in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.3.2.3.4.107 list-subscribed-accounts
Parameters
To list all parameters available for this command, execute neo help list-subscribed-accounts in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the provider subaccount.
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.108 list-subscribed-applications
Parameters
To list all parameters available for this command, execute neo help list-subscribed-applications in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the subaccount.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.109 list-vms
Lists all virtual machines in the specified subaccount. You can get information for a concrete virtual machine by
name. The command output lists information about the virtual machine, such as size; status; SSH key; floating
IP (if assigned); volume IDs.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
Example
5.3.2.3.4.110 list-volumes
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
5.3.2.3.4.111 list-volume-snapshots
Lists all volume snapshots in the specified subaccount. Use display-volume-snapshot to get information
about a specific volume snapshot.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-v, --volume-id Unique identifier of a volume. If specified, only volume snapshots created from this volume will be displayed.
Type: string
Example
5.3.2.3.4.112 map-proxy-host
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Related Information
5.3.2.3.4.113 open-db-tunnel
This command opens a database tunnel to the database system associated with the specified schema or
database.
Note
Make sure that you have installed the required tools correctly. If you have trouble using this command, verify that your installation is correct.
For more information, see Set Up the Console Client [page 1412] and Using the Console Client [page 1928].
● Default mode: The tunnel remains open until you explicitly close it by pressing ENTER in the command line.
It is closed automatically after 24 hours or if the command window is closed.
● Background mode: The database tunnel is opened in a separate process. Use the close-db-tunnel
command to close the tunnel once you are done, or it is closed automatically after one hour.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-i, --id ● SAP ASE database system (ASE): Specify the database ID of an SAP ASE user database.
● SAP HANA tenant database system (HANAMDC): Specify the database ID of an
SAP HANA tenant database.
● SAP HANA single-container database system (HANAXS): Specify the alias of the
database system.
● Shared SAP HANA database: Specify the schema ID of a schema.
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
Example
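The original example was not preserved. A sketch of a default-mode invocation, using the documented -i/--id parameter to address a schema or database (placeholder values shown), might look like:

```shell
# Opens a tunnel and keeps it open until you press ENTER; all values are placeholders.
neo open-db-tunnel -a mysubaccount -h hana.ondemand.com -u myuser --id mydbid
```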
5.3.2.3.4.114 open-ssh-tunnel
or
Note
The tunnel is closed automatically after 24 hours or if the command window is closed.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-r, --port Port on which you want to open the SSH tunnel
Example
or
Related Information
5.3.2.3.4.115 put-destination
This command uploads destination configuration properties files and JKS files. You can upload them on
subaccount, application or subscribed application level.
To list all parameters available for this command, execute neo help put-destination in the command line.
Required
-a, --account Your subaccount. The subaccount for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--host Type: URL. For acceptable values see Regions [page 11].
--localpath The path to a destination or a JKS file on your local file system.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Note
When uploading a destination configuration file that contains a password field, the password value remains available in the file. However, if you later download this file using the get-destination command, the password value will no longer be visible. Instead, after Password =..., you will only see an empty space.
Examples
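The original examples were not preserved. An illustrative subaccount-level upload using the documented --localpath parameter (all values are placeholders) might look like:

```shell
# Uploads a destination configuration file from the local file system; all values are placeholders.
neo put-destination -a mysubaccount -h hana.ondemand.com -u myuser --localpath ./mydestination
```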
Related Information
5.3.2.3.4.116 reboot-vm
Reboots a virtual machine by name or by ID. By default, the reboot is soft. You can perform a hard reboot if you
use the --hard parameter.
or
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--hard Performs a hard reboot of the specified virtual machine. The hard reboot lets you force a shutdown before restarting the virtual machine.
Default: soft. The soft reboot attempts to shut down gracefully and restart the virtual machine.
Examples
● If you want to perform a soft reboot, execute one of the following two commands:
● If you want to perform a hard reboot, execute one of the following two commands:
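The example commands were not preserved. Illustrative invocations are sketched below; the --hard parameter is documented above, but the --name parameter for addressing the virtual machine is an assumption, and all values are placeholders:

```shell
# Soft reboot (default), addressing the virtual machine by name (--name is an assumed parameter):
neo reboot-vm -a mysubaccount -h hana.ondemand.com -u myuser --name myvm

# Hard reboot, using the documented --hard parameter:
neo reboot-vm -a mysubaccount -h hana.ondemand.com -u myuser --name myvm --hard
```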
Related Information
This command re-applies all already uploaded certificates to all SAP HANA instances. It is useful if you already uploaded certificates to SAP Cloud Platform but the upload failed for some of the SAP HANA instances.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
Note
After executing this command, you need to restart the SAP HANA XS services for it to take effect. See restart-hana [page 2109].
Parameters
To list all parameters available for this command, execute neo help reconcile-hanaxs-certificates in
the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
5.3.2.3.4.118 register-access-point
Registers an access point URL for a virtual machine specified by name or ID.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
The register-access-point command creates an internally managed security rule of type CIDR, which
allows communication between the load balancer of the SAP Cloud Platform and the virtual machine.
Related Information
5.3.2.3.4.119 remove-ca
Removes trusted CAs from a bundle or deletes a whole bundle and all certificates in it.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--bundle Name of the bundle from which CAs will be removed. A bundle can hold up to 120 certificates.
Type: string
The name of a bundle must start with a letter and can only contain 'a' - 'z', 'A' - 'Z', '0' - '9', '.', '_', and '-'.
Optional
--expired Removes all expired trusted CA certificates in the specified bundle. Takes no value.
Example
Related Information
5.3.2.3.4.120 remove-custom-domain
Removes a custom domain as an access point of an application. Use this command if you no longer want an
application to be accessible on the configured custom domain.
Parameters
To list all parameters available for this command, execute neo help remove-custom-domain in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when it was created, or 'default' if not specified.
Example
Related Information
5.3.2.3.4.121 remove-platform-domain
To list all parameters available for this command, execute neo help remove-platform-domain in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: URL
Example
Related Information
If you have forgotten the repository key, use this command to request a new repository key.
This command creates a new key that replaces the old one; you can no longer use the old key. The command does not affect any other repository setting, for example, the virus scan definition. If you just want to change your current repository key, use the edit-ecm-repository command.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
This example resets the repository key for the com.foo.MyRepository repository and creates a new
repository key, for example fp0TebRs14rwyqq.
5.3.2.3.4.123 reset-log-levels
Parameters
To list all parameters available for this command, execute neo help reset-log-levels in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.3.2.3.4.124 restart
Use this command to restart your application or a single application process. The effect of the restart command is the same as executing the stop command and, once the application is stopped, starting it with the start command.
Parameters
To list all parameters available for this command, execute the neo help restart command.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-y, --synchronous Triggers the process and waits until the application is restarted. The command without the --synchronous parameter triggers the restarting process and exits immediately without waiting for the application to start.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to restart a particular application process instead of the whole application. As the process ID is unique, you do not need to specify subaccount and application parameters. You can list the application process ID by using the status command.
Default: none
Example
To restart the whole application and wait for the operation to finish, execute:
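The example command itself was not preserved. An illustrative invocation using the documented -y/--synchronous parameter (the -b/--application parameter follows the convention shown for other commands in this guide; all values are placeholders) might look like:

```shell
# Restarts the whole application and waits for the operation to finish; all values are placeholders.
neo restart -a mysubaccount -b myapp -h hana.ondemand.com -u myuser --synchronous
```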
Related Information
5.3.2.3.4.125 restart-hana
Note
To use this command, log on with a user with administrative rights for the subaccount.
Note
The restart-hana operation is executed asynchronously. Temporary downtime is expected for the SAP HANA database or SAP HANA XS Engine, including the inability to work with SAP HANA studio, SAP HANA Web-based Development Workbench, and cockpit UIs dependent on SAP HANA XS.
After you trigger the command, you can monitor its execution in SAP HANA studio under Configuration and Monitoring > Open Administration.
Parameters
To list all parameters available for this command, execute neo help restart-hana in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You can find the SAP HANA database system ID using the list-dbms [page 2066] command, or in the cockpit under SAP HANA / SAP ASE > Databases & Schemas.
It must start with a letter and can contain uppercase and lowercase letters ('a' - 'z', 'A' -
'Z'), numbers ('0' - '9'), and the special characters '.' and '-'.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--service-name The SAP HANA service to be restarted. You can choose between the following values:
--system If available, the entire SAP HANA database system will be restarted.
Example
To restart the SAP HANA database system with ID myhanaid running on the productive host, execute:
To restart the SAP XS Engine service on SAP HANA database system with ID myhanaid, execute:
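The example commands themselves were not preserved. Illustrative invocations are sketched below; the --id, --service-name, and --system parameters are documented above, but the xsengine service value is an assumption (the list of allowed --service-name values is not preserved in this section), and all other values are placeholders:

```shell
# Restart the entire SAP HANA database system (documented --system switch):
neo restart-hana -a mysubaccount -h hana.ondemand.com -u myuser --id myhanaid --system

# Restart only the XS Engine service; the value "xsengine" is an assumed service name:
neo restart-hana -a mysubaccount -h hana.ondemand.com -u myuser --id myhanaid --service-name xsengine
```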
Related Information
5.3.2.3.4.126 revoke-db-access
This command revokes the database access permissions given to another subaccount.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Optional
Example
Related Information
This command revokes database access that has been given to another subaccount.
Parameters
Required
--access-token Access token that identifies the permission to access the database
Type: string
Type: boolean
Optional
Type: string
Example
Related Information
This command revokes the schema access granted to an application in another account.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--access-token Access token that identifies the grant. Grants can only be revoked by the granting subaccount.
Example
Related Information
The rolling-update command performs an update of an application without downtime.
Prerequisites
● You have at least one application process that is not in use, see your compute unit quota.
● The command can be used with compatible application changes only.
Note
If you use enhanced disaster recovery, the application is also deployed on the disaster recovery region
without being started.
Parameters
To list all parameters available for this command, execute neo help rolling-update in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations, pointing to WAR files or folders containing them.
If you want to deploy more than one application on the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page
2175]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connections The number of connections used to deploy an application. Use it to speed up deployment of application archives bigger than 5 MB in slow networks. Choose the optimal number of connections depending on the overall network speed to the cloud.
Default: 2
Type: integer
--ev Environment variables for configuring the environment in which the application runs. Sets one environment variable by removing the previously set value; can be used multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the --ev parameter is ignored.
Default: depends on the SAP Cloud Platform SDK for Neo environment
--timeout Timeout before stopping the old application processes (in seconds)
Default: 60 seconds
-V, --vm-arguments System properties (-D<name>=<value>) separated by spaces that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary, and note that this may impact the application performance or its ability to start.
Default: lite
--runtime-version SAP Cloud Platform runtime version on which the application will be started; it will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan to update to a new version regularly.
For more information, see Choose Application Runtime Version [page 2173]
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
Example
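The original example was not preserved. An illustrative invocation using the documented -s/--source parameter (the -b/--application parameter follows the convention shown for other commands in this guide; all values are placeholders) might look like:

```shell
# Updates the application from a local WAR file without downtime; all values are placeholders.
neo rolling-update -a mysubaccount -b myapp -h hana.ondemand.com -u myuser -s myapp.war
```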
Related Information
5.3.2.3.4.130 sdk-upgrade
Use this command to upgrade the SAP Cloud Platform SDK for Neo environment that you are currently
working with.
neo sdk-upgrade
The command checks for a more recent version of the SDK and then upgrades the SDK. There are two possible
cases:
Note
All files and servers that you add to your SDK will be preserved during upgrade.
Example
neo sdk-upgrade
5.3.2.3.4.131 set-alert-recipients
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
We recommend that you use distribution lists rather than personal email addresses. Keep in mind that you remain responsible for handling personal email addresses in accordance with applicable data privacy regulations.
Type: string
Optional
-b, --application Application name for Java or HTML5 applications, or productive SAP HANA instance database name and application name in the format <instance name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Default: false
Type: boolean
Example
Related Information
Use this command to change the value of a single property of a deployed application without the need to
redeploy it. Execute the command separately for each property that you want to set. For the changes to take
effect, restart the application.
To execute the command successfully, you need to specify the new value of one property from the optional parameters table below.
Parameters
To list all parameters available for this command, execute the neo help set-application-property in
the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Command-specific parameters
--ev Environment variables for configuring the environment in which the application runs. Sets the new environment variable without removing the previously set value; can be used multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the environment variable KEY1 will be deleted.
Default: depends on the SAP Cloud Platform SDK for Neo environment
(beta) You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25 or higher) in subaccounts enabled for beta features.
-m, --minimum-processes Minimum number of application processes on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes on which the application can be started
Default: 1
System properties (-D<name>=<value>) separated by spaces that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary, and note that this may impact the application performance or its ability to start.
--runtime-version SAP Cloud Platform runtime version on which the application will be started; it will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan to update to a new version regularly.
For more information, see Choose Application Runtime Version [page 2173]
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page
2175]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after
accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled.
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
To change the minimum number of server processes on which you want your deployed application to run,
execute:
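The example command itself was not preserved. An illustrative invocation using the documented -m/--minimum-processes parameter (the -b/--application parameter follows the convention shown for other commands in this guide; all values are placeholders) might look like:

```shell
# Sets the minimum number of application processes to 2; restart the application afterwards.
neo set-application-property -a mysubaccount -b myapp -h hana.ondemand.com -u myuser --minimum-processes 2
```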
Related Information
5.3.2.3.4.133 set-db-properties-ase
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Note
This parameter sets the maximum database size. The minimum database size is 24
MB. You receive an error if you enter a database size that exceeds the quota for this
database system.
The size of the transaction log will be at least 25% of the database size you specify.
Example
5.3.2.3.4.134 set-db-properties-hana
This command changes the properties of an SAP HANA database enabled for multitenant database container support.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--web-access Enables or disables access to the SAP HANA database from the Internet: 'enabled' (default), 'disabled'
Example
5.3.2.3.4.135 set-downtime-app
This command configures a custom downtime page (downtime application) for an application. The downtime
page is shown to the user in the event of unplanned downtime of the original application.
Parameters
To list all parameters available for this command, execute neo help set-downtime-app in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The downtime page application is provided by the customer and hosted in the same subaccount as the application itself.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
Related Information
5.3.2.3.4.136 set-log-level
Simple Logging Facade for Java (SLF4J) uses the following log levels:
Level Description
ALL This level has the lowest possible rank and is intended to turn on all logging.
ERROR This level designates error events that might still allow the application to continue running.
OFF This level has the highest possible rank and is intended to turn off logging.
Caution
HTTP headers are logged in plain text once the log level is set to DEBUG or ALL. Thus, if they contain any sensitive information, such as passwords kept in an HTTP Authorization header, it will be disclosed in the logs.
Parameters
To list all parameters available for this command, execute neo help set-log-level in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-l, --level The log level you want to set for the logger(s)
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
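The original example was not preserved. An illustrative invocation is sketched below; the -l/--level parameter is documented above, but the --loggers parameter name for selecting loggers is an assumption (it is not preserved in this section), and all values are placeholders:

```shell
# Sets the DEBUG level for one logger; --loggers is an assumed parameter name.
neo set-log-level -a mysubaccount -b myapp -h hana.ondemand.com -u myuser --loggers com.example.MyLogger --level DEBUG
```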
Related Information
5.3.2.3.4.137 set-quota
Note
The amount you want to set cannot exceed the amount of quota you have purchased. If you try to set a bigger amount, you will receive an error message.
To list all parameters available for this command, execute neo help set-quota in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
-m, --amount Compute unit quota type and amount of the quota to be set, in the format <type>:[amount].
In this composite parameter, the <type> part is mandatory and must have one of the following values: lite, pro, prem, prem-plus. The amount part is optional and must be an integer value; if omitted, a default value of 1 is assigned. Do not insert spaces between the two parts and their delimiter ':', and use lower case for the <type> part.
Type: string
Example
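The original example was not preserved. An illustrative invocation using the documented <type>:[amount] format of the -m/--amount parameter (placeholder values shown) might look like:

```shell
# Sets the quota for the lite compute unit type to 2; all other values are placeholders.
neo set-quota -a mysubaccount -h hana.ondemand.com -u myuser --amount lite:2
```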
5.3.2.3.4.138 set-ssl-host
Configures and updates an SSL host. Allows you to replace an SSL certificate with a different one, enable a TLS
protocol of your choice, and set a bundle of trusted CAs.
To list all parameters available for this command, execute neo help set-ssl-host in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the SSL host that will be configured and updated.
Optional
-c, --certificate Name of the certificate that you bind to the SSL host. The certificate must already be uploaded.
Caution
This will replace the previously bound certificate, if one is already bound.
Type: string (It can contain alphanumerics, '.', '-', and '_')
--ca-bundle Use a switch to specify whether client certificate authentication is mandatory for the respective CA bundle. For more information, see Managing Client Certificate Authentication for Custom Domains [page 2236].
Type: string
Format: <bundle_name>:<switch>
-t, --supported-protocols Specify the TLS protocols that you want to enable for the SSL host. The remaining TLS protocols are disabled.
Note
This parameter requires a certificate to be bound to the SSL host.
Note
To check the currently enabled TLS version, run the set-ssl-host command
without using any optional parameters.
Examples
Example
Example
If the optional parameters are not used, the set-ssl-host command returns the current properties of the
SSL host.
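The original examples were not preserved. Illustrative invocations using the documented -n and -c parameters (all values are placeholders) might look like:

```shell
# Bind an already uploaded certificate to the SSL host; all values are placeholders.
neo set-ssl-host -a mysubaccount -h hana.ondemand.com -u myuser -n mysslhostname -c mycertificate

# Query the current SSL host properties by omitting all optional parameters:
neo set-ssl-host -a mysubaccount -h hana.ondemand.com -u myuser -n mysslhostname
```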
Related Information
You can check the current status of an application or application process. The command lists all application processes with their IDs, state, last change date (sorted chronologically), and runtime information.
The command also lists the availability zones where these application processes are running. However, this is only valid for recently started applications and only if you have the latest SAP Cloud Platform SDK for Neo environment version installed.
The availability zones ensure the high availability of your application processes. If one of the availability zones
experiences infrastructure issues and downtime, only the processes in this zone are affected. The remaining
processes continue to run normally, ensuring that your application is working as expected.
When an application process is running but cannot receive new connection requests, it is marked as disabled in
its status description. Additionally, if an application is in planned downtime and a maintenance page has been
configured for it, the corresponding application is listed in the command output.
Parameters
To list all parameters available for this command, execute neo help status in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-i, --application-process-id Unique ID of a single application process. Use it to show the status of a particular application process instead of the whole application. As the process ID is unique, you do not need to specify subaccount and application parameters.
Default: none
--show-full-process-id Shows the full length (40 characters) of the unique application process ID. You may need the full ID when you try to execute a certain operation on the application process and the process cannot be identified uniquely with the short version of the ID. In particular, usage of the full length is recommended for tools and batch processing. If this parameter is not used, the status command lists only the first 7 characters by default.
Default: off
Example
You can list all application processes in your application with their IDs:
Then, you can request the status of a particular application process from the list using its ID:
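The example commands themselves were not preserved. Illustrative invocations using the documented -i/--application-process-id parameter (the -b/--application parameter follows the convention shown for other commands in this guide; all values are placeholders) might look like:

```shell
# List all application processes of an application with their IDs:
neo status -a mysubaccount -b myapp -h hana.ondemand.com -u myuser

# Query a single process by its unique ID; subaccount and application can then be omitted:
neo status -h hana.ondemand.com -u myuser --application-process-id myprocessid
```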
Related Information
Starts a deployed application to make it available for customers. If the application is already started, the command starts an additional application process, provided that the quota for the maximum allowed number of application processes is not exceeded.
Parameters
To list all parameters available for this command, execute neo help start in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--disabled Starts an application process in disabled state, so that it is not available for new connections.
Default: off
-y, --synchronous Triggers the starting process and waits until the application is started. The command without the --synchronous parameter triggers the starting process and exits immediately without waiting for the application to start.
Default: off
Example
To start the application and wait for the operation to finish, execute:
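A sketch with placeholder subaccount, application, host, and user values:

```shell
# -y waits until the application is started before the command returns.
neo start -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567 -y
```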
Related Information
5.3.2.3.4.141 start-db-hana
This command starts the specified SAP HANA database on an SAP HANA database system enabled for multitenant database container support.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
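A sketch with placeholder values; the --id flag for selecting the SAP HANA database is an assumption, not confirmed by this page:

```shell
# --id (flag name assumed) selects the SAP HANA database to start.
neo start-db-hana -a mysubaccount -h hana.ondemand.com -u p1234567 --id mydb
```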
Related Information
5.3.2.3.4.142 start-local
neo start-local
Parameters
Optional
Default: 8003
--wait-url Waits for a 2xx response from the specified URL before exiting
--wait-url-timeout Seconds to wait for a 2xx response from the wait-url before exiting
Default: 180
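A sketch using the optional parameters documented above; the URL and timeout values are placeholders:

```shell
# Start the local server runtime:
neo start-local

# Start and wait up to 300 seconds for a 2xx response from a readiness URL:
neo start-local --wait-url http://localhost:8080/health --wait-url-timeout 300
```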
Related Information
5.3.2.3.4.143 start-maintenance
This command starts the planned downtime of an application, during which it no longer receives requests and
a custom maintenance page for that application is shown to the user. All active connections will still be handled
until the application is stopped.
Parameters
To list all parameters available for this command, execute neo help start-maintenance in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Optional
--direct-access-code While setting your application in maintenance mode, you can generate an access code, which you can use later during the maintenance period. While your application is in maintenance mode, you can use this access code in the Direct-Access-Code HTTP header so that you can have access to your application for testing and administration purposes. In the meantime, users will continue to have access to the maintenance application.
If an application is already in planned downtime, executing the status command for it shows the maintenance application to which the traffic is being redirected.
Example
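A sketch with placeholder values; the --maintenance-app flag name for the maintenance application is an assumption, not confirmed by this page:

```shell
# --maintenance-app (flag name assumed) names the application that serves
# the maintenance page while myapp is in planned downtime.
neo start-maintenance -a mysubaccount -b myapp --maintenance-app maintpage -h hana.ondemand.com -u p1234567
```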
Related Information
Use this command to stop your deployed and started application or application process.
Parameters
To list all parameters available for this command, execute neo help stop in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-y, --synchronous Triggers the stopping process and waits until the application is stopped. The command without the --synchronous parameter triggers the stopping process and exits immediately without waiting for the application to stop.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to stop a particular application process instead of the whole application. As the process ID is unique, you do not need to specify subaccount and application parameters. You can list the application process ID by using the <status> command.
Default: none
Example
To stop the whole application and wait for the operation to finish, execute:
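A sketch with placeholder subaccount, application, host, and user values:

```shell
# -y waits until the application is stopped before the command returns.
neo stop -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567 -y
```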
Related Information
5.3.2.3.4.145 stop-db-hana
This command stops the specified SAP HANA database on an SAP HANA database system enabled for multitenant database container support.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
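A sketch mirroring start-db-hana; the --id flag for selecting the database is an assumption, not confirmed by this page:

```shell
# --id (flag name assumed) selects the SAP HANA database to stop.
neo stop-db-hana -a mysubaccount -h hana.ondemand.com -u p1234567 --id mydb
```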
Related Information
5.3.2.3.4.146 stop-local
neo stop-local
Parameters
Optional
Default: 8003
5.3.2.3.4.147 stop-maintenance
This command stops the planned downtime of an application, resumes traffic to it, and deregisters the maintenance application page.
Parameters
To list all parameters available for this command, execute neo help stop-maintenance in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
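A sketch with placeholder values, ending the planned downtime started earlier:

```shell
neo stop-maintenance -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567
```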
Related Information
5.3.2.3.4.148 subscribe
Subscribes the subaccount of the consumer to a provider Java application. Once the command is executed
successfully, the subscription is visible in the Subscriptions panel of the cockpit in the consumer subaccount.
Remember
You must have the Administrator role in the provider and consumer subaccount to execute this command.
Note
You can subscribe a subaccount to a Java application that is running in another subaccount only if both
subaccounts (provider and consumer subaccount) belong to the same region.
Parameters
To list all parameters available for this command, execute neo help subscribe in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
This parameter must be specified in the format <provider subaccount>:<provider application>.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the
provider and the consumer subaccounts and must possess the Administrator role in
those subaccounts. The command is not available for trial accounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
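A sketch with placeholder values; the format <provider subaccount>:<provider application> is documented above, but using -b to pass it is an assumption:

```shell
# -b (flag assumed) carries the provider application in the documented
# <provider subaccount>:<provider application> format.
neo subscribe -a consumersubaccount -b providersubaccount:providerapp -h hana.ondemand.com -u p1234567
```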
Related Information
Parameters
To list all parameters available for this command, execute neo help subscribe-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Optional
Command-specific parameters
-y, --synchronous Triggers the deployment and waits until the deployment operation finishes. The command without the --synchronous parameter triggers deployment and exits immediately without waiting for the operation to finish. Takes no value.
-e, --extensions Defines one or more extensions to the deployment descriptor. A comma-separated list
of file locations, pointing to the extension descriptor files, or the folders containing
them. For more information, see Defining MTA Extension Descriptors [page 1276].
Example
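A sketch with placeholder values; the --mta-id flag for selecting the multitarget application is an assumption, not confirmed by this page:

```shell
# --mta-id (flag name assumed) selects the provider MTA to subscribe to.
neo subscribe-mta -a consumersubaccount -h hana.ondemand.com -u p1234567 --mta-id my.mta.id -y
```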
This command unbinds a database from a Java application for a particular data source.
The application retains access to the database until the next application restart. After the restart, the
application will no longer be able to access it.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: <DEFAULT>
Example
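A sketch with placeholder values; the command name unbind-db is inferred from the surrounding reference, not stated on this page, and omitting the data source falls back to the <DEFAULT> data source documented above:

```shell
# Unbind the database from the application's default data source
# (command name inferred; placeholder values).
neo unbind-db -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567
```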
5.3.2.3.4.151 unbind-domain-certificate
Unbinds a certificate from an SSL host. The certificate will not be deleted from SAP Cloud Platform storage.
Parameters
To list all parameters available for this command, execute neo help unbind-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not
specified.
Related Information
5.3.2.3.4.152 unbind-hana-dbms
This command unbinds a productive SAP HANA database system from a Java application for a particular data
source.
The application retains access to the productive SAP HANA database system until the next application restart.
After the restart, the application will no longer be able to access the database system.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
5.3.2.3.4.153 unbind-schema
This command unbinds a schema from an application for a particular data source.
The application retains access to the schema until the next application restart. After the restart, the application
will no longer be able to access the schema.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
5.3.2.3.4.154 undeploy
Undeploying an application removes it from SAP Cloud Platform. To undeploy an application, you have to stop
it first.
If you use enhanced disaster recovery, the application is undeployed first on the disaster recovery region and
then on the specified region.
Parameters
To list all parameters available for this command, execute neo help undeploy in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
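A sketch with placeholder values; remember that the application must be stopped first:

```shell
neo undeploy -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567
```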
Related Information
Deletes the mapping between an application host and an on-premise reverse proxy host and port.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.3.2.3.4.156 unregister-access-point
Unregisters the access point URL registered for a virtual machine specified by name or ID.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
5.3.2.3.4.157 unsubscribe
Remember
You must have the Administrator role in the provider and consumer subaccount to execute this command.
Parameters
To list all parameters available for this command, execute neo help unsubscribe in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the
provider and the consumer subaccounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Example
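A sketch mirroring the subscribe example; passing the provider application via -b in the <provider subaccount>:<provider application> format is an assumption:

```shell
# -b (flag assumed) identifies the provider application to unsubscribe from.
neo unsubscribe -a consumersubaccount -b providersubaccount:providerapp -h hana.ondemand.com -u p1234567
```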
Related Information
5.3.2.3.4.158 upload-domain-certificate
Uploads a signed custom domain certificate to SAP Cloud Platform. You can upload either a certificate based
on a previously generated CSR via the generate-csr command, or another valid certificate with its
corresponding private key.
Parameters
To list all parameters available for this command, execute neo help upload-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate previously used in the CSR generation via the generate-csr
command.
If you upload a certificate not based on a CSR generated via generate-csr, you use
this parameter to name the certificate.
Type: string
The certificate name must start with a letter and can only contain lowercase letters (a-z), uppercase letters (A-Z), numbers (0-9), underscores ( _ ), and hyphens (-).
Note
Some CAs issue chained root certificates that contain an intermediate certificate.
In such cases, put all certificates in the file for upload starting with the signed SSL
certificate.
Caution
Once uploaded, the certificate cannot be downloaded for security reasons. This
also includes intermediate certificates.
Optional
-f, --force Overwrites an existing SSL certificate. For example, this parameter lets you update an
expired certificate based on an already existing CSR. For more information, see Using
an Existing CSR [page 2238].
The --force option is also useful if, for some reason, you did not upload a required intermediate certificate. Note that the intermediate certificate must be added to the file that contains the SSL certificate.
-k, --key-location Location of the file containing the private key of the certificate specified in --name.
If you want to upload a signed certificate that is not based on a CSR generated via the generate-csr command, you must use this parameter to remotely upload this certificate to SAP Cloud Platform along with its private key.
Caution
Uploading a private key from a remote location poses a security risk. Also, there is
no way to download the uploaded private key. SAP recommends that you use only
certificates that are based on CSRs previously generated via the generate-csr
command.
Examples
● An SSL certificate.
Example
-----BEGIN CERTIFICATE-----
Enter your SSL certificate.
-----END CERTIFICATE-----
Example
-----BEGIN CERTIFICATE-----
Enter your SSL certificate.
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Enter your intermediate certificate.
-----END CERTIFICATE-----
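A sketch of the upload call itself, with placeholder values; -n is documented above, but the -l flag for the certificate file location is an assumption:

```shell
# -n names the certificate used in the earlier CSR generation;
# -l (flag assumed) points at the signed certificate file shown above.
neo upload-domain-certificate -a mysubaccount -h hana.ondemand.com -u p1234567 -n mycert -l signed_cert.pem
```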
5.3.2.3.4.159 upload-hanaxs-certificates
This command uploads and applies identity provider certificates to productive HANA instances running on SAP Cloud Platform.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
Note
After executing this command, you need to restart the SAP HANA XS services for the change to take effect. See restart-hana [page 2109].
Parameters
To list all parameters available for this command, execute neo help upload-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --localpath Path to an X.509 certificate or a directory containing certificates on a local file system. If the local path is a directory, all files in it will be uploaded. You need to restart the HANA instances to activate the certificates.
Default: none
Type: string
Example
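A sketch with placeholder values using the documented -l, --localpath parameter; any flag for targeting a specific HANA instance is omitted because it is not documented here:

```shell
# Upload all identity provider certificates from a local directory
# (placeholder path); restart the HANA instances afterwards.
neo upload-hanaxs-certificates -a mysubaccount -h hana.ondemand.com -u p1234567 -l /path/to/idp_certs
```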
5.3.2.3.4.160 upload-keystore
This command is used to upload a keystore by uploading the keystore file. You can upload keystores on
subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help upload-keystore in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-l, --location Path to a keystore file to be uploaded from the local file system. The file extension determines the keystore type. The following extensions are supported: .jks, .jceks, .p12, .pem. For more information about the keystore formats, see Features [page 2461].
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-w, --overwrite Overwrites a file with the same name if such a file already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked if you want to overwrite the file.
Example
On Subscription Level
On Application Level
On Subaccount Level
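A sketch of a subaccount-level upload with placeholder values; -l and --overwrite are documented above, while any flags for selecting the application or subscription level are omitted because they are not documented here:

```shell
# Upload a JKS keystore at subaccount level, overwriting any existing
# keystore with the same name (placeholder values).
neo upload-keystore -a mysubaccount -h hana.ondemand.com -u p1234567 -l keystore.jks --overwrite
```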
5.3.2.3.4.161 version
Use this command to show the SAP Cloud Platform SDK for Neo environment version and the runtime. You can use parameters to list the command versions and the JAR files in the SDK and to check whether the SDK version is up to date.
Parameters
To list all parameters available for this command, execute neo help version in the command line.
Required
-c, --commands Lists all commands available in the SDK and their versions.
-j, --jars Lists all JAR files in the SDK and their versions.
-u, --updates Checks if there are any updates and hot fixes for the SDK and whether the SDK version
is still supported. It also provides the version of the latest available SDK.
Optional
Type: string
To show the SAP Cloud Platform SDK for Neo environment version and the runtime, execute:
neo version
To list all commands available in the SDK and their versions, execute:
neo version -c
To list all JAR files in the SDK and their versions, execute:
neo version -j
To check for updates and whether the SDK version is still supported, execute:
neo version -u
Related Information
Overview
The exit code is a number that indicates the outcome of a command execution. It shows whether the command completed successfully or, if something went wrong, which error occurred during the execution.
When commands are executed as part of automated scripts, the exit codes provide feedback to the scripts, which allows a script to bypass known errors that may occur during execution. A script can also interact with the user in order to request additional information required for the script to complete.
All exit codes in SAP Cloud Platform are aligned to the Bash-Scripting Guide. For more information, see Exit Codes With Special Meanings.
The set of exit codes is divided into ranges, based on the error type and the reason.
No error: range 0-0 (1 code)
Common errors: range 1-9 (9 codes)
Missing parameters: range 10-39 (30 codes)
Exit Codes
Exit codes can be defined as general (common for all commands) and command-specific (cover different cases
via different commands).
0 OK
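A minimal sketch of how a script can react to exit codes; the stand-in command below represents any console client call and is not a real neo invocation:

```shell
# Generic exit-code handling pattern for automated scripts.
# 'sh -c "exit 0"' stands in for a console client call such as a neo command;
# replace it with the real invocation in your script.
sh -c 'exit 0'
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "Command completed successfully"
else
  echo "Command failed with exit code $rc"
fi
```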
Related Information
How to configure and operate your deployed Java applications - Java: Application Operations [page 2168]
How to monitor your SAP HANA applications - SAP HANA: Application Operations [page 2210]
How to monitor the current status of the HTML5 applications in your subaccount - HTML5: Application Operations [page 2210]
How to securely operate and monitor your cloud applications connected to on-premise systems - Security [page 2246]
How to change the default SAP Cloud Platform application URL by configuring custom or platform domains - Configuring Application URLs [page 2222]
How to enable transport of SAP Cloud Platform applications via the CTS+ - Transporting Multitarget Applications with CTS+ [page 1646]
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple subaccounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1442].
● You have a subaccount in an enterprise account. For more information, see Global Accounts [page 8].
Context
Using multiple subaccounts ensures better stability. Also, you can achieve better security for productive
applications because permissions are given per subaccount.
For example, you can create three different subaccounts for one application and assign the necessary amount
of compute unit quota to them:
● dev - use for development purposes and for testing the increments in the cloud; you can grant permissions to all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery (integration testing and testing in a productive-like environment prior to making it publicly available)
● prod - use to run productive applications; give permissions only to operators
You can create multiple subaccounts and assign quota to them either using the console client or the cockpit.
Procedure
Next, you can deploy your application in the newly created subaccount using the Eclipse IDE or the console
client. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new subaccount.
Execute:
Execute:
Next, you can deploy your application in the newly created subaccount by executing neo deploy -a
<subaccount> -h <host> -b <application name> -s <file location> -u <user name or
email>. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Related Information
After you have developed and deployed your Java application on SAP Cloud Platform, you can configure and
operate it using the cockpit, the console client, or the Eclipse IDE.
Content
Configuring Applications
Console Client - Update Application Properties [page 2172]: Specify various configurations using commands.
Eclipse IDE - Configuring Advanced Configurations [page 1471]: Use the options for advanced server and application configurations as well as direct reference to the cockpit UI.
Cockpit - Define Application Details (Java Apps) [page 2180]: Start, stop, and undeploy applications, as well as start, stop, and disable individual application processes.
Console Client - start [page 2135]; stop [page 2140]; restart [page 2108]; enable [page 2022]; disable [page 2009]; undeploy [page 2151]: Manage the lifecycle of a deployed application or individual application processes by executing the respective command.
Eclipse IDE - Deploy Locally from Eclipse IDE [page 1468]: Start, stop, republish, and perform delta deploy of applications.
Lifecycle Management API - Start an Application [page 1464]; Stop an Application [page 1466]: Start and stop applications using the Lifecycle Management API.
Monitoring
Cockpit, Console Client, Monitoring API:
Monitoring Java Applications - View the current metrics or the metrics history.
Monitoring HTML5 Applications - Configure checks for an application.
Monitoring Database Systems - Use the Monitoring REST API to get the state or the metric details of an application.
Profiling
Eclipse IDE Profiling Applications [page 2196] Analyze resource-related problems in your application.
Cockpit - Using Logs in the Cockpit: View the logs and change the log settings of any applications deployed in your subaccount.
Console Client - Using Logs in the Console Client: Manage some of the logging configurations of a started application.
Eclipse IDE - Using Logs in the Eclipse IDE: View the logs and change the log settings of the applications deployed in your subaccount or on your local server.
Cockpit - Enable Maintenance Mode for Planned Downtimes [page 2191]; Perform Soft Shutdown [page 2193]: Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut down the application or processes gracefully.
Console Client - Update Applications with Zero Downtime [page 2188]: Deploy a new version of a productive application or perform maintenance.
As an operator, you can configure an SAP Cloud Platform application according to your scenario.
When you are deploying the application using SAP Cloud Platform console client, you can specify various
configurations using the deploy command parameters:
You can scale an application to ensure its ability to handle more requests.
Using the cockpit, you can perform the following identity and access management configuration tasks:
Using the cockpit and the console client, you can configure HTTP, Mail and RFC destinations to make use of
them in your applications:
Using the cockpit and the console client, you can view and download log files of any applications deployed in
your subaccount:
You can update a property of an application running on SAP Cloud Platform without redeploying it.
Context
Application properties are configured during deployment with a set of deploy parameters in the SAP Cloud
Platform console client. If you want to change any of these properties (Java version, runtime version,
compression, VM arguments, compute unit size, URI encoding, minimum and maximum application
processes) without the need to redeploy the application binaries, use the set-application-property
command. Execute the command separately for each property that you want to set.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. Execute set-application-property specifying the new value of one property that you want to change.
For example, to change the compute unit size to premium, execute:
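A sketch with placeholder values; the --size flag name for the compute unit size is an assumption, not confirmed by this page:

```shell
# --size (flag name assumed) sets the compute unit size to premium;
# restart the application afterwards for the change to take effect.
neo set-application-property -a mysubaccount -b myapp -h hana.ondemand.com -u p1234567 --size premium
```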
3. For the change to take effect, restart your application using the restart command.
Related Information
Applications deployed on SAP Cloud Platform are always started on the latest version of the application
runtime container. This version contains all released fixes, critical patches and enhancements and is
respectively the recommended option for applications. In some special cases, you can choose the version of
the runtime container your application uses by specifying it with the parameter <--runtime-version> when
deploying your application. To change this version, you need to redeploy the application without specifying this
parameter.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client. For more information, see Set Up the
Console Client [page 1412].
Context
If you want to choose the version of the application runtime container, follow the procedure.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. In the console client command line, execute the <list-runtime-versions> command to display all
recommended versions. We recommend that you choose the latest available version.
3. Redeploy your application with parameter <--runtime-version> set to the selected version number.
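The two steps above can be sketched as follows; all values are placeholders, and the runtime version number shown is illustrative:

```shell
# Display the recommended runtime container versions:
neo list-runtime-versions -h hana.ondemand.com -u p1234567

# Redeploy, pinning the application to a selected version (value illustrative):
neo deploy -a mysubaccount -b myapp -s myapp.war -h hana.ondemand.com -u p1234567 --runtime-version 3
```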
Caution
By selecting an older version of the application runtime, you do not have the latest released fixes and
critical patches as well as enhancements, which may affect the smooth operation and supportability of
your application. Consider updating the selected version periodically. Plan the updates to the latest
version of the application runtime and apply in your test environment first. Older application runtime
versions will be deprecated and expire. Refer to the <list-runtime-versions> command for
information.
You can choose the Java Runtime Environment (JRE) version used for an application.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Set Up the Console Client [page 1412]
Context
The JRE version depends on the type of the SAP Cloud Platform SDK for Neo environment you are using. By
default the version is:
If you want to change this default version, you need to specify the --java-version parameter when deploying the
application using the SAP Cloud Platform console client. Only the version number of the JVM can be specified.
You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25 or higher) in productive
accounts.
For applications developed using the SAP Cloud Platform SDK for Neo environment for Java Web Tomcat 7
(2.x), the default JRE is 7. If you are developing a JSP application using JRE 8, you need to add a configuration
in the web.xml that sets the compiler target VM and compiler source VM versions to 1.8.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
For Java Web Tomcat 8, Java version 8 is supported by default, but you can also use Java version 7.
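A sketch of a deployment that overrides the default JRE version; all values are placeholders:

```shell
# Deploy with JRE 8 instead of the SDK's default JRE version.
neo deploy -a mysubaccount -b myapp -s myapp.war -h hana.ondemand.com -u p1234567 --java-version 8
```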
Related Information
Usage of gzip response compression can optimize the response time and improve interaction with an
application as it reduces the traffic between the Web server and browsers. Enabling compression configures
the server to return zipped content for the specified MIME type and size of the response.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Set Up the Console Client [page 1412]
Context
You can enable and configure gzip using some optional parameters of the deploy command in the console
client. When deploying the application, specify the following parameters:
Procedure
If you enable compression but do not specify values for --compressible-mime-type or --compression-min-size, then the defaults are used: text/html, text/xml, text/plain and 2048 bytes, respectively.
If you want to enable compression for all responses independently from MIME type and size, use only --compression force.
Example
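A sketch with placeholder values; --compressible-mime-type and --compression-min-size are documented above, while the "--compression on" value is an assumption (this page confirms only force and off):

```shell
# Enable gzip for JSON responses larger than 1024 bytes (values illustrative).
neo deploy -a mysubaccount -b myapp -s myapp.war -h hana.ondemand.com -u p1234567 \
  --compression on --compressible-mime-type application/json --compression-min-size 1024
```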
Next Steps
Once enabled, you can disable the compression by redeploying the application without the compression
options or with parameter --compression off.
Related Information
Using SAP Cloud Platform console client, you can configure the JRE by specifying custom VM arguments.
Prerequisites
For more information, see Set Up the Console Client [page 1412]
● System properties - they will be used when starting the application process. For example -D<key>=<value>
● Memory arguments - use them to define custom memory settings of your compute units. The supported
memory settings are:
-Xms<size> - set initial Java heap size
-Xmx<size> - set maximum Java heap size
-XX:PermSize - set initial Java Permanent Generation size
-XX:MaxPermSize - set maximum Java Permanent Generation size
Note
We recommend that you use the default memory settings. Change them only if necessary and note that
this may impact the application performance or its ability to start.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying your desired configurations. For example, to specify a currency system
property and a maximum heap size of 1 GiB, execute the deploy command with the following parameters:
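A sketch of such a deploy call (host, subaccount, application, WAR file, and user names are placeholders):

```shell
# Hedged example: set a system property and a 1 GiB maximum heap.
# The whole argument list is passed as one double-quoted value.
neo deploy --host hana.ondemand.com --account mysubaccount \
    --application myapp --source example.war --user myuser \
    --vm-arguments "-Dcurrency=EUR -Xmx1024m"
```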
Note
If you are deploying using the properties file, note that you have to use double quotation marks twice:
vm-arguments=""-Dcurrency=EUR -Xmx1024m"".
This will set the system properties -Dcurrency=EUR and the memory argument -Xmx1024m.
To specify a value that contains spaces (for example, -Dname=John Doe), note that you have to use single
quotation marks for this parameter when deploying.
Related Information
Each application is started on a dedicated SAP Cloud Platform Runtime. One application can be started on one
or many application processes, according to the compute unit quota that you have.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Set Up
the Console Client [page 1412].
● Your application can run on more than one application process.
Context
Scaling an application ensures its ability to handle more requests, if necessary. Scalability also provides failover
capabilities - if one application process crashes, the application will continue to work. First, when deploying the
application, you need to define the minimum and maximum number of application processes. Then, you can
scale the application up and down by starting and stopping additional application processes. In addition, you
can also choose the compute unit size, which provides a certain central processing unit (CPU), main memory
and disk space.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying --minimum-processes and --maximum-processes. The --minimum-
processes parameter defines the number of processes on which the application is started initially. Make
sure it is at least 2.
3. Start the application by executing the start command.
4. You can now scale the application up by executing the start command again. Each new execution starts
another application process. You can repeat the start until you reach the maximum number of application
processes you defined, within the quota you have purchased.
5. If for some reason you need to scale the application down, you can stop individual application processes by
using soft shutdown. Each application process has a unique process ID that you can use to disable and
stop the process.
a. Identify the ID of the application process that you want to stop, for example by executing the status command.
b. Execute neo disable for the application process you want to stop.
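The scaling steps above can be sketched as follows; the host, subaccount, application, and user names as well as the process ID are placeholders:

```shell
# Deploy with room to scale between 2 and 4 application processes.
neo deploy -h hana.ondemand.com -a mysubaccount -b myapp -s example.war -u myuser \
    --minimum-processes 2 --maximum-processes 4

# Each additional start adds one application process (scale up).
neo start -h hana.ondemand.com -a mysubaccount -b myapp -u myuser

# Scale down: disable a process (soft shutdown), then stop it once sessions drain.
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <process-id>
neo stop -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <process-id>
```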
Next Steps
You can also scale your application vertically by choosing the compute unit size on which it will run after the
deploy. You can choose the compute unit size by specifying the --size parameter when deploying the
application.
For example, if you have an enterprise account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing
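The command that this sentence refers to could look like the following sketch (subaccount, host, application, and user names are placeholders; the Premium size is assumed to be selected with --size prem):

```shell
# Hedged example: deploy the application on a Premium compute unit size.
neo deploy -h hana.ondemand.com -a mysubaccount -b myapp -s example.war -u myuser \
    --size prem
```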
Related Information
For an overview of the current status of the individual applications in your subaccount, use the cockpit. It
provides key information in a summarized form and allows you to initiate actions, such as starting, stopping,
and undeploying applications.
Related Information
You can view details about your currently selected Java application. By adding a suitable display name and a
description, you can identify the application more easily.
Context
In the overview of a Java application in the cockpit, you can add and edit the display name and description for
the Java application as needed.
● Display name - a human-readable name that you can specify for your Java application and change it later
on, if necessary.
● Description - a short descriptive text about the Java application, typically stating what it does.
Procedure
You can directly start, stop, and undeploy applications, as well as start, stop, and disable individual application
processes.
Context
An application can run on one or more application processes. The use of multiple processes allows you to
distribute application load and provide failover capability. The number of processes that you can start depends
on the compute unit quota available to your global account and how an individual application has been
configured. If you reach the maximum, increase the maximum number of processes first before you can start
another process.
Note
By default the application is started on one application process and is allowed to run on a maximum of one
process. To use multiple processes, an application must be deployed with the minimum-processes and
maximum-processes parameters set appropriately.
Note
While an application name is assigned manually and is unique in a subaccount, an application process ID is
generated automatically whenever a new process is started and is unique across the cloud platform.
Procedure
1. Open the subaccount in the cockpit and choose Applications > Java Applications in the navigation
area.
2. You have the following options for the applications in the list:
To... Choose...
Data source bindings are not deleted. To delete all data source bindings created for
this application, select the checkbox.
Note
Bound databases and schemas will not be deleted. You can delete database and
schema bindings using the Databases & Schemas panel.
3. To choose an action for an application process, click the relevant application's name in the list to go to the
application overview page.
To... Choose...
Related Information
The status of an individual process is based on values that reflect the process run state and its monitoring
metrics.
Procedure
This takes you to the overview page for the selected application.
The Processes panel shows the number of running processes and the overall state for the metrics as
follows:
State
○ Started
○ Started (Disabled)
○ Starting
○ Stopping
○ Application Error
○ Infrastructure Error
Metric
○ OK
○ Warning (also shown for intermediate states)
3. Choose Monitoring > Processes in the navigation area to go to the process overview and view the status
summary and further details:
Panel Description
Status Summary Displays the current values of the two status categories and the runtime version. A short text
summarizes any problems that have been detected.
State Indicates whether the process has been started or is transitioning between the Started and
Stopped states. The Error state indicates a fault, such as server unavailability, timeout, or VM
failure.
Runtime Shows the runtime version on which the application process is running and its current status:
○ OK: Still within the first three months since it was released
○ No longer recommended: Has exceeded the initial three-month period
○ Expired: More than 15 months have passed since its release date
Related Information
View information about the application runtime. SAP Cloud Platform provides a set of runtimes. You can
choose the application runtime during application deployment.
Context
The runtime is assigned either by default or explicitly set when an application is deployed. If a version is not
specified during deployment, the major runtime version is determined automatically based on the SDK that is
used to deploy the application. By default, applications are deployed with the latest minor version of the
respective major version.
You are strongly advised to use the default version, since this contains all released fixes and critical patches,
including security patches. Override this behavior only in exceptional cases by explicitly setting the version, but
note that this is not recommended practice.
Procedure
1. In the cockpit, choose Java Applications in the navigation area and then select the relevant application in
the list.
The Runtime panel provides the following information:
Related Information
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches.
Note
In all cases, first test your update in a non-productive environment. The newly deployed version of the
application overwrites the old one and you cannot revert to it automatically. You have to redeploy the old
version to revert the changes, if necessary.
SAP Cloud Platform provides the following approaches for updating an application:
Zero Downtime
Use: When your new application version is backward compatible with the old version - that is, the new version
of the application can work in parallel with the already running old application version.
Steps: Deploy a new version of the application and disable and enable processes in a rolling manner. For an
automated execution of the same procedure, use the rolling-update command.
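An automated rolling update might be invoked as in the following sketch (host, subaccount, application, file, and user names are placeholders):

```shell
# Hedged example: let the platform perform the start/disable/stop cycle automatically.
neo rolling-update -h hana.ondemand.com -a mysubaccount -b myapp \
    -s new-version.war -u myuser
```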
See Update Applications with Zero Downtime [page 2188] and rolling-update [page 2115].
Planned Downtime (Maintenance Mode)
Description: Shows a custom maintenance page to end users. The application is automatically disabled.
Use: When the new version is backward incompatible - that is, running the old and the new version in parallel
may lead to inconsistent data or erroneous output.
Soft Shutdown
Description: Supports zero downtime and planned downtime scenarios. Disabled applications/processes stop
accepting new connections from users, but continue to serve already running connections.
Use: As part of the zero downtime scenario or to gracefully shut down your application during a planned
downtime (without maintenance mode).
Steps: Disable the application (console client only) or individual processes (console client or cockpit) in order
to shut down the application or processes gracefully.
Related Information
The platform allows you to update an application in a manner in which the application remains operable all the
time and your users do not experience downtime.
Prerequisites
Context
Each application runs on one or more dedicated application processes. You can start one or many application
processes at any given time, according to the compute unit quota that you have. Each process has a unique ID.
Note
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. List the status of the application, which shows all its processes with their attributes (ID, status, last change
date), by executing neo status. Identify and make a note of the application process IDs, which you will
need to stop in the following steps. Application processes are listed chronologically by their last change
date.
3. Deploy the new version of your application on SAP Cloud Platform by executing neo deploy with the
appropriate parameters.
Note that to execute the update, you need to start one additional application process with the new version.
Therefore, make sure you have configured a high enough number of maximum processes for the
application (at least one higher than the number of old processes that are running). In case you have
already reached the quota for your subaccount, stop one of the already running processes, before
proceeding.
4. Start a new application process running the new version of the application by executing neo start.
5. Use soft shutdown for the application process running the old version of the application:
a. Execute neo disable using the ID you identified in step 2. This command stops the creation of
new connections to the application from new end users, but keeps the already running ones alive.
b. Wait for some time so that all working sessions finish. You can monitor user requests and used
resources by configuring JMX checks, or, you can just wait for a given time period that should be
enough for most of the sessions to finish.
c. Stop the application process by executing neo stop with the --application-process-id parameter.
6. (Optional) Make sure the application process is stopped by checking its status using the
--application-process-id parameter.
7. If the application is running on more than one application process, repeat steps 4 and 5 until all the
processes running the old version are stopped and the corresponding number of processes running the
new version are started.
Example
For example, if your application runs on two application processes, you need to perform the following steps:
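As a sketch, the sequence for two old processes could look like this; IDs and names are placeholders, and the commands mirror the procedure above:

```shell
# List the processes and note the IDs of the two old ones.
neo status -h hana.ondemand.com -a mysubaccount -b myapp -u myuser

# Deploy the new version (does not yet affect the running processes).
neo deploy -h hana.ondemand.com -a mysubaccount -b myapp -s new-version.war -u myuser

# First rotation: start one new process, then drain and stop one old process.
neo start -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <old-id-1>
neo stop -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <old-id-1>

# Second rotation: repeat for the remaining old process.
neo start -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <old-id-2>
neo stop -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --application-process-id <old-id-2>
```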
Related Information
An operator can start and stop planned application downtime, during which a customized maintenance page
for that application is shown to end users.
Prerequisites
To redirect an application, you need a maintenance application. A maintenance application replaces your
application for a temporary period and can be as simple as a static page or have more complex logic. You need
to provide the maintenance application yourself and ensure that it meets the following conditions:
● It is a Java application.
● It is deployed in the same subaccount as your application.
● It has been started, that is, it is up and running.
● It must not be in maintenance itself.
● Its context path must be the same as the context path of the original application.
Context
Note
Cockpit
Context
You can enable the maintenance mode for an application from the overview page for the application. An
application can be put into maintenance mode only if it is not being used as a maintenance application itself
and is running (Started state).
Procedure
1. Log on to the cockpit, select a subaccount and choose Applications > Java Applications in the
navigation area.
2. Click the application's name in the list to open the application overview page, and in the Application
Maintenance section choose (Start Maintenance).
The Application Maintenance section then displays the following:
○ The state In Maintenance
○ A link to the assigned maintenance application: Click the link to open the overview page for this
application.
Results
Note that HTTP requests from already active sessions are redirected to the original application where possible.
This ensures that end users can complete their work without noticing the application downtime. Only
new HTTP requests are redirected to the maintenance application.
The temporary redirect to the maintenance application remains effective until you take your application out of
maintenance. To disable the maintenance mode, choose (Switch maintenance mode off). Before doing so,
you should ensure that your application is up and running to avoid end users experiencing HTTP errors.
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Start the planned application downtime by executing neo start-maintenance in the command line.
This stops traffic to the application and registers a maintenance page application. All active connections
will be still handled until the application is stopped.
If you want to have access to an application during maintenance, use the --direct-access-code
parameter. For more information, see start-maintenance [page 2138].
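A start-maintenance call could look like the following sketch; all names are placeholders, and the --maint-app parameter is assumed to name the maintenance application:

```shell
# Hedged example: redirect traffic for myapp to a deployed maintenance application.
neo start-maintenance -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --maint-app mymaintenanceapp

# When the maintenance is finished, resume normal traffic.
neo stop-maintenance -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
```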
3. Perform the planned maintenance, update, or configuration of your application:
a. Before stopping the application, wait for the working sessions to finish. You can wait for a given time
period that should be enough for most of the sessions to finish, or configure JMX checks to monitor
user requests and used resources. For more information, see Configure a JMX Check to Monitor Your
Application
4. Stop the planned application downtime by executing neo stop-maintenance in the command line.
This resumes traffic to the application and the maintenance page application stops handling incoming
requests.
Related Information
Soft shutdown enables an operator to stop an application or application process in a way that no data is lost.
Using soft shutdown gives sufficient time to finish serving end user requests or background jobs.
Prerequisites
Context
Using soft shutdown, an operator can restart the application (for example, in order to update it) in a way that
end users are not disturbed. First, the application process is disabled. This means that requests by users that
already have sessions continue to be served, while no new connections are accepted.
Cockpit
Context
You can disable application processes in the Processes panel on the application dashboard or the State panel
on the process dashboard.
Procedure
1. Log on to the cockpit, select a subaccount and choose Applications > Java Applications in the
navigation area.
2. Select an application in the application list.
3. In the Processes panel, choose (Disable process) in the relevant row. The process state changes to
Started (disabled).
Note
You can also select the process and disable it from the process dashboard.
4. Wait for some time so that all working sessions finish and then stop the process.
Related Information
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Disable the application by executing neo disable with the appropriate parameters. If you want to disable a
specific application process only, add the --application-process-id parameter.
If you disable the entire application, or all processes of the application, then new users requesting the
application will not be able to access it and will get an error.
3. Wait for some time so that all working sessions finish.
You can monitor user requests and used resources by configuring JMX checks, or, you can just wait for a
given time period that should be enough for most of the sessions to finish.
4. Stop the application by executing neo stop with the appropriate parameters. If you want to terminate a
specific application process only and not the whole application, add the --application-process-id
parameter.
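The console client steps above can be sketched as follows (names and the process ID are placeholders):

```shell
# Disable the whole application: no new connections are accepted ...
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser

# ... or disable only a single process:
# neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
#     --application-process-id <process-id>

# After the running sessions have finished, stop the application.
neo stop -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
```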
Related Information
In the event of unplanned downtime when there is no application process able to serve HTTP requests, a
default error is shown to users. To prevent this, an operator can configure a custom downtime page using a
downtime application, which takes over the HTTP traffic if an unplanned downtime occurs.
Prerequisites
Note
● You have downloaded and configured the console client. We recommend that you use the latest SDK. For
more information, see Set Up the Console Client [page 1412]
● You have deployed and started your own downtime application in the same SAP Cloud Platform
subaccount as the application itself.
● The downtime application has to be developed in a way that it returns an HTTP 503 return code. This is
especially important if availability checks are configured for the original application, so that unplanned
downtimes are properly detected.
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Configure the downtime application by executing neo set-downtime-app in the command line.
3. (Optional) If the downtime page is no longer needed (for example, if the original application has been
undeployed), you can remove it by executing the clear-downtime-app command.
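The two commands could be invoked as in the following sketch; all names are placeholders, and the --downtime-app parameter is assumed to name the downtime application:

```shell
# Hedged example: register a downtime application for myapp ...
neo set-downtime-app -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
    --downtime-app mydowntimeapp

# ... and remove the downtime page again when it is no longer needed.
neo clear-downtime-app -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
```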
Related Information
The SAP JVM Profiler helps you analyze resource-related problems in your Java application regardless of
whether the JVM is running locally or on the cloud.
Typically, you first profile the application locally. Then you may continue and profile it also on the cloud. The
basic procedure is the following:
Features
Allocation Trace Shows the number, size, and type of the allocated objects and the
methods allocating them.
Garbage Collection Trace Shows all details about the processed garbage collections.
Synchronization Trace Shows the most contended locks and the threads waiting for or
holding them.
File I/O Trace Shows the number of bytes transferred from or to files and the
methods transferring them.
Network I/O Trace Shows the number of bytes transferred from or to the network and
the methods transferring them.
Class Statistic Shows the classes, and the number and size of their objects currently
residing in the Java heap generations.
Tasks
Overview
After you have created a Web application and verified that it is functionally correct, you may want to inspect its
runtime behavior by profiling the application. This helps you to:
Prerequisites
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1453].
● You have installed SAP JVM as the runtime for the local server. For more information, see Set Up SAP JVM
in Eclipse IDE [page 1410]
Procedure
Note
Since profiling only works with SAP JVM, if another VM is used, going to Profile will result in opening a
dialog that suggests two options - editing the configuration or canceling the operation.
● If the server is in profile mode, and you choose Restart in Profile from the context menu, the profile
session will be restarted in [Profiling] state.
● If the server is in profile mode, and you choose Restart or Restart in Debug from the context menu, the
profile session will be disconnected and the server will be restarted.
Result
You have successfully started a profiling run of a locally deployed Web application. You can now trigger your
work load, create snapshots of the profiling data and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Related Information
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The
documentation is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and can be found via
Help > Help Contents > SAP JVM Profiler.
After you have created a Web application and verified that it is functionally correct, you may want to inspect its
runtime behavior by profiling the application on the cloud. It is best if you first profile the Web application
locally.
Prerequisites
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1453]
● Optional: You have profiled your Web application locally. For more information, see Profile Applications
Locally [page 2198]
Note
Currently, it is only possible to profile Web applications on the cloud that have exactly one application
process (node).
Procedure
○ From the server context menu, choose Profile (if the server is stopped) or Restart in Profile (if the
server is running).
○ Go to the application source code and from its context menu, choose Profile As > Profile on
Server.
Note
Currently, the Profiling perspective cannot be automatically switched but you need to open it manually.
Results
You have successfully initiated a profiling run of a Web application on the cloud. Now, you can trigger your
workload, create snapshots of the profiling data and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The
documentation is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and you can find it via
Help > Help Contents > SAP JVM Profiler.
Context
This page describes the format of the Default Trace file. You can view this file for your Web applications via
the cockpit and the Eclipse IDE.
For more information, see Investigating Performance Issues Using the SQL Trace [page 1564] and Using Logs in
the Eclipse IDE
Parameter Description
RECORD_SEPARATOR ASCII symbol for separating the log records. In our case, it is
"|" (ASCII code: 124)
ESC_CHARACTER ASCII symbol for escape. In our case, it is "\" (ASCII code:
92)
SEVERITY_MAP Mapping between the severities used in the log records and
human-readable values, for example:
FINEST|Information|FINER|Information|FINE|Information|
CONFIG|Information|DEBUG|Information|PATH|
Information|INFO|Information|WARNING|Warning|ERROR|
Error|SEVERE|Error|FATAL|Error
HEADER_END
Besides the main log information, the Default Trace logs information about the tenant users that have
accessed a relevant Web application. This information is provided in the new Tenant Alias column parameter,
which is automatically logged by the runtime. The Tenant Alias is:
● A human-readable string;
● For new accounts, it is shorter than the tenant ID (8-30 characters);
● Unique for the relevant SAP Cloud Platform landscape;
● Equal to the account name (for new accounts); might be equal to the tenant ID (for old accounts).
Example
In this example, the application has been accessed on behalf of two tenants - with identifiers 42e00744-
bf57-40b1-b3b7-04d1ca585ee3 and 5c42eee4-d5ad-494e-9afb-2be7e55d0f9c.
FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2
FILE_ID:1391169413918
ENCODING:[UTF8|NWCJS:ASCII]
RECORD_SEPARATOR:124
COLUMN_SEPARATOR:35
ESC_CHARACTER:92
COLUMNS:Time|TZone|Severity|Logger|ACH|User|Thread|Bundle name|JPSpace|
JPAppliance|JPComponent|Tenant Alias|Text|
SEVERITY_MAP:FINEST|Information|FINER|Information|FINE|Information|CONFIG|
Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|
ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END
2014 01 31 12:07:09#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-
bio-8041-exec-1##myaccount#myapplication#web#null#null#myaccount#The app was
accessed on behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|
2014 01 31 12:08:30#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-
bio-8041-exec-3##myaccount#myapplication#web#null#null#subscriberaccount#The app
was accessed on behalf of tenant with ID: '5c42eee4-d5ad-494e-9afb-2be7e55d0f9c'|
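Because the separators are declared in the file header (COLUMN_SEPARATOR 35 is '#', RECORD_SEPARATOR 124 is '|'), a record can be split mechanically. A minimal sketch, not part of the original documentation, that extracts the Severity column from a shortened record:

```shell
# Split one default-trace record on the '#' column separator (ASCII 35)
# and print the third column, which is Severity per the COLUMNS header line.
record='2014 01 31 12:07:09#+00#INFO#com.sap.demo.tenant.context.TenantContextServlet'
printf '%s\n' "$record" | awk -F'#' '{ print $3 }'   # prints: INFO
```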
Related Information
Trace user actions with excessive execution time within a complex system landscape using the End-to-End
(E2E) trace analysis.
The End-to-End trace analysis consists of features for performing analyses throughout your entire technical
landscape, so that you can isolate problematic components and identify root causes. You analyze a trace to
check the distribution of the response time over the client, network, and server. As a result, the response time
of each component involved in executing the request and the request path through the components are
provided to you for detailed analysis.
For additional information, see Root Cause Analysis and Exception Management.
Related Information
Configuring Connections for Collecting Statistical Data for SAP Cloud Platform
You need to configure the connection to the SAP Cloud Platform for retrieving the statistical data. Proceed as
follows, depending on the tool you use:
You need to configure a connection to your ABAP system for retrieving the statistical data. Proceed as follows,
depending on the tool you use:
● for SAP Solution Manager 7.2 SP06 and higher - see Managing Technical System Information and
Executing the Configuration Scenarios.
● for Focused Run for SAP Solution Manager (FRUN) - see Managed Systems Preparation & Maintenance
Guides and Preparing Managed Systems - SAP NetWeaver Application Server ABAP .
The E2E tracing is supported by default for HTML5 applications. To enable automatic upload of your business
transaction started by an HTML5 application to SAP Solution Manager or FRUN, proceed as described in E2E
Trace Involving SAP Cloud Platform .
Java Applications
E2E tracing is supported for Java applications in SAP Cloud Platform. For outgoing connections to other
systems, for example other Java applications in SAP Cloud Platform or on-premise systems, you must use the
Connectivity Service to ensure the correct forwarding of the SAP-PASSPORT for all outgoing connections,
depending on the runtime environment.
Context
For Java applications running on Java Web Tomcat 7, Java Web Tomcat 8, and Java EE 7 Web Profile TomEE
7, you have to update the SAP-PASSPORT and forward it as a header. To implement the tracing of outgoing
connection calls, you also have to configure your destinations using the destination names from the SAP Cloud
Platform cockpit, and then call them while forwarding the SAP-PASSPORT header.
Note
All following code blocks contain example code, which might only be similar to what you have to implement
in your application.
Sample Code
    return sapPassportHeaderProvider.getSapPassportHeader(CONNECTION_INFO);
}
Related Information
Interface SapPassportHeaderProvider
HTML5 Applications
The E2E tracing and gathering of statistics is supported by default for HTML5 applications.
For HTML5 applications started from the SAP Fiori Launchpad, you have to manually activate the gathering of
performance statistics for each site. Proceed as follows:
Java Applications
The E2E tracing and collection of data is disabled by default for Java applications and has to be activated on
demand. As prerequisites, you need a subaccount with a deployed and started Java application, you must be a
member of the subaccount, and you must have the Developer role.
You then receive an activation confirmation with the value true to notify you that the procedure was successful.
Procedure
To analyze an E2E trace, proceed as described below for the tool you use:
○ for SAP Solution Manager 7.2 SP06 and higher - Trace Analysis.
○ for Focused Run for SAP Solution Manager (FRUN) - Trace Analysis.
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple subaccounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1442].
● You have a subaccount in an enterprise account. For more information, see Global Accounts [page 8].
Context
Using multiple subaccounts ensures better stability. Also, you can achieve better security for productive
applications because permissions are given per subaccount.
For example, you can create three different subaccounts for one application and assign the necessary amount
of compute unit quota to them:
● dev - use for development purposes and for testing the increments in the cloud; you can grant permissions
to all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery
(integration testing and testing in a productive-like environment prior to making it publicly available)
● prod - use to run productive applications; give permissions only to operators
You can create multiple subaccounts and assign quota to them either using the console client or the cockpit.
Procedure
Next, you can deploy your application in the newly created subaccount using the Eclipse IDE or the console
client. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new subaccount.
Execute:
Next, you can deploy your application in the newly created subaccount by executing neo deploy -a
<subaccount> -h <host> -b <application name> -s <file location> -u <user name or
email>. Then, you can test your application and make it ready for productive use.
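Taken together, the console client steps above might look like the following sketch. The subaccount name, user, and file paths are placeholders, and the create-account command name and its parameters are assumptions not confirmed by this text; check the help output of your SDK version:

```shell
# Create a separate subaccount for a life cycle stage, e.g. development
# (command and parameter names are assumptions - verify with the
# console client help).
neo create-account --account mysubaccount-dev --host hana.ondemand.com \
    --user myuser

# Deploy the application into the new subaccount, using the syntax
# shown in the text above.
neo deploy -a mysubaccount-dev -h hana.ondemand.com -b myapp \
    -s ./myapp.war -u myuser
```

You would repeat the same two steps for the test and prod subaccounts, granting member permissions per subaccount as described above.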
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Related Information
After you have developed and deployed your SAP HANA XS application, you can then monitor it.
Cockpit: Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1592]
Console Client: Configure Availability Checks for SAP HANA XS Applications from the Console Client [page 1594]
For an overview of the current status of the individual HTML5 applications in your subaccount, use the SAP
Cloud Platform cockpit.
It provides key information in a summarized form and allows you to initiate actions, such as starting or
stopping.
Managing Destinations
Monitoring
Logging
Related Information
You can export HTML5 applications either with their active version or with an inactive version.
Procedure
1. Choose Applications HTML5 Applications in the navigation area, and then the link to the application
you want to export.
2. Choose Versioning in the navigation area, and then choose Versions under History.
3. In the table row of the version you want to export, choose the export icon.
4. Save the zip file.
You can import HTML5 applications either by creating a new application or by creating a new version for an
existing application.
Note
When you import an application or a version, the version is not imported into the master branch of the
repository. Therefore, the version is not visible in the history of the master branch. You have to switch to
Versions in the navigation area.
Procedure
1. To upload a zip file, choose Applications HTML5 Applications in the navigation area, and then Import
from File.
2. In the Import from File dialog, browse to the zip file you want to upload.
3. Enter an application name and a version name.
4. Choose Import.
The new application you created by importing the zip file is displayed in the HTML5 Applications section.
5. To activate this version, see Activate a Version [page 1716].
Procedure
1. Choose Applications HTML5 Applications in the navigation area, and then the application for which
you want to create a new version.
2. Choose Versioning in the navigation area.
3. To upload a zip file, choose Versions under History and then Import from File.
4. In the Import from File dialog, browse to the zip file you want to upload.
5. Enter a version name.
6. Choose Import.
The new version you created by importing the zip file is displayed in the History table.
7. To activate this version, select the Activate this application version icon in the table row for this version.
8. Confirm that you want to activate the application.
On the Application Details panel, you can add or change a display name and a description for the selected
HTML5 application.
Context
If a display name is maintained, this display name is also shown in the list of HTML5 applications and in the list
of HTML5 subscriptions instead of the application name.
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. Choose Applications HTML5 Applications in the navigation area, and select the application for which
to add or change a display name and description.
3. Under Application Details of the Overview section, choose Edit.
4. Enter a display name and a description for the HTML5 application.
Field Comment
Display Name Human-readable name that you can specify for your HTML5 application.
Description Short descriptive text about the HTML5 application, typically stating what it
does.
An HTML5 application can have multiple versions, but only one of these can be active. This active version is
then available to end-users of the application.
However, developers can access all versions of an application using unique URLs for testing purposes.
The Versioning view in the cockpit displays the list of available versions of an HTML5 application. Each version
is marked either as active or inactive. You can activate an inactive version using the activation button.
For every version, the required destinations are displayed in a details table. To assign a destination from your
subaccount global destinations to a required destination, choose Edit in the details table. By default, the
destination with the same name as the name you defined for the route in the application descriptor is assigned.
If this destination does not exist, you can either create the destination or assign another one.
When you activate a version, the destinations that are currently assigned to this version are copied to the active
application version.
Related Information
If an HTML5 application requires connectivity to one or more back-end systems, destinations must be created
or assigned.
Prerequisites
Context
For the active application version the referenced destinations are displayed in the HTML5 Application section of
the cockpit. For a non-active application version the referenced destinations are displayed in the details table in
the Versioning section. HTML5 applications use HTTP destinations, which can be defined on the level of your
subaccount.
By default, the destination with the same name as the name you defined for the route in the application
descriptor is assigned. If this destination does not exist, you can create the destination with the same name as
described in Configure Destinations from the Cockpit [page 203]. Then you can assign this newly created
destination. Alternatively, you can assign another destination that already exists in your subaccount. To assign
a destination, follow the steps below.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. Choose Applications HTML5 Applications in the navigation area, and choose the application for
which you want to assign a different destination (than the default one) from your subaccount global
destinations.
3. Choose Edit in the Required Destinations table.
4. In the Mapped Subaccount Destinations column, choose an existing destination from the dropdown list.
End users can only access an application if the application is started. As long as an application is stopped, its
end user URL does not work.
Context
The first start of the application usually occurs when you activate a version of the application. For more
information, see Activating a Version.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
The end user URL for the application is displayed under Active Version.
Related Information
Resources of an HTML5 application can be protected by permissions. The application developer defines the
permissions in the application descriptor file.
To grant a user the permission to access a protected resource, you can either assign a custom role or one of
the predefined virtual roles to such a permission. The following predefined virtual roles are available:
The role assignments are only effective for the active application version. To protect non-active application
versions, the default permission NonActiveApplicationPermission is defined by the system for every
non-active application version.
As long as no other role is assigned to a permission, only subaccount members with developer or administrator
permission have access to the protected resource. This is also true for the default permission
NonActiveApplicationPermission.
You can create roles in the cockpit using either the HTML5 Applications panel or the Subscriptions panel.
Note
An HTML5 application’s own permissions also apply when the application is reached from another HTML5
application (see Accessing Application Resources [page 1726]). Previously, only the permissions of the
HTML5 application that was accessed first were considered. If you need time to assign the proper roles,
you can temporarily switch back to the previous behavior by unchecking Always Apply Permissions in the
cockpit.
Related Information
You can manage roles and permissions for the HTML5 applications or subscriptions using the HTML5
Applications panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in
Application Identity Provider [page 2407].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role
affects all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5
application or of your HTML5 application subscription to an HTML5 application.
Procedure
You can manage roles and permissions for the HTML5 applications or subscriptions using the Subscriptions
panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in
Application Identity Provider [page 2407].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role
affects all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5
application or of your HTML5 application subscription to an HTML5 application.
Procedure
You can view logs on any HTML5 application running in your subaccount or subscriptions to these apps.
Currently, only the default trace log file is written. The file contains error messages caused by missing back-end
connectivity, for example, a missing destination, or logon errors caused by your subaccount configuration.
Context
One log file is written per day. The logs are kept for 7 days before they are deleted. If the application is deleted, the logs
are deleted as well. A log is a virtual file consisting of the aggregated logs of all processes. Currently, the
following data is logged:
● The time stamp (date, time in milliseconds, time zone) of when the error occurred
● A unique request ID
● The log level (currently only ERROR is available)
● The actual error message text
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Related Information
Log Viewers
By default, all applications running on SAP Cloud Platform are accessed on the hana.ondemand.com domain.
According to your needs, you can change the default application URL by configuring application domains
different from the default one: custom or platform domains.
You can configure application domains using SAP Cloud Platform console client.
Note that you can use either platform domains or custom domains.
Custom Domains
Use custom domains if you want to make your applications accessible on your own domain different from
hana.ondemand.com - for example, www.myshop.com. When a custom domain is used, the domain name as
well as the server certificate for this domain are owned by the customer.
Platform Domains
Caution
You can configure different platform domains only for Java applications.
By default, applications accessible on hana.ondemand.com are available on the Internet. Platform domains
enable you to use additional features by using a platform URL different from the default one.
For example, you can use svc.hana.ondemand.com to hide the application from the Internet and access it only
from other applications running on SAP Cloud Platform, or cert.hana.ondemand.com if you want an application
to be accessed with client certificate authentication.
Related Information
SAP Cloud Platform allows subaccount owners to make their SAP Cloud Platform applications accessible via a
custom domain that is different from the default one (hana.ondemand.com) - for example www.myshop.com.
Note
Prerequisites
To use a custom domain for your application, you must fulfill a number of preliminary steps. For more
information about these steps, see Prerequisites [page 2224].
Scenario
After fulfilling the prerequisites, you can configure the custom domain on your own using SAP Cloud Platform
console client commands.
First, set up secure SSL communication to ensure that your domain is trusted and all application data is
protected. Then, route the traffic to your application:
1. Create an SSL Host [page 2228] - the host holds the mapping between your chosen custom domain and
the application on SAP Cloud Platform as well as the SSL configuration for secure communication through
this custom domain.
2. Upload a Certificate [page 2229] - it will be used as a server certificate on the SSL host.
3. Bind the Certificate to the SSL Host [page 2231].
4. Add the Custom Domain [page 2232] - this maps the custom domain to the application URL.
5. Configure DNS [page 2232] - you can create a CNAME mapping.
The configuration of custom domains has different setups related to the subscriptions of your subaccount. For
more information about custom domains for applications that are part of a subscription, see Custom Domains
for Multitenant Applications [page 2235].
Related Information
5.3.3.5.1.1 Prerequisites
Before configuring SAP Cloud Platform custom domains, you need to make some preliminary steps and fulfill a
number of prerequisites.
You need to have a quota for domains configured for your global account. One custom domain quota
corresponds to one SSL host that you can use. For more information, see Purchase a Customer Account.
The following two steps involve external service providers - domain name registrar and certificate authority.
Note
The domain name and the server certificate for this domain are issued by external authorities and owned
by the customer.
You need to come up with a list of custom domains and applications that you want to be served through them.
For example, you may decide to have three custom domains: test.myshop.com, preview.myshop.com,
www.myshop.com - for test, preview, and productive versions of your SAP Cloud Platform application.
The domain names are owned by the customer, not by SAP Cloud Platform. Therefore, you will need to buy the
custom domain names that you have chosen from a registrar selling domain names.
To make sure that your domain is trusted and all your application data is protected, you have to get an
appropriate SSL certificate from a Certificate Authority (CA).
You need to decide on the number and type of domains you want to be protected by this certificate. One SSL
host can hold one SSL certificate. One certificate can be valid for a number of domains and subdomains.
There are various types of SSL certificates. Depending on your needs, you can choose between:
● Certificate for a specific domain name - secures a single domain name, for example, www.myshop.com.
● Wildcard subdomain certificate - secures all subdomains of a domain, for example, *.myshop.com.
Note
Choosing the wildcard subdomain certificate ensures protection of all subdomains in your custom
domain (*.myshop.com), but not the domain itself (myshop.com cannot be used).
Using a wildcard certificate allows you to map a large number of subdomains to a single SSL host.
However, this feature comes with several disadvantages:
○ If the certificate suffers from a security breach, it can affect all applications hosted on these
subdomains.
○ If the HTTP traffic is too massive, it may cause performance issues for all applications hosted on these
subdomains.
If there are too many custom domain mappings, consider using more SSL hosts to reduce the HTTP
traffic load.
● Subject Alternative Name (SAN) certificate - secures multiple domain names with a single certificate.
This type allows you to use any number of different domain names or common names. For example, one
certificate can support: www.myshop.com, *.test.myshop.com, *.myshop.eu, www.myshop.de.
4. Generate a CSR
To issue an SSL certificate and sign it with the CA of your choice, you need a certificate signing request (CSR).
You must create the CSR using our generate-csr command. For more information, see generate-csr [page
2025].
Caution
The CSR is valid only for the host on which it was generated and cannot be moved or downloaded. The
host represents the region: for example, hana.ondemand.com for Europe; us1.hana.ondemand.com for the
United States; ap1.hana.ondemand.com for Asia-Pacific, and so on.
Use the CA of your choice to sign the CSR. The certificate has to be in Privacy-enhanced Electronic Mail (PEM)
format (128 or 256 bits) with private key (2048-4096 bits).
Related Information
Install the SAP Cloud Platform SDK for Neo Environment [page 1403]
Set Up the Console Client [page 1412]
Using the Console Client [page 1928]
Configuring Custom Domains [page 2227]
Guided Answers (Neo Environment)
Frequently Asked Questions [page 2240]
Context
Note
For SAP Cloud Platform Integration applications, there are some differences in the procedure. For more
information, see Configuring Custom Domains for SAP Cloud Platform Integration.
You have to create an SSL host that will serve your custom domain. This host holds the mapping between your
chosen custom domain and the application on SAP Cloud Platform as well as the SSL configuration for secure
communication through this custom domain.
Prerequisites
To use the console commands, install an SDK according to the instructions in Install the SAP Cloud Platform
SDK for Neo Environment [page 1403].
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK
installation folder>/tools).
2. Create an SSL host. In the console client command line, execute neo create-ssl-host. For example:
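The example command might look like the following sketch. The --name value and the remaining parameter names beyond the standard -a/-h/-u options shown elsewhere in this guide are assumptions; check the console client help for create-ssl-host:

```shell
# Create an SSL host in your subaccount (placeholder values; the
# --name flag is an assumption - verify with the console client help).
neo create-ssl-host --name mysslhostname -a mysubaccount \
    -h hana.ondemand.com -u myuser
```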
Note
In the command output, you get the SSL host. For example, "A new SSL host [mysslhostname]
was created and is now accessible on host [123456.ssl.ondemand.com]". Write down
the 123456.ssl.ondemand.com host as you will later need it for the DNS configuration.
You need an SSL certificate to allow secure communication with your application. Once installed, the SSL
certificate is used to identify the client/individual and to authenticate the owner of the site.
Context
The certificate generation process starts with certificate signing request (CSR) generation. A CSR is an
encoded file containing your public key and specific information that identifies your company and domain
name.
The next step is to use the CSR to get a server certificate signed by a certificate authority (CA) chosen by you.
Before buying, carefully consider the appropriate type of SSL certificate you need. For more information, see
Prerequisites [page 2224].
Procedure
1. Generate a CSR.
The --name parameter is the unique identifier of the certificate within your subaccount on SAP Cloud
Platform and will be used later. It can contain alphanumeric symbols, '.', '-' and '_'.
○ CN = Common Name – the domain name(s) for which you are requesting the certificate - for example
‘www.example.com’
○ C = Country - two-letter code - for example, ‘GB’
○ ST = State - state or province name - for example, ‘Hampshire’
○ L = Locality – city full name - for example ‘Portsmouth’
○ O = Organization – company name
○ OU = Organizational Unit – for example ‘IT Department’
○ E = Email Address – to validate the certificate request, some certificate authorities require the email
address of the domain owner
Note
For security reasons, SAP recommends that you use only certificates that are based on CSRs
generated via the generate-csr command.
Note
When sending the CSR to be signed by a CA, make sure you choose F5 BigIP for server type.
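A CSR generation call might look like the following sketch. The --name parameter is named in the text above; the subject flag name and the standard -a/-h/-u options are assumptions based on the other commands in this guide, so check the help for generate-csr:

```shell
# Generate a CSR on the platform; --name is the certificate identifier
# used in later steps. The subject string uses the DN fields listed
# above (the flag name itself is an assumption).
neo generate-csr -a mysubaccount -h hana.ondemand.com -u myuser \
    --name mycertificate \
    --subject "CN=www.example.com,C=GB,ST=Hampshire,L=Portsmouth,O=Example Ltd,OU=IT Department"
```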
3. Upload the SSL certificate you received from the CA to SAP Cloud Platform:
Note
The certificate must be in Privacy-enhanced Electronic Mail (PEM) format (128 or 256 bits) with private
key (2048-4096 bits).
Note that some CAs issue chained root certificates that contain an intermediate certificate. In such cases,
put all certificates in the file for upload starting with the signed SSL certificate.
If you did not upload an intermediate certificate for some reason, you can use the --force parameter.
Put the missing certificate in the file, add the --force parameter, and retry the previously
executed upload-domain-certificate command without changing the values of the remaining
parameters.
Caution
Once uploaded, the domain certificate (including the private key) is securely stored on SAP Cloud
Platform and cannot be downloaded for security reasons.
Note that when the certificate expires, you will receive a notification from your CA. You need to take care of
the certificate update. For more information, see Update an Expired Certificate [page 2237]
Tip
The number of certificates you can have is limited and is calculated based on the number of custom
domains you have multiplied by 4. For example, if you have one custom domain, you can have 4
certificates.
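An upload call might look like the following sketch. The command name and --name parameter are taken from the text above; the file-location flag name is an assumption, so check the help for upload-domain-certificate:

```shell
# Upload the signed certificate chain in one PEM file, starting with
# the signed SSL certificate, followed by any intermediate
# certificates (the --location flag name is an assumption).
neo upload-domain-certificate -a mysubaccount -h hana.ondemand.com \
    -u myuser --name mycertificate --location ./www_example_com_chain.pem
```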
You need to bind the uploaded certificate to the created SSL host so that it can be used as SSL certificate for
requests to this SSL host.
Procedure
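The binding step might look like the following sketch, using the set-ssl-host command referenced later in this guide. The --certificate flag name is an assumption; check the help for set-ssl-host:

```shell
# Bind the uploaded certificate to the SSL host so it is served for
# requests to this host (flag names are assumptions).
neo set-ssl-host -a mysubaccount -h hana.ondemand.com -u myuser \
    --name mysslhostname --certificate mycertificate
```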
To make your application on the SAP Cloud Platform accessible via the custom domain, you need to map the
custom domain to the application URL.
Procedure
1. In the console client command line, execute neo add-custom-domain with the appropriate parameters.
Note that you can only do this for a started application.
Note
Query strings are not supported in the --application-url parameter and are ignored. For example,
if you specify “mysubaccountmyapp.hana.ondemand.com/sites?idp=example” for --application-
url, the “?idp=example” part will be ignored.
After you configure an application to be accessed over a custom domain, its default platform URL
hana.ondemand.com will no longer be accessible. It will only remain accessible for subscribed
applications with a URL of type https://<application_name><provider_subaccount>-
<consumer_subaccount>.<domain>. You have the option to disable the access to the default platform
URL for subscribed applications with the --disable-application-url parameter of the add-custom-
domain command.
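An add-custom-domain call might look like the following sketch. The --application-url and --disable-application-url parameters are named in the text above; the flag name for the custom domain itself is an assumption, so check the help for add-custom-domain:

```shell
# Map the custom domain to the URL of a started application
# (placeholder values; the --custom-domain flag name is an assumption).
neo add-custom-domain -a mysubaccount -h hana.ondemand.com -u myuser \
    --custom-domain www.myshop.com \
    --application-url mysubaccountmyapp.hana.ondemand.com
```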
To route the traffic for your custom domain to your application on SAP Cloud Platform, you also need to
configure it in the Domain Name System (DNS) that you use.
Context
You need to make a CNAME mapping from your custom domain to the created SSL host for each custom
domain you want to use. This mapping is specific for the domain name provider you are using. Usually, you can
modify CNAME records using the administration tools available from your domain name registrar.
1. Sign in to the domain name registrar's administrative tool and find the place where you can update the
domain DNS records.
2. Locate and update the CNAME records for your domain to point to the DNS entry you received from us
(*.ssl.ondemand.com) - the one that you got as a result when you created the SSL host using the
create-ssl-host command. For example, 123456.ssl.ondemand.com. You can check the SSL host by
executing the list-ssl-hosts command.
For example, if you have two DNS records: myhost.com and www.myhost.com, you need to configure
them both to point to the SSL host 123456.ssl.ondemand.com.
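After updating the records at your registrar, you can verify the mapping from the command line. This sketch assumes the dig tool is available and uses the example hostnames from the text above:

```shell
# Check that the custom domain resolves via CNAME to the SSL host
# returned by create-ssl-host (example hostnames only).
dig +short CNAME www.myhost.com
# For the example above, the answer should be 123456.ssl.ondemand.com.
```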
After you configure the custom domain, make sure that the setup is correct and your application is accessible
on the new domain.
Procedure
1. Log on to the cockpit, select a subaccount and go to your Application Dashboard. In Application URLs,
check if the new custom URL has replaced the default one.
2. Open the new application URL in a browser. Make sure that your application responds as expected.
3. Check that there are no security warnings in the browser. View the certificate in the browser. Check the
Subject and Subject Alternative Name fields - the domain names there must match the custom domain.
4. Perform a small load test - request the application from different browser sessions making at least 15
different requests.
Results
After this procedure, your application will be accessible on the custom domain, and you will be able to log on
(single sign-on) successfully. Single logout, however, may not work yet. If you have a custom trust configuration
in your subaccount, you will need to perform an additional configuration to enable single logout.
Next Steps
Configure single logout. For more information, see Configure Single Logout [page 2234]
To enable single logout, you need to configure the Custom Domain URLs, and, optionally, the Central Redirect
URL for the SAML single sign-on flow. Even if single sign-on works successfully with your application at the
custom domain, you will need to follow the current procedure to enable single logout.
Prerequisites
● You are logged on with a user with administrator role. See Managing Member Authorizations in the Neo
Environment [page 1904].
● You are aware of the productive region that hosts your subaccount. See Regions and Hosts Available for the
Neo Environment [page 14].
● You are using a custom trust configuration for your subaccount. See Configure SAP Cloud Platform as a
Local Service Provider [page 2408].
● You have configured the required trust settings for your subaccount. See Configure Trust to the SAML
Identity Provider [page 2411].
Context
Central Redirect URL is the central node that facilitates assertion consumer service (ACS) and single logout
(SLO) service. By default, this node is provided by SAP Cloud Platform, and has the authn.<productive
region host> URL (for example, authn.hana.ondemand.com). If you want to use your application’s
root URL as the ACS, instead of the central node, you will need to maintain the Central Redirect URL.
For Java applications, you can follow the procedure described in the current document.
Note
For HANA XS applications that use SAP ID Service as authenticating authority, create an incident in
component BC-IAM-IDS. For HANA XS applications that use SAP Cloud Platform Identity Authentication
service for authentication, see Configure a Trusted Service Provider to learn how to update the ACS and
SLO endpoints.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit and choose Security Trust in the
navigation area.
2. Choose the Custom Application Domains Settings subtab.
3. Choose Edit. The custom domains properties become editable.
4. Select the Use Custom Application Domains option.
Tip
The Central Redirect URL value has to be the same as the host of the ACS endpoint value in the
metadata of the service provider.
Note
Make sure you do not stop the application VM specified as the Central Redirect URL. Otherwise, SAML
authentication will fail for all applications in your subaccount.
6. The values in Custom Domain URLs are used for SLO. Enter the required values (all custom domain URLs)
in Custom Domain URLs.
7. Save your changes. The system generates the respective SLO endpoints. Test them in your Web browser
and make sure they are accessible from there.
Tip
The system will accept URL values with or without https://. Either way, the system will generate the
correct ACS and SLO endpoint URLs.
Configuration of custom domains has different setups related to the subscriptions of your subaccount.
Subscriptions represent applications that your subaccount has purchased for use from an application provider.
A subscription means that there is a contract between an application provider and a tenant that authorizes the
tenant to use the provider's application. As the consumer subaccount, you do not own, deploy, or operate
these applications yourself. Subscriptions allow you to configure certain features of the applications and launch
them through consumer-specific URLs.
● The custom domain is owned by the application provider who uses an SSL host from their subaccount
quota. The provider also does the configuration and assignment of the custom domain. The provider can
assign a subdomain of its own custom domain to a particular subscription URL. To do this, the provider
needs to have rights in both the provider and consumer subaccount.
● The customer (consumer) uses an SSL host from the consumer subaccount quota. In this case, the
customer (consumer) owns the custom domain and the SSL host and is therefore able to do the necessary
configuration on their own.
Related Information
If you want your customers to use client certificates when they access your application on SAP Cloud Platform
via a custom domain, you can configure client certificate authentication for your SSL host.
Prerequisites
You have configured a custom domain for your SAP Cloud Platform application. For more information, see
Using Custom Domains [page 2223].
Features
● Create and upload a list of trusted CA (certificate authority) certificates and assign that list to a previously
created SSL host. For more information, see add-ca [page 1940].
● Configure the SSL host to optionally or mandatorily require client certificate authentication. To do that, use
the --ca-bundle parameter when executing the set-ssl-host command. For more information, see
set-ssl-host [page 2130].
Related Information
When the certificate for the custom domain expires or it is about to expire, you can either upload and bind a
new certificate based on a new CSR, or upload and bind a new certificate based on an already existing CSR.
Context
Upload and bind a new certificate to the SSL host to replace the expired certificate by generating a new CSR. If
you had configured the certificate using the console client commands, follow the steps:
Procedure
1. Generate a new CSR by executing the neo generate-csr command with the appropriate parameters:
The set-ssl-host command allows you to unbind the expired certificate and bind the new one to the
SSL host in one step. For more information, see set-ssl-host [page 2130].
5. To verify that you have correctly configured the new certificate, execute neo list-domain-
certificates.
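The renewal sequence with a new CSR might look like the following sketch. Command names come from the text above; flag names other than --name are assumptions, and the subject parameters for generate-csr are omitted here:

```shell
# 1. Generate a new CSR under a new certificate name
#    (subject parameters omitted in this sketch).
neo generate-csr -a mysubaccount -h hana.ondemand.com -u myuser \
    --name mycertificate2

# 2. After the CA signs the new CSR, upload the renewed chain
#    (the --location flag name is an assumption).
neo upload-domain-certificate -a mysubaccount -h hana.ondemand.com \
    -u myuser --name mycertificate2 --location ./renewed_chain.pem

# 3. Unbind the expired certificate and bind the new one in one step.
neo set-ssl-host -a mysubaccount -h hana.ondemand.com -u myuser \
    --name mysslhostname --certificate mycertificate2

# 4. Verify the new certificate's validity.
neo list-domain-certificates -a mysubaccount -h hana.ondemand.com -u myuser
```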
Context
Some certificate authorities (CA) offer to sign an SSL certificate based on an already existing CSR. If you
choose this option, you don't generate a new CSR.
Note
When you update your old certificate this way, you overwrite it with the new certificate.
Procedure
1. When you receive the signed SSL certificate from your CA, upload it to SAP Cloud Platform by executing:
The --force parameter allows you to overwrite the old certificate and bind the new certificate in one step.
For more information, see upload-domain-certificate [page 2156].
If you don't use the --force parameter, you won't be able to bind the certificate to the SSL host.
2. To verify that the validity of the certificate is updated, execute neo list-domain-certificates.
Related Information
If you do not want to use the custom domain any longer, you can remove it using the console client commands.
As a result, your application will be accessible only on its default hana.ondemand.com domain.
Procedure
Related Information
Answers to some of the most commonly asked questions about custom domains in the SAP Cloud Platform
Neo environment.
How many domains (URLs) do I get to use for one custom domain?
For each custom domain that you purchase, you can create one SSL host. You can then map one domain
certificate to that SSL host. The number of domains (URLs) that you can use with a single domain certificate
depends on the type of the domain certificate:
● If the certificate is issued for a specific domain name (for example, webshop.acme.com), you can use one
domain.
● If you use a wildcard certificate (for example, *.acme.com), the certificate is valid for all subdomains of
acme.com.
Note
Using a wildcard certificate allows you to map a large number of subdomains to a single SSL
host. However, this feature comes with several disadvantages:
○ If the certificate suffers from a security breach, it can affect all applications hosted on these
subdomains.
○ If the HTTP traffic is too heavy, it may cause performance issues for all applications hosted on
these subdomains.
If there are too many custom domain mappings, consider using more SSL hosts to reduce the
HTTP traffic load.
● If you use a Subject Alternative Names (SAN) certificate, you can use many domains. This type of
certificate is usually used when multiple aliases of the same application are needed. For example,
www.acme.com, www.login.acme.com.
Note
Each of these options has pros and cons. It's up to you to decide which type of certificate you are going to
use.
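To illustrate how the certificate types above differ in coverage, the following sketch checks whether a certificate's subject name covers a given domain. The hostnames are hypothetical, and the matching rules are deliberately simplified; real TLS hostname matching follows RFC 6125 and is stricter than this.

```python
# Simplified sketch of certificate name coverage (hypothetical hostnames).
# Real TLS hostname matching follows RFC 6125 and is stricter than this.

def covers(cert_name: str, hostname: str) -> bool:
    """Return True if a certificate issued for cert_name covers hostname."""
    if cert_name.startswith("*."):
        # A wildcard matches exactly one label: shop.acme.com matches,
        # but a.b.acme.com and the bare domain acme.com do not.
        suffix = cert_name[1:]  # ".acme.com"
        same_depth = len(hostname.split(".")) == len(cert_name.split("."))
        return hostname.endswith(suffix) and same_depth
    return cert_name == hostname

# Specific-name certificate: exactly one domain.
assert covers("webshop.acme.com", "webshop.acme.com")
assert not covers("webshop.acme.com", "shop.acme.com")

# Wildcard certificate: all direct subdomains, but not the bare domain.
assert covers("*.acme.com", "shop.acme.com")
assert not covers("*.acme.com", "acme.com")
assert not covers("*.acme.com", "a.b.acme.com")

# SAN certificate: several explicit names for aliases of one application.
san_names = ["www.acme.com", "www.login.acme.com"]
assert any(covers(n, "www.login.acme.com") for n in san_names)
```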
For SAP Cloud Platform Integration applications, there are some differences in the procedure. When you map
the custom domain to the Cloud Integration URL, keep in mind that the URL consists of several URL elements.
You can find these URL elements in the cockpit. For more information, see Configuring Custom Domains for
SAP Cloud Platform Integration.
Does my application remain accessible on the default URL after I configure a custom domain?
It depends. The default hana.ondemand.com URL remains accessible only if the application is part of a
subscription. This type of application has the following URL format: https://
<application_name><provider_subaccount>-<consumer_subaccount>.<domain>. If needed, you
can disable the access to the default hana.ondemand.com URL with the --disable-application-url
parameter of the add-custom-domain [page 1942] command. For more information, see Custom Domains for
Multitenant Applications [page 2235].
In all other cases, the default hana.ondemand.com becomes inaccessible and cannot be used along with the
configured custom domain URL.
Is there any troubleshooting information for custom domains?
Yes, there is. If you are facing a technical issue and you are not sure how to proceed, see Guided Answers.
Related Information
Using platform domains, you can configure the network availability or authentication policy of your
application. You achieve this by configuring the appropriate platform domain, which changes the URL on
which your application is accessible.
Prerequisites
You have installed and configured SAP Cloud Platform console client. For more information, see Setting Up the
Console Client.
Context
You can configure the platform domains using the application-domains group of console client commands:
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK
installation folder>/tools).
2. Configure the platform domain you have chosen by executing the add-platform-domain command.
For example, if you choose the cert.hana.ondemand.com platform domain, the specified application
will be accessible on cert.hana.ondemand.com and on the default hana.ondemand.com domain.
Procedure
1. To make sure the new platform domain is configured, execute the list-application-domains
command:
2. Check if the returned list of domains contains the platform domain you set.
Procedure
1. When you no longer want the application to be accessible on the configured platform domain, remove it by
executing the remove-platform-domain command:
2. Repeat the step for each platform domain you want to remove.
Related Information
Using an on-premise reverse proxy allows you to combine on-premise and cloud-based web applications in the
same browser window.
Scope
● Java applications
● HTML5 applications
● Both host and port mapping for reverse proxy
● More than one reverse proxy address can be mapped to the same application URL.
Browsers often do not allow you to combine on-premise and cloud-based web applications in one browser
window because of their cross-site information transfer prevention policy. Browsers treat this type of
information transfer as a security threat by default, which makes it impossible to perform cookie
exchange and, in particular, cookie-based authentication.
You can use an on-premise reverse proxy as the sole origin for the browser. This feature allows you to manage
on-premise and cloud applications in the same browser window. The commands listed in this topic allow SAP
Cloud Platform applications to be accessed via such proxies.
Note
Keep in mind that you cannot use these commands for applications configured with custom
domains.
There are several options available for managing mappings between the cloud application uniform resource
identifier (URI) and the proxy host. Having a proxy-to-application mapping allows access to the application via
the on-premise reverse proxy.
Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools). Then, you can manage the proxy host mappings by using the reverse-proxy group of
console client commands:
● The map-proxy-host command maps an application host to an on-premise reverse proxy host and port.
● The unmap-proxy-host command deletes the mapping between an application host and an on-premise
reverse proxy host and port.
● The list-proxy-host-mappings command lists the proxy hosts mapped to an application host.
Proxy Configuration
You need to configure the on-premise proxy to send a header with the key x-proxy-host. As a result, when
your HTTP request arrives in the cloud, it is routed properly to App 2. The header value should contain the
application host of App 2, which is app.hana.ondemand.com in this specific example.
Note
If you do not use the x-proxy-host header, you will receive the Service Unavailable error message.
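The routing decision described above can be sketched as a small function. The host names are hypothetical, and the header handling is simplified to the single decision the note describes:

```python
# Sketch of how the cloud side resolves the target application host from the
# x-proxy-host header sent by the on-premise reverse proxy (hypothetical names).

def route(headers: dict) -> str:
    target = headers.get("x-proxy-host")
    if not target:
        # Without the header the platform cannot resolve the proxy mapping.
        return "503 Service Unavailable"
    return f"routed to {target}"

assert route({"x-proxy-host": "app.hana.ondemand.com"}) == "routed to app.hana.ondemand.com"
assert route({}) == "503 Service Unavailable"
```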
Related Information
Use the security features and functions of SAP Cloud Platform to support the security policies of your
organization. Security features and functions help you control and monitor your applications.
Identity Federation
To enable you to seamlessly integrate SAP Cloud Platform applications with existing on-premise identity
management infrastructures, SAP Cloud Platform provides single sign-on (SSO) and identity federation
features. In SAP Cloud Platform, identity information is provided by identity providers (IdP), and not stored on
SAP Cloud Platform itself. You can have a different IdP for each subaccount you own, and this feature is
configurable using the cockpit.
The following figure illustrates the high-level architecture of identity management in SAP Cloud Platform.
Application authorizations are managed in the platform, and can be grouped to simplify administration.
Related Information
Authorization and Trust Management in the Cloud Foundry Environment [page 2247]
Audit Logging [page 2353]
Authorization and Trust Management in the Neo Environment [page 2358]
Platform Identity Provider [page 2431]
OAuth 2.0 Service [page 2436]
Keystore Service [page 2459]
Audit Logging in the Neo Environment [page 2493]
Principal Propagation [page 2499]
Protection from Web Attacks [page 2518]
Data Protection and Privacy [page 2527]
The subaccounts get their users from identity providers. Administrators make sure that users can only access
their dedicated subaccount by making sure that there is a dedicated trust relationship only between the
identity providers and the respective subaccounts. Developers configure and deploy application-based security
artifacts containing authorizations, and administrators assign these authorizations using the cockpit.
For more detailed descriptions of these functions, see the links in the following table:

Function | See
Access management in the Cloud Foundry environment of SAP Cloud Platform, including the User Account and Authentication service | Access Management in the Cloud Foundry Environment [page 2255]
SAP Cloud SDK: How to secure your application in the Cloud Foundry environment of SAP Cloud Platform | Tutorial: Securing your application (Cloud Foundry environment) for SAP Cloud SDK

The Cloud Foundry environment of SAP Cloud Platform adopts common industry security standards in order
to provide flexibility for customers through a high degree of interoperability with other vendors.
Principal Propagation
Exchange user ID information between systems or environments in SAP Cloud Platform.
In This Section
● Principal Propagation from the Cloud Foundry to the Neo Environment [page 2510]
● Principal Propagation from the Neo to the Cloud Foundry Environment [page 2504]
SAP Cloud Platform supports the use of single sign-on (SSO) authentication based on Security Assertion
Markup Language 2.0 (SAML). An SAML identity provider is used by an SAML service provider to authenticate
users who sign in to an application by means of single sign-on. The User Account and Authentication (UAA)
component is the central infrastructure component of the runtime platform for business user authentication
and authorization management. The users are stored in the SAML 2.0 identity provider whereas the user
authorizations are stored inside the UAA component.
Recommendation
If your business users are stored in multiple corporate identity providers and if you want to grant the users
access to SAP Cloud Platform, we recommend that you use SAP Cloud Platform Identity Authentication
Service as a hub.
For this scenario, we recommend that you first connect SAP Cloud Platform Identity Authentication Service
as single custom identity provider to SAP Cloud Platform. Next you use SAP Cloud Platform Identity
Authentication Service to integrate your corporate identity providers. SAP Cloud Platform Identity
Authentication Service supports different kinds of identity providers.
For more information, see Corporate Identity Providers and Configure Conditional Authentication for an
Application in What Is Identity Authentication and SAP Cloud Platform Identity Authentication Service.
To make use of your identity provider's user base, you must first establish a mutual trust relationship with your
SAML 2.0 identity provider. This configuration consists of two steps:
● Establish trust with the SAML 2.0 identity provider in your SAP Cloud Platform subaccount
● Register your SAP Cloud Platform subaccount in the SAML 2.0 identity provider
To establish trust with your SAML 2.0 identity provider, perform one of the applicable procedures below.
● Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page
2283]
● Establish Trust and Federation with UAA Using Any SAML Identity Provider [page 2285]
● Default Identity Federation with SAP ID Service in the Cloud Foundry Environment [page 2288]
Note
How you assign users to their authorizations depends on the type of trust configuration. If you are
using the default trust configuration via SAP ID Service, you can assign users directly to role
collections. For more information, see Default Identity Federation with SAP ID Service in the Cloud
Foundry Environment [page 2288].
However, if you are using a custom trust configuration as described in this topic, you can assign
individual users or SAML 2.0 groups to role collections. Assigning users to their authorizations is part
of application administration which is described here. For more information, see Assign Role
Collections [page 2299].
Step | Action | Performed by | Tool
1 | Use an existing role or create a new one using role templates (see Add Roles to Role Collections [page 2296]) | Administrator of the Cloud Foundry environment | SAP Cloud Platform cockpit; command line interface for SAP Cloud Platform
2 | Create a role collection and assign roles to it (see Maintain Role Collections [page 2298]) | Administrator of the Cloud Foundry environment | SAP Cloud Platform cockpit; command line interface for SAP Cloud Platform
3 | Assign the role collections to users (see Managing Authorizations (China (Shanghai) region) or Directly Assign Role Collections to Users [page 2300]) | Administrator of the Cloud Foundry environment | SAP Cloud Platform cockpit; command line interface for SAP Cloud Platform
4 | (If you do not use SAP ID Service) Assign the role collections to SAML 2.0 user groups (AWS, Azure, or GCP regions) | Administrator of the Cloud Foundry environment | SAP Cloud Platform cockpit
5 | Assign the role collection to the business users provided by an SAML 2.0 identity provider (AWS, Azure, or GCP regions) | Administrator of the Cloud Foundry environment | SAP Cloud Platform cockpit
Troubleshooting
This section provides information on troubleshooting-related activities for Authorization and Trust
Management in the Cloud Foundry environment.
Tip
We also recommend that you regularly check the SAP Notes and Knowledge Base for component
BC-CP-CF-SEC-IAM in the SAP Support Portal. These contain information about program corrections and
provide additional information.
To help you troubleshoot your issue, we also recommend increasing the log verbosity of your application
and application router. We provide a script to help you. If for some reason you can't use this script,
increase the log verbosity manually, see related link.
Our troubleshooting information can be found in our guided answers Troubleshooting for Authorization and
Trust Management in the Cloud Foundry Environment. Guided answers are a specialized format that guides
you step by step through troubleshooting topics. Check the individual troubleshooting topics for your error
message. If you can't find your problem, create an incident in the component BC-CP-CF-SEC-IAM. For more
information, see the related link.
General
The Cloud Foundry environment provides platform security functions such as business user authentication,
authentication of applications, authorization management, trust management, and other security functions. It
enables Cloud Foundry administrators to manage authorizations and trust. Developers design authorization
information and deploy this information in the Cloud Foundry environment.
Authorizations
Applications and their users require diverse authorizations so that they can integrate seamlessly into SAP
Cloud Platform. Developers configure these authorizations on the level of the application descriptor files so
that security artifacts are available in the cockpit. Administrators use the security artifacts to build roles and
aggregate them into role collections (sets of authorizations that are suitable for distinct user groups).
Security artifacts enable applications to communicate with other applications, for example, making or
receiving calls.
Trust Management
In SAP Cloud Platform, identity providers provide the users. You can have a different identity provider for each
subaccount you own. Using the cockpit, administrators establish the trust relationship between external
identity providers and the subaccounts.
Note
Before you start, familiarize yourself with the sections about authentication and authorization in the
SAP Cloud Platform Planning and Lifecycle-Management Guide. You can find the security and compliance
model in the related link.
Related Information
The Cloud Foundry environment provides platform security functions such as business user authentication,
authentication of applications, authorization management, trust management, and other security functions.
For more detailed descriptions of these functions, see the links in the following table:

Function | See
Access management in the Cloud Foundry environment of SAP Cloud Platform, including the User Account and Authentication service | Access Management in the Cloud Foundry Environment [page 2255]

The Cloud Foundry environment of SAP Cloud Platform adopts common industry security standards in order
to provide flexibility for customers through a high degree of interoperability with other vendors.
Identity Federation
Identity federation is the concept of linking and reusing electronic identities of a user across multiple identity
providers. This frees an application from the obligation to obtain and store users' credentials for
authentication. Instead, the application reuses an identity provider that is already storing users' electronic
identities for authentication, provided that the application trusts this identity provider.
This makes it possible to decouple and centralize authentication and authorization functionality. Several major
protocols have been developed to support the concept of identity federation:
● SAML 2.0
● OAuth 2.0
Security Assertion Markup Language (SAML 2.0) is an open standard based on XML for exchanging
authentication and authorization data of a principal (user) between an identity provider (IdP) and a service
provider (SP). The data is exchanged using messages called bearer assertions. A bearer is any party in
possession of the assertion. The integrity of the assertion is protected by XML encryption and an XML
signature. SAML addresses the requirement of web browser single sign-on across the Internet.
Authorizations are implemented through user groups, to which the user is assigned. How these user groups
correlate to specific authorizations depends on the service provider and the respective implementation.
For more information about the SAML specification, see the related link.
Resource owner (usually the end user) | A resource owner is capable of granting access to a protected resource.
Resource server (for example, a cloud application/microservice) | The resources are protected by hosts. You can reach them using REST endpoints. A resource server is capable of accepting and responding to protected resource requests using access tokens.
OAuth 2.0 client (for example, application router) | An application that makes protected resource requests on behalf of the resource owner and with its authorization.
Authorization server (User Account and Authentication service) | The User Account and Authentication service (UAA) issues access tokens for the client. Once the OAuth 2.0 client has been successfully authenticated by an SAML 2.0 compliant identity provider, it obtains the authorizations of the resource owner. An access token represents credentials used to access protected resources (see also the RFC 6749, OAuth 2.0 access token section in the related link).
The JWT token contains header and claims information (for example, issuer, subject, expiration time,
consumer-defined information), and is digitally signed with the private key of the authorization server (UAA
service).
The cloud application and/or business application has a trust relationship with the authorization server. The
trust is configured in the environment variable <VCAP_SERVICES> for the application router and each
microservice of the business application. <VCAP_SERVICES> contains a credentials string for the UAA, which
is created by the respective service broker when the application router and/or the microservice is bound to the
UAA service instance. The credentials string contains, among other things, the public key corresponding to the
private key of the UAA. The public key is used to verify the token signature.
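The structure and verification described above (header, claims, signature over both; signature checked with the key material from the credentials string) can be sketched as follows. This is a simplified illustration: an HMAC secret stands in for the UAA's RSA key pair so that the example is self-contained, and all values are hypothetical.

```python
import base64, hmac, hashlib, json

# Simplified sketch of JWT structure and signature verification.
# The UAA signs tokens with its private RSA key; here an HMAC secret stands in
# for the key pair so the example is self-contained.

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(header: dict, claims: dict, key: bytes) -> str:
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify(token: str, key: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(key, signing_input.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims_b64 = signing_input.split(".")[1]
    claims_b64 += "=" * (-len(claims_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(claims_b64))

key = b"demo-secret"
token = sign({"alg": "HS256"}, {"iss": "uaa", "sub": "business-user"}, key)
assert verify(token, key)["sub"] == "business-user"
```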
Related Information
http://saml.xml.org/saml-specifications
https://tools.ietf.org/html/rfc6749#section-1.4
The Cloud Foundry environment of SAP Cloud Platform supports identity federation with identity providers.
The current section provides an overview of the supported scenarios.
An identity provider provides the business users for SAP Cloud Platform. A mutual trust relationship between
the identity provider and SAP Cloud Platform is required.
The default identity provider is SAP ID Service. It is part of SAP Cloud Platform. The trust relationship is already
established.
For more information, see Default Identity Federation with SAP ID Service in the Cloud Foundry Environment
[page 2288].
You have the option to use any other identity provider. SAP Cloud Platform supports SAML 2.0 identity
providers. If you want to use one, you must configure your own custom SAML 2.0 identity provider and
establish trust between your SAP Cloud Platform subaccount and the identity provider.
Configuring trust in a subaccount | Establish Trust with an SAML 2.0 Identity Provider in a Subaccount [page 2283]
Configuring trust in an SAML 2.0 identity provider | Register SAP Cloud Platform Subaccount in the SAML 2.0 Identity Provider [page 2284]
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
granting access to applications for business systems or business users, managing authorizations, and other
security functions.
Applications contain content (for example, Web content, micro services), which is deployed to different
containers. Business users use a user interface or a user agent to access the content of the applications
whereas business systems use APIs to do so. The applications are using OAuth 2.0 to authenticate against the
User Account and Authentication service.
The Cloud Foundry environment uses OAuth 2.0 as the authentication method, with access tokens. An SAML
identity provider stores the business users. The applications (for example, Java or Node.js) are deployed in the
runtime container. There are multiple ways of accessing the applications in the runtime container:
● Web access
Business users access the runtime container over the Web, using a browser or a browser-based user
interface.
● API access
Using APIs, business systems directly access the runtime container of the Cloud Foundry environment.
Web Access
Access to the static Web content requires user authentication and the appropriate authorization.
Applications authenticate using OAuth 2.0. They use the User Account and Authentication (UAA) service as
OAuth 2.0 authorization server, the application router as OAuth 2.0 client, and application logic running in
Node.js and Java backend services as OAuth 2.0 resource server.
The authentication process is triggered by the application router component, which is configured in the design-
time artifact xs-app.json, if required. Authorization restricts the access to resources based on defined user
permissions. Resources in the context of applications are services provided by a container (for example, an
OData Web service) or SAP HANA database artifacts.
API Access
A business system uses APIs to directly access the resources in the runtime container. After having
authenticated at the UAA, which acts as OAuth 2.0 authorization server, the business system gets the
appropriate access token. It enables the APIs to make calls into the applications of the runtime containers.
You want to integrate an application in the Cloud Foundry environment. This means that the Cloud Foundry
environment needs to know your application. Using the User Account and Authentication (UAA) service, you
initially integrate your application into the platform environment of SAP Cloud Platform. The application needs
to authenticate against the User Account and Authentication service. The authentication concept of the Cloud
Foundry environment is OAuth 2.0.
In this context, the UAA acts as OAuth 2.0 authorization server. The application router itself is the OAuth 2.0
client. To integrate the application into authentication, you must create a service instance of the xsuaa service
and bind it to the application router and the application containers. From the OAuth 2.0 perspective, the
application containers act as OAuth 2.0 resource servers.
The UAA uses OAuth 2.0 access tokens with the JSON web token (JWT) format for authenticating at the
containers, the OAuth 2.0 resource servers.
Runtime Containers
Static Web content is deployed into the application router; application logic is deployed, for example, into
Node.js and Java runtime containers.
Related Information
The User Account and Authentication service (UAA) is the central infrastructure component of the Cloud
Foundry environment at SAP Cloud Platform for user authentication and authorization.
UAA Instances
The Cloud Foundry environment at SAP Cloud Platform distinguishes between two user types and manages
each type in a separate UAA instance. This means the two types are completely separated:
● Platform users perform technical development, deployment, and administration tasks. They use tools like
the cloud cockpit and the cf command line client. These users typically have authorizations for certain
organizations and/or spaces, and other technical services in the Cloud Foundry environment. Apart from
authentication to the platform using the cf command line client, usually there is no direct interaction
between users and the platform UAA.
● Business users only use business applications deployed to SAP Cloud Platform. They do not use SAP
Cloud Platform for technical development, deployment, and administration tasks. A business user is
always bound to a specific tenant which holds the information about the user’s authorizations. Tenants,
business users, and their authorizations are managed by another UAA instance using the extended
services for UAA (XSUAA). This component additionally provides a simple programming model for
authentication and authorization in business applications.
This documentation refers to business users and the extended services of the UAA.
The UAA uses OAuth 2.0 for authentication of the application. In the context of the OAuth flow, the UAA
provides scopes, role templates, and attributes to applications deployed in the runtime of the Cloud Foundry
environment. If a business user has a certain scope, an access token and a refresh token with this scope are
issued. These enable the user to run the application, while the scopes are used to define the model that
describes who has the authorization to start an application that runs in the runtime of the Cloud Foundry
environment at SAP Cloud Platform.
The UAA service provides a programming model for developers; it enables developers to define templates that
can be used to build role and authorization models for business users of applications deployed to the runtime
of the Cloud Foundry environment. In the OAuth 2.0 workflow, the UAA acts as an OAuth server.
Note
The Cloud Foundry environment also supports the following token grant types of Cloud Foundry.
Related Information
Cloud Foundry API Reference for User Account and Authentication Server API
UAA, XSUAA, Platform UAA, CFUAA - What Is It All About?
The XSUAA programming model supports multiple tenants in SAP Cloud Platform. In the context of OAuth 2.0,
the security APIs for the application runtime container act as resource servers and provide the security context
for user authentication in an application.
OAuth 2.0 uses SAML 2.0 for authentication of the users, who are provided by identity providers.
Multiple Tenants
SAP Cloud Platform supports multitenant applications. These applications are used by multiple tenants. With
this approach, the tenants share the same code base, but they are not allowed to see the data of other tenants.
The application must maintain data separation between tenants.
In the Cloud Foundry runtime, SAP Cloud Platform provides the application router as an option. It is a point-of-
entry for web applications running on the Cloud Foundry environment at SAP Cloud Platform; the application
router is part of the application and triggers the user authentication process in the UAA. In the OAuth 2.0
workflow for web access, the application (including the application router and any bound containers) is the
OAuth 2.0 client, which initiates user authentication and authorization.
In the context of OAuth 2.0, the security APIs for the runtime container act as resource servers. The APIs
provide the security context for user authentication in an application of the Cloud Foundry environment. When
the user authentication process is triggered, the container security APIs receive an Authorization: Bearer
HTTP header that contains an OAuth access token in the JSON Web token (JWT) format from the application
router.
The access token and the refresh token contain information describing the user and any authorization scopes.
These must be validated by the container security APIs using the security libraries provided by the UAA.
Applications deployed in the Cloud Foundry runtime at SAP Cloud Platform can use the security API to check
whether scope values have been assigned to the user or application. The container security API provides the
security context for applications (for example, scopes, attributes, token information) of the application; the
JWT token initializes the security context of the application. A security API is available for the following
application runtime containers:
Java API using Spring | Configuring Authentication for Java API Using Spring Security [page 2272]
Java web application using sap_java_buildpack | Configuring Authentication for SAP Java Buildpacks [page 2272]
For more information, see the related links on authentication. They describe how you configure authentication
for Node.js and Java.
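The scope check a resource server performs on the decoded token can be sketched as follows. The scope names are hypothetical; the actual container security APIs wrap this kind of check in language-specific libraries.

```python
# Hypothetical decoded JWT claims as delivered to a container security API.
claims = {
    "user_name": "business-user",
    "scope": ["myapp.Display", "myapp.Create"],
}

def has_scope(claims: dict, required: str) -> bool:
    """Check whether the token grants the required scope."""
    return required in claims.get("scope", [])

assert has_scope(claims, "myapp.Display")
assert not has_scope(claims, "myapp.Delete")
```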
Each runtime container of an application must use the container security API to provide the security context.
The security context is initialized with the JWT token and enables you to perform functions such as checking
scopes and reading attributes and token information.
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
business user authentication, authorization management, and other security functions for access to the
applications in the runtime container. To access the runtime container, the business user can use a browser or
a browser-based user interface.
The following diagram shows the architecture with the components that are responsible for business user
authentication, authorization management, and security. It is not mandatory for applications to use the User
Account and Authentication service and the application router.
Applications use OAuth 2.0. When business users access an application, the application router acts as OAuth
client and redirects their request to the OAuth authorization server for authentication (see the Applications
section). Runtime containers act as resource servers, using the container security API of the relevant container
(for example, Java) to validate the token issued by the OAuth authorization server.
Related Information
To access an application with OAuth 2.0, an OAuth 2.0 client must authenticate with an access token. The flow
of the authorization code grant type is used to get an initial access token and a refresh token from an OAuth 2.0
authorization server. The OAuth 2.0 client can then use the refresh token to request new access tokens itself
whenever an access token expires.
An application wants to access a resource on an OAuth 2.0 resource server using a browser-based user
interface. To do this, it must have a valid access token. For more information about the OAuth 2.0 configuration,
see the related link.
1. A user of a browser-based application (user agent) sends a request to the OAuth 2.0 client.
2. Since the application has no access token, the client redirects the request to the browser.
3. The application requests an authorization code at the authorization server.
4. The authorization server checks the validity of the request and grants an authorization code to the client.
5. The client receives the authorization code and requests an access token from the authorization server.
6. The authorization server issues an access token and grants it to the client.
7. The client presents the access token to request the resource on the resource server.
8. In the final step, the resource server validates the access token and allows the client to access the resource.
The authorization code grant type conforms to the RFC 6749 standard of the IETF. For more information, see
http://ietf.org.
Tip
To mitigate the impact of cross-site scripting attacks, we recommend that you avoid sending access tokens
to the browser.
When an application is using the application router, it shares a security session with the browser. The
security session holds the access and refresh token. Requests to back-end systems include the JSON web
token (if configured).
Related Information
When a user logs in to a web application or logs out using a browser, component xsuaa checks the domain in
the URL of the application routes. To avoid open redirect attacks, xsuaa checks the URLs, which are defined in
xs-app.json, against a whitelist of redirect URIs in the application security descriptor file (xs-
security.json).
A web application in the Cloud Foundry environment of SAP Cloud Platform has application routes. These are
URLs that point to the web application. At runtime, component xsuaa checks whether the redirect URI for an
application route is on the whitelist.
Default Domain
Usually, the application uses the default domain in the application route URL. For this reason, enter the relevant
application route.
Custom Domain
See the SAP Note in the related link for information on how to set custom domains in the Cloud Foundry
environment of SAP Cloud Platform.
As an application developer, if you use custom domains in the application routes of your web application, you
must include the login and logout URLs in the xs-security.json of the xsuaa service instance. The
configuration of the redirect URIs is located in the oauth2-configuration custom option of the security
application descriptor file (xs-security.json). Here, you have to define the URLs in redirect-uris of the
custom options in the OAuth 2.0 configuration. You can also use wildcards (*) to allow multiple URLs that
belong to one domain.
Related Information
To avoid open redirect attacks, you want to direct users to a safe and valid URL when they log out.
Prerequisites
Context
You have deployed an application in the Cloud Foundry environment of SAP Cloud Platform. To use the
application, business users log in and out. As an application developer, you want to set a redirect URI, which
directs users to a specific page once they are logged out. This could be a logout page or your company’s
employee portal. For security reasons, there is a check of this redirect URI.
Procedure
1. Define xs-app.json in the application router folder of your application. Include a logout endpoint and
define a logout page (here logout.html) which can be accessed without authentication. For more
options, see Browser Redirects Using Wildcards [page 2266].
Sample Code
{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "logout": {
    "logoutEndpoint": "/my/logout",
    "logoutPage": "logout.html"
  },
  "routes": [{
    "source": "^/hw/",
    "target": "/",
    "destination": "hw-dest",
    "scope": {
      "GET": [
        "$XSAPPNAME.Display",
        "$XSAPPNAME.Update"
      ],
      "default": "$XSAPPNAME.Update"
    }
  }]
}
Note
The Cloud Foundry command line interface prompts you to choose an org.
https://<application_domain>.*/**
Sample Code
{
  "xsappname": "<application_name>",
  "tenant-mode": "dedicated",
  "description": "My Sample Application with Application Router",
  "oauth2-configuration": {
    "token-validity": 900,
    "redirect-uris": [
      "https://<application_domain>...hana.ondemand.com/**"
    ],
    "autoapprove": "true"
  }
}
Note
The application domain is the first part of the application URL. For more information, see the related link.
10. Update the xsuaa service instance that is bound to your application.
Note
If the service instance is not available, create it, bind your application to the service instance (see the
related links), and restage the application.
You have implemented a valid redirect after logout, and xsuaa checks the redirect URL against the
whitelist in the OAuth 2.0 configuration.
Related Information
You want to configure browser redirect URLs for multiple external web sites of your company. We recommend
that you specify absolute URLs and avoid using wildcards.
Caution
When using wildcards in the redirect URLs, be aware that for security reasons you open up the redirect to
multiple web sites. This increases the risk of redirecting to malicious web sites.
You want to allow multiple redirect URLs, for example in the sap.com domain. Use wildcards in the redirect
URIs of the OAuth 2.0 configuration of xs-security.json. In this example, the configuration in xs-
security.json allows external redirect URLs that contain sap.com.
Configuration of xs-app.json
In the following example, you configure a logout page, which is a static external URL. Define the external URL in
xs-app.json.
Sample Code
"logout": {
  "logoutEndpoint": "/my/logout",
  "logoutPage": "https://support.sap.com"
}
Configuration of xs-security.json
The following example allows redirects to all web sites with sap.com as domain. It allows both secure (https)
and insecure (http) pages. Define redirect URLs with wildcards.
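A minimal sketch of the corresponding wildcard configuration in xs-security.json could look like this; the exact patterns are illustrative assumptions, not values prescribed by this guide:

```
{
  "oauth2-configuration": {
    "redirect-uris": [
      "https://*.sap.com/**",
      "http://*.sap.com/**"
    ]
  }
}
```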
Related Information
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
business system authentication, authorization management, and other security functions to enable business
systems to access the applications (for example, Java or Node.js) in the runtime container. Business systems
use APIs to access the runtime container.
The following diagram shows the architecture with the components that are responsible for business system
authentication, authorization management, and security.
Business systems use OAuth 2.0 to access applications in the runtime container. The UAA acts as an OAuth
authorization server and issues an access token. It enables the business system to directly access an
application in the runtime container. Runtime containers act as OAuth resource servers, using the container
security API of the relevant container (for example, Java) to validate the token issued by the OAuth
authorization server.
Related Information
A business application needs to access OAuth-protected resources in SAP Cloud Platform on behalf of a
business user. The business user expects their identity to be propagated smoothly from one application to the
next. SAML identity federation is used for this purpose.
In this context, the business user's user ID and attributes from a SAML assertion are exchanged for an OAuth
access token. The business user can use the access token to access the OAuth-protected application that
holds the resources. For more information, see the relevant section. This example of the SAML 2.0 bearer
assertion flow describes the case for business users who log on to a web application and want to use web-
based resources in the Cloud Foundry environment of SAP Cloud Platform.
Prerequisites
● There must be a mutual SAML trust between the business application and SAP Cloud Platform. To
implement SAML trust, see the related link. We recommend that you disable the Show SAML login link on
login page checkbox.
Note
The SAML 2.0 bearer assertion can be issued by an SAML token issuer. The SAML token issuer in the
example below might be a SAML 2.0 identity provider.
Example
The following example shows the SAML 2.0 assertion bearer flow with a business user calling from a web
application to a resource server in the Cloud Foundry environment.
The SAML 2.0 assertion bearer flow has the following steps:
1. The business user accesses the web application, which uses a resource server hosted in the Cloud Foundry
environment of SAP Cloud Platform.
2. The application requests a SAML bearer assertion.
3. The SAML token issuer issues the SAML bearer assertion.
4. The web application calls the xsuaa service and requests an OAuth access token (for details, see below).
5. The xsuaa service returns the OAuth access token with the received SAML bearer assertion.
6. The web application can access the resource server on behalf of the business user by providing the OAuth
access token.
The SAML 2.0 assertion bearer flow of OAuth 2.0 is the authentication method here. Business users use the
SAP Cloud Platform to log on to the web application. During logon, the business users are provided with a
token.
To obtain an OAuth access token, the request with the parameters you see in the table below must be sent to
the xsuaa service. You can find the exact URL in the SAML metadata.
1. Obtain the SAML 2.0 metadata document for your subaccount. It is located at https://
<subdomain_name>.authentication.<landscape_domain>.hana.ondemand.com/saml/
metadata.
2. Go to the end of the document and look for the value Binding="urn:oasis:names:tc:SAML:
2.0:bindings:HTTP-POST" as AssertionConsumerService.
Request Parameters

Parameter      Type    Required?  Description
client_id      String  Optional   OAuth client ID of the OAuth client that receives the token
client_secret  String  Optional   Client secret configured for the OAuth client. This is optional if passed as part of the basic authorization header.
grant_type     String  Required   The type of token grant requested. Use the following value: urn:ietf:params:oauth:grant-type:saml2-bearer
assertion      String  Required   XML-based SAML 2.0 bearer assertion, which is Base64-encoded
scope          String  Optional   List of scopes requested for the token. Use this list if you want to reduce the number of scopes the token will have.
This token enables the web application to connect to the Cloud Foundry environment of SAP Cloud Platform
and access the desired resources.
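The request parameters above can be assembled as in the following sketch. The client ID and assertion content are placeholders, and the token endpoint URL still has to be taken from the SAML metadata as described above.

```javascript
// Sketch: form parameters for the SAML 2.0 bearer assertion grant
// (urn:ietf:params:oauth:grant-type:saml2-bearer). Placeholder values only.
function buildSamlBearerRequest(assertionXml, clientId, scopes) {
  const params = {
    grant_type: 'urn:ietf:params:oauth:grant-type:saml2-bearer',
    // The XML-based SAML 2.0 bearer assertion must be Base64-encoded.
    assertion: Buffer.from(assertionXml, 'utf8').toString('base64'),
    client_id: clientId
  };
  if (scopes && scopes.length > 0) {
    // Optional: reduce the scopes the issued token will have.
    params.scope = scopes.join(' ');
  }
  return new URLSearchParams(params).toString();
}
```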
Related Information
The application router, the application containers, and the User Account and Authentication service are
required for the authentication methods for user and application logon requests.
The Cloud Foundry environment at SAP Cloud Platform uses the User Account and Authentication service
(UAA) and the application router to manage user logon and logoff requests. The UAA service centrally manages
the issuing of JSON web tokens. The following components are involved:
● Application router
The application router is a part of the application instance. It triggers authentication of the user at the UAA.
The application instance (with containers and application router) is the OAuth 2.0 client, which handles
authentication and authorizations. The application router is the single entry point for the authentication of
a (business) application. It has the responsibility to serve static content, authenticate users, rewrite URLs,
and make proxy requests to other micro services while propagating user information.
● Application container
The containers, for example for Node.js or Java, act as resource servers providing the security context for
authentication. The container security APIs receive an HTTP header with Authorization: Bearer and
the JWT token from the application router. This token contains user, scope and token information and is
validated by the container security APIs using the UAA. Applications can use the API to check if scope
values have been assigned to the user or application.
For more information, see the related link.
● User Account and Authentication (UAA)
The User Account and Authentication component xsuaa provides a programming model for business
applications. It is the central infrastructure component of the runtime platform for business user
authentication and authorization management.
Related Information
The Cloud Foundry environment at SAP Cloud Platform provides Java runtimes to which you can deploy your
Java applications.
Authentication for Java applications relies on the OAuth 2.0 protocol, which is based on central
authentication at the UAA. The UAA vouches for the authenticated user's identity using an OAuth 2.0 access
token. The current implementation uses a JSON web token (JWT) as the access token. This is a signed
text-based token in JSON syntax. The Java application is specified in the related manifest file.
The Java buildpacks use different authentication methods. For more information, see the related links.
During application deployment, the buildpack ensures that the correct SAP Java Virtual Machine (JVM) is
provided and that the appropriate data sources are bound to the corresponding application container.
SAP Cloud Platform makes no assumptions about which frameworks and libraries to use to implement the
Java micro service.
Related Information
Configuring Authentication for Java API Using Spring Security [page 2272]
Configuring Authentication for SAP Java Buildpacks [page 2272]
Spring applications using the Spring-security libraries can integrate with the xsuaa service.
SAP Cloud Platform provides a library for integrating validation of tokens issued by the xsuaa service with
Spring-security. The signature is validated using the verification key received from the xsuaa service.
The library is provided as the Maven artifact with <artifactId>spring-xsuaa</artifactId>.
To authenticate requests with JSON Web tokens (JWT), see the following sample application:
The SAP Java buildpack includes the XSUAA authentication method. This method makes an offline validation of
the received JWT token possible. The signature is validated using the verification key received from the service
binding to the xsuaa service.
● You've created the xsuaa service instance (see the related link).
This section provides you with code samples for configuring authentication. SAP Cloud Platform offers an
offline validation of the JWT token. It doesn't require an additional call to the User Account and Authentication
service.
The API is provided as the Maven artifact with <artifactId>api</artifactId>.
To enforce a check for the $XSAPPNAME.Display scope, see the following sample application:
Related Information
A collection of Node.js packages developed by SAP is provided as part of the Cloud Foundry environment at
SAP Cloud Platform.
SAP Cloud Platform includes a selection of standard Node.js packages, which are available for download and
use from the SAP NPM public registry to customers and partners. SAP Cloud Platform only includes the
Node.js packages with long-term support (LTS). For more information, see https://nodejs.org. Enabling
access to an NPM registry requires configuration using the SAP NPM registry.
● You can use the standard NPM package manager to download the packages from the SAP NPM registry:
https://npm.sap.com
● As an alternative option, you can use the SAP CLIENT LIB 1.0. Search for the software component
named SAP CLIENT LIB 1.0 on the ONE Support Launchpad.
The SAP CLIENT LIB 1.0 package contains the following modules:
Tip
For more details of the package contents, see the README file in the corresponding package.
@sap/approuter The application router is the single entry point for the (business) application.
@sap/xssec The client security library, including the XS advanced container security API for Node.js
@sap/approuter
The application router is the single entry point for the (business) application. It has the responsibility to serve
static content, authenticate users, rewrite URLs, and proxy requests to other micro services while propagating
user information.
For more information, see the applications section of SAP Cloud Platform.
@sap/xssec
The client security library includes the container security API for Node.js.
Authentication for Node.js applications relies on the usage of the OAuth 2.0 protocol, which is based on central
authentication at the User Account and Authentication (UAA) server. The UAA vouches for the authenticated
user's identity by means of an OAuth access token. The implementation uses a JSON web token (JWT) as the
access token, which is a signed text-based token formatted according to the JSON syntax.
The trust for the offline validation is created by binding the UAA service instance to your application. The key
for validation of tokens is included in the credentials section in the environment variable <VCAP_SERVICES>.
By default, the offline validation check only accepts tokens intended for the same OAuth 2.0 client in the same
UAA subaccount. This covers the majority of use cases. However, if an application needs to consume tokens
that were issued either for different OAuth 2.0 clients or for different subaccounts, you can specify a dedicated
access control list (ACL) entry in an environment variable named <SAP_JWT_TRUST_ACL>.
The name of the OAuth 2.0 client has the prefix sb-. The content is a JSON string. It contains an array of
subaccounts and OAuth 2.0 clients. To establish trust with other OAuth 2.0 clients and/or subaccounts, specify
the relevant OAuth 2.0 client IDs and subaccounts.
Caution
For testing purposes, use an asterisk (*). This setting should never be used for productive applications.
Subaccounts are not used for on-premise systems. The value for the subaccount is uaa.
SAP_JWT_TRUST_ACL: [{"clientid":"<OAuth_2.0_client_ID>","subaccount":"<subaccount>"}, ...]
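The acceptance rule that such an ACL expresses can be illustrated with a small sketch. The matching logic below is a simplified illustration of the documented behavior, not the actual library implementation.

```javascript
// Simplified illustration of how an <SAP_JWT_TRUST_ACL> entry could be
// matched against the client ID and subaccount found in a received token.
// An asterisk (*) matches anything; for production, avoid wildcards.
function isTrusted(tokenClientId, tokenSubaccount, aclJson) {
  const acl = JSON.parse(aclJson);
  return acl.some(entry =>
    (entry.clientid === '*' || entry.clientid === tokenClientId) &&
    (entry.subaccount === '*' || entry.subaccount === tokenSubaccount));
}
```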
In a typical deployment scenario, your node application consists of several parts, which appear as separate
application modules in your manifest file, for example:
● Application logic
Note
The application logic written in Node.js and the application router should be bound to one and the same
UAA service instance so that these two parts use the same OAuth client credentials.
To use the capabilities of the container security API, add the module “@sap/xssec” to the dependencies
section of your application's package.json file.
Note
All SAP modules, for example @sap/xssec, are located in the namespace of the SAP NPM registry. For this
reason, you must use both the SAP NPM registry and the default NPM registry.
Usage
If you use express and passport, you can easily plug in a ready-made authentication strategy, for example as
follows (this reconstructs the missing sample based on the documented @sap/xssec usage):
Sample Code
const express = require('express');
const passport = require('passport');
const xsenv = require('@sap/xsenv');
const JWTStrategy = require('@sap/xssec').JWTStrategy;

const app = express();
const services = xsenv.getServices({ uaa: { tag: 'xsuaa' } });
passport.use(new JWTStrategy(services.uaa));
app.use(passport.initialize());
app.use(passport.authenticate('JWT', { session: false }));
We recommend that you disable the session as in the example above. Each request comes with a JWT token so
it is authenticated explicitly and identifies the user. If you still need the session, you can enable it but then you
should also implement user serialization/deserialization and some sort of session persistency.
Tip
For more details of the package contents, see the README file in the corresponding package.
API Description
createSecurityContext Creates the “security context” by validating the received access token against
credentials put into the application's environment via the UAA service binding
● getLogonName
● getGivenName
● getFamilyName
● getEmail
● getHdbToken
● getAdditionalAuthAttribute
● getExpirationDate
● getGrantType
checkLocalScope Checks a scope that is published by the current application in the xs-
security.json file
getToken Returns a token that can be used to connect to the SAP HANA database. If the
token that the security context has been instantiated with is a foreign token
(meaning that the OAuth client contained in the token and the OAuth client of
the current application do not match), “null” is returned instead of a token;
the following attributes are available:
● namespace
Tokens can be used in different contexts, for example, to access the SAP
HANA database, to access another XS advanced-based service such as
the Job Scheduler, or even to access other applications or containers. To
differentiate between these use cases, the namespace is used. In lib/
constants.js we define supported name spaces (for example,
SYSTEM ).
● name
The name is used to differentiate between tokens in a given namespace,
for example, “HDB” for the SAP HANA database. These names are also
defined in the file lib/constants.js.
hasAttributes Returns “true” if the token contains any XS advanced user attributes; otherwise “false”.
getAttribute Returns the attribute exactly as it is contained in the access token. If no attribute with the given
name is contained in the access token, “null” is returned. If the token that the security context has been
instantiated with is a foreign token (meaning that the OAuth client contained in the token and the OAuth client
of the current application do not match), “null” is returned regardless of whether the requested attribute is
contained in the token or not. The following attributes are available:
● name
The name of the attribute that is requested
isInForeignMode Returns “true” if the token that the security context has been instantiated with is a foreign
token that was not originally issued for the current application; otherwise “false”.
getIdentityZone Returns the subaccount that the access token has been issued for.
In the Cloud Foundry environment of SAP Cloud Platform, an application makes API calls to other applications.
Depending on the use case, it may also be necessary for an application from an external system to make API
calls into applications running in the Cloud Foundry environment.
The User Account and Authentication (UAA) service is the central authentication service in the Cloud Foundry
environment of SAP Cloud Platform. The UAA service and the application router manage user logon and logoff
requests. The UAA service centrally manages the issuing of JSON web tokens for propagating the identity to
the applications in the Cloud Foundry environment.
The interaction is browser based. Users are propagated in the following ways:
If applications from an external system must make API calls to applications running in the Cloud Foundry
environment, administrators must make sure that these applications can communicate with the relevant
Authorization restricts access to resources and services based on defined user permissions.
Applications of the Cloud Foundry environment at SAP Cloud Platform contain content that is deployed to
backing services. Access to the content requires not only user authentication but also the appropriate
authorization.
The access-control process controlled by the authorization policy can be divided into the following two phases:
● Authorization
Defined in the deployment security descriptor (xs-security.json), where access is authorized
● Policy enforcement
The general rules by which requests for access to resources are either approved or disapproved.
Access enforcement is based on user identity and performed in the following distinct application components:
● Application router
After successful authentication at the application router, the application router starts an authorization
check (based on scopes).
● Application containers
For authentication purposes, containers, for example the node.js and Java containers, receive an HTTP
header Authorization: Bearer <JWT token> from the application router; the token contains details
of the user and scope. This token must be validated using the Container Security API. The validation
checks are based on scopes and attributes defined in the deployment security descriptor (xs-
security.json).
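As an illustration of such a scope-based check, a token's scope claim can be tested as in the sketch below. The claim shape and scope value are assumptions for the sketch; real applications should use the container security API rather than inspecting the token directly.

```javascript
// Simplified sketch of a scope check after JWT validation. In xsuaa tokens
// the "scope" claim carries the granted scopes as an array; $XSAPPNAME is
// resolved to the actual application name at deployment time (placeholder).
function hasScope(tokenPayload, requiredScope) {
  return Array.isArray(tokenPayload.scope) &&
         tokenPayload.scope.includes(requiredScope);
}
```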
This topic provides the release notes for Authorization and Trust Management.
Authorization and Trust Management is available in the Cloud Foundry environment of SAP Cloud Platform.
Authorization and Trust Management is automatically enabled in every subaccount, and it comes with a basic
preconfiguration.
● To define your authorization model and to integrate your subaccounts with your corporate identity and
access management environment, see the administration documentation:
Administration: Managing Authentication and Authorization [page 2279]
● To implement identity and access management for your own application using this service, see the
development section:
Developing Security Artifacts [page 2314]
This section describes the tasks of administrators in the Cloud Foundry environment of SAP Cloud Platform.
Administrators ensure user authentication and assign authorization information to users and user groups.
Authentication
Identity providers provide the business users. If you use external SAML 2.0 identity providers, you must
configure the trust relationship using the cockpit. The respective subaccount must have a trust relationship
with the SAML 2.0 identity provider. Using the cockpit, you, as an administrator of the Cloud Foundry
environment, establish this trust relationship.
Authorization
In the Cloud Foundry environment, application developers create and deploy application-based authorization
artifacts for business users. Administrators use this information to assign roles, build role collections, and
assign these collections to business users or user groups. In this way, they control the users' permissions.
1. Use an existing role or create a new one using role templates (see Add Roles to Role Collections [page 2296]).
Who: Administrator of the Cloud Foundry environment
Tool: SAP Cloud Platform cockpit or command line interface for SAP Cloud Platform
2. Create a role collection and assign roles to it (see Maintain Role Collections [page 2298]).
Who: Administrator of the Cloud Foundry environment
Tool: SAP Cloud Platform cockpit or command line interface for SAP Cloud Platform
3. Assign the role collections to users (see Managing Authorizations (China (Shanghai) region) or Directly Assign Role Collections to Users [page 2300]).
Who: Administrator of the Cloud Foundry environment
Tool: SAP Cloud Platform cockpit or command line interface for SAP Cloud Platform
4. (If you do not use SAP ID Service) Assign the role collections to SAML 2.0 user groups (AWS, Azure, or GCP regions).
Who: Administrator of the Cloud Foundry environment
Tool: SAP Cloud Platform cockpit
5. Assign the role collection to the business users provided by an SAML 2.0 identity provider (AWS, Azure, or GCP regions).
Who: Administrator of the Cloud Foundry environment
Tool: SAP Cloud Platform cockpit
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2281]
Monitoring and Troubleshooting [page 2350]
Running on the AWS, Azure, or GCP regions: When you create a subaccount, SAP Cloud Platform automatically
grants your user the role for the administration of business users and their authorizations in the subaccount.
Having this role, you can also add or remove other users who will then also be user and role administrators of
this subaccount.
After having created a subaccount in the Cloud Foundry environment of SAP Cloud Platform, your user
automatically has the administration role. This means that your user also has the Security tab, where you can
perform security administration tasks. As a security administrator, you can manage authentication and
authorization in this subaccount, such as configuring trust to identity providers, and assigning role collections
to business users.
You can delegate the security administration to other users. Simply add the users as security administrators to
your subaccount. SAP Cloud Platform grants the User & Role Administrator role to these users.
To see the User & Role Administrator role and all users with this role, go to your subaccount (see
Navigate to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions]). Choose Security
Administrators .
You can also remove the users who are not supposed to have this role.
Note
All users with the User & Role Administrator role can manage this subaccount, including the security
administration tasks.
Related Information
SAP Cloud Platform supports SAML 2.0 identity providers. You have the option to use any SAML 2.0
standard-compliant identity provider.
SAP Cloud Platform supports the use of single sign-on (SSO) authentication based on Security Assertion
Markup Language 2.0 (SAML). An SAML identity provider is used by an SAML service provider to authenticate
users who sign in to an application by means of single sign-on. The User Account and Authentication (UAA)
component is the central infrastructure component of the runtime platform for business user authentication
and authorization management. The users are stored in the SAML 2.0 identity provider whereas the user
authorizations are stored inside the UAA component.
If your business users are stored in multiple corporate identity providers and if you want to grant the users
access to SAP Cloud Platform, we recommend that you use SAP Cloud Platform Identity Authentication
Service as a hub.
For this scenario, we recommend that you first connect SAP Cloud Platform Identity Authentication Service
as a single custom identity provider to SAP Cloud Platform. Next, you use SAP Cloud Platform Identity
Authentication Service to integrate your corporate identity providers. SAP Cloud Platform Identity
Authentication Service supports different kinds of identity providers.
For more information, see Corporate Identity Providers and Configure Conditional Authentication for an
Application in What Is Identity Authentication and SAP Cloud Platform Identity Authentication Service
To make use of your identity provider's user base you must first establish a mutual trust relationship with your
SAML 2.0 identity provider. This configuration consists of two steps.
● Establish trust with the SAML 2.0 identity provider in your SAP Cloud Platform subaccount
● Register your SAP Cloud Platform subaccount in the SAML 2.0 identity provider
To establish trust with your SAML 2.0 identity provider, perform one of the applicable procedures below.
● Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page
2283]
● Establish Trust and Federation with UAA Using Any SAML Identity Provider [page 2285]
● Default Identity Federation with SAP ID Service in the Cloud Foundry Environment [page 2288]
Note
How you assign users to their authorizations depends on the type of trust configuration. If you are
using the default trust configuration via SAP ID Service, you can assign users directly to role
collections. For more information, see Default Identity Federation with SAP ID Service in the Cloud
Foundry Environment [page 2288].
However, if you are using a custom trust configuration as described in this topic, you can assign
individual users or SAML 2.0 groups to role collections. Assigning users to their authorizations is part
of application administration which is described here. For more information, see Assign Role
Collections [page 2299].
The SAML 2.0 identity provider provides the business users, who belong to user groups. It is efficient to use
federation by assigning role collections to one or multiple SAML 2.0 user groups. The role collection
contains all the authorizations that are necessary for this user group. This saves time when you add new
business users. Simply add the users to the respective user group(s), and the new business users
automatically get all the authorizations that are included in the role collection.
Related Information
To establish trust, configure the trust configuration of the SAML 2.0 identity provider in your subaccount using
the cockpit. In this case, the SAML 2.0 identity provider is SAP Cloud Platform Identity Authentication service.
Next, register your subaccount in User Account and Authentication service using the administration console of
User Account and Authentication service. To complete federation, maintain the federation attributes of the
SAML 2.0 user groups. This makes sure that you can assign authorizations to user groups.
You want to use an SAML 2.0 identity provider, for example, SAP Cloud Platform Identity Authentication
service. This is where the business users for SAP Cloud Platform are stored.
Prerequisites
Context
You must establish a trust relationship with an SAML 2.0 identity provider in your subaccount in SAP Cloud
Platform. The following procedure describes how you establish trust in the SAP Cloud Platform Identity
Authentication service.
Procedure
1. In the SAP Cloud Platform cockpit, go to your subaccount (see Navigate to Orgs and Spaces [page 1751])
and choose Security Trust Configuration .
2. Choose New Trust Configuration.
3. Enter a name and a description that make it clear that the trust configuration refers to the identity provider.
Make sure that the users who are supposed to log on to this identity provider understand the name of
the trust configuration.
The name of the new trust configuration now shows the value
<Identity_Authentication_tenant>.accounts.ondemand.com. It represents the custom identity
provider SAP Cloud Platform Identity Authentication service.
This also fills the fields for the single sign-on URLs and the single logout URLs.
7. Save your changes.
8. You have successfully configured trust in the SAP Cloud Platform Identity Authentication service, which is
your SAML 2.0 identity provider.
An SAML service provider interacts with an SAML 2.0 identity provider to authenticate users signing in by
means of a single sign-on (SSO) mechanism. In this scenario the User Account and Authentication (UAA)
service acts as a service provider representing a single subaccount. To establish trust between an identity
provider and a subaccount, you must register your subaccount by providing the SAML details for web-based
authentication in the identity provider itself. The identity provider we use here is the SAP Cloud Platform
Identity Authentication service.
Context
Administrators must configure trust on both sides, in the service provider and in the SAML identity provider.
This description covers the side of the identity provider (SAP Cloud Platform Identity Authentication service).
The trust configuration on the side of the SAP Cloud Platform Identity Authentication service must contain the
following items:
To establish trust from a tenant of SAP Cloud Platform Identity Authentication service to a subaccount, assign
a metadata file and define attribute details. The SAML 2.0 assertion includes these attributes. With the UAA as
SAML 2.0 service provider, they are used for automatic assignment of UAA authorizations based on
information maintained in the identity provider.
Procedure
1. Open the administration console of SAP Cloud Platform Identity Authentication service.
https://<Identity_Authentication_tenant>.accounts.ondemand.com/admin
2. To go to the service provider configuration, choose Applications & Resources > Applications in the
menu or choose the Applications tile.
Note
3. To add a new SAML service provider, create a new application by using the + Add button.
4. Choose a name for the application that clearly identifies it as your new service provider. Save your changes.
Users see this name in the logon screen when the authentication is requested by the UAA service. Seeing
the name, they know which application they currently access after authentication.
5. Choose SAML 2.0 Configuration and import the relevant metadata XML file. Save your changes.
Note
Use the metadata XML file of your subaccount. You find the metadata XML file under the following URL
pattern: https://<subdomain>.authentication.<landscape>/saml/metadata.
You can find the value for <subdomain> in the SAP Cloud Platform cockpit on the subaccount's Overview
tab in the Subaccount Details tile.
If the contents of the metadata XML file are valid, the parsing process extracts the information required to
populate the remaining fields of the SAML configuration. It provides the name, the URLs of the assertion
consumer service and single logout endpoints, and the signing certificate.
6. Choose Default Name ID Format and select E-Mail as a unique attribute. Save your changes.
7. Choose Assertion Attributes, use +Add to add a multi-value user attribute, and enter Groups (case-sensitive)
as the assertion attribute name for the Groups user attribute. Save your changes.
Context
You want to use a SAML 2.0 identity provider. This is where the business users for SAP Cloud Platform are
stored.
Prerequisites
Context
You must establish a trust relationship with a SAML 2.0 identity provider in your subaccount in SAP Cloud
Platform. The following procedure guides you through the trust configuration in your SAML 2.0 identity
provider.
Procedure
1. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Trust
Configuration in the SAP Cloud Platform cockpit.
2. Choose New Trust Configuration.
3. Enter a name and a description that make it clear that the trust configuration refers to your identity
provider.
Note
Make sure that the users who are supposed to log on to this identity provider understand the name of
the trust configuration.
The name of the new trust configuration now shows the value of your identity provider. It represents your
custom identity provider.
Check whether the fields for the single sign-on URLs and the single logout URLs are filled.
7. Save your changes.
8. You have successfully configured trust in your SAML 2.0 identity provider.
A SAML service provider interacts with a SAML 2.0 identity provider to authenticate users signing in by
means of a single sign-on (SSO) mechanism. In this scenario, the User Account and Authentication (UAA)
service acts as a service provider representing a single subaccount. To establish trust between an identity
provider and a subaccount, you must register your subaccount by providing the SAML details for web-based
authentication in the identity provider itself.
Context
Administrators must configure trust on both sides, in the service provider and in the SAML identity provider.
This description guides you through the configuration of your identity provider. The trust configuration
on the side of the SAML 2.0 identity provider must contain the following items:
Your administrators use the administration console of your SAML 2.0 identity provider to register the
subaccount.
To establish trust from a tenant of your identity provider to a subaccount, assign a metadata file and define
attribute details. The SAML 2.0 assertion includes these attributes. With the UAA as SAML 2.0 service provider,
they are used for automatic assignment of UAA authorizations based on information maintained in the identity
provider.
Users see this name in the logon screen when the authentication is requested by the UAA service. Seeing
the name, they know which application they currently access after authentication.
5. Choose the SAML 2.0 configuration and import the relevant metadata XML file. Save your changes.
Note
Use the metadata XML file of your subaccount. The subdomain name is identical to the tenant name.
You find the metadata file in the following location:
Make sure that the fields of the SAML configuration are filled. The metadata provides information, such as
the name, the URLs of the assertion consumer service and single logout endpoints, and the signing
certificate.
6. Choose or create the name ID attribute and select E-mail as a unique attribute. Save your changes.
7. Choose or create a user attribute, and enter Groups (capitalized) as assertion attribute name for the
Groups user attribute. Save your changes.
Related Information
Prerequisites
● To see the default identity provider in Security > Trust Configuration, you must have created this
subaccount or be a security administrator of this account. For more information, see the related link.
SAP ID service is the place where you register to get initial access to SAP Cloud Platform. If you are a new user,
you can use the self-service registration option of SAP ID service.
Trust to SAP ID service in your subaccount is pre-configured in the Cloud Foundry environment of SAP Cloud
Platform by default, so you can start using it without further configuration. Optionally, you can add additional
trust settings, for example if you prefer to use another SAML 2.0 identity provider. Using the SAP Cloud
Platform cockpit, you can make changes by navigating to your subaccount and choosing
Security > Trust Configuration.
Note
If you do not intend to use SAP ID service, you must establish trust to your custom SAML 2.0 identity
provider. We describe a custom trust configuration using the example of SAP Cloud Platform Identity
Authentication service.
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2281]
Security Administrators in Your Subaccount [AWS, Azure, or GCP Regions] [page 2281]
This table displays the attribute settings of the identity provider and the values administrators
use to establish trust between the SAML 2.0 identity provider and a new subaccount.
Since there are multiple identity providers you can use, we display the parameters and values of SAP Cloud
Platform Identity Authentication service. Use the information in this table as a reference for the configuration
of your identity provider.
Settings of SAP Cloud Platform Identity Authentication Service for SAML 2.0 Trust
SAML 2.0 Configuration
Description: Configures the trust with a service provider using an uploaded metadata file.
Value: Upload the metadata file from the UAA service.

Default Name ID Format
Description: Configures the attribute that the identity provider uses to identify the users. The attribute is
sent as the name ID for the authenticated user in SAML assertions. Possible settings:
● User ID
● E-Mail
● Display Name
● Login Name
● Employee Number
Value: E-Mail
Note: We recommend that you use exactly this name ID format.

Select Assertion Attributes
Description: To define the user authorizations in the UAA service, provide the user groups in the assertion
attribute Groups (capitalized). This assertion attribute is required for the assignment of roles in the UAA
service. You can change the default names of the assertion attributes that the application uses to
recognize the user attributes. You can use multiple assertion attributes for the same user attribute.
Value: Select the Groups user attribute and enter Groups as assertion attribute. You must set this attribute
so that the assignment from role collection to user groups takes effect. For more information, see the
related link.
Caution: Use exactly this spelling:
● Groups
● first_name
● last_name
● mail
Example
In the following, you see what John Doe's SAML 2.0 assertion looks like if Default Name ID Format was set to
E-Mail and Assertion Attribute to Groups.
Sample Code
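As an illustration, a hand-written sketch of the relevant parts of such an assertion might look like the following; the e-mail address and group value are hypothetical, and a real assertion is signed and carries more elements:

```xml
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml2:Subject>
    <!-- Default Name ID Format: E-Mail -->
    <saml2:NameID
      Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      john.doe@example.com
    </saml2:NameID>
  </saml2:Subject>
  <saml2:AttributeStatement>
    <!-- Assertion attribute Groups, mapped to role collections by the UAA -->
    <saml2:Attribute Name="Groups">
      <saml2:AttributeValue>Sales_Representatives</saml2:AttributeValue>
    </saml2:Attribute>
  </saml2:AttributeStatement>
</saml2:Assertion>
```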
Related Information
Map Role Collections to User Groups [AWS, Azure, or GCP Regions] [page 2301]
You have configured trust configurations for multiple identity providers. You want to provide an understandable
link on the logon page so that business users know where to log on.
You want to make life easy for business users and provide a logon link that they can easily recognize. This
link provides an identity provider logon that enables them to log on to the application that they want to use.
1. Hide the default identity provider (SAP ID service) from the logon page.
Hide Logon Link for Default Identity Provider [page 2292]
2. Display the custom identity provider at logon time.
Display Logon Link for Custom Identity Provider for Business Users [page 2293]
3. (Optional) Give the logon link of the custom identity provider a name the business users understand.
Rename the Logon Link Text for Custom Identity Providers [page 2294]
You use one or multiple custom identity providers for business users as well as the default identity provider
primarily for platform users. To provide a good logon experience for your business users, you want to hide the
default identity provider, which remains active.
Prerequisites
● You have configured a custom trust configuration for a custom identity provider, for example SAP Cloud
Platform Identity Authentication Service, and set it to active.
● You've checked the Available for User Logon checkbox in your custom trust configuration.
● The default trust configuration (SAP ID service) is active.
Context
Procedure
1. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Trust
Configuration in the SAP Cloud Platform cockpit.
You see the list of trust configurations: the default trust configuration and the custom trust
configuration(s).
2. Choose (Edit) for the default trust configuration. It has the status Default.
3. To hide the default trust configuration (SAP ID service) for logon, uncheck the Available for User Logon
checkbox.
You want to display a logon link of the custom identity provider that business users should use to log on to an
application.
Context
Procedure
1. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Trust
Configuration in the SAP Cloud Platform cockpit.
You see the list of trust configurations: the default trust configuration and the custom trust
configuration(s).
2. Go to the desired trust configuration with the status Custom.
3. Choose (Edit) for the custom trust configuration.
4. Make sure that the status of your custom trust configuration is Active.
5. Check the Available for User Logon checkbox.
6. Save your changes.
Results
Whenever business users want to log on to their application, they see a logon link for the custom identity
provider that they should use.
You can provide a logon link and give the link a name the business users understand. That way, they know
which link they should use to log on.
Prerequisites
● You have already made your custom identity provider available for user logon (see the related link).
Context
You want to define an understandable name for the logon link of the custom identity provider that the business
users should use.
Procedure
1. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Trust
Configuration in the SAP Cloud Platform cockpit.
You see the list of trust configurations: the default trust configuration and the custom trust
configuration(s).
2. Choose the link of your custom identity provider. It has the Custom status.
3. Choose Edit.
4. Go to Link Text for User Logon and give the logon link an understandable name.
5. Save your changes.
Results
Whenever business users want to log on to their application, they see the logon link of the custom identity
provider that they should use. It has the caption you entered in Link Text for User Logon.
Related Information
Display Logon Link for Custom Identity Provider for Business Users [page 2293]
If you switch off the creation of shadow users in the trust configuration of custom identity providers,
administrators must explicitly allow users to log in. This gives them full control over who is allowed to log
in.
Context
Whenever a user authenticates at an application in your subaccount using any identity provider, the User
Account and Authentication service always stores user-related data provided by the identity provider in the
form of shadow users. The UAA uses details of the shadow users to issue tokens that refer to a certain user.
By default, the UAA allows any user of any connected identity provider to authenticate to applications in the
subaccount. When there is no corresponding shadow user, it automatically creates one based on the
information received from the identity provider.
Usually, you want your administrators to be fully aware of which users they allow to log in. If you have switched
off automatic creation of shadow users for a certain identity provider, only users whose shadow users have
been created explicitly can log in. You can create them in the SAP Cloud Platform cockpit, typically when
you assign the first role collection to them.
Procedure
1. In the SAP Cloud Platform cockpit, go to your subaccount (see Navigate to Orgs and Spaces [page 1751])
and choose Security > Trust Configuration.
2. Choose your custom trust configuration.
3. Choose Edit.
4. Go to Create Shadow Users During Login and remove the checkmark.
5. Save your changes.
From now on, the User Account and Authentication service does not automatically create shadow users for
this identity provider during login.
Related Information
Delete Shadow Users for Data Protection and Privacy [page 2307]
As an administrator of the Cloud Foundry environment of SAP Cloud Platform, you can maintain application
roles and role collections which can be used in user management.
The user roles are derived from role templates that are defined in the security descriptor
(xs-security.json) of applications that have been registered as OAuth 2.0 clients at the User Account and
Authentication service during application deployment. The application security descriptor file also contains
details of the authorization scopes that are used for application access and defines any attributes that need to
be applied. The roles you create can be added to role collections.
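For orientation, a minimal application security descriptor with one scope and one role template might look like the following sketch; the application name, scope, and role template are hypothetical examples:

```json
{
  "xsappname": "myapp",
  "scopes": [
    { "name": "$XSAPPNAME.Display", "description": "Display data" }
  ],
  "role-templates": [
    {
      "name": "Viewer",
      "description": "View application data",
      "scope-references": [ "$XSAPPNAME.Display" ]
    }
  ]
}
```

Because this role template contains no attribute references, the UAA generates a default role named Viewer for it automatically.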
Tip
Using the SAP Cloud Platform cockpit, you can assign the role collections to users logging on with SAML
2.0 assertions or SAP ID service.
Related Information
Context
A role is an instance of a role template; you can build a role based on a role template and assign the role to a
role collection. Role collections are then assigned to SAML 2.0 groups or users. The role template defines the
type of access permitted for an application, for example: the authorization scope, and any attributes that need
to be applied. Attributes define information that comes with the respective user, for example 'cost center' or
'country' (see the related link). This information can only be resolved at run time.
The User Account and Authentication service automatically creates default roles for all role templates that
do not include attribute references. They have the same name as the role template, and their description
contains Default Instance.
Procedure
Here you see a complete list of all roles sorted by role templates. It also contains information about
attributes used in this role and about the role collections the role has been added to. On the right side, you
find the action buttons.
Note
You cannot edit or delete predefined roles for default role collections. For this reason, the action
buttons are grayed out. For more information, see the related link.
5. To directly assign a role to role collections, choose (Add) in the same row.
A new window shows all role collections that are available in your subaccount.
6. Select the role collections to which you want to add your role.
7. Choose Add.
The number in the role collections column counts up. You have now assigned this role to the role
collections you selected earlier.
Related Information
Role
A role is an instance of a role template; you can build a role based on a role template and assign the role to a
role collection. The cockpit helps you to display information about the selected application and any related
roles in the following windows, tabs, and panes:
● Roles
● Scopes
● Attributes
● Role templates
Role Collection
Roles are assigned to role collections, which are in turn assigned to users or SAML 2.0 groups if a SAML 2.0
identity provider stores the users. Using the cockpit, you can display information about the role collections that
have been maintained as well as the roles available in a role collection. Additional information includes: which
templates the roles are based on, and which applications the roles apply to. Role collections enable you to
group together the roles you create. The role collections you define can be assigned as follows:
Role collections group together different roles that can be applied to the application users.
Context
Application developers have defined application-specific role templates in the security descriptor file. The role
templates contain the role definitions. You can assign the role to a role collection.
Tip
Application developers can even directly define role collections with default roles. For more information, see
Quick Start: Create Role Collections (with Predefined Roles) [page 2333].
Once you have created a role collection, you can pick the roles that apply to the typical job of a sales
representative. Since the roles are application-based, you must select the application to see which roles come
with the role template of this application. You are free to add roles from multiple applications to your role
collection.
Finally, you assign the role collection to the users provided by the SAP ID service or by your identity provider or
to SAML 2.0 user groups, for example, sales representatives.
Procedure
2. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Role
Collections.
3. To create a new role collection, choose New Role Collection and enter a name.
4. To add roles to your new role collection, edit it.
5. Choose Add Role.
6. SAP Cloud Platform prompts you to make the following mandatory entries:
1. Application Identifier
Since the application security descriptor contains the role template, you must first choose the
application identifier of the application that provides the role template with the role you want to
add.
2. Role Template
3. Role
7. Save your changes.
You have arranged roles in role collections, and now want to assign these role collections to business users.
Prerequisites
● You are using the following trust configurations (see the related links):
○ Default trust configuration (SAP ID service)
Procedure
Default trust configuration (SAP ID service) Directly Assign Role Collections to Users [page 2300]
Related Information
Prerequisites
● You have created role collections containing authorizations in the form of roles.
Context
If an application developer has defined the role templates accordingly, the roles come with the role templates
of the related applications. A role collection can group roles from a set of applications available to your
subaccount.
Note
Tip
If the user identifier you entered has never logged on to an application in this subaccount, SAP Cloud
Platform cannot automatically verify the user name. To avoid mistakes, you must check and confirm
that the user name is correct.
Running on the AWS, Azure, or GCP regions: You want to assign a role collection to a user group provided by an
SAML 2.0 identity provider that has a custom trust configuration in SAP Cloud Platform. In this case, the
assignment is a mapping of a user group to a role collection. Your identity provider provides the user groups
using the SAML assertion attribute called Groups. Each value of the attribute is mapped to a role collection as
described in this procedure.
Prerequisites
● You have configured your custom SAML 2.0 identity provider and established trust in your subaccount.
The name of the trust configuration is different from SAP ID service. The name of a custom trust
configuration to SAP Cloud Platform Identity Authentication service could be as follows:
https://<Identity_Authentication_tenant>.accounts.ondemand.com
● You have configured the identity provider so that it conveys the user's group memberships in the Groups
assertion attribute.
● You have created role collections containing authorizations in the form of roles.
Context
The SAML 2.0 identity provider provides the business users, who can belong to user groups. It is efficient to
map user groups to role collections. The role collection as a reusable element contains the authorizations that
are necessary for this user group. This saves time when you want to add a new business user. Simply add the
user to the respective user group or groups, and the business user automatically gets all the authorizations
that are included in the role collections.
For this reason, the assignment is a mapping of user groups to role collections.
Procedure
1. Go to your subaccount (see Navigate to Orgs and Spaces [page 1751]) and choose Security > Trust
Configuration.
2. Choose your custom trust configuration in your subaccount. This identity provider provides the user
groups which you want to assign to role collections.
3. Choose Role Collection Mappings.
4. Choose New Role Collection Mapping.
5. Choose the role collection you want to map and enter the name of the user group in the Value field.
Tip
You must know the name of the user group in the identity provider.
Example
In the SAP Cloud Platform Identity Authentication service, you find the user groups in the
administration console of your SAP Cloud Platform Identity Authentication service tenant under
Users & Authorizations > User Groups. Open the administration console of SAP Cloud Platform
Identity Authentication service using
https://<Identity_Authentication_tenant>.accounts.ondemand.com/admin.
Trust and Federation with SAML 2.0 Identity Providers [page 2281]
Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page
2283]
Federation Attribute Settings of Any Identity Provider [page 2289]
Attributes use information that is specific to the user, for example the user's country. If the application
developer in the Cloud Foundry environment of SAP Cloud Platform has added a country attribute to a role,
this attribute restricts the data a business user can see.
Many applications provide purely functional role templates, which grant access to all data of a certain type
within your subaccount. Roles for such role templates are generated automatically. Some other applications
also provide the possibility for administrators to restrict access not only by functional authorizations, but also
by instance-based authorizations. That means that users can only work with a certain subset of the data in
your subaccount.
The restriction can be either based on information within the respective role, or on user-specific information
provided by the identity provider. This makes instance-based authorizations specific for each customer
because the respective roles cannot be generated automatically. Instead, administrators must create them.
Typical restrictions depend on information like the user's country or cost center.
Each restriction is represented by a dedicated attribute which belongs to a role template of the application.
Note
Application developers can define attribute references in the application security descriptor file
(xs-security.json). For more information, see Application Security Descriptor Configuration Syntax [page
2318].
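A role template with an attribute reference might be declared as in the following sketch; the attribute name Country, the role template, and the scope are hypothetical examples:

```json
{
  "xsappname": "myapp",
  "attributes": [
    { "name": "Country", "description": "User's country", "valueType": "string" }
  ],
  "role-templates": [
    {
      "name": "CountryViewer",
      "description": "View data for one country only",
      "scope-references": [ "$XSAPPNAME.Display" ],
      "attribute-references": [ "Country" ]
    }
  ]
}
```

Because this role template references an attribute, the UAA cannot generate a default role for it; administrators must create the role and supply the attribute value.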
For each attribute, administrators have multiple options to specify the value which restricts data access:
● Static attributes
They are stored in the role. The user is given the attribute value when the administrator assigns the role
collection with this role to the user.
● Attributes from a custom identity provider
Since an identity provider provides the business users, you can dynamically reference all attributes that
come with the SAML 2.0 assertion. You define the attributes and the attribute values in the identity
provider. In the Cloud Foundry environment of SAP Cloud Platform, administrators can use the assertion
attributes to refine the roles.
Note
If you want to reference attributes from your identity provider, you must know the exact identifier of the
assertion attribute. Go to the SAML 2.0 configuration of your identity provider and use the assertion
attributes as they are defined there.
Related Information
As an administrator of the Cloud Foundry environment, you can specify attributes in roles to refine
authorizations of the business users. Depending on these attributes, business users with this role have
restricted access to data.
Prerequisites
You have maintained the attributes of the users in your identity provider if you want to use the identity provider
as the source of the attributes.
Note
In SAP Cloud Platform Identity Authentication Service or any SAML identity provider, you find the
attributes in the SAML 2.0 configuration.
Procedure
6. Choose (Edit).
The role overview pane displays the attribute name and fields where you can select the source and enter a
value.
7. Choose Edit.
Attribute Sources
Static: Enter a static value, for example USA, to refine the role depending on the country.
Applications .
Example
To use the assertion attribute for cost center, you must enter the value cost_center.
Related Information
Subscribe to Multitenant Applications in the Cloud Foundry Environment Using the Cockpit [page 1036]
As an administrator of the Cloud Foundry environment, you can specify attributes in a new role to refine
authorizations of business users. Depending on these attributes, business users with this role have restricted
access to data.
Prerequisites
You have maintained the attributes of the users in your identity provider if you want to use the identity provider
as the source of the attributes.
In SAP Cloud Platform Identity Authentication Service or any SAML identity provider, you find the
attributes in the SAML 2.0 configuration.
Procedure
Attribute Sources
Static: Enter a static value, for example USA, to refine the role depending on the country.
Applications .
Example
To use the assertion attribute for cost center, you must enter the value cost_center.
Related Information
Subscribe to Multitenant Applications in the Cloud Foundry Environment Using the Cockpit [page 1036]
Maintain Role Collections [page 2298]
The User Account and Authentication service of the Cloud Foundry environment stores shadow users, which
contain personal data. Data privacy regulations or policies may require you to delete this data, for example,
when the user has left your organization.
Prerequisites
● xs_user.read
● xs_user.write
Context
Note
When handling personal data, consider the legislation in the various countries where your organization
operates. After the data has passed the end of purpose, regulations might require you to delete the data.
For more information on data protection and privacy, see the related link.
Tip
As an administrator, you can switch off the creation of shadow users in the trust configuration of your
identity provider. However, users can’t log in until you explicitly create their shadow users. For more
information, see the related link.
To delete shadow users, you set up access to the API and then use the SCIM REST APIs to retrieve and delete
shadow users.
Use the following procedure:
Procedure
For more information, see Enable API Access to an XSUAA Configuration [page 2347].
2. To retrieve and delete the shadow users, use the GET and DELETE methods of the System for Cross-domain
Identity Management (SCIM) interface of the platform API of the Authorization and Trust Management
service.
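As a rough sketch of step 2, the following Python snippet constructs (but does not send) the corresponding SCIM requests. The host, token, and user ID are hypothetical placeholders, and the /Users endpoint path is an assumption to verify against the API documentation of your landscape:

```python
import urllib.request

# Hypothetical values: substitute your subaccount's authentication domain and
# an OAuth token that carries the xs_user.read / xs_user.write scopes.
base_url = "https://mysubaccount.authentication.eu10.hana.ondemand.com"
headers = {"Authorization": "Bearer <access_token>"}

# GET: list shadow users via the SCIM Users endpoint.
list_req = urllib.request.Request(
    f"{base_url}/Users", headers=headers, method="GET")

# DELETE: remove one shadow user by its SCIM id.
delete_req = urllib.request.Request(
    f"{base_url}/Users/<scim_user_id>", headers=headers, method="DELETE")

print(list_req.method, list_req.full_url)
print(delete_req.method, delete_req.full_url)
# Fill in the placeholders, then send with urllib.request.urlopen(...).
```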
Related Information
The Authorization and Trust Management service maintains a number of secrets to ensure secure operation of
the service. Your organization can have policies that require you to change secrets, or you may need to respond
to the loss of a secret.
The Authorization and Trust Management service uses the following secrets in its operation:
Related Information
When an application consumes a service instance of Authorization and Trust Management service (XSUAA),
the application identifies itself to the service instance with a client ID and client secret. The client ID and client
secret are the credentials with which an application authenticates itself to the service instance.
The system creates these credentials either when you bind the application to the service instance or when you
create a service key for the service instance.
With the application plan, the service instance can use multiple types of secrets.
● The instance secret is the default secret. The secret is the same for all bindings of the service instance. The
secret remains valid as long as the instance exists.
● The binding secret must be enabled in the application security descriptor (xs-security.json) when you
deploy the application. The secret remains valid as long as the binding or the service key exists.
Note
The apiaccess plan only uses binding secrets. However, some old instances of the API access plan might
still use the instance secret.
The following figure illustrates the XSUAA app and its OAuth 2.0 client as part of an Authorization and Trust
Management service instance. A consuming application is bound to the Authorization and Trust Management
service instance. The instance secret is part of the environment of the consuming application and the OAuth
2.0 client. Alternatively, this information is saved as part of a service key.
The credential-types parameter of the OAuth client configuration in the application security descriptor
(xs-security.json) determines which secrets bindings support.
In the following example, the service instance creates a binding secret for all new bindings, but still accepts the
instance secret.
Example
Sample Code
"oauth2-configuration": {
"credential-types": ["binding-secret","instance-secret"]
}
In the following example, the service instance creates a binding secret for all new bindings, but doesn’t accept
the instance secret.
Example
Sample Code
"oauth2-configuration": {
"credential-types": ["binding-secret"]
}
Related Information
Service instances of the Authorization and Trust Management service use different binding secrets for each
binding. To rotate binding secrets, unbind and rebind any consuming applications.
Prerequisites
Procedure
Results
When you rebind the application, the system creates a new binding secret.
Next Steps
Delete any old service keys that you don't need anymore.
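With the Cloud Foundry CLI, the rotation might look as follows; application, instance, and key names are placeholders:

```shell
# Rebinding creates a new binding secret
cf unbind-service myapp myapp-uaa
cf bind-service myapp myapp-uaa
cf restage myapp

# Replace a service key: create a new one, then delete the old one
cf create-service-key myapp-uaa new-key
cf delete-service-key myapp-uaa old-key
```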
Related Information
To simplify the management of secrets for service instances of the Authorization and Trust Management
service, we recommend that you configure service instances to use binding secrets.
Context
By default, service instances of the Authorization and Trust Management service use the instance secret for all
bindings of the service instance. In the application security descriptor (xs-security.json), you can enable
binding secrets so that each binding has its own secret. You can enable both secret types at once for the
following plans:
● Application plan
Procedure
1. Modify the application security descriptor (xs-security.json) to support both instance secrets and
binding secrets.
Sample Code
"oauth2-configuration": {
"credential-types": ["binding-secret","instance-secret"]
}
2. Update the service instance with the new application security descriptor.
3. Unbind and rebind any consuming applications.
With each new binding, the system creates a new binding secret.
4. Replace any service keys with new service keys.
At this point, none of the applications consuming your service instance need the instance secret anymore.
5. Modify the application security descriptor (xs-security.json) to disable support for instance secrets.
Sample Code
"oauth2-configuration": {
"credential-types": ["binding-secret"]
}
6. Update the service instance with the new application security descriptor.
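The procedure might look as follows with the Cloud Foundry CLI; instance, application, and key names are placeholders:

```shell
# Steps 1-2: accept both secret types
cf update-service myapp-uaa -c xs-security.json

# Step 3: rebind consuming applications; each new binding gets a binding secret
cf unbind-service myapp myapp-uaa
cf bind-service myapp myapp-uaa

# Step 4: replace old service keys
cf create-service-key myapp-uaa new-key
cf delete-service-key myapp-uaa old-key

# Steps 5-6: after removing "instance-secret" from credential-types
cf update-service myapp-uaa -c xs-security.json
```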
A service instance of the Authorization and Trust Management service uses the same instance secret for all
bindings. To rotate instance credentials, create a new service instance and then rebind all the applications that
consume the old instance to the new instance.
Context
Recommendation
We recommend that you migrate to using binding secrets instead of instance secrets. If you lose one
binding secret, then you don't need to replace all bindings.
For more information, see Migrate from Instance Secrets to Binding Secrets [page 2312].
Procedure
1. Create a new service instance of the Authorization and Trust Management service.
2. Rebind the applications that consume the old service instance to the new instance.
Related Information
Developers create authorization information for business users in their environment and deploy this
information in an application. They make this available to administrators, who complete the authorization
setup and assign the authorizations to business users.
Developers store authorization information as design-time role templates in the security descriptor file xs-
security.json. Using the cockpit, administrators of the environment assign the authorizations to business
users.
You have already prepared an application that you want to deploy in the Cloud Foundry environment of SAP
Cloud Platform (see the related link).
Application security is maintained in the xs-security.json file, which is located in the root folder of the
application:
Example
Output Code
AppName
|- web/
| |- xs-app.json
| \- resources/
\- xs-security.json # Security deployment artifacts/scopes/auths
\- [manifest.yml]
Business users in an application of the Cloud Foundry environment at SAP Cloud Platform should have
different authorizations because they work in different jobs.
For example in the framework of a leave request process, there are employees who want to create and submit
leave requests, managers who approve or reject, and payroll administrators who need to see all approved leave
requests to calculate the leave reserve.
The authorization concept of a leave request application should cover the needs of these three employee
groups. This authorization concept includes elements such as roles, scopes, and attributes.
Authorization in the application router and runtime container is handled by scopes. Scopes are groupings of
functional authorizations defined by the application.
From a technical point of view, the resource server (the relevant security container API) provides a set of
services (the resources) for functional authorizations. The functional authorizations are organized in scopes.
Scopes cover business users’ authorizations in a specific application. They are deployed, for example, when
you deploy an update of the application. The security descriptor file xs-security.json contains the
application-specific “local” scopes (and, if applicable, also “foreign” scopes, which are valid for other defined
applications).
Scopes provide a set of role templates for a named application. The role templates contain the authorizations
for business users’ activities, such as viewing, editing, or deleting data. Information retrieved from the user's
identity (such as department or cost center) is stored in attributes.
After the developer has created the role templates and deployed them to the relevant application, it is the
Cloud Foundry administrator’s task to use the role templates to build roles, aggregate roles into role
collections, and assign the role collections to business users in the application.
For access to SAP HANA objects, the privileges of the technical user (container owner) apply; for business
data, instance-based authorizations (CDS access policies) are used.
Related Information
A file that defines the details of the authentication methods and authorization types to use for access to your
application.
The xs-security.json file uses JSON notation to define the security options for an application; the
information in the application-security file is used at application-deployment time, for example, for
authentication and authorization (see the related links). Applications can perform scope checks (functional
authorization checks with Boolean result) and checks on attribute values (instance-based authorization
checks). The xs-security.json file declares the scopes and attributes on which to perform these checks.
This enables the User Account and Authentication (UAA) service to limit the size of an application-specific
JSON Web Token (JWT) by adding only relevant content.
Scope checks can be performed by the application router (declarative) and by the application itself
(programmatic, for example, using the container API). Checks using attribute values can be performed by the
application (using the container API) and on the database level.
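A programmatic scope check reduces to testing whether the scope list in the access token contains the required application-prefixed scope. A simplified Node.js sketch follows; the token payload and scope names are illustrative, and a real application validates the JWT and uses the container security API rather than inspecting the payload directly:

```javascript
// Simplified sketch of a programmatic scope check on a decoded JWT payload.
// A real application validates the token first; here the payload is hard-coded.
function hasScope(tokenPayload, requiredScope) {
  return Array.isArray(tokenPayload.scope) &&
    tokenPayload.scope.includes(requiredScope);
}

// "node-hello-world" stands in for the deployed application's xsappname
const payload = {
  scope: ["node-hello-world.Display", "node-hello-world.Edit"]
};

console.log(hasScope(payload, "node-hello-world.Display")); // true
console.log(hasScope(payload, "node-hello-world.Delete"));  // false
```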
The contents of the xs-security.json are used to configure the OAuth 2.0 client; the configuration is shared
by all components of an SAP multi-target application. The contents of the xs-security.json file cover the
following areas:
● Authorization scopes
A list of limitations regarding privileges and permissions and the areas to which they apply
● Attributes
Attributes refine business users' authorizations according to attributes that come with the business users.
You can use, for example, fixed attribute values, such as 'country equals USA', or SAML attributes (AWS,
Azure, or GCP regions).
● Role templates
A description of one or more roles to apply to a user and any attributes that apply to the roles
● Tenant mode
This option is only relevant if you use tenants.
● OAuth 2.0 configuration
Use this property to set custom parameters for tokens.
Sample Code
{
"xsappname" : "node-hello-world",
"scopes" : [ {
"name" : "$XSAPPNAME.Display",
"description" : "display" },
{
"name" : "$XSAPPNAME.Edit",
"description" : "edit" },
{
"name" : "$XSAPPNAME.Delete",
"description" : "delete" }
],
"attributes" : [ {
"name" : "Country",
"description" : "Country",
"valueType" : "string" },
{
"name" : "CostCenter",
"description" : "CostCenter",
"valueType" : "int" }
],
"role-templates": [ {
"name" : "Viewer",
"description" : "View all books",
"scope-references" : [
"$XSAPPNAME.Display" ],
"attribute-references": [ "Country" ]
},
{ "name" : "Editor",
"description" : "Edit, delete books",
"scope-references" : [
"$XSAPPNAME.Edit",
"$XSAPPNAME.Delete" ],
"attribute-references" : [
"Country",
"CostCenter"]
}
]
}
Scopes
Scopes are assigned to users by means of security roles, which are mapped to the user group(s) to which the
user belongs. Scopes are used for authorization checks by the application router.
● Application router
URL prefix patterns can be defined and a scope associated with each pattern. If one or more matching
patterns are found when a URL is requested, the application router checks whether the OAuth 2.0 access
token included in the request contains at least the scopes associated with the matching patterns. If not all
scopes are found, access to the requested URL is denied. Otherwise, access to the URL is granted.
● Application container
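The declarative check in the application router is configured per route in the xs-app.json file. A hedged sketch follows; the route source, destination name, and scope are illustrative:

```json
{
  "routes": [
    {
      "source": "^/admin/(.*)$",
      "destination": "backend",
      "scope": "$XSAPPNAME.Delete"
    }
  ]
}
```

A request matching ^/admin/ is only forwarded if the OAuth 2.0 access token contains the Delete scope.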
Attributes
You can define attributes so that you can perform checks based on a source that is not yet defined. In the
example xs-security.json file included here, the check is based on a cost center, whose name is not known
because it differs according to context.
Role Templates
A role template describes a role and any attributes that apply to the role.
If a role template contains attributes, you must instantiate the role template. This is especially true with
regards to any attributes defined in the role template and their concrete values, which are subject to
customization and, as a result, cannot be provided automatically. Role templates that only contain local scopes
can be instantiated without user interaction. The same is also true for foreign scopes where the scope “owner”
has declared his or her consent in a kind of white list (for example, either for “public” use or for known
“friends”).
Note
The resulting application-specific role instance needs to be assigned to one or more user groups.
Related Information
The syntax required to set the properties and values defined in the xs-security.json application-security
description file.
xsappname
Use the xsappname property to specify the name of the application that the security description applies to.
"xsappname" : "<app-name>",
Naming Conventions
Bear in mind the following restrictions regarding the length and content of an application name in the xs-
security.json file:
● The following characters can be used in an application name of the Cloud Foundry environment at SAP
Cloud Platform: “aA”-“zZ”, “0”-“9”, “-” (dash), “_” (underscore), “/” (forward slash), and “\” (backslash).
● The maximum length of an application name is 128 characters.
tenant-mode
Use the custom tenant-mode property to define the way the tenant's OAuth clients get their client secrets.
During the binding of the xsuaa service, the UAA service broker writes the tenant mode into the credential
section. The application router uses the tenant mode information for the implementation of multitenancy with
the application service plan.
Sample Code
{
  "xsappname" : "<application_name>",
  "tenant-mode" : "shared",
  "scopes" : [
    {
      "name" : "$XSAPPNAME.Display",
      "description" : "display"
    }
  ]
}
Syntax
"tenant-mode" : "shared"
Tenant Modes
● dedicated (default): An OAuth client gets a separate client secret for each subaccount.
● shared: An OAuth client always gets the same client secret. It is valid in all subaccounts. The application
service plan uses this tenant mode.
If you do not specify tenant-mode in the xs-security.json, the UAA uses the dedicated tenant
mode.
scopes
In the application security file (xs-security.json), the scopes property defines an array of one or more
security scopes that apply for an application. You can define multiple scopes; each scope has a name and a
short description. The list of scopes defined in the xs-security.json file is used to authorize the OAuth
client of the application with the correct set of local and foreign scopes; that is, the permissions the
application requires to be able to respond to all requests.
Sample Code
"scopes": [
{
"name" : "$XSAPPNAME.Display",
"description" : "display"
},
{
"name" : "$XSAPPNAME.Edit",
"description" : "edit"
},
{
"name" : "$XSAPPNAME.Delete",
"description" : "delete",
"granted-apps" : [ "$XSAPPNAME(application,business-partner)"]
}
]
Local Scopes
All scopes in the scopes section are local, that is, application specific. Local scopes are checked by the
application's own application router or checked programmatically within the application's runtime container. In
the event that an application needs
access to other services of the Cloud Foundry environment on behalf of the current user, the provided access
token must contain the required foreign scopes. Foreign scopes are not provided by the application itself; they
are checked by other sources outside the context of the application.
In the xs-security.json file, “local” scopes must be prefixed with the variable $XSAPPNAME. At run time,
the variable is replaced with the name of the corresponding application.
Tip
Usually, “foreign” scopes include the service plan and the name of the application to which the scope belongs.
For more information, see Referencing the Application. Use the following syntax:
$XSAPPNAME(<service_plan>,<xsappname>).<local_scope_name>
Example
"$XSAPPNAME(application,business-partner).Create"
If you want to grant a scope from this application to another application for a user scenario, this application
needs to grant access to the scope for the application that wants to use this scope. Using the granted-apps
property in the scopes section, you can specify the application you want to grant your scope to. The other
application (referenced as <service_plan>,<foreign_xsappname>) receives the scope as a
“foreign” scope. For more information, see the related link.
Here is the syntax in the security descriptor of the application that grants the scope. For more information, see
Referencing the Application.
"granted-apps" : [ "$XSAPPNAME(<service_plan>,<foreign_xsappname>)"]
Example
"granted-apps" : [ "$XSAPPNAME(application,business-partner)"]
If you want to grant a scope from this application to another application for a client credentials scenario, use
the grant-as-authorities-to-apps property in the scopes section. In this case, the scopes are granted
as authorities. Specify the other application by name. For more information, see the related link.
Here is the syntax in the security descriptor of the application that grants the scope:
"grant-as-authorities-to-apps" :
[ "$XSAPPNAME(<service_plan>,<foreign_xsappname>)"]
Example
"grant-as-authorities-to-apps" : [ "$XSAPPNAME(application,business-partner)"]
Naming Conventions
Bear in mind the following restrictions regarding the length and content of a scope name:
● The following characters can be used in a scope name: “aA”-“zZ”, “0”-“9”, “-” (dash), “_” (underscore), “/”
(forward slash), “\” (backslash), “:” (colon), and “.” (dot).
● Scope names cannot start with a leading dot “.” (for example, .myScopeName1).
● The maximum length of a scope name, including the fully qualified application name, is 193 characters.
attributes
In the application security file (xs-security.json), the attributes property enables you to define an
array, listing one or more attributes that are applied to the role templates also defined in the xs-
security.json file. You can define multiple attributes.
Sample Code
"attributes" : [
{
"name" : "Country",
"description" : "Country",
"valueType" : "s" },
{
"name" : "CostCenter",
"description" : "CostCenter",
"valueType" : "string" }
],
The attributes element is only relevant for a user scenario. Each element of the attributes array defines an
attribute. These attributes can be referenced by role templates. There are multiple sources of attributes:
● Static attributes
Use the cockpit to assign a static (fixed) value to the attribute.
● Attributes from an SAML identity provider
If a SAML identity provider provides the users, you can reference a SAML assertion attribute. The SAML
assertion is issued by the configured identity provider during authentication. You find the SAML attribute
value in the SAML configuration of your identity provider. The attributes provided by the SAML identity
provider appear as a SAML assertion attribute in the JSON Web Token if the user has been assigned the
corresponding roles. You can use the assertion attributes to achieve instance-based authorization checks
when using an SAP HANA database.
● Unrestricted attributes
In this case, you want to express that it is not necessary to set a specific value for this attribute. The
behavior is the same as if the attribute would not exist for this role.
attributes Parameters
● valueType: The type of value expected for the defined attribute; possible values are string (or s), int
(integer), or date. Example: int
Note
If you set the valueRequired key to true or if you leave out valueRequired, you can generate a default
role and define values in default-values of the attribute-references key in the role-templates
element. Administrators must define attribute values in the cockpit.
Naming Conventions
Bear in mind the following restrictions regarding the length and content of attribute names in the xs-
security.json file:
● The following characters can be used to declare an xs-security attribute name in the Cloud Foundry
environment: “aA”-“zZ”, “0”-“9”, “_” (underscore)
● The maximum length of a security attribute name is 64 characters.
role-templates
In the application-security file (xs-security.json), the role-templates property enables you to define an
array listing one or more roles (with corresponding scopes and any required attributes), which are required to
access a particular application module. You can define multiple role-templates, each with its own scopes
and attributes.
Role templates can be delivered with default values for each attribute reference. When you deploy role
templates with default values for each attribute reference, you create default roles.
Sample Code
"role-templates": [
  {
    "name" : "Viewer",
    "description" : "View all books",
    "default-role-name" : "Viewer: Authorized to Read All Books",
    "scope-references" : [
      "$XSAPPNAME.Display"
    ],
    "attribute-references": [
      "Country"
    ]
  }
]
A role template must be instantiated. This is especially true with regard to any attributes defined in the role
template and the specific attribute values, which are subject to customization and, as a result, cannot be
provided automatically. Role templates that only contain “local” scopes can be instantiated without user
interaction. The same is also true for “foreign”scopes where the scope owner has declared consent in a kind of
white list (for example, either for “public” use or for known “friends”).
Note
The resulting (application-specific) role instances need to be assigned to the appropriate user groups.
role-template Parameters
● name: The name of the role to build from the role template. Example: Viewer
● attribute-references: One or more attributes to apply to the built role. You can use a JSON array of
objects or of strings. If you use an array of objects, name is the name of the attribute. Use the attribute
name specified in the attributes section. Example: "name" : "Country", "default-values" :
["USA", "Germany"] (an array of objects) or Country (an array of strings).
Tip
If you want to deliver attributes that are not restricted by attribute values, set
"valueRequired": false.
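The two forms of attribute-references described above can be written as follows; the attribute name and default values are illustrative:

```json
"attribute-references": [
  { "name": "Country", "default-values": ["USA", "Germany"] }
]
```

The equivalent array-of-strings form is "attribute-references": ["Country"], without default values.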
Naming Conventions
Bear in mind the following restrictions regarding the length and content of role-template names in the xs-
security.json file:
● The following characters can be used to declare an xs-security role-template name: “aA”-“zZ”, “0”-“9”, “_”
(underscore)
● The maximum length of a role-template name is 64 characters.
role-collections
The optional role-collections property enables you to define role collections with predefined roles.
Administrators use these predefined role collections. They can assign them to users during the initial setup of
SAP Cloud Platform.
The role-collections property only makes sense if application developers reference role templates that
can create default roles at deployment time.
Note
The role-collections element can only reference role templates of the same subaccount.
"role-collections": [
{
"name": "Employee",
"description": "Employee roles",
"role-template-references": [
"$XSAPPNAME.Employee"
]
}
]
role-collections Parameters
Example
$XSAPPNAME(application,myapp2)
Naming Conventions
Bear in mind the following restrictions regarding the length and content of role-collections names in the
xs-security.json file:
Tip
We recommend that you reference all role templates using either the $XSAPPNAME or
$XSAPPNAME(<service_plan>,<XSAPPNAME>) prefix to link to the correct application where the
role template is defined (see Referencing the Application).
Conditions
The role templates must fulfill one of the following conditions regarding attributes:
authorities
To enable one (sending) application to accept and use the authorization scopes granted by another application,
each application must be bound to its own instance of the xsuaa service. This is required to ensure that the
applications are registered as OAuth 2.0 clients at the UAA. You must also add an authorities property to
the security descriptor file of the sending application that is requesting an authorization scope. In the
authorities property of the sending application's security descriptor, you can either specify that all scope
authorities configured as grantable in the receiving application should be accepted by the application
requesting the scopes, or alternatively, only individual, named, scope authorities:
● Request and accept all authorities flagged as "grantable" in the receiving application's security descriptor:
Sample Code
"authorities":["$ACCEPT_GRANTED_AUTHORITIES"]
● Request and accept only the "specified" scope authority that is flagged as grantable in the specified
receiving application's security descriptor. For more information, see Referencing the Application.
Sample Code
"authorities":["<ReceivingApp>.ForeignCall",
"<ReceivingApp2>.ForeignCall"]
Since both the sending application and the receiving application are bound to UAA services using the
information specified in the respective application's security descriptor, the sending application can accept and
use the specified authority from the receiving application. The sending application is now authorized to access
the receiving application and perform some tasks.
Note
The granted authority always includes the prefixed name of the application that granted it. This information
tells the sending application which receiving application has granted which set of authorities.
The scope authorities listed in the sending application's security descriptor must be specified as "grantable" in
the receiving application's security descriptor.
oauth2-configuration
Use the custom oauth2-configuration property to define custom values for the OAuth 2.0 clients, such as
the token validity and redirect URIs.
The xsuaa service broker registers and uses these values for the configuration of the OAuth 2.0 clients.
Sample Code
"oauth2-configuration": {
"token-validity": 43200,
"redirect-uris": [
"http://<host_name1>",
"http://<host_name2>"],
"refresh-token-validity": 1800,
"system-attributes": ["groups","rolecollections"],
"allowedproviders": ["origin_key1","origin_key2"]
}
oauth2-configuration Parameters
● token-validity: Token validity in seconds. If you do not define a value, the default of 900 seconds (15
minutes) is used.
Recommendation
We recommend that you configure allowed
identity providers on subaccount level. For
more information, see Provide Logon Link
Help to Identity Provider for Business Users
[page 2291].
If you want, for example, to grant scopes to an application, you must reference this application. Depending on
where the application is located, you can reference an application in multiple ways:
Application References
Note
Currently, you can only use the application service plan.
Example
$XSAPPNAME(application,business-partner)
Properties
"granted-apps" : ["$XSAPPNAME(application,business-partner1)",
"$XSAPPNAME(application,business-partner2)"]
"grant-as-authorities-to-apps" : ["$XSAPPNAME(application,business-partner1)",
"$XSAPPNAME(application,business-partner2)"]
"foreign-scope-references": ["$XSAPPNAME(application,business-partner1)",
"$XSAPPNAME(application,business-partner2)"]
Related Information
As an application developer, you want to create role collections for immediate use. You want to deliver role
collections that administrators can use in the cockpit, and easily assign to users, for example in an onboarding
process.
Prerequisites
● You have the Space Developer role in this subaccount (see the related link).
Context
You define the role collections in the application security descriptor file (xs-security.json). These role
collections reference role templates. As soon as you've deployed your application, the cockpit displays the role
collections. They contain predefined roles.
Sample Code
{
"role-templates": [
{
"name": "Viewer",
"description": "View Users",
"scope-references": [
"$XSAPPNAME.Display"
]
},
{
"name": "Manager",
"description": "Maintain Users",
"scope-references": [
"$XSAPPNAME.Display",
"$XSAPPNAME.Update"
]
}
],
"role-collections": [
{
"name": "UserManagerRC",
"description": "User Manager Role Collection",
"role-template-references": [
"$XSAPPNAME.Viewer",
"$XSAPPNAME.Manager"
]
}
]
}
3. Go to the folder where the application security descriptor (xs-security.json) file is stored.
4. To deploy the security information, create a service using your xs-security.json file.
Example
5. (If you do not use a manifest file) Bind your application to the service.
Example
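Steps 4 and 5 might look as follows with the Cloud Foundry CLI; the service instance and application names are placeholders:

```shell
# Step 4: deploy the security information as an xsuaa service instance
cf create-service xsuaa application myapp-uaa -c xs-security.json

# Step 5: bind the application if you do not use a manifest file
cf bind-service myapp myapp-uaa
```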
You have created a role collection that is visible in the cockpit. It contains predefined roles. Using the
cockpit, administrators can assign this role collection to users.
Developers create authorization information for business users in their environment; the information is
deployed in an application and made available to administrators who complete the authorization setup and
assign the authorizations to business users.
Developers store authorization information as design-time role templates in the security descriptor file xs-
security.json. Using the xsuaa service broker, they deploy the security information to the xsuaa service.
The administrators view the authorization information in role templates, which they use as part of the run-time
configuration. The administrators use the role templates to build roles, which are aggregated in role collections.
The role collections are assigned, in turn, to business users.
The tasks required to set up authorization artifacts are performed by two distinct user roles: the application
developer and the administrator of the Cloud Foundry environment. After the deployment of the authorization
artifacts as role templates, the administrator of the application uses the role templates provided by the
developers for building role collections and assigning them to business users.
Note
To test authorization artifacts after deployment, developers can use the role templates to build role
collections and assign authorization to business users in an authorization tool based on a REST API.
Tasks of the application developer:
1. Specify the security descriptor file containing the … (tool: text editor).
(If applicable) If you want to create an OAuth 2.0 client in an application-related subaccount, you must
use a separate security descriptor file where you specify the subaccount (tool: CF command line
interface).
2. Create role templates for the application using the … (tool: text editor).
3. Create a service instance from the xsuaa service using the service broker (tool: CF command line
interface).
4. Bind the service instance to the application by in… (tool: text editor).
Tasks of the administrator of the Cloud Foundry environment:
1. Use an existing role or create a new one using role templates (tool: SAP Cloud Platform cockpit or
command line interface for SAP Cloud Platform). See Add Roles to Role Collections [page 2296].
2. Create a role collection and assign roles to it (tool: SAP Cloud Platform cockpit or command line
interface for SAP Cloud Platform). See Maintain Role Collections [page 2298].
3. Assign the role collections to users (tool: SAP Cloud Platform cockpit or command line interface for SAP
Cloud Platform). See Managing Authorizations (China (Shanghai) region) or Directly Assign Role
Collections to Users [page 2300].
4. (If you do not use SAP ID Service) Assign the role collections to SAML 2.0 user groups (AWS, Azure, or
GCP regions) (tool: SAP Cloud Platform cockpit).
5. Assign the role collection to the business users provided by a SAML 2.0 identity provider (AWS, Azure, or
GCP regions) (tool: SAP Cloud Platform cockpit).
Related Information
The functional authorization scopes for your application need to be known to the User Account and
Authentication (UAA) service. To do this, you declare the scopes of your application in the security descriptor
file named xs-security.json and then you assign the security descriptor file to the application.
Prerequisites
● You have the Space Developer role in this subaccount (see the related link).
Context
You have created an OAuth 2.0 client using the service broker. The OAuth 2.0 client includes the following
configuration:
Restriction
This is only available if there are PaaS tenants and if the default SAML 2.0 identity provider, which
is SAP ID service, is used.
When you use the cf create-service command, you create an OAuth 2.0 client with a service instance and
you specify the xs-security.json file that is relevant for your application. Using this security descriptor file,
the OAuth 2.0 client that has been created for your application receives permission to access the scopes of
your application automatically.
Procedure
1. Go to the folder where you want to store your application security descriptor file.
You can now create a service instance using the xs-security.json file.
Related Information
Using the application security descriptor file (xs-security.json), you define role templates, authorization
scopes, and attributes.
Context
Procedure
Related Information
To run an application for multiple tenants, you need to create copies of the OAuth 2.0 client in an identity
zone.
Context
The User Account and Authentication service supports the concept of identity zones to isolate OAuth 2.0
clients, SAML trust relationships, and local users. An identity zone is identified by a unique host name and a
domain. The design-time artifacts, like scope, attribute, and role template, are shared, while the run-time
entities, role and role collection, are distinct for every identity zone.
Procedure
Create a service instance from the xsuaa service using the service broker.
Context
If you want to grant users access to an application, you must create a service instance (acting as OAuth 2.0
client) for this application.
Example
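A sketch using the Cloud Foundry CLI; the instance name myapp-uaa and the use of the application service plan are assumptions:

```shell
cf create-service xsuaa application myapp-uaa -c xs-security.json
```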
Note
If you want to update an already existing service instance, see the related link.
Related Information
You can update a service instance from the xsuaa service using the service broker.
Context
You are running a service instance that grants user access to an application. It uses the security descriptor file
xs-security.json. If you changed properties in the xs-security.json file, you want to reflect these
compatible changes in the existing service instance.
Procedure
1. Edit the xs-security.json file and make your changes in the security descriptors.
2. Update the service instance. During the update, you use the security descriptors you changed in the xs-security.json file.
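The update step might, for example, look like this (the instance name myapp-uaa is illustrative):

```shell
# Apply the changed security descriptor to the existing instance
cf update-service myapp-uaa -c xs-security.json
```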
If you update a service instance, you can add, change, and/or delete scopes, attributes, and role templates.
Whenever you change role templates, you adapt roles that are derived from the changed role template.
Scope
● Add
● Delete
● Change
○ description
○ granted-apps

Attribute
● Add
● Delete
● Change
○ description
Note
Do not change the valueType of the attribute.
You must bind the service instance you created to your application using the manifest file.
Context
You must bind the service instance that acts as OAuth 2.0 client to your application. You find it under name in
the related manifest file. This file is located in the folder of your application, for example hello-world.
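As a sketch, the services section of such a manifest might reference the instance like this (the application and instance names are illustrative):

```yaml
# manifest.yml (illustrative): the application declares the
# xsuaa service instance it should be bound to
applications:
- name: hello-world
  services:
  - myapp-uaa
```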
Procedure
1. If you haven't already done so, deploy the application. Go to the root folder of your application and use the
following command:
cf push
2. To bind the service instance to your application, use the following command:
cf bind-service <APP> <SERVICE_INSTANCE>
Example
You have now deployed your application with the authorization artifacts that have been created earlier. The
application also has an OAuth 2.0 client.
You need to get the credentials of a service instance for an application that runs outside of the instance of the
Cloud Foundry environment at SAP Cloud Platform.
Prerequisites
Note
Applications that run inside the instance of the Cloud Foundry environment at SAP Cloud Platform get their
credentials after the respective service has been bound to the application. However, you cannot use
binding for an application that runs outside of the instance of the Cloud Foundry environment at SAP Cloud
Platform.
Instead, you use a service key that is created in the service instance of the remote application. You need to get
the credentials of the service instance for the remote application. The UAA service broker manages all
credentials, including those of the remote applications. The credentials you need are the OAuth 2.0 client ID
and the client secret.
First you generate a service key for the service instance of the remote application to enable access to the UAA
service broker. Then you retrieve the generated service key with the credentials, including the OAuth 2.0 client
ID and the client secret, from the UAA service broker. The remote application stores the service key. The
application can now use this service key with the credentials (OAuth 2.0 client ID and the client secret of the
remote application).
Procedure
1. Create a service key for the service instance of the remote application.
Example
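For instance, with illustrative instance and key names:

```shell
# Create a service key for the remote application's service instance
cf create-service-key myapp-uaa myapp-key
```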
2. Retrieve the credentials, including the OAuth 2.0 client ID and the client secret, for the service instance of
your remote application. Use the following command:
Example
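The retrieval command might look like this (the instance and key names are illustrative):

```shell
# Display the service key, including clientid and clientsecret
cf service-key myapp-uaa myapp-key
```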
Output Code
{
"clientid" : "sb-sample-app!i1",
"verificationkey" : "-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyi7TbPYgsIV5t2WdkabI/
XWoyrTTzIZCj0mxbp+LMdVCrmufbT9bwZ2ZJlHB3x0DKk3Os9g2CFiB/2yCMm/
jJe2CJJ06zhYGRIZwSu6r7Is7R4TEs8bhCQEV25LvEbK2qvZMUjxcjU
+13VkN9NYViF9PRhbMxRX7i9OJeBHn/
JyYFvvBxUnCIfiiLpnMiNB0tDkNf0EcPd3TWvmR8KwQGNnPT5ccpMD5fKQoC/K5wVy
+BWa43Z1d3AAA4QYBPVQX+PcWifulF7xtZVqLPMDE4Q8eWQYaiGkk6A+RO0RCIJ/
byMbvm50SPe8S6+obB/3j0eJ4b7phGFjpZbTv1UQIDAQAB-----END PUBLIC KEY-----",
"xsappname" : "sample-app!i1",
"identityzone" : "uaa",
...
}
Another application wants to use the scope of an existing application. For this purpose, the existing application
needs to grant access to the scope for the application that wants to use this scope.
Let's assume that your application name is sample-leave-request-app. This application provides a
function for creating leave requests (for all employees), for approving leave requests (only for managers), and
for scheduling some jobs. The MobileApprovals application wants to use the scopes of the
sample-leave-request-app. Let's also assume that the exact scope names are not yet defined. In this case, the
xs-security.json file of your application might resemble the following code example.
Note
The xs-security.json file only includes the application's own scopes (starting with $XSAPPNAME.) in the
scopes section. Scopes provided by other applications (like the JobScheduler scope) are only referenced in
the role template.
Sample Code
{
  "xsappname" : "sample-leave-request-app",
  "scopes" : [
    { "name" : "$XSAPPNAME.createLR",
      "description" : "create leave requests" },
    { "name" : "$XSAPPNAME.approveLR",
      "description" : "approve leave requests",
      "granted-apps" :
        [ "$XSAPPNAME(application,mobile-subaccount-id,MobileApprovals)" ]
    } ],
  "attributes" : [
    { "name" : "costcenter",
      "description" : "costcenter",
      "valueType" : "s"
    } ],
  "role-templates": [
    { "name" : "employee",
      "description" : "Role for creating leave requests",
      "scope-references" :
        [ "$XSAPPNAME.createLR", "JobScheduler.scheduleJobs" ],
      "attribute-references": [ "costcenter" ] },
    { "name" : "manager",
      "description" : "Role for creating and approving leave requests",
      "scope-references" :
        [ "$XSAPPNAME.createLR", "$XSAPPNAME.approveLR", "JobScheduler.scheduleJobs" ],
      "attribute-references": [ "costcenter" ] }
  ]
}
The role-template section in the xs-security.json of the MobileApprovals application contains the
reference to the scope granted by the sample-leave-request-app application.
Sample Code
"scope-references" : [ "$XSAPPNAME(application,subaccount-id,sample-leave-request-app).approveLR" ]
After logging on to an application, you want to be redirected exactly to the page of the application in question. It
should therefore not be possible to exploit an open redirect, which might take you to the wrong page or, worse,
to a malicious page.
At runtime, the User Account and Authentication service checks the redirect URI for correctness and rejects
access attempts to incompatible redirect URIs. This is possible because the security descriptor file
xs-security.json contains the correct redirect URI.
Sample Code
"oauth2-configuration": {
  "redirect-uris": ["http://<host_name1>", "http://<host_name2>"]
}
The User Account and Authentication service stores the correct redirect URIs in the OAuth client table. To avoid
this kind of arbitrary redirect after a logon request, the UAA checks the redirect URI and thus makes sure that
users access the correct page.
The REST services of the Authorization and Trust Management service (XSUAA) provide APIs that enable you
to manage shadow users in the Cloud Foundry environment at SAP Cloud Platform.
You need an OAuth 2.0 client to access the API. For more information, see how to enable the apiaccess plan in
the related links.
The APIs of the Authorization and Trust Management service (XSUAA) are described on SAP API Business Hub.
To enable programmatic access to the XS user authentication and authorization (UAA) service in your
subaccount of the Cloud Foundry environment, create an XSUAA service instance under the apiaccess plan.
Prerequisites
You have the role User & Role Administrator for the subaccount where you want to enable API access. This role
ensures that you have the required scopes:
● xs_authorization.read
● xs_authorization.write
● xs_idp.read
● xs_idp.write
● xs_user.read
● xs_user.write
Context
One use case for this scenario is to integrate your own identity management system with SAP Cloud
Platform. The API is a RESTful API that includes access to authorization, user, group, and identity provider
interfaces. The user, group, and identity provider interfaces use the System for Cross-domain Identity
Management (SCIM) protocol.
For more information about the available APIs, see https://api.sap.com/package/authtrustmgmnt on SAP
API Business Hub.
Procedure
For example:
For example:
This command creates an entry for the OAuth client in the database of the cloud controller.
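The commands for the first two steps might look like the following sketch; the instance name xsuaa-api is illustrative:

```shell
# Log in to the Cloud Foundry API endpoint of your subaccount
cf login -a <API_URL> -o <ORG> -s <SPACE>
# Create an xsuaa service instance under the apiaccess plan
cf create-service xsuaa apiaccess xsuaa-api
```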
3. Create a service key.
For example:
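A possible command, continuing the illustrative names above:

```shell
# Create a service key holding the OAuth client credentials
cf create-service-key xsuaa-api xsuaa-api-key
```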
The system writes the credentials for the OAuth client to the service key.
4. Get the credentials for the OAuth client.
For example:
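A possible command, continuing the illustrative names above:

```shell
# Display the service key with apiurl, url, clientid, and clientsecret
cf service-key xsuaa-api xsuaa-api-key
```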
Output Code
With the URL, client ID, and client secret, you can get the access token. With the access token and the
apiurl, you have access to the API.
5. Configure or program your application to get a token from the OAuth client.
In your call to the OAuth client, you send the client ID and client secret, separated by a colon (:) and
base64-encoded, to the URL from the service key.
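As a small illustration of the encoding with made-up placeholder credentials, the Basic authorization value is built like this:

```shell
# Join client ID and secret with a colon and base64-encode the result
# ("clientid" and "clientsecret" are placeholder values, not real credentials)
printf '%s' "clientid:clientsecret" | base64
```

You send the resulting value in an Authorization: Basic header, together with grant_type=client_credentials, to the token endpoint of the URL from the service key. Tools such as curl perform this encoding for you when you pass -u clientid:clientsecret.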
Output Code
{"access_token":"eyJhbGciOiJSUzI1N...",
"token_type":"bearer",
"expires_in":43199,
"scope":"xs_user.write uaa.resource xs_authorization.read
xs_idp.write xs_user.read xs_idp.read xs_authorization.write",
"jti":"be340353ac694b4cb504c6823f938647"}
6. Use the apiurl of the client with the token to access the APIs.
Sample Code
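A request of this kind might, as a sketch, look like the following. The endpoint path below is only an assumption for illustration; see SAP API Business Hub for the actual paths.

```shell
# Call an XSUAA REST endpoint with the bearer token
# (the path is an assumed example, not a documented constant)
curl "<apiurl>/sap/rest/authorization/v2/apps/<appId>/roles" \
  -H "Authorization: Bearer <access_token>"
```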
Output Code
[
{
"roleTemplateName": "Viewer",
"roleTemplateAppId": "myapp!t1111",
"name": "Viewer",
"attributeList": [],
"roleCapabilityIDList": [],
"description": "Default instance",
"scopes": [
{
"description": "Display forecast",
"name": "myapp!t1111.Tourist",
"custom-granted-apps": [],
"granted-apps": [],
"grant-as-authority-to-apps": [],
"custom-grant-as-authority-to-apps": []
}
]
}
]
https://docs.cloudfoundry.org/devguide/services/service-keys.html
SAP Note 2760424 - API Access to xsuaa Configuration Data
https://api.sap.com
Follow the tutorials below to get familiar with Authorization and Trust Management in the Cloud Foundry
environment of SAP Cloud Platform.
SAP Cloud SDK: How to secure your application in the Cloud Foundry environment of SAP Cloud Platform
Tutorial: Securing your application (Cloud Foundry environment) for SAP Cloud SDK
This section provides information on troubleshooting-related activities for Authorization and Trust
Management in the Cloud Foundry environment.
If there are authentication problems in your application, enable logging for the container security library in
question, reproduce the problem, and attach the application logs. To obtain more details, set the environment
variables for the application.
Procedure
Note
The Cloud Foundry command line interface prompts you to choose an org. To find the org of your
subaccount, use the cockpit to go to your subaccount. You find the org in the Cloud Foundry tile under
Organization (see Navigate to Orgs and Spaces [page 1751]).
Example
Example
7. (For Node.js) To set detailed logs of the Security API for Node.js, use the following command:
Example
cf restage <application>
Example
cf restage your-app
Example
10. Reproduce the problem in your application. The events that occur in your application are logged in your
local log file.
Note
You can stop the recording of the log messages using CTRL+C . You can revert the environment
variables using the following command:
Example
cf restage <application>
Example
cf restage your-app
11. .
12. Attach the log files to the incident.
To obtain more details about the token validation, set the following environment variables of the container
security library and Node.js (if applicable):
Values (integer):
○ 0 (off)
○ 1 (default)
○ 2 (medium)
○ 3 (high)
In this section you can find information about the audit log functionality of SAP Cloud Platform.
Related Information
Audit Log Retrieval API Usage for the Cloud Foundry Environment [page 2353]
Audit Log Retention for the Cloud Foundry Environment [page 2356]
The audit log retrieval API allows you to retrieve the audit logs for your SAP Cloud Platform Cloud Foundry
environment account. It provides the audit log results as a collection of JSON entities.
The audit log retrieval API is protected by OAuth. To consume it, a valid OAuth client needs to be generated
and used by the client system.
Retrieval of audit logs via the audit log retrieval API is limited to the size of the audit logs generated for the
account.
Prerequisites
The user executing the procedure is expected to have the Space Developer role for the corresponding
Space.
Note
1. Log in to the Cloud Foundry landscape using the corresponding Cloud Foundry API (Infrastructure/
Landscape Overview):
cf login -a <API_URL> -o <ORG> -s <SPACE> -u <USER>
2. Create a service instance of the service “auditlog-management”:
cf create-service auditlog-management default <SERVICE_INSTANCE>
3. Create a key for the service instance:
cf create-service-key <SERVICE_INSTANCE> <SERVICE_KEY>
4. List the key of the service instance:
cf service-key <SERVICE_INSTANCE> <SERVICE_KEY>
5. Extract values for "uaa.url", "uaa.clientid" and "uaa.clientsecret" of the key of the service
instance for access token creation. Extract the "url" to be used for later request to retrieve audit logs.
1. Request the OAuth access token via the OAuth client credentials flow, by executing an HTTP POST request
with the following parameters:
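Such a POST request could be made with curl as follows; this is a sketch, and the placeholders come from the service key extracted in the previous procedure:

```shell
# Client-credentials token request against the UAA from the service key
curl -X POST "<uaa.url>/oauth/token" \
  -u "<uaa.clientid>:<uaa.clientsecret>" \
  -d "grant_type=client_credentials"
```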
2. Extract the value of the "access_token" attribute from the JSON response.
This token grants access to the audit logs of the subaccount on the landscape where the service instance is
created.
Note
The content of the access token can be displayed using a standard JWT decoder.
The audit log retrieval API supports server-side paging. If a given query produces a result with significant size,
the result is chunked. The response then contains an HTTP header with a handle, which you can use to retrieve
the next chunk of the result. The handle is passed as a URL parameter in the subsequent retrieval request.
● time_from and time_to – if no time filter is specified, the default time frame of the last 30 days is
returned. The time should be provided in the following format: 2018-05-11T10:42:00. Times are in UTC.
<url_from_service_key>/auditlog/v2/auditlogrecords
The response is in JSON format, containing the audit log entries, split on pages if needed.
<url_from_service_key>/auditlog/v2/auditlogrecords?time_from=2018-05-10T10:42:00&time_to=2018-05-11T10:46:00
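Executed with curl, the time-filtered request might look like this sketch:

```shell
# Retrieve audit logs for a given UTC time frame
curl "<url_from_service_key>/auditlog/v2/auditlogrecords?time_from=2018-05-10T10:42:00&time_to=2018-05-11T10:46:00" \
  -H "Authorization: Bearer <access_token>"
```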
Example: Get audit logs next chunk determined by the server-side paging
Execute the following HTTP GET request:
<url_from_service_key>/auditlog/v2/auditlogrecords?handle=2018-06-14T10:11:18.968%3C4f932695-8616-4e1f-ac9a-1cdfb758f01d
<url_from_service_key>/auditlog/v2/auditlogrecords?handle=<handle value>
Response codes:
HTTP Codes
HTTP 200 OK
'audit.security-events'
'audit.configuration'
'audit.data-access'
'audit.data-modification'
Results
Related Information
The audit log data stored for your account will be retained for 30 days, after which it will be deleted.
If you want to retain and use the data for more than 30 days, you can retrieve it via the audit log retrieval API
(see Audit Log Retrieval API Usage for the Cloud Foundry Environment [page 2353]) and store it in another
persistent storage.
The audit log viewer displays the audit logs for your Cloud Foundry account, produced by SAP applications and
services you’ve subscribed to.
● read the audit logs written for your account in a selected time frame,
● User - the person that has executed the event reflected in the written audit log
● Time - creation time for the audit log message
● Message - summary of the audit log message that is written
● Category - audit log message category. The four supported audit log message categories are:
○ audit.security-events
○ audit.configuration
○ audit.data-access
○ audit.data-modification
The appearance of the UI can be modified by selecting the rows to be displayed on a single page, as well as the
columns that you want to be visible.
To use the audit log viewer, you have to subscribe to it using the SAP Cloud Platform cockpit in the
Subscriptions tab of your subaccount. Once you have subscribed, select Go to Application to open the audit
log viewer and log in there.
To retrieve the audit logs for your subaccount using the audit log viewer, you need to have the proper
authorizations. See https://docs.cloudfoundry.org/concepts/roles.html#permissions. Create a
role collection that includes the auditlog-viewer!t*/Auditlog Auditor role and the auditlog-management!b*/
Auditlog Auditor role. Assign it to a user, or create a rule to assign it to users based on the SAML assertion
coming from the IdP.
Note
Only account members with the Security Administrator role are authorized to edit application
authorizations.
The Neo environment of SAP Cloud Platform supports identity federation and single sign-on with external
identity providers. The current section provides an overview of the supported scenarios.
Contents
SAP Cloud Platform applications can delegate authentication and identity management to an existing
corporate IdP that can, for example, authenticate your company's employees. It aims at providing a simple and
flexible solution: your employees (or customers, partners, and so on) can single sign-on with their corporate
user credentials, without a separate user store and subaccount in SAP Cloud Platform. All information required
by SAP Cloud Platform about the employee can be passed securely with the logon process, based on a proven
and standardized security protocol. There is no need to manage additional systems that take care of complex
user account synchronization or provisioning between the corporate network and SAP Cloud Platform. Only
the configuration of already existing components on both sides is needed, which simplifies administration and
lowers total cost of ownership significantly. Even existing applications can be "federation-enabled" without
changing a single line of code.
You can use Identity Authentication as an identity provider for your applications. Identity Authentication is a
cloud solution for identity lifecycle management. Using it, you can benefit from features such as user base,
user provisioning, corporate branding or logo, and social IdP integration. See Identity Authentication.
Identity Authentication provides an easy way for your applications to delegate authentication and identity
management and keep developers focused on the business logic. It allows authentication decisions to be
removed from the application and handled in a central service.
SAP Cloud Platform offers solid integration with Identity Authentication. When you request an Identity
Authentication tenant for your SAP Cloud Platform subaccount, you can automatically use it as a trusted IdP.
SAP ID service is the place where you have to register to get initial access to SAP Cloud Platform. If you are a
new user, you can use the self-service registration option at the SAP website or SAP ID Service. SAP ID
Service manages the users of official SAP sites, including the SAP developer and partner community. If you
already have such a user, then you are already registered with SAP ID Service.
In addition, you can use SAP ID Service as an identity provider for your identity federation scenario, or if you do
not want to use identity federation. Trust to SAP ID Service is pre-configured on SAP Cloud Platform by default,
so you can start using it without further configuration. Optionally, on SAP Cloud Platform you can configure
additional trust settings, such as service provider registration, role assignments to users and groups, and so
on.
● A central user store for all your identities that require access to protected resources of your application(s)
● A standards-based Single Sign-On (SSO) service that enables users to log on only once and get seamless
access to all your applications deployed using SAP Cloud Platform
The following graphic illustrates the identity federation with SAP ID Service scenario.
Roles allow you to control the access to application resources in SAP Cloud Platform, as specified in Java EE. In
SAP Cloud Platform, you can assign groups or individual users to a role. Groups are collections of roles that
allow the definition of business-level functions within your subaccount. They are similar to the actual business
roles existing in an organization.
The following graphic illustrates a sample scenario for role, user and group management in SAP Cloud
Platform. It shows a person, John Doe, with corporate role: sales representative. On SAP Cloud Platform, all
sales representatives belong to group Sales, which has two roles: CRM User and Account Owner. On SAP Cloud
Platform, John Doe inherits all roles of the Sales group, and has an additional role: Administrator.
You can use a user store from an on-premise system for user authentication scenarios. SAP Cloud Platform
supports two types of on-premise user stores:
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, SAP Cloud Platform is configured to use SAP ID service as identity provider (IdP), as specified in
SAML 2.0. You can configure trust to your custom IdP, to provide access to the cloud using your own user
database.
SAP ID Service provides Identity and Access Management for Java EE Web applications hosted on SAP Cloud
Platform through the mechanisms described in Java EE Servlet specification and through dedicated APIs.
Cross-site Scripting (XSS) is one of the most common types of malicious attacks on Web applications. To help
protect against this type of attack, SAP Cloud Platform provides a common output encoding library to be used
by applications.
Cross-Site Request Forgery (CSRF) is another common type of attack to Web applications. You can protect
applications running on SAP Cloud Platform from CSRF, based on the Tomcat Prevention Filter.
This section describes how you can implement security in your applications.
SAP Cloud Platform provides the following APIs for user management and authentication:
Package Description
com.sap.security.um The user management API can be used to create and delete
users or update user information.
com.sap.security.um.user
com.sap.security.um.service
Authentication API
Related Information
6.2.1.1.1.1 Authentication
In the Neo environment, enable user authentication for access to your applications.
Prerequisites
● You have installed the SAP Cloud Platform Tools for Java. See Setting Up the Development Environment
[page 1402].
● You have created a simple HelloWorld application. See Creating a Hello World Application [page 1416].
● If you want to use Java EE 6 Web Profile features in your application, you have downloaded the SAP Cloud
Platform SDK for Java EE 6 Web Profile. See Using Java EE Web Profile Runtimes [page 1445]
Context
Note
Context
The Java EE servlet specification allows the security mechanisms for an application to be declared in the
web.xml deployment descriptor.
FORM
Credentials: Trusted SAML 2.0 identity provider; Application-to-Application SSO
Description: FORM authentication implemented over the Security Assertion Markup Language (SAML) 2.0
protocol. Authentication is delegated to SAP ID service or a custom identity provider. You can specify the
custom identity provider using the trust configuration for your subaccount. See Application Identity Provider
[page 2407].
Use case: You want to delegate authentication to your corporate identity provider.

BASIC
Credentials: User name and password
Description: HTTP BASIC authentication delegated to SAP ID service or an on-premise SAP NetWeaver AS
Java system. Web browsers prompt users to enter a user name and password. By default, SAP ID service is
used. (Optional) If you configure a connection with an on-premise user store, the authentication is delegated
to an on-premise SAP NetWeaver AS Java system. See Using an SAP System as an On-Premise User Store
[page 2422].
Use case: Example 1: You want to delegate authentication to SAP ID service. Users will log in with their SCN
user name and password. Example 2: You have an on-premise SAP NetWeaver AS Java system used as a user
store. You want users to log in using the user name and password stored in AS Java.
Note
If you want to use your Identity Authentication tenant for BASIC authentication (instead of SAP ID
service/SAP NetWeaver), create a customer ticket in component BC-NEO-SEC-IAM. In the ticket, specify the
technical name of the subaccount, the region, and the Identity Authentication tenant you want to use.
Restriction
BASIC authentication with a third-party corporate identity provider is not supported.
Restriction
The trust configuration ( cloud cockpit Security Trust Application Identity Provider ) you set for
your subaccount does not apply to BASIC authentication.

CERT
Credentials: Client certificate
Description: Used for authentication only with client certificate. See Enabling Client Certificate
Authentication [page 2478].
Use case: Users log in using their corporate client certificates.

BASICCERT
Credentials: User name and password; client certificate
Description: Used for authentication either with client certificate or with user name and password. See
Enabling Client Certificate Authentication [page 2478].
Use case: Within the corporate network, users log in using their client certificates. Outside that network,
users log in using user name and password.

OAUTH
Credentials: OAuth 2.0 token; Application-to-Application SSO
Description: Authentication according to the OAuth 2.0 protocol with an OAuth access token. See OAuth 2.0
Authorization Code Grant [page 2438].
Use case: You have a mobile application consuming REST APIs using the OAuth 2.0 protocol. Users log in
using an OAuth access token.
If you need to configure the default options of an authentication method, or define new methods, see
Authentication Configuration [page 2427].
Tip
Note
By default, any other method (DIGEST, CLIENT-CERT, and so on, or custom) that you specify in the web.xml
is executed as FORM. You can configure those methods using the Authentication Configuration section at
Java application level in the cockpit. See Authentication Configuration [page 2427].
Tip
For the SAML and FORM authentication methods, if your application sends multiple simultaneous requests
without an authenticated session, they may fail. We recommend that you first send one request to a
protected resource, establish a session, and then use the session for the multiple simultaneous requests.
Tip
Although BASIC authentication is usually used for technical users to consume REST services (stateless
communication), we recommend that the client leverage the security session instead of sending
credentials with every call. This means the client needs to make sure it preserves and re-sends all HTTP
cookies it receives. Thus, authentication happens only once, which can improve performance.
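As a sketch of this recommendation with curl (the URL, user, and password are placeholders), the client stores and re-sends the session cookies:

```shell
# First call: authenticate with BASIC and store the session cookies
curl -u <user>:<password> -c cookies.txt "https://<app_host>/protected"
# Later calls: reuse the stored session instead of resending credentials
curl -b cookies.txt "https://<app_host>/protected"
```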
● When FORM authentication is used, you are redirected to SAP ID service or another identity provider,
where you are authenticated with your user name and password. The servlet content is then displayed.
● When BASIC authentication is used, you see a popup window and are prompted to enter your credentials.
The servlet content is then displayed.
Example
The following example illustrates using FORM authentication. It requires all users to authenticate before
accessing the protected resource. It does not, however, manage authorizations according to the user roles - it
authorizes all authenticated users.
<login-config>
<auth-method>FORM</auth-method>
</login-config>
<security-constraint>
<web-resource-collection>
<web-resource-name>Protected Area</web-resource-name>
<url-pattern>/index.jsp</url-pattern>
<url-pattern>/a2asso.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<!-- Role Everyone will not be assignable -->
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-role>
<description>All SAP Cloud Platform users</description>
<role-name>Everyone</role-name>
</security-role>
Note
All authenticated users implicitly have the Everyone role. You cannot remove or edit this role. In the SAP
Cloud Platform cockpit, the Everyone role is not listed in role mapping (see Managing Roles [page 2397]).
If you want to manage authorizations according to user roles, you should define the corresponding constraints
in the web.xml. The following example defines a resource available for users with role Developer, and another
resource for users with role Manager:
<security-constraint>
<web-resource-collection>
<web-resource-name>Developer Page</web-resource-name>
<url-pattern>/developer.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Developer</role-name>
</auth-constraint>
</security-constraint>
<security-constraint>
<web-resource-collection>
<!-- resource name and URL pattern shown here are illustrative -->
<web-resource-name>Manager Page</web-resource-name>
<url-pattern>/manager.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Manager</role-name>
</auth-constraint>
</security-constraint>
Remember
If you define roles in the web.xml, you need to manage the role assignments of users after you deploy your
application on SAP Cloud Platform. See Managing Roles [page 2397]
Context
With programmatic authentication, you do not need to declare constrained resources in the web.xml file of your
application. Instead, you declare the resources as public, and you decide in the application logic when to trigger
authentication. In this case, you have to invoke the authentication API explicitly before executing any
application code that should be protected. You also need to check whether the user is already authenticated,
and should not trigger authentication if the user is logged on, except for certain scenarios where explicit re-
authentication is required.
If you trigger authentication in an SAP Cloud Platform application protected with FORM, the user is redirected
to SAP ID service or custom identity provider for authentication, and is then returned to the original application
that triggered authentication.
If you trigger authentication in an SAP Cloud Platform application protected with BASIC, the Web browser
displays a popup window to the user, prompting him or her to provide a user name and password.
package hello;
import java.io.IOException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.security.auth.login.LoginContextFactory;
public class HelloWorldServlet extends HttpServlet {
...
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
String user = request.getRemoteUser();
if (user != null) {
response.getWriter().println("Hello, " + user);
} else {
LoginContext loginContext;
try {
loginContext = LoginContextFactory.createLoginContext("FORM");
loginContext.login();
} catch (LoginException e) {
throw new ServletException(e);
}
}
}
}
In the example above, you create LoginContext and call its login() method.
Note
All the steps below are described using the FORM authentication method, but they can also be applied to
BASIC.
Procedure
1. Open the source code of your HelloWorldServlet class. Add the code for programmatic authentication to
the doGet() method.
2. Make the doPost() method invoke programmatic authentication. This is necessary because the SAP ID
service always returns the SAML2 response over an HTTP POST binding, and in order to be processed
correctly, the LoginContext login must be called during the doPost() method. The authentication
framework is responsible for restoring the original request using GET after successful authentication.
Another alternative is that your doPost() method simply calls your doGet() method.
3. Test your application on the local server. It does not need to be connected to the SAP ID service, and
authentication is done against local users. For more information, see Testing User Authentication on the
Local Server.
4. Deploy the application to SAP Cloud Platform. If you are using FORM, you are redirected to SAP ID service
or another identity provider, depending on your trust configuration for this subaccount. If you are using
BASIC, you are redirected to SAP ID service (not configurable using trust settings). The servlet content is
then displayed and you should be able to see the content returned by the hello servlet.
When BASIC authentication is used, you should see a popup window prompting you to provide credentials
to authenticate. Once these are entered successfully, the servlet content is displayed.
You can configure session timeout using the web.xml. Default value: 20 minutes. For example:
<session-config>
<session-timeout>15</session-timeout> <!-- in minutes -->
</session-config>
jQuery(document).ajaxComplete(function(e, jqXHR){
if(jqXHR.getResponseHeader("com.sap.cloud.security.login")){
alert("Session is expired, page shall be reloaded.");
window.location.reload();
}
});
Note
For requests made with the X-Requested-With header and value XMLHttpRequest (AJAX requests), the system does not trigger an authentication request when the session is expired and you are using the SAML2 or FORM authentication method. You therefore need to check for session expiration yourself by checking for the marker header com.sap.cloud.security.login.
6.2.1.1.1.1.4 Troubleshooting
When testing in the local scenario, if your application has Web-ContextPath: /, you might experience the following problem with Microsoft Internet Explorer:
Output Code
HTTP Status 405 - HTTP method POST is not supported by this URL
If you see such issues, add the following code to your protected resource:
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException { doGet(req, resp); }
Next Steps
You can now test the application locally. See Test Security Locally [page 2386].
After testing, you can proceed with deploying the application to SAP Cloud Platform. See Deploying and
Updating Applications [page 1453].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have
for this application. See Managing Roles [page 2397].
Example
To see the end-to-end scenario of managing roles on SAP Cloud Platform, watch the complete video tutorial
Managing Roles in SAP Cloud Platform .
6.2.1.1.1.2 Authorizations
if(!request.isUserInRole("Developer")){
response.sendError(403, "Logged in user does not have role Developer");
return;
} else {
out.println("Hello developer");
}
}
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2386].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1453].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have
for this application. For more information, see Managing Roles [page 2397].
The Authorization Management API allows you to manage user roles and groups, and their assignments in your
applications.
The Authorization Management API is protected with OAuth 2.0 client credentials. Create an OAuth client and
obtain an access token to call the API methods. See Using Platform APIs [page 1737].
We strongly recommend that you use this API only for administration, not for runtime checks of
authorizations. For the runtime checks, we recommend using
HttpServletRequest.isUserInRole(java.lang.String role). See Authorizations [page 2372].
Note
HTML5 applications use a more feature-rich authorization model, which allows you to assign permissions to various URI paths. Those permissions are then mapped to SAP Cloud Platform custom roles. Since all HTML5 applications run via a central app called dispatcher in the services account, all of them share the same custom roles and mappings. This is why, when you manage roles of HTML5 applications, you need to use dispatcher for appName and services for the providerAccount
name in the API calls.
{
"roles": [
{
"name": "Developer",
"type": "PREDEFINED",
"applicationRole": true,
"shared": true
},
{
"name": "Administrator",
"type": "PREDEFINED",
Related Information
Overview
The Platform Authorization Management API is implemented over the System for Cross-domain Identity
Management (SCIM) protocol. The HTTP requests that you send to this API need to be SCIM-compliant.
The Platform Authorization Management API is protected with OAuth 2.0 client credentials. Create an OAuth
client and obtain an access token to call the API methods. See Using Platform APIs [page 1737]. The required
scopes for the token are: readAccountMembers and manageAccountMembers.
API URL:
For the SAP Cloud Platform host, see Regions and Hosts Available for the Neo Environment [page 14].
ServiceProviderConfig Endpoint:
As defined by the SCIM specification, you can access this endpoint (using HTTP GET) to retrieve the API configuration and its supported features. This endpoint is unprotected.
Filtering Users
You can do two types of filtering: based on the user ID or the user base. For more information about changing
and managing the user base, see Platform Identity Provider [page 2431].
Restriction
Only the eq operator is supported for filtering. For more information, see Section 3.4.2.2: Filtering
or
Note
The above two URLs are case insensitive. For more information, see Section 7.8: Case-Insensitive
Comparison and International Languages of the SCIM protocol specification.
The Platform Authorization Management API returns two types of user roles: Predefined and Custom. For
more information about roles, see Managing Roles [page 2397].
If a user comes from a custom user base (that is, your custom Identity Authentication tenant, not SAP ID
service), the Platform Authorization Management API returns the user ID with a suffix _<your Identity
Authentication tenant>.
A prerequisite is having a valid OAuth access token with the required scopes. See Using Platform APIs [page 1737].
As with the previous example, you can use an HTTP destination object for convenience in managing your connections. For testing purposes, you can access the API directly using an HTTP GET request. Let's try to get all users with the name P1234567. The HTTP URL then looks like this:
A response returning the list of users with the same name available in different user bases could be:
{
"Resources": [
{
"id": "P1234567",
"meta": {
"created": "2019-02-12T13:54:14.604Z",
"lastModified": "2019-02-12T13:54:14.604Z",
"location": "https://api.hana.ondemand.com/authorization/v1/platform/accounts/<subaccount>/Users/P1234567"
},
Related Information
Platform APIs of SAP Cloud Platform are protected with OAuth 2.0 client credentials. Create an OAuth client
and obtain an access token to call the platform API methods.
Context
For a description of the OAuth 2.0 client credentials grant, see the OAuth 2.0 client credentials grant specification.
For a detailed description of the available methods, see the respective API documentation.
Tip
Do not get a new OAuth access token for each and every platform API call. Re-use the same existing access
token throughout its validity period instead, until you get a response indicating the access token needs to
be re-issued.
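The tip above can be sketched as a small cache that re-uses the token throughout its validity period. This is an illustrative sketch, not an SAP API: the class and method names are assumptions, and the actual OAuth call is injected as a supplier.

```java
import java.time.Instant;
import java.util.function.Supplier;

// Minimal token-reuse sketch: fetch a new token only when the cached
// one is close to expiry, instead of once per API call.
public class TokenCache {
    private final Supplier<String> fetcher;   // performs the actual OAuth call
    private final long validitySeconds;       // e.g. 1500, from expires_in
    private String token;
    private Instant fetchedAt;

    public TokenCache(Supplier<String> fetcher, long validitySeconds) {
        this.fetcher = fetcher;
        this.validitySeconds = validitySeconds;
    }

    public synchronized String getToken() {
        Instant now = Instant.now();
        // Refresh 60 seconds before the token actually expires, to be safe.
        if (token == null || now.isAfter(fetchedAt.plusSeconds(validitySeconds - 60))) {
            token = fetcher.get();
            fetchedAt = now;
        }
        return token;
    }
}
```

Within the validity window, repeated getToken() calls return the same cached value without calling the endpoint again.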
Context
The OAuth client is identified by a client ID and protected with a client secret. In a later step, those are used to
obtain the OAuth API access token from the OAuth access token endpoint.
Procedure
1. In your Web browser, open the Cockpit. See SAP Cloud Platform Cockpit [page 1006].
Caution
Make sure you save the generated client credentials. Once you close the confirmation dialog, you
cannot retrieve the generated client credentials from SAP Cloud Platform.
Context
To obtain an access token, send a request to the OAuth access token endpoint and use the client ID and client secret as user and password for HTTP Basic Authentication. You will receive the access token as a response.
By default, the access token received in this way is valid for 1500 seconds (25 minutes). You cannot configure its validity length.
If you want to revoke the access token before its validity ends, delete the respective OAuth client. The access
token remains valid up to 2 minutes after the client is deleted.
Procedure
1. Send a POST request to the OAuth access token endpoint. The URL is landscape specific, and looks like
this:
The parameter grant_type=client_credentials notifies the endpoint that the Client Credentials flow is used.
2. Get and save the access token from the received response from the endpoint.
The response is a JSON object whose access_token parameter contains the access token. The token is valid for the time (in seconds) specified in the expires_in parameter (default: 1500 seconds).
Example
Retrieving an access token on the trial landscape will look like this:
POST https://api.hanatrial.ondemand.com/oauth2/apitoken/v1?grant_type=client_credentials
Headers:
Authorization: Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ
{
"access_token": "51ddd94b15ec85b4d54315b5546abf93",
"token_type": "Bearer",
"expires_in": 1500,
"scope": "hcp.manageAuthorizationSettings hcp.readAuthorizationSettings"
}
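The request and response above can be illustrated with plain JDK code. The helper below builds the Basic Authorization header from the (placeholder) client credentials and extracts the access_token value from the JSON response; a real application should use a proper JSON parser rather than this naive regular expression.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenRequestHelper {
    // Build the HTTP Basic Authorization header value from client credentials.
    static String basicAuthHeader(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder().withoutPadding()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Naive extraction of the access_token field from the JSON response.
    static String extractAccessToken(String json) {
        Matcher m = Pattern.compile("\"access_token\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }
}
```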
1. At the required (application, subaccount or global account) level, create an HTTP destination with the
following information (the name can be different):
○ Name=<yourdestination name>
○ URL=https://api.<SAP Cloud Platform host>/oauth2/apitoken/v1?grant_type=client_credentials
○ ProxyType=Internet
○ Type=HTTP
○ CloudConnectorVersion=2
○ Authentication=BasicAuthentication
○ User=<clientID>
○ Password=<clientSecret>
See Create HTTP Destinations [page 206].
2. In your application, obtain an HttpURLConnection object that uses the destination.
See ConnectivityConfiguration API [page 255].
3. With the object retrieved from the previous step, execute a POST call.
urlConnection.setRequestMethod("POST");
urlConnection.setRequestProperty("Authorization", "Basic <Base64 encoded
representation of {clientId}:{clientSecret}>");
urlConnection.connect();
Procedure
In the requests to the required platform API, include the access token as a header with name Authorization and
value Bearer <token value>.
Example
GET https://api.hanatrial.ondemand.com/authorization/v1/accounts/p1234567trial/users/roles/?userId=myUser
Headers:
Authorization: Bearer 51ddd94b15ec85b4d54315b5546abf93
Related Information
You can access user attributes using the User Management Java API (com.sap.security.um.user). It can
be used to get and create users or to read and update their information.
To get UserProvider, first, declare a resource reference in the web.xml. For example:
<resource-ref>
<res-ref-name>user/Provider</res-ref-name>
<res-type>com.sap.security.um.user.UserProvider</res-type>
</resource-ref>
Then look up UserProvider via JNDI in the source code of your application. For example:
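A minimal sketch of such a lookup, using the standard java:comp/env prefix together with the res-ref-name declared in the web.xml above:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.sap.security.um.user.UserProvider;
...
InitialContext ctx = new InitialContext();
// "user/Provider" is the res-ref-name from the web.xml
UserProvider userProvider = (UserProvider) ctx.lookup("java:comp/env/user/Provider");
```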
If you are using the SDK for Java EE 6 Web Profile, you can look up UserProvider via annotation (instead
of embedding JNDI lookup in the code). For example:
@Resource
private UserProvider userProvider;
try {
// Read the currently logged in user from the user storage
return userProvider.getUser(request.getRemoteUser());
} catch (PersistenceException e) {
throw new ServletException(e);
}
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
import com.sap.security.um.service.UserManagementAccessor;
...
// Check for a logged in user
if (request.getUserPrincipal() != null) {
try {
// UserProvider provides access to the user storage
UserProvider users = UserManagementAccessor.getUserProvider();
// Read the currently logged in user from the user storage
User user = users.getUser(request.getUserPrincipal().getName());
// Print the user name and email
response.getWriter().println("User name: " + user.getAttribute("firstname")
+ " " + user.getAttribute("lastname"));
response.getWriter().println("Email: " + user.getAttribute("email"));
} catch (Exception e) {
// Handle errors
}
}
In the source code above, the user.getAttribute method is used for single-value attributes (the first name and last name of the user). For attributes that are expected to have more than one value (such as the assigned groups), use the user.getAttributeValues method.
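For example (a sketch assuming the multi-value attribute is named groups and that getAttributeValues returns an iterable collection of strings):

```java
// Print all values of a multi-value attribute, such as the assigned groups
for (String group : user.getAttributeValues("groups")) {
    response.getWriter().println("Group: " + group);
}
```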
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2386].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1453].
Context
You can provide a logout operation for your application by adding a logout button or logout link.
When logout is triggered in an SAP Cloud Platform application, the user is redirected to the identity provider to be logged out there, and is then returned to the original application URL that triggered the logout request.
The following code provides a sample servlet that handles logout operations. When loginContext.logout()
is used, the system automatically redirects the logout request to the identity provider, and then returns the
user to the logout servlet again.
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import com.sap.security.auth.login.LoginContextFactory;
...
public class LogoutServlet extends HttpServlet {
. . .
//Call logout if the user is logged in
LoginContext loginContext = null;
if (request.getRemoteUser() != null) {
try {
loginContext = LoginContextFactory.createLoginContext();
loginContext.logout();
} catch (LoginException e) {
// Servlet container handles the login exception
// It throws it to the application for its information
response.getWriter().println("Logout failed. Reason: " + e.getMessage());
}
} else {
response.getWriter().println("You have successfully logged out.");
}
. . .
}
We add a logout link to the HelloWorld servlet, which references this logout servlet:
response.getWriter().println("<a href=\"LogoutServlet\">Logout</a>");
CSRF is a common Web hacking attack. For more information, see Cross-Site Request Forgery (CSRF) (non-
SAP link). You might consider protecting the logout operations for your applications from CSRF to prevent your
users from potential CSRF attack related problems (for example, XSRF denial of service on single logout).
Note
Although SAP Cloud Platform provides ready-to-use support for CSRF filtering, with logout operations you
cannot use it. The reason is that users are sent to the logout servlet twice: first, when they trigger logout from the application, and second, when the identity provider returns them to the logout servlet after single logout.
Source Code
The following sample generates a CSRF token and adds a protected logout link to the HelloWorld servlet:
try {
HttpSession session = request.getSession(false);
if(session != null){
long tokenValue = 0L;
if(session.getAttribute("csrf-logout") != null){
tokenValue = (Long) session.getAttribute("csrf-logout");
} else {
SecureRandom instance =
java.security.SecureRandom.getInstance("SHA1PRNG");
instance.setSeed(instance.generateSeed(5));
tokenValue = instance.nextLong();
session.setAttribute("csrf-logout", tokenValue);
}
response.getWriter().println("<a href=\"LogoutServlet?csrf-logout="+tokenValue+"\">Logout</a>");
}
} catch (NoSuchAlgorithmException e) {
throw new ServletException(e);
}
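The sample above only adds the token to the link; the logout servlet itself must also compare the token from the request with the one stored in the session before logging the user out. A minimal sketch, with names following the generation sample above:

```java
// Reject the logout request unless the csrf-logout token in the URL
// matches the token stored in the session.
HttpSession session = request.getSession(false);
String requestToken = request.getParameter("csrf-logout");
if (session == null || session.getAttribute("csrf-logout") == null
        || !String.valueOf(session.getAttribute("csrf-logout")).equals(requestToken)) {
    response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid CSRF token");
    return;
}
// The token is valid: proceed with loginContext.logout() as shown earlier.
```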
For logout to work correctly, the servlet handling logout must not be protected in the web.xml. Otherwise, requesting logout will result in a login request. The following example illustrates how to successfully unprotect a logout servlet. The additional <security-constraint>...</security-constraint> section explicitly enables access to the logout servlet.
<security-constraint>
<web-resource-collection>
<web-resource-name>Start Page</web-resource-name>
<url-pattern>/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-constraint>
<web-resource-collection>
<web-resource-name>Logout</web-resource-name>
<url-pattern>/LogoutServlet</url-pattern>
</web-resource-collection>
</security-constraint>
Avoid mapping a servlet to resources using a wildcard (<url-pattern>/*</url-pattern>) in the web.xml. This may lead to an infinite loop. Instead, map the servlet to particular resources, as in the following example:
<servlet>
<servlet-name>Logout Servlet</servlet-name>
<servlet-class>test.LogoutServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Logout Servlet</servlet-name>
<url-pattern>/LogoutServlet</url-pattern>
</servlet-mapping>
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2386].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1453].
This section describes the error messages you may encounter when using BASIC authentication with SAP ID
Service as an identity provider.
For more information about using BASIC authentication, see Authentication [page 2364].
Error message: Your account is temporarily locked. It will be automatically unlocked in 60 minutes.
Explanation: SAP ID service has registered five unsuccessful login attempts for this account in a short time. For security reasons, your account is disabled for 60 minutes.

Error message: Password authentication is disabled for your account. Log in with a certificate.
Explanation: The owner of this account has disabled password authentication using their user profile settings in SAP ID service.

Error message: Inactive account. Activate it via your account creation confirmation email.
Explanation: This is a new account and you haven't activated it yet. You will receive an e-mail confirming your account creation, and containing an account activation link.

Error message: Login failed. Contact your administrator.
Explanation: You cannot log in for a reason different from all others listed here.
This section describes how you can test the security you have implemented in your Java applications.
First, you need to test your application on your local runtime. If you use the Eclipse Tools, you can easily test
with local users. This is useful if you are implementing role-based identity management in your application.
Then, if everything goes well on the local runtime, you can deploy your application on SAP Cloud Platform and test how the application works in the cloud with your local SAML 2.0 identity provider. This is useful if you are implementing SAML 2.0 identity federation.
Related Information
When you add user authentication to your application, you can test it first on the local server before uploading
it to SAP Cloud Platform.
Note
On the local server, authentication is handled locally, that is, not by the SAP ID service. When you try to
access a protected resource on the local server, you will see a local login page (not SAP ID service's or
another identity provider's login page). User access is then either granted or denied based on a local JSON
(JavaScript Object Notation) file (<local_server_dir>/config_master/com.sap.security.um.provider.neo.local/neousers.json), which defines the local set of user accounts, along with their attributes and roles.
Using SAP Cloud Platform Tools (Eclipse Tools), you can easily manage local users. You can use the visual
editor for configuring the users, or edit the JSON file directly.
User attributes provide additional information about a user account. Applications can use attributes to distinguish between users or to customize behavior for individual users. To add a new attribute, proceed as follows:
Roles are used by applications to define access rights. By default, each user is assigned the User.Everyone role.
It is read-only, which means you cannot remove it. To add a new role, proceed as follows:
1. From the list of JSON files, select the user you want to export.
Tip
The default name of the exported file is localusers.json. You can rename it to something more
meaningful to you.
If you prefer using the console client instead of the Eclipse IDE, you have to manually find and edit the JSON file that configures local test users. It is located at <local_server_dir>/config_master/com.sap.security.um.provider.neo.local/neousers.json.
The following example shows a sample configuration of a JSON file with two users, along with their attributes
and roles:
{
"Users": [
{
"UID": "P000001",
"Password": "{SSHA}OA5IKcTJplwLLaXCjmbcV+d3LQVKey+bEXU\u003d",
"Roles": [
"Employee",
"Manager"
],
"Attributes": [
{
"attributeName": "firstname",
"attributeValue": "John"
},
{
"attributeName": "lastname",
"attributeValue": "Doe"
},
{
"attributeName": "email",
"attributeValue": "[email protected]"
}
]
},
{
"UID": "P000002",
"Password": "{SSHA}OA5IKcTJplwLLaXCjmbcV+d3LQVKey+bEXU\u003d",
"Roles": [
"SomeRole"
],
"Attributes": [
{
"attributeName": "firstname",
"attributeValue": "Boris"
},
{
"attributeName": "lastname",
"attributeValue": "Boykov"
},
{
"attributeName": "email",
"attributeValue": "[email protected]"
}
]
}
]
}
When stopping your local server, you might see the following error logs:
#ERROR#org.apache.catalina.core.ContainerBase##anonymous#System Bundle
Shutdown###ContainerBase.removeChild: stop:
org.apache.catalina.LifecycleException: Failed to stop component
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/idelogin]]
This error is harmless, and you don't need to take any action.
Next Steps
● After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information,
see Deploying and Updating Applications [page 1453].
● After deploying on the cloud, you may need to perform configuration steps using the cockpit. For more
information, see Security Configuration [page 2397].
You can use a local test identity provider (IdP) to test single sign-on (SSO) and identity federation of an SAP Cloud Platform application end-to-end.
This scenario offers simplified testing in which developers establish trust between an application deployed in the cloud and an easy-to-use local test identity provider.
For more information about the identity provider concept in SAP Cloud Platform, see Application Identity
Provider [page 2407].
● You have set up and configured the Eclipse IDE for Java EE Developers and SAP Cloud Platform Tools for
Java. For more information, see Setting Up the Tools and SDK [page 1402].
● You have developed and deployed your application on SAP Cloud Platform. For more information, see
Creating an SAP Cloud Platform Application [page 1445].
Procedure
The usage of the local test identity provider involves the following steps:
1. In a Web browser, open the cockpit and navigate to Security Trust Local Service Provider .
2. Choose Edit.
3. For Configuration Type, choose Custom.
4. Choose Generate Key Pair to generate a new signing key and self-signed certificate.
5. For the rest of the fields, leave the default values.
6. Choose Save.
7. Choose Get Metadata to download and save the SAML 2.0 metadata identifying your SAP Cloud Platform
account as a service provider. You will have to import this metadata into the local test IdP to configure trust
to SAP Cloud Platform in the procedure that follows.
You need to configure your local IdP name if you want to use more than one local IdP. Default local IdP name:
localidp.
The trust settings on SAP Cloud Platform for the local test IdP are configured in the same way as with any other
productive IdP.
1. During the configuration, use the local test IdP metadata that can be requested under the following link:
http://<idp_host>:<idp_port>/saml2/localidp/metadata,
where <idp_host> and <idp_port> are the local server host and port.
To find the <idp_port>, go to Servers, double click on the local server and choose Overview Ports
Configuration .
Assertion-based attributes are used to define a mapping between attributes in the SAML assertion issued by
the local test IdP and user attributes on the Cloud.
This allows you to essentially pass any attribute exposed by the local test IdP to an attribute used in your
application in the cloud.
Define user attributes in the local test IdP by using the Eclipse IDE Users editor for SAP Cloud Platform as is
described in Setting up the local test IdP.
1. Open the cockpit in a Web browser, navigate to Security Trust Application Identity Provider .
2. From the table, choose the entry localidp, open the Attributes tab page, and click on Add Assertion-Based
Attribute.
3. In Assertion Attribute, enter the name of the attribute contained in the SAML 2.0 assertion issued by the
local test IdP. These are the same user attributes you defined in the Eclipse IDE Users editor when setting
the local test IdP.
5. Generate a self-signed key pair and certificate for the local test IdP (optional)
If an error occurs while requesting the IdP metadata and the metadata cannot be generated, you can do the
following:
1. Generate a localidp.jks keyfile manually. The key and certificate are needed for signing the information that
the local test IdP will exchange with SAP Cloud Platform.
2. Go to the <JAVA_HOME>/jre/bin directory, which contains the keytool utility.
3. Open a command line and execute the following command:
where <fullpath_dir_name> is the directory path where the jks will be saved after the creation.
4. Under the Server directory, go to config_master\com.sap.core.jpaas.security.saml2.cfg and
create a directory with name localidp.
5. Copy the localidp.jks file under localidp directory.
1. In the Eclipse IDE, go to the already set up local test IdP Server.
2. Copy the file with the metadata describing SAP Cloud Platform as a service provider under the local server
directory config_master/com.sap.core.jpaas.security.saml2.cfg/localidp. To get this
metadata, in the cockpit, choose Security Trust Local Service Provider Get Metadata .
You can now access your application, deployed on the cloud, and test it against the local test IdP and its defined
users and attributes.
When you have implemented security in your application, you need to perform a few configuration tasks using
the Cockpit to enable the scenario to work successfully on SAP Cloud Platform.
Related Information
In SAP Cloud Platform, you can use Java EE roles to define access to the application resources.
Context
Terms
Term Description
Role Roles allow you to diversify user access to application resources (role-based authorizations).
Note
Role names are case sensitive.
Predefined roles Predefined roles are ones defined in the web.xml of an application.
After you deploy the application to SAP Cloud Platform, the role becomes visible in the Cockpit, and
you can assign groups or individual users to that role. If you undeploy your application, these roles
are removed.
● Shared - they are shared by default. A shared role is visible and accessible within all accounts
subscribed to this application.
● Restricted - an application administrator could restrict a shared role. A restricted role is visible
and accessible only within the subaccount that deployed the application, and not to accounts
subscribed to the application.
Note
If you restrict a shared role, you hide it from visibility for new assignments from subscribed accounts, but all existing assignments will continue to take effect.
Custom roles Custom roles are ones defined using the Cockpit. Custom roles are interpreted in the same way as
predefined roles at SAP Cloud Platform: they differ only in the way they are created, and in their
scope.
You can add custom roles to an application to configure additional access permissions to it without
modifying the application's source code.
Custom roles are visible and accessible only within the subaccount where they are created. That’s
why different accounts subscribed to the same application could have different custom roles.
User Users are principals managed by identity providers (SAP ID service or others).
Note
SAP Cloud Platform does not have a user database of its own. It maps the users authorized by identity providers to groups, and groups to roles.
Note
When a user logs in, the user's roles are stored in the current browser session. They are not updated dynamically and are removed only when the session is terminated or invalidated. This means that if you change the set of roles for a currently logged-in user, the change takes effect only after logout or session invalidation.
Group Groups are collections of roles that allow the definition of business-level functions within your subaccount. They are similar to the actual business roles existing in an organization, such as "manager", "employee", "external", and so on. They help you achieve better alignment between technical Java EE roles and organizational roles.
Note
Group names are case insensitive.
For each identity provider (IdP) for your subaccount, you define a set of rules specifying the groups
a user for this IdP belongs to.
Context
This can be done in two ways: using predefined roles in the web.xml at development time, or using custom roles
in the UI.
Tip
If you need to do mass role or group assignment to a very large number of users simultaneously, we recommend using the Authorization Management API instead of the cockpit UI. See Using Platform APIs [page 1737].
Procedure
● Predefined Roles
Context
Groups allow you to easily manage the role assignments to collections of users instead of individual users.
Procedure
Context
You can assign individual users to the roles or, more conveniently, assign groups for collective role
management.
You can do it in either of the two ways: using the Security Roles section for the application, or using the
Security Authorizations section for the subaccount.
Procedure
Tip
Context
For each different IdP, you then define a set of rules specifying to which groups a user logged by this IdP
belongs.
Note
You must have defined groups in advance before you define default or assertion-based groups for this IdP.
Default groups are the groups all users logged by this IdP will have. For example, all users logged by the
company IdP can belong to the group "Internal".
Assertion-based groups are groups determined by values of attributes in the SAML 2.0 assertion. For example,
if the assertion contains the attribute "contract=temporary", you may want all such users to be added to the
group "TEMPORARY".
Procedure
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Default Group.
b. From the dropdown list that appears, choose the required group.
● Defining Assertion-Based Groups
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Assertion-Based
Group. A new row appears and a new mapping rule is now being created.
b. Enter the name of the group to which users will be mapped. Then define the rule for this mapping.
c. In the first field of the Mapping Rules section, enter the SAML 2.0 assertion attribute name to be used
as the mapping source. In other words, the value of this attribute will be compared with the value you
specify (in the last field of Mapping Rules).
d. Choose the comparison operator.
Equals: Choose Equals if you want the value of the SAML 2.0 assertion attribute to match exactly the string you specify.
Regular expression: Choose Regular expression if you need more sophisticated relations, such as "starts with" or "contains". For example, .*@sap.com$ matches values ending in @sap.com, and ^(admin).* matches values starting with admin.
e. In the last field of Mapping Rules, enter the value with which you compare the specified SAML 2.0
assertion attribute.
f. You can specify more than one mapping rule for a specific group. Use the plus button to add as many
rules as required.
Note
Adding a new subrule binds it to the rest of the subrules using a logical AND operator.
In the image below, all users logged by this IdP are added to the group Government. The users that have an attribute corresponding to their department name are also assigned to the respective department groups.
When you open the Groups tab page of the Authorizations section, you can see the identity provider
mappings for this group.
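The regular-expression matching used by assertion-based group rules can be illustrated with plain JDK code (the attribute values below are hypothetical):

```java
import java.util.regex.Pattern;

public class MappingRuleDemo {
    // A rule with the "Regular expression" operator matches when the whole
    // SAML assertion attribute value matches the configured pattern.
    static boolean matchesRule(String pattern, String attributeValue) {
        return Pattern.matches(pattern, attributeValue);
    }
}
```

For example, the pattern .*@sap.com$ matches a mail attribute ending in @sap.com, and ^(admin).* matches user IDs starting with admin.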
Try to access the required application by logging on with users with and without the required roles, respectively.
Context
You may use the following steps to configure role caching settings. This may be required if you have automated test procedures for role assignments in your applications, since such tests may not work properly with the default subaccount settings.
Tip
● Increase the time in which the requests are counted to more than the default 2 minutes
● Increase the number of requests – instead of the default 20, set 100 or 200, for example.
The table below shows the VM system properties available for configuring role caching:
Set the required values to the required VM system properties as described in Configure VM Arguments [page
2176].
The application identity provider supplies the user base for your applications. For example, you can use your
corporate identity provider for your applications. This is called identity federation. SAP Cloud Platform
supports Security Assertion Markup Language (SAML) 2.0 for identity federation.
Contents
Prerequisites
● You have a key pair and certificate for signing the information you exchange with the IdP on behalf of SAP
Cloud Platform. This ensures the privacy and integrity of the data exchanged. You can use your pre-generated
ones or use the generation option in the cockpit.
● You have provided the IdP with the above certificate. This allows the IdP administrator to configure its trust
settings.
● You have the IdP signing certificate to enable you to configure the cloud trust settings.
● You have negotiated with the IdP administrator which information the SAML 2.0 assertion will contain for
each user. For example, this could be a first name, last name, company, position, or an e-mail.
● You know the authorizations and attributes the users logged by this IdP need to have on SAP Cloud
Platform.
Tip
You can configure your SAP Cloud Platform account for identity federation with more than one identity
provider. In that case, make sure all user identities are unique across all identity providers, and no user is
available in more than one identity provider. Otherwise, this could lead to incorrect assignment of security
roles at SAP Cloud Platform.
Context
In the SAML 2.0 communication, each SAP Cloud Platform account acts as a service provider. For more
information, see Security Assertion Markup Language (SAML) 2.0 protocol specification.
Tip
Each SAP Cloud Platform account is a separate service provider. If you need each of your applications to be
represented by its own service provider, you must create and use a separate account for each application.
See Create a Subaccount in the Cloud Foundry Environment [AWS, Azure, or GCP Regions].
Note
In this documentation and SAP Cloud Platform user interface, we use the term local service provider to
describe the SAP Cloud Platform account as a service provider in the SAML 2.0 communication.
You need to configure how the local service provider communicates with the identity provider. This includes, for
example, setting a signing key and certificate to verify the service provider’s identity and encrypt data. You can
use the configuration settings described in the table that follows.
Default: The local provider's own trust settings will inherit the SAP Cloud Platform default configuration (which is trust to SAP ID service). Use this for testing and exploring the scenario.
None: The local provider will have no trust settings, and it will not participate in any identity federation scenario. Use this for disabling identity federation for your account.
Custom: The local provider settings will have a specific configuration, different from the default configuration for SAP Cloud Platform. Use this for identity federation with a corporate identity provider or Identity Authentication tenant.
Force authentication: If you set it to Enabled, you enable force authentication for your application (despite SSO, users will have to re-authenticate each time they access it). Otherwise, set this option to Disabled.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
1006].
Note
Make sure that you have selected the relevant global account to be able to select the right account.
7. In Signing Key and Signing Certificate, place the Base64-encoded signing key and certificate. You can use
a pair generated with the cockpit (using the Generate Key Pair button) or an externally generated one.
Note
Certificates generated using the cockpit are valid for 10 years. If you want your identifying certificate
to have a different validity period, generate the key and certificate pair using an external tool, and copy the
contents into the Signing Key and Signing Certificate fields respectively in the cockpit.
Note
For more information about how to use an externally generated key and certificate pair, see (Optional) Using
External Key and Certificate [page 2410].
8. Choose the required value of the Principal Propagation and Force authentication option.
9. Save the changes.
10. Choose Get Metadata to download the SAML 2.0 metadata describing SAP Cloud Platform as a service
provider. You will have to import this metadata into the IdP to configure trust to SAP Cloud Platform.
If you want to use a signing key and certificate generated with an external tool (such as OpenSSL) for the
local service provider, use the following guidelines:
Example
As a result, OpenSSL generates two files in your current folder: spkey.pem (your private key) and
spcert.pem (a self-signed signing certificate).
Note
If you need the certificate to be signed by a certificate authority (CA), you need to proceed with a few more
steps:
1. Generate a certificate signing request (CSR) by executing the following command in the folder of your
spkey.pem:
OpenSSL will ask you to enter the fields of the CSR. For the Common Name field, we recommend that
you use the following format:
https://<SAP Cloud Platform host>/<your account name>.
As a result, OpenSSL generates one more file in your current folder: spkey.csr (the CSR for your key/
certificate pair).
2. Send the spkey.csr to your CA to get it signed.
The CA returns the signed certificate. You can use that certificate in the steps below.
Convert the private key file spkey.pem into the unencrypted PKCS#8 format using the following command:

openssl pkcs8 -nocrypt -topk8 -inform PEM -outform PEM -in spkey.pem -out spkey.pk8

Now open the file spkey.pk8 in a text editor and copy all contents except for the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- tags into the Signing Key text field in the cockpit. Then open the file spcert.pem in a text editor and copy all contents except for the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- tags into the Signing Certificate text field in the cockpit.
After clicking Save, you should get a message that you can proceed with configuring your trusted identity
provider settings.
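The copy step above amounts to stripping the PEM header and footer lines and keeping only the Base64 body. A small Java sketch of that transformation (an illustrative helper, not an official tool; the sample PEM content is made up):

```java
import java.util.stream.Collectors;

public class PemBody {

    // Keep only the Base64 body of a PEM file, dropping the
    // -----BEGIN ...----- and -----END ...----- tag lines.
    static String body(String pemContents) {
        return pemContents.lines()
                .filter(line -> !line.startsWith("-----"))
                .collect(Collectors.joining("\n"))
                .trim();
    }

    public static void main(String[] args) {
        String pem = "-----BEGIN CERTIFICATE-----\nMIIBbase64data\n-----END CERTIFICATE-----\n";
        // Paste this value into the Signing Certificate field in the cockpit.
        System.out.println(body(pem)); // MIIBbase64data
    }
}
```

The same transformation applies to spkey.pk8 for the Signing Key field.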
Context
Note
To benefit from fully-featured identity federation with SAML identity providers, you need to have chosen the
Custom configuration type in the Local Service Provider section.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate
to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Assertion Consumer Service: The SAP Cloud Platform endpoint type (application root or assertion consumer service). The IdP will send the SAML assertion to that endpoint.
Single Sign-on URL: The IdP's endpoint (URL) to which the SP's authentication request will be sent.
Single Sign-on Binding: The SAML-specified HTTP binding used by the SP to send the authentication request.
Single Logout URL: The IdP's endpoint (URL) to which the SP's logout request will be sent.
Note
If no single logout (SLO) endpoint is specified, no request to the IdP SLO endpoint will be sent, and only the local session will be invalidated.
Signing Certificate: The X.509 certificate used by the IdP to digitally sign the SAML protocol messages.
User ID Source: Location in the SAML assertion from which the user's unique name (ID) is taken when logging into the Cloud. If you choose subject, the ID is taken from the name identifier in the assertion's subject (<saml:Subject>) element. If you choose attribute, the user's name is taken from a SAML attribute in the assertion.
Source Value: Name of the SAML attribute that defines the user ID on the cloud.
Note
If nothing else is specified, the default IdP is used for authentication. Alternatively, you can specify a
different IdP using a URL parameter. See Using an IdP Different from the Default [page 2418].
Only for IDP-initiated SSO If this checkbox is marked, this identity provider can be
used only for IdP-initiated single sign-on scenarios. The
applications deployed at SAP Cloud Platform cannot use
it for user authentication from their login pages, for
example. Only users coming from links to the application
at the IdP side will be able to authenticate.
Note
When you add a new application identity provider and
you have selected Default configuration type in the
Local Service Provider section, this checkbox is
always marked. This means that SAP ID Service
(accounts.sap.com) will be used for authentication
when accessing applications/services on SAP Cloud
Platform, and the additional application identity
provider can be used only for IDP-initiated SSO.
5. In the Attributes tab, configure the user attribute mappings for this identity provider.
User attributes can contain any other information in addition to the user ID.
Default attributes are user attributes that all users logged by this IdP will have. For example, if we know that
"My IdP" is used to authenticate users from MyCompany, we can set a default user attribute for that IdP
"company=MyCompany".
Assertion-based attributes define a mapping between user attributes sent by the identity provider (in the
SAML assertion) and user attributes consumed by applications on SAP Cloud Platform (principal
attributes). This allows you to easily map the user information sent by the IdP to the format required by
your application without having to change your application code. For example, the IdP sends the first name
and last name user information in attributes named first_name and last_name. You, on the other hand,
have a cloud application that retrieves user attributes named firstName and lastName. You need to
define the relevant mapping in the Assertion-Based Attributes section so the application uses the
information from that identity provider properly.
Note
○ There are no default mappings of assertion attributes to user attributes. You need to define those if
you need them.
○ The attributes are case sensitive.
○ You can specify that all assertion attributes will be mapped to the corresponding principal
attributes without a change, by specifying mapping * to *.
○ SAML assertions larger than 25K are not supported.
○ We recommend that you avoid sending unnecessary user attributes from the IdP side (the same
also applies to unnecessary group mappings) as assertion attributes. Too many assertion
attributes result in a very long SAML assertion, which may put unnecessary load on
communication (and potentially result in errors). Send only the user attributes that your cloud
applications will really need.
In the screenshot above, all users authenticated by this IdP will have an attribute
organization="MOKMunicipality" and type="Government". In addition, several attributes (corresponding to
first name, last name and e-mail) from the SAML assertion will also be added to authenticated users. Note
that those attribute names provided in the assertion by the IdP are different from the principal attributes,
which are the attributes used by the cloud applications.
For more information about using user attributes in your application, see Authentication [page 2364].
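The attribute handling described above can be sketched with plain Java maps. The names first_name/firstName, lastName, and the default attribute company=MyCompany come from the examples in this section; the method itself is a hypothetical illustration of the semantics, not the platform API:

```java
import java.util.HashMap;
import java.util.Map;

public class AttributeMapping {

    // Map assertion attributes (sent by the IdP) to principal attributes
    // (consumed by the application), then add the IdP's default attributes.
    static Map<String, String> principalAttributes(
            Map<String, String> assertionAttributes,
            Map<String, String> assertionToPrincipal,
            Map<String, String> defaults) {
        Map<String, String> result = new HashMap<>(defaults);
        assertionAttributes.forEach((name, value) -> {
            String principalName = assertionToPrincipal.get(name);
            if (principalName != null) {
                result.put(principalName, value);   // explicit mapping, e.g. first_name -> firstName
            } else if (assertionToPrincipal.containsKey("*")) {
                result.put(name, value);            // "* to *": pass attributes through unchanged
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> mapping = Map.of("first_name", "firstName",
                                             "last_name", "lastName");
        Map<String, String> defaults = Map.of("company", "MyCompany");
        Map<String, String> fromIdp = Map.of("first_name", "Jane",
                                             "last_name", "Doe");
        System.out.println(principalAttributes(fromIdp, mapping, defaults));
        // e.g. {company=MyCompany, firstName=Jane, lastName=Doe}
    }
}
```

Note how, without an explicit mapping or a "* to *" rule, an assertion attribute never reaches the application.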
6. In the Groups tab, configure the groups associated with this IdP's users.
Groups that you define on the cloud are later mapped to Java EE application roles. As specified in Java EE,
in the web.xml, you define the roles authorized to access a protected resource in your application. You
therefore define the groups that exist there and the roles to which each group is mapped via the Groups tab
in the cockpit. For each different IdP, you then define a set of rules specifying to which groups a user logged
by this IdP belongs.
For more information about configuring groups, see Managing Groups and Roles [page 2397].
Note
You must define groups in advance, before you define default or assertion-based groups for this
IdP.
Default groups are the groups all users logged by this IdP will have. For example, all users logged by the
company IdP can belong to the group "Internal".
In the image above, all users logged by this IdP are added to the group Citizens.
All users from the ITSupport department (of organization MOKMunicipality) and the user with e-mail
[email protected] are added to group MOKMunicipalityAdmins for this subaccount. The rest of
the employees at MOKMunicipality (having an e-mail address in the mokmunicipality.org domain) are
assigned to group Government.
Context
You can define more than one identity provider for your account. There is always the default IdP. Initially, SAP ID
service is the default IdP but you can change that after you add another IdP.
You can register a tenant for Identity Authentication service as an identity provider for your subaccount.
Prerequisites
● You have defined service provider settings for the SAP Cloud Platform subaccount. See Configure SAP
Cloud Platform as a Local Service Provider [page 2408].
● You have chosen a custom local provider configuration type for this subaccount (using Cockpit Trust
Local Service Provider Configuration Type Custom )
Context
Identity Authentication service provides identity management for SAP Cloud Platform applications. You can
register a tenant for Identity Authentication service as an identity provider for the applications in your SAP
Cloud Platform subaccount.
Note
If you add a tenant for Identity Authentication service already configured for trust with the same service
provider name, the existing trust configuration on the tenant for Identity Authentication service side will be
updated. If you add a tenant for Identity Authentication configured for trust with SAP Cloud Platform with a
different service provider name, a new trust configuration will be created on the tenant for Identity
Authentication service side.
Note
When you remove a tenant for Identity Authentication service as trusted identity provider, the relevant
service provider configuration in the Identity Authentication tenant is preserved.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate
to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
○ You have a tenant for Identity Authentication service registered for your current SAP customer user
(s-user). You want to add the tenant as an identity provider.
1. Click Add Identity Authentication Tenant.
2. Choose the required Identity Authentication tenant and save the changes.
In this case, the trust will be established automatically upon registration on both the SAP Cloud
Platform and the tenant for Identity Authentication service side. See Getting Started with Identity
Authentication
○ You want to add a tenant for Identity Authentication service not related to your SAP user.
In this case, you need to register the tenant for Identity Authentication service as any other type of
identity provider. This means you need to set up trust settings on both the SAP Cloud Platform and the
Identity Authentication tenant side. See Integration.
Results
The tenant for Identity Authentication appears in the list of SAML identity providers. You can now further
administer the Identity Authentication tenant by opening the Identity Authentication Admin Console (hover
over the registered tenant for Identity Authentication and click Identity Authentication Admin Console). You
can manage the registered tenant for Identity Authentication as any other registered identity provider.
Note
It will take about 2 minutes for the trust configuration with the tenant for Identity Authentication to become
active.
Each SAP Cloud Platform subaccount is a separate service provider in the tenant for Identity
Authentication.
Tip
If you need each of your SAP Cloud Platform applications to be represented by its own service provider, you
must create and use a separate subaccount for each application. See Create a Subaccount in the Cloud
Foundry Environment [AWS, Azure, or GCP Regions].
Related Information
Context
● check credentials
● search for users
● retrieve user details
● retrieve information about the groups a specific user is a member of. You can use this information for user
authorizations. See Managing Roles [page 2397].
● SAP Single Sign-On with an SAP NetWeaver Application Server for Java system - the applications on SAP
Cloud Platform connect to the SAP on-premise system using the Destination API (and, if necessary, SAP
HANA Cloud Connector), and make use of the user store there.
Alternatively to the above scenarios, you can implement identity federation with an Identity Authentication
tenant, where the tenant is configured to use an on-premise user store. See:
Related Information
Overview
You can configure applications running on SAP Cloud Platform to use a user store of an SAP NetWeaver (7.2 or
higher) Application Server for Java system and an SAP Single Sign-On system. That way SAP Cloud Platform
does not need to keep the whole user database, but requests the necessary information from an on-premise
system.
Prerequisites
When deploying the application, you have to set system properties of the application VM. For more information,
see Configure VM Arguments [page 2176].
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Authentication [page 2364].
Example
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Context
The on-premise system is an AS Java with a deployed SCA from SAP Single Sign-On (SSO) 2.0. For the
configuration of the on-premise AS Java system, proceed as follows:
Procedure
For more information about the role assignment process, see Assigning Principals to Roles or Groups.
For more information about the policy configuration, see Editing the Authentication Policy of AS Java
Components.
3. If your user does not exist in the on-premise system, create a technical user.
For the proper communication with the on-premise AS Java system, you need to configure the destination of
the Java application on SAP Cloud Platform. For more information, see Configure Destinations from the
Cockpit [page 203].
You have to set the following properties for the destination of the cloud application:
URL: The URL to the on-premise AS Java system. If the system is exposed via a reverse proxy, use https://<AS Java Host>:<AS Java HTTPS Port>/scim/v1/. If the on-premise system is exposed via HANA Cloud Connector, use the virtual URL configured in the Cloud Connector: http://<Virtual host configured in Cloud Connector>:<virtual port>/scim/v1/. In this case, the configured protocol should be http, as the connectivity service uses secure tunneling to the on-premise system.
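Put together, such a destination might look like the following sketch. All property names and values other than URL are illustrative assumptions based on the description above (including the destination name and the use of a technical user), not a verbatim configuration:

```
Name=onpremise-userstore
Type=HTTP
URL=http://<Virtual host configured in Cloud Connector>:<virtual port>/scim/v1/
ProxyType=OnPremise
Authentication=BasicAuthentication
User=<technical user in the AS Java system>
```

Check the destination against Configure Destinations from the Cockpit [page 203] before relying on any of the assumed properties.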
You can use Microsoft Active Directory as an on-premise LDAP server providing a user store for your SAP Cloud
Platform applications.
Prerequisites
When deploying the application, you have to set system properties of the application VM. For more information,
see Configure VM Arguments [page 2176].
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Authentication [page 2364].
Example
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Create the required destination and configure SAP HANA Cloud Connector as described in Configure the User
Store [page 478].
This is an optional procedure that you can perform to configure the authentication methods used in a cloud
application. You can configure the behavior of standard Java EE authentication methods, or define custom
ones, based on custom combinations of login options.
Prerequisites
● You have an application with authentication defined in its web.xml or source code. See Authentication
[page 2364] .
Context
The following table describes the available login options. In the default authentication configuration, they are
pre-assigned to standard Java EE authentication methods. If you want to change this, you need to create a
custom configuration.
For each authentication method, you can select a custom combination of options. You may need to select more
than one option if you want to enable more than one way for users to authenticate for this application.
If you select more than one option, SAP Cloud Platform delegates authentication to the relevant login
modules consecutively in a stack. When a login module succeeds in authenticating the user, authentication
ends with success. If no login module succeeds, authentication fails.
Trusted SAML 2.0 identity provider: Authentication is implemented over the Security Assertion Markup Language (SAML) 2.0 protocol, and delegated to SAP ID service or a custom identity provider (IdP). The credentials users need to present depend on the IdP settings. See Application Identity Provider [page 2407].
User name and password: HTTP BASIC authentication with user name and password. The user name and password are validated either by SAP ID service (default) or by an on-premise SAP NetWeaver AS Java. See Using an SAP System as an On-Premise User Store [page 2422].
Note
If you want to use your Identity Authentication tenant for BASIC authentication (instead of SAP ID service/SAP NetWeaver), create a customer ticket in component BC-NEO-SEC-IAM. In the ticket, specify the Identity Authentication tenant you want to use.
Client certificate: Users authenticate with a client certificate installed in an on-premise SAP NetWeaver Application Server for Java system. See Enabling Client Certificate Authentication [page 2478].
Application-to-Application SSO: Used for AppToAppSSO destinations. See Application-to-Application SSO Authentication [page 227].
Note
When you select Trusted SAML 2.0 identity provider, Application-to-Application SSO becomes enabled automatically.
OAuth 2.0 token: Authentication is implemented over the OAuth 2.0 protocol. Users need to present an OAuth access token as credential. See OAuth 2.0 Authorization Code Grant [page 2438].
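The consecutive delegation through the stack of selected login options can be sketched as follows. The LoginStack class and the sample credential checks are hypothetical simplifications of the behavior described above, not the actual login module API:

```java
import java.util.List;
import java.util.function.Predicate;

public class LoginStack {

    // Each selected login option tries to authenticate the request;
    // the stack succeeds as soon as one option succeeds.
    static boolean authenticate(List<Predicate<String>> options, String credential) {
        for (Predicate<String> option : options) {
            if (option.test(credential)) {
                return true;   // first successful login module ends authentication
            }
        }
        return false;          // no login module succeeded: authentication fails
    }

    public static void main(String[] args) {
        // Stand-ins for, e.g., the SAML and OAuth 2.0 token options.
        Predicate<String> samlOption = c -> c.startsWith("SAMLResponse:");
        Predicate<String> oauthOption = c -> c.startsWith("Bearer ");
        List<Predicate<String>> stack = List.of(samlOption, oauthOption);

        System.out.println(authenticate(stack, "Bearer abc123")); // true
        System.out.println(authenticate(stack, "garbage"));       // false
    }
}
```

This is why adding a second option (as in the FORM + OAuth example below) broadens, rather than replaces, the ways users can authenticate.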
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate
to Global Accounts and Subaccounts [AWS, Azure, or GCP Regions].
Example
You have a Web application that users access using a Web browser. You want users to log in using a SAML
identity provider. Hence, you define the FORM authentication method in the web.xml of the application.
However, later you decide to provide mobile access to your application using the OAuth protocol (SAML is not
optimized for mobile access). You do this by adding the OAuth 2.0 token option for the FORM method for your
application. In this way, desktop users will continue to log in using a SAML identity provider, and mobile users
will use an OAuth 2.0 access token.
Related Information
The security guide provides an overview of the security-relevant information that applies to HTML5
applications.
Related Information
6.2.1.2.1 Authentication
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, the SAP Cloud Platform is configured to use the SAP ID service as identity provider (IdP), as
specified in SAML 2.0. You can configure a trust relationship to your custom IdP to provide access to the cloud
using your own user database. For information, see Application Identity Provider [page 2407].
HTML5 applications are protected with SAML2 authentication by default. For publicly accessible applications,
the authentication can be switched off. For information about how to switch off authentication, see
Authentication [page 1718].
6.2.1.2.2 Authorization
Permissions for an HTML5 application are defined in the application descriptor file. For more information about
how to define permissions for an HTML5 application, see Authorization [page 1719].
Permissions defined in the application descriptor are only effective for the active application version. To protect
non-active application versions, the default permission NonActiveApplicationPermission is defined by
the system for every HTML5 application.
To assign users to a permission of an HTML5 application, a role must be assigned to the corresponding
permission. As a result, all users who are assigned to the role get the corresponding permission. Roles are not
application-specific but can be reused across multiple HTML5 applications. For more information about
creating roles and assigning roles to permissions, see Managing Roles and Permissions [page 2216].
HTML5 application permissions can only protect the access to the REST service through the HTML5
application. If the REST service is otherwise accessible on the Internet or a corporate network, it must
implement its own authentication and authorization concept.
To access a system that is running in an on-premise network, you can set up an SSL tunnel from your
on-premise network to the SAP Cloud Platform using the SAP Cloud Platform Cloud Connector.
For more information about setting up the Cloud Connector, see the Cloud Connector Operator's Guide.
Related Information
Cross-site scripting (XSS) is one of the most common types of malicious attacks on web applications.
If an HTML5 application is connected to a REST service, the corresponding REST service must take measures
to protect the application against this type of vulnerability. For REST services implemented on the SAP Cloud
Platform, a common output encoding library may be used to protect applications. For more information about
XSS protection on the SAP Cloud Platform, see Protection from Cross-Site Scripting (XSS) [page 2518].
Cross-Site Request Forgery (CSRF) is another common type of attack on web applications.
If an application connects to a REST service, the corresponding REST service must take measures to protect
against CSRF. For REST services implemented on the SAP Cloud Platform, a CSRF prevention filter may be used
in the corresponding REST service. For more information about CSRF protection on the SAP Cloud
Platform, see Protection from Cross-Site Request Forgery [page 2519].
In this section, you can find information relevant for securing SAP HANA applications running on SAP Cloud
Platform.
Security Information
● General security concepts for SAP HANA applications: see the SAP HANA Security Guide
● Specific security concepts for SAP HANA applications running on SAP Cloud Platform: see Configure SAML 2.0 Authentication [page 1601]
● Setting up SAML authentication for SAP HANA XS applications: see How to Set Up SAML Authentication For Your SAP Cloud Platform Trial Instance
The platform identity provider is the user base for access to your SAP Cloud Platform subaccount in the Neo
environment. The default user base is provided by SAP ID Service. You can switch to an Identity Authentication
tenant if you want to use a custom user base.
Overview
By default, the cloud cockpit and console client are configured to use SAP ID Service as the platform identity
provider (providing the user base for subaccount members). SAP ID Service, however, uses the SAP user base
(providing, for example, your s- or p-user). If you want to have subaccount members from your custom user
base, and use custom security configuration (such as two-factor user authentication, or corporate user store,
for example), you can switch to a custom Identity Authentication tenant as a platform identity provider.
Note
There is a difference between a platform identity provider and application identity provider at SAP Cloud
Platform.
The diagram below describes the basic features of platform identity providers and application identity
providers, and provides a brief comparison between them.
Changing the platform identity provider settings ( Security Trust Platform Identity Provider in the
cloud cockpit) does not affect the application identity provider settings ( Security Trust Application
Identity Provider in the cloud cockpit) for this subaccount. See Application Identity Provider [page 2407].
Prerequisites
● You have a user with Administrator role for your subaccount (provided by the default user base, SAP ID
Service).
● You have enabled the Platform Identity Provider service. See Using Services in the Neo Environment [page
1740].
● You have an Identity Authentication tenant configured. See Identity Authentication documentation.
1. Log in to the SAP Cloud Platform cockpit with the Administrator user from the default user base.
2. Navigate to the required SAP Cloud Platform subaccount. See Navigate to Global Accounts and
Subaccounts [AWS, Azure, or GCP Regions].
The Identity Authentication tenant appears as a platform identity provider. The trust configuration with it is
complete. You can proceed with adding tenant users as subaccount members, and the rest of the steps
described in this document.
Context
Now that you have switched the user base, you need to add the users that you will use for access to this
subaccount as subaccount members.
Go to the Members tab in the cockpit. You can see all cockpit users, with their IDs, roles and user base, listed
here. To add a new member, choose Add Members and configure the member users from the respective user
base (Identity Authentication tenant). See also Add Members to Your Neo Subaccount [page 1903].
Note
The account members for access to this subaccount from the console client must have Administrator role.
Context
You can configure the Identity Authentication tenant for specific authentication scenarios using its
Administration Console UI.
To do so, choose the Administration Console button next to the registered tenant in the Security Trust
Platform Identity Provider section of the cloud cockpit.
In the tenant's Administration Console you will notice that it displays the SAP Cloud Platform cockpit as a
registered application. The application has <Identity Authentication tenant ID> as display name, and
https://account.hana.ondemand.com/<account name>/admin as SP name.
Context
If you open the default cockpit URL, https://account.<SAP Cloud Platform host>/cockpit (see SAP
Cloud Platform Cockpit [page 1006]), SAP ID Service will be used for user authentication.
To request the cockpit using the Identity Authentication tenant user base, use the following URL:
For the SAP Cloud Platform host, see Regions [page 11].
Tip
Make sure you use the subaccount name, not the subaccount display name, which could be different.
Check the value of the subaccount name in the subaccount overview section in the cloud cockpit.
Note
● You can see only those subaccounts that are in the region of the tenant cockpit URL.
● If you want to use risk-based authentication, for example, to enable two-factor authentication (TFA),
you have to enable it for all subaccounts in your global account. This means for each subaccount you
need to configure the platform identity provider to be an Identity Authentication tenant configured
properly for risk-based authentication.
Procedure
1. In an incognito browser window, open the tenant cockpit URL. This is required to make sure you are not
logged in with the SAP ID Service user.
2. Log in with a user name and password from the Identity Authentication tenant.
Context
When using the console client with a custom platform identity provider, you must supply a user from your
custom Identity Authentication tenant. For example, you want to execute the list-schemas command. In the
corresponding command parameter, you can provide the login id or email address of your user in the Identity
Authentication tenant as follows:
If you have enabled two-factor authentication (TFA) in your Identity Authentication tenant, you can enter the
6-digit passcode after the user's password when the console client prompts you for a password.
For more information about two-factor authentication in your Identity Authentication tenant, see Two-Factor
Authentication.
Tip
If you want to switch back to the default user base of SAP ID Service in the console client, you need to
remove the custom platform identity provider configuration you created.
Use the OAuth 2.0 Service to protect applications in the Neo environment using the OAuth 2.0 protocol.
OAuth 2.0 is a widely adopted security protocol for protection of resources over the Internet. It is used by many
social network providers and by corporate networks. It allows an application to request authentication on
behalf of users with third-party user accounts, without the user having to grant their credentials to the
application.
The following graphic illustrates protecting applications with OAuth on SAP Cloud Platform.
● Authorization code grant - there is a human user who authorizes a mobile application to access resources
on his or her behalf. See OAuth 2.0 Authorization Code Grant [page 2438]
● Client credentials grant - there is no human user but a device instead. In such a case, the access token is granted on the basis of client credentials only. See OAuth 2.0 Client Credentials Grant [page 2444]
Related Information
Use OAuth 2.0 service in the Neo environment of SAP Cloud Platform to enable your cloud applications for
authorization code grant flow. Authorization code grant is one of the basic flows specified in the OAuth 2.0
protocol.
Overview
OAuth 2.0
OAuth has become a standard and a best practice for applications and websites to handle authorization. OAuth defines an open protocol for allowing secure API authorization of desktop, mobile, and web applications through a simple and standard method.
In this way, OAuth mitigates some of the common concerns with authorization scenarios.
The following table shows the roles defined by OAuth, and their respective entities in SAP Cloud Platform:
● Authorization server (SAP Cloud Platform infrastructure): the server that manages the authentication and authorization of the different entities involved.
<login-config>
<auth-method>OAUTH</auth-method>
</login-config>
<security-constraint>
<web-resource-collection>
<web-resource-name>Protected Area</web-resource-name>
<url-pattern>/rest/get-photos</url-pattern>
</web-resource-collection>
<auth-constraint>
<!-- Role Everyone will not be assignable -->
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-role>
<description>All SAP Cloud Platform users</description>
<role-name>Everyone</role-name>
</security-role>
In your protected application you can acquire the user ID and attributes as described in Working with User
Profile Attributes [page 2381].
There are two additional user attributes you can use to retrieve token-specific information:
Handling Sessions
The Java EE specification requires session support on the client side. Sessions are maintained with a cookie, which the client receives during authentication and then passes along to the server on every request. The
OAuth specification, however, does not necessarily require the client to support such a session mechanism.
That is, the support of cookies is not mandatory. On every request, the client passes along to the server only
the token instead of passing cookies. Using the OAuth login module described in the Protecting Resources
Declaratively section, you can implement a user login based on an access token. The login, however, occurs on
every request, and thus it implies the risk of creating too many sessions in the Web container.
To serve requests without holding a Web container session, use a filter with the proper configuration, as shown in the following example:
<filter>
<display-name>OAuth scope definition for viewing a photo album</display-name>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<filter-class>
com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
</filter-class>
<init-param>
<param-name>scope</param-name>
<param-value>view-photos_upload-photos</param-value>
</init-param>
<init-param>
<param-name>no-session</param-name>
<param-value>true</param-value>
</init-param>
</filter>
One of the ways to enforce scope checks for resources is to declare the resource protection in the web.xml.
This is done by specifying the following elements:
Element Description
Initial parameters With these, you specify the scope, user principal, and HTTP method:
● scope
● http-method
● user-principal - if set to "true", you will get the user ID
● no-session - if you set this to "true", the session will be destroyed when the filter finishes. This means that each time the filter is used, a new session will be created. Default value: false.
The following example shows a sample web.xml for defining and configuring OAuth resource protection for the
application.
<filter>
<display-name>OAuth scope definition for viewing a photo album</display-name>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<filter-class>
com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
</filter-class>
<init-param>
<param-name>scope</param-name>
<param-value>view-photos</param-value>
</init-param>
<init-param>
<param-name>http-method</param-name>
<param-value>get post</param-value>
</init-param>
</filter>
In this code snippet you can observe how the PhotoAlbumServlet is mapped to the previously specified
OAuth scope filter:
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<servlet-name>PhotoAlbumServlet</servlet-name>
</filter-mapping>
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<url-pattern>/photos/*.jpg</url-pattern>
</filter-mapping>
In the second case, all files with the *.jpg extension that are served from the /photos directory will be
protected by the OAuth filter.
For more information regarding possible mappings, see the filter-mapping element specification.
Alternatively to the declarative approach with the web.xml (described above), you can use the OAUTH login
module programmatically. For more information, see Programmatic Authentication [page 2369].
When a resource protected by OAuth is requested, your application must pass the access token using the
HTTP "Authorization" request header field. The value of this header must be the token type and access token
value. The currently supported token type is "bearer".
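As a sketch only (the resource URL and token value here are placeholders, not values from this documentation), the header can be set with plain HttpURLConnection:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class BearerCall {

    // Builds the Authorization header value; "bearer" is the only
    // token type currently supported by the platform.
    static String authorizationHeader(String accessToken) {
        return "Bearer " + accessToken;
    }

    public static void main(String[] args) throws IOException {
        // Placeholder URL of an OAuth-protected resource
        URL url = new URL("https://myapp.hana.ondemand.com/rest/get-photos");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", authorizationHeader("<access token>"));
        // conn.getResponseCode() would then perform the authorized request
    }
}
```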
When the protected resource access check is performed, the filter calls the API, which in turn calls the authorization server to check the validity of the access token and to retrieve the token’s scopes.
The result handling between the authorization server and the resource server, between the resource server and the API, and between the resource server and the filter is as follows:
● If the access token is valid and carries the required scope, the request is processed. When user-principal=true, request.getUserPrincipal().getName() returns the user_id.
● If the token does not carry the required scope, the request is rejected with reason = "access_forbidden".
● If no access token is supplied with the request, it is rejected with reason = "missing_access_token".
Next Steps
1. You can now deploy the application on SAP Cloud Platform. For more information, see Deploying and
Updating Applications [page 1453]
2. After you deploy, you need to configure clients and scopes for the application. For more information, see
OAuth 2.0 Configuration [page 2445].
Use OAuth 2.0 service in the Neo environment of SAP Cloud Platform to enable your cloud applications for
client credentials grant flow.
Context
Client credentials grant is one of the basic flows specified in the OAuth 2.0 protocol. It enables grant of an
OAuth access token based on the client credentials only, without user interaction. You can use this flow for
enabling system-to-system communication (with a service user), for example, in device communication in an
Internet of things scenario.
Procedure
1. Register a new OAuth client of type Confidential. See Register an OAuth Client [page 2445].
2. Using that client, you can get an access token using a REST call to the endpoints shown in the cockpit under Security > OAuth > Branding.
○ Protect your application declaratively with the OAuth login method in the web.xml. See OAuth 2.0
Authorization Code Grant [page 2438].
○ Use the getRemoteUser() method of the HTTP request
(javax.servlet.http.HttpServletRequest) to get the client ID.
The getRemoteUser() method returns the client ID prefixed by oauth_client_ as follows:
oauth_client_<client ID>
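For illustration (the helper name below is mine, not a platform API), the client ID can be recovered by stripping the documented prefix from the remote user name:

```java
public class OAuthClientId {

    static final String PREFIX = "oauth_client_";

    // Returns the client ID if the remote user represents an OAuth client,
    // or null for a regular (human) user name.
    static String extractClientId(String remoteUser) {
        if (remoteUser != null && remoteUser.startsWith(PREFIX)) {
            return remoteUser.substring(PREFIX.length());
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(extractClientId("oauth_client_myDevice")); // myDevice
    }
}
```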
Tip
You can use the client ID returned as remote user to assign Java EE roles to clients, and use them
for role-based authorizations. See:
Caution
Having multiple clients with the same case-sensitive name will lead to having the same user ID at
runtime. This could lead to incorrect user role assignments and authorizations.
Register clients, manage access tokens, configure scopes and perform other OAuth configuration tasks in the
Neo environment of SAP Cloud Platform.
Prerequisites
● You have an account with administrator role in SAP Cloud Platform. See Managing Member Authorizations
in the Neo Environment [page 1904].
● You have developed an OAuth-protected application (resource server). See OAuth 2.0 Authorization Code
Grant [page 2438].
● You have deployed the application on SAP Cloud Platform. See Deploying and Updating Applications [page
1453].
Contents:
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
1006].
Field Description
Subscription The application for which you are registering this client.
To be able to register for a particular application, this
account must be subscribed to it. For more information,
see Getting Started with Business Applications
Subscriptions in the Neo Environment [page 1061].
Note
The client ID must be globally unique within the
entire SAP Cloud Platform.
Confidential If you mark this box, the client ID will be protected with a
password. You will need to supply the password here, and
provide it to the client.
Skip Consent Screen If you mark this option, no end user action will be
required for authorizing this client. Otherwise, the end
user will have to confirm granting the requested
authorization.
Redirect URI The application URI to which the authorization server redirects the client, passing the authorization code.
Token Lifetime The token lifetime. This value applies to the access token and authorization code.
Results
Define scopes for your OAuth-protected application to fine-grain the access rights to it.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
1006].
By revoking access tokens, you can immediately withdraw access rights you have previously granted. You may wish to revoke an access token if you believe it has been stolen, for example.
● The Cockpit - an administrator user may use the Cockpit to revoke tokens on behalf of different end users
● The end user UI - an end user may access his or her own tokens (and no other user's) and revoke the required ones using that UI
1. In the Cockpit, choose the Security OAuth section, and go to the Branding tab.
2. Click the End User UI link. The end user UI opens in a new browser window, where you can see all access tokens issued for the current user.
3. Choose the Revoke button for the tokens to revoke.
Context
When your account is configured for trust with a corporate identity provider (IdP), it is often impossible to
connect to the IdP directly using a personal mobile device. The corporate IdP is often part of a protected
corporate network, which does not allow personal devices to access it. To facilitate OAuth authentication on mobile devices, you can use the end user UI's QR code generation option. It provides the authorization code sent by the OAuth authorization server as a scannable QR code.
Procedure
You can customize the look and feel of the authorization page displayed to end users with your corporate branding. This will make it easier for them to recognize your organization.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
1006].
Results
The authorization page that end users see contains the company logo and colors you specify. The following
image shows an example of a customized authorization page.
Propagate users from external applications with SAML identity federation to OAuth-protected applications
running in the Neo environment of SAP Cloud Platform. Exchange the user ID and attributes from a SAML
assertion for an OAuth access token, and use the access token to access the OAuth-protected application.
Prerequisites
● You have an application external to SAP Cloud Platform. The application is integrated with a third-party
library or system functioning as a SAML identity provider. That application has a SAML assertion for each
authenticated user.
Note
How the external application and its SAML identity provider work together and communicate is outside
the scope of this documentation. They can be separate applications, or the external application may be
using a library integrated in it.
Note
If you are using a separate third-party identity provider system for this scenario, make sure you have correctly configured trust between the external application and the identity provider system. Refer to
the identity provider vendor's documentation for details.
Context
This scenario follows the SAML 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants
specification. The scenario is based on exchanging the SAML (bearer) assertion from the third-party identity
provider for an OAuth access token from the SAP Cloud Platform authorization server. Using the access token,
the external application can access the OAuth-protected application.
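The exchange follows the SAML 2.0 bearer assertion grant (RFC 7522): the token request is a form post with grant_type urn:ietf:params:oauth:grant-type:saml2-bearer and the base64url-encoded assertion. A minimal sketch of building such a request body (the assertion content is a placeholder, and this helper is illustrative, not platform API):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Saml2BearerRequest {

    // Builds the application/x-www-form-urlencoded body for the
    // SAML 2.0 bearer assertion grant (RFC 7522).
    static String formBody(String samlAssertionXml) {
        String assertion = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(samlAssertionXml.getBytes(StandardCharsets.UTF_8));
        return "grant_type=" + URLEncoder.encode(
                "urn:ietf:params:oauth:grant-type:saml2-bearer", StandardCharsets.UTF_8)
                + "&assertion=" + assertion;
    }

    public static void main(String[] args) {
        // Placeholder assertion; a real one comes from the identity provider
        System.out.println(formBody("<saml:Assertion>...</saml:Assertion>"));
    }
}
```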
The graphic below illustrates the scenario implemented in terms of SAP Cloud Platform.
Procedure
1. Configure SAP Cloud Platform for trust with the SAML identity provider. See Configure Trust to the SAML
Identity Provider [page 2411].
2. Register the external application as an OAuth client in SAP Cloud Platform. See Register an OAuth Client
[page 2445].
3. Make sure the SAML (bearer) assertion that the external application presents contains the following
information:
<saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">p12356789</saml:NameID>
(Table columns: Landscape Host, Description, Required Audience Value, Signing Certificate.)
<Attribute Name="mail">
    <AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">[email protected]</AttributeValue>
</Attribute>
<Attribute Name="first_name">
    <AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">Jon</AttributeValue>
</Attribute>
4. In the code of the OAuth-protected application, you can retrieve the user attributes using the relevant SAP
Cloud Platform API. See User Attributes [page 2381].
The Keystore Service provides a repository for cryptographic keys and certificates to the applications in the
Neo environment of SAP Cloud Platform.
If you want to use cryptography with unlimited strength in an SAP Cloud Platform application, you need to enable it by installing the necessary Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files on SAP JVM.
Related Information
The Keystore API provides a repository for cryptographic keys and certificates to the applications in the Neo environment of SAP Cloud Platform. It allows you to manage keystores at subaccount, application, or subscription level.
The Keystore API is protected with OAuth 2.0 client credentials. Create an OAuth client and obtain an access
token to call the API methods. See Using Platform APIs [page 1737].
Using an HTTP destination is a convenient way to establish a connection to the keystore. Once created, you can re-use the destination for different API calls. To create the required destination, follow these steps:
1. At the required level, create an HTTP destination with the following information:
○ Name=<your destination name>
○ URL=https://api.<SAP Cloud Platform host>/keystore/v1
○ ProxyType=Internet
○ Type=HTTP
○ CloudConnectorVersion=2
○ Authentication=NoAuthentication
See Create HTTP Destinations [page 206].
2. In your application, obtain an HttpURLConnection object that uses the destination.
See ConnectivityConfiguration API [page 255].
Tip
We recommend using the If-None-Match header for subsequent calls to the keystore to check whether the keystore contents have been modified since your last GET call.
From the response, copy the ETag header value and repeat the request with that header added.
You can do this using the same code excerpt as above, with the following line added before the last line:
Expected responses:
If you want to overwrite the keystore, set the overwrite query parameter to true. For example:
Expected response:
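The conditional GET described above can be sketched with plain HttpURLConnection. The host and ETag value are placeholders; in an application you would obtain the connection through the destination created earlier:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class KeystoreEtagCheck {

    // HTTP 304 Not Modified means the keystore content is unchanged
    // since the ETag was issued, so a cached copy can be reused.
    static boolean isUnchanged(int responseCode) {
        return responseCode == HttpURLConnection.HTTP_NOT_MODIFIED;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder Keystore API base URL
        URL url = new URL("https://api.example.com/keystore/v1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // ETag value copied from a previous GET response
        conn.setRequestProperty("If-None-Match", "<etag>");
        // if (isUnchanged(conn.getResponseCode())) { /* reuse cached keystore */ }
    }
}
```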
Related Information
Overview
The Keystore Service provides a repository for cryptographic keys and certificates to the applications hosted on SAP Cloud Platform. By using the Keystore Service, applications can easily retrieve keystores and use them in various cryptographic operations, such as signing and verifying digital signatures, encrypting and decrypting messages, and performing SSL communication.
Features
The Keystore Service stores and provides keystores encoded in the following formats:
The keystore service works with keystores available on the following levels:
● Subscription level
Keystores available for a certain application provided by another account.
● Application level
Keystores available for a certain application in a particular consumer account.
● Account level
Keystores available for all applications in a particular consumer account.
When searching for a keystore with a certain name, the Keystore Service searches the levels in the following order: subscription level, then application level, then account level. Once a keystore with the specified name has been found at a certain level, no further levels are searched.
To consume the Keystore Service, you need to add the following reference to your web.xml file:
<resource-ref>
<res-ref-name>KeyStoreService</res-ref-name>
<res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
Then, in the code you can look up Keystore Service API via JNDI:
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
...
KeyStoreService keystoreService = (KeyStoreService) new
InitialContext().lookup("java:comp/env/KeyStoreService");
For more information, see Tutorial: Using the Keystore Service for Client Side HTTPS Connections.
The keystore console commands are called from the SAP Cloud Platform console client and allow users to list,
upload, download, and delete keystores. To be able to use them, the user must have administrative rights for
that account. The console supports the following keystore commands: list-keystores, upload-keystore,
download-keystore, and delete-keystore.
Related Information
SAP JVM, used by SAP Cloud Platform, trusts the certificate authorities (CAs) listed below by default. This means that external HTTPS services using X.509 server certificates issued by those CAs are trusted by default on SAP Cloud Platform, and no trust needs to be configured manually.
For SSL connections to services that use different certificate issuers, you need to configure trust manually, using the Keystore Service of the platform. For more information, see Using the Keystore Service for Client Side HTTPS Connections [page 2472].
Properties
Note
The following certificates will be removed from SAP JVM 7 on 21 November 2019:
If you are using any of these certificates, switch to another one from the approved list above. You can do that
in one of the following ways:
Related Information
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 1402].
● You have created a HelloWorld Web application as described in the Creating a HelloWorld Application
tutorial. For more information, see Creating a Hello World Application [page 1416].
● You have an HTTPS server hosting a resource which you would like to access in your application.
● You have prepared the required key material as .jks files in the local file system.
Note
The file client.jks contains a client identity key pair trusted by the HTTPS server, and cacerts.jks contains all issuer certificates for the HTTPS server. The files are created with the keytool from the standard JDK distribution. For more information, see Key and Certificate Management Tool.
Context
This tutorial describes how to extend the HelloWorld Web application to use SAP Cloud Platform Keystore
Service. It tells you how to make an SSL connection to an external HTTPS server by using the JDK and the Apache HTTP Client.
You test and run the application on your local server and on SAP Cloud Platform.
Procedure
To enable the look-up of the Keystore Service through JNDI, you need to add a resource reference entry to
the web.xml descriptor.
a. In the Project Explorer view, select the HelloWorld/WebContent/WEB-INF node.
b. Open the web.xml file in the text editor and insert the following content:
<resource-ref>
<res-ref-name>KeyStoreService</res-ref-name>
<res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
package com.sap.cloud.sample.keystoreservice;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.security.KeyStore;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
public class SSLExampleServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
    /** Connects to the HTTPS server specified in the request parameters. */
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Look up the Keystore Service via JNDI (see the resource reference in web.xml)
        KeyStoreService keystoreService;
        try {
            Context context = new InitialContext();
            keystoreService = (KeyStoreService) context.lookup("java:comp/env/KeyStoreService");
        } catch (NamingException e) {
            throw new ServletException("Keystore Service is not available", e);
        }
        // Keystore names correspond to the prepared client.jks and cacerts.jks files
        String clientKeystoreName = "client";
        String clientKeystorePassword = request.getParameter("client.keystore.password");
        if (clientKeystorePassword == null
                || (clientKeystorePassword = clientKeystorePassword.trim()).isEmpty()) {
            response.getWriter().println("Password for client keystore is not specified");
            return;
        }
        String trustedCAKeystoreName = "cacerts";
        // Get a named keystore, with password for integrity check
        KeyStore clientKeystore;
        try {
            clientKeystore = keystoreService.getKeyStore(clientKeystoreName,
                    clientKeystorePassword.toCharArray());
        } catch (Exception e) {
            response.getWriter().println("Client keystore is not available: " + e);
            return;
        }
        // Get a named keystore without integrity check
        KeyStore trustedCAKeystore;
        try {
            trustedCAKeystore = keystoreService.getKeyStore(trustedCAKeystoreName, null);
        } catch (Exception e) {
            response.getWriter().println("Trusted CAs keystore is not available: " + e);
            return;
        }
f. Save your changes and make sure that the project compiles without errors.
3. Deploy and Test the Web Application
Procedure
1. Add the required .jar files of the Apache HTTP Client (version 4.2 or higher) to the build path of your
project.
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeSocketFactory;
import org.apache.http.conn.ssl.SSLSocketFactory;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
3. Replace callHTTPSServer() method with the one using Apache HTTP client.
Related Information
Procedure
1. To deploy your Web application on the local server, follow the steps for deploying a Web application locally
as described in Deploy Locally from Eclipse IDE [page 1468].
2. To upload the required keystores, copy the prepared client.jks and cacerts.jks files into <local
server root>\config_master\com.sap.cloud.crypto.keystore subfolder.
3. To test the functionality, open the following URL in your Web browser: http://localhost:<local
server HTTP port>/HelloWorld/SSLExampleServlet?host=<remote HTTPS server host
name>&port=<remote HTTPS server port number>&path=<remote HTTPS server
resource>&client.keystore.password=<client identity keystore password>.
Related Information
Procedure
1. To deploy your Web application on the cloud, follow the steps for deploying a Web application to SAP Cloud
Platform as described in Deploy on the Cloud with the Console Client [page 1475].
2. To upload the required keystores, execute the upload-keystore console command with the prepared .jks files. For more information, see the Cloud Configuration section in Keys and Certificates [page 2461].
Example
Assuming you have mySubaccount subaccount, myApplication application, myUser user, and the
keystore files in folder C:\Keystores, you need to execute the following commands in your local <SDK
root>\tools folder:
For more information about the keystore console commands, see Keystore Console Commands [page
2463].
3. To test the functionality, open the application URL shown by the SAP Cloud Platform cockpit with the following options: <SAP Cloud Platform Application URL>/SSLExampleServlet?host=<remote HTTPS server host name>&port=<remote HTTPS server port number>&path=<remote HTTPS server resource>&client.keystore.password=<client identity keystore password>.
Related Information
You can enable the users for your Web application to authenticate using client certificates. This corresponds to
the CERT and BASICCERT authentication methods supported in Java EE.
Overview
Prerequisites
(For the mapping modes requiring certificate authorities) You have a keystore defined. See Keys and
Certificates [page 2461].
Context
Using information in the client certificate, SAP Cloud Platform will map the certificate to a user name using the
mapping mode you specify.
Context
By default, SAP Cloud Platform supports SSL communication for Web applications through a reverse proxy
that does not request a client certificate. To enable client certificate authentication, you need to configure the
reverse proxy to request a client certificate.
Add cert.hana.ondemand.com as a platform domain. See Using Platform Domains [page 2241].
In your Web application, use declarative or programmatic authentication to protect application resources.
Use one of the following two methods for client certificate authentication:
If you use the declarative approach, you need to specify the authentication method in the application web.xml
file. See Declarative Authentication [page 2364].
If you use the programmatic approach, specify the authentication method as a parameter for the login context
creation. For more information, see Programmatic Authentication [page 2369].
The user mapping defines how the user name is derived from the received client certificate. You configure user
mapping using Java system properties.
com.sap.cloud.crypto.clientcert.keystore_name Defines the name of the keystore used during the user mapping process; it is mandatory for the mapping modes that use the keystore.
Note
Use a keystore that is available in the Keystore Service. See Keys and Certificates [page 2461].
Note
Use the keystore name without the keystore file extension (jks, for example).
Note
Depending on the value of the com.sap.cloud.crypto.clientcert.mapping_mode property, setting the com.sap.cloud.crypto.clientcert.keystore_name property may be mandatory.
For more information about how to set the value of the system property, see Configure VM Arguments [page 2176].
For more information about the particular values you need to set, see the table below.
CN
The user name equals the common name (CN) of the certificate’s subject.
To use this mapping mode, set the com.sap.cloud.crypto.clientcert.mapping_mode property with value CN.
Example: a client certificate with cn=myuser,ou=security as a subject is mapped to a myuser user name.
Note
The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2461].

CN@issuer
The user name is defined as <CN of the certificate’s subject>@<keystore alias of the certificate’s issuer>. Use this mapping mode when you have certificates with identical CNs.
To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with a value CN@Issuer
● com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the trusted issuers
The issuer is trusted if it is in the keystore or is part of a trusted certificate chain. A certificate chain is trusted if at least one of its issuers exists in the keystore.
Example: a client certificate with CN=john, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA, O=SAP as an issuer is received. The specified keystore with trusted issuers contains the same issuer, CN=SSO CA, O=SAP, that has an sso_ca alias. Then the user name is defined as john@sso_ca.
Note
The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this keystore, and then the authentication fails. For more information about setting up the Keystore Service, see Keys and Certificates [page 2461].

wholeCert
The whole client certificate is compared with each entry in the specified keystore, and the user name is defined as the alias of the matching entry.
To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with a value wholeCert
● com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the respective user certificates
Example: the following client certificate is received:
Subject: CN=john.miller, C=DE, O=SAP, OU=Development
Validity Start Date: March 19 09:04:32 2013 GMT
Validity End Date: March 19 09:04:32 2018 GMT
…
The specified keystore contains the same certificate with an alias john. Then the user name is defined as john.
Note
The client certificate is not accepted if no exact match is found in the specified keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2461].

subjectAndIssuer
Only the subject and issuer fields of the received client certificate are compared with the ones of each keystore entry, and the user name is defined as the alias of the matching entry. Use this mapping mode when you want authentication by validating only the certificate’s subject and issuer.
To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with a value subjectAndIssuer
● com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the respective user certificates
Example: a certificate with CN=john.miller, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA, O=SAP as an issuer is received. The specified keystore contains a certificate with alias john that has the same subject and issuer fields. Then the user name is defined as john.
Note
The client certificate is not accepted if an entry with the same subject and issuer is missing in the specified keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2461].
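To illustrate only the CN mapping mode (this is not how the platform implements it), the common name can be extracted from a subject DN with the JDK's LdapName parser:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CnMapping {

    // Returns the value of the CN component of the subject DN,
    // or null if the DN has no CN or cannot be parsed.
    static String commonName(String subjectDn) {
        try {
            for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
                if (rdn.getType().equalsIgnoreCase("CN")) {
                    return rdn.getValue().toString();
                }
            }
        } catch (InvalidNameException e) {
            // not a parsable distinguished name
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(commonName("cn=myuser,ou=security")); // myuser
    }
}
```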
Context
After you set up client certificate authentication, you need to use a special URL to call the application with that
authentication type. You need to use the following URL pattern:
Note
In comparison, the default application URL pattern at SAP Cloud Platform is:
For the available SAP Cloud Platform regions and their hosts, see Regions [page 11].
Example 1: You have an application running in the Europe (Rot) region. It has the following default application
URL:
https://bigideaX.hana.ondemand.com/exampleX
To call the application with client certificate authentication, you need to use the following URL:
https://bigideaX.cert.hana.ondemand.com/exampleX
Example 2: You have an application running in the Canada (Toronto) region. It has the following default
application URL:
https://bigideaZ.ca1.hana.ondemand.com/exampleZ
To call the application with client certificate authentication, you need to use the following URL:
https://bigideaZ.cert.ca1.hana.ondemand.com/exampleZ
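The host rewriting shown in the two examples amounts to inserting a cert segment after the first label of the host name. A minimal sketch of that transformation follows; the CertUrl helper class and the regex approach are illustrative, not part of the platform:

```java
// Illustrative helper; not part of SAP Cloud Platform. It reproduces the host
// rewriting shown in the two examples above by inserting a "cert" segment
// after the first label of the host name.
public class CertUrl {
    static String toCertUrl(String defaultUrl) {
        // "https://bigideaX.hana..." -> "https://bigideaX.cert.hana..."
        return defaultUrl.replaceFirst("^(https://[^.]+)\\.", "$1.cert.");
    }
}
```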
For client certificate authentication in your application, users need to present client certificates issued by one of the certificate authorities (CAs) listed below.
Trusted CAs
For each root certificate in this list, the subject and issuer are identical; the fingerprint follows.
● CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Fingerprint: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
● CN=SAP Internet of Things CA, O=SAP IoT Trust Community II, C=DE
Fingerprint: 45:53:D3:F2:22:58:FE:35:59:B1:84:9F:27:3B:8C:69:C2:4C:FA:15
● CN=SAP Passport CA, O=SAP Trust Community, C=DE
Fingerprint: 10:BD:99:32:E8:3A:01:CD:C4:4F:56:10:05:47:30:A8:73:18:16:6D
● CN=thawte Primary Root CA, OU="(c) 2006 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US
Fingerprint: 91:C6:D6:EE:3E:8A:C8:63:84:E5:48:C2:99:29:5C:75:6C:81:7B:81
● OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Fingerprint: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
● CN=SAP Cloud Root CA, O=SAP SE, L=Walldorf, C=DE
Fingerprint: 6d:80:92:77:4a:f2:d5:ed:ae:3a:5c:99:d6:56:93:1c:21:97:a9:50
For a complete list of Root CA certificates that are approved by SAP Global Security, see SAP Note 2801396 .
By default, SAP JVM provides Java Cryptography Extension (JCE) with limited cryptographic strength. If you want to use cryptography with unlimited strength in an SAP Cloud Platform application, you need to enable it by installing the necessary Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files on SAP JVM. To do that, follow the procedure below.
Prerequisites
You have the appropriate Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files
enabling cryptography with unlimited strength.
1. Pack the encryption policy files (JCE Unlimited Strength Jurisdiction Policy Files) in the following folder of
the Web application:
Results
The encryption policy files (Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files) are installed on the JVM of the application before it starts. As a result, the application can use unlimited-strength encryption.
Example
The WAR file of the application must have the following file entries:
META-INF/ext_security/jre7/local_policy.jar
META-INF/ext_security/jre7/US_export_policy.jar
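One way to confirm that the WAR file contains these entries is to list its contents. This is illustrative only; myapp.war is a placeholder name.

```shell
# Illustrative check that the policy files are packaged at the expected paths;
# "myapp.war" is a placeholder for your application's WAR file.
jar tf myapp.war | grep 'META-INF/ext_security/jre7'
```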
Related Information
Context
Using the Password Storage API, you can securely persist passwords and key phrases, such as passwords for keystore files.
Before transportation and persistence, passwords are encrypted with an encryption key that is specific to the application that owns the password. They are stored per subscription and are accessible only when the owning application is working on behalf of the corresponding subscription.
Note
Each password is identified by an alias. To check the rules and constraints for password aliases, permitted characters, and length, see the security javadoc.
To use the password storage API, you need to add a resource reference to PasswordStorage in the web.xml
file of your application, which is located in the \WebContent\WEB-INF folder as shown below:
<resource-ref>
<res-ref-name>PasswordStorage</res-ref-name>
<res-type>com.sap.cloud.security.password.PasswordStorage</res-type>
</resource-ref>
An initial JNDI context can be obtained by creating a javax.naming.InitialContext object. You can then
consume the resource by looking up the naming environment through the InitialContext class as follows:
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI
resource name (as specified in the web.xml file) to form the lookup name.
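As a sketch of how the lookup name is composed from the web.xml declaration above, assuming the resource reference name PasswordStorage; the helper class below is illustrative and not part of the SAP API:

```java
// Illustrative helper showing how the JNDI lookup name is composed; the class
// itself is not part of the SAP API.
public class PasswordStorageLookup {
    static String lookupName(String resRefName) {
        // Per the Java EE specification, the java:comp/env prefix is added to
        // the res-ref-name declared in web.xml.
        return "java:comp/env/" + resRefName;
    }
    // In the application, the composed name is passed to the JNDI context:
    //   InitialContext ctx = new InitialContext();
    //   PasswordStorage storage =
    //       (PasswordStorage) ctx.lookup(lookupName("PasswordStorage"));
}
```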
Below is a code example of how to use the API to set, get or delete passwords. These methods provide the
option of assigning an alias to the password.
import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.sap.cloud.security.password.PasswordStorage;
import com.sap.cloud.security.password.PasswordStorageException;
.......
Note
It is recommended to cache the obtained value, as reading passwords is an expensive operation that involves several internal remote calls to the central storage and audit infrastructure. As passwords differ between tenants, the cache should be tenant-aware. The PasswordStorage instance obtained via lookup can be cached and used by multiple threads.
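The tenant-aware caching advice above can be sketched as follows. The TenantPasswordCache class and the reader function are illustrative stand-ins, not part of the SAP API; in a real application the reader would wrap the expensive read from the password storage.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;

// Illustrative tenant-aware cache; TenantPasswordCache is not part of the SAP
// API. The reader function stands in for the expensive remote read from the
// password storage.
public class TenantPasswordCache {
    private final Map<String, char[]> cache = new ConcurrentHashMap<>();
    // (tenantId, alias) -> password; invoked only on a cache miss
    private final BiFunction<String, String, char[]> reader;

    public TenantPasswordCache(BiFunction<String, String, char[]> reader) {
        this.reader = reader;
    }

    // The cache key combines tenant and alias, so tenants never see each
    // other's entries; ConcurrentHashMap makes this safe for multiple threads.
    public char[] get(String tenantId, String alias) {
        return cache.computeIfAbsent(tenantId + "\u0000" + alias,
                key -> reader.apply(tenantId, alias));
    }
}
```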
Local Testing
When you run applications on SAP Cloud Platform local runtime, you can use a local implementation of the password storage API, but keep in mind that the passwords are stored unencrypted in a local file. Therefore, for local testing, use only test passwords.
Related Information
In this section you can find information about the audit log functionality in the SAP Cloud Platform Neo environment.
Related Information
Audit Log Retrieval API Usage for the Neo Environment [page 2493]
Audit Log Retention API Usage for the Neo Environment [page 2498]
The audit log retrieval API allows you to retrieve the audit logs for your SAP Cloud Platform Neo environment account. It follows the OData 4.0 standard and provides the audit log results as an OData collection of JSON entities.
The audit log retrieval API is protected with the OAuth 2.0 client credentials grant. The following scopes are relevant:
● Read Audit Logs – allows usage of the Audit Log Retrieval API to retrieve audit logs
● Manage Audit Logs – allows usage of the Audit Log Retention API to read the retention period and set a custom retention period
To call the API methods, create an OAuth client and obtain an access token. See Using Platform APIs [page
1737].
Note
The account provided as part of the URL must be the randomly generated technical name of the subaccount, not its display name.
To authenticate, provide the obtained OAuth token in the header, similar to: "Authorization: Bearer 41fce723412c6c18961f7e95d911ad37"
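For illustration, a request with the token in the Authorization header might look like the following. The host and account are placeholders; the path is patterned on the @odata.nextLink shown in the sample response later in this section.

```shell
# Illustrative only: <host> and <account> are placeholders. The account is the
# technical name of the subaccount, not its display name.
curl -H "Authorization: Bearer 41fce723412c6c18961f7e95d911ad37" \
  "https://<host>/auditlog/v1/accounts/<account>/AuditLogRecords"
```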
Example: Get audit log filtering by time, user, category and application
Note
You can only filter by time, user, category and application. You can combine multiple filters in one request.
The returned results are split into pages of the default server page size. If the number of results exceeds the default server page size, the response contains an @odata.nextLink field with the URL for retrieving the next chunk of results.
To get results in pages of size 50, first check the total number of results by executing a similar GET request:
To split the results into pages of the desired size, 50 results per page in this example, execute a similar GET request:
Continue the same request pattern until you reach the number of results returned by the count request in the first example of this section.
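As an illustration of the paging requests, using the standard OData $top and $skip query options that also appear in the sample @odata.nextLink later in this section; the host, account, and token are placeholders.

```shell
# Illustrative only: <host>, <account>, and <token> are placeholders.
# First page of 50 results:
curl -H "Authorization: Bearer <token>" \
  "https://<host>/auditlog/v1/accounts/<account>/AuditLogRecords?\$top=50&\$skip=0"
# Next page of 50 results:
curl -H "Authorization: Bearer <token>" \
  "https://<host>/auditlog/v1/accounts/<account>/AuditLogRecords?\$top=50&\$skip=50"
```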
Note
If you use client-side pagination and request a client-side page bigger than the server-side default page, the audit log retrieval API splits the requested page into several chunks. As a result, you receive a response containing an @odata.nextLink field from which the next data chunk can be retrieved (for more information, see the Results section below). Go to the next client-side page value only after you have iterated over all the chunks the server breaks the result into, that is, when the response no longer contains an @odata.nextLink field.
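The chunk-iteration rule described in the note above can be sketched as a loop that follows @odata.nextLink until it is absent. The Page type and the fetch function are illustrative stand-ins for an HTTP GET plus JSON parsing of the OData response:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative pagination loop; Page and the fetch function are stand-ins for
// an HTTP GET plus JSON parsing of the OData response.
public class AuditLogPager {
    public static class Page {
        final List<String> records;   // the "value" array of one chunk
        final String nextLink;        // @odata.nextLink, or null on the last chunk

        public Page(List<String> records, String nextLink) {
            this.records = records;
            this.nextLink = nextLink;
        }
    }

    // Follow @odata.nextLink until it is absent, collecting all records.
    public static List<String> fetchAll(String firstUrl, Function<String, Page> fetch) {
        List<String> all = new ArrayList<>();
        String url = firstUrl;
        while (url != null) {
            Page page = fetch.apply(url);
            all.addAll(page.records);
            url = page.nextLink;
        }
        return all;
    }
}
```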
Results
Executing a GET request against the audit log retrieval API results in a response similar to the one below. The structure of the AuditLogRecords entity can be checked in the OData metadata part. In the "value" part, you receive the audit log messages in the format shown in the response example. The results returned per page are limited to the server page size. To get the next result page, navigate to the URL provided in @odata.nextLink.
Sample Code
{
"@odata.context": "$metadata#AuditLogRecords",
"value": [
{
"Uuid": "3b8a8b-16247c70836-8",
"Category": "audit.data-access",
"User": "<user>",
"Tenant": "<tenant>",
"Account": "<account>",
"Application": "<application>",
"Time": "2018-03-21T09.00.40.572+0000",
"Message": "Read data access message. \"%void\"The
accessed data belongs to {\"type\":\"account\",\"role\":\"account\",\"id\":
{\"id :\":\"auditlog\"}} and read from object with name \"Auditlog
Retrieval API\" and identifier {\"type\":\"Legacy.Object\",\"id\":{\"key\":
\"Auditlog Retrieval API\"}} by user null",
"InstanceId": null,
"FormatVersion": "2.2"
},
…
{
"Uuid": "33a87d-1621e7debb2-1be",
//Second Page:
{
"@odata.context": "$metadata#AuditLogRecords",
"value": [
{
"Uuid": "2a70bd-1621a471259-3653",
"Category": "audit.configuration",
"User": "<user>",
"Tenant": "<tenant>",
"Account": "<account>",
"Application": "<application>",
"Time": "2018-03-20T15.59.14.878+0000",
"Message": "Configuration change message. Attribute
attributes update from value \"[old value]\" to value \"[new value]\". ",
"InstanceId": null,
"FormatVersion": "2.2"
},
…
{
"Uuid": "33a87d-1621e7debb2-1bf",
"Category": "audit.data-modification",
"User": "<user>",
"Tenant": "<tenant>",
"Account": "<account>",
"Application": "<application>",
"Time": "2018-03-20T15.59.14.898+0000",
"Message": "Data modification message. Attribute
attribute1 update from value \"some old value\" to value \"some new value\".
Attribute attribute3 update from value \"old Value\" to value \"new Value\".
The data update from object with name \"object1 display name\" and
identifier {\"type\":\"Legacy.Object\",\"id\":{\"key\":\"object1_ID\"}} by
user null",
"InstanceId": null,
"FormatVersion": "2.2"
},
],
"@odata.nextLink": "http://localhost:8001/auditlog/v1/
accounts/auditlog/AuditLogRecords?$top=5000&$skip=0&$skiptoken=2000"
}
The retrieved audit logs are in JSON format. The semantics of the JSON fields are as follows:
Category
The category of the audit log message. It can be one of the predefined audit log types (audit.security-events, audit.configuration, audit.data-access, or audit.data-modification) or a subcategory provided when invoking the "log" method with a "subcategory" parameter (for example, audit.data-modification.test or audit.data-access.my-sub-category).
User
The user that has executed the auditable event. The result of the user field could be:
Note
Users that are set by the component writing audit logs, and whose validity is not further verified by audit logging, are visible only in the "Message" field, in the "Custom defined attributes" part, in the field "caller_user".
Application