SAP Analytics Cloud User Help
As a user of SAP Analytics Cloud, you can take advantage of Help articles organized into categories that focus
on what you want to achieve.
Use one of the image map formats below to find the Help content you need: a swimlane diagram if you already
know what types of information you are looking for, or a decision tree if you are a new user who wants some
extra guidance in how to navigate through SAP Analytics Cloud's features.
Also check out our SAP Analytics Cloud page on http://help.sap.com for additional documentation and
learning resources.
Find out what's new in the latest version of SAP Analytics Cloud.
Note
For What's New information on quarterly releases, please refer to the What's New Viewer, which lets you
filter by category, SAP Analytics Cloud quarterly release version, and release date.
Introduce yourself to the basics of SAP Analytics Cloud. Find your way around the product and personalize
your experience the way you want it.
Understanding Licensing
Glossary
SAP Analytics Cloud is an end-to-end cloud solution that brings together business intelligence, augmented
analytics, predictive analytics, and enterprise planning in a single system.
The main benefits of SAP Analytics Cloud include ease of viewing content, connectivity to trusted data, access
to various visualization tools, augmented analytic capabilities, and financial planning features. In a single cloud
system you can analyze, ask, predict, plan, and report.
Viewing Content
Increase your work efficiency by personalizing your home screen to display recent stories, files, and analytic
applications. Organize your view across the four tabs on the home screen.
Easily find the information you need. The catalog makes it simple for content creators to publish stories and
share with the right users. In this way, even casual users (those with read permission) can quickly browse and
access the content they need.
Trusted Data
Rest assured that content is built from the right data since you can directly view the source. Securely connect
to data from various spreadsheets, databases, and other applications in the cloud or on-premise to make
accurate data-driven decisions. Avoid the hassle of data replication by setting up live data connections. Data is
"live" when a story queries the source system directly, so any changes in the source are reflected instantly. This
is convenient since in certain cases data can't be moved into the cloud for security and privacy reasons.
Experience the seamless process of authenticating access to connected data systems and SAP Analytics
Cloud by enabling Single Sign-On (SSO). You no longer need to remember multiple sign-in details, and service
providers can now easily manage user identities.
Furthermore, keep confidential information secure by customizing access to certain folders with the access
and privacy settings.
Visualization Tools
With your connected data, tell a story and visualize your findings. Data visualization aids the analysis process
by making data easy to understand. Use the powerful visualization tools provided by SAP Analytics Cloud to
design and present your data effectively to users.
Tool Description
Story Designer Explore, analyze, and visualize data to get actionable insights for decision making. Create
stories by visualizing data with charts, tables, Geo-mapping, and pictogram diagrams.
Modeler Represent the data of an organization or business segment in a model. You can use a model as
the basis for your story.
Business Content Use analytic templates created by SAP and SAP Partners to start building stories right away.
Digital Boardroom Create and present real-time interactive executive meetings for a fluid exchange of ideas.
MS Office Integration Analyze models built in SAP Analytics Cloud and datasets by connecting and importing to
Microsoft Excel.
Augmented Analytics
When you're analyzing a report you may wonder what influenced the variance in your data. SAP Analytics
Cloud uses artificial intelligence and machine learning to provide answers to your questions.
Augmented analytic capabilities are called "smart features" and empower you to:
• Ask questions conversationally and receive instant results explained in a way you can understand without
prior technical knowledge.
• Automatically discover meaningful insights and jump-start the creation of a dashboard.
• Use machine learning algorithms to reveal hidden patterns, relationships, and outliers in your data.
• Forecast and predict potential outcomes, then populate a table or graph from the results into your plan.
Smart Features
Feature Description
Search to Insight A natural language search function that provides quick answers regarding stories.
Smart Insight A panel beside your story that provides further insights into a data point or variance.
Smart Discovery Explore your data in a specific context to uncover new or unknown relationships between
columns within a dataset.
Smart Predict Formulate predictive scenarios to show possible future events or trends.
While the executive board and finance department heads are the main contributors in business planning,
an effective plan requires the input of the whole organization. The planning features aid in setting goals and
strategic plans that are accurate and aligned with your business. In addition, they allow you to analyze budgets
and forecasts and share plans across departments.
Next Steps
Now that you know more about SAP Analytics Cloud, what's next? To get started, check the following:
• If you are a new user, get to know the interface better. Get Around SAP Analytics Cloud [page 31].
• If you want to test out SAP Analytics Cloud before making a commitment, access the free trial.
Related Information
Here's some helpful info for when you log in to SAP Analytics Cloud for the first time.
After you create an SAP Analytics Cloud account, or one is created for you, you'll receive a welcome email.
Click the activation button in the email to activate your account. You'll need to enter some information about
yourself, and set your password.
(Note that the activation link in the email is valid for 7 days. If you want to activate your account after 7 days,
use the Forgot password? link, and enter the email address that the welcome email was sent to. Or if you
have a custom identity provider (IdP), ask your administrator how to reactivate your account. If you didn't
receive a welcome email, it could be because of your spam filtering. See this knowledge base article for details:
https://apps.support.sap.com/sap/support/knowledge/public/en/2527607 .)
Once you've activated your account, you might want to edit your user profile, to set things like your user avatar,
your preferred language, and the date format.
After that, go ahead and explore! For example, try opening the sample story, to get an idea of what you'll be
able to do in SAP Analytics Cloud. Also, have a look at the other information in the welcome email.
Related Information
SAP Analytics Cloud offers fast and simple ways to navigate to your tools and applications. Take a tour of the
one-click menu navigation and the quick-access product-wide features, and learn how to use common shortcuts
to increase your productivity and do your work efficiently.
After signing into SAP Analytics Cloud, you'll be able to see tools and applications that you can use to increase
your work efficiency and productivity. These tools and applications are divided into three main parts.
1. The side navigation is the leftmost section and is made up of single-click entry points, allowing you access
to features such as viewing and creating content. You can collapse or expand the navigation menu, or enter
full-screen display for viewing certain content.
The side navigation lets you easily move around and interact with different features, so you can be productive
and maintain focus on your tasks. In this section, we'll focus on three key side navigation features that will help
you use SAP Analytics Cloud efficiently.
Simple Access
The side navigation is a flattened list on the left side of the product, making it quick and easy for you to switch
between different applications.
For example, whether you want to create a new story or continue working on an existing one, click Stories to
get started.
The options that you see in your side navigation are dependent on your standard application role and
permission access. For example, an administrator or content creator who builds models and stories will have
different side navigation options from a content viewer who only consumes stories and dashboards.
You can collapse the menu to gain more screen space to work with, and expand it again when you want to see
all the menu options.
If you want to present stories, agendas, or Digital Boardroom dashboards, the side navigation will be hidden for
a full-screen experience.
Use full-screen mode to increase the amount of screen space for presentations and personal viewing.
The side navigation also lets you move between product applications quickly.
Application Pages
Page Description
Home (screen) Access your recent stories and visualizations as tiles. See
Customize Your Home Screen [page 47].
Product Applications
Application Description
Analytic Applications Design and create highly customized analytic and planning applications. See Analytic
Application Design (Analytics Designer) [page 1867].
Data Analyzer Analyze your data based on SAP BW queries, SAP HANA live
views, and SAP Analytics Cloud models. See Data Analyzer
[page 922].
Datasets Access your raw datasets, which are used for presenting
data in stories. See About Datasets and Dataset Types [page
544].
Modeler Create, maintain, and load data into models. See Learn
About Models [page 599].
Value Driver Trees Use value driver trees (VDTs) to visualize the entire value
chain of your business, instead of looking at isolated KPIs.
See Set Up Value Driver Trees for Planning [page 2282].
Predictive Scenarios Create and compare predictive models to find the one that delivers the best predictions
for your business question. See Smart Predict – Using Predictive Scenarios [page 2039].
Multi Actions Create multi actions that link together a sequence of steps such as data actions, version
management steps, and predictive steps, which all run from a single planning trigger. See Automate a Workflow
Using Multi Actions [page 2347].
Content Network Get SAP business content, third-party business content, and samples to add to your system
from within SAP Analytics Cloud. The type of content available includes technical samples and templates, as
well as end-to-end sample business scenarios for specific industries and lines of business. See Getting Business
Content and Samples from the Content Network [page 2916].
Translation View the source and the translated text. See Learn About the
Translation Process [page 2965].
Security Set up authentication for your users, and learn how to securely manage access to SAP Analytics
Cloud using licenses, roles, and teams. You can also view auditing activity for requests, data changes, and user
activities. See Security Administration [page 2829].
• Export
• Import
Connections View live data connections from on-premise and cloud data
sources to SAP Analytics Cloud. See Data Connections
[page 263].
The shell bar lets you see the breadcrumb navigation and actions relating to the active screen. It also contains
universal features such as search, notifications, collaboration, help, user profile settings, and the product
switch.
The Shell Bar
1 Back button Lets you navigate to the previous screen you were in.
2a Application or Tool Name Lets you see where you are when working on your data, content, or configurations.
2b File Name and Folder Path Lets you see file details, including the description of the file and the folder path.
Tip
Click any folder in the breadcrumb to jump directly to that file location in the Files
area.
3 File Actions Lets you use the shortcuts in the Actions menu to create something new based on the file that's
currently opened. The Close button lets you close the current file that's opened and returns you to the
appropriate start page.
Note
If you see different file actions, it's because file actions are dependent on the file you're in. Additional
actions will appear for different file types, such as marking the current file as a favorite.
4 Search Lets you search for content across the whole product.
5 Search to Insight Lets you ask questions about your data and immediately see your answers as
visualizations.
6 Notifications Lets you see your notifications such as system messages informing you of files that
have been shared with you, comments added to your story, when you've been added
to a discussion, calendar tasks and processes, and other administrative reminders.
7 Collaboration Lets you send messages to collaborate with other users and log your discussions. See
Collaborate by Having Group Discussions [page 239].
8 Feedback Lets you send feedback about SAP Analytics Cloud, including comments or other
suggestions.
9 Help Lets you find contextual help articles, links to videos, and additional learning resources.
10 Profile Lets you change your user profile preferences, customize your Home Screen appearance, request
additional roles, and sign out.
11 Product Switch Lets you quickly navigate between your SAP Analytics Cloud, SAP Datasphere, and
SAP Analytics Hub tenants.
The start pages optimize your workflow through a frictionless experience for multi-application scenarios. With
the start pages, you'll be able to seamlessly create new content, view recent files, and filter results in your
recent files table.
1 Welcome Message Lets you see a welcome message stating the purpose and value of each tool. You can
select Learn More... to open the help, ask questions, and access additional learning materials.
2 Create New Options Lets you create new content, from a new blank file, or from other starting options.
Note
Analytic Applications, Digital Boardroom, Datasets, Modeler, Data Actions, Allocations, Value Driver Trees,
and Smart Predict have their own unique create new options displayed in the same design.
3 Recent Files Lets you easily re-open previous files so you can continue with your work.
Note
You can only see the last 25 files you recently worked on.
4 Search Filter Lets you filter the results in the Recent Files table.
In this section, we'll focus on two key features that will optimize how you use SAP Analytics Cloud.
When creating content, you might want to jump between different content creation workflows. For example,
you may want to edit a model and then go to the story designer to consume that model in a new story. There
are different sets of contextual file actions accessible from the shell bar.
File actions are placed next to the file name in the Actions menu and allow you to directly jump to other
workflows based on the selected file.
Create new content based on your currently opened file, then jump directly to it and continue working.
These shortcuts allow you to open or create a new file in another application quickly without breaking flow.
When you navigate away to another part of the product, the Files area preserves the state of where you left off
by showing you the folder path you had open.
Your files and folders can be accessed directly from several places, including the new start pages and the new
breadcrumb in the shell bar.
Related Information
To be able to perform tasks on content, such as view, create, or edit files, you must be assigned a role that
provides you with permission to access and use those features. If you can’t perform certain tasks, you can
request a role.
Requesting a Role
Procedure
2. In the upper-right corner, select your profile image and then select Request Roles.
If you have not uploaded a personal image, the profile image is the default image.
3. In the Role Configuration dialog, select the type of role you want to request:
• Default Roles are roles that are automatically assigned to you if you are not assigned any other role.
Note
Default roles are set up by the administrator. For more information, see Create Roles [page 2877]
and Assign Roles to Users and Teams [page 2882].
• Self Service Roles are roles that the administrator has allowed users to request.
If you are an administrator and want more information on these options, see Create Roles [page 2877].
4. Select the roles you need.
5. (Optional) In the Comment box, you can provide additional information. For example, explain why you need
the selected roles.
6. Select Send Request.
Results
The request is added to a queue that the administrator or approver will review and then approve or reject.
If you're an administrator or an approver, see Approve Role Requests for Your Users [page 2888].
Related Information
Take advantage of curated help content if you need assistance in using SAP Analytics Cloud.
There are two main options for accessing help content for SAP Analytics Cloud, depending on what type of
assistance you require:
SAP Companion
If you are in the middle of a page or a workflow and require additional information about your current task, SAP
Companion provides concise, context-sensitive summaries of what you can see on your screen.
When you open the panel, hotspots appear on the screen to identify elements for which you can review
additional information. Brief descriptions appear in the side panel, and clicking an entry or a hotspot opens
a dialog with more detailed information, along with links to the full Help Portal articles. Related videos can be
viewed onscreen, without leaving the workflow. Additional links are also available to a catalog of interactive
tutorials and other helpful resources.
SAP Help Portal
If you are getting started with SAP Analytics Cloud, if you are looking for information that is unrelated to
your current workflow, or if you want to dive deeper into any topic, the SAP Help Portal provides a full catalog of
help articles with embedded screenshots, diagrams, image maps, animations, interactive tutorials, and videos.
You can access the portal via SAP Companion, or you can navigate directly to https://help.sap.com/docs/
SAP_ANALYTICS_CLOUD.
If you encounter a problem that you can't solve on your own, you can reach out for assistance. Use the
Contact Admin option in the Discussion panel to start a discussion with a preferred administrator in your
organization.
Note
You can invite anyone assigned the Admin, BI Admin, or System Owner default roles.
The discussion panel opens to a new discussion with your administrator. An invitation to the discussion is sent
to their notifications, where they can join the discussion and follow up on your issue.
Note
If the problem you encountered resulted in an error message, you'll notice that the error message includes
a Correlation ID (click Show More to see it):
Include this Correlation ID when you contact your administrator. It can help track down the cause and resolve
your issue more quickly.
Personalize and customize your profile settings, user preferences, and home screen.
When creating a new user in the user management area, a system administrator must enter at least a user ID,
the first name, and the last name of the user, which automatically display in the user profile.
User Profile
The user profile is reused in many parts of the application, for example, to easily identify the participants in
a discussion. When you select a user in any dialog, for example when you assign a manager to a user on the
Users management page, the picture from the user profile displays in addition to the display name and ID. In
addition, you can display the full user profile to get information about the user in the following areas:
• In the Collaboration panel, the name and picture of every participant in a discussion are displayed. If you hover
the cursor over a picture, you can display the full profile and start a one-to-one discussion with that user.
• In the Event area, the picture of the user who is assigned to an event or a task is displayed in the details of
the event and task. For example, the profile of the user who needs to review a task is displayed.
• In the Users area of user management, you can display the full profile by clicking on the user's ID.
Note
If you use SAP Cloud Identity (ID), your profile information will be pulled from your SAP Cloud ID profile. To
edit your SAP Cloud ID profile, select Edit Profile & Change Password from the Profile Settings dialog.
Profile Picture
By default, a photo placeholder is displayed. To upload your own picture from your computer, choose Upload
Profile Picture or click the placeholder from the Profile Settings dialog.
Password
If you are using authentication via an SAML Identity Provider (IdP), or connecting with the SAP Identity
Authentication service (IAS), select Edit Profile & Change Password in your user profile. The Password section
will appear as an option in your user profile. Selecting Change Password will redirect you to the password
change page.
If you use an SAML IdP, and the SAML administrator has not supplied the URL of the password change
page to SAP, you will not be able to change your password. Contact your SAML administrator.
Note
If you are using a trial account, do the following to change your password:
1. Go to https://www.sapanalytics.cloud/.
2. Select Log In.
3. Select I'm on a trial.
4. On the login page, select Forgot password?.
5. Enter your account email address and follow the prompts to reset your password.
User Preferences
Select (Edit) from the Profile Settings dialog to update the following settings:
Note
If Data Access Language is set to Default, the value used in the Language setting will be used when
displaying stories created from live data connections.
The Data Access Language setting is used by default unless an Administrator selects a different
language when the live data connection is created. For more information, see Setting the Default
Language for Live Data Connections [page 453].
Note
When you send a model for translation, the Data access language becomes the source language of the
model. For more information, see Learn About the Translation Process [page 2965].
Note
The chosen number format influences the expected input format for Story Calculated Measures and
Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that
commas (,) must be used to separate function parameters (for example, IF(Condition,
ValueIfConditionIsTrue, ValueIfConditionIsFalse)).
If the formula is typed in from scratch, function auto-completion adapts to your user preferences.
However, if you copy and paste a full formula string, auto-complete can't adapt when the separators used
don't match your user preferences.
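The separator rule above can be sketched in code. The following Python example is purely illustrative and not part of SAP Analytics Cloud; in particular, the use of semicolons as parameter separators for comma-decimal formats is an assumption borrowed from common spreadsheet conventions, not a documented SAP Analytics Cloud rule:

```python
# Hypothetical helper showing how the parameter separator follows from the
# decimal-separator convention in the chosen number format.
def split_args(formula: str, decimal_separator: str = ".") -> list[str]:
    """Split the top-level parameters of a formula like IF(a, b, c)."""
    # Period decimals -> comma separates parameters; comma decimals ->
    # semicolon is assumed here, following common spreadsheet behavior.
    arg_sep = "," if decimal_separator == "." else ";"
    inner = formula[formula.index("(") + 1 : formula.rindex(")")]
    args, current, depth = [], [], 0
    for ch in inner:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == arg_sep and depth == 0:
            # A separator at depth 0 ends the current parameter.
            args.append("".join(current).strip())
            current = []
        else:
            current.append(ch)
    args.append("".join(current).strip())
    return args

print(split_args("IF(Condition, ValueIfTrue, ValueIfFalse)"))
# ['Condition', 'ValueIfTrue', 'ValueIfFalse']
```

With a comma-decimal format, the same parser would instead split on semicolons, so `IF(A; MAX(B; C); D)` yields three parameters while the nested `MAX(B; C)` stays intact.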
• Scale Formatting: Choose to format the number scale as short (k, m, bn) or long.
• Currency Position: Choose where to display the currency symbol (or ISO code) and scale formatting.
Note
For SAP BW, this feature requires the following patch: 2700031 - InA: Complex Units / Unit Index.
• Clean up notifications: Choose a date after which you want the system to delete notifications. Otherwise,
the system keeps the most recent 500 notifications.
• Default Application: Sets the landing page displayed when you access the SAP Analytics Cloud URL.
Note
This setting will appear only when the user has an Analytics Hub license.
• Email Notification: To receive information via email about system activities, for example when a new
password is set, enable this feature.
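The short scale formatting mentioned above (k for thousand, m for million, bn for billion) can be illustrated with a small sketch. This Python function is hypothetical and only approximates the behavior; it is not SAP Analytics Cloud code:

```python
# Illustrative short-scale number formatter: picks the largest matching
# scale (bn, m, k) and shows one decimal place, as an approximation of
# the "short" scale formatting option described above.
def format_short_scale(value: float) -> str:
    for threshold, suffix in ((1e9, "bn"), (1e6, "m"), (1e3, "k")):
        if abs(value) >= threshold:
            return f"{value / threshold:.1f}{suffix}"
    return f"{value:g}"  # values under 1,000 are shown unscaled

print(format_short_scale(1_234_567))  # 1.2m
```

The long scale option would instead spell the scale out ("million", "billion"); only the suffix table would change in a sketch like this.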
When you start SAP Analytics Cloud, the Home screen displays tile-like widgets to help you get started working
on your analytics tasks.
Note
Your Home screen is private and you can't share it with other users or add it to discussions. This also
applies to the notes and tiles you pin there.
Select one of the following options from the Get Started widget:
• Explore a sample story: Explore a feature-rich story to help familiarize you with key SAP Analytics Cloud
capabilities.
• Create your first story: Quickly launch the story creation process.
• Change your profile settings: Access your profile settings including contact details, display language
preference, and a display name.
• Learn more in the help center: Access the SAP Analytics Cloud help topics, release notes, and instructional
videos.
On the Home screen, you can also display tiles for Featured Files and Recent Files. The Recent Files tile lists
the most recent files you have viewed. You can use the tile filter to view specific file types (including stories,
presentations, and analytic applications).
To add or remove a tile from the Home screen, click your Profile avatar in the upper-right corner, select Home
Screen Settings, and select or deselect the item you want to add or remove.
Administrators can make content easier for end users to access by adding the Featured Files tile to their Home
screen.
2. Select the System icon from the menu bar to the left of the file list.
3. Search and select the file(s) you want to feature, and then select the Featured Files icon from the
toolbar.
You can customize the display background and the logo displayed on the Home screen.
1. Click your Profile avatar in the upper-right corner, and select Home Screen Settings.
2. Select an option from the Background list.
3. To change the currently displayed logo, select an option from the Logo list and choose OK.
Note
If you want to brand your background with your company logo, select Custom and then Choose Logo to
access the file containing your custom logo.
Creating a Note
You can create a note to add text alongside the stories and other items pinned to your Home screen:
1. Click your Profile avatar in the upper-right corner, and select New Home Screen Note to display the New
Note dialog.
2. Enter a title for the note and the text.
3. Choose OK to create your note and pin it to the Home screen.
• Today: This tab is the default tab. It contains the tiles and notes that you've pinned.
• Catalog: This is a single access point for content published for users within SAP Analytics Cloud. Each
published item is displayed as a card. For more information, see Enable Content Discoverability with the
Analytics Catalog [page 2941].
• Favorites: This tab lists the resources you have chosen to favorite.
• Shared With Me: This tab lists all the resources shared with you by other users.
You can choose which one of the tabs is displayed by default when you click Home:
1. Click your Profile avatar in the upper-right corner, and select Home Screen Settings.
2. Select a tab from the Default Tab list.
Edit a note
Choose Preferences to edit the note text and
title.
Navigate to a story
If the tile is part of a story, choose the Go to icon to
open the source story. If you edit and save the story, the
changes are reflected in the tile.
Related Information
On the Files page, you can browse and organize your content in private or public folders or in workspaces that
you're a member of. The Files page also provides list views for quick access to files, such as favorite, featured,
or deleted files.
To access the Files page, choose Files from the side navigation.
You can display a subcategory of your SAP Analytics Cloud content by choosing a file list view on the left
side of the files list. The list views available are based on your role and workspace membership. The following
examples show the list views for different users.
Tip
Use the Privacy Filter located in the upper-left corner to view all files, only files that you own, or only files
that are shared with you but owned by other users. This filter only works with the My Files, Favorites, and
Featured Files views.
My Files All users Displays content in private and public folders that you own or that is shared with you
but owned by other users. This view also includes the following folders:
• Input Forms: Contains input tasks assigned to you. For information, see Gather
Planning Data with Input Tasks [page 2249].
• Public: Contains content that all users have saved in a public folder. Content that
you save in a public folder has read-only permission for other users by default.
• Samples: Contains sample models and stories.
Owned by Me All users Displays content from private, public, and workspace folders that you own. This view is
separate from the My Files view when the Owned by me privacy filter is applied.
Shared with Me All users Displays content from private, public, and workspace folders that is shared with you.
This view is separate from the My Files view when the Shared with me privacy filter is applied.
Favorites All users Displays content marked as your favorites. You can mark a file or folder as a favorite by
selecting the star icon Favorites.
For quick access, you can see this content from the Favorites tab on the Home screen.
To customize your Home screen, see Customize Your Home Screen [page 47].
Featured Files All users Displays content marked as featured. For quick access, you can see this content from
the Featured Files tile on the Home screen. For more information, see Customize Your Home Screen [page 47].
Note
Only administrators can remove or add content to this view. To remove content from this view,
administrators can select the files or folders and choose Featured Files. To add content to this view,
administrators use the System view.
System Administrators, Workspace Administrators Displays the administration view of the files and folders.
If you are an administrator, you can use this view to get a better understanding of the files and folders for
content in SAP Analytics Cloud. You can also use this view for managing content for workspaces, team folders,
and inactive users, and to mark content to be included in the Featured Files view.
This view is available to administrators who have Manage rights on either the Private Files or Public Files
application privilege, and to workspace administrators. For more information, see Permissions [page 2844].
Deleted Files All users Displays files and folders that you or other users have deleted. Deleted content is
permanently deleted after 30 days by default, but administrators can configure the number of days that
deleted content is stored. For information, see Configure System Settings [page 2760].
Users with Manage rights on the Deleted Files permission can also manage other users' deleted files. For more
information, see Permissions [page 2844].
If you deleted a file or folder by accident, choose Restore to restore the content to its
original location or a new location, and, optionally, change the file name or description.
Note
Team folders can only be restored to their original locations. For more information,
see Create Teams [page 2871].
Also, when restoring a file to a new location, the new location you select can only be
within the originating workspace or the My Files view.
Workspaces Users who belong to one or more workspaces Displays views for the workspaces that you are a
member of. If you are not a member of any workspaces, this list view is hidden. Each workspace view displays
files and folders that you own or that are shared with you by other users. This view also includes the Input
Forms folder.
For more information, see Share and Collaborate Within Workspaces [page 212].
You can apply any of the tools from the toolbar to multiple files or folders by selecting the checkboxes for
several lines. These tools are available based on the role that you're assigned and the permissions defined for
the role. For more information, see Create Roles [page 2877].
General Tools
Tool Description
1 Create Create a new file of the selected data type (for example, stories, analytic applications,
digital boardroom agendas and dashboards, models, and other content). If you have
the Create permission for the data type you want to create, the tool is available and you
can select it. For more information, see Permissions [page 2844].
Tip
You can also create folders when saving, copying, or moving files, using the Save,
Copy To, or Move To dialogs.
• Move To: Move files or folders to different locations. To be able to move content,
you must have Full Control access of the content or be the content owner or a
system administrator.
Team folders can’t be moved. These folders always reside directly under the
System level. For more information, see Create Teams [page 2871].
Note
If the file you are moving has dependencies, users who previously had access
to the dependencies may no longer have access. For example, File1 depends
on Model1. You move Model1 to a different location. Some users who have
access to File1 might see an error message that they don’t have access to the
dependency (Model1).
3 Edit Details Change a file or folder name and its description. Depending on the type of content you
selected, the dialog will also have other details that you can edit.
• Share: Share the files or folders with users or teams. When you share content with
other users, you can choose to give other users view, edit, full control, or custom
access. Content that you share with other users will appear on the Shared With Me
tab on their Home screen. For more information, see Share Files or Folders [page
193].
To be able to change the sharing access level for content, you must have Full
Control access of the content or be the content owner or a system administrator.
• Publish to Catalog: Publish the files or folders to the Catalog. Content that you
publish will appear on the Catalog tab of the Home screen. For more information,
see Publish and Share Content to the Catalog [page 207].
Note
If the content that you are sharing has dependencies that are from different loca-
tions, the users who have access to the shared content also need access to the
dependencies and their locations.
5 Copy To Make copies of selected files or folders in the current location, or select a
different location. If the content you are copying has dependencies, all links to the
dependencies are retained. To be able to copy content, you must have the Copy access
right for the content or be the content owner or a system administrator.
6 Delete Delete files or folders. Deleted content will appear in the Deleted Files view and remains
on the system for a default of 30 days. This tool is available if you have the Manage
permission on Deleted Files. For more information, see Permissions [page 2844].
If you deleted content by accident, you can restore it from the Deleted Files view by
selecting Restore.
7 Draft Data Select draft data content for working with new models or datasets. For more
information, see Import Data Source Reference [page 811].
8 Refresh Update the file list view with the latest changes.
9 Upload Files Upload Microsoft Office, text, CSV, and PDF files from your computer to SAP Analytics
Cloud. For the maximum file sizes, see System Sizing, Tuning, and Limits [page 2743].
This tool is available if you have the Create permission on Uploaded Files. For more
information, see Permissions [page 2844].
11 Filter Filter the file list view to display only certain types of files that you own or that are
shared with you.
If you are an administrator, you can use the administrator tools in the toolbar to control content in the Featured
Files list view or to manage content ownership.
Administrator Tools
Tool Description
1 Change Owner Change the owner of a file or folder from one user to a different user. This tool is
available if you have Execute rights on the Ownership of Content permission. For more
information, see Change Content Owners [page 2963].
2 Featured Files Add the selected content to, or remove it from, the Featured Files view. Administrators
can add files to the Featured Files view using the System list view, or remove files from
the Featured Files view using the Featured Files list view. For more information, see
Customize Your Home Screen [page 47].
The catalog is a single access point within SAP Analytics Cloud where users and teams can find content.
Content creators can publish content to the catalog, and users will be able to discover that content on the
Catalog tab of their Home screen. Because a special permission is needed to publish content to the catalog,
users will know that the catalog content has been vetted by a reliable person.
Each item published to the catalog is displayed as a card that users can open to access the underlying content.
The following content types can be published and displayed in the catalog:
• Stories
• Analytic Applications
• Digital Boardroom presentations
• Models
• Datasets
• Uploaded SAP Analytics Cloud files
• Content Links
• Insights
• Analysis Workbooks
Tip
If you use an iOS mobile device, you can access the catalog for viewing purposes only. For details, see Using
Catalog on the Mobile App [page 119].
You can quickly find content in the catalog using one of these options:
• Select the sort option to display the content in the order that best suits your interests.
• Type keywords in the Search field to find a specific card. You can search by resource name, description,
tags, user ID, and display name.
• Select predefined content filters and type filters on the catalog content. For more information, see the next
section.
When a lot of content has been published to the catalog, it can be difficult to find specific content. To see only
the content that you're interested in, use the Filter panel.
Tip
If you're a new user and open the catalog for the first time, some filters may be selected by default. To clear
all the filters, open the Filter panel and select Clear Filters.
1 Number of Selected Displays the number of selected filters at the top of the panel. This area only appears
Filters when one or more filters are selected.
2 Content Filters Displays the list of all filters that have been applied to the published content.
When no filters are selected, all published content is displayed. When you select
certain filters, only the content that has those filters applied is visible; content that
doesn't have those filters is hidden. You can clear each checkbox individually or
select Clear Filters to clear all selected filters.
Filters that have not been applied to content are hidden from the panel. For more
information on applying filters to content, see Publish and Share Content to the
Catalog [page 207].
Tip
The appearance of content and type filters is controlled by system administrators.
System administrators create and set the list order of the content filters.
Also, they can choose to hide the type filters section. For both types of filters,
system administrators can set which filters are selected by default. For more
information, see Enable Content Discoverability with the Analytics Catalog [page
2941].
3 Type Filters Displays filters for the different content types that you can create. The content type
filters appear based on whether the content type has been published to the catalog
and based on which custom filters have been selected.
4 Item Count Displays the number of items that each filter has been applied to. The item
count is based only on the items that are displayed in the catalog. Only filters that
have an item count of 1 or more appear in the panel.
5 Filter button Opens the Filter panel. If you have selected any filters, the button is updated to show
you the number of selected filters.
Consider this example. When no filters are selected, you can see all five published items in the catalog. With the
Filter panel open, you can see which filters have been applied to the content and the item count for each filter.
Under the Line of Business filter, you select Sales. Only two published content cards that have the Sales filter
applied to them appear in the catalog.
• The number of filters selected (one in this example) appears at the top of the Filter panel and on the Filter
button.
• Only the custom content filters that have been applied to the content are shown.
• The Type filters only show the content types that have been published to the catalog (Stories and Models).
You then select a second filter: under Language, select English. Only one published item appears. The Filter
panel is updated to show only the remaining filters that are applied to the content and the Type filter shows
only the types of content that the remaining custom filters have been applied to. The item counts are updated
to reflect the remaining content.
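The narrowing behavior in this example follows a simple faceted-filter rule: a card stays visible only if it carries every selected filter value, and the item counts are recomputed over the cards that remain visible. The following Python sketch is illustrative only; the item data, names, and functions are invented for this example and are not part of SAP Analytics Cloud:

```python
# Hypothetical sketch of the catalog's faceted-filter behavior.
# Data and function names are invented for illustration.

items = [
    {"name": "Sales Story",     "type": "Story", "filters": {"Line of Business": "Sales", "Language": "English"}},
    {"name": "Sales Model",     "type": "Model", "filters": {"Line of Business": "Sales", "Language": "German"}},
    {"name": "HR Story",        "type": "Story", "filters": {"Line of Business": "HR"}},
    {"name": "Finance Story",   "type": "Story", "filters": {"Line of Business": "Finance"}},
    {"name": "Marketing Model", "type": "Model", "filters": {"Line of Business": "Marketing"}},
]

def visible_items(items, selected):
    """Keep only items that carry every selected filter value."""
    return [i for i in items
            if all(i["filters"].get(k) == v for k, v in selected.items())]

def filter_counts(items):
    """Item count per filter value, based only on the visible items."""
    counts = {}
    for i in items:
        for k, v in i["filters"].items():
            counts[(k, v)] = counts.get((k, v), 0) + 1
    return counts

# No filters selected: all five published items are visible.
assert len(visible_items(items, {})) == 5

# Select Line of Business = Sales: two cards remain.
sales = visible_items(items, {"Line of Business": "Sales"})
assert len(sales) == 2

# The remaining counts shrink to reflect only the visible cards.
counts = filter_counts(sales)
assert counts[("Line of Business", "Sales")] == 2
assert counts[("Language", "English")] == 1

# Add Language = English: one card remains.
assert len(visible_items(items, {"Line of Business": "Sales", "Language": "English"})) == 1
```

The same recomputation explains why the Type filters show only Stories and Models in the example: types are just another facet counted over the visible items.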
Recommended content appears for all users at the top of the catalog, before the other items.
These recommended items are marked with a pushpin. The sorting and filtering features also apply to
recommended items.
Note
Only system administrators and users who have the Execute permission for Publish Content can
recommend content. For more information, see Enable Content Discoverability with the Analytics Catalog
[page 2941].
What you can do with a card depends on whether the publisher granted Read access to the content:
• If they don't grant Read access, you'll need to request access to the content by selecting Request on the
card. See the next section, Requesting access to content published to the catalog, for details.
• If they grant Read access, you can select Open on the card.
For cards that you have Read access to, you can either open the card's underlying content (for example, a story
or dataset) or view the card details:
• Select Open on the card to view the underlying content. If the card contains secondary links, for example
links to related websites, choose whether to open the main content link (which is shown first) or one of the
secondary links.
• Select the card to show the detailed information about the content, such as links to related websites, and
any filters that have been applied.
If you have the appropriate permissions, you can perform any of the following tasks as needed:
• If you are assigned a role with the Execute permission for Publish Content, you can select Recommend
to recommend the content.
• If you are assigned a role with the Execute permission for Publish Content and have the Share access
right for the content, select Manage to choose who can see the content.
Note
Changing the name or description of a catalog card will also change the name or description of the
underlying file.
Select the favorite option on the card if you want to add it to your favorites. If the Favorites tab is enabled, you
can select the tab to view and access the card.
Context
If the person who published the content you're interested in didn't grant Read access, you'll need to request
access to the content.
Procedure
1. Select Request on the card to request access to all content included in the card.
Or select the card to show the detailed information about the content. Here, you can choose to request
access to individual files or links.
2. Type a comment to explain why you'd like access to this content.
3. Select Send Request.
Results
A request for access for the selected file is sent to the system administrators. If the file has any file
dependencies, requests for access for those files are automatically generated and sent to the system
administrators. If an administrator approves your requests, you'll have access to the card content and all
dependencies. You'll also receive notifications in the shell bar. If you're an administrator and want more
information on approving file requests, see Enable Content Discoverability with the Analytics Catalog [page
2941].
Note
Access requests are not automatically generated for file dependencies that are located in workspaces you
don't have access to, and no automated notification is provided for them. For more information, see Enable
Content Discoverability with the Analytics Catalog [page 2941].
You can request SAP to provision a subscription-based SAP Analytics Cloud tenant or create your own
consumption-based tenant. For more information on the different models for enterprise accounts, see
Commercial Models.
Subscription-based Tenant
Characteristics Specifications
Provisioning • For information about region availability, see the SAP Discovery Center.
• SAP provisions your tenant with the licenses you've purchased based on your subscription
contract.
• The system owner of SAP Analytics Cloud is notified via email when the tenant is provisioned.
Licenses The number of licenses provisioned for your tenant is dependent on the contract.
Note
The SAP Analytics Cloud for testing tenant will show the named license and have the
same capabilities as a Planning Professional license, but the functionality will only be for
non-productive use.
License add-ons:
• Digital Boardroom
For more information about the features available for each license type, see Features by License
Type for Analytic Models [page 65] and Features by License Type for Planning Models [page
68].
Consumption-based Tenant
Provisioning • For information about region availability, see the SAP Discovery Center.
• You create your SAP Analytics Cloud instance in the BTP cockpit. For more information, see
Create Your SAP Analytics Cloud Instance in SAP BTP Cockpit [page 2754].
The system owner of the SAP Analytics Cloud service instance (tenant) in SAP BTP cockpit is
notified via email when the tenant is provisioned.
• 20 Test licenses
Or
For more information about the features available for each license type, see Features by License
Type for Analytic Models [page 65] and Features by License Type for Planning Models [page
68].
• No time limitation.
• You're permitted only one free tenant under the global account.
• No limitation.
Note
The consumption-based model comes in two flavors: SAP BTPEA and Pay-As-You-Go for SAP BTP. You can
choose between the two options depending on your business needs and level of financial commitment. For
more information, see What Is the Consumption-Based Commercial Model?
Three license types are available for SAP Analytics Cloud. The following tables describe the features available
for each license type when you use an analytic model. Analytic models do not contain planning information. For
a list of planning model features, see Features by License Type for Planning Models [page 68].
Note
An SAP Analytics Cloud for business intelligence license can be purchased as a User License.
• A User License ensures that the assigned user can always access the application, regardless of how
many other users are logged on. We recommend that administrators be given user licenses.
For more information, see Assign Roles to Users and Teams [page 2882].
Note
When some licenses are removed from a customer account, content created from those licenses will be
restricted for other end users, even those with administrator roles:
• For digital boardroom, users, including content creators, will not be able to see the content. This
can cause confusion when trying to delete an apparently empty folder. The message “The selected
resources do not exist, or you do not have permission to delete them” is displayed.
If this happens, the account owner can still see, delete, and move content files. We suggest that a
tenant owner performs these tasks.
Note
An SAP Analytics Cloud Test license can be purchased via a Business Technology Platform Enterprise
Agreement (BTP EA) in the SAP Business Technology Platform (BTP) cockpit. The BTP EA test plan
provides a Test license that has the same capabilities as a Planning Professional license, but only for
non-productive use.
An SAP Analytics Cloud for testing subscription plan provides a Planning Professional license, but only for
non-productive use.
The following list gives you quick access to the table you need:
Data Modeling
Models represent the business data of an organization, and are the basis for all data output in SAP Analytics
Cloud. For more information, see Learn About Models [page 599].
Auto-Generate Models X X X
Hierarchies X X X
As a system administrator, you create users and assign them to predefined or newly created roles. You can
also carry out maintenance tasks related to users and roles as well as monitor activities and data. For more
information, see Security Administration [page 2829].
Administer Users X X
Administer Roles X X
Administer Teams X X
Audit X X
Content Deployment X X
Story Experience
A story is a presentation-style document that uses charts, visualizations, text, images, and pictograms to
describe data. For more information, see Data Visualization (Stories) [page 951].
Create Stories X X X
Create Charts X X X
Hierarchies X X X
Linked Analysis X X X
Calculated Dimensions X X X
Explorer X X X
Thresholds X X X
Analytics Designer
Use the analytics designer to create or run analytic applications. An analytic application is a document that
can contain visualizations such as tables, charts, and filters to allow navigation and analysis of data. You can
personalize the visualizations and interaction behaviors of the UI elements according to user requirements. For
more information, see Analytic Application Design (Analytics Designer) [page 1867].
Predictive
Smart Discovery X X X
R-visualizations X X X
Smart Predict X X X
Smart Insight X X X
Note
• Smart Predict is currently not available in all regions or all sales channels. For more information
2661746 .
• It is currently not possible to create predictive models with analytic models as a source.
The SAP Digital Boardroom is a place to design a real-time, interactive boardroom presentation by combining
your stories with agenda items. For more information, see Digital Boardroom Presentation [page 1826].
Note
The SAP Digital Boardroom is an add-on to user licenses and requires at least one user license of type BI
User, Predictive, Planning Standard, or Planning Professional.
Features | SAP Analytics Cloud for business intelligence, predictive edition | SAP Analytics Cloud for
planning, predictive standard edition | SAP Analytics Cloud for planning, predictive professional edition |
Digital Boardroom Add-On
Two license types allow you to use planning tools: SAP Analytics Cloud for planning, standard edition and SAP
Analytics Cloud for planning, professional edition. For the analytics features that planning users can perform,
see Features by License Type for Analytic Models [page 65].
When planning licenses are removed from a customer account, planning content created from those licenses
will be restricted for other end users, even those with administrator roles:
• BI users will not be able to see the content. This can cause confusion when trying to delete an apparently
empty folder. The message “The selected resources do not exist, or you do not have permission to delete
them” is displayed.
• For planning models, BI users will not be able to move or delete the content. The delete button will be
disabled.
If this happens, the account owner can still see, delete, and move content files. We suggest that a tenant
owner performs these tasks.
The following table describes the features available for each license type when you use a planning model.
Planning features include a set of tools to work with the data in planning models, such as creating and editing
versions of the data, performing allocations, and working with value driver trees. For more information, see
Learn About Planning Model Data [page 2165].
Calendar ** X X
Currency Conversion†† * X X
SAC Connection to BPC Standard Model (Acquired Data with Write Back)†††: Business intelligence users can
only simulate in SAC, not publish back to SAC (and therefore BPC)****. Planning standard users can take
Standard user actions against the SAC Planning model****. Planning professional users can take Professional
actions against the SAC Planning Model; the SAC UI does not allow BPC Professional features.
SAC Connection to BPC Embedded Model (Live Connection): Business intelligence users can only simulate in
SAC, not publish back to BPC****. For planning standard users, the SAC UI will only allow data entry in BPC
and running a planning sequence/function****. Planning professional users can take Professional actions
against the SAC Planning Model; the SAC UI does not allow BPC Professional features.
* Users with this license type cannot create or edit a Currency Conversion table, but they can view such tables
and their converted values in reports.
** Users with this license type can work with calendar tasks and processes. For planning related content,
however, the planning licenses are required.
*** Users with any license type can create value driver trees in a story and simulate scenarios with them. See
Set Up Value Driver Trees for Planning [page 2282].
**** Note that at least one user needs to have the SAP Analytics Cloud for planning, professional edition
license to establish the connection to BPC.
† A private tenant option, the SAP Analytics Cloud, predictive enterprise edition, private option, is available to
subscribe to. The users are provisioned as Planning Professional users in a private tenant, and the quantity
must be 10,000 users or higher. All tiers include 2TB of memory.
†† Currency conversion functionality has been enhanced in current models, compared to classic account
models.
††† For models with measures, only exporting data to BPC standard is supported, not importing.
3.5 Glossary
Tip
For quick reference, you can also download the glossary as a PDF. Choose Download PDF, then Create
Custom PDF at the top, select the checkbox next to Glossary in the TOC, and choose Create PDF.
account A special dimension that can include the definition of accounts like operating income and employee
expenses, and drivers like units sold and headcount. Data Models, Planning
account model A type of import data model with a mandatory account dimension and a single default
measure. Data Models
advanced mode A mode in optimized story experience for designing highly Stories
customized stories, with additions such as CSS, scripting,
popup, and more widgets and settings.
agenda item An item in a Digital Boardroom agenda containing one or Digital Boardroom
aggregation A grouping of a set of numbers into one number that represents the whole set. Data Models,
Planning
allocation A structured planning operation that takes data from source Planning
allocation process A type of allocation that assigns values from source dimensions. Planning
analytic application A document that can contain visualizations such as tables, charts, and filters to allow
navigation and analysis of data. Analytics Designer
Analytics Designer Capability in SAP Analytics Cloud that lets you create applications. Analytics Designer
analytic model A fully-featured model that is geared towards data analysis and helps you look for trends and
anomalies. Data Models
application design The entire process of creating an analytic application. Analytics Designer
automatic data action task A calendar task that lets you schedule the automatic execution of a data action at
a specific time or start condition. Calendar, Planning
automatic multi action task A calendar task that lets you schedule the automatic execution of a multi action
at a specific time or start condition. Calendar, Planning
blended table A table created with data from more than one model. Stories
booked data Booked data refers to table cells that contain a value, even if the value is zero or an empty (single
space) string. Data Models, Stories
calendar event Collective term used for tasks, processes, input tasks, and publications, particularly when
describing activities that apply to more than one task or process. Calendar, Stories
calendar highlights Tile on the SAP Analytics Cloud home screen that displays Calendar
canvas A flexible space where you can explore and present your Stories
data.
classic data analyzer A predefined application for multidimensional, pivot styled data analysis. Data Analyzer
composite task A calendar task with one or more assignees, and one or more reviewers. Calendar
copy step A step in a data action that allows you to copy data from one set of members to another. Data
Models, Stories
cross calculation A calculation that is applied across all account members or measures (for example, a
deviation from last year's actuals for all the measures) instead of just being applied to account
members/measures explicitly selected in the calculation definition. Data Models, Planning
cross-model copy step A type of copy step in a data action that allows you to copy data from one planning
model to another. Data Models, Stories, Planning
cross-model parameter A type of parameter that you can reuse across models and steps within a multi
action. Data Models, Stories
currency conversion A function to convert financial values from one currency to another. Data Models
currency table In a planning model that has currency conversion enabled, Data Models
custom widget A widget created by developers, which complements the predefined set of widgets delivered
with SAP Analytics Cloud. It mainly consists of a JSON file and a JavaScript file. Analytics Designer, Stories
data access control A security feature that allows you to restrict read and write Planning
data action A tool that allows users to create a sequence of copy-paste Planning
data action tracing A debugging tool for data actions that is used to execute a Planning
data analyzer A predefined application for multidimensional, pivot styled data analysis. Data Analyzer
data changes A feature that logs information about each change to the Data Models
data change insights A feature in analytics designer and optimized story experience to auto-discover
significant data changes of specific charts on a regular basis. Analytics Designer, Stories
data locking A feature that allows you to lock, restrict, and unlock specific Planning
areas or specific data within a model. It also allows you to
delegate ownership of data locks to other users.
data locking task A calendar task that lets you define which data slice of your planning model shall be locked
or unlocked, and when. Calendar, Planning
You can change both the value of a data point and various
customizing attributes that determine how it appears in a
chart such as its color.
data source A combination of the source of data, for example relational Connections
dataset A set of data used for reporting, modeling, predictive analyt- Smart Predict
ics, or planning. The data is usually in tabular format, where Connections
columns represent variables, and rows contain values that
are observations or instances of each variable.
date A dimension that contains the time periods for which you Stories
dimension A data object that represents categorical data (for example, product category, date, or location).
Data Models, Stories
direct connection A type of live data connection that does not use a reverse Connections
proxy.
drill down The act of navigating into more detailed information from a Stories
general category.
drill up The act of navigating from a detailed to a more general level Stories
of data.
exception aggregation A grouping of a set of numbers into one number that represents the whole set. Data
Models, Stories
feature layer A custom geo map layer defined by an external third party (Esri). Stories
file repository A central location where your objects, such as models and Planning
stories, can be saved and accessed.
flow layer A map layer that illustrates relationships between points; for Stories
fluid data entry A data entry mode that processes all new data changes in a table at the same time if they are
made in a fast sequence. Planning, Stories
general task A calendar task with one or more assignees, but no review- Calendar
ers.
hierarchy Two types of hierarchy are available: level-based hierarchies, and parent-child hierarchies. Data
Models, Stories
import data connection A connection to a data source or application, which allows Connections
InA Short for “Information Access”. It is usually used when referring to live data connections or features that
use live data connections. Connections, Stories
input form A summary sheet created by a user in order to collect data from colleagues, characterized by cells
for which data input is enabled. Planning, Stories
input task A task created for a user who is requested to provide or approve information entered in an input
form of a story. Planning, Stories, Calendar
insight An insight represents the displayed data source in data analyzer. Data Analyzer
inverse formula Inverse formulas specify how a formula will be reversed if Data Models
data is entered on the formula in a story. Planning
KPI scope The range or extent of values to check for a KPI visualization Stories
or threshold.
leaf member A dimension member that does not have any child members. Data Models, Stories, Planning
For example, for a Date dimension that is maintained with
monthly granularity, January 2018 is a leaf member. In a
planning model, all data is stored in leaf members.
linked analysis A feature that allows users to drill through hierarchical data Stories
linked widgets diagram An interactive graphic that lets application designers or story designers in optimized
story experience view and modify the relationships among widgets in an analytic application or optimized
story. Analytics Designer, Stories
live data connection A connection to a data source or application, where the data Connections
live data model A model that is based on a live data connection. Data Models
Connections
lock The act of freezing individual values – for example, during a disaggregation – or an entire data region; a
combination of entity, time, and version, for example. Planning, Stories
mapping A function that matches or relates one set of data to another Planning
set in a pre-defined way.
mass data entry A data entry mode that processes all new data changes in a table when a user confirms
they've finished entering values. Planning, Stories
master data Data in dimension tables that is typically shared across the Data Models
measure A data object that represents numerical and quantitative data that you want to compare, aggregate,
or use for calculations. Data Models, Planning, Stories
model A model can store and display data that has been acquired by an import connection, or it can show
data from a live connection to a source system.
new model Also known as a model with measures, the new model type exposes measures as single entities
and lets you add and configure multiple measures with aggregation and units to fit your data. Data Models,
Stories
notification
• An alert, warning, or error message raised by the application. Calendar
optimized story experience Improved experience for designing and viewing stories, with features such as
scripting capabilities and CSS, usability improvements and better performance. Stories
organization A type of dimension that represents the organizational structure that applies to a model's
accounts or measures. Data Models
outline A panel in analytics designer and optimized story experience to display and manage the structure of
the analytic application or optimized story, including its widgets and scripting elements. Analytics Designer,
Stories
owner A person who has the same rights as the creator of a calendar event. Calendar
palette A set of colors to be used for formatting pages, data in visualizations. Stories
parent process Superordinating event that contains child events such as tasks or processes. The child
processes can also act as parent processes. In this way, individually tailored hierarchies can be created. Each
parent process provides a high-level overview of how much work in this area has been completed and how
much remains. Planning, Calendar
planning area A subset of model data that corresponds to a version’s data Planning
You can filter the planning area when editing a public version,
or copying to a private version. To optimize resource con-
sumption and avoid processing unnecessary data, the model
owner can define a recommended planning area.
planning model A planning model is always based on import data connections. Data Models
product switch A central menu in the shell bar that offers quick access to User Interface
other product tenants in your landscape, and common serv-
ices and tools.
rate version In a planning model that uses currency conversion, a rate Planning
recommended planning A subset of data used to limit the size of a planning model Planning
area
and enhance planning model performance.
record Records are the granular values that make up model data. Data Models
responsive page A story page that lets you control the order and arrangement Stories
of tiles, and preview the page on different screen sizes.
restricted measure A measure that is restricted so that it excludes data from Stories
reverted lock state Access status that is applied to a defined set of data when a Planning
reviewer Someone who reviews the task that was done by the as- Calendar
signee. Stories
review task A calendar task with one or more assigned reviewers to Calendar
approve it.
script Series of statements which are created by application de- Analytics Designer
signers or story developers in optimized story experience in Stories
the script editor.
script editor A tool within analytics designer and optimized story experi- Analytics Designer
ence to specify the actions that should take place when an Stories
event is triggered by a viewer.
search to insight A tool used to query data for quick answers to questions and Stories
incorporate these insights into a story.
shell bar The uppermost, horizontal section of the user interface that User Interface
displays breadcrumb navigation and actions relating to the
active screen. The shell bar also contains universal features
including search, notifications, discussions, help, user profile
settings, and the product switch.
single data entry A data entry mode that processes each new data change in a Planning
table immediately. Stories
smart discovery The result generated by mapping your data to a statistical Stories
source member A member from which the data is copied to another mem- Planning
ber, called a target member.
source model A model from which data is copied to another model, called Planning
a target model.
story filter A filter for all charts in a story that are based on the same Stories
model.
story page In stories, a blank canvas, a responsive page, or a grid to help Stories
you explore and present data.
story preferences In stories, a set of default formatting options, such as the Stories
page size and background color, the text style and back-
ground color of tiles, or the color palettes of charts.
target lock state Access status that is applied to a defined set of data when a Planning
target member A member that receives data copied from a source member. Planning
target model A model to which data is copied from a source model. Planning
task A calendar event that helps you organize your Calendar
workflows.
• General tasks
• Review tasks
• Composite tasks
• Data locking tasks
• Automatic data action tasks
• Automatic multi action tasks
• Input tasks
threshold A value or value range that provides visual cues to let you Stories
know what areas are doing well and what areas may need
improvements.
topic A container for one or more story pages in a Digital Board- Digital Boardroom
room presentation.
tracepoint A manually specified location used in the data action tracing Planning
tuple filter A tuple filter is a complex filter that is a combination of other Stories
filters. Planning
unassigned member An unassigned member is automatically created for all ge- Data Models
neric and organization dimensions in a planning model. Planning
unbooked cell Unbooked cells (unbooked data) are displayed in a table for Data Models
sets of dimension members that do not yet have a value. Stories
validation rule Rule in a planning model that defines which range of data Data Models
is valid for data entry and planning operations by specifying Planning
the allowed member combinations across dimensions for
this planning model.
value driver tree A directed graph made up of nodes showing data from differ- Planning
ent accounts or cross calculations, with optional filters for
each node.
For example, you can use a value driver tree to simulate how
changes to driver accounts such as head count and average
contract value will affect top line metrics such as gross reve-
nue.
variance The difference between two data values, such as time peri- Planning
ods or categories. Stories
version management A feature that allows you to find and perform actions such as Planning
editing, reverting, and deleting on all versions that you have
access to.
Guidelines for viewing SAP Analytics Cloud stories or boardroom presentations, as well as information on
creating bookmarks for story views or analytic applications.
• Change the Filter Bar Orientation (Optimized Story Experience) [page 175]
Learn how to enable your stories for either the iOS or Android mobile apps.
• You can use Responsive or Canvas pages to consume stories in the Optimized Story Experience on your
mobile app. However, Responsive pages are recommended for a better experience.
• Stories created using the Classic Design Experience must use Responsive pages (not Canvas
or Grid). When creating your story, you can preview how it will look on a device by choosing (Device
Preview) from the story toolbar. Select either Android or iOS for your Device type and specify a Size from
the list provided.
Note
The size previews for Phone and Tablet correspond to the following Android device specifications:
Note
The size previews for Phone and Tablet correspond to the following iOS devices:
• Small Phone: iPhone 12 and up, iPhone 11 Pro, iPhone XS, iPhone X, iPhone 8
• Large Phone: iPhone 14 Plus and up, iPhone 11 Pro Max and up, iPhone XS Max, iPhone XR, iPhone 8
Plus
• Small Tablet: iPad Pro (9.7-inch), iPad (5th generation and up), iPad Mini (5th generation and up),
iPad Air (3rd generation and up)
• Large Tablet: iPad Pro (10.5-inch, 11-inch, 12.9-inch)
The mobile apps can only be viewed in portrait mode for phones and landscape mode for tablets. The apps
do not rotate orientation.
You can set different font sizes for the resolutions of different device types. This allows story designers to target
how text is displayed for a single story when viewed across multiple mobile devices. First select a device and
size, such as an iOS Small Phone. Then with a tile in the story selected, open the Designer panel and select
Below are guidelines to best design stories containing geo maps for an optimal mobile experience.
The optimized story experience enables content within an SAP Analytics Cloud story to load faster, but not
all features and story options will be available when you enable the Optimized Design Experience or Optimized
View Mode features. These usage restrictions will be removed or relaxed over time. For the
latest information on feature support, see Optimized Story Experience Restrictions [page 1160].
For stories that have been created using the Optimized Design Experience you do not need to enable
optimization settings for either the iOS or Android mobile apps. Supported features will be automatically
available to mobile app users. Simply select either Android or iOS for your Device type and specify a Size from
the list provided.
For stories created in either the classic (non-optimized) mode or the Optimized View Mode, two
optimization settings are provided:
• Optimized iOS: this mode enables stories to load faster on iOS mobile devices by leveraging the embedded
browser. This setting is not relevant for stories viewed using the Android app.
• View Time Optimization: this mode enables mobile users to benefit from improved performance of
dashboards (in certain scenarios) and leverage usability improvements in stories on iOS and Android
mobile devices.
Note
Not all features and story options will be available when you enable these modes.
1. From the ( ) Main Menu, go to Browse, then Files, and then select the story.
2. Next to Controls, select Edit.
Note
4. Under View Time Optimization, turn on the Enable Optimized Mode setting.
Note
Stories must be enabled for Optimized iOS for View Time Optimization mode to work on your iOS
device. Classic stories enabled for mobile now have Optimized iOS set to ON by default. Also, when
Enable Optimized Mode is turned ON, Optimized iOS will automatically be set to ON. Story designers
can opt to turn off either setting.
For a listing of supported features on the iOS app see iOS Mobile App Feature Compatibility [page 131].
For a listing of supported features on the Android app see Android Mobile App Feature Compatibility [page
148].
Learn how to get and set up the SAP Analytics Cloud mobile app.
Topic Sections
Device Requirements
iOS Android
Mobile operating systems Requires iOS/iPadOS 15.0 or later. Requires Android 9.0 or later.
Devices Compatible with iPhone and iPad only. Requires an Android mobile device
with at least 4 GB of RAM. Supported
devices include Samsung Galaxy
Note
The SAP Analytics Cloud iOS mobile app will cache your data (all sources) for 7 days. This cache is
encrypted on the device. The cached data is only accessible via the app. You can always clear the story-related
cached data using the Clear Storage (Clear Cache on the Android app) setting. To access the setting,
go to Profile Settings.
1. Download and install the mobile app from the App Store.
2. Tap Analytics to launch the app.
Note
To view and consent to our Chinese Cyber Security Law privacy statement, your device needs to be on
Chinese Standard Time, and the device language must be set to Chinese.
3. The first time you sign in, set an application password in the Set App Password screen to protect your data.
To reset a forgotten application password, on the welcome screen tap Forgot Password. To change your
Note
As an administrator, you can disable the application password prompt for all users. From the ( )
Main Menu, select System Administration and select the System Configuration tab.
Toggle Disable mobile app password to ON. Once a user sets up the mobile app on their device
(application password required for first-time setup), they will no longer be prompted to re-enter their
application password each time the app returns to the foreground and becomes active.
4. Enter the server URL on the Enter Server URL screen and tap Connect.
Use the base URL from the browser of your SAP Analytics Cloud web application:
https://<customerdomain>.<region>.sapanalytics.cloud
5. On the Authorize Client screen, tap Authorize to enable access to your SAP Analytics Cloud data.
Note
If you receive the error: Logon failed: Cannot connect to SAP Analytics Cloud. Error code: 500, check
with your administrator to see if your user account is associated with more than one tenant. If so,
specify the tenant name in the server URL with /tenants/<tenantname>/ as follows:
https://<customerdomain>.<region>.sapanalytics.cloud/tenants/<tenantname>/
Note
Embedded SAP Analytics Cloud tenants are not supported for the mobile app.
To find your tenant name, log on to SAP Analytics Cloud from a web browser and note the name
displayed.
Tip
You can use Touch ID or Face ID on your iOS device to log on to the SAP Analytics Cloud app. Let's enable
Touch ID as an example.
Once you have set up Touch ID for your device, here's how to enable it with the SAP Analytics Cloud app:
1. Go to your user Profile and tap the Enable Touch ID slider. Tap OK to accept.
1. Download and install the mobile app from the Google Play store.
Note
To view and consent to our Chinese Cyber Security Law privacy statement, your device needs to be on
Chinese Standard Time, and the device language must be set to Chinese.
4. Enter the server URL in the Enter Server URL screen and tap on the device keyboard.
Note
Embedded SAP Analytics Cloud tenants are not supported for the mobile app.
Use the base URL from the browser of your SAP Analytics Cloud web application:
https://<customerdomain>.<region>.sapanalytics.cloud
Chrome is the default supported browser for authentication. Use the displayed Chrome browser for the
following authentication step. For authentication to work, OAuth needs to be configured on the server
running the SAP Analytics Cloud web application.
5. Enter your username and password and tap Log On. Use the same authentication credentials you use to
log in to the SAP Analytics Cloud web application.
6. On the Authorize Client screen, tap Authorize to enable access to your SAP Analytics Cloud data.
As an administrator, you can disable the application password prompt for all users in the SAP Analytics Cloud
web application. From the ( ) Main Menu, select
System Administration and select the System Configuration tab. Toggle Disable mobile app
password to ON. This will enable the Skip option in the Set App Password screen, thereby giving the user a
choice of setting or not setting the application password. If a user opts to Skip in the Set App Password screen,
they can later set the password by going to User Profile Settings Enable Application Password .
• If you have forgotten the application password, on the welcome screen tap Forgot Password. This option
resets the application.
• The application will reset after ten incorrect password attempts when launching the app.
• To change your application password, tap User Profile Settings App Password and tap Change
Application Password.
The mobile app can only connect to URLs that are trusted connections. This includes both intermediary URLs
used to log on to the SAP Analytics Cloud system and live data source connections consumed by stories viewed
in the app. For example, if you are using a reverse proxy server to relay requests from the mobile app to SAP
Analytics Cloud, and the connection does not satisfy certain requirements, errors may be displayed to the user.
Use the information below to secure your connections for your particular app:
iOS
• A trusted SSL certificate must be installed on the server (such as the reverse proxy server). Self-signed or
untrusted SSL certificates are not supported.
This means that the server has secure server certificates from a Certificate Authority (CA), and the CA is
trusted by iOS (for a list see: https://support.apple.com/en-ca/HT204132 ). Possible error: An SSL error
has occurred and a secure connection to the server cannot be made.
• The connection must satisfy Apple's App Transport Security (ATS)
requirements. For complete requirements, see Requirements for Connecting
Using ATS in https://developer.apple.com/library/content/documentation/General/Reference/
InfoPlistKeyReference/Articles/CocoaKeys.html#//apple_ref/doc/uid/TP40009251-SW57 .
Possible error message:
The resource could not be loaded because the App Transport Security policy
requires the use of a secure connection.
To test your secure connection, you may want to use a public SSL testing service, for example: https://
www.ssllabs.com/ssltest/. Check the handshake simulation results for Apple ATS. More troubleshooting
tips are available at: Troubleshooting Connection Issues [page 160].
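For a quick local check of the same verification policy, Python's standard ssl module is one option; its default context enforces the two requirements listed above (a CA-trusted certificate chain and a matching hostname). This is a sketch for illustration, not part of the product.

```python
import ssl

# The default context rejects self-signed or untrusted certificates and
# verifies that the certificate matches the server name, mirroring the
# mobile app requirements described above.
context = ssl.create_default_context()
print(context.verify_mode)     # CERT_REQUIRED: untrusted chains are rejected
print(context.check_hostname)  # True: the certificate must match the host

# To probe a real server (network access required), you could wrap a socket:
#   import socket
#   host = "your-reverse-proxy.example.com"  # hypothetical host name
#   with socket.create_connection((host, 443), timeout=10) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           print(tls.version())
# A handshake failure here reproduces the SSL errors described above.
```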
Note
For the latest ATS compliance requirements, refer to communication from your certificate provider.
Android
A trusted SSL certificate must be installed on the server (such as the reverse proxy server). This means that
the server has secure server certificates from a Certificate Authority (CA), and the CA is trusted by Android
(the trusted root certificates are installed on the device under Settings > Biometrics and Security > Other
Security Settings > View security certificates). Possible error message:
An SSL Error has occurred. This may be due to the detection of a self-signed
certificate on the server. Please only use certificates generated by a CA.
Self-signed or untrusted SSL certificates are not supported.
Note
Related Information
Learn how to view and interact with your stories in the SAP Analytics Cloud iOS and Android mobile apps.
Navigate to the story you want to view using either the Home or Files tabs.
• The Home tab contains your favorite and featured content, as well as recently viewed items.
• The Files tab displays all the folders and files containing stories and presentations enabled for mobile.
Note
You can also use the search tool provided on the Home and Files tabs to find your story.
Tap Stories to choose a story you want to view. You can view stories you own, stories shared with you by
others, and public stories all from your mobile device.
For stories with multiple pages, swipe left to go to the next page and swipe right to go to the previous page.
You can also change pages by tapping the page title at the top of the screen. This brings up a dialog where
you can jump directly to any page.
Story filters, page filters, model and dimension filters, and calculations that exist in the story can be interacted
with in the Input Controls panel of the mobile app.
1. While viewing a story, tap (found in the upper right part of your screen).
2. In the Input Controls panel, you'll see a list of filters (calculations appear under Others). Select a filter.
3. In the panel for the selected filter, you can see a list of all possible filter values. Select one or multiple values
by tapping the check box next to each value.
4. (Optional) If the dimension contains a hierarchy, you can select the dimension to see the next level of
dimension values and select one or more of the child values.
5. Tap Apply and the story will refresh with your changes.
Additional options, including chart filters, are available from a Quick Actions menu after entering full screen
mode. Tap in the upper right corner of a chart to expand it full screen. Initially the menu shows several
chart-wide options, but more options become available after selecting a value in the chart. Select a value in a
chart by tapping it, such as a bar in a bar chart, to see the full list of options.
Note
Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the
table to expand it full screen and show the full table details.
Tap to enable multi-selection of chart values. For example, enabling this option allows you to tap and
select more than one bar in a bar chart before applying a filter. When this option is enabled, you can
also tap and drag to lasso multiple values in a given area.
When disabled, you can only select a single value at a time. Tapping a new value changes the selection.
Sort the measures and dimensions in charts in ascending or descending order. For example, sort
dimension values from A-Z or Z-A, or measure values from lowest to highest or highest to lowest.
Filter data by the ranking of the highest or lowest values. For example, show only the Top 5 or Bottom 5
measure values.
Custom Top N options saved to the original story are applied when viewing in the mobile app.
Filter by (include) the selected values. Tap first to allow you to tap and select multiple values.
To remove a chart filter, select in the upper-right part of the chart to open the Chart Filters panel.
Swipe left over a filter and tap Delete.
Note
If using the Android App, you cannot remove the chart filter. You can only reset the chart filter.
Filter out (exclude) the selected values. Tap first to allow you to tap and select multiple values.
To remove a chart filter, select in the upper-right part of the chart to open the Chart Filters panel.
Swipe left over a filter and tap Delete.
Note
If using the Android App, you cannot remove the chart filter. You can only reset the chart filter.
For example, assume you have a hierarchy for a Location dimension that consists of Country > State
> City. If your chart is displaying at the State level, tap a value to select it, and then tap
Location to drill down one level. The chart refreshes and now displays values at the City level.
For example, assume you have a hierarchy for a Location dimension that consists of Country > State
> City. If your chart is displaying at the City level, tap a value to select it, and then tap
Location to drill up one level. The chart refreshes and now displays values at the State level.
For example, let's say you have two values in a bar chart much larger than the rest. These two
values determine the scale of the chart, and make it harder to read smaller values. Tap to allow
multi-selection, tap the two long bars in the bar chart, and then tap to apply the axis break and
shorten their represented length. The scale of the chart is changed, and the bars of shorter values
appear longer and are easier to read.
(Beta) Add a datapoint to your Watchlist. This icon is displayed when you select a datapoint in a chart.
Note
Currently, only selected Beta customers can use this feature. If you are interested in the Beta
program, please contact your SAP account manager. Please note that the following conditions
apply during the Beta period: Your company acknowledges that the software is a preliminary
version and not subject to any productive use license agreement or any other agreement with
SAP. SAP has no obligation to include or remove any functionality from the software in any future
version or in any SAP standard product.
Open a chart link to other stories, story pages, external web pages, and links that launch other apps.
Aside from the Quick Action menu, additional gestures can be used directly on chart values:
Option Description
Change chart value details. Tap and hold on a single value, then drag across other val-
ues.
Show a changing difference marker. Tap and hold to select a single value, then select a second
value.
Related Information
Learn how to view and interact with your boardroom presentations in the iOS SAP Analytics Cloud mobile app.
Tap to choose a Digital Boardroom presentation to view. Navigate to the presentation you want to view
using either the Home or Files tabs.
• The Home tab contains your favorite and featured content, as well as recently viewed items.
• The Files tab displays all the folders and files containing stories and presentations enabled for mobile.
Note
You can also use the search tool provided on the Home and Files tabs to find your presentation.
A presentation can be either an agenda or a dashboard, indicated by the Digital Boardroom Type column of the
list, each with similar navigation options in the mobile app. Select a presentation to open it.
Note
The Table of Contents organizes content in your presentation in a tree view, and allows you to search for topics
by title. Your most recently viewed pages are also displayed so you can jump to them easily. Agendas show all
Selecting a topic opens the first story page for that topic. In the story, swipe left to go to the next page or swipe
right to go to the previous page. After the last page, swipe left on the page to continue to the next topic. You
can also change pages by tapping the page title at the top of the screen. Doing this brings up a dialog where
you can jump directly to any page.
At any time while presenting a story page, tap the Table of Contents icon to go back to the overall navigation.
Note
You can hide an individual topic so it doesn't appear on mobile devices. While creating your Digital
Boardroom presentation on the canvas, select a topic and choose Hide in Mobile .
Related Information
Learn how to view and interact with analytic applications in the SAP Analytics Cloud mobile app.
Before viewing your analytic application, mark it as mobile enabled. To do this, from the File menu in
the toolbar, choose (Edit Analytic Application) Analytic Application Details, and turn on the option
Access the app from mobile device.
Then navigate to the analytic application you want to view using either the Home or Files tabs.
• The Home tab contains your favorite and featured content, as well as recently viewed items.
• The Files tab displays all the folders and files containing content (including analytic applications) enabled
for mobile.
Note
You can also use the search tool provided on the Home and Files tabs to find your application.
Tap Analytic Applications to choose the application you want to view. You can view applications you have
created, those shared with you by others, and public applications all from your mobile device.
Additional options, including chart filters, are available from a Quick Actions menu after entering full screen
mode. Tap in the upper right corner of a chart to expand it full screen. Initially the menu shows several
chart-wide options, but more options become available after selecting a value in the chart. Select a value in a
chart by tapping it, such as a bar in a bar chart, to see the full list of options.
Note
Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the
table to expand it full screen and show the full table details.
Tap to enable multi-selection of chart values. For example, enabling this option allows you to tap and
select more than one bar in a bar chart before applying a filter. When this option is enabled, you can
also tap and drag to lasso multiple values in a given area.
When disabled, you can only select a single value at a time. Tapping a new value changes the selection.
Sort the measures and dimensions in charts in ascending or descending order. For example, sort
dimension values from A-Z or Z-A, or measure values from lowest to highest or highest to lowest.
Filter data by the ranking of the highest or lowest values. For example, show only the Top 5 or Bottom 5
measure values.
Custom Top N options saved to the original applications are applied when viewing in the mobile app.
Filter by (include) the selected values. Tap first to allow you to tap and select multiple values.
Filter out (exclude) the selected values. Tap first to allow you to tap and select multiple values.
For example, assume you have a hierarchy for a Location dimension that consists of Country > State
> City. If your chart is displaying at the State level, tap a value to select it, and then tap
Location to drill down one level. The chart refreshes and now displays values at the City level.
For example, assume you have a hierarchy for a Location dimension that consists of Country > State
> City. If your chart is displaying at the City level, tap a value to select it, and then tap
Location to drill up one level. The chart refreshes and now displays values at the State level.
For example, let's say you have two values in a bar chart much larger than the rest. These two
values determine the scale of the chart, and make it harder to read smaller values. Tap to allow
multi-selection, tap the two long bars in the bar chart, and then tap to apply the axis break and
shorten their represented length. The scale of the chart is changed, and the bars of shorter values
appear longer and are easier to read.
Open a chart link to other analytic applications, stories, story pages, external web pages, and links that
launch other apps.
Aside from the Quick Action menu, additional gestures can be used directly on chart values:
Option Description
Change chart value details. Tap and hold on a single value, then drag across other val-
ues.
Show a changing difference marker. Tap and hold to select a single value, then select a second
value.
Feature Compatibility
When working with an analytic application on a mobile device, not all SAP Analytics Cloud Analytics Designer
functionality and APIs are supported by the mobile app.
Related Information
Note
Note
Depending on how your mobile administrator has configured the app, some settings may not appear.
Profile Settings
iOS Setting Android Setting Description
Server Details Server Details Displays the SAP Analytics Cloud sys-
tem the mobile app is connected to.
Tap Switch Server to connect to another
server. Tap the displayed URL to see
further details. Tap Log Off to log off
the system.
Settings Settings Security and Defaults Tap to see more configurable settings.
For more information see the table be-
low.
Note
Not available for the Android app.
The table below lists the options displayed when you select Settings under your user profile.
Settings
iOS Setting Android Setting Description
Change App Password App Password Tap to change the application password
used to control access to the app on
your device.
Clear Storage Clear Cache Tap to clear the app's cached data.
Note
An SAP Analytics Cloud admin can
enable the Disable the mobile app
cache setting on the web applica-
tion and thereby turn the cache off
for all users.
Files Content Files Content Tap Files Content and use Select File
and choose one of your stories, Digital
Boardroom presentations, or analytic
applications to automatically open when
the app launches.
Note
Currently you can only set a story
as your default content when using
the Android app.
Note
If your admin has enabled Disable
the mobile app cache on the web
application, this setting is no longer
relevant as the story pages are
always refreshed with latest data.
Enable Log File Enable Log File Enable if you want to create log files and
send them to SAP Support to trouble-
shoot issues.
Share story, Digital Boardroom, and analytic application links, create discussions with colleagues, and manage
your notifications in the mobile app.
Topic Sections
Note
Most of the information below does not apply to the Android app.
You can now add a drawing or text to a story, Boardroom presentation, or analytic application by selecting the
Tap on the top toolbar and use these tools to mark up the page:
Add text using the iOS keyboard. Use your finger to tap and
drag the text box to the location you want on the page. With
the box selected, tap Edit to change your text or Delete to
remove the text box.
Once you're done, tap to share your annotated page with others by email or other iOS options. You can also
save it for later; for example, by using Add to Notes:
You can add to an existing note, or tap Save to create a note. Open your Notes app on your device to see your
new note with the added image.
Links to stories, Digital Boardroom presentations, and analytic applications opened on a mobile device
automatically launch the SAP Analytics Cloud mobile app and go directly to that content. For example, let's say
you receive an email on your mobile device with a story link. To open the story directly:
1. Tap the link in your email. Your mobile browser opens, prompting you to log on.
2. Log on with your SAP Analytics Cloud credentials.
3. Tap OK when prompted by the message Do you want to open with the Mobile App?.
4. Tap Open when prompted by the message Open this page in "Analytics"?.
You can also copy the URL when viewing a story in the SAP Analytics Cloud web application by choosing
Share Story .
Another way to share is to swipe left on an entry in the Home or Files tabs and tap on the Share option.
You can access content on Android devices using App links sent through email or shared with you through the
Note
When using an Android device, you cannot access content when it is shared from the iOS app. Only links
containing the following domains are currently supported:
• *.sapbusinessobjects.cloud
• *.sapanalytics.cloud
Customized links are not supported for App links on the Android device.
If you are a content designer or administrator, you can embed story, Digital Boardroom, and analytic
application deep links into content that uses the SAP Analytics Cloud mobile app's custom URL scheme. The
mobile app is opened directly from this link, without the extra confirmations and dialogs your users encounter
with a normal browser link. The custom URL scheme for a story looks like this:
sap-analytics-cloud://<customerdomain>.<region>.<domain>/?type=story&id=<storyID>
Note
When you establish a deep link for the mobile app into other applications, these links can be used to pass
prompt parameters, page indexes, and filters.
You can also open Digital Boardroom presentations directly by changing the type parameter to
type=presentation.
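The documented URL scheme can be assembled programmatically; a minimal sketch, assuming a hypothetical tenant host and content IDs (placeholders, not real values):

```python
# Build SAP Analytics Cloud mobile deep links from the documented custom
# URL scheme: sap-analytics-cloud://<customerdomain>.<region>.<domain>/?type=...&id=...
# The tenant host and IDs below are hypothetical placeholders.

def build_deep_link(tenant_host: str, content_type: str, content_id: str) -> str:
    """Assemble a deep link for a story or Digital Boardroom presentation."""
    return f"sap-analytics-cloud://{tenant_host}/?type={content_type}&id={content_id}"

host = "mycompany.eu10.sapanalytics.cloud"  # placeholder tenant host

print(build_deep_link(host, "story", "ABC123"))
print(build_deep_link(host, "presentation", "XYZ789"))
```

Passing type=story opens a story, while type=presentation opens a Digital Boardroom presentation, as described above.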
2. Tap on the Share icon and tap on Mail to send the link via email.
3. Open your mailbox to find the URL that will serve as the deep link.
2. To pass your parameters, you need to identify the following:
• Model ID
• Parameter ID
• Parameter Value
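A deep link obtained this way can be inspected with a standard URL parser to see its base parts; the prompt-parameter syntax itself is not shown here. A minimal sketch, using a hypothetical link that follows the documented scheme:

```python
# Decompose a deep link into its scheme, host, and query parameters.
# The link below is a hypothetical example following the documented scheme.
from urllib.parse import urlparse, parse_qs

link = "sap-analytics-cloud://mycompany.eu10.sapanalytics.cloud/?type=story&id=ABC123"

parts = urlparse(link)
params = parse_qs(parts.query)

print(parts.scheme)       # sap-analytics-cloud
print(params["type"][0])  # story
print(params["id"][0])    # ABC123
```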
Note
You can view and manage all discussions directly from your mobile device, including archiving and deleting
discussions that are no longer active.
Note
You can open shared stories, analytic applications, and invited discussions directly by tapping the appropriate
notification. Additional notifications, such as when discussions have been archived, can also be read on your
mobile device.
View published content through the Catalog tab on your mobile device.
The Catalog is a single access point for content published to users and teams within SAP Analytics Cloud.
Mobile app users can discover published content in the Catalog tab.
Note
Mobile app users can only view content published to the Catalog. To publish content, you need to
use the SAP Analytics Cloud web application. For more information on publishing to the Catalog, refer to
Publish and Share Content to the Catalog [page 207].
The following content types can be displayed in the Catalog; however, not all of these types are supported in the
mobile app.
• Stories
• Analytic Applications
• Digital Boardroom presentations
• Models
• Datasets
• Uploaded SAP Analytics Cloud files
• Content Links
1. Tap on the Catalog icon on the bottom navigation to display the Catalog.
To display the Search field, swipe down below the Catalog heading.
• Alternatively, you can change the sort order of the Catalog listing by tapping the Filter icon.
The default sort order prioritizes the most recently published items first. You can also sort the Catalog
based on:
• Most viewed
Note
Filter categories are not defined or created in the mobile app. They are inherited from the SAP
Analytics Cloud web application.
Note
If there are more than 8 options for a filter category, a > icon is displayed on the right. Tap the
> to display the full list view on a separate page. Select the options you want, and then tap <
to return to the main Filter screen.
Note
If only a story or presentation is associated with a catalog item, the Files screen does not display;
the associated story or presentation is displayed automatically.
Details include attached images, applied tags, the underlying file used to create the item, ownership and
update details, external links and other resources. Any associated filter categories are also listed.
A star icon appears on the left of the item when it is displayed in the Catalog.
Access quick insights about your data by querying the Search to Insight tool on the iOS mobile app.
Context
While working with SAP Analytics Cloud models, you can query the Search to Insight tool to get quick
answers to questions and share these insights with other users. You launch Search to Insight by selecting
the corresponding icon from the Home screen.
For information on Search to Insight in the SAP Analytics Cloud web application see Using Search to Insight
[page 2031].
Note
Search to Insight on the mobile app does not support the following:
Note
Results for mobile app users accessing a system containing both live and locally acquired models may
not be identical to Search to Insight results in the SAP Analytics Cloud web application.
Procedure
• Select any of the search recommendations displayed under Recommendations to Start your Search.
Note
When you select a search recommendation, a search is automatically launched. The search scope
will be the model on which the recommendation is based.
• When using voice recognition, tap the Search icon when you are ready to run the search. To end voice recognition, tap
the X displayed under the search field.
Note
When you activate voice recognition for the first time, select OK when prompted for permission to
allow the app to access Apple's voice recognition service and your device's microphone. By default,
the app will wait for five seconds before asking you to tap the microphone icon to restart
voice recognition search.
As you type or use voice recognition, auto-complete suggestions will display above the search field.
Note
For a summary of how and what you can format as a Search to Insight question, see Search to
Insight [page 2027]. You cannot query Describe <model> metadata when using the mobile app.
You can use synonyms for measure and dimension names if they are defined in your system. The
synonyms will be suggested in the search field auto-complete, and will be part of subsequent results
and visualizations if used in a query.
Note
You cannot define or add synonyms via the mobile app. For more information, see Defining and Working
with Synonyms [page 2035].
See what SAP Analytics Cloud story, Digital Boardroom, and Analytics Designer functionality is compatible with
the iOS mobile app.
• Optimized Design Experience [page 131]: this mode enables mobile users to benefit from usability
improvements in stories on iOS mobile devices. Use this mode to work with stories created through the
optimized design experience.
• iOS Optimized Mode [page 136]: for faster loading of stories.
• Legacy Mode [page 139]: non-iOS optimized mode; this is the legacy mode for running the app.
Note
For information on the Android app feature compatibility see Android Mobile App Feature Compatibility
[page 148].
With some noted exceptions, the information below applies to stories created using Optimized Design
Experience or Optimized View Mode.
What is supported with this mode? What is not supported with this mode?
Only available in the iOS Mobile App starting from version 2.115.
• Planning:
• Value Driver Tree
• Planning Trigger
• Dynamic Images
• R-Visualizations
• Clock
• Chart footers
• Smart Insights
• Top N options
• Analytic Applications
• Data Analyzer is currently not supported in the mobile
app.
Note
Table cell data is read only
• Themes
Note
Background gradients and background images are
not supported for mobile devices.
• Images/Shapes
• Text/Headers with Dynamic text support
• RSS
• Story Filters and Input Controls
• Sharing and Annotation
• Web Fonts
• Hyperlinks
Note
Hyperlinks to non-Optimized iOS stories will
automatically set target stories to Optimized iOS
mode.
• Prompts
• Custom widgets:
Note
Custom widget author decides in custom widget
configuration (contribution.json) whether this
widget is displayed in the app.
Note
You cannot rename bookmarks in the mobile app.
• Chart Scaling
• Geo maps
Note
Only Choropleth layer supported
• Popups
Note
Popups aren't responsive and are likely to be cut off on
smaller devices. When designing popups, make
sure their sizes fit mobile screens.
Note
Other commenting features are not supported
• Mobile application management through MDMs
• iOS Mobile SDK
• Offline – limited to analytics models
• iOS Safari:
Note
Supported for iPad only. For more information, see Mobile Requirements in System Requirements and Technical Prerequisites [page 2723].
Offline mode does not support the following features:
• Bookmarks
• Geo maps
• Histogram charts
• Measure-based filters
• Filtering across models
Features not supported with iOS Safari:
• Localization
• Dataplane
• Data Analyzer
APIs Mobile App Support Notes
Export APIs No
Navigation panel APIs Limited Support Tablets are supported, while mobile phones aren't supported.
NavigationUtils.createStoryUrl
NavigationUtils.createApplicationUrl
executeInBackground
getParameterValue
setParameterValue
setQuickActionsVisibility
Working with Live Data Sources for the Optimized Story Experience
What is supported? What is not supported?
• Direct type connections for BW & HANA data sources (authentication type):
• SAML
• Token-based Single Sign On to live data sources
• Basic
• SAP Datasphere connection (authentication type):
• SAML
• HANA Cloud connection (authentication type):
• SAML
• Basic
For more information on usage restrictions for this
connection type see Live Data Connection to SAP
HANA Cloud Using an "SAP HANA Cloud" Connection
[page 311].
• SAP Datasphere connection (authentication type):
• SAML
Note
For direct SAML connections, the IDP login is displayed
for up to 60 seconds so that you can enter credentials; it then closes
automatically. In case of a timed-out or failed
connection, tap Story refresh to reconnect your live
data source.
Note
It is recommended that you manually refresh the story
to reconnect.
What is supported with this mode? What is not supported with this mode?
Only available in the iOS Mobile App starting from version 2.77.0.
• Hyperlinks in tables
• Planning:
• Value Driver Tree
• Data Action
• Dynamic Images
• R-Visualizations
• Clock
• Geo Maps
• Chart filter reset
• Chart footers
• Smart Insights
• Translation of measure/dimension/cross-calculation
input controls is not supported
• Web page
• Top N options
• Canvas pages: canvas and grid layouts
• Data Analyzer is currently not supported in the mobile
app.
• Charts
Note
Forecasting in timeseries and timestamp charts is
not supported.
Note
You cannot rename bookmarks in the mobile
app. The translation of bookmark titles is not
supported.
Note
Only available for Optimized View Mode
• Chart Scaling
• Comment widget is supported:
Note
Other commenting features are not supported
• Offline – limited to analytics models
• Mobile application management through MDMs
• Mobile SDK
Offline mode does not support the following features:
• Bookmarks
• Geo maps
• Histogram charts
• Measure-based filters
• Filtering across models
• Direct type connections for BW & HANA data sources (authentication type):
• SAML
• Token-based Single Sign On to live data sources
• Basic
• Tunnel type connection for BW & HANA data sources
(authentication type):
• SAML
• HANA Cloud connection (authentication type):
• SAML
• Basic
For more information on usage restrictions for this
connection type see Live Data Connection to SAP
HANA Cloud Using an "SAP HANA Cloud" Connection
[page 311].
• S/4 live connections (authentication type):
• None - Using the None authentication option
allows you to connect to data source systems that
use SSO that is not based on SAML 2.0. For more
information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - provide a user name
and password for your data source system.
• SAML SSO - use this option for SAP S/4HANA
on-premise versions older than 1909 (7.54) or
when using the cloud connector to connect to live
SAP S/4 HANA.
• SAML SSO (Standard-Compliant) - use this
option for SAP S/4HANA on-premise 1909 (7.54)
versions or newer.
• SAP Datasphere connection (authentication type):
• SAML
Note
For direct SAML connections, the IDP login is displayed
for up to 60 seconds so that you can enter credentials; it then closes
automatically. In case of a timed-out or failed
connection, tap Story refresh to reconnect your live
data source.
Note
It is recommended that you manually refresh the story
to reconnect.
General Mobile App Support Notes
iOS Touch ID Yes iPhone X does not include Touch ID. Use iOS Face ID instead.
Single sign-on Yes After first sign-on, only the application password is required. Your personal credentials will be remembered.
User languages Yes The menus, buttons, messages, and other elements of the user interface
in the mobile app respect your device language settings. If you change
the device language when the app is open, close and re-open the app for
the changes to take effect. The following languages are supported:
Mobile Application Management - AppConfig support Limited Support Currently only VMware AirWatch, Microsoft Intune, and MobileIron Cloud are supported.
SAP Digital Boardroom presentations (agendas and dashboards) Limited Support Story filters and prompts are not supported in the app. Story styling overrides created in the Digital Boardroom canvas are not supported on mobile.
Live data refresh Limited Support For live data, currently the following connection types and authentication methods are supported by the mobile app:
• SAP HANA
• Type: Direct; Authentication: User Name and Password, SAML
SSO, Cloud Connector, and Tunnel
• SAP BW
• Type: Direct; Authentication: User Name and Password, SAML
SSO, Cloud Connector, and Tunnel
• SAP S/4HANA
• Type: Direct; Authentication: User Name and Password, SAML
SSO
Note
• The None authentication option is not supported in any scenario
on the mobile app.
• Only existing Path connections are supported. New Path connection types can no longer be created in SAP Analytics Cloud.
• Currently on the mobile app, SAML SSO authentication will
prompt the user for their username and password. A certificate
can be imported to the device.
Live data source connections must be trusted and meet Apple ATS requirements. See Troubleshooting Connection Issues [page 160] for more details.
Story offline mode Limited Support Cached data is supported in iOS Airplane mode. You must have first connected to and loaded the data once. Any loaded chart or story, including actions, is cached. For example, if you have filtered a country to "Canada", the results will be cached.
Note
Story data is kept cached on the device for 7 days when offline.
Afterward, it is removed from the cache. If you are online, you can go
to Settings and tap Clear Storage to clear the cache. When offline, the
only way to remove story data currently is to delete the mobile app
from the device.
Data Analyzer No
Story Elements Mobile App Support Notes
Responsive Pages Yes If a story has a combination of responsive and non-responsive pages
(Canvas and Grid), only the responsive pages can be opened.
Canvas Pages No
Grid Pages No
KPI Yes
Chart Footer No
Chart Scaling No
Currency Conversion No
Table Limited Support Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the table to expand it full screen and show the full table details.
Note
In full screen mode, there is a limit of 10,000 total table cells that can
be displayed, with an upper limit of 1000 rows or 1000 columns.
Filters and Input Controls Limited Support As in the SAP Analytics Cloud web application, the mobile app supports a variety of story filters, page filters (Input Controls), chart filters, and Dimension and Measure Input Controls. The following differences apply to the mobile app:
• Filter by range:
• Dimensions must be numeric
• Date dimension
• Dynamic time filters are not modifiable
• On the web, each range is modifiable between the minimum and maximum of all of the ranges for the dimension. On mobile, each range is limited to the minimum and maximum of its own original range, or extended to what the designer has selected at save time.
• Chart filter:
• The filters can only be deleted.
• Search: Only available for local (on device) filter values. You cannot use the mobile app to perform server-side (system) searches on filter values.
Note
For best performance when modifying a hierarchy member filter,
limit hierarchy levels to include less than 7500 members.
BEx Variables and Prompts Limited Support When working in online mode, a limited set of BEx variable types can be interacted with in the Story Variables section of the Input Controls panel. In offline mode, the BEx variables are not listed under Story Variables, and they are not included in the displayed Input Controls count. Additional BEx variable types used in the story are visible in the Input Controls panel but cannot be interacted with.
HANA Variables and Prompts Limited Support When working in online mode, a limited set of HANA variable types can be interacted with in the Story Variables section of the Input Controls panel. In offline mode, the HANA variables are not listed under Story Variables, and they are not included in the displayed Input Controls count. Additional HANA variable types used in the story are visible in the Input Controls panel but cannot be interacted with.
Image Yes
Shape Yes
Text Limited Support Dynamic Text elements created from the following sources are not supported on mobile:
If your Story tile uses an unsupported source, the tile is not displayed on mobile. Dynamic Text elements from other sources are supported on mobile.
Clock No
RSS Reader No
Web Page No
Symbol Yes
Geo Map Limited Support Some features of Geo Maps are currently not supported in the mobile app. Unsupported features include:
• Transparent Dark Gray and Night Time Streets basemaps are not
supported. The Light Gray basemap will be used as fallback when
online.
• To enable Geo Maps in offline mode, please contact SAP Product
Support. Once offline support is enabled, you can choose to use one
of the following basemap options:
• Default offline basemap: to download this basemap, open a geo
map in a story when online.
• Non-default offline basemap: as a prerequisite, you will need to
create your own ESRI account. To disable the default basemap,
turn off Enable Default Offline Basemap in the app Settings. To
download a basemap, open a geo map in a story when online;
you will be prompted to provide your ESRI credentials.
• When offline, Satellite, Hybrid, and Terrain basemaps are not supported.
• When offline, basemaps only support zoom levels 1-5.
• Zoom to Data is not supported.
• Light and Dark styling themes are not supported.
• Opacity is not supported.
• Drawing with the lasso tool is not supported.
• Advanced legend and tooltip charts are not supported.
• Only the Choropleth style in the Choropleth/Drill layer is supported.
The Bubble style is not supported.
R Visualization No
Catalog Yes Mobile app users can only view content published to the Catalog. To publish content, you need to use the SAP Analytics Cloud web application.
Dynamic page and story filters Limited Last and Next values are not available. You can choose only between the
Support different dynamic ranges.
Synonyms Yes You can consume synonyms when working with Search to Insight on the
mobile app. You cannot define or create synonyms.
Top N Options Limited Support Only Top 5 and Bottom 5 are available on mobile. Custom Top N options are not currently displayed in the mobile app.
Bookmarks Limited Support If bookmarks exist, the user's default bookmark is applied when opening the story. The name of the bookmark is visible in the Input Controls panel. To change the default bookmark, you must use the SAP Analytics Cloud web application.
Analytics Designer Elements Mobile App Support Notes
Charts, Chart Footer, Chart Scaling Yes Predictive forecasts are not currently supported.
Table Limited Support Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the table to expand it full screen and show the full table details.
Note
In full screen mode, there is a limit of 10,000 total table cells that can
be displayed, with an upper limit of 1000 rows or 1000 columns.
Note
For best performance when modifying a hierarchy member filter,
limit hierarchy levels to include less than 7500 members.
BEx Variables and Prompts Limited Support The prompt dialog needs to be opened with the script API, for example: table_1.getDataSource().openPromptDialog().
HANA Variables and Prompts Limited Support The prompt dialog needs to be opened with the script API, for example: table_1.getDataSource().openPromptDialog().
Shape Yes
Text Limited Support Dynamic Text elements created from the following sources are not supported on mobile:
If your app tile uses an unsupported source, the tile is not displayed on mobile. Dynamic Text elements from other sources are supported on mobile.
Clock Yes
Web Page No
Symbol Yes
Geo Map No
R Visualization No
Top N Options Limited Support Only Top 5 and Bottom 5 are available on mobile. Custom Top N options are not currently displayed in the mobile app.
Analytic Application Bookmarks Yes To view all available bookmarks of an analytic application, swipe an app left and open the Details. All relevant bookmarks are listed there.
Custom Widget Yes The custom widget author decides in the custom widget configuration (contribution.json) whether the widget is displayed in the iOS Mobile App.
Commenting No
Smart Capabilities No Smart Capabilities such as Smart Insight, Smart Discovery and Search to
Insight are not supported.
Planning Limited Support
• Only specific planning-related APIs are supported, including setUserInput, the copy planning version API, and data action related ones.
• Editing data in cross tabs is not supported.
• Value driver tree is not supported.
• Planning sequence widgets are not supported.
Analytics Designer APIs Mobile App Support Notes
Export APIs No
Navigation panel APIs Limited Support iPads are supported, while iPhones aren't supported.
NavigationUtils.createStoryUrl
NavigationUtils.createApplicationUrl
executeInBackground
getParameterValue
setParameterValue
setQuickActionsVisibility
Collaboration Mobile App Support Notes
Discussions Yes
Events No
Hyperlinks from story Limited Support Links to other stories and story pages are supported. Also supported are links to external web pages and links that launch other iOS apps.
Restriction
• For tables, using the selected dimensions as a temporary filter
for the destination story in the link is not supported. Charts do
support this option.
• Links to story pages that are hidden are not supported on mobile.
Hyperlinks from app Limited Support Links to other analytic applications, stories, and story pages are supported. Also supported are links to external web pages and links that launch other iOS apps.
Restriction
Hyperlinks are available for text, images, and shapes, but currently not available for tables and charts.
Workspaces No You might be able to access content saved in workspaces through the
use of deep links. For more information, see Sharing and Collaborating on
Mobile [page 112].
See what SAP Analytics Cloud functionality is compatible with the Android mobile app.
For iOS mobile app compatibility, see iOS Mobile App Feature Compatibility [page 131].
• Optimized Design Experience [page 149]: this mode enables mobile users to benefit from usability
improvements in stories on Android mobile devices. Use this mode to work with stories created through
the optimized design experience.
• Legacy Mode [page 153]: this is a non-optimized mode.
The information below applies to the Android mobile app, and applies to stories created using Optimized
Design Experience or Optimized View Mode.
What is supported with this mode? What is not supported with this mode?
• Value Driver Tree
• Planning Trigger
Note
Other commenting features are not supported.
• Dynamic Images
• R-Visualizations
• Clock
• Chart footers
• Smart Insights
• Top N options
• Analytic Applications
• Charts
• Complex filters (read-only support)
Note
Table cell data is read only
• Themes
Note
Background gradients and background images are
not supported for mobile devices.
• Images/Shapes
• Text/Headers with Dynamic text support
• RSS
• Story Filters and Input Controls
• Sharing and Annotation
• Web Fonts
• Web Pages
• Hyperlinks
• Prompts
• Custom widgets:
Note
Custom widget author decides in custom widget
configuration (contribution.json) whether this
widget is displayed in Android Mobile App.
Note
Tablets only. Fullscreen landscape is not supported
for Android phones.
Note
You cannot rename bookmarks in the mobile app.
• Chart Scaling
• Geo maps
Note
Only Choropleth layer supported
• Popups
Note
Popups aren't responsive and are likely to be cut off on
smaller devices. When designing popups, make
sure their sizes fit mobile screens.
• Mobile application management through MDMs
Offline mode does not support the following features:
Most script APIs in Optimized Story Experience are supported, except for:
APIs Mobile App Support Notes
Export APIs No
Navigation panel APIs Limited Support Tablets are supported, while mobile phones aren't supported.
NavigationUtils.createStoryUrl
NavigationUtils.createApplicationUrl
executeInBackground
getParameterValue
setParameterValue
setQuickActionsVisibility
Working with Live Data Sources for the Optimized Story Experience
With some noted exceptions, the information below applies to the Android mobile app, and applies to stories
created using Optimized Design Experience or Optimized View Mode.
• Direct type connections for BW & HANA data sources (authentication type):
• SAML
• Token-based Single Sign On to live data sources
• Basic
• Geo maps
• SAP Datasphere connection (authentication type):
• SAML
• HANA Cloud connection (authentication type):
• SAML
• Basic
For more information on usage restrictions for this
connection type see Live Data Connection to SAP
HANA Cloud Using an "SAP HANA Cloud" Connection
[page 311].
• SAP Datasphere connection (authentication type):
• SAML
Note
For direct SAML connections, the IDP login is displayed
for up to 60 seconds so that you can enter credentials; it then closes
automatically. In case of a timed-out or failed
connection, tap Story refresh to reconnect your live
data source.
Note
It is recommended that you manually refresh the story
to reconnect.
General Mobile App Support Notes
Touch ID No
Face ID No
Notifications No
Single sign-on Yes Certificate, Cloud Connector and Tunnel support available.
User languages Yes The menus, buttons, messages, and other elements of the user interface
in the mobile app respect your device language settings. If you change
the device language when the app is open, close and re-open the app for
the changes to take effect. The following languages are supported:
Mobile Application Management - AppConfig support Limited Support Currently only VMware AirWatch, Microsoft Intune, and MobileIron Cloud are supported.
Data Analyzer No
Live data refresh Limited Support For live data, currently the following connection types and authentication methods are supported by the mobile app:
• SAP HANA
• Type: Direct; Authentication: User Name and Password, SAML
SSO, Cloud Connector, and Tunnel
• Type: SAPCP; Authentication: User Name and Password, SAML
SSO
Note
For User Name and Password, you can allow users to enter their credentials when prompted. Alternatively, when setting up the connection, select Save this credential for all users on this system, and users won't have to enter their credentials.
• SAP BW
• Type: Direct; Authentication: User Name and Password, SAML SSO, Cloud Connector, and Tunnel
• SAP HANA Cloud
• Type: Direct; Authentication: User Name and Password, SAML
SSO
• DWC (SAP Datasphere connection)
• Type: Direct; Authentication: SAML SSO
• SAP S/4HANA
• Type: Direct; Authentication: User Name and Password, SAML
SSO
Note
• The None authentication option is not supported in any scenario
on the mobile app.
• Only existing Path connections are supported. New Path con-
nection types can no longer be created in SAP Analytics Cloud.
• Currently on the app, SAML SSO authentication will prompt the
user for a certificate through a popup screen if the certificate is
imported on the device. If no certificate is available or access is
denied, use the IDP login page to provide credentials.
Offline mode No
Phone orientation Limited Support Portrait mode only. Landscape is not supported in Fullscreen mode.
Story Elements Mobile App Support Notes
Responsive Pages Yes If a story has a combination of responsive and non-responsive pages
(Canvas and Grid), only the responsive pages can be opened.
Canvas Pages No
Grid Pages No
KPI No
Chart Footer No
Table Limited Support Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the table to expand it full screen and show the full table details.
Note
In full screen mode, there is a limit of 10,000 total table cells that can
be displayed, with an upper limit of 1000 rows or 1000 columns.
Filters and Input Controls Limited Support As in the SAP Analytics Cloud web application, the mobile app supports a variety of story filters, page filters (Input Controls), chart filters, and Dimension and Measure Input Controls. The following differences apply to the mobile app:
• Chart filter:
• The filters can only be deleted.
Note
For best performance when modifying a hierarchy member filter,
limit hierarchy levels to include less than 7500 members.
Image Yes
Shape Yes
Text Limited Support Dynamic Text elements created from the following sources are not supported on mobile:
If your Story tile uses an unsupported source, the tile is not displayed on mobile. Dynamic Text elements from other sources are supported on mobile.
Clock No
Symbol Yes
Geo Maps Limited Support Works only for stories loaded in View Time Optimized mode. For more information, see Preparing Stories for Mobile [page 90].
Note
Only the Choropleth layer is supported.
R Visualization No
Hyperlinks from story Limited Support Links to other stories and story pages are supported. Also supported are links to external web pages and links that launch other apps.
Restriction
• For tables, using the selected dimensions as a temporary filter
for the destination story in the link is not supported. Charts do
support this option.
• Links to story pages that are hidden are not supported on mobile.
Search to Insight No
Top N Options Limited Support Only Top 5 and Bottom 5 are available on mobile. Custom Top N options are not currently displayed in the mobile app.
Bookmarks Limited Support If bookmarks exist, the user's default bookmark is applied when opening the story. The name of the bookmark is visible in the Input Controls panel. To change the default bookmark, you must use the SAP Analytics Cloud web application.
Analytics Designer Elements Mobile App Support Notes
Charts, Chart Footer, Chart Scaling Yes Predictive forecasts are not currently supported.
Table Limited Support Table rows and columns are initially truncated, showing only a preview. Tap in the upper right corner of the table to expand it full screen and show the full table details.
Note
In full screen mode, there is a limit of 10,000 total table cells that can
be displayed, with an upper limit of 1000 rows or 1000 columns.
Note
For best performance when modifying a hierarchy member filter,
limit hierarchy levels to include less than 7500 members.
BEx Variables and Prompts Limited Support The prompt dialog needs to be opened with the script API, for example: table_1.getDataSource().openPromptDialog().
HANA Variables and Prompts Limited Support The prompt dialog needs to be opened with the script API, for example: table_1.getDataSource().openPromptDialog().
Shape Yes
Text Limited Dynamic Text elements created from the following sources are not sup-
Support ported on mobile:
If your app tile uses an unsupported source, the tile is not displayed on
mobile. Dynamic Text elements from other sources are supported on
mobile.
Clock Yes
Web Page No
Symbol Yes
Geo Map No
R Visualization No
Top N Options Limited Support Only Top 5 and Bottom 5 are available on mobile. Custom Top N options are not currently displayed in the mobile app.
Analytic Application Bookmarks Yes To view all available bookmarks of an analytic application, swipe an app left and open the Details. All relevant bookmarks are listed there.
Custom Widget Yes The custom widget author decides in the custom widget configuration (contribution.json) whether the widget is displayed in the mobile app.
Commenting No
Smart Capabilities No Smart Capabilities such as Smart Insight, Smart Discovery and Search to
Insight are not supported.
Planning Limited • Only specific planning related APIs are supported, including
Support
setUserInput, copy planning version API and data action related
ones.
• Editing data in cross tabs is not supported.
• Value driver tree is not supported.
• Planning sequence widgets are not supported.
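The full-screen table limits described in the notes above (10,000 total cells, at most 1,000 rows or 1,000 columns) can be sketched as a quick check. The function below is purely illustrative, not an SAP Analytics Cloud API; only the numeric limits come from the documentation:

```python
# Illustrative check of the mobile full-screen table display limits described
# above. The limits (10,000 cells, 1,000 rows, 1,000 columns) come from the
# documentation; the helper itself is hypothetical.
def fits_full_screen(rows: int, cols: int) -> bool:
    return rows <= 1_000 and cols <= 1_000 and rows * cols <= 10_000

# A 100 x 100 table fits; a 200 x 100 table exceeds the 10,000-cell budget.
```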
Analytics Designer APIs and Mobile App Support
• Export APIs: No.
• Navigation panel APIs: Limited support. Tablets are supported, while mobile phones aren't.
• NavigationUtils.createStoryUrl
• NavigationUtils.createApplicationUrl
• executeInBackground
• getParameterValue
• setParameterValue
• setQuickActionsVisibility
Collaboration Features and Mobile App Support
• Accessing content shared using app links: Limited support. Cannot access content shared through the iOS app.
• Discussions: No.
• Events: No.
• Save as PDF: No.
• Workspaces: No.
Having trouble connecting to SAP Analytics Cloud or your live data sources from the mobile app? As an
administrator, learn how to troubleshoot common connection issues.
This topic is for administrators supporting mobile app users on SAP Analytics Cloud. A number of factors
can prevent logging onto SAP Analytics Cloud or your live data sources from the mobile app. Common issues
include:
Depending on the error, there are different steps you can take to resolve your user's connection issue:
Failed to connect to SAP Analytics Cloud
1. It could be that the app is not authorized with the tenant. To remedy this, add an OAuth client configuration in the SAP Analytics Cloud tenant.

An SSL error has occurred and a secure connection to the server cannot be made.
(Can occur when logging on to SAP Analytics Cloud or when accessing a live data source.)
1. Is the user using cellular data or Wi-Fi on their device?
• Check that the user's cellular connection works for other apps on the device and the SAP Analytics Cloud app.
• Check that the user's Wi-Fi connection works for other apps on the device and the SAP Analytics Cloud app.
• If the cellular connection works but Wi-Fi does not, check whether your IT Wi-Fi infrastructure has restrictions or is blocking app usage.
2. Are you using a reverse proxy server or VPN to relay requests from the mobile app to SAP Analytics Cloud?
• Both the iOS and Android mobile apps can only connect to URLs that are trusted connections. A trusted SSL certificate must be installed on the server (such as the reverse proxy server). Self-signed or untrusted SSL certificates are not supported.
• To diagnose the iOS app, on a Mac terminal, run the command nscurl --ats-diagnostics <server url>. Confirm that the ATS Default Connection result is PASS.

The server only supports a secure connection
• Ensure that the server URL conforms to the https protocol.

It seems your profile is not configured to use this system.
This message is displayed when a user's account has not been created on SAP Analytics Cloud.

The resource could not be loaded because the App Transport Security policy requires the use of a secure connection.
(Can occur when logging on to SAP Analytics Cloud or when accessing a live data source.)
1. Are you using a reverse proxy server or VPN to relay requests from the mobile app to SAP Analytics Cloud?
• The connection must satisfy Apple's App Transport Security (ATS) requirements. For complete requirements, see Requirements for Connecting Using ATS in https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/CocoaKeys.html#//apple_ref/doc/uid/TP40009251-SW57 .
• For example, on a Mac terminal, run the command nscurl --ats-diagnostics <server url>. Confirm that the ATS Default Connection result is PASS.

User session expired.
(Can occur when logging on to SAP Analytics Cloud. You are given the option to log in again.)
1. The SAP Analytics Cloud system your mobile app connects to may not be running, or may be getting updated.
• Try opening SAP Analytics Cloud in a web browser and check that the user can log on to the web application successfully.
2. For Android, go to Chrome Settings on the Android device and tap Clear History and Website Data. Try to log on again to the mobile app.
3. Has the user changed their user name or password recently? If so, they may need to log on to the mobile app again.
• Try opening SAP Analytics Cloud in a web browser and check that the user can log on to the web application successfully.
• (For iOS) Go to Settings > Safari on the user's iOS device and tap Clear History and Website Data. Have the user try logging on again to the mobile app.

Authorization Rejected
(Can occur when logging on to SAP Analytics Cloud.)
1. Has the user account been disabled or deleted from the SAP Analytics Cloud system?
• Re-enable or re-create the user account on SAP Analytics Cloud.
2. Has the user's access or refresh token from your IdP expired?
• Have the user log on again.
3. Has the user's access or refresh token from your IdP been revoked?
• Check with your IdP administrator.

This version of SAC server is not supported by Android application. Please update the server.
The Android app will only work with SAP Analytics Cloud server version 2020.3 or newer.

When logging in to the Android app, the Authorize page is displayed.
1. The user can manually log out from the IdP page.
2. Clear the Chrome browser cache explicitly to log in with a different user account.

App not authorized
(Can occur when logging on to SAP Analytics Cloud.)
The mobile app version is out-of-date and not supported with your current version of the SAP Analytics Cloud web application. Update the mobile app to the latest version and have the user try logging on again.

When trying to connect to SAP Analytics Cloud via a reverse proxy server URL, your user ends up at a different website or IdP.
Check that both of the following URLs on your reverse proxy server are mapped to your SAP Analytics Cloud system:
• https://<reverseproxyserver>/ (default reverse proxy URL)
• https://<reverseproxyserver>/oauth
When viewing an SAP Analytics Cloud story, you can make some changes to filters.
When you view a story, you can't change most of its content, but you can change story and page filter values as well as how the values are displayed in the filters.
You can change the values for different filters, including member filters (filters defined by selecting members
from a list) and range filters (filters defined by selecting ranges of values). For range filters, if you have edit
permissions or if the filter is an input control, you can also redefine the ranges.
The filters in the story (including story, page, input control, or widget filters) usually show the description for
the values, not the ID. However, you may want to see the ID or a combination of ID and description. (In some
systems, the ID is necessary if you want to search the data.)
• If the filter is an input control: select Display As and then select Description, ID, or ID and
Description.
Related Information
When searching in SAP Analytics Cloud story and page filters and in input controls, different search parameters
return different results.
You can use the wildcard character (*) to control how you search for your IDs or descriptions in story and page
filters and in input controls. The placement of the wildcard character determines the search results.
Note
There are some restrictions when searching filters for SAP BW (see Searching in Filters and Input Controls
(Specifics for SAP BW) [page 1578]).
For best results, display the input control data as either ID or Description, not both.
Example
The following dialog shows a page filter that is displaying descriptions instead of IDs for its members.
Restriction
You can use a wildcard search in story and page filters and in input controls, but not in the member selector
filter.
The following table shows you the search options and the expected search results.
Result:
• Janet Bury
• James Frank
• John Minker
• Kiran Raj
Result:
• Janet Bury
• James Frank
• John Minker
Result:
• Kiran Raj
Result:
• Janet Bury
• James Frank
• John Minker
• Kiran Raj
• Lois Wood
• Lia Armand
• Janet Bury
• James Frank
• John Minker
• Nancy Miller
Result:
• Janet Bury
• James Frank
• John Minker
• Kiran Raj
Remember
The search strings don't have to be next to each other, but they
do need to be consecutive: strings where “b” occurs first won't be
returned.
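As a rough illustration of how wildcard placement changes the result set, the sketch below approximates the behavior with Python's fnmatch. This is an approximation only (SAP Analytics Cloud's actual matching may differ in detail), and the member names are the sample names used above:

```python
from fnmatch import fnmatch

# Sample members from the examples above.
members = ["Janet Bury", "James Frank", "John Minker", "Kiran Raj", "Lois Wood"]

def search(pattern: str) -> list:
    # "*" matches any run of characters, so "j*" anchors the match at the
    # start of the string, "*raj" anchors it at the end, and "*an*" matches
    # "an" anywhere in the value.
    return [m for m in members if fnmatch(m.lower(), pattern.lower())]
```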
In an SAP Analytics Cloud story, instead of scrolling through a long filter list for specific values, you can paste
the values from a list.
When you have a page filter that has a large number of values, it can take a long time to find the specific values
to display in your filter. If you have the list of values saved somewhere else, you can paste them in and update
your filter.
Note
Pasting values only works for flat data, not hierarchical data.
Pasting values is available for Live HANA and acquired data models. The feature is also available for SAP
BW, with some usage restrictions. For SAP BW live data sources, see Pasting Values into Page Filters
(Specifics for SAP BW) [page 1580].
Only exact matches will be found, that is, exact spelling (not case-sensitive) or complete IDs. For example, if
you are trying to find “AB3D7” and you paste in “AB3”, you will see a message stating that “AB3” could not
be found.
The values you paste must match the displayed format: if you display ID and Description, you can paste in
either ID or Description (but not both values for any one member). If you display the member IDs, you can
only paste in ID values, and when displaying member descriptions, you can only paste in description values.
Pasting values for date dimensions (time, day, or date) is not available when your display is set to ID and
Description.
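The exact-match rule above (complete values only, not case-sensitive) can be sketched as follows. The function is purely illustrative and not an SAP Analytics Cloud API:

```python
# Illustrative sketch of the pasting rule described above: only complete,
# case-insensitive matches are found; partial IDs such as "AB3" for "AB3D7"
# are reported as not found.
def match_pasted(pasted: list, members: list) -> tuple:
    lookup = {m.lower(): m for m in members}
    found, missing = [], []
    for value in pasted:
        hit = lookup.get(value.strip().lower())
        if hit:
            found.append(hit)
        else:
            missing.append(value)
    return found, missing
```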
Prerequisite
Set up a text file with the values that you want to paste, one value per line. You can include both IDs and
descriptions, but not on the same line.
Procedure
Restriction
A message will display the first few values that couldn't be found, but if there are a lot of values, it won't display all of them.
Related Information
Create bookmarks to save or share different states of an SAP Analytics Cloud story.
About Bookmarks
When viewing a story, you may want to come back to the same view of the data every time or you may want to
set up different states or scenarios.
For example, you have several pages in your story that have filters, input controls, or prompts applied to them.
You don't want to spend time resetting all of them each time you want to see a different scenario. You would
like to open the story, see one scenario, and then quickly switch to another scenario. You can even create global
bookmarks so that anyone who can view the story can also see the different scenarios.
For classic stories, bookmarks only include filters, input controls, and prompts.
Note
SAP BW dynamic variables are saved in the backend, so bookmarks won't include their value changes.
You cannot bookmark changes you make to non-primitive script variables (types other than string, boolean, integer, and number).
Note
You cannot bookmark translated contents, except for the bookmark name.
Note
If you import a story from a .tgz file with the option Drop Objects selected and the story already exists on
your system, both the story and its bookmarks are overwritten and its data are updated.
Tip
To make sure that you bookmark everything you want, check with the story developers, or at edit time, whether bookmarks include all the components in the story, only state-changed ones, or selected ones, and whether you're allowed to keep the last changed dynamic variable values in your bookmarks. For more information about bookmark settings, see Configure Bookmark Settings and Use Related APIs (Optimized Story Experience) [page 1683].
For optimized stories, global and personal bookmark permissions can now be customized. If you're assigned a custom role, ensure that you have the relevant permissions, for example, to create bookmarks. For more information, refer to Permissions [page 2844].
When you're satisfied with how your story looks, save the current state:
When you create your story and save it, you automatically get a default bookmark.
Restriction
If you apply a bookmark that was created before the story was optimized, you won't be able to create new
bookmarks for that story. To update the bookmark or to include additional functionality (such as Drill, Rank,
Sort, and so on), delete the legacy bookmark and create a new one.
The story opens with the customized settings for filter, input control, prompt and so on.
Note
If you change the default story settings after you've saved bookmarks, the next time you try to use one of
those bookmarks you may get a warning message.
For example, you change a page filter to be a story filter. The next time you open the bookmark that has a
page filter, you'll see that the bookmark has been partially applied.
2. In the My Bookmarks area, select a personal bookmark and then select (Share).
3. Add Users or Teams: add all the users that you want to share with. You can either type a username or select
For how to modify sharing settings, see Share Files or Folders [page 193].
Note
When you share a story with other users, only global bookmarks are shared, while personal bookmarks
aren't included.
To manage your bookmarks, from the side navigation select Stories, and on the start page, go to
Bookmarks tab. Select a bookmark, and then you can edit, share or delete it:
• Select (Edit Details) to change the name or type of the bookmark, or whether to make it the default
view of the story.
• Select (Share) to share the personal bookmark with other viewers or teams.
To open and view a bookmark, you need to select a valid one. It'll be opened in view mode.
Note
For optimized stories, a bookmark is valid only when its version is consistent with the current version of the bookmark set technical object set by the story designer or developer. Otherwise, the bookmark is invalid and can't be opened, and the story designer or developer needs to set the version back to your bookmark's version.
To search for a specific bookmark, enter the bookmark name in the search box. To narrow down the filter
scope, choose Valid Bookmarks, Invalid Bookmarks or All Bookmarks from the dropdown list to the left of the
search box.
To view all bookmarks of a specific story, go to (Files) to find the story. When you hover over the story,
Restriction
Personal bookmarks require a story ID and the USER ID of the person that created the bookmark.
If you want to include your personal bookmarks when you move your story to another system, you must
use the same USER ID on both systems.
If the user ID doesn't match, your bookmarks won't be available on the new system.
For information on how to move a story to another system, see Transporting Your Content Through File System
[page 2935].
Related Information
As an application user you can create, view and manage bookmarks, which let you capture and save the current
state of your analytic application.
When viewing an analytic application, you may want to come back to a specific view of the data without having
to spend time resetting all the widget states in the application. In this case, you can save the current state as a
bookmark by selecting the save button on the application, for example.
In an analytic application, you can bookmark the change of a widget itself and the display of data on it,
including but not limited to the following:
Note
To save bookmarks with last changed dynamic variables values, at design time, go to (Edit
Analytic Application) Analytic Application Details . Then, under Bookmarks in the Viewer Settings
section, make sure you've selected Override default settings and Keep last saved values for dynamic
variables. By default the options are selected. Related API is also available, and for details see Use
Bookmark Set Technical Objects in Analytic Applications [page 1686].
Note
You cannot bookmark changes you make to the non-primitive type script variables (types other than string,
boolean, integer and number).
You cannot bookmark translated contents except for the bookmark name.
Note
For any new bookmark versions created after wave 2019.19, the Unrestricted Drilling option in date range
for time dimension settings will take effect. However, this won't apply to the old bookmark versions created
before this wave.
Note
If you import an analytic application from a .tgz file with the option Drop Objects selected and the analytic
application already exists on your system, both the analytic application and its bookmarks are overwritten
and its data is updated.
Manage Bookmarks
To manage your bookmarks, select (Analytic Applications) from the side navigation, and in the start page
go to Bookmarks tab. Select a bookmark, and then you can edit, share or delete it:
• Select (Edit) to change the name and type (Personal or Global) of the bookmark.
• Select (Share) to share the personal bookmark with other application users or teams.
For how to modify sharing settings, see Share Files or Folders [page 193].
Note
When you share an analytic application with other users, only global bookmarks will be shared.
Personal bookmarks won't be included.
To open and view a bookmark, you need to select a valid one. It will be opened in view mode.
Note
A bookmark is valid only when its version is consistent with the current version of the bookmark set
technical object set by the application designer. If not, the bookmark is invalid and cannot be opened.
To search for a specific bookmark, enter the bookmark name in the search box. To narrow down the filter
scope, choose Valid Bookmarks, Invalid Bookmarks or All Bookmarks from the dropdown list to the left of the
search box.
To view all bookmarks of a specific analytic application, go to (Files) to find the application. When you hover
over the application, select (More actions) to the right of it and choose Browse Bookmarks.
When viewing a story, you can choose to use the story filter bar as a horizontal bar or as a vertical panel.
The story filter bar can be converted to a filter panel. Both orientations provide similar features, such as adding
new features or editing existing ones, but there may be some differences in using each orientation.
To show or hide the filter bar or filter panel, from the top navigation toolbar, select (Filter Panel). When the
filter bar is hidden and you have story filters, the number of filters is displayed on the filter icon ( ).
Tip
If you would like the default filter bar orientation and visibility to be set differently,
your story designer can update the settings. For more information, see Edit Filter Bar Settings (Optimized
Story Experience) [page 1549].
• (3) - Select (Pin to Side) to switch from the horizontal bar to the vertical panel.
• (4) - Close the filter bar.
• (5) - Select (Pin to Top) to switch from the vertical panel to the horizontal bar.
Modify Filters
To open or close the filter, select either the label (1) or the arrow (2). To undo all your changes, select Reset
Filter(3).
Select the filter label (1) or the arrow (2) when you want to close the filter, not the close panel option (4).
You can do more than just modifying the selected values or ranges in a filter. Some filter types have more
options listed at the bottom of the filter panel, options like Sort, Paste Filter Values, and so on. The options vary
depending on the filter type.
As an application user or story viewer, you can subscribe to top N data change insights in an analytic
application or optimized story to auto-discover significant data changes of specific charts on a regular basis.
Data Change Insights helps to discover important and relevant chart-level data changes even when analytic
applications or optimized stories aren't running.
• Value change across thresholds: bar/column, combination column & line, combination stacked column & line, stacked bar/column, line, stacked area, bubble, cluster bubble, scatterplot, radar, bullet, numeric point, marimekko
• Value change across reference lines: bar/column, combination column & line, combination stacked column & line, stacked bar/column, line, stacked area, bubble, scatterplot, radar, marimekko
Note
Data change insights can only be triggered automatically, after the onInitialization event completion
and before the PDF generation.
Restriction
Currently, the data change insights feature doesn't support Extended HANA Live Connections.
Application or story designers need to enable data change insights for the whole application or story and for individual charts, so that viewers can set up daily, weekly, or monthly subscriptions and configure the value range that triggers data change insights. Related APIs are supported. For more information, see Enable Data Change Insights and Use Related APIs [page 1697].
Data Change Insights is covered by the scheduling license. If you're an admin, refer to Configure Scheduling of Publication [page 2957].
At view time, you can subscribe to data change insights for your analytic application or optimized story to stay informed and keep track of data changes on a regular basis.
Before subscription, make sure that specific charts are included. From Data Change Insights in a chart's
(More Actions) menu, you can change its subscription mode to Subscribed: Normal Importance (default),
Subscribed: High Importance or Unsubscribed. You can also set a value range to be included in or excluded
from triggering data change insights there via Set Subscription Range. The value range can be either for a
specified measure or all measures, and use either fixed value or delta value.
If the chart you subscribe to is on a popup, you can receive related data change insights only when the
popup is open.
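The fixed-value versus delta-value subscription range described above can be sketched like this. The function name and exact semantics are an assumption based only on the description (fixed compares the new value itself against the range; delta compares the change between refreshes):

```python
# Hypothetical sketch of a subscription range check, not an SAP API.
# mode="fixed" tests the new value against the range; mode="delta" tests the
# change between the old and new values.
def in_subscription_range(old: float, new: float, low: float, high: float,
                          mode: str = "fixed") -> bool:
    value = (new - old) if mode == "delta" else new
    return low <= value <= high
```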
To subscribe to data change insights, from Tools in the toolbar, select Subscribe to Data Change
Insights. The subscription dialog appears.
Note
Data change insights are triggered by day as the smallest unit. Therefore, if the recurrence pattern is by day, recipients won't receive notifications for data changes within a single day; they can only compare changes between two different dates.
After subscription, a calendar event is created. If you want to unsubscribe, you can go to Calendar to remove
the event. You can also change the subscription settings there.
• Emails (one with data change insights, the other with a PDF attachment of the corresponding analytic
application or story)
• SAP Analytics Cloud notifications
Besides viewing data change insights of individual analytic applications or stories via email and system
notifications, you can view all of your data change insights across applications and stories on your SAP
Analytics Cloud home screen.
First, you need to add the data change insights tile to your home screen. Go to your user profile, select
Home Screen Settings, and switch on Data Change Insights.
• Top N (by default top 10): Displays top N data change insights across applications and stories. From the
dropdown menu you can customize the number of insights to be shown on the top N list, ranging from 1 to
20.
• Notifications: Displays all the data change insights by date. You can see how many data increases, data
decreases and top N member changes happened within a certain period and expand to view all the
insights.
Note
Furthermore, you can filter all the data change insights by time range (by default last seven days), importance
(by default all) and subscription type, Subscribed by Me (default) or Shared with Me.
Note
Your tile display settings will be lost if you refresh the home screen.
The tile is resizable. In the enlarged tile, you can search by keywords to find the insights.
Special Cases
• On application or story level, no data change insights are generated and users get a warning if:
• The application or story gets modified during subscription period.
• The application or story view is a different one.
• The model variables from the URL parameter are changed.
• On chart level, the comparison within a chart is ignored if:
• A chart's model variable is changed.
• A chart's filter is changed (except for dynamic datetime filter).
• A chart's calculated measure's referenced script variable is changed.
When viewing a story, use the keyboard instead of the mouse to find a specific widget or action menu on a
story page.
You can use specific keys on your keyboard to move between different parts of your SAP Analytics Cloud story
pages. When the focus is on a widget such as a chart, you can also use the keys to open the context menu and
change the information that is displayed in the chart.
• Navigate through story and page filters (input controls), and through variables.
• Add filters or variables to your story.
Tip
To use the function keys on some keyboards, you need to press the Fn key first.
For example, you would need to use the following key combination: Fn + F6
• Use to change the focus from a chart to its action menu ( More Actions).
• Arrow keys:
• Navigate between items in a list.
For example, you can use the arrows to navigate between the items in the action menu.
• Navigate between widgets on a page.
The right and left arrows will navigate to all the widgets on a page. The up and down arrows can handle
vertical navigation, but may not get to every widget on the page.
To navigate out of the table data cells (and into another widget on the same page), do the following:
1. Press F6 .
The focus moves away from the story page.
2. To move the focus back to the same story page, press Shift + F6 .
The focus moves to one of the widgets on the story page.
3. Use the arrow keys to move the focus to the next widget that you want to work on.
Tab to the action menu ( (More Actions)) and then press Enter to open it.
You can navigate input controls (also referred to as page filters) using a combination of keys, including F6 ,
Tab , arrow keys, and Enter .
The following workflow describes the navigation process to change selected members in an input control.
Use this process when the pages are listed in the menu toolbar:
1. Press F6 until the focus is on the menu toolbar.
2. Use Tab to move to the page names list. Use the arrow keys to find the page.
3. Press Enter. You've switched to the page with an input control.
Use this process when the Tab Bar is displayed:
1. Press F6 until the focus is on the menu toolbar.
2. Use Tab to move to the page numbers list. (The focus should be on the right arrow.)
3. Press Enter until the desired page is selected. You've switched to the page with an input control.
Restrictions
In the following situations, you may not be able to navigate as expected with the keyboard.
• Account dropdown list in the Advanced Sorting dialog: you need to press Enter instead of Alt + Down arrow.
• Using F6 for group skipping from a widget in the story to the hamburger menu (Home sidebar): focus moves to a hidden element.
• Using the keyboard to navigate between widgets when the mouse focus is on a Geo map widget: the Geo map contents are scrolled.
• Geo map and R visualization, navigating the context (more actions) menu: opening sub-menus in the context (more actions) menu is not currently supported.
• Apply custom styling to a widget or story: the dotted rectangle focus isn't visible around the widgets.
• Button background is a darker shade: the dotted rectangle focus isn't clearly visible on the button.
Learn how to share your analytics with others, export data, and collaborate with colleagues across your
organization.
• Learn How to Share and Collaborate in SAP Analytics Cloud [page 188]
• Share Files or Folders [page 193]
• Share Stories or Global Bookmarks [page 205]
• Publish and Share Content to the Catalog [page 207]
• Share and Collaborate Within Workspaces [page 212]
• Schedule a Publication [page 219]
• Share Links to Additional Analytics Content [page 233]
Exporting Data
Get an overview of sharing and collaboration, and learn different ways you can share your analytics content
with colleagues and collaborate with them on it.
• Users who have the Share permission for any data objects.
• Users who have the Create or Manage permission for Schedule Publication.
• Users who have the Execute permission for Publish Content.
When you've created some valuable content, like an informative story or boardroom presentation, you might
want to share it with others so they can benefit from the information too, or contribute to it.
There are many reasons for sharing your content with others. For example, you might want to show your
forecasts to decision makers in your organization, or you might have developed a great analytic application that
you want your team to use.
Keep all of your team's content in one place, separate from other teams'
content
Especially in larger organizations with many users and a lot of content, it can be challenging to organize and
manage the content, or to restrict access (for example, for HR or Finance data).
Collaboration features let you invite others to contribute their ideas or data to your work, or you might just want
to run your content by your peers for feedback.
• You and your team can add comments to specific pages or widgets within a story.
If people depend on you for regular updates with the latest information, you might want to take advantage of
the scheduling feature so that they automatically and reliably get that information.
• You can set up a schedule to periodically deliver content, such as stories or analytic applications, to a
group of users.
Schedule a Publication [page 219]
SAP Analytics Cloud has many analysis and presentation features, but some of your colleagues might prefer to
use different tools, like Excel, PowerPoint, or Google Slides. It's easy to export your data to other formats, like
comma-separated-values (CSV) or PDF, for use with those applications.
When you share content, you'll want to be sure that the users you're sharing your content with have the
appropriate access to the content. For example, you might want some users to have only Read access so they
can't make any changes. For others, you'll want to give them Update access so they can make changes to suit
their line of business.
You can share files and folders with other users or teams to give them access to content so that they can
benefit from the information or contribute to it.
• Users who are assigned a role with the appropriate permissions for the object type and file type (public
or private), including the Read or Manage (only for the file type) permissions, and the Read permission for
Users or Teams.
• Users who are content owners or have the Share permission for any files or folders that are shared with
them.
Note
The information in this section is for a system administrator. For more information, see Permissions [page
2844] and Create Roles [page 2877].
For a user to be able to share files and folders, the system administrator must create custom roles that have
the appropriate permissions for the following:
• Object type (for example Content Link or Planning Model), including the Read permission
• File type (Public Files or Private Files), including the Read or Manage permissions
Note
In most cases, the Read permission for Public Files or Private Files is enough to allow users to share files
and folders. In cases where you want users to have elevated administrator-type permissions, select the
Manage permission.
• Content creator role that has Create, Read, Update, and Delete permissions selected for Content Link.
• Content viewer role that has Read permission selected for Content Link.
Both custom roles have Read selected for Public Files and Read selected for Users or Teams. The system
administrator assigns the content creator role to you and the content viewer role to a different user. In the
Public folder, you create a content link file. When you share the file with that user, you can choose to give them
the Share permission so that they can also share the file.
Note
The information in this section focuses on how you, the content owner or user with the Share permission,
shares files and folders with others. It's assumed that the system administrator has set up all the roles to
have all relevant permissions available to the users (as described in the previous section) and has assigned
the roles to the users.
You can share files for data objects, such as stories, models, and analytic applications, and folders by giving
users or teams a predefined access level (View, Edit, or Full Control) or by choosing Custom to give them
custom access.
Predefined access levels are groups of permissions and can be given to users or teams. In many cases, the
predefined access levels will meet your needs. They are also easier to manage than using custom access
settings. For example, to allow some users to view a file but not edit it, you can give them the View access level.
• Edit: Includes all View access level permissions plus: Update, Create files*, Create folders*, Maintain
• Full Control: Includes all View and Edit access level permissions plus: Delete, Delete Comment, Share.
Users with this access level can move files or folders between locations.
Note
The Execute functionality is currently not available and is reserved for future enhancements.
You can ignore this option, as selecting or clearing the checkbox has no effect on the functionality.
If the predefined access levels don’t suit your needs, select Custom and choose which permissions to give
users and teams access to. For example, if you give a user Full Control access, that user automatically gets
access to both Delete and Share content. But if you choose Custom, in the Set Custom Access dialog, you can
give the user access to Delete content but not Share content.
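The relationship between the predefined access levels can be pictured as cumulative permission sets, with Custom as an arbitrary selection. The following sketch is illustrative only: the permission names come from this section, and the function is hypothetical, not an SAP Analytics Cloud API.

```python
# Illustrative model of the predefined access levels described above.
# Permission names come from this section; this is not an SAC API.
VIEW = {"Read"}
EDIT = VIEW | {"Update", "Create files", "Create folders", "Maintain"}
FULL_CONTROL = EDIT | {"Delete", "Delete Comment", "Share"}


def effective_permissions(access_level, custom=None):
    """Return the permission set for a predefined access level, or the
    explicitly chosen permissions when Custom access is used."""
    levels = {"View": VIEW, "Edit": EDIT, "Full Control": FULL_CONTROL}
    if access_level == "Custom":
        return set(custom or ())
    return levels[access_level]


# Custom access can grant Delete without Share, which no predefined
# level allows (Full Control always bundles the two together):
custom = effective_permissions("Custom", {"Read", "Update", "Delete"})
assert "Delete" in custom and "Share" not in custom
assert "Share" in effective_permissions("Full Control")
```

This also makes the "easier to manage" point concrete: predefined levels are fixed bundles, so changing a user from View to Edit is one choice rather than several individual permission toggles.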
Context
You can share a file from the Files page or when you have it open.
Note
• When sharing a file, for example a story, that depends on underlying objects, like models or datasets,
you also need to share those other objects with the same access level as the file. Furthermore, if
the dependencies are from different workspaces, you must also request access to the workspaces and
objects on behalf of those users.
• If an object in your story uses a private version, the dimension member IDs of the private version might
be exposed when you share the story with another user.
Procedure
1. From the side navigation, choose Files, select one or more files, and select (Share) to open the Share dialog.
Or if you have the file open (for example, a story or analytic application), select (Share).
2. Under Add Users or Teams, select (Add usernames or teams) to open a dialog where you select the
users or teams that you want to share the file with.
If you know the user and team names, you can type them instead of using the dialog.
Tip
• When working within a workspace, you can share files only with other members of the workspace.
• When working in the My Files view, you can select All Users to share with everyone. This option
can be disabled by a system administrator setting when configuring the system settings. For more
information, see Configure System Settings [page 2760].
3. Select the access level you want to assign for those users, or choose Custom to set custom access.
• The Public Folder itself can't be deleted or copied, but system administrators can select Copy and
Delete permissions on the folder so that items that are created in this folder will inherit the sharing
settings by default.
• For stories and analytic applications, users with only the Read permission are able to change the
variables for the model in the story or analytic application. Changing the model variables updates
all the charts and tables in the story or application unless they were overridden. Model variable
settings set by the viewer are not saved.
4. (Optional) For some types of files, for example, stories, analytic applications, and digital boardrooms, you
can select Customize Link to customize the URL link.
5. (Optional) To email people to let them know that you are sharing a file with them, select Email new
recipients.
The email is sent the first time the file is shared with someone. This option is cleared by default. If you
choose to add all users, this option is not available. The users will still receive a notification in the system,
but they won't receive an email.
If you later change their access rights, they will receive a notification in the system, but they won't receive
an email.
6. (Optional) If the file you are sharing is a story that contains global bookmarks and you have it open in Edit
mode, select Apply global bookmark default, and then select a bookmark to use.
For more information about bookmarks, see Bookmark and Share Story Views [page 169].
You can review the changes and then close the dialog.
Results
After the file is shared, the Shared with list appears below the sharing settings. Expand the list to see all users
and teams and their access level for the shared file. You can use the list to change the sharing settings for the
file.
On the Files page, the icon appears beside the name of the shared file. You can select the icon to reopen the
Share dialog and see or change the file's sharing settings.
Sharing a Folder
Context
The steps for sharing a folder are the same as the steps for sharing a file, with a couple of differences.
Procedure
1. From the side navigation, choose Files, select one or more folders, and select (Share) to open the Share dialog.
2. Under Add Users or Teams, select (Add usernames or teams) to open a dialog where you select the
users or teams that you want to share with. Or if you know the name of the user or team, you can type it
instead.
Tip
• When working within a workspace, you can share folders only with other members of the
workspace.
• When working in the My Files view, you can select All Users to share with everyone. This option
can be disabled by a system administrator setting when configuring the system settings. For more
information, see Configure System Settings [page 2760].
By default, the Share existing subfolders and files checkbox is cleared and must be set each time you share a
folder. When the checkbox is cleared, the sharing settings are only applied to the selected folder. The sharing
settings of its subfolders and files are not affected.
For example, you share a folder and select Share existing subfolders and files. All existing subfolders and
files will inherit the same sharing settings as the selected folder. However, if you don't select Share existing
subfolders and files, the sharing settings of the selected folder won't be applied to any of its existing
subfolders or files, and all subfolders and files will retain their original sharing settings. Also, regardless
of the setting of the Share existing subfolders and files checkbox, all new subfolders or files created in the
selected folder will automatically inherit the sharing settings.
5. (Optional) To email people to let them know that you are sharing a file with them, select Email new
recipients.
6. Choose Share.
You can review the changes and then close the dialog.
Results
After the folder is shared, the Shared with list appears below the sharing settings. Expand the list to see all
users and teams and their access level for the shared folder. You can use the list to change the sharing settings
for the folder. For users and teams that have inherited sharing settings from the parent folder, the icon
appears next to their names.
On the Files page, the icon appears beside the name of the shared folder. You can select the icon to reopen
the Share dialog and see or change the folder's sharing settings.
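The effect of the Share existing subfolders and files checkbox can be sketched as a simple recursion over a folder tree. This is an illustrative model only; the Node class and function names are hypothetical, not part of SAP Analytics Cloud.

```python
# Hypothetical folder-tree model of the propagation rules above.
class Node:
    def __init__(self, name, children=None, sharing=None):
        self.name = name
        self.children = children or []
        self.sharing = sharing or {}


def share_folder(folder, sharing, share_existing=False):
    """Apply sharing settings to a folder. Existing subfolders and files
    are updated only when 'Share existing subfolders and files' is set;
    otherwise they keep their original settings."""
    folder.sharing = dict(sharing)
    if share_existing:
        for child in folder.children:
            share_folder(child, sharing, share_existing=True)


def create_file(folder, name):
    """New items created in a folder always inherit its sharing
    settings, regardless of the checkbox."""
    child = Node(name, sharing=dict(folder.sharing))
    folder.children.append(child)
    return child
```

Under this model, sharing a folder without the checkbox changes only that folder, while anything created afterwards still picks up the folder's settings, matching the behavior described above.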
After a file or folder is shared, you can open the Share dialog. The lower half of the dialog has the Shared with
list, which you can use to see who the item is shared with or to change the sharing settings.
• When one file or folder is selected, the Shared with list shows all the users and teams who have shared
access to the file or folder and their access level.
You can change the access level for a single user or team directly in the Access column, or you can use the
toolbar options for changing the share settings for several users and teams:
Tool Description
(Unshare) Unshare (or stop sharing) the selected files or folders with the selected users or teams.
• Inherit from parent: Applies the existing sharing settings from the parent folder to the se-
lected files or folders. This option appears for folders and files that have a parent folder and is
available when no users or teams have been selected.
• Apply to subfolders: Applies the existing sharing settings from the selected folder to all its
subfolders and files. This option appears only for folders and is available when no users or
teams have been selected.
• View, Edit, or Full Control access levels: Applies the new access level to the selected files and
folders for all selected users and teams.
• Custom: Applies only the permissions that you custom select to the selected files and folders
for the selected users and teams.
Caution
Changes to the sharing settings for files and folders and access levels for users and teams are
immediate and can’t be easily reversed.
For example, you choose to apply the sharing settings from the parent folder to the selected
folder (Inherit from parent). After the change is applied, users who previously didn't have view
or edit access to the files and folders will gain that access, and users who had view or edit
access to the files or folders may lose that access.
Context
If the file or folder you selected has a parent folder, you can use the Inherit from parent option to apply the
sharing settings of that parent folder to the selected files or folders.
Procedure
1. From the side navigation, choose Files, select one or more files or folders, and select (Share) to open the Share dialog.
2. Expand the Shared with list, open the toolbar menu, and select Inherit from parent.
Results
A message letting you know that the sharing settings have been updated appears and the Shared with list is
updated to reflect the changes. Any new subfolders and files that you create will use the sharing settings from
the parent folder.
Note
If you use the Inherit from parent option for the Public folder, the Public folder's sharing settings are reset
and restored to its original settings. Any modifications you made to the sharing settings on the Public
folder will be lost. The settings on the Public folder are always applied to all users. However, system
administrators can use the System view to change the sharing settings on the Public folder.
Context
You can change the access level to shared files for a single user or team directly in the Access column or for
several users or teams using the toolbar.
Procedure
1. From the side navigation, choose Files, select the files, and select (Share) to open the Share dialog.
2. Expand the Shared with list:
• For one user or team, select a new access level directly in the Access column.
• For several users or teams, select the users and teams, open the toolbar menu, and then select the access
level (View, Edit, or Full Control) or select Custom.
A message letting you know that the sharing settings have been updated appears and the Shared with list is
updated to reflect the changes. The new access levels for the selected users and teams are applied. The access
levels for users and teams that are not selected don't change. Any new subfolders and files that you create will
use the updated access levels.
Context
You can change the access level to shared folders for a single user or team directly in the Access column or for
several users or teams using the toolbar.
Procedure
1. From the side navigation, choose Files, select the folders, and select (Share) to open the Share dialog.
2. Expand the Shared with list:
• For one user or team, select a new access level directly in the Access column.
• For several users or teams, select the users and teams, open the toolbar menu, and then select the access
level (View, Edit, or Full Control) or select Custom.
3. In the Confirmation dialog, choose how to apply the access change to the selected folders and select Apply
Changes:
• Only to the selected folder: The new access level for the selected users and teams is applied to the
selected folder only. If the selected folder has existing subfolders and files, the access levels for those
items remain unchanged.
Results
A message letting you know that the sharing settings have been updated appears and the Shared with list is
updated to reflect the changes based on the option you chose. Any new subfolders and files that you create will
use the updated access levels.
Tip
You can use a combination of the toolbar actions to change the sharing settings of a selected folder and all
its content. For example, to change access levels, remove users, and apply the changes to a selected folder
and its content, do the following:
1. Change the access level for several users and teams by using these steps.
2. Remove users and teams by using (Unshare).
3. Apply these changes to any subfolders and files by using the Apply to subfolders option.
• Files or folders that are copied don't retain their own explicit sharing settings. The sharing settings of the
copied files and folders are replaced and inherited from the new parent folder they are copied to.
• Files or folders that are moved retain their own explicit sharing settings. However, any sharing settings
inherited from their original parent folders are replaced with the new parent folder's sharing settings.
• Files and folders that are restored automatically inherit the sharing settings of the parent folder they are
restored to.
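The three copy, move, and restore rules can be summarized in one function. This is a sketch under the assumption that each item records whether its sharing settings were set explicitly or inherited; it is not an SAP Analytics Cloud API.

```python
# Sketch of the copy/move/restore sharing rules described above.
# Assumes each item tracks whether its settings are explicit or inherited.
def resulting_sharing(action, item_sharing, is_explicit, new_parent_sharing):
    """Sharing settings an item ends up with after a copy, move,
    or restore into a new parent folder."""
    if action == "copy":
        # Copies never keep their own settings; they inherit from
        # the folder they are copied to.
        return new_parent_sharing
    if action == "move":
        # Explicit settings survive a move; inherited settings are
        # replaced by the new parent folder's settings.
        return item_sharing if is_explicit else new_parent_sharing
    if action == "restore":
        # Restored items always inherit from the restore target.
        return new_parent_sharing
    raise ValueError(f"unknown action: {action}")
```

The key distinction is that only a move preserves anything, and only when the settings were set explicitly on the item itself.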
Code Syntax
In analytic applications, the Share dialog can also be opened from a script through the Analytics Designer API:
Application.openShareApplicationDialog() : void
Learn How to Share and Collaborate in SAP Analytics Cloud [page 188]
Permissions [page 2844]
Share Stories or Global Bookmarks [page 205]
Bookmark and Share Story Views [page 169]
Publish and Share Content to the Catalog [page 207]
When viewing a story, you can share stories or change the story sharing settings. As a story designer, you can
also share a global bookmark for the story.
Tip
Another way to share stories is to publish them to a catalog (select the menu option Publish to
Catalog). For detailed publishing information, see Publish and Share Content to the Catalog [page 207].
Sharing Stories
Note
Users with read-only access are able to change the variables for the model in the story. Changing
the model variables updates all the charts and tables in the story unless they were overridden. Model
variable settings set by the viewer are not saved.
Note
If an object in your story uses a private version, the dimension member IDs of the private version might
be exposed when you share the story with another user.
4. If you want to email people to let them know that you are sharing with them, select Email new recipients.
Note
If the story you are sharing uses content from different workspaces or folders, you must also give (or
request) access permission to those users so that they can access the content associated with the
workspace or folders where the content is saved.
Note
When users tap the web sharing link from their mobile device (the link they get when you share a story with
them), after they authenticate on the mobile browser, they are prompted to open the story in the Mobile
App.
Modifying the share settings allows you to grant custom share permissions to all users, teams, or individual
users.
Note
You can only modify the sharing settings for a single story. To set permissions for multiple stories, select
stories, then set permissions in the Access list in the Share Story dialog.
1. From the side navigation, select Files and then select the check box for a story.
2. Select (Share).
3. Select one user or select all users that the story has been shared with.
4. (Optional) Select Add Users and Teams, and choose individual teams or users from the list, then assign
permissions to teams or users.
5. Select Share.
When you share the selected story, the settings you saved will be applied.
Note
The settings are only saved for an individual story. If you select a different story, you will need to save new
share settings.
You can publish content such as stories, digital boardroom presentations, analytic applications, models and
datasets, and files to the catalog.
• Users who are assigned a role with the Execute permission for Publish Content.
• Users who are assigned a custom role based on one of these standard application roles: Admin or BI
Admin.
Your organization may have hundreds or thousands of saved stories. You or other content creators probably
carefully created some of those stories, but some others might contain outdated information, or be the result
of users experimenting with various features. How can users tell which stories they should be using?
Each item published to the catalog is displayed as a card that users can open to access the underlying content.
The following content types can be published and displayed in the catalog:
• Stories
• Analytic Applications
• Digital Boardroom presentations
• Models
• Datasets
• Uploaded SAP Analytics Cloud files
• Content Links
• Insights
• Analysis Workbooks
Note
The catalog can be accessed through iOS mobile devices for viewing purposes only. For details, see Using
Catalog on the Mobile App [page 119].
Publishing Content
Context
When your content is ready for other users to consume, you can publish it to the catalog.
Procedure
1. From the side navigation, choose Files, and find the content you want to publish.
2. Select the check box for the content, and then select (Share) and Publish to Catalog.
Tip
You can share a saved file (for example, a story, presentation, analytic application, etc.) if you have it
opened in Edit mode.
3. Under Add Teams, specify which teams can view the published content.
If the content you are publishing is in a private or public location and you want to publish the content
for all current and future users in your system, type All Users or select the All Users option. This
option can be disabled by an administrator setting when configuring the system settings. For more
information, see Configure System Settings [page 2760].
4. Choose whether you want to give users and teams Read access to the content.
• If you grant Read access, users will be able to access the content by selecting Open on the published
catalog card. If the content you're publishing depends on other files (dependencies), such as stories or
datasets, access to those dependencies needs to be granted separately if you want users to have full
access to the content.
• If you don't grant Read access, users will need to request access to the content by selecting Request on
the published catalog card. Access requests are sent to the system administrators, and automatically
include requests for any dependencies. For more information, see Publish and Share Content to the
Catalog [page 207].
Note
Access requests for file dependencies where the file dependencies are from different workspaces that
you don't have access to are not automatically generated, and no automated notification is provided.
For more information, see Enable Content Discoverability with the Analytics Catalog [page 2941].
6. (Optional) To enhance the content card details with images, a description, filters, and links, select Edit
details.
In the dialog that appears, the system administrator adds tabs and can set them to mandatory.
Mandatory tabs always appear, have an asterisk (*), and require information before you can publish
the content. Tabs that aren't mandatory can be added or removed by the user. If you're a system
administrator and want more information, see Enable Content Discoverability with the Analytics
Catalog [page 2941].
Note
Changing the name or description of a catalog card created from a file (for example, story or
model) will also change the name or description of the underlying file.
Users can find your content by typing the tags in the Search.
c. In the first tab, type a description of the content and add an image.
By default, only the first tab is available and is named Overview, but your system administrator can
change its name.
d. Add details to any mandatory tabs.
f. Under File, you can add additional links that are relevant to this content. Links can be other SAP
Analytics Cloud files or third-party URLs.
Note
In the catalog card, if linked resources become unavailable on the tenant, the links are not
clickable, and the link icons change to show a generic file icon. The Edit link button is also not
available.
Tip
Applying filters is important in helping users find the content they are interested in. For more
information, see Find and Access Content in the Catalog [page 55].
Results
The content is published and will appear as a card in the catalog. From the side navigation, select Home and
then select the Catalog tab to view the card associated with the published content. If you have the appropriate
permissions, you can perform any of the following tasks as needed:
• If you're assigned a role with the Execute permission for Publish Content, select Recommend to
recommend the content.
• If you're assigned a role with the Execute permission for Publish Content and have the Share access right
for the content, select Manage to choose who can see the content.
• If you're the content owner or have the Update access right for the content, select Edit to edit the content
card details.
Note
Changing the name or description of a catalog card will also change the name or description of the
underlying file.
Context
You may want to remove content from the catalog when it's outdated or no longer useful.
Procedure
Related Information
Within a workspace, you and other members of your team can access and collaborate on shared content. If
you’re a designated workspace administrator, you can also organize content within the workspace for easier
access.
Tip
If you’re a system administrator who wants to learn about planning and creating workspaces for your
organization, check out Configure Workspaces [page 2950]. You’ll also find information on how to assign
workspace administrators.
Review these examples to understand the basics for how sharing and collaboration work within workspaces.
In real-world situations, there can be several different and complex combinations of users, teams, and
workspaces.
In this simple example, a company has 30 users in three business units. The system administrator creates
the following users (Users 1-30), teams (BU1, BU2, and BU3), and workspaces (BusUnit1, BusUnit2, and
BusUnit3).
Each business unit has a secure area for their content that only users of the specific teams can access.
Users in team BU1 can create content only within workspace BusUnit1. They can share and collaborate on
content with other users in team BU1. Also, when they publish content to the Catalog, only users in team
To build on the previous example, a system administrator can set up a user to be a member of more than
one team. When this happens, the user can access the content in, and collaborate with other users from, one
or more workspaces. Using the same users, teams, and workspaces as the previous example, the following
change is made: User1 and User2 are members of teams BU1 and BU2.
User1 and User2 have access to workspaces BusUnit1 and BusUnit2. Everything in the previous example is still
true with the update that User1 and User2 can access content from workspaces BusUnit1 and BusUnit2 and
share and collaborate with members of teams BU1 and BU2.
When User1 creates a file (for example, a story) in one workspace BusUnit1 that relies on content (for example,
a model) from the other workspace BusUnit2, only User1 and User2 can see the full story. Users in team BU1
will be able to access the story, but won’t see the full story because they can’t access the content in workspace
BusUnit2. Users in team BU2 only have access to workspace BusUnit2, so they can’t see the story.
Note
When a file (for example, a story) from one workspace depends on underlying objects like models or
datasets from a second workspace, only users who are members of the two workspaces will be able to
see the full story. In these cases, the content creator must contact the system administrator and request
access for users they want to share the story with.
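The visibility rule in these examples reduces to a set intersection: a user sees the full story only if they belong to every workspace that the story and its underlying objects come from. The following sketch is illustrative only, using the user, team, and workspace names from the example; it is not an SAP Analytics Cloud API.

```python
# Hypothetical membership table, following the example above:
# User1 and User2 are in teams BU1 and BU2, so they can access
# workspaces BusUnit1 and BusUnit2.
memberships = {
    "User1": {"BusUnit1", "BusUnit2"},
    "User2": {"BusUnit1", "BusUnit2"},
    "User3": {"BusUnit1"},   # a typical BU1-only member
    "User11": {"BusUnit2"},  # a typical BU2-only member
}


def sees_full_story(user, story_workspaces):
    """True only if the user is a member of every workspace that the
    story and its underlying objects (models, datasets) come from."""
    return story_workspaces <= memberships.get(user, set())


# Story created in BusUnit1, relying on a model from BusUnit2:
story = {"BusUnit1", "BusUnit2"}
assert sees_full_story("User1", story)
assert not sees_full_story("User3", story)   # can open the story, but not see all of it
assert not sees_full_story("User11", story)  # no access to BusUnit1 at all
```

This is why the note above tells content creators to contact the system administrator: users outside the intersection need access requested on their behalf.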
To build on the previous example, a system administrator can assign a team to more than one workspace.
When this happens, the users in the team can access the content from and collaborate with other users from
one or more workspaces. Using the same users, teams, and workspaces as the previous examples, one change
is made: Team BU3 is assigned to workspaces BusUnit2 and BusUnit3.
All users in team BU3 have access to both workspaces BusUnit2 and BusUnit3. Everything in the previous
example is still true with the update that users in BU3 can access content from workspaces BusUnit3 and
BusUnit2 and can share and collaborate with members of teams BU3 and BU2.
When a user in team BU3 creates a file (for example, a story) in workspace BusUnit3 that relies on content (for
example, a model) from the other workspace BusUnit2, only users in team BU3 will be able to see the full story.
Users in team BU2 only have access to workspace BusUnit2, so they can’t access the story that is created in
workspace BusUnit3.
As a workspace administrator, you can see the details of a workspace. Also, you can access and organize
content in workspaces from the System view on the Files page.
Context
On the Workspace Management page, you can see the workspaces that you have administrative rights to.
Procedure
Tip
You can see which users are on a team by clicking the number under the Team Members column.
Note
Only a system administrator can edit the workspace details and add or remove teams and workspace
administrators.
Context
Sometimes, users will leave a team before the end of a project, making it difficult to access their files. As a
workspace administrator, you can access any user's content or files within the workspace.
Procedure
There, you can see all the content created by the inactive user and share it to be accessible to other
members of the workspace.
Note
The System view is only available for users with administrative privileges. System administrators and
workspace administrators can access any content created by members within a workspace, regardless
of whether the user shared the files with them.
Workspace administrators can also access content through the workspace view but can only view or
edit files that have been shared by the user who owns the file.
Context
As a workspace administrator, you can organize content in a single workspace or between workspaces by
creating folders and moving files to the folders. Organizing content into folders makes the content easier to
access or share with other members of the workspace. For example, you can create a folder for a project and
easily share it with the team.
Tip
If needed, check the sharing settings of the content before you move it. You can make note of the share
settings and then apply them when you share the newly created folders.
Note
If you move content from a workspace to the Public folder, users who previously didn't have access will
gain access. If you move content from one workspace to another workspace, users who previously had
access to the content will lose access. However, if the users or teams are also members of the receiving
workspace, you can share the content with them again.
Procedure
4. For each folder you create, go to the toolbar and select Folder Actions New Folder.
5. In the dialog, name the folder, and click OK.
6. After you've set up your folders, find the content you want, select the checkbox next to it, and select
Folder Actions Move To.
7. In the move dialog, select the workspace where you created the folders and then select one of the folders,
and click OK.
8. After you have organized the content, share the folders with the teams and users.
Tip
If you’re not a workspace administrator but own content in a workspace, you can move that content
between workspaces or between workspaces and the Public folder on the Files page.
Learn How to Share and Collaborate in SAP Analytics Cloud [page 188]
Collaborate by Having Group Discussions [page 239]
Publish and Share Content to the Catalog [page 207]
Share Files or Folders [page 193]
Find and Access Content in the Catalog [page 55]
Configure Workspaces [page 2950]
As a content owner, you can set a schedule for sharing a copy of your story or analytic application with users
and teams of your choice. You can share the publication with any number of SAP Analytics Cloud users within
the tenant and a maximum of three non-SAP Analytics Cloud users.
Scheduling a publication is the process by which you create a specific task and have it run once or at recurring
intervals. In our context, you can schedule stories or analytic applications and have them delivered to desired
recipients both within SAP Analytics Cloud and outside it. Let's take a look at some of the common terms you
encounter while creating a scheduled publication.
Publication: The output of a story or analytic application that is generated when a schedule runs.
Broadcast: Allows the scheduling and sharing of analytics content with SAP Analytics Cloud users as well
as non-SAP Analytics Cloud users. Using the Broadcast mode, publication schedulers can share stories
Burst: Allows the scheduling and sharing of analytics content with SAP Analytics Cloud users as well as
non-SAP Analytics Cloud users. Using the Burst mode, publication schedulers can share stories with SAP
Analytics Cloud users that contain data the recipients are authorized to view. For non-SAP Analytics Cloud
recipients in the Burst mode, the publication will still contain information that the schedule creators have
access to.
Destination: The location where the scheduled publication will be delivered to.
Authorization: The data access rights associated with a scheduled publication. While scheduling a publication,
if the mode chosen is Broadcast, the publications will be delivered to the desired recipients based on the data
that the schedule owner has access to. If the mode chosen is Burst, the publications will be delivered to the
respective recipients based on the data each recipient is authorized to view.
Bookmark: The copy of a story that you customize for a specific audience and that's saved for quick reference
and retrieval. In scheduling, personalization for a particular audience is achieved through bookmarks and by
using Prompts.
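The Authorization rules above come down to whose data-access rights each generated publication uses. The following sketch makes that decision explicit; the function is hypothetical, not an SAP Analytics Cloud API.

```python
# Sketch of the Broadcast vs. Burst authorization rules above.
def authorization_context(mode, recipient_is_sac_user, scheduler, recipient):
    """Whose data-access rights a generated publication is based on.

    Broadcast: always the schedule owner's data access.
    Burst: the recipient's data access for SAP Analytics Cloud users,
    and the scheduler's for non-SAC recipients."""
    if mode == "Broadcast":
        return scheduler
    if mode == "Burst":
        return recipient if recipient_is_sac_user else scheduler
    raise ValueError(f"unknown mode: {mode}")


assert authorization_context("Broadcast", True, "owner", "alice") == "owner"
assert authorization_context("Burst", True, "owner", "alice") == "alice"
assert authorization_context("Burst", False, "owner", "bob@external") == "owner"
```

In other words, Burst personalizes data per SAC recipient, while Broadcast sends everyone the view that the schedule owner is authorized to see.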
Understanding what scheduling publication offers will help you better decide on how and what you want to
schedule and share with your team.
Formats supported: for stories in classic mode, PDF and PowerPoint (.ppt).
Note
You can also share a link to the story while scheduling a publication.
Content you can schedule: stories (along with their bookmarks) and analytic applications.
Scheduling locations: the Favorites menu, Featured files, the Files page, or from within the story itself.
You can create a schedule if you have one of the following:
• BI Admin or Admin roles that come with the Schedule Publication privilege by default.
• Any custom role that has been assigned the Schedule Publication privileges, such as Create and Manage.
• Anyone with a Copy or Edit permission to a story, once the Schedule Publication privilege is granted.
Note
When a user is deleted, all schedules created and owned by the user will be deleted.
Note
If you're an administrator and are looking at configuring the publication schedule as well as
understanding how many licenses you would need to create the necessary number of scheduled
publications, see Configure Scheduling of Publication [page 2957].
Note
You can also assign Schedule Publication permission to custom roles. To create custom roles, see
Create Roles [page 2877].
Note
When choosing Burst as the scheduling mode, make sure that the recipients have access to the story and
the underlying models.
1. If you choose Broadcast as the scheduling mode (and have the required privileges to create a schedule),
the output generated when the schedule runs is based on the data you're authorized to access. Simply put,
the recipients see the information that you can see.
2. If you choose Burst, the output generated is based on each recipient's data authorization for SAP
Analytics Cloud users, and on the scheduler's data authorization for non-SAP Analytics Cloud users.
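The two authorization rules above can be sketched as a small helper function. This is purely illustrative: the function, parameter, and mode names are assumptions made for the sketch, not part of any SAP Analytics Cloud API.

```python
# Illustrative sketch of the Broadcast/Burst authorization rules described
# above. All names here are hypothetical.

def effective_data_user(mode: str, scheduler: str, recipient: str,
                        sac_users: set[str]) -> str:
    """Return whose data authorization the generated output is based on."""
    if mode == "Broadcast":
        # Broadcast: every recipient sees the data the scheduler can see.
        return scheduler
    if mode == "Burst":
        # Burst: SAP Analytics Cloud recipients see their own data;
        # non-SAC recipients (plain email addresses) fall back to the
        # scheduler's authorization.
        return recipient if recipient in sac_users else scheduler
    raise ValueError(f"unknown scheduling mode: {mode}")

sac_users = {"alice", "bob"}
assert effective_data_user("Broadcast", "alice", "bob", sac_users) == "alice"
assert effective_data_user("Burst", "alice", "bob", sac_users) == "bob"
assert effective_data_user("Burst", "alice", "[email protected]", sac_users) == "alice"
```

The key design point, as the documentation states, is that Burst falls back to the scheduler's authorization whenever the recipient is not an SAP Analytics Cloud user.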
Note
Users with Manage permission on Schedule Publication, and public and private files, can modify the
scheduled publication you've created.
Make sure your application is hosted on a supported data center. You can schedule a publication if you have a
minimum of 25 SAP Analytics Cloud licenses within a non-SAP data center environment:
• SAP Analytics Cloud can be hosted either on SAP data centers or on non-SAP data centers. Determine
which environment SAP Analytics Cloud is hosted in by inspecting your SAP Analytics Cloud URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
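The single-digit versus two-digit rule above can be expressed as a short check. The helper name and the example host names are assumptions for illustration; only the digit-count rule comes from the documentation.

```python
import re

# Hypothetical helper: classify an SAP Analytics Cloud tenant URL by the
# region code in its host name. Per the rule above, a single-digit region
# (us1, jp1) indicates an SAP data center; a two-digit region (eu10, us30)
# indicates a non-SAP data center.

def data_center_kind(url: str) -> str:
    m = re.search(r"\.([a-z]{2})(\d{1,2})\.", url)
    if not m:
        raise ValueError("no region code found in URL")
    digits = m.group(2)
    return "SAP data center" if len(digits) == 1 else "non-SAP data center"

# Example host names are made up for the sketch.
assert data_center_kind("https://mytenant.us1.example.cloud") == "SAP data center"
assert data_center_kind("https://mytenant.eu10.example.cloud") == "non-SAP data center"
```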
For more information on licensing, see Configure Scheduling of Publication [page 2957].
Note
Schedules of stories based on live connections will always run with the user credentials corresponding to
the live connection.
Context
You can schedule a publication for a story or analytic application that is in either classic mode or optimized
view mode.
• On the Files page (from the main menu, Browse Files ), select the story that you want to schedule,
and select the dropdown arrow next to the (Share) icon in the toolbar. Select Schedule Publication. Or
• From the File menu, select Schedule Publication.
Note
When a schedule runs, each publication gets 15 minutes to execute the workflow and generate an output.
If your story takes longer than 15 minutes to generate the output, your schedule will fail with an error
message Scheduling the publication failed because the export task couldn’t be completed in time.
Tip
Before scheduling a story, it is recommended that you do a manual export to make sure that the story
doesn’t take more than 15 minutes to be exported. Otherwise, your publication will fail when the schedule
runs. Also make sure that the maximum size of the output in either PDF or PPT doesn’t exceed 25 MB.
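The two limits in the tip above (the 15-minute execution window and the 25 MB output size) lend themselves to a simple pre-flight check after a manual export. This is an illustrative sketch; the function name and the idea of scripting the check are assumptions, not an SAP Analytics Cloud feature.

```python
import os
import tempfile

MAX_SIZE_BYTES = 25 * 1024 * 1024    # 25 MB size limit noted above
MAX_EXPORT_SECONDS = 15 * 60         # 15-minute execution window per publication

def check_manual_export(path: str, export_seconds: float) -> list[str]:
    """Return a list of problems that would make the scheduled run fail."""
    problems = []
    if export_seconds > MAX_EXPORT_SECONDS:
        problems.append("export took longer than 15 minutes")
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        problems.append("output exceeds the 25 MB size limit")
    return problems

# Quick self-check with a small temporary file standing in for the export.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 100)
assert check_manual_export(f.name, 60) == []
assert check_manual_export(f.name, 16 * 60) == ["export took longer than 15 minutes"]
os.remove(f.name)
```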
By default, the name of the schedule is populated with the name of the story. You can edit the name as
needed.
Procedure
1. Under Choose Mode, choose whether you want to Broadcast or Burst the story.
2. In the Start Date field, select to specify the start date and the time you want the schedule to run.
Note
When you select Check Availability, the calendar view shows the time slots that are fully available,
partially available, unavailable, conflicting, and nonconflicting, for a single occurrence or a recurrence
of a schedule. This helps you schedule publications based on the available time slots.
3. Select OK.
Note
You can also directly enter the date and time in the Start field and provide a future date in a valid
format.
4. Set the recurrence pattern on an hourly, daily, or weekly basis.
For weekly recurrence, select the day or days of the week on which you want to schedule the story, and set
the start date.
5. To end the recurrence, select the relevant option from the End Recurrence by dropdown menu.
The No End Date option is deprecated and will be removed by 2024.Q1. We recommend that you use the
End Date or Number of Occurrences option instead. For more information, see SAP Note 3358834.
You can end the recurrence based on the end date or the number of repetitions. You can also choose the
No End Date option to set the schedule to run for 365 days, after which it ends and a new schedule needs
to be created.
6. Select OK.
7. Choose a File Type.
Note
For stories in classic mode only PDF and PowerPoint file types are supported.
For stories in Optimized View Mode, Optimized Design Experience, and Optimized Story Experience,
only PDF and CSV file types are supported.
Note
You can press the [ key to insert dynamic text prompts into the email subject, body, or file name.
Dynamic text helps you personalize the story sharing experience at runtime.
9. In the Message field, type in an optional message. You can also use the available formatting options, such
as emphasizing text, including a list, and so on.
10. Select the Include link to story option, if you want to include a link to the story in the email.
11. (In Optimized Story Experience) If there's a script variable in your story, you can switch on Customize
Script Variable to let story viewers customize its value when they open the story.
Then, add one or more script variables, and define a default value for each of them.
12. You can customize the publications you schedule by choosing the recipients of your choice and choosing
between different story Views, from either global Bookmarks or personal Bookmarks, in the
Distribution menu.
a. Add the name of the users and teams within SAP Analytics Cloud either by typing in the names or by
selecting the Add Users or Teams icon in the input field.
b. Enter the email addresses of the non-SAP Analytics Cloud recipients separated by a comma.
Note
You can also view the last saved prompt within the story.
Keep the check box in the Set Variables dialog selected to retain the last saved value for dynamic
variables for the prompt you’ve chosen in a story.
By default, the prompt values that were defined while creating the story will be considered if you
don't edit them while scheduling.
Note
If you’ve chosen Burst as the mode of scheduling and you’re sharing the analytics content with
non-SAP Analytics Cloud users, the content shared will reflect the data authorization of the
scheduler, not the recipient.
f. To specify filter values during schedule creation, select Change Story Filter.
Note
• Change Story Filter is disabled if there are no filters selected during the story creation.
• Only dimension type filters by ID or by Description are supported. Measure type filters and
range filters applied to stories are not supported when creating a scheduled publication.
Restriction
Change Story Filter is not available for stories in Optimized View Mode.
A list of dimensional filters applied to the story drops down. Select the filter to which you need to make
the changes. The Set Filters dialog appears.
g. Make the necessary changes to the filter in the Set Filters dialog. Select OK to update the filter. To
discard the changes, select Cancel.
Note
• During the schedule creation and modification, you can only choose from the filters that are
applied and saved to the story. Hence it is recommended to select All members while applying
filters to the story.
• By default, the filters that were defined while creating the story will be considered if you don't
edit them while scheduling.
• The Change Story Filter button is disabled for in-progress and successful scheduled
publications.
13. Optionally, edit the name of the file from the File Name field.
14. To edit the file settings, select File Settings.
Note
For stories in classic mode only PDF and PowerPoint file types are supported.
For stories in Optimized View Mode, Optimized Design Experience, and Optimized Story Experience,
only PDF and CSV file types are supported.
If you select PDF or PowerPoint as the file type, you can set the following:
Note
You can choose a resolution for PDF and PowerPoint files shared via email that best fits the
purpose of your publication. The default page resolution is 1920 x 1080, and more options are
available when scheduling a publication.
Note
Selecting a higher resolution will increase the size of the file. This could exceed the size limit set for
email deliveries.
The Page Resolution setting is disabled for scheduled publications that are in progress or
successful.
Note
Due to the limit on the number of publications that can be scheduled per hour, which is based on your
license, you might notice unavailable slots for your current schedule. In that case, a warning message
displays the unavailable slots. For more information on slot availability, refer to the Licensing
information in this article.
As a Schedule Manager or Schedule Administrator, you get notified via email whenever the status of
your scheduled publication changes from open to successful, partially successful, failed, or canceled.
You can view details of the status by clicking the Open Task button on the email.
Caution
Before switching from classic view mode to optimized view mode and vice versa for a story or analytic
application, consider the following:
• After scheduling a publication with the CSV file type, if the story or analytic application that is in
optimized view mode is converted back to classic view mode, the schedule will fail.
• After scheduling a publication with the PPT file type, if the story or analytic application that is in
classic view mode is converted to optimized view mode, the schedule will fail.
• If the file type selected for the scheduling publication is PDF, switching between classic view mode
and optimized view mode and vice versa will not affect the scheduled publication.
• Formatting (such as cell color, font styles, and so on) is not retained in the publication.
• Histogram and Waterfall charts are not supported; make sure you do not include them in the
publication.
• All chart data is exported, not just the currently visible data.
For example, if your chart has a hierarchy, all the nodes of the hierarchy are exported, even if you have not
drilled down on the data in your chart.
• The hierarchy export behavior applies to all chart types except Time Series charts. Time Series charts
export only the currently selected level.
• During the publication export, hyperlinks are removed and hierarchies are flattened.
• If you have renamed a measure or dimension, you won't see your names in the exported data.
• The measure and dimension names from the data source are also exported.
• SAP Analytics Cloud profile settings do not influence the format of the exported data.
• There is no export of BW display attributes.
• When you export table data, only All Data export is supported. You can't share Point of View.
• Formatting (such as cell color, font styles, and so on) is not retained in the publication.
• During the publication export, hyperlinks are removed, hierarchies are flattened, and only the table data
region is exported.
• You can't export custom cells outside the data region (for example, on a grid page).
• If your table has a hierarchy, all the nodes of the hierarchy will be exported, even if you have not drilled
down on the data in your table.
• The All export doesn't include visible content applied to the table such as calculations, comment columns,
or renamed dimensions.
Stories may be made up of single or multiple widgets that are based on different types of models connected to
different kinds of data sources.
There can be instances where a data source isn't supported, a connection isn't working as intended, or an
incorrect formula or errors during data validation lead to the publication being only partially successful.
You have the flexibility to choose whether to deliver such publications or only complete ones. You can
choose from the following delivery options under Advanced Settings.
When you turn on the Deliver partially successful publications to scheduler toggle, the creator of the
schedule receives all publications, whether or not the content is complete.
You can view, update, copy, and delete the schedules you've created, from the Calendar.
Context
Note
Make sure you have the Manage permission granted for the object type Schedule Publication, which lets
you manage scheduled publications in the calendar. For more information, see Permissions [page 2844].
Note
Users with Manage permission on Schedule Publication, and public and private files can modify the
schedule you've created.
Procedure
1. To view the list of scheduled publications you’ve created, choose the view of the calendar you would like to
see: Day, Week, or Month.
2. To copy the series or an instance of an existing scheduled publication job and reuse it with the desired
scheduling parameters, search for the scheduled publication that you want to copy and select it.
Select (Copy) in the General menu. You can modify the Name, add a Suffix to the title of the
schedule, and change the date as needed. Select OK.
3. You can modify the scheduling parameters from the Details pane after copying the schedule in the
calendar.
4. To delete a scheduled publication, select the schedule you want to delete, and in the toolbar at the
top, select Remove. Select OK to delete a recurrence or an occurrence of the schedule.
For every scheduled publication event, you can view the detailed status of the publication generation and
delivery for each individual view and user.
Context
While managing scheduled publications, you can view more details about the status of your scheduled events
from your workspace calendar.
Note
• Make sure you have the Manage permission granted for the object type Schedule Publication, which
lets you manage scheduled publications in the calendar. For more information, see Permissions [page
2844]
• Users with Manage permission on Schedule Publication, and public and private files can modify the
schedule you've created.
Procedure
Results
A detailed status dialog appears, showing the status of the publication generation and delivery for each
individual view and user on the selected scheduled publication event.
During their lifecycle, SAP Analytics Cloud scheduled publication events have different statuses. Use this
reference to find out which statuses are available and under which circumstances an event receives a specific
status.
In Progress: The start date has been reached and the scheduled publication is being prepared for delivery.
Partially Successful: Some steps of the event were executed successfully, while others failed. The detailed
view provides you with more precise information on publication generation and delivery statuses.
Failed: The publication task failed due to an error. The detailed view provides you with more precise
information.
Canceled: The owner canceled the event or the selected slots were unavailable.
To modify an occurrence or a recurrence of a scheduled publication, from the calendar, either double-click the
publication or select the publication you want to modify and select the Designer button at the top-right corner
of the application. Make the required changes and select Update.
Note
You can specify filter values while modifying a scheduled publication using the Change Story Filter option
and only those filters that were defined during the publication creation will be considered.
Restriction
Change Story Filter isn’t available for stories and analytic applications that are in Optimized View Mode.
You can view the changes from the View Changes option. To discard the changes, select Revert.
You can't discontinue a scheduled publication that's in progress or has ended. The discontinued publication will
still be available on the calendar in the Canceled state and if needed you can create a copy of the scheduled
publication. A scheduled publication that's in the canceled state can be subsequently deleted.
You can create URL links to content outside of SAP Analytics Cloud, and save them in the Files list or publish
them to the Catalog. For example, you may have documents on a different BI platform system, or various web
resources, that you want to make available to SAP Analytics Cloud users.
Procedure
If there are many filters and links in the right-side panel, you might have to scroll down to see the switch.
Context
When you enable a content link for translation, the following metadata is translated:
• Title
• Description
• Titles of secondary links to other web content
Procedure
Caution
If you disable translation for a content link that has been enabled for translation, all of its translations
will be permanently deleted.
Related Information
You can collaborate with other users by adding and viewing comments on a specific page tab or widget in a
story. To view comments, select (Collaboration) on the shell bar and access the Comments panel or
select the comment icon at the top right of the page tab or widget.
Tip
If you're a system administrator, you can ensure that users can add comments to a specific page or widget
within a story by creating custom roles that have the Create and Read permissions for Comment. For more
information, see Permissions [page 2844].
Tip
If you're sharing a story with other users, you can ensure that they can add and view comments by giving
them access to the Add Comment and View Comment actions. To allow users to delete comments, give
them access to the Delete Comment action. For more information, see About Comment Permissions and
Options [page 241].
You can add comments to specific page tabs or widgets within any story that has been shared with you.
Note
Your comment can be up to 10,000 characters long for all forms of comments except comments on
dimensions. Comments on dimensions have a limit of 255 characters.
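The two length limits in the note above can be captured in a trivial validation sketch. The function and flag names are assumptions for illustration; only the limits themselves come from the documentation.

```python
# Illustrative check of the comment length limits stated above:
# 10,000 characters in general, 255 for comments on dimensions.

GENERAL_LIMIT = 10_000
DIMENSION_LIMIT = 255

def comment_is_valid(text: str, on_dimension: bool = False) -> bool:
    """True if the comment fits within the applicable character limit."""
    limit = DIMENSION_LIMIT if on_dimension else GENERAL_LIMIT
    return len(text) <= limit

assert comment_is_valid("a" * 10_000)
assert not comment_is_valid("a" * 10_001)
assert comment_is_valid("a" * 255, on_dimension=True)
assert not comment_is_valid("a" * 256, on_dimension=True)
```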
Note
You can't add comments to widgets that have input controls. For more information, see Configure Input
Controls in Analytic Applications [page 1899].
1. Select a widget to see (More Actions) in the upper-right corner of the widget.
2. Select (More Actions) and choose Add Comment.
3. In the Comment dialog, type a comment and select Add Comment, or select Cancel to discard the comment.
Note
Your comment can be up to 10,000 characters long for all forms of comments except comments on
dimensions. Comments on dimensions have a limit of 255 characters.
Note
• When comments are copied and pasted in the comment dialog from an external text source, the rich
text formatting options such as Bold, Italics, Underline, Bullet list, and Numbered list are retained.
• For the best comment formatting experience, it is recommended to use the default rich text formatting
options available in the comment dialog.
After a comment is added, the comment icon appears at the top right corner of the page tab or widget. The
comment icon and comment thread fade away if the story is not set to Comment Mode. To view a story in
Comment Mode, select Display Comment Mode.
Other users can select the comment icon to view your comment and reply to it or simply Like it. You can
also view your own comment and Like it, edit it, or delete it.
Up to four distinct comment threads can be added to a specific page tab or widget in a story. Each comment
thread can have a maximum of 100 comments. As users add multiple threads to a page tab or widget,
comment icons will be superimposed on one another. The top icon is associated with the most recent thread.
In addition to adding comments to specific page tabs and widgets, you can add comments to cells in tables. For
more information see Adding Comments to a Data Cell [page 1407].
Tip
For applications in embedded mode, you can add comments if your system administrator has enabled
commenting in embedded mode. For more information, see Work with Comments in Analytic Applications
[page 1887].
You can notify users of your comment on a story by mentioning them in the comment. To mention a user,
type @<username>. When you type the @ sign, a list of users who have access to the story appears.
You can scroll through the list of users or continue typing the user name to filter the list. When you find the
user you want, select it. The user you mention will receive a notification that they can access by selecting
(Notifications) on the shell bar. By default, all types of notifications are visible. If needed, you can select a
filter to see a specific type of notification.
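The filter-as-you-type behavior described above can be sketched as a small lookup. The function name and user list are assumptions for illustration; the behavior (typing after @ narrows the list of users the story is shared with) is from the text.

```python
import re

# Hypothetical sketch of the @mention lookup described above.

def filter_mentions(typed: str, shared_users: list[str]) -> list[str]:
    """Return shared users whose names start with the text typed after '@'."""
    m = re.search(r"@(\w*)$", typed)
    if not m:
        return []  # no @mention in progress
    prefix = m.group(1).lower()
    return [u for u in shared_users if u.lower().startswith(prefix)]

users = ["Anna", "Andre", "Bola"]
assert filter_mentions("Thanks @an", users) == ["Anna", "Andre"]
assert filter_mentions("Thanks @b", users) == ["Bola"]
assert filter_mentions("no mention here", users) == []
```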
Notifications can also be read by users with the SAP Analytics Cloud mobile app and sent to users by email.
Tip
If you're a system administrator, you can configure a custom SMTP server for email delivery. For more
information, see Configure Email Server [page 2787].
To view a list of the comment threads for the displayed story, select (Collaboration) on the shell bar and
choose the Comments panel. You can sort the comments by:
• Time, where you can select the sort order from newest to oldest or vice versa.
• Pages, where all comments appear under their respective page headings, in chronological order of
when they were added.
To view an entire comment thread, select the comment. In the comment thread, you can view and like any
previous comment, add a new comment, or delete a comment or the whole comment thread.
Tip
If you want to delete all comment threads in the story, in the Comments panel, select .
When working with a story that has comments, remember the following points:
• Creating a copy of a story using the Copy To or Save As operations doesn't copy the comments to the new
story file. Comments are retained in the original story file.
• Duplicating a page tab or widget doesn't duplicate the comments in the new page tab or widget.
Comments are retained with the original page tab or widget.
• Exporting a story to PDF doesn't export the comments to the PDF. Comments are only visible in the story
file when viewed in SAP Analytics Cloud.
• Viewing a story in or adding a story to the Digital Boardroom doesn't allow you to see the comments.
Comments aren't visible in stories when using the Digital Boardroom.
• Creating a private version of a story will copy comments from one version to another. For more
information, see Sharing Private Versions [page 2186] and Making Private Versions Public [page 2185].
• Renaming a story and viewing or adding it in Present mode preserves all comments in the story.
Related Information
You can collaborate with other users by having discussions when making group plans and decisions. To
view the discussion threads that you're a part of, select (Collaboration) on the shell bar to open the
Discussions panel.
Tip
If you're a system administrator, ensure that users are assigned roles that have the following permission
settings:
• To create and participate in discussions, set the Create and Read permissions for Discussion.
• To add attachments to a discussion, set the Create permission for Uploaded Files.
From the Discussions panel, you can start a new discussion by choosing (New Discussion) and inviting
participants (users or teams). Only the participants who have been invited can see the discussion.
The group name for a discussion thread is automatically generated and based on the display names of the
other participants. As the discussion creator (or owner), you can change the group name. When you review a
discussion thread, select (Manage) and choose Preferences. Then select , enter a new name in the Select
a Group Name field, and select .
After a new discussion is created, all participants can type messages to the discussion or choose (Add) next
to the Type a message field, and do any of the following activities:
• Attach a file to the discussion. The saved file can be from the file repository or your computer.
• Create a task for any of the discussion participants.
• Link the currently displayed story to a discussion. Only saved stories can be linked to a discussion.
Note
If you create a new discussion while working on a story, you must explicitly link the story to the
discussion.
In the Discussions panel, you can select a discussion thread to view it. All discussion activity appears in
chronological order, where you can read all previous messages or add new messages. Users who are no longer
part of SAP Analytics Cloud appear as Deleted User. However, the user IDs for deleted users remain visible in
@mentions, as well as in invitations to a discussion.
For any linked story, simply select the linked story in the discussion to open the content. For long discussion
threads, you can view a list of all linked stories and file attachments by selecting (Manage) and choosing
Receiving Notifications
Notifications are used throughout SAP Analytics Cloud to send system messages to users. For example, you
might receive a notification when:
To read notifications choose (Notifications) on the shell bar. By default, all types of notifications are visible. If
needed, you can select a filter to see a specific type of notification.
Notifications can also be read by users with the SAP Analytics Cloud mobile app and sent to users by email.
Tip
If you're a system administrator, you can configure a custom SMTP server for email delivery. For more
information, see Configure Email Server [page 2787].
After you read a notification, you can delete it by selecting the X icon that appears when you hover over it. To
delete all notifications, select (Delete All Notifications).
Tip
On the Profile Settings dialog, you can set the user preferences for receiving email notifications for file
requests, system notifications, and product updates and learning. Also, you can set when old notifications
are deleted. For more information, see Edit Your Profile [page 44].
Events
Events are also closely integrated with discussions. When you create a new event, a discussion thread is
automatically created so that you can collaborate with the group of colleagues assigned to the event. For any
Related Information
For users to be able to comment on files that you share with them, two things are required. First, the system
administrator creates a role with comment permissions enabled and assigns it to users. Second, when you
share content, you must give users access to view, add, or delete comments.
Note
The information in this section is for a system administrator. For more information, see Permissions [page
2844] and Create Roles [page 2877].
The actions to view, add, or delete comments are available to users through custom roles that are assigned
to them. When the system administrator creates a custom role, they select the Read, Create, and Delete
permissions for Comment, and then assign it to the users.
Create: Users can view comments and start new comment threads or add comments to existing comment threads.
If a role has all the permissions selected for Comment, the relevant actions are available to the user. For
example, a role has the Read, Create, and Delete permissions for Comment. The user with that role can view,
add, and delete comments. If a permission is not selected for Comment in a role, the relevant action is not
available to the user. For example, a role has the Read and Create permissions for Comment. The user with that
role can view and add comments, but not delete them.
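The permission-to-action mapping explained above (Read enables viewing, Create enables adding, Delete enables deleting) can be sketched as follows. All names are illustrative, not an SAP Analytics Cloud API.

```python
# Sketch of the role-permission logic described above: each comment action
# is available only if the corresponding Comment permission is selected in
# the user's role.

ACTION_REQUIRES = {"view": "Read", "add": "Create", "delete": "Delete"}

def allowed_actions(role_permissions: set[str]) -> set[str]:
    """Return the comment actions available to a user with these permissions."""
    return {a for a, p in ACTION_REQUIRES.items() if p in role_permissions}

# Role with Read and Create but not Delete: the user can view and add
# comments, but not delete them (the example from the text above).
assert allowed_actions({"Read", "Create"}) == {"view", "add"}
# Role with all three permissions: all actions are available.
assert allowed_actions({"Read", "Create", "Delete"}) == {"view", "add", "delete"}
```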
After the system administrator sets up custom roles and assigns them to users, it's up to you, the user sharing
the file, to choose whether users can view, add, or delete comments for the file shared with them.
Note
The information in this section focuses on how you, the file owner or user with the proper permissions,
shares the file with other users and lets them view, add, or delete comments.
It is assumed that the system administrator has set up all the roles to have all relevant actions available to
the users (the Create, Read, and Delete permissions for Comment are selected) and has assigned the roles
to the users.
When you share a file, you can select a predefined access level (View, Edit, or Full Control), or you can choose
Custom to set custom access.
After you choose Custom, the Set Custom Access dialog opens and you can choose to let users view, add, or
delete comments.
Add Comment: When selected, View Comment is automatically selected as well.
Users can view and like comments, start new comment threads, and add comments to existing
threads. They can also delete comment threads they started and their own comments.
Delete Comment: When selected, View Comment is automatically selected as well.
Users can view and like comments and delete their own or others’ comments.
To let users view, add, and delete, you must set the following:
• When sharing a file without file dependencies, select the Add Comment, View Comment, and Delete
Comment options for the story.
• When sharing a file with file dependencies (such as, models), select the Add Comment, View Comment,
and Delete Comment options for the story. Also, you must share the file dependencies with the same
comment options selected.
If you don't select a comment option, the relevant action is not available to the users. For example, if Delete
Comment is not selected, the user can only view and like comments. They won't be able to delete any
comments.
To give you a better idea of how the comment options work, review the following scenario.
Tip
This information also applies to viewing, adding, and deleting comments on folders and analytic
applications.
Example Scenario
The system administrator creates custom roles for the content viewer and content creator. Both roles have the
Create, Read, and Delete permissions selected for Comment. The system administrator assigns the content
creator role to you and the content viewer role to other users.
You create a story and share it with other users. Instead of using a predefined access level, you select Custom.
On the Set Custom Access dialog, you select the comment options that you want the users you're sharing
the file with to have.
View Comment:
• For stories that do not have file dependencies, the users can only view and like the comments
that are available in the story.
• For stories with file dependencies, you must also share the file dependencies with the same
comment options selected. After both the story and the file dependencies are shared, users
can view and like comments in the story and on the data point.
Add Comment: When Add Comment is selected, View Comment is also automatically selected:
• For stories that do not have file dependencies, the user can view and like the comments that
are available in the story and start new comment threads or add comments to an existing
thread.
• For stories with file dependencies, you must also share the file dependencies with the same
comment options selected. After both the story and the file dependencies are shared, users
can view and like comments or start new comment threads or add comments to existing
threads in the story and on the data points.
Also, the content viewer can delete any comment thread they start and delete only their own
comments in other threads.
Delete Comment: When Delete Comment is selected, View Comment is also automatically selected:
• For stories that do not have file dependencies, the content viewer can view and like the
comments that are available in the story and delete comments even if they didn’t add them.
• For stories with file dependencies, you must also share the file dependencies with the same
comment options selected. After both the story and the file dependencies are shared, users
can view and like comments in the story and on the data point, or delete comments even if
they didn’t add them.
When you share a story that has a file dependency, it’s a best practice to share the file dependency and
select the same comment options for both the story and the file dependency. To help you understand, review
the following examples for different combinations of the selected comment options when a story has a file
dependency.
Scenario 1
You let users view, add, and delete comments for both the story and the file dependency.
Story: View Comment ✓, Add Comment ✓, Delete Comment ✓
File Dependency: View Comment ✓, Add Comment ✓, Delete Comment ✓
Users can start a new comment thread, add comments to existing threads, view all comments, and delete any
comment in the story and data point.
Scenario 2
You let users view, add, and delete comments for only the story.
Story: View Comment ✓, Add Comment ✓, Delete Comment ✓
File Dependency: (no comment options selected)
Users can’t view, add, or delete comments in the data point. However, they can view, add, or delete comments
in the story.
Scenario 3
You let users view, add, and delete comments for only the file dependency.
Story: (no comment options selected)
File Dependency: View Comment ✓, Add Comment ✓, Delete Comment ✓
Users can’t view, add, or delete comments in the story or data point.
Related Information
Prerequisites
Restriction
Before exporting data, make sure that the number formats are the same for the model and your local
machine. For example, if the data you want to export uses a comma as a decimal separator and a period as
a thousands separator (123.456,78), you will need to verify that your machine's number format is the same.
On a PC, in the Control Panel, go to Clock, Language, and Region > Region > Additional Settings and
verify the separator formats.
Context
You can export both acquired and live data from a chart as a CSV file. However, live data may take some time to
export, because it must be downloaded before it can be exported.
In the Examine workflow, you can export from an auto-generated chart for a table.
You can also export from the Explorer. For more information, see Accessing the Explorer (Classic Story
Experience) [page 1189].
Note
All chart data is exported, not just the currently visible data.
• For example, if your chart has a hierarchy, all the nodes of the hierarchy will be exported, even if you
have not drilled down on the data in your chart.
• The hierarchy export behavior applies to all chart types except Time Series charts.
Time Series charts export only the currently selected level.
Restriction
• Formatting (such as cell color, font styles, and so on) will not be exported.
• Hyperlinks are removed.
• Hierarchies are flattened.
• (Classic story) If you have renamed a measure or dimension, you won't see your names in the exported
data.
The measure and dimension names from the data source are exported.
• SAP Analytics Cloud profile settings do not influence the format of the exported data.
• There is no export of BW display attributes.
The notification dialog provides information on the export progress or completion. It also allows you to
cancel the export.
Tip
Use Include Formatting when you want the formatting to exactly match what is in the chart or table.
For example, if the table shows $12.04 Million, that is how the value will be exported, even if that is a
rounded value.
Don't use Include Formatting if the actual data values are different from those displayed in the chart or
table (for example, the actual data has more decimal places).
Depending on your browser settings, your file is saved automatically or a dialog opens for you to choose
where to save your file.
Results
Next Steps
Remember
Before opening the exported CSV file in Excel, verify that your Excel list separator settings are set correctly.
Tip
If you have data columns that contain only numeric values (such as ID columns, zip codes, or data with
leading zeros), you will need to import your CSV file into Excel. If you try to open it directly in Excel, the
numeric columns will be treated as numbers, not text.
1. Make a note of where you saved the CSV file, but do not open it.
2. Open Excel.
The following instructions apply to Excel 365. If you have an older version of Excel, the steps may be
slightly different.
3. Select the “Data” tab.
4. Select “From Text/CSV”.
The “Import Data” dialog box (Windows explorer dialog) appears.
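The leading-zero problem described in the Tip can be demonstrated in a few lines of Python (the column name and value here are hypothetical, not from an actual export):

```python
import csv
import io

# Hypothetical export: a zip-code column whose value starts with a zero.
buf = io.StringIO()
csv.writer(buf).writerows([["zip"], ["01234"]])
buf.seek(0)

# Importing the column as text keeps the leading zero...
data = list(csv.reader(buf))
assert data[1][0] == "01234"

# ...while treating it as a number silently drops it.
assert str(int(data[1][0])) == "1234"
```

This is why the import (rather than direct open) path is needed: it lets you declare the column type as text before Excel converts the values.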
Related Information
Restriction
Before exporting data, make sure that the number formats are the same for the model and your local
machine. For example, if the data you want to export uses a comma as a decimal separator and a period as
a thousands separator (123.456,78), you will need to verify that your machine's number format is the same.
On a PC, in the Control Panel, go to Clock, Language, and Region > Region > Additional Settings and
verify the separator formats.
You can also export from the Explorer. For more information, see Accessing the Explorer (Classic Story
Experience) [page 1189].
The notification dialog provides information on the export progress or completion. It also allows you to
cancel the export.
For table data, you can choose the scope of your CSV or XLSX file export to be either Point of view or All.
Scope Information

Point of View
• Scope description: Exports what you see in the table data region (all the visible rows and
columns). This includes visible comment columns, story/model calculations, and hierarchy
levels in separate columns.
• Scope restrictions and notes: When there are too many rows or columns in your table, you
will get a drill limitation warning.
Tip
To export more Point of view data, do the following:

All
• Scope description: Exports all the data, not just the currently visible data.
Example
If your table has a hierarchy, all the nodes of the hierarchy will be exported, even if you have
not drilled down on the data in your table.
• Scope restrictions and notes:
Restriction
The All export doesn't include the following:
Classic Design Experience: if you have renamed a measure or dimension, you won't see your
names in the exported data. The measure and dimension names from the data source are
exported.
SAP Analytics Cloud profile settings do not influence the format of the exported data.
The following table contains restrictions specific to scope and file type:
CSV
• Point of View: There are no specific restrictions.
Note
Browser, memory, and personal computer restrictions can still apply.
• All: You can't export more than 3 million cells or 60 measure columns.

XLSX
• Point of View: You can't export more than 500 thousand cells.
• All: You can't export more than 1.5 million cells of data.
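The cell limits above can be checked before choosing a file type; a small Python sketch (the helper name and the simple rows-times-columns model are ours, not part of the product):

```python
# Documented All-scope limits: 3 million cells for CSV, 1.5 million for XLSX.
# (CSV additionally caps measure columns at 60; not modeled here.)
LIMITS = {"csv": 3_000_000, "xlsx": 1_500_000}

def fits_all_scope(rows: int, cols: int, file_type: str) -> bool:
    """Return True if rows x cols stays within the All-scope cell limit."""
    return rows * cols <= LIMITS[file_type]

assert fits_all_scope(100_000, 15, "xlsx")      # exactly 1.5 million cells
assert not fits_all_scope(100_000, 31, "csv")   # 3.1 million cells exceeds the CSV limit
```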
You can export both acquired and live data from a table as a CSV or XLSX file. However, live data may take
some time to export, because it must be downloaded before it can be exported.
Tip
If you want to include custom cells (that are outside the table data region) in your export, you need to
select all the cells, copy them, and then paste them into Excel.
Tip
Use Include Formatting when you want the formatting to exactly match what is in the chart or table.
For example, if the table shows $12.04 Million, that is how the value will be exported, even if that is a
rounded value.
7. Depending on the file type you selected, you will see one of the following options in the dialog:
8. If you selected CSV as the file type, choose a CSV Delimiter (column separator).
9. Select OK.
Depending on your browser settings, your file is saved automatically or a dialog opens for you to choose
where to save your file.
Note
If you have characters such as parentheses or the percent sign in your exported data, Microsoft Excel may
treat your numeric data as text and align it to the left instead of the right.
Remember
Before opening the exported CSV file in Excel, verify that your Excel list separator settings are set correctly.
Tip
If you have data columns that contain only numeric values (such as ID columns, zip codes, or data with
leading zeros), you will need to import your CSV file into Excel. If you try to open it directly in Excel, the
numeric columns will be treated as numbers, not text.
1. Make a note of where you saved the CSV file, but do not open it.
2. Open Excel.
The following instructions apply to Excel 365. If you have an older version of Excel, the steps may be
slightly different.
3. Select the “Data” tab.
4. Select “From Text/CSV”.
The “Import Data” dialog box (Windows explorer dialog) appears.
5. Select the CSV file to import and click “Import”.
The text import dialog appears.
6. From the “File Origin” list, select “Unicode (UTF-8)”.
7. Select “Transform Data”.
The “Power Query Editor” opens.
8. Find a column that needs the data type to be changed.
9. In the column header, right-click, select “Change Type”, and then select “Text”.
When the prompt appears, select “Replace Current”.
10. Repeat the process for each column that needs to have its type changed.
11. When you are finished, from the menu select “File” and then select “Close & Load”.
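Steps 8 through 10 correspond to a one-line transformation in the Power Query formula language (M); a sketch, assuming a hypothetical column named ZipCode:

```
// Change a single column to type text so leading zeros are preserved.
= Table.TransformColumnTypes(Source, {{"ZipCode", type text}})
```

Additional columns can be added to the same list, one {"Name", type text} pair per column.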
Related Information
You can export a story as a PDF file or as a Microsoft PowerPoint (PPTX) file.
• The PDF or PowerPoint file shows exactly what appears on your story pages at the moment that you export
them. If any charts or tables in your story are scrollable, only the visible parts are included in the exported
file.
• The visible content of the story pages in the story determines the size of the exported content: for PDF, the
pages will be scaled up; for PowerPoint the content will be scaled down.
If you want to print the PDF file to paper, you may need to use scaling options in your PDF viewing software.
The following invisible elements won't be exported:
• Invisible part in scrollable charts or tables
• Collapsed table cells
• Lazy-rendered widgets
• Lazy-loaded data source
• Comments on invisible data cells
Besides these invisible elements, web page widgets won’t be exported.
Note
For an R visualization widget, a static plot will be exported. But if the R widget is written in RHTML (iFrame),
it won't be exported.
Procedure
Note
The story's URL is included in the appendix. If you don't want to include it, you need to enable the
following system configuration option: Remove Story URL from Appendix.
5. (Optional) If you've applied scaling to any charts, you can select Apply Scaled Measures Across All Included
Pages to scale the charts across pages.
When charts are scaled per page, the chart density is automatically determined using all charts from the
same page only. When charts are scaled across pages, the chart density is determined by all charts that
are being exported.
6. Grid Page Settings: when your story has grid pages, you can choose how to export the grid page content.
In some PDF-viewing software, large tables will have truncated content. You can use Split Grid into
Pages to avoid truncation and to improve the readability of the generated pages in external PDF
viewers.
Just remember that splitting the grid into several pages will lead to more exported pages, and that
might increase the time needed for exporting.
Note
We don't recommend enabling export in the background for HTML R visualization widgets, as in
this case the exported PDF may not work as expected.
Note
The Enable Export in the Background option is disabled for classic stories that have Optimized
View Mode enabled.
8. Batch Exporting: Select this option to export some or all members of a story filter.
Note
If you have a large number of members in the batch, the export may take a long time to complete.
A message shows how many members are currently selected for that filter.
b. Select the message to open the Set Filters dialog.
c. Select the members to include in the export.
d. When you've finished making your selections, click OK.
9. Select Export to create and download the file to your computer.
Results
Note
Exported tables are best viewed in your PDF viewer at 100% zoom, and with custom line rendering turned
off.
• Adobe Acrobat Reader editing preferences: in the Page Display section, select Rendering, and then
clear the checkboxes for Enhance thin lines and Smooth images.
Related Information
When viewing a story, you can export it as a PDF file or as a Microsoft PowerPoint (PPTX) file.
Context
• The PDF or PowerPoint file shows exactly what appears on your story pages at the moment that you export
them. If any charts or tables in your story are scrollable, only the visible parts are included in the exported
file.
• The visible content of the story pages in the story determines the size of the exported content: for PDF, the
pages will be scaled up; for PowerPoint the content will be scaled down.
If you want to print the PDF file to paper, you may need to use scaling options in your PDF viewing software.
The following invisible elements won't be exported:
• Invisible part in scrollable charts, tables, or containers (including tab strips, panels, flow layout panels
and page books).
• Popups that haven't been opened.
Popups need to be opened before they can be included in the export.
• Collapsed table cells.
• Lazy-rendered widgets.
• Lazy-loaded data source.
• Comments on invisible data cells.
Besides these invisible elements, web page widgets won’t be exported.
Note
For an R visualization widget, a static plot will be exported. But if the R widget is written in RHTML (iFrame),
it won't be exported.
Widgets might not keep all CSS styling settings when exported to PDF, for example, text decoration and
opacity.
Restriction
Tip
Custom widgets now support PDF export. However, to make sure that all the defined elements can be
exported to PDF, custom widget developers need to be aware of the export restrictions, test their
widgets first, and then set supportsExport to true in the custom widget JSON file.
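A sketch of the relevant flag in a custom widget JSON file (the id and name fields here are illustrative placeholders; only supportsExport is the setting described above):

```json
{
  "id": "com.example.mywidget",
  "name": "My Widget",
  "supportsExport": true
}
```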
Procedure
• All
• Range: provide the specific pages
5. For PDF export, you can define the following settings:
• Paper size:
• Auto
• A2
• A3
• A4
• A5
• Letter
• Legal
Note
The story's URL is included in the appendix. If you don't want to include it, you need to enable
the following system configuration option: Remove Story URL from Appendix.
Note
6. (Optional) Select Enable Export in the Background if you want to continue working while the file is being
processed and saved. You may experience a slower response time until the export is complete.
Note
We don't recommend enabling export in the background for HTML R visualization widgets, as in
this case the exported PDF may not work as expected.
Results
Note
Exported tables are best viewed in your PDF viewer at 100% zoom, and with custom line rendering turned
off.
• Adobe Acrobat Reader editing preferences: in the Page Display section, select Rendering, and then
clear the checkboxes for Enhance thin lines and Smooth images.
• PDF-XChange Editor settings: in the Page Display section, select Rendering, and then set Stroke Adjust
to Off.
Related Information
If you use Google Drive, you can export your SAP Analytics Cloud story as a Google Slides presentation.
Prerequisites
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Procedure
Remember
You'll be reminded that any data you export to Google Slides is subject to Google's terms of service and
privacy policy. Since you are exporting a copy of the data in your story outside SAP Analytics Cloud to
Google Drive, this copy of your data may leave your preferred geographical region. It's recommended to
read Google's data and privacy policies before exporting.
If you don't want to be reminded again, check Got it! Don't remind me again. and then click Accept &
Continue.
3. If prompted, sign in to your own personal Google account with your username and password.
4. You'll be prompted to allow SAP Analytics Cloud to access your Google account. Click Allow.
Your Google account is only ever used with your own SAP Analytics Cloud user account, and never shared
with other users. Contents of your personal Google Drive are never shared with anyone else, and no one
can add slides to your Google account.
5. Once signed into Google, set the Google Drive options under My Drive.
• Choose New Presentation to create a new Google Slides presentation from the story. Enter a name for
the presentation and choose a folder to save it to.
• Choose Add to Existing Presentation and select an existing Google Slides presentation. Story pages are
added to the end of the presentation.
Results
Once the export is complete, you can find your new or updated Google Slides presentation by signing on to
your Google Drive account.
Note
If you want to remove SAP Analytics Cloud from the list of apps that can access your Google account, sign
in to your Google account and go to https://myaccount.google.com/permissions. Select SAP Analytics
Cloud in the list of apps and click Remove Access.
Connect your on-premise and cloud data sources to SAP Analytics Cloud.
You can create live data connections to on-premise and cloud systems. Data is “live”, meaning that when a
user opens a story in SAP Analytics Cloud, changes made to the data in the source system are reflected
immediately.
In SAP Analytics Cloud, you can create models from data sources in on-premise or cloud systems, build stories
based on those models, and perform online analysis without data replication. This feature allows SAP Analytics
Cloud to be used in scenarios where data cannot be moved into the cloud for security or privacy reasons, or
your data already exists on a different cloud system.
Note
For an overview of connection types and guidelines for system administrators, see the SAP Analytics Cloud
Connection Guide.
Configure your SAP on-premise data sources to issue cookies with SameSite=None; Secure attributes.
Topic Sections
Summary
A direct live connection (using CORS) from SAP Analytics Cloud to your SAP on-premise data source is a
cross-site scenario. Your SAP on-premise data source, such as SAP HANA, SAP S/4HANA, SAP BW, and
SAP BW/4HANA, issues cookies for authentication and session management. Every cookie has a domain
associated with it. These cookies are considered by your browser to be third-party, or cross-site, meaning
the domain of the cookie doesn't match the SAP Analytics Cloud domain in the user's address bar (for
example, sapanalytics.cloud).
As of Google Chrome version 80, Chrome restricts cookies to first-party access by default and requires you to
explicitly mark cookies for access in third-party, or cross-site, contexts. Chrome does this by treating cookies
that have no declared SameSite value as SameSite=Lax cookies. Only cookies with the SameSite=None;
Secure attributes will be available for cross-site access, and require secure HTTPS connections. Other
browser vendors are looking at similar support for this new cookie behavior.
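For illustration (the cookie name and value are hypothetical), a session cookie that remains usable in this cross-site scenario carries both attributes:

```
Set-Cookie: SAP_SESSIONID=<value>; Path=/; HttpOnly; Secure; SameSite=None
```

Without SameSite=None, Chrome treats the cookie as SameSite=Lax and withholds it from cross-site requests; without Secure, Chrome rejects SameSite=None cookies outright.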
Note
Google is introducing this behavior through gradually increasing rollouts. Not all Chrome 80 users will see
this behavior immediately.
Action
You must configure your SAP on-premise data source to issue cookies with the following attributes:
• SameSite=None
• Secure
This will ensure Chrome and other browsers allow cross-site access to your SAP on-premise data source
cookies from SAP Analytics Cloud. Without these settings, user authentication to your live data connections
will fail, and Story visualizations based on these connections will not render.
Note
Certain browser versions will reject cookies that use the SameSite attribute. We'll account for that with our
configuration, and avoid setting SameSite for these incompatible browsers.
Follow the steps in the sections below for your specific SAP system.
Note
You will need to restart your system after performing the steps, so plan ahead for a short system downtime.
SAP HANA comes with a built-in Web Dispatcher, where an Internet Communication Manager (ICM) rewrite
rule can be executed. To create and execute the rewrite rule, follow these steps:
1. Log on to the SAP HANA system's operating system with the <SID>adm user.
2. Go to /hana/shared/<SID>/profile, where <SID> is your three-character system ID, and create a
rewrite.txt file with a text editor. Insert the following script into the file, and save it:
Note
If the SAP HANA system is multi-tenanted, use the System database connection.
Cookies from SAP HANA on SAP BTP Cloud Foundry already use the Secure attribute. You need to
add the SameSite=None attribute to your own analytics adapter approuter and rebuild and redeploy the
application from your local computer.
Note
The analytics adapter (HAA) project from SAP GitHub already has the changes needed for any new BTP
applications. The steps in this section are to modify your existing application.
1. On your local computer, go to <HAA_ROOT> where your analytics adapter application is located.
2. Open and edit mta.yaml. Under properties: add the following COOKIES: property with the SameSite
attribute, leaving all other properties unchanged:
properties:
CORS: '[{"uriPattern": "^/sap/bc/ina/(.*)$", "allowedOrigin":
[{"host":"<sac-host>", "protocol":"https"}], "allowedMethods": ["GET",
"POST", "OPTIONS"], "allowedHeaders": ["Origin", "Accept", "X-
Requested-With", "Content-Type", "Access-Control-Request-Method", "Access-
Control-Request-Headers", "Authorization", "X-Sap-Cid", "X-Csrf-Token"],
"exposeHeaders": ["Accept", "Authorization", "X-Requested-With", "X-Sap-
Cid", "Access-Control-Allow-Origin", "Access-Control-Allow-Credentials", "X-
Csrf-Token", "Content-Type"]}]'
COOKIES: '{"SameSite": "None"}'
INCOMING_CONNECTION_TIMEOUT: 600000
TENANT_HOST_PATTERN: '^(.*)-<space>-haa.cfapps.(.*).hana.ondemand.com'
3. Save mta.yaml.
4. Open and edit package.json. Change the dependency to 6.7.2:
"dependencies": {
"@sap/approuter": "^6.7.2"
},
5. Save package.json.
6. From the <HAA_ROOT> directory in a command prompt, run the following commands to rebuild and
redeploy your application:
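The exact commands are not shown in this excerpt; assuming the Cloud Foundry CLI with the MultiApps plugin and the Cloud MTA Build Tool (mbt), a typical rebuild-and-redeploy sequence looks like this:

```shell
# Log in to your Cloud Foundry org/space (see the Tip for finding <api-endpoint>)
cf login -a <api-endpoint>

# Build the MTA archive from mta.yaml, then deploy it
mbt build
cf deploy mta_archives/*.mtar
```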
Tip
Find your <api-endpoint> value (Cloud Foundry API endpoint) from your subaccount view of the BTP
Cockpit.
Once deployed, verify the new version of your analytics adapter. Go to your subaccount view of the BTP
Cockpit, and then check the analytics adapter User-Provided Variables. Make sure that the COOKIES variable is
set to {"SameSite": "None"}.
If CORS was enabled through HTTP allowlists, or in other words, if CORS was configured within the
UCONCOCKPIT transaction, you need to create an Internet Communication Manager (ICM) rewrite rule file
to append the SameSite=None and Secure attributes to all the cookies issued by the NetWeaver ABAP
application server (AS ABAP).
1. Log on to the operating system of the AS ABAP system, and create a rewrite.txt file in the system
profiles folder using a text editor.
2. Add the following rewrite script to the file, to append the cookie attributes to compatible web browsers,
and save it:
3. Log on to the AS ABAP system from SAP GUI with a system administrator user account.
4. Go to transaction RZ10, edit the AS ABAP system's DEFAULT profile, and add the following parameter:
icm/HTTP/mod_0 = PREFIX=/,FILE=$(DIR_PROFILE)/rewrite.txt
SAP S/4HANA, SAP BW or BW/4HANA, and SAP BPC Embedded with CORS
Enabled by ICM Rewrite Script
If CORS was enabled with an Internet Communication Manager (ICM) rewrite script, the NetWeaver ABAP
application server (AS ABAP) already has an existing ICM rewrite file. To append the SameSite=None and
Secure cookie attributes to the cookies, follow these steps:
1. Find the path to the ICM rewrite file by inspecting the profile parameter icm/HTTP/mod_0 in the system's
DEFAULT profile.
2. Log on to the operating system with the <SID>adm user.
3. Edit the ICM rewrite file. At the end of the file, append the following script:
The SAP BusinessObjects Live Data Connect component, together with the Tomcat server that it runs on,
already issues cookies with the Secure attribute. Therefore, we just need to configure the Live Data Connect
component to issue cookies with the SameSite attribute set to None.
1. Check the version of the Tomcat server where the Live Data Connect component runs. If the Tomcat
version is lower than 8.5.50 or 9.0.30, upgrade or migrate it to at least 8.5.50 or 9.0.30, respectively.
See the migration guides at http://tomcat.apache.org/migration.html .
2. If you upgraded or migrated your Tomcat server, make sure to migrate the Live Data Connect component
as well. This can be done by copying the sap#boc#ina.war file and the sap#boc#ina directory under
<original_tomcat_root>/webapps to the new Tomcat server. Make sure all the Live Data Connect
settings are preserved, and HTTPS is properly configured.
For more details, refer to Configuring SAP BusinessObjects Live Data Connect and Setting up SAML
authentication in the SAP BusinessObjects Live Data Connect Installation and Security Guide.
3. Go to the <Tomcat_root>/webapps/sap#boc#ina/META-INF directory.
4. Open the context.xml file in a text editor, and insert the CookieProcessor segment to set the SameSite
attribute to None. It should look like this:
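A minimal sketch of the resulting context.xml, assuming Tomcat's Rfc6265CookieProcessor (available as of the Tomcat versions noted above); any other elements already in your context.xml should be left in place:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <!-- Issue all cookies from this web application with SameSite=None -->
    <CookieProcessor className="org.apache.tomcat.util.http.Rfc6265CookieProcessor"
                     sameSiteCookies="none" />
</Context>
```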
Related Information
You can create live data connections to an SAP HANA system using the Direct, Tunnel, and SAP Business
Technology Platform (BTP) connection types, and also connections to SAP HANA Cloud.
Prerequisites
You must use a supported version of SAP HANA or SAP Business Technology Platform (BTP). For more
information, see System Requirements and Technical Prerequisites [page 2723].
Direct Connection
Note
To make the direct live connection available to users on the internet, see Direct Live Connections in the
Internet Scenario.
To use this connection type, you must configure Cross-Origin Resource Sharing (CORS) support on your SAP
HANA on-premise system.
Direct connection setup: Live Data Connection to SAP HANA On-Premise Using a Direct Connection [page
279].
Note
The SAP HANA Analytic Adapter (HAA) is not required for an on-premise SAP HANA 2 installation. Instead,
SAP HANA 2 customers must enable SAP HANA XS Classic (if not enabled by default) for connectivity with
SAP Analytics Cloud.
You can use the Tunnel connection type if your organization wants to expose some of your data to users outside
of your corporate network, without giving them VPN rights.
Example: Giant Bread Company wants to expand their business, so they hire Very Clever Consultants (VCC)
to do market research for them. VCC prepares a story with market analysis. VCC wants to safeguard this data
inside their own firewall, but provide access to their customer. They don't want to give Giant's executives VPN
access to VCC's network, but by providing a tunnel connection, VCC can give access to the report and its data,
without compromising their network.
Tunnel connection setup: Live Data Connection to SAP HANA On-Premise Using a Tunnel Connection [page
287].
Note
• Systems on SAP data centers support only SAML connections, while systems on non-SAP data centers
support Basic and SAML connections. A two-digit number in your SAP Analytics Cloud URL, for
example eu10 or us30, indicates a non-SAP data center.
Use this connection type if you want to connect to data on an SAP BTP system.
Use this connection type if you want to connect to data on an SAP HANA Cloud system.
Live Data Connection to SAP HANA Cloud Using an "SAP HANA Cloud" Connection [page 311]
Live Data Connection to SAP HANA Cloud Using a Direct Connection and SSO [page 315]
Restrictions to SAP HANA and BTP Live Data Connections [page 349]
SAP Note 2628214: Restriction using expression default values for HANA variables / parameters with HANA
Information Access (InA) Service
Once a live data connection to SAP HANA is established, SAP HANA is polled every 30 seconds to extend the
session. When your connection to SAP HANA has been lost, you will be notified if you are inside a story or a
model, and you must refresh the page to re-establish your live data connection.
The 30 second timeout interval can be adjusted by modifying the Avoid Remote Session Timeout (in seconds) setting.
Note
Setting Avoid Remote Session Timeout (in seconds) to 0 disables session timeout.
Related Information
Configure your on-premise SAP HANA system for live data connections that use the direct connection type.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
• If end users will access the live data connection from outside of your corporate network, ensure that the
SAP Information Access (InA) service (/sap/bc/ina/service/v2) on your SAP HANA server is exposed
to browser users directly.
• Ensure that the InA package (/sap/bc/ina/service/v2) or a higher-level package is configured for
basic authentication.
• Ensure that the sap.bc.ina.service.v2.userRole::INA_USER role is assigned to all users who will
use the live connection. This role is required in addition to the usual roles and authorizations that are
granted to users for data access purposes.
• Ensure that your SAP HANA XS server is configured for HTTPS (SSL) with a signed certificate, and that you
know which port it is using for HTTPS requests. For details, see Maintaining HTTP Access to SAP HANA
and SAP Knowledge Base Article 2502174.
• For single sign-on (SSO) (optional):
• If you want users to have a single sign-on experience to your data, check that you are using the same
Identity Provider (IdP) for SAP Analytics Cloud and SAP HANA. For more information on setting up
your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider [page 2895].
• Ensure that all users are SAML configured.
• Ensure that the InA package (/sap/bc/ina/service/v2) or a higher-level package is configured for
SAML authentication. For details, see the SAP HANA XS Classic Configuration Parameters.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
Note
Users require both the INA_USER role, and additional object rights. The SAP HANA administrator must
grant users SELECT privileges on all view items in the _SYS_BIC schema that users should have access to.
For more information, see SAP Knowledge Base Article 2353833.
Note
For information on supported versions of SAP HANA, see System Requirements and Technical
Prerequisites [page 2723].
Related Information
Context
You must ensure that the HTTP responses from the InA service to users' web browsers include CORS headers.
Procedure
1. Log on to your SAP HANA XS Admin page (/sap/hana/xs/admin) as the System user or a user
assigned to the following roles: sap.hana.xs.admin.roles::RuntimeConfAdministrator and
sap.hana.xs.admin.roles::SAMLViewer.
2. Go to the XS Artifact Administration panel and navigate to sap.bc.ina.service.v2.
3. Select the sap.bc.ina.service.v2 package, switch to the CORS panel, and use the following instructions to
edit your CORS configuration:
1. Select Enable Cross Origin Resource Sharing.
2. Add your SAP Analytics Cloud host to Allowed Origins. For
example, https://<Customer-Prefix>.<Data-Center>.sapbusinessobjects.cloud.
Note
More than one URL can be added to the allowOrigin variable. For more information on CORS
options, see Application-Access File Keyword Options.
3. If single sign-on (SSO) is used, add the IdP host to Allowed Origins.
4. Add the following to Allowed Headers:
• accept
• authorization
• content-type
• x-csrf-token
• x-requested-with
• x-sap-cid
• accept-language
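With this configuration active, a cross-origin response from the InA service would carry headers along these lines (the values shown are illustrative):

```
Access-Control-Allow-Origin: https://<Customer-Prefix>.<Data-Center>.sapbusinessobjects.cloud
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: accept, authorization, content-type, x-csrf-token,
    x-requested-with, x-sap-cid, accept-language
```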
For SSO only, deploy the custom web content to your SAP
HANA server
Context
To enable SSO when using a direct connection, you must deploy some custom web content to your SAP HANA
server. This web content is what will appear briefly to users once per session when they first create a live data
connection to your SAP HANA system, or when they refresh charts or tables against that live data connection.
Procedure
<html>
<script type="text/javascript">
open(location, '_self').close();
</script>
</html>
8. Save auth.html.
9. Create another file under the cors package, and name it .xsaccess.
10. Open .xsaccess, and add the following code:
If the html page is configured correctly, the page will load and close automatically.
You will need to repeat the configuration in this procedure after every SAP HANA or SAP EPM library
upgrade.
Procedure
You'll need to increase the sessiontimeout parameter in the httpserver section of the xsengine.ini file.
For example, if you change the parameter to 43200, the session will be active for 12 hours.
For more information, see the SAP HANA XS Classic Configuration Parameters.
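The parameter can also be changed from the SQL console. A sketch of the statement (run it as a user with the INIFILE ADMIN privilege; the 43200-second value matches the example above):

```sql
-- Increase the XS engine HTTP session timeout to 12 hours (43200 seconds)
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
    SET ('httpserver', 'sessiontimeout') = '43200'
    WITH RECONFIGURE;
```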
Procedure
Your users' browsers must allow third-party cookies from the SAP HANA server's domain and pop-ups from the
SAP Analytics Cloud domain. This can be configured in the browser's settings. As an example, see the
steps below for Google Chrome.
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
d. Go back to Privacy and security and click Cookies and other site data.
e. Under Sites that can always use cookies add your SAP HANA server's domain.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
7. Under Authentication Method select None for no authentication, or select User Name and Password, or for
single sign-on, select SAML Single Sign On.
• Using the None authentication option allows you to connect to an SAP HANA system that uses SSO
that is not based on SAML 2.0. For more information, see Using the 'None' Authentication Option [page
454].
• For the User Name and Password option, enter an SAP HANA user name and password.
Note
To enable single sign-on for the mobile app, see the "Single Sign-On Requirements" topic in the SAP
Analytics Cloud Mobile Administration Guide.
8. Select OK.
Note
After creating a connection to a remote system and before creating a model from a remote system, you
must log off and log on to SAP Analytics Cloud again.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
Note
Select the Enable model metadata generation option in Advanced Features if you want to access
Smart Insights on the models in your connection. For details on how to generate model metadata see
Manually Generating Model Metadata on a Live SAP HANA Model [page 332]. To learn more about
Smart Insights, see Smart Insights [page 2003].
Note
The connection is not tested until you create a model. For more information, see Create a New Model [page
645].
You must configure your on-premise SAP HANA system for live data connections that use the tunnel
connection type.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
Prerequisites
• If end users will access the live data connection from outside of your corporate network, ensure that the
SAP Information Access (InA) service (/sap/bc/ina/service/v2) on your SAP HANA server is exposed
to browser users directly.
• Ensure that the InA package (/sap/bc/ina/service/v2) or a higher-level package is configured for
basic authentication.
• Ensure that the sap.bc.ina.service.v2.userRole::INA_USER role is assigned to all users who will
use the live connection. This role is required in addition to the usual roles and authorizations that are
granted to users for data access purposes.
• Ensure that your SAP HANA XS server is configured for HTTPS (SSL) with a signed certificate, and that you
know which port it is using for HTTPS requests. For details, see Maintaining HTTP Access to SAP HANA
and SAP Knowledge Base Article 2502174.
• Set up SSO (optional):
• To set up SSO, see the following instructions: Set Up Trust Between SAP Analytics Cloud and Your
On-Premise HANA Systems Using a Tunnel Connection [page 452]
• Ensure that all users are SAML configured.
• Ensure that the InA package (/sap/bc/ina/service/v2) or a higher-level package is configured for
SAML authentication. For details, see the SAP HANA XS Classic Configuration Parameters.
Users require both the INA_USER role, and additional object rights. The SAP HANA administrator must
grant users SELECT privileges on all view items in the _SYS_BIC schema that users should have access to.
For more information, see SAP Knowledge Base Article 2353833.
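These grants can be performed in the SQL console. The statements below are a sketch — the user name JDOE and the view name are placeholders, and note that activated repository roles must be granted through the _SYS_REPO procedure rather than with GRANT:

```sql
-- Grant the activated InA role to a user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.bc.ina.service.v2.userRole::INA_USER', 'JDOE');

-- Grant SELECT on an activated view in the _SYS_BIC schema (repeat for each view the user needs)
GRANT SELECT ON "_SYS_BIC"."mypackage/MY_VIEW" TO JDOE;
```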
Note
For information on supported versions of SAP HANA, see System Requirements and Technical
Prerequisites [page 2723].
Related Information
Procedure
Procedure
You'll need to increase the sessiontimeout parameter in the httpserver section of the xsengine.ini file.
For example, if you change the parameter to 43200, the session will be active for 12 hours.
For more information, see the SAP HANA XS Classic Configuration Parameters.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
7. Under Authentication Method select User Name and Password, or for single sign-on, select SAML Single
Sign On.
• For the User Name and Password option, enter an SAP HANA user name and password.
8. Select OK.
Note
After creating a connection to a remote system and before creating a model from a remote system, you
must log off and log on to SAP Analytics Cloud again.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
Results
Note
The connection is not tested until you create a model. For more information, see Create a New Model [page
645].
You can create a live data connection to an SAP Business Technology Platform (BTP) system, and give an SAP
HANA user permissions to use the connection in SAP Analytics Cloud. The SAP BTP was formerly named the
SAP Cloud Platform.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
Prerequisites
• You have set up and activated the SAP HANA Info Access Service (InA), version 4.10.0 or above, on your
SAP HANA system.
Note
For more information on how to set up your SAP HANA InA service, see Installing the Info Access,
Toolkit, API, and Service.
• If end users will access the live data connection from outside of your corporate network, ensure that the
SAP Information Access (InA) service (/sap/bc/ina/service/v2) on your SAP HANA server is exposed
to browser users directly.
• Ensure that the sap.bc.ina.service.v2.userRole::INA_USER role is assigned to all users who will
use the live connection. This role is required in addition to the usual roles and authorizations that are
granted to users for data access purposes.
Note
Users require both the INA_USER role, and additional object rights. The SAP HANA administrator must
grant users SELECT privileges on all view items in the _SYS_BIC schema that users should have access to.
For more information, see SAP Knowledge Base Article 2353833.
Note
For information on supported versions of SAP HANA, see System Requirements and Technical
Prerequisites [page 2723].
Context
Procedure
1. In the XS Admin page of your SAP HANA system, select (menu) XS Artifact Administration.
Procedure
Note
The SAP BTP was formerly named the SAP Cloud Platform, and may still appear as the SAP Cloud
Platform in the list.
5. Add your SAP BTP account name, database name, and landscape host.
This information is available from the Cockpit overview screen, when you have a running SAP HANA
instance.
Note
The following landscape hosts are not yet available: Europe (Frankfurt), KSA (Riyadh), Russia
(Moscow), UAE (Dubai).
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
After creating a connection to a remote system and before creating a model from a remote system, you
must log off and log on to SAP Analytics Cloud again.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
Results
Note
The connection is not tested until you create a model. For more information, see Create a New Model [page
645].
If you use the SAP Business Technology Platform (BTP) with SAML SSO, you can configure SSO to work with
SAP Analytics Cloud, and create a live data connection to your SAP HANA system. The SAP BTP was formerly
named the SAP Cloud Platform.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
Prerequisites
• To perform these steps, you must use an SAP HANA administrator account that is assigned to the
following roles:
sap.hana.xs.admin.roles::SAMLAdministrator
sap.hana.xs.admin.roles::RuntimeConfAdministrator
sap.hana.ide.roles::CatalogDeveloper
sap.hana.ide.roles::SecurityAdmin
Note
For more information on how to set up your SAP HANA InA service, see Installing the Info Access,
Toolkit, API, and Service.
• Ensure that the sap.bc.ina.service.v2.userRole::INA_USER role is assigned to all users who will
use the live connection. This role is required in addition to the usual roles and authorizations that are
granted to users for data access purposes.
Note
Users require both the INA_USER role, and additional object rights. The SAP HANA administrator must
grant users SELECT privileges on all view items in the _SYS_BIC schema that users should have access to.
For more information, see SAP Knowledge Base Article 2353833.
Note
For information on supported versions of SAP HANA, see System Requirements and Technical
Prerequisites [page 2723].
Related Information
Context
You can use the same SAML Identity Provider (IdP) to log on to both SAP HANA and SAP Analytics Cloud.
When you set up SAML SSO, you will also create a live data connection to your SAP HANA system.
Procedure
1. In the XS Admin page of your SAP HANA system, select (menu) SAML Service Provider.
You can access the XS Admin page at the following URL: https://<SAP HANA SYSTEM>/sap/hana/xs/
admin.
Replace <SAP HANA SYSTEM> with your SAP HANA system name.
2. Under Service Provider Information, copy the name of the SAML service provider.
Note
The SAP BTP was formerly named the SAP Cloud Platform, and may still appear as the SAP Cloud
Platform in the list.
7. Add your SAP BTP account name, database name, and landscape host.
This information is available from the Cockpit overview screen, when you have a running SAP HANA
instance.
Note
The following landscape hosts are not yet available: Europe (Frankfurt), KSA (Riyadh), Russia
(Moscow), UAE (Dubai).
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
13. In the XS Admin page of your SAP HANA system, select (menu) SAML Identity Provider.
Note
Results
Note
The connection is not tested until you create a model. For more information, see Create a New Model [page
645].
Enable SAML
Procedure
1. In the XS Admin page of your SAP HANA system, select (menu) XS Artifact Administration.
6. Select Save.
Procedure
If you are using the same IdP for SAP HANA and SAP Analytics Cloud, you can automatically map all existing
users to SAP Analytics Cloud.
If you use different IdPs for SAP HANA and SAP Analytics Cloud, you must perform a manual user mapping.
Note
If you do not map users, they will not have access to the SAP HANA database.
Note
Replace <MYSCHEMA> with the name of the schema you created. The name is case sensitive.
Replace <SCHEMA> with the selected schema name, <LOGIN IdP> with the name of the SAP HANA
IdP you use, <IMPORTED IdP NAME> with the name of the SAP Analytics Cloud IdP you noted in the
preceding section Set up the trust relationship between SAP HANA and SAP Analytics Cloud.
Note
To find the name of your SAP HANA IdP, go to the XS Admin page and select (menu) SAML
Identity Provider. Under Destination, note the Base URL.
Note
If new users are added to SAP Analytics Cloud, or SAP HANA, you can run the SQL command again
to create a new mapping.
1. In SAP Analytics Cloud, from the side navigation, choose Security Users .
Copy a USER ID.
2. Log on to the SAP Business Technology Platform Cockpit and select Databases & Schemas.
3. Select the required DB/Schema ID from the list, then SAP HANA Web-based Development
Workbench > Catalog. A list of available schemas will appear.
ALTER USER <HANA USER> ADD IDENTITY '<SAML MAPPING>' FOR SAML PROVIDER
<IMPORTED IdP NAME>;
ALTER USER <HANA USER> ENABLE SAML;
Note
Replace <HANA USER> with an SAP HANA user ID, <SAML MAPPING> with the corresponding ID
you copied from SAP Analytics Cloud, and <Imported IdP Name> with the name of the SAP
Analytics Cloud IdP you noted in the preceding section Set up the trust relationship between SAP
HANA and SAP Analytics Cloud.
The second command enables SAML authentication for the specified user. If authentication is
already enabled, this command has no effect.
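With the placeholders filled in (the user JDOE, the mapping jdoe@example.com, and the provider name SAC_IDP below are hypothetical examples), the two statements might look like:

```sql
-- Map the SAP Analytics Cloud identity to the SAP HANA user, then enable SAML logon
ALTER USER JDOE ADD IDENTITY 'jdoe@example.com' FOR SAML PROVIDER SAC_IDP;
ALTER USER JDOE ENABLE SAML;
```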
Procedure
You can create a live data connection to an SAP HANA 2.0 system that is running on an SAP Business
Technology Platform (BTP) system that uses a Cloud Foundry (CF) environment. To access SAP HANA HDI
containers on such a system, you can create the live data connection using the SAP HANA Analytics Adapter for
Cloud Foundry.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
• SAP HANA 2.0 must be installed on an SAP Business Technology Platform (BTP) system that uses a Cloud
Foundry (CF) environment.
• A SAML 2.0 Identity Provider (IdP) must be configured. Both the SAP BTP and SAP Analytics Cloud must use
the same IdP.
For how to set up SAML 2.0 on SAP BTP, see Trust and Federation with SAML 2.0 Identity Providers.
For how to set up SAML in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider [page 2895].
• You must have enough quota for the SAP HANA service and applications.
• You must be assigned an Organization Manager role in your SAP BTP subaccount.
• If using HDI containers, you must have HDI containers available on your SAP BTP system. For more
information, see Set Up an HDI Container.
Note
The SAP BTP geo map feature is not currently supported for this connection type.
Note
Related Information
Procedure
4. Go to Spaces > Services > Service Marketplace and select the SAP HANA Service.
5. Go to the Instances menu and click New Instance.
6. In the wizard that appears, select your service plan depending on your subscription.
Instances are divided into 16 GB blocks and use a minimum of two blocks. You can select an SAP HANA
instance of up to 2 TB in size.
9. Choose whether you want the SAP HANA service to be available on the Internet or only to a specific IP
address.
10. (Optional) Bind any existing applications to the SAP HANA service.
11. In the Confirm screen, provide a name for your SAP HANA instance.
12. Click Finish.
The instance will be created and its status will be updated in the SAP BTP Cockpit in a few minutes.
Procedure
1. Open your instance dashboard and log in with your SAP BTP login credentials.
2. Make note of the database endpoint.
You will need this endpoint when connecting to this database.
3. Go to the SAP HANA Cockpit, and in the Overall Database Status tile, note your tenant database name and
instance number.
Procedure
Procedure
cf install-plugin mta-plugin-windows.exe -f
To verify that the JDK is ready, check the version of the JDK you have installed. For example:
javac -version
To verify that Maven is ready, check the version of Maven you have installed. For example:
mvn --version
To verify that Node is ready, check the version of Node you have installed. For example:
node --version
Also verify that npm (Node Package Manager) is installed with Node. For example:
npm --version
6. Set npm for the sap registry modules with the command:
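The command itself is missing above. At the time this procedure was written, SAP's npm packages were served from SAP's own registry, so the step presumably looked something like the following (verify the current registry URL, since the @sap packages have since moved to the default npm registry):

```shell
# Point the @sap npm scope at SAP's package registry (URL is an assumption; verify before use)
npm config set @sap:registry https://npm.sap.com
```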
Procedure
Procedure
cf api <api-endpoint>
cf login
cf deploy <HAA_ROOT>.mtar
After deployment, two new services and three new applications should appear in the SAP BTP Cockpit.
7. To authenticate with a named user, set USE_NAMED_USER=true.
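Environment variables on Cloud Foundry applications are typically set with the cf CLI and applied with a restage. A sketch, assuming the Java application is named haa-java as in the deployment described above:

```shell
# Set the named-user flag on the HAA Java application and apply it
cf set-env haa-java USE_NAMED_USER true
cf restage haa-java
```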
Procedure
ID: haa
_schema-version: '2.0'
version: 0.0.1
modules:
  - name: haa-java
  - name: haa
    type: nodejs
    path: haa-entry
    parameters:
      memory: 512M
      buildpack: nodejs_buildpack
    requires:
      - name: haa-uaa
      - name: haa-java
        group: destinations
        properties:
          name: haa-java
          url: ~{url}
          forwardAuthToken: true
          timeout: 600000
    properties:
      CORS: '[{"uriPattern": "^/sap/bc/ina/(.*)$", "allowedOrigin":
        [{"host":"<sac-host>", "protocol":"https"}], "allowedMethods": ["GET",
        "POST", "OPTIONS"], "allowedHeaders": ["Origin", "Accept",
        "X-Requested-With", "Content-Type", "Access-Control-Request-Method",
        "Access-Control-Request-Headers", "Authorization", "X-Sap-Cid",
        "X-Csrf-Token"], "exposeHeaders": ["Accept", "Authorization",
        "X-Requested-With", "X-Sap-Cid", "Access-Control-Allow-Origin",
        "Access-Control-Allow-Credentials", "X-Csrf-Token", "Content-Type"]}]'
      INCOMING_CONNECTION_TIMEOUT: 600000
      TENANT_HOST_PATTERN: '^(.*)-<space>-haa.cfapps.(.*).hana.ondemand.com'
resources:
  - name: haa-uaa
    type: com.sap.xs.uaa
    parameters:
      path: ./xs-security.json
  - name: db-connector
    type: org.cloudfoundry.user-provided-service
    parameters:
      config:
        url: "jdbc:sap://<jdbc_server>:<jdbc_port>?encrypt=true&validateCertificate=false"
Note
You can find your JDBC server name and port on your SAP HANA Service Dashboard, under the Direct
SQL Connectivity endpoint.
6. In the SAP BTP Cockpit, in your subaccount overview, find the CF API endpoint.
cf api <api-endpoint>
cf login
cf deploy haa.mtar
After deployment, two new services and three new applications should appear in the SAP BTP Cockpit.
Procedure
1. Select the haa-java application, click Service Binding, and select the HDI service.
2. Click Show Sensitive Data, locate the hdi_user, and copy the user ID.
3. In the SAP BTP Cockpit, access the SQL Console and run the command:
Replace <db_user> with the HDI user ID you copied in the previous step.
4. Go to the subaccount and click Role Collections.
Procedure
Procedure
1. In the SAP BTP Cockpit, under Security > Trust Configuration, set the default IDP as inactive.
Procedure
5. Click Save.
6. In the Authorization Mode area for the new user, click Assign Roles.
Note
If your HAA version is earlier than 1.5.2, add EXECUTE_MDS_DEV instead of EXECUTE_MDS.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
Note
After creating a connection to a remote system and before creating a model from a remote system, you
must log off and log on to SAP Analytics Cloud again.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
To access SAP HANA Cloud data without having to set up the SAP HANA Analytics Adapter, you can create a
live data connection using the “SAP HANA Cloud” connection type.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
Note
This connection type works only in Cloud Foundry environments (non-SAP data centers). For Neo
environments (SAP data centers), see Live Data Connection to SAP HANA Cloud Using a Direct Connection
and SSO [page 315].
SAP Analytics Cloud can be hosted either on SAP data centers or on non-SAP data centers (for
example, Amazon Web Services (AWS)). Determine which environment SAP Analytics Cloud is hosted on by
inspecting your SAP Analytics Cloud URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center (Neo environment).
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center (Cloud Foundry
environment).
Note
To grant access to SAP HANA Cloud data, you'll need the access_role and
external_privileges_role for the HDI container. For more information, see Assign Roles to a
Database User in the SAP HANA Cockpit documentation.
Related Information
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
7. Under Authentication Method select User Name and Password, or for single sign-on, select SAML Single
Sign On.
8. For the user name and password option:
a. Provide a user name that exists in SAP HANA Cloud as a database user.
The database user needs to have roles and privileges for InA, and needs to have access to the SAP
HANA artifacts that are needed to retrieve data for the InA request.
b. Select OK, and skip the remaining steps.
9. For SSO only, copy the SAML Identity Provider (IdP) from the Provider Name field in the connection dialog,
and also download the certificate from this dialog.
You'll need these two items to perform the trust configuration to set up SAML SSO.
Continue with the next step now, before you select OK to finish creating this connection.
Procedure
Context
You need to create a new user, or modify an existing user, and grant the proper role.
Procedure
1. Go to User Management.
7. In your SAP Analytics Cloud tenant, go to System > Administration, and select the Security tab.
Copy the EMAIL field from the Security Users tab in SAP Analytics Cloud.
9. Or, if SAML Single Sign-On is selected, then you'll need to look at the SAML Single Sign-On (SSO)
Configuration section, Step 3: Choose a user attribute to map to your identity provider, and note which
option is selected in the User Attribute field.
• If the User Attribute is set to Email, then the email ID of the user who logs in to the SAP Analytics
Cloud tenant needs to be mapped (as shown in a later step). Copy the EMAIL field from the Security
Users tab in SAP Analytics Cloud.
• If the User Attribute is set to USER ID, then instead, copy the USER ID field from the Security
Users tab in SAP Analytics Cloud.
• If the User Attribute is set to Custom SAML User Mapping, there will be a new column SAML USER
MAPPING in the Security Users tab in SAP Analytics Cloud, that needs to be mapped with the
SAP HANA Cloud database user.
10. Take the value from the appropriate column for the SAP Analytics Cloud user, and enter that in the External
Identity field on the Authentication tab in User Management.
11. Click Save.
12. To add the required roles and privileges, click Assign Roles or Assign Privileges.
For another user from the same SAP Analytics Cloud tenant to be able to access the same SAP HANA
Cloud system, you'd need to create another user in SAP HANA and map the appropriate ID, or use the
same SAP HANA user and map the appropriate ID.
You can also add another identity provider to an existing SAP HANA user, because multiple SAML identities
can be attached to one SAP HANA user. That way, you can access the SAP HANA Cloud instance from
multiple SAP Analytics Cloud tenants using a single SAP HANA database user, if desired.
To access SAP HANA Cloud on an SAP Business Technology Platform (BTP) system that is running a Cloud
Foundry (CF) environment, you can create a live data connection using the SAP HANA Analytics Adapter for
Cloud Foundry.
• Users with any of these permissions for Connections: Create, Read, Update, Delete, and Maintain.
• Users with Execute permission for Other Data Sources.
• Users with any of these standard application roles: Admin, Application Creator, BI Content Creator, BI
Admin, and Planner Reporter.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
HANA server before creating the connection in your SAP Analytics Cloud tenant.
Note
We highly recommend that you use the SAP HANA Cloud connection type (see details here [page 311]) for
Cloud Foundry environments. However, this direct connection type is still available for both Neo and Cloud
Foundry environments.
• SAP HANA Cloud must be installed on an SAP Business Technology Platform (BTP) system that uses a
Cloud Foundry (CF) environment.
• A SAML 2 Identity Provider (IdP) must be configured. Both the BTP and SAP Analytics Cloud must use the
same IdP.
For instructions on setting up SAML 2.0 on the BTP, see Trust and Federation with SAML 2.0 Identity
Providers.
For instructions on setting up SAML in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider
[page 2895].
• You must have HDI containers available on your BTP system. For more information, see Set Up an HDI
Container.
Note
The following SAP Analytics Cloud features are not supported with this connection type at this time:
Related Information
Procedure
cf install-plugin mta-plugin-windows.exe -f
To verify that the JDK is ready, check the version of the JDK you have installed. For example:
javac -version
To verify that Maven is ready, check the version of Maven you have installed. For example:
mvn --version
To verify that Node is ready, check the version of Node you have installed. For example:
node --version
Also verify that npm (Node Package Manager) is installed with Node. For example:
npm --version
6. Set npm for the sap registry modules with the command:
Procedure
Context
Procedure
_schema-version: "2.0.0"
ID: com.sap.xsahaa
version: "1.0"
modules:
  - name: xsahaa-entry
    type: javascript.nodejs
    path: approuter/
    requires:
      - name: xsahaa-java
        group: destinations
        properties:
          name: xsahaa-java
          url: ~{url}
          forwardAuthToken: true
          timeout: 600000
      - name: xsahaa-uaa
    properties:
      CORS: >
        [
          {
            "uriPattern": "^/sap/bc/ina/(.*)$",
            "allowedOrigin": [{"<orca-tenant-host>" : "", "protocol" : "", "port" : ""}],
            "allowedMethods": ["GET", "POST", "HEAD", "OPTIONS", "PUT", "DELETE"],
            "allowedHeaders": ["Origin", "Accept", "X-Requested-With", "Content-Type",
              "Access-Control-Request-Method", "Access-Control-Request-Headers",
              "Authorization", "X-Sap-Cid", "X-Csrf-Token"],
            "exposeHeaders": ["Accept", "Authorization", "X-Requested-With", "X-Sap-Cid",
              "Access-Control-Allow-Origin", "Access-Control-Allow-Credentials",
              "X-Csrf-Token", "Content-Type"]
          }
        ]
      INCOMING_CONNECTION_TIMEOUT: 600000
  - name: xsahaa-java
    type: java.tomee
    path: java-xsahaa.war
    provides:
Note
7. Edit the xs-security.json file in the folder <HAA_ROOT>, and replace “SalesApp” with “HAAApp”.
8. Edit the xs-app.json file in the folder <HAA_ROOT>/haa-entry and replace this code block:
{
  "source": "^/(.*)",
  "localDir": "resources",
  "authenticationType": "xsuaa",
  "scope": "$XSAPPNAME.USER"
}

with this code block:

{
  "source": "^/(.*)",
  "localDir": "resources",
  "authenticationType": "none"
}
Procedure
cf api <api-endpoint>
cf login
cf deploy <HAA_ROOT>.mtar
After deployment, two new services and three new applications should appear in the SAP BTP Cockpit.
7. To authenticate with a named user, set USE_NAMED_USER=true.
Procedure
Procedure
If the certificate collection already exists and you want to edit it, you'll need to restart your SAP HANA
instance afterward.
1. In the Database Overview, click Certificate Collections > Add Collection to create a certificate
collection.
2. Type a name for your certificate collection, and click OK.
3. Select Add Certificate to open the Select Certificate dialog.
4. Select the certificate you created earlier, and then click OK.
5. Click Edit Purpose.
6. In the Edit Purpose dialog, select JWT in the Purpose field, choose the JWT provider you created earlier
in the Providers field, and then click Save.
Procedure
1. In the SAP BTP Cockpit, under Security > Trust Configuration, set the default IDP as inactive.
Context
This step depends on the states of the User-Provided Variables from the HAA service in the preceding section
Build and deploy the analytics adapter. If USE_NAMED_USER is true, then follow all the substeps below, down
to and including “Click Object Privileges”.
Note
If your HAA version is earlier than 1.5.2, add EXECUTE_MDS_DEV instead of EXECUTE_MDS.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP HANA system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
Note
After creating a connection to a remote system and before creating a model from a remote system, you
must log off and log on to SAP Analytics Cloud again.
Note
You can schedule a story based on SAP HANA Cloud when the connection type is SAP HANA Cloud.
If you’re using a Direct connection and still see the option to enable scheduling under the Advanced
settings, you can't schedule a publication.
You can set up Smart Insights for live SAP HANA direct connections.
Using live connections means you can access your on-premise data in real time without having to deal with
data updates or uploads. As data stays on-premise, there is no storage in the cloud, so your data security is
never an issue.
Smart Insights requires metadata about each remote model. The model metadata required to generate Smart
Insights on live SAP HANA models consists of cardinality statistics that help to determine the most interesting
candidates for further calculations related to Smart Insights. This metadata is generated and stored on SAP
Analytics Cloud servers to speed up the responsiveness of Smart Insights, and it improves the ability of the
Smart Insights calculation to determine the most interesting dimensions and dimension members in a model.
The model metadata is only stored when enabled on a live SAP HANA connection.
Before you can generate this model metadata, you need to enable the generation of model metadata on the live
SAP HANA direct connection. For more information, see the following diagram:
Related Information
Connecting to Live SAP HANA to Access Smart Insights - Overview [page 327]
SAP HANA Technical User and Security Model on Remote System [page 329]
Enabling the Generation of Model Metadata [page 331]
Manually Generating Model Metadata on a Live SAP HANA Model [page 332]
Automatic Generation of Model Metadata [page 332]
Smart Insights [page 2003]
You can access Smart Insights on the models in your live SAP HANA direct connections.
Connection Types
Smart Insights works with live SAP HANA models using the Direct (CORS – Cross-Origin Resource Sharing)
connection type. Direct connectivity is a simple approach: it doesn’t require any additional hardware, is easy
to set up, and provides superior performance. For more information on the Direct connection type, see the
chapter called Direct Live Connection with CORS.
For more details on creating your live SAP HANA direct connections, see the chapter Live Data Connection to
SAP HANA On-Premise Using a Direct Connection [page 279].
Note
The most secure and recommended approach for a Smart Insights metadata connection to an on-premise
SAP HANA system is through the Cloud Connector. The Cloud Connector is a component available with the
SAP Business Technology Platform (BTP). The Cloud Connector needs to be deployed inside the corporate
network, where there is direct access to the on-premise SAP HANA system. This creates a secure tunnel
between the SAP HANA system’s HTTP InA endpoint and the system where SAP Analytics Cloud is hosted.
Communication across the tunnel is secured using Transport Layer Security (TLS). It can be audited,
monitored and controlled using the Cloud Connector administration user interface. The Cloud Connector
removes the need to explicitly open any communication port in the firewall to the on-premise system. It
acts as a reverse invoke proxy, as any communication is initiated by the Cloud Connector. The trust to
the Cloud Connector is also configured on the SAP Analytics Cloud side to a specific SAP Analytics Cloud
subaccount. The Cloud Connector component is available on Linux, Windows, and Mac OS X.
For more information, see Live Data Connections Advanced Features Using the Cloud Connector [page
445].
To benefit from Smart Insights on live SAP HANA models, it helps to understand the flow of information
between the web browser, SAP Analytics Cloud, and your SAP HANA data source.
In the Overview diagram - connecting to live SAP HANA to access Smart Insights, the Smart Insights Metadata
connection is used to generate model metadata (statistics) about the SAP HANA modeling view, and store
that metadata in SAP Analytics Cloud. The model metadata is generated against models on a live SAP HANA
connection using a technical user and the other connection settings detailed in the New HANA Live Connection
dialog. To be able to generate this model metadata, it's essential that the Enable model metadata generation
checkbox is selected in the Advanced Features section of the New HANA Live Connection dialog. For more
information about how to do this, see the chapter called Enabling the Generation of Model Metadata [page
331].
Smart Insights can then be used against the on-premise data accessed from a web browser inside your
organization's domain.
As with the normal Direct (CORS) connection type, the other flows from the browser are:
• Get/Post requests from the web browser to SAP Analytics Cloud are dedicated to metadata.
• Get/Post requests from the web browser to the on-premise data source are dedicated to data. Data
remains inside the customer domain.
For more information see the chapter called Live Connections Overview.
To use Smart Insights on live SAP HANA models, some model metadata needs to be generated first.
The model metadata required to generate Smart Insights on live SAP HANA models is actually cardinality
statistics that help to determine the most interesting candidates for further calculations related to Smart
Insights.
The load on an SAP HANA system can be managed by selectively applying restrictions and priorities to how
resources are used, such as the CPU, the number of active threads, and memory. Settings can be applied
globally, or at the level of individual user sessions by using workload classes. The SAP HANA technical user
used to generate Smart Insights Metadata can be assigned to a workload class.
• Create a workload class controlling either the memory or the number of threads. See the SAP HANA 2.x
documentation for more information.
• Create a workload mapping to bind the workload class to the SAP HANA technical user used to generate
Smart Insights Metadata. See the SAP HANA 2.x documentation for more information.
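As a sketch, the two steps above might look like the following SQL, run by a HANA administrator. The class name, mapping name, user name (SI_METADATA_WLC, SI_METADATA_MAP, SI_TECH_USER), and the limit values are placeholders; adjust them to your own system and sizing requirements.

```sql
-- Cap per-statement memory (GB) and thread usage for the technical user
-- that generates Smart Insights metadata. Names and limits are examples.
CREATE WORKLOAD CLASS "SI_METADATA_WLC"
  SET 'STATEMENT MEMORY LIMIT' = '4',    -- at most 4 GB per statement
      'STATEMENT THREAD LIMIT' = '8';    -- at most 8 threads per statement

-- Bind the workload class to the (hypothetical) technical user.
CREATE WORKLOAD MAPPING "SI_METADATA_MAP"
  WORKLOAD CLASS "SI_METADATA_WLC"
  SET 'USER NAME' = 'SI_TECH_USER';
```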
Related Information
Live Data Connection to SAP HANA On-Premise Using a Direct Connection [page 279]
Live Data Connections Advanced Features Using the Cloud Connector [page 445]
Configure System Settings [page 2760]
Enabling the Generation of Model Metadata [page 331]
SAP HANA Technical User and Security Model on Remote System [page 329]
Technical User
We recommend that you create a new technical user for model metadata generation. The technical user must
have the following InA role assigned – sap.bc.ina.service.v2.userRole::INA_USER.
The technical user supports only basic authentication, that is, user name and password. Basic
authentication needs to be enabled on the SAP HANA XS administration side.
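The InA role mentioned above is typically granted through the repository procedure shown below; the user name SI_TECH_USER is a placeholder for your own technical user.

```sql
-- Assign the InA user role to the (hypothetical) technical user.
CALL _SYS_REPO.GRANT_ACTIVATED_ROLE(
  'sap.bc.ina.service.v2.userRole::INA_USER', 'SI_TECH_USER');
```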
Analytic Privileges
Analytic privileges grant different users access to different portions of data in the same view based on their
business role. Within the definition of an analytic privilege, the conditions that control which data users see are
either contained in an XML document or defined using SQL.
SAP HANA analytic privileges enable the access control to SAP HANA modeling views at the data level, for
example, by filtering out certain values in a column. If the SAP HANA modeling view has the Apply Analytic
Privileges property set, then analytic privileges can also be used to restrict data points (i.e. dimensions or
dimension members).
To collect statistics across all data points, the SAP HANA technical user can be assigned the special analytic
privilege _SYS_BIC_CP_ALL. This means that analytic privileges that filter data would not be applied to that
user. We don't recommend doing this in production; use more granular analytic privileges instead.
For more information, see the chapter called Analytic Privileges in the SAP HANA Administration Guide for SAP
HANA Platform on the SAP Help Portal at https://help.sap.com/viewer/index.
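If you do decide to grant the blanket privilege in a test system, it might be assigned with the repository procedure below; SI_TECH_USER is a hypothetical user name.

```sql
-- Grants the _SYS_BIC_CP_ALL analytic privilege, which bypasses
-- data-level filtering for this user. Not recommended in production.
CALL _SYS_REPO.GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE(
  '"_SYS_BIC_CP_ALL"', 'SI_TECH_USER');
```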
Object Privileges
The technical user must have the appropriate object level privileges to be able to read the data from a view.
• The SELECT privilege on the _SYS_BIC schema. For more information, you can refer to the SAP Note
2353833 called SAP Analytics Cloud cannot read data from an SAP HANA live connection (remote
SAP HANA System).
• The SELECT privilege on all dependent objects, such as tables or other dependent modeling views.
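The two SELECT grants above might look like this; SI_TECH_USER and MY_DATA_SCHEMA are placeholder names for your technical user and a schema holding dependent objects.

```sql
-- Read access to activated modeling views.
GRANT SELECT ON SCHEMA "_SYS_BIC" TO SI_TECH_USER;

-- Read access to a (hypothetical) schema holding dependent tables or
-- views; repeat for every schema that the modeling views depend on.
GRANT SELECT ON SCHEMA "MY_DATA_SCHEMA" TO SI_TECH_USER;
```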
Related Information
You can create multiple live SAP HANA connections to the same SAP HANA system.
For example, one connection could have model metadata generation enabled, and another could have it
disabled. This approach allows more sensitive models to be used on a separate connection.
Metadata is only generated and stored for SAP HANA modeling views that have been explicitly added as a data
source in SAP Analytics Cloud.
Related Information
SAP HANA Technical User and Security Model on Remote System [page 329]
You need to enable the generation of model metadata when setting up a new live SAP HANA connection.
Context
The Smart Insights lightbulb remains disabled until the required configuration steps are performed at the
connection level. When you enable model metadata generation at the connection level, you’re also enabling it
for all models on that connection. So, any SAP Analytics Cloud models using the live SAP HANA connection,
or stories that embed the live SAP HANA connection as a data source, are then enabled for model metadata
generation.
Procedure
Related Information
Live Data Connection to SAP HANA On-Premise Using a Direct Connection [page 279]
Configure Your On-Premise Systems to Use the Cloud Connector [page 446]
Manually Generating Model Metadata on a Live SAP HANA Model [page 332]
Model metadata is automatically generated and stored on the following user actions:
• A new model is created for a live SAP HANA connection. The connection must be configured to enable
model metadata generation.
• An SAP Analytics Cloud user triggers Smart Insights by selecting the light bulb icon on a data point for the
first time, before model metadata is generated. This can happen if a user with the maintain privilege on
the connection has enabled model metadata generation on an existing connection, but hasn't yet manually
triggered generation in the Generate Model Metadata dialog.
Related Information
You can manually generate model metadata in Connections using the Generate Model Metadata dialog.
Context
Manually generating model metadata is required for existing connections. Only a user with the maintain
privilege on the live connection can use the Generate Model Metadata dialog box to trigger the generation of
model metadata against all models for a live SAP HANA connection.
For more information about the automatic generation of model metadata, see the chapter called Automatic
Generation of Model Metadata [page 332].
Procedure
Model metadata generation is queued in SAP Analytics Cloud for each model on the live connection. When
the model metadata is generated, users can access Smart Insights in stories based on the models using
this live connection, as Smart Insights relies on metadata about the underlying HANA modeling views. Your
users then have access to Smart Insights about how their live SAP HANA data point has changed over time,
what the top contributors are, and how it’s calculated.
Related Information
You can check the status of model metadata generation on your live SAP HANA connection in the Generate
Model Metadata dialog.
Prerequisites
You've already generated model metadata on your live SAP HANA connection.
You can use the Generate Model Metadata dialog to check the status of model metadata generation for the
models on the live SAP HANA connection. The dialog can also be used to view helpful error messages that
may arise if there are any problems with a particular model, for example, with connectivity or authentication. If
errors arise, it may be necessary to check your credentials, or edit your connection settings.
Procedure
You can see the number of models in your connection for which model metadata was generated. You can
also see a list of any models for which model metadata wasn't generated.
5. Expand the Troubleshooting section to understand why model metadata can't be generated for one or
more models.
• You can refer to the following SAP Note for troubleshooting information - Issues accessing Smart
Insights on a SAP HANA Live connection - 2958669 .
• To see the full error message, hover over the text.
Model statistics are generated using the Smart Insights metadata connection that connects the on-premise
SAP HANA system to SAP Analytics Cloud, typically through the Cloud Connector.
The New HANA Live Connection dialog contains an Advanced Features section to manage the metadata connection.
When model metadata is triggered manually from the Generate Model Metadata dialog, the statistics are
generated one model at a time. This helps to balance the workload on the live SAP HANA system. The statistics
are calculated using InA queries. The queries are executed using SAP HANA's sophisticated MDS component
(Multi-Dimensional Services), similar to the way most SAP Analytics Cloud BI queries are executed. It typically
takes between a few seconds and a few minutes to generate the model metadata for one model, depending on
the size of the model, both in terms of data and number of dimensions.
Model metadata is deleted in the following cases:
• A model is deleted.
• A connection is deleted. This also deletes the metadata for all models in that connection.
• A user with the maintain privilege on the live connection changes the Enable model metadata
generation checkbox from checked to unchecked.
• The data source of a model is changed.
The model metadata stored is the minimum required to calculate the most interesting candidate dimensions or
hierarchical dimension members for Smart Insights. For a model, SAP Analytics Cloud stores the following for
each dimension:
Data changes can affect the model metadata, so we recommend that you manually generate model metadata
periodically. We also recommend that you manually generate model metadata if, for example, the underlying
SAP HANA modeling view has changed structure or has new tables.
Related Information
The following usage restrictions apply when working with live SAP HANA connections and Smart Insights.
Connection Types
Data Sources
Smart Insights is only supported on live SAP HANA 2.0 SPS 5 or higher.
Sizing
If your model contains more than 20 million rows of data, we may not be able to generate model metadata in all
cases. In this scenario, dimension usage statistics will be used as a fallback option if possible.
If you see a query time out error listed beside a model in the Troubleshooting section of the Generate Model
Metadata dialog, access to Smart Insights on this model in your live connection may not be possible. The
timeout could be caused by one of the following:
You'll notice the following unsupported scenarios when working with Smart Insights on live SAP HANA:
• Cross calculations in tables don't support aggregations for Smart Insights on live SAP HANA.
• Smart Insights doesn't support currency conversion on live SAP HANA models.
• When viewing the top contributor insights on your data point, Smart Insights doesn't display unassigned
values in the top contributor summary chart, like it does when working with acquired models.
• The grouping feature of a model with live SAP HANA connection isn't supported for Smart Insights.
• On live SAP HANA, top contributor Smart Insights aren't supported for linked models.
For more information specifically on Smart Insights, see the chapter called Smart Insights [page 2003].
If you have not set default values for mandatory variables and input parameters in your SAP HANA modeling
views, the model metadata can't be generated. In this scenario, dimension usage statistics will be used as a
fallback option if possible.
Using live SAP HANA data means you can access your on-premise data in real time without having to deal with
data updates or uploads. As data stays on-premise, there is no storage in the cloud, so your data security is
never an issue.
You can use live SAP HANA data to create predictive models with Smart Predict. You'll get reports and KPIs
with predictive insights that will enhance stories in SAP Analytics Cloud.
This section is targeted at SAP Analytics Cloud administrators setting up the SAP live Data access. The major
set up and configuration stages are listed in the related topics. Once the live data access is available, refer to
the section About Preparing Datasets for Predictive Scenarios [page 553] for information on using live datasets
with Smart Predict or look at this blog .
Live connectivity support in Smart Predict allows you to train and apply a predictive model using business
data that stays in your on-premise SAP HANA system. The predictions are also stored in your on-premise SAP
HANA system. Up until now the data had to be uploaded, or acquired in SAP Analytics Cloud (SAC), before
being processed within the SAC platform. Live connectivity support means that your business data stays where
it is, removing the need for physical data movement.
The following diagram shows how the different components of the live data connectivity fit in with each other:
Restriction
You can use Smart Predict on SAP HANA on-premise only. Cloud deployments of SAP HANA are not
supported currently.
Using Smart Predict on live SAP HANA Data is based on the following principles:
• Your business data stays within the on-premise SAP HANA system when a predictive model is trained or
applied from SAP Analytics Cloud.
• The execution of the predictive algorithms is delegated to the on-premise SAP HANA system using the SAP
Automated Predictive Library (APL).
Related Information
Before you start configuring the different components to set up live SAP HANA connectivity for Smart Predict,
here is a quick run through some of the questions you are likely to have before starting:
Q: With live data, what happens during training and application of my predictive models?
A: Your data stays on-premise, so there is no data movement during the training and application workflows.
Smart Predict sends SQL instructions to the on-premise SAP HANA system; these instructions initiate and
coordinate the execution of these workflows. Most of the work, including reading and writing the data, is
delegated to the SAP Automated Predictive Library (APL) that is installed on your on-premise SAP HANA
system.
During predictive model debriefing, Smart Predict queries predictive model metadata, for example the
predictive model performance indicators and influencer contributions, stored in the on-premise SAP HANA
system.
However, there is some data movement when displaying live datasets in SAP Analytics Cloud. SQL queries go
through the Cloud Connector, and the data used for the purpose of data preview goes from the on-premise
system to the cloud system.
Q: What sort of data sources are available for predictive modeling with live data?
A: Currently, the live connectivity for Smart Predict supports on-premise SAP HANA systems. This does not
include SAP HANA Cloud systems.
You can create predictive models on SAP HANA tables (row store or column store), and SAP HANA SQL Views.
Note
While Smart Predict doesn't support SAP HANA Calculation Views, you can create predictive models using
an SQL View built manually on top of a Calculation View. You can also materialize data in the calculation
views into tables.
Q: How does Smart Predict connect with a SAP HANA on-premise system?
A: The Cloud Connector removes the need to explicitly open any communication port on the on-premise system;
it acts as a reverse invoke proxy, as any communication is initiated by the Cloud Connector. The Cloud
Connector component is available on Linux, Windows, and Mac OS X. For more information, see SAP Cloud
Connector.
Q: How does Smart Predict authenticate with the SAP HANA system?
A: Authentication to the SAP HANA system relies on the credentials of a SAP HANA technical user defined
while setting connection parameters to the on-premise system; see Configuring a SAP HANA technical User
in the On-Premise SAP HANA System [page 344] for information on doing this. Connection parameters are
securely stored in a data repository entity on the SAP Analytic Cloud side. All users using the same connection
entity will share the same SAP HANA technical user. It's possible to define several connections with different
technical users pointing to the same SAP HANA system.
The SAP HANA technical user must be granted the following rights:
• Read permissions for training and application tables and SQL views.
• Write permissions for output tables.
• Write permissions for the schema in which predictive models, debriefing, and predictions will be persisted.
• The right to execute APL functions: you need the role sap.pa.apl.base.roles::APL_EXECUTE, which
is assigned in SAP HANA once APL has been installed.
Note
If you install a SAP HANA 2.0x version, you also need to install the EPM-MDS plug-in to be able
to consume the predictions generated by Smart Predict in a SAP Analytics Cloud story. For more
information, refer to SAP Note 2444261.
• Install the correct version (1911 or higher) of the Automated Predictive Library (APL).*
• Create a write-back schema that corresponds to the Data Repository. See Adding and Configuring the Data
Repository in SAP Analytics Cloud [page 346] for information. It's used to store the predictions and the
predictive models.
• Create a SAP HANA technical user to access the Data Repository, and setup privileges (see above).*
• Create a SAP HANA workload class if you need it:
• Create a workload class controlling either the memory or the number of threads. See the SAP HANA
2.x documentation for more information.
• Create a workload mapping binding the previously created workload class to the SAP HANA technical
user defined in the Data Repository. See the SAP HANA 2.x documentation for more information.
• Install and configure the Cloud Connector. See Configuring the Cloud Connector [page 345]
To allow Smart Predict to use live datasets in your on-premise SAP HANA system, you have to complete the
following configuration pre-requisites:
• Configure SAP HANA 2.0 SPS05 revision 51+. For information on installing and configuring SAP HANA,
refer to the SAP HANA Master Guide.
• Create a schema on the SAP HANA system, which will be used by Smart Predict to store predictions and
predictive models. This is described in the SAP HANA Developer Guide.
• Implement these connectivity steps:
• Install the latest version of SAP Automated Predictive Library (APL). The SAP Automated Predictive
Library is an Application Function Library (AFL) that exposes automated machine learning capabilities in
SAP HANA. The database role sap.pa.apl.base.roles::APL_EXECUTE must be assigned to the SAP HANA
technical user to ensure full access to the schema, and execution of functions. Installing the SAP
Automated Predictive Library is a pre-requisite. While we recommend that you install the latest version,
please note that if you haven't installed the latest version, you should at least have version 2201 or
higher. See Installing SAP HANA Automated Predictive Library (APL) [page 342].
• Configure the SAP HANA technical user in your on-premise SAP HANA system. You need to assign:
  • READ access on the data used for the training and application of the predictive model, as well as the
  APL role sap.pa.apl.base.roles::APL_EXECUTE.
  • WRITE permissions for output tables and for the schema in which predictive models, debriefing, and
  predictions will be persisted.
  See Configuring a SAP HANA technical User in the On-Premise SAP HANA System [page 344].
• Install the Cloud Connector. The Cloud Connector is a component available with the SAP Business
Technology Platform (BTP). The Cloud Connector is deployed on the on-premise system and allows any
TCP connection from the BTP system to tunnel securely through the Transport Layer Security (TLS) to the
on-premise system. See Installing the Cloud Connector [page 463].
• Configure the Cloud Connector to enable SAP Analytics Cloud to connect to the SAP HANA system. See
Configuring the Cloud Connector [page 345].
• Add and configure the data repository to SAP Analytics Cloud. The data repository references your
on-premise SAP HANA system. Based on the rights granted in each of the previous stages of the live
connectivity setup, the SAP HANA technical user can see the schemas to train and apply predictive
models. See Adding and Configuring the Data Repository in SAP Analytics Cloud [page 346].
Restriction
You can use Smart Predict on SAP HANA on-premise only. Cloud deployments of SAP HANA are not
supported.
If you have installed the right Automated Predictive Library version that corresponds to your SAP HANA
system, you can use live datasets in your SAP HANA environment.
If you still need to install Automated Predictive Library, refer to the related links below.
Related Information
Go to Software Downloads , select Download and enter as search criteria APL 4 for HANA 2 along with the
latest version number of Automated Predictive Library:
SAP HANA 2.0 SPS05 revision 51+: Download APL 4 for SAP HANA 2.0 SPS05 revision 51+. Select either
LINUX ON POWER LE 64 BIT or LINUX ON X86_64BIT, and select the latest version, or at least version
2201 or higher.
Note
To consume your predictions in a SAP Analytics Cloud story, you also need to install the EPM-MDS
plug-in.
Note
Each new version of SAP HANA Automated Predictive Library (APL) introduces new features and bug
fixes. You should update your SAP HANA system with the latest version of APL to enjoy the most recent
improvements in SAP Analytics Cloud Smart Predict. Smart Predict supports only APL versions that
were released in the past 18 months.
Perform the steps below to install the function library on the database and prepare your environment.
1. Extract the SAP HANA APL installer from the package. See Extracting the package contents.
2. Install SAP HANA APL on the SAP HANA database server. See Installing SAP HANA APL.
3. Configure SAP HANA APL and SAP HANA. See Running Admin scripts.
You can access the full SAP HANA APL guide for any other information that you require for the installation and
configuration in the SAP HANA Automated Predictive Library Reference Guide.
Once you have installed and configured SAP HANA APL, you can configure the SAP HANA technical user in
your on-premise SAP HANA system. See the previous page Setting Up Live SAP HANA Connectivity [page 341]
for more information.
Configuring a SAP HANA technical User in the On-Premise SAP HANA System [page 344]
Prerequisites
SAP HANA APL is installed and configured. Refer to the previous section for installation information: Installing
SAP HANA Automated Predictive Library (APL) [page 342]
Context
You configure the SAP HANA technical user in your on-premise SAP HANA system:
Procedure
1. Create a database schema in your on-premise SAP HANA system, in which predictions are persisted when
a user initiates a predictive model training in SAP Analytics Cloud.
2. Create a technical user in the on-premise SAP HANA system. This user needs the following privileges and
roles:
• SELECT privileges on the SAP HANA tables/SQL views used by SAC users for predictive models
training and application in SAP Analytics Cloud.
• Privileges on the “write-back” schema that allows the user to write back to the data repository, for
example CREATE ANY, UPDATE, INSERT, DELETE and SELECT.
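Steps 1 and 2 above might be started with statements like the following. The schema name, user name, and password are placeholders, and your security policy may require different user options:

```sql
-- Hypothetical write-back schema for predictions and predictive models.
CREATE SCHEMA "SP_WRITEBACK";

-- Hypothetical technical user; choose a strong password of your own.
CREATE USER SP_TECH_USER PASSWORD "Example1Password"
  NO FORCE_FIRST_PASSWORD_CHANGE;
```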
Example
Here is a sample SQL script that you can use:

GRANT SELECT, CREATE ANY, INSERT, UPDATE, DELETE ON SCHEMA "<write_back_schema_name>" TO <technical_user_name>;

CALL _SYS_REPO.GRANT_ACTIVATED_ROLE ('sap.pa.apl.base.roles::APL_EXECUTE','<APL_user>');
Note
You can access full information on user privileges, installation and configuration of SAP HANA APL
in the SAP HANA Automated Predictive Library Reference Guide.
Prerequisites
The Cloud Connector is installed. For installation information, see Installing the Cloud Connector [page 463]
Context
You configure the Cloud Connector to enable SAP Analytics Cloud to connect to the SAP HANA system.
Procedure
1. Follow steps 1 to 7 from Configuring the Cloud Connector [page 465].
2. Enter the mapping information:
Internal Host: Add the internal host name of the SAP HANA server.

Note
The default SAP HANA port for single-tenant HANA instances is 3xx15, where xx is the instance
number (for example, 30015 for instance 00), and the default HTTPS port is 8443. For multi-tenant
instances, refer to this procedure to find out the correct port information if you are working on SAP
HANA 2.x. If you are working on SAP HANA 1.x, refer to this procedure.

(Optional) Virtual Host: The default virtual host and port are the internal host and port. You can
rename the host and port so that the internal host name and port are not exposed. The virtual host
name and port will be used when configuring the data repository.

(Optional) Virtual Port: The port number used by the virtual host.
Prerequisites
To create and configure data repositories, you need to have one of the security roles Predictive Admin or BI
Admin, or one of the Admin roles.
Context
You add and configure a data repository that references your on-premise SAP HANA system. As soon as you
apply a predictive model, the generated predictions will be stored in a distinct table of the schema referenced
as the write-back schema.
Procedure
1. In SAP Analytics Cloud, from the side navigation, choose System Administration Data
Source Configuration .
It's possible to define multiple data repositories in SAP Analytics Cloud and to use distinct technical
users.
Your users have generated predictions in a live dataset and now want to use the results in SAP Analytics Cloud
live BI stories.
Context
Once your users have generated predictions in a live dataset, predictions are stored in tables in the write-back
schema of your SAP HANA on premise system. So that your users can consume these predictions in SAP
Analytics Cloud, you need to create at least one calculation view (or add the table containing predictions and
join it with actuals in the calculation views if the view already exists).
Caution
The process described below can change slightly depending on your SAP HANA version. Make sure to
consult the SAP HANA documentation related to your current version when you click one of the
related links. You can easily navigate to your documentation version by using the dropdown buttons in the
help portal.
Collect the write-back schema name and the data source name information by displaying the live output
dataset from the Files section of SAP Analytics Cloud.
Note
The data source name is currently truncated, and the copy and paste function is not available. Hover over
the data source name to see the full data source ID to identify the table (this means that in some very
unlikely cases, there might be several tables with the same starting ID).
Procedure
1. Open SAP HANA Studio (the following steps can also be done through SAP Web IDE). For more information
on SAP HANA Studio you can refer to SAP HANA Studio, and for more information on SAP Web IDE, refer to
Create Graphical Calculation Views.
2. Connect to the SAP HANA on-premise system that corresponds to the remote data repository used in the
Smart Predict training and application workflows.
Caution
3. Create a calculation view from the content folder in SAP HANA studio and choose:
• Subtype: Standard
• Type: Graphical
• Data Category: Cube
Note
The settings mentioned above are the simplest settings to create an SAP HANA view. If you have the
expertise, you can change those settings according to your preferences as long as they are supported
by the live data acquisition.
Remember
You need at least one measure for this type of calculation view.
The following product restrictions apply when you create a live data connection to an SAP HANA or SAP
Business Technology Platform HANA Service.
Models
• Definition of additional measures with complex formulas (LOOKUP, CAGR, SMA, YoY, and so on).
• Using variables within calculations, if the underlying variable or input parameter in the SAP HANA view
does not have a default value.
• Only the cube data type is supported. For more information, see Supported Data Categories for Calculation
Views.
• Geospatial is not currently supported for Live Data Connections to SAP HANA HDI containers on BTP on
Cloud Foundry (CF) systems.
Stories
The supported formats are: YQ, YQM, YQMD, YM, YMD, YWD, and the half-year variants YHQ, YHQM, YHQMD, YHM, YHMD.
Each letter represents a hierarchy level or granularity. These are the expected formats:
• Y = Year. Format: YYYY
• The following features are only enabled for specific SAP HANA revisions. For more information, see the
Data Connectivity - Live section of the System Requirements and Technical Prerequisites [page 2723].
Note
These features are not supported by connections to the SAP Business Technology Platform (BTP).
Planning
Predictive Features
You can access Smart Insights on models on your live SAP HANA connection. There are some restrictions. For
more information, see Live SAP HANA Restrictions for Smart Insights [page 336].
Note
Smart Insights is supported on acquired and live SAP HANA direct connections only.
Smart Predict
You can use Live SAP HANA data to create predictive models with Smart Predict. You'll get reports and KPIs
with predictive insights that will enhance stories in SAP Analytics Cloud. However, there are some restrictions.
For more information, see Restrictions [page 2040].
Other
Related Information
You can copy an existing Extended HANA Live Connection from your Embedded SAP Analytics Cloud tenant for SuccessFactors to your Enterprise SAP Analytics Cloud tenant.
Prerequisites
• You have enabled the live connection between SAP SuccessFactors and SAP Analytics Cloud. For more
information, see Availability of Live Connection from SAP SuccessFactors to SAP Analytics Cloud.
• You have enabled and configured Stories in People Analytics on your SAP SuccessFactors tenant.
• You have a common Identity Authentication service between SAP SuccessFactors and SAP Analytics
Cloud.
To create Extended HANA Live Connections to an SAP SuccessFactors system using the Copy Connection feature, follow the steps below.
Note
You must be assigned the Story Admin permission in SAP SuccessFactors to be able to configure the live
connection on SAP Analytics Cloud.
Procedure
Note
If the connection already exists in the tenant, an error message is displayed.
Note
Select the Back button to go back and edit the properties (if needed).
Next Steps
Once the connection is set up, you can test the connection by creating a new Story.
Note
• The steps to create a story might differ depending on whether you're working with the Classic Design Experience or the Optimized Design Experience. We recommend using the Optimized Design Experience to benefit from all the latest improvements brought to stories.
• All the features supported in People Analytics are available with this connection.
• The permissions granted to users in SAP SuccessFactors apply to the stories created in SAP Analytics
Cloud.
• There are limitations to the features supported by live SAP SuccessFactors connections to SAP
Analytics Cloud. Planning, predictive, and smart features for instance are not supported. The Learning
connection that allows you to access the Learning schemas is not supported.
For the steps to enable the live connection to SuccessFactors in SAP Analytics Cloud, see SAP Note
3385646 .
Related Information
https://help.sap.com/docs/SAP_ANALYTICS_CLOUD/
8aa651b6a16748379f70b623ae681691/80aa81f307d24f67b51cb5034fadc36d.html
You can create live data connections to SAP S/4HANA systems using the Direct, Tunnel, and Cloud connection
types.
You can build stories based on released CDS views or leverage queries created by you. For a list of released CDS
views, see CDS Views.
Note
Only CDS views containing the @Analytics.query: true annotation will appear in SAP Analytics Cloud. A CDS view with this kind of annotation automatically generates a transient query, with the naming convention 2C<sqlViewName>. For more information, see SAP Note 2595552.
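The naming convention can be illustrated with a short sketch; the view name below is a hypothetical example, and the 2C prefix is the convention stated above:

```python
def transient_query_name(sql_view_name: str) -> str:
    """CDS views annotated with @Analytics.query: true generate a
    transient query named 2C<sqlViewName>."""
    return f"2C{sql_view_name}"

print(transient_query_name("ZMYQUERYVIEW"))  # 2CZMYQUERYVIEW
```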
Direct Connection
Note
To make the direct live connection available to users on the internet, see Direct Live Connections in the
Internet Scenario.
To use this connection type, you must configure Cross-Origin Resource Sharing (CORS) support on your SAP
S/4HANA on-premise system.
Live Data Connection to SAP S/4HANA On-Premise Using a Direct CORS Connection via Unified Connectivity
[page 356]
Tunnel Connection
You can use the Tunnel connection type if your organization wants to expose some of your data to users outside
of your corporate network, without giving them VPN rights.
Example: Giant Bread Company wants to expand their business, so they hire Very Clever Consultants (VCC)
to do market research for them. VCC prepares a story with market analysis. VCC wants to safeguard this data
inside their own firewall, but provide access to their customer. They don't want to give Giant's executives VPN
access to VCC's network, but by providing a tunnel connection, VCC can give access to the report and its data,
without compromising their network.
Live Data Connection to SAP S/4HANA On-Premise Using a Tunnel Connection [page 368]
Note
• Systems on SAP data centers support only SAML connections, while systems on non-SAP data centers
support Basic and SAML connections. A two-digit number in your SAP Analytics Cloud URL, for
example eu10 or us30, indicates a non-SAP data center.
Cloud Connection
Use this connection type if you want to connect to data on an SAP S/4HANA Cloud Edition or SAP Marketing
Cloud system. There are several integration scenarios for creating live data connections:
1. If you want to create stories in SAP Analytics Cloud using data you have in SAP S/4HANA Cloud Edition,
see Live Data Connection to SAP S/4HANA Cloud Edition via OAuth [page 372].
2. If you want to create stories in SAP Analytics Cloud using data you have in SAP S/4HANA Cloud Edition,
and then embed stories in your SAP S/4HANA Cloud Edition system, see Integrating SAP Analytics Cloud.
3. If you want to create stories in SAP Analytics Cloud using data you have in SAP Marketing Cloud, and then
embed stories in your SAP Marketing Cloud system, see Integration with SAP Analytics Cloud (1SO).
For information on supported features and usage restrictions, see SAP Note 2715030.
Related Information
Set up Cross-Origin Resource Sharing (CORS) between your SAP S/4HANA on-premise system and SAP Analytics Cloud to establish a direct live connection.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
• Set up SSO (optional): If you want users to have a single sign-on experience to your data, check that you are using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver. For more information on setting up your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider [page 2895].
• If you have multiple authentication methods configured on your ABAP AS, see Alternative Logon Order.
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Replace <Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client ID. Make sure you are prompted for user credentials, and that after login you get a JSON response.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Cross-Origin Resource Sharing (CORS) is the method you'll use to let your users successfully access live data
in an SAP Analytics Cloud page from their Web browser. Configure CORS on your ABAP AS data source.
Note
If you are using SAP NetWeaver ABAP AS version 7.52 or above, you must apply SAP Note 2531811 or
import ABAP 7.52 SP1 to fix CORS related issues, and then follow the steps below.
Note
For more information on SAP NetWeaver HTTP Allowlists, see Managing HTTP Allowlists.
x-csrf-token
x-sap-cid
authorization
mysapsso2
x-request-with
sap-rewriteurl
sap-url-session-id
content-type
x-csrf-token
sap-rewriteurl
sap-url-session-id
sap-perf-fesrec
sap-system
• Ensure Allow Credentials and Allow Private Network Access are selected.
Allow Private Network Access ensures that your ABAP AS responds with the Access-Control-Allow-Private-Network: true header to Google Chrome and other browsers when they send a CORS preflight request ahead of any private network request for a subresource.
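To sanity-check the result, you can verify that a captured preflight response carries the headers browsers require. This is a minimal sketch: the header names come from CORS and Private Network Access as described above, while the function name and sample values are illustrative:

```python
def preflight_ok(response_headers: dict, sac_origin: str) -> bool:
    """Return True if an OPTIONS preflight response grants the SAP
    Analytics Cloud origin credentialed, private-network access."""
    # HTTP header names are case-insensitive, so normalize before comparing.
    headers = {name.lower(): value for name, value in response_headers.items()}
    return (
        headers.get("access-control-allow-origin") == sac_origin
        and headers.get("access-control-allow-credentials") == "true"
        and headers.get("access-control-allow-private-network") == "true"
    )

# Example: headers captured from a correctly configured ABAP AS.
sample = {
    "Access-Control-Allow-Origin": "https://mytenant.us1.sapanalytics.cloud",
    "Access-Control-Allow-Credentials": "true",
    "Access-Control-Allow-Private-Network": "true",
}
print(preflight_ok(sample, "https://mytenant.us1.sapanalytics.cloud"))  # True
```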
Context
If you've set up SAML 2.0 Single Sign-On (SSO) for SAP Analytics Cloud and your data source system with
the same Identity Provider, you must add a dummy HTML file to authenticate your users and follow the SAML
HTTP redirects.
If you are using User Name and Password or None authentication methods, skip this section.
Procedure
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  DATA:
    html_content TYPE string.

  html_content = '<html><script type="text/javascript">window.close();</script></html>'.
  server->response->set_header_field( name = 'Cache-Control' value = 'no-cache,no-store' ).
  server->response->set_cdata( data = html_content ).
endmethod.
9. Under default_host > sap > bw, right-click ina, then choose New Sub-Element.
10. In Service Name, enter auth, then select Input.
You can see that the HTML file merely closes the dialog. This is needed because SAP Analytics Cloud triggers this URL (/sap/bw/ina/auth). As this URL is SAML-protected, the browser first redirects to your IdP. The IdP then recognizes that the user is already authenticated from SAP Analytics Cloud and has a session. Your browser therefore follows the redirects from the IdP, and finally the dummy HTML content is delivered, which closes the dialog.
Context
Your users' browsers must allow third-party cookies from the ABAP AS domain and pop-ups from the SAP Analytics Cloud domain. This can easily be configured in the browser's settings. As an example, see the steps below for Google Chrome.
Procedure
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
4. Go back to Privacy and security and click Cookies and other site data.
5. Under Sites that can always use cookies add your ABAP AS domain.
Context
Now that you've configured your data source, you can finally create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• None - Using the None authentication option allows you to connect to data source systems that use
SSO that are not based on SAML 2.0. For more information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - Enter a user name and password for your data source system.
• SAML Single Sign On - Select this option for versions of SAP S/4HANA on-premise older than 1909 (7.54), or if you are using the Cloud Connector to connect to live SAP S/4HANA.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
8. If you are connecting to SAP S/4HANA through a front-end server, in the Connection Details section enter
a Target System Alias.
Note
For example, if you use an SAP Fiori Front-end Server (FES) to connect to an SAP S/4HANA system, you must enter the system alias for your SAP S/4HANA back-end. For information on setting the back-end system alias, see Creating BW Systems in the Portal.
9. (Optional) Select Let SAP support user sign in using basic authentication for this connection. By enabling
this feature, support users are granted access to the new live data connection using basic authentication.
Note
Advanced features are not available when the Authentication Method is set to none.
To enable an Advanced Feature, you must allow live on-premise data to securely leave your network.
Results
Once you've created your live data connection, test it by creating a model.
Set up the Cloud Connector between your SAP S/4HANA system and SAP Analytics Cloud to establish a live tunnel connection.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
• Set up SSO (optional): If you want users to have a single sign-on experience to your data, check that you are using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver. For more information on setting up your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider [page 2895].
• If you have multiple authentication methods configured on your ABAP AS, see Alternative Logon Order.
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Replace <Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client ID. Make sure you are prompted for user credentials, and that after login you get a JSON response.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Set up the Cloud Connector between your data source system and SAP Analytics Cloud to establish a live tunnel connection.
Context
Now that you've configured your data source, you can finally create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
• SAML Single Sign On - Select this option if you've completed the necessary prerequisites and steps for SSO outlined in the rest of this article.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
Once you've created your live data connection, test it by creating a model.
You can create a live data connection from SAP Analytics Cloud to your S/4HANA system that uses OAuth 2.0.
Prerequisites
• SAP Analytics Cloud can be hosted either on SAP data centers or on non-SAP data centers (for example, Amazon Web Services (AWS)). Determine which environment SAP Analytics Cloud is hosted on by inspecting your SAP Analytics Cloud URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
• You must use OAuth 2.0 for authentication.
• SAML Single Sign-On (SSO) must be enabled in SAP Analytics Cloud. For more information, see Enable a
Custom SAML Identity Provider [page 2895]. The following settings must be applied:
1. Under Step 3, set the SAML User Mapping to Custom SAML User Mapping. Under Security > Users, the value in the Custom SAML User Mapping column must equal the User Data User Name of the corresponding business user in the SAP S/4HANA system.
• The following steps must be carried out by a user who logs on to both the SAP S/4HANA and SAP
Analytics Cloud system via the SAML Identity Provider. For the steps in the SAP Analytics Cloud system,
the BI Admin role is required. For the steps in the SAP S/4 HANA system, the Administrator role
(role template ID SAP_BR_ADMINISTRATOR) is required. For more information, see SAP Analytics Cloud
Integration.
• To display custom analytical queries you must apply SAP Note 2710858.
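The data-center rule in the first prerequisite above (single-digit region number means an SAP data center, two digits a non-SAP data center) can be sketched as a small heuristic; the function name and tenant URLs are illustrative examples:

```python
import re

def data_center_type(url: str) -> str:
    """Classify an SAP Analytics Cloud tenant URL by its region suffix.

    A single-digit region number (e.g. us1, jp1) indicates an SAP data
    center; a two-digit number (e.g. eu10, us30) indicates a non-SAP
    data center such as AWS.
    """
    # Look for a region token like ".us1." or ".eu10." in the host name.
    match = re.search(r"\.([a-z]{2})(\d+)\.", url)
    if not match:
        return "unknown"
    return "SAP data center" if len(match.group(2)) == 1 else "non-SAP data center"

print(data_center_type("https://mytenant.us1.sapanalytics.cloud"))   # SAP data center
print(data_center_type("https://mytenant.eu10.sapanalytics.cloud"))  # non-SAP data center
```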
Context
The steps below show how to connect SAP Analytics Cloud to SAP S/4HANA Cloud Edition using OAuth 2.0.
Note
With OAuth 2.0, you do not need to configure SAP Analytics Cloud to use the same SAML identity provider
(IdP) as you use for SAP S/4HANA cloud edition.
1. If you want to create stories in SAP Analytics Cloud using data you have in SAP S/4HANA Cloud Edition,
and then embed stories in your SAP S/4HANA Cloud Edition system, see Integrating SAP Analytics Cloud.
2. If you want to create stories in SAP Analytics Cloud using data you have in SAP Marketing Cloud, and then
embed stories in your SAP Marketing Cloud system, see Integration with SAP Analytics Cloud (1SO).
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP S/4HANA system before adding a
language code. If the language code you enter is invalid, SAP Analytics Cloud will default to the
language specified by your system metadata.
To find the Authorization URL and Token URL, in SAP Analytics Cloud, from the side navigation, choose
You can find your tenant ID in the SAP Analytics Cloud URL. For example:
Note
e. Under Inbound Communication > User Name > Authentication Method, select the user you created in Step 3g, with Authentication with OAuth 2.0, using the input help of the User Name field.
f. Set all Outbound Services to inactive.
g. Save the communication arrangement.
Note
You can connect multiple SAP Analytics Cloud tenants to one SAP S/4HANA Cloud system.
Results
The live data connection is saved, and users will have access to SAP Analytics Cloud.
Note
Users must have Read or Maintain privileges on the Connection permission in order to view models and
stories created from this connection. For more information, see Permissions [page 2844].
The connection is not tested until you create a model. For more information, see Create a New Model [page
645].
Next Steps
When you log on to SAP Analytics Cloud, you are notified if the service provider certificate is about to expire.
For information on how to renew the certificate, see Renew the SAP Analytics Cloud SAML Signing Certificate
[page 2906].
Related Information
You can create live data connections to SAP BW or SAP BW/4HANA systems using the Direct and Tunnel
connection types.
Direct Connection
Note
To make the direct live connection available to users on the internet, see Direct Live Connections in the
Internet Scenario.
• You will have direct connectivity, with no additional devices required. Your browser connects directly to SAP Analytics Cloud, your IdP, and back-end data sources by securely unlocking the same-origin policy.
• Because there are no additional devices, a direct connection enables better performance.
To use this connection type, you must configure Cross-Origin Resource Sharing (CORS) support on your SAP
BW system.
Live Data Connection to SAP BW Using a Direct CORS Connection via Unified Connectivity [page 378]
Live Data Connection to SAP BW Using a Direct CORS Connection via ICM Script [page 390]
Tunnel Connection
You can use the Tunnel connection type if your organization wants to expose some of your data to users outside
of your corporate network, without giving them VPN rights.
Example: Giant Bread Company wants to expand their business, so they hire Very Clever Consultants (VCC)
to do market research for them. VCC prepares a story with market analysis. VCC wants to safeguard this data
inside their own firewall, but provide access to their customer. They don't want to give Giant's executives VPN
access to VCC's network, but by providing a tunnel connection, VCC can give access to the report and its data,
without compromising their network.
Note
• Systems on SAP data centers support only SAML connections, while systems on non-SAP data centers
support Basic and SAML connections. A two-digit number in your SAP Analytics Cloud URL, for
example eu10 or us30, indicates a non-SAP data center.
• This connection type is not supported in SAP Analysis for Microsoft Office.
• SAP Analytics Cloud Support Matrix for Live Connectivity to SAP NetWeaver BW and SAP BW/4HANA
[page 403]
• For information on usage restrictions, see SAP Note 2788384.
• For information on BW version dependencies, see SAP Note 2715030.
Note
HTTP/2 is supported for SAP BW data sources. This backward-compatible protocol optimizes network and server resource usage to increase performance.
Measurements show that performance gains can be expected when there are long running queries with
multiple widgets.
Related Information
Set up Cross-Origin Resource Sharing (CORS) between your SAP BW or SAP BW/4HANA system and SAP Analytics Cloud to establish a direct live connection.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
Prerequisites
• Check that you are using a supported version of SAP BW. For more information, see System Requirements
and Technical Prerequisites [page 2723].
Note
Additional correction notes must be applied for some versions of SAP BW. For more information, see
SAP Note 2541557
• If your SAP NetWeaver ABAP AS version does not meet the specifications in SAP Note 2547381, stop
here and follow the steps in this article instead: Live Data Connection to SAP BW Using a Direct CORS
Connection via ICM Script [page 390]
• Configure SSL on your SAP NetWeaver ABAP AS. For more information, see Configuring SAP NetWeaver
AS for ABAP to Support SSL, and SAP Note 510007.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
• Set up SSO (optional): If you want users to have a single sign-on experience to your data, check that you are using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver. For more information on setting up your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider [page 2895].
• If you have multiple authentication methods configured on your ABAP AS, see Alternative Logon Order.
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Replace <Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client ID. Make sure you are prompted for user credentials, and that after login you get a JSON response.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Cross-Origin Resource Sharing (CORS) is the method you'll use to let your users successfully access live data
in an SAP Analytics Cloud page from their Web browser. Configure CORS on your ABAP AS data source.
Note
If you are using SAP NetWeaver ABAP AS version 7.52 or above, you must apply SAP Note 2531811 or
import ABAP 7.52 SP1 to fix CORS related issues, and then follow the steps below.
Note
For more information on SAP NetWeaver HTTP Allowlists, see Managing HTTP Allowlists.
x-csrf-token
x-sap-cid
authorization
mysapsso2
x-request-with
sap-rewriteurl
sap-url-session-id
content-type
x-csrf-token
sap-rewriteurl
sap-url-session-id
sap-perf-fesrec
sap-system
• Ensure Allow Credentials and Allow Private Network Access are selected.
Allow Private Network Access ensures that your ABAP AS responds with the Access-Control-Allow-Private-Network: true header to Google Chrome and other browsers when they send a CORS preflight request ahead of any private network request for a subresource.
Context
If you've set up SAML 2.0 Single Sign-On (SSO) for SAP Analytics Cloud and your data source system with
the same Identity Provider, you must add a dummy HTML file to authenticate your users and follow the SAML
HTTP redirects.
If you are using User Name and Password or None authentication methods, skip this section.
Procedure
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  DATA:
    html_content TYPE string.

  html_content = '<html><script type="text/javascript">window.close();</script></html>'.
  server->response->set_header_field( name = 'Cache-Control' value = 'no-cache,no-store' ).
  server->response->set_cdata( data = html_content ).
endmethod.
9. Under default_host > sap > bw, right-click ina, then choose New Sub-Element.
10. In Service Name, enter auth, then select Input.
You can see that the HTML file merely closes the dialog. This is needed because SAP Analytics Cloud triggers this URL (/sap/bw/ina/auth). As this URL is SAML-protected, the browser first redirects to your IdP. The IdP then recognizes that the user is already authenticated from SAP Analytics Cloud and has a session. Your browser therefore follows the redirects from the IdP, and finally the dummy HTML content is delivered, which closes the dialog.
Context
Your users' browsers must allow third-party cookies from the ABAP AS domain and pop-ups from the SAP Analytics Cloud domain. This can easily be configured in the browser's settings. As an example, see the steps below for Google Chrome.
Procedure
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
4. Go back to Privacy and security and click Cookies and other site data.
5. Under Sites that can always use cookies add your ABAP AS domain.
Context
Now that you've configured your data source, you can finally create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• None - Using the None authentication option allows you to connect to data source systems that use
SSO that are not based on SAML 2.0. For more information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
8. (Optional) Select Let SAP support user sign in using basic authentication for this connection. By enabling
this feature, support users are granted access to the new live data connection using basic authentication.
Note
Advanced features are not available when the Authentication Method is set to none.
To enable an Advanced Feature, you must allow live on-premise data to securely leave your network.
9. Select OK.
Results
Once you've created your live data connection, test it by creating a model.
Use an ICM script to set up Cross-Origin Resource Sharing (CORS) between SAP Analytics Cloud and your SAP BW system running an SAP NetWeaver ABAP Application Server (AS) version lower than 7.52.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
Prerequisites
• Check that you are using a supported version of SAP BW. For more information, see System Requirements
and Technical Prerequisites [page 2723].
Additional correction notes must be applied for some versions of SAP BW. For more information, see
SAP Note 2541557
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://
<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Make sure
you are prompted for user credentials, and that after login you receive a JSON response. Replace
<Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client
ID.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Note
If your SAP BW landscape is running behind SAP Web Dispatcher, we recommend that you apply these
CORS changes directly to the NetWeaver ABAP application server if possible.
1. Create a file on your ABAP server to contain the CORS rewrite rules. For example, /usr/sap/<SID>/SYS/
profile/<cors_rewrite>.
2. Adjust the ICM parameter to point to the file you created in step 1.
You can find this parameter in the SAP profile parameter settings for your ABAP server.
For example,
icm/HTTP/mod_0 = PREFIX=/,FILE=<Path_To_CORS_Rewrite_File>
Note
Replace <Path_To_CORS_Rewrite_File> with the path to the CORS rewrite file you created.
if %{HEADER:isSACOriginAllowed} = true
setHeader isSACOriginAllowed false
Replace <HOSTNAME> with your SAP Analytics Cloud host. For example,
mytenant.us1.sapanalytics.com.
Note
Multiple hosts can be added to the rewrite file. For more information, see How to Enable CORS on SAP
NetWeaver Platform.
Context
If you've set up SAML 2.0 Single Sign-On (SSO) for SAP Analytics Cloud and your data source system with
the same Identity Provider, you must add a dummy HTML file to authenticate your users and follow the SAML
HTTP redirects.
If you are using User Name and Password or None authentication methods, skip this section.
Procedure
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  DATA:
    html_content TYPE string.
  html_content = '<html><script type="text/javascript">window.close();</script></html>'.
  server->response->set_header_field( name = 'Cache-Control' value = 'no-cache,no-store' ).
  server->response->set_cdata( data = html_content ).
endmethod.
9. Under default_host > sap > bw, right-click ina, then choose New Sub-Element.
10. In Service Name, enter auth, then select Input.
Context
Your users' browsers must allow third-party cookies from the ABAP AS domain and pop-ups from the SAP
Analytics Cloud domain. This can be configured in the browser's settings. As an example, see the steps
below for Google Chrome.
Procedure
2. Under Privacy and security, click Site Settings > Pop-ups and redirects.
3. In the Allow section, add the domains relevant for your SAP Analytics Cloud tenant.
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
4. Go back to Privacy and security and click Cookies and other site data.
5. Under Sites that can always use cookies add your ABAP AS domain.
Context
Now that you've configured your data source, you can create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• None - Using the None authentication option allows you to connect to data source systems that use
SSO that are not based on SAML 2.0. For more information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
• SAML Single Sign On - Select this option if you've completed the necessary prerequisites and steps for
SSO outlined in the rest of this article.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
8. (Optional) Select Let SAP support user sign in using basic authentication for this connection. By enabling
this feature, support users are granted access to the new live data connection using basic authentication.
Note
Advanced features are not available when the Authentication Method is set to None.
To enable an Advanced Feature, you must allow live on-premise data to securely leave your network.
9. Select OK.
Results
Once you've created your live data connection, test it by creating a model.
Set up the Cloud Connector between your SAP BW or SAP BW/4HANA system and SAP Analytics Cloud to
establish a live tunnel connection.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
Prerequisites
• Check that you are using a supported version of SAP BW. For more information, see System Requirements
and Technical Prerequisites [page 2723].
Note
Additional correction notes must be applied for some versions of SAP BW. For more information, see
SAP Note 2541557
• Configure SSL on your SAP NetWeaver ABAP AS. For more information, see Configuring SAP NetWeaver
AS for ABAP to Support SSL, and SAP Note 510007.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
• Set up SSO (optional): If you want users to have a single sign-on experience to your data, check that you are
using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver. For more information on
setting up your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider
[page 2895].
• If you have multiple authentication methods configured on your ABAP AS, see Alternative Logon Order.
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://
<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Make sure
you are prompted for user credentials, and that after login you receive a JSON response. Replace
<Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client
ID.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Set up the Cloud Connector between your data source system and SAP Analytics Cloud to establish a live
tunnel connection.
Context
Now that you've configured your data source, you can create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
• SAML Single Sign On - Select this option if you've completed the necessary prerequisites and steps for
SSO outlined in the rest of this article.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
Once you've created your live data connection, test it by creating a model.
See what SAP NetWeaver BW and SAP BW/4HANA query functionality is compatible with SAP Analytics Cloud.
• Characteristics: Yes.
• Restricted Key Figures: Yes. When using data from SAP BW, creating a restricted measure on another
restricted measure (referred to as a key figure in SAP BW) that uses the same dimension results in incorrect
data in SAP Analytics Cloud. A further restriction is not possible because SAP BW treats restrictions as OR
operations. In most cases restricted measures are intended to get the result as an AND operation.
• BEx Query Filter (defined under Default Values): Limited Support. Range filters are not supported; use fixed
filters instead.
• Custom Characteristic Structure: Yes. See the restrictions below for "Support for 2 structures from BEx
Query" in the Explorer.
• Hierarchical Custom Key Figure Structure: Yes. Hierarchical measure structures will be displayed as flattened
measures in chart visualizations.
• Hierarchical Custom Characteristic Structure: Yes. See the restrictions below for "Support for 2 structures
from BEx Query" in the Explorer.
• Support for 2 structures from BEx Query: Yes. The Explorer does not support SAP BW queries with a second
structure, but you can visualize such SAP BW queries in table and chart widgets. Search to Insight does not
support SAP BW queries with two characteristic structures.
• Compounded Characteristics: Limited Support. Key display Mix Compounded has limited support.
• Support for Characteristic Data Type TIMS: Limited Support. Chart, table, and filter widgets do not support
sorting based on the data type TIMS.
• Support for Key Figures Data Type DATS: Yes. Key figures of type DATS and TIMS are relevant only for table
visualizations.
• Support for Key Figures Data Type TIMS: Yes. Key figures of type DATS and TIMS are relevant only for table
visualizations.
• Variable Processing Type: Replacement Path: Yes. Variables with replacement from the query result are not
supported when the second query also has variables that are configured as ready for input.
• Support for manual entry of values: Yes. Only manual entry by key value is supported.
• Merge / Unmerge Variables: Yes. Merge is based upon the technical IDs of the prompts.
• Support for validating values before executing the report: Yes. Syntactic validation happens upon submission
of selected prompt values to SAP BW.
• List of Values: Sort values based on Key Values: No. Sorting based on user preference is not supported.
• List of Values: Sort values based on Description Values: No. Sorting based on user preference is not
supported.
• Cell Editor Calculation: Yes. Cell Editor calculated values will be exposed in SAP Analytics Cloud.
• Read Mode for characteristics in BEx Query (result values, filter values): Yes. The read mode for filter values
can optionally be changed in the Filter dialog of SAP Analytics Cloud (that is, the setting can be either the
booked values or master data).
• Read Mode for characteristics in InfoObject: Yes. The read mode for filter values can optionally be changed in
the Filter dialog of SAP Analytics Cloud (that is, the setting can be either the booked values or master data).
• Display Options for characteristics (Key, Key & Text, Text): Yes. Users can switch between ID/Description
during story design. The ability to switch between short, medium, and long presentations of the description is
not supported.
• Support for time-dependent Hierarchy Structures: Limited Support. Refer to SAP Note 3081404.
• Support for Text of External Characteristics in Hierarchies: Limited Support. If the base characteristic of the
hierarchy doesn't have text, but the other external characteristics do have text, then those texts won't be
displayed correctly.
• Expand to Level Functionality (BEx Query Designer): Yes. The Expand to Level setting defined in the BEx
query is supported for tables only.
• Expand to Level Functionality (defined in the BI client): Yes. Drill level capability is offered for tables only, not
for chart visualizations.
• Display as Hierarchy (Universal Display Hierarchy): Yes. Not supported by SAP BW 7.4 release versions.
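The OR-versus-AND behavior called out for restricted key figures above can be illustrated in plain Python. This is illustrative only, not SAP BW code, and the sample rows are invented:

```python
# Sample fact rows (invented for illustration)
rows = [
    {"region": "EMEA", "year": 2023, "revenue": 10},
    {"region": "EMEA", "year": 2024, "revenue": 20},
    {"region": "APJ",  "year": 2024, "revenue": 30},
]

def total(data, predicate):
    # Sum revenue over the rows that pass the restriction predicate.
    return sum(r["revenue"] for r in data if predicate(r))

# Intended semantics: restrict to EMEA AND 2024
and_total = total(rows, lambda r: r["region"] == "EMEA" and r["year"] == 2024)

# What combining the restrictions with OR yields instead: EMEA OR 2024
or_total = total(rows, lambda r: r["region"] == "EMEA" or r["year"] == 2024)

print(and_total)  # 20
print(or_total)   # 60
```

Because SAP BW combines a restriction on an already-restricted key figure as an OR, the result resembles `or_total` rather than the `and_total` that restricted measures usually intend.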
You can create live data connections to SAP Datasphere systems, and access any SAP Datasphere Analytical
Dataset, Analytic Model or Perspective within SAP Analytics Cloud.
Prerequisites
Before setting up your live connection in SAP Analytics Cloud, you first need to add the URL of your SAP
Analytics Cloud tenant as a trusted origin in your SAP Datasphere system.
5. Click Save.
If you're using Chrome as a browser and want to use the incognito mode, make sure that all cookies are
allowed:
Context
If you have an SAP Datasphere tenant, you can leverage and combine the data management and modeling
capabilities of SAP Datasphere with the business intelligence and other features in SAP Analytics Cloud.
Note
There are limitations to the features supported by live SAP Datasphere connections to SAP Analytics
Cloud. Planning and smart features for instance are not supported. For a full list of the limitations to live
SAP Datasphere connections to SAP Analytics Cloud, refer to SAP Note 2832606 .
By creating a live connection in the Connections area to your SAP Datasphere tenant, your content creators
can build stories directly against the data that is modeled in SAP Datasphere. Live connections between SAP
Analytics Cloud and SAP Datasphere can be established across tenants and data centers.
Tip
Analytical Datasets must be exposed for consumption first in SAP Datasphere. If you don't see the
Analytical Dataset you're looking for after you've set up a live connection, check that the Analytical Dataset
is deployed and set for consumption in your SAP Datasphere system.
Procedure
For example, mydatawarehouse.eu10.hcs.cloud.sap and 443. You can copy the host name from the
browser's address bar when logged on to your SAP Datasphere tenant.
Note that SAML Single Sign On is preselected as the authentication method allowed. A valid SAP Datasphere
user account in that tenant's Identity Provider is required to log on to the data source.
5. Click OK and you'll be prompted to log on to SAP Datasphere.
6. Enter a valid username and password for SAP Datasphere and click OK to set up the connection.
Next Steps
Once the connection is set up, you can test the connection by creating a new Story.
• The steps below might differ depending on whether you’re working with the Classic Design Experience,
or the Optimized Design Experience. We recommend using the Optimized Design Experience to benefit
from all the latest improvements brought to stories.
• To be able to consume SAP Datasphere data in stories, make sure that you have the DW Consumer
role assigned in SAP Datasphere. This role is required for SAP Datasphere data consumption in stories
in SAP Analytics Cloud.
Restriction
Analytic models cannot be consumed using the Classic Design Experience. If you want to work with analytic
models, switch to the Optimized Design Experience.
Note
If there is only one connection then it is selected automatically and this dialog is not displayed.
Restriction
Analytic Models aren't available in stories created using the Classic Design Experience.
The data should load successfully. Add a simple chart with measures and dimensions to check that you see the
data from SAP Datasphere.
• In the toolbar, click Add Data in the left panel, or Add New Data in the main toolbar.
• In the toolbar, click to add a chart or to add a table.
4. Choose Data from an existing dataset or model.
5. In the left panel of the Select Dataset or Model dialog, select an SAP Datasphere connection.
6. If you’re asked to log in to SAP Datasphere, enter your credentials.
Note
Results only display models and datasets that are exposed for consumption. Models, datasets or
perspectives created before the Expose for Consumption option was introduced in SAP Datasphere
must be redeployed for them to appear in the search results.
You can create live data connections to SAP Business Planning and Consolidation, version for SAP NetWeaver
(SAP BPC for NetWeaver) and SAP Business Planning and Consolidation, version for SAP BW/4HANA (SAP
BPC for BW/4HANA) embedded configuration systems using the Direct and Tunnel connection types.
License
To write data back to an SAP BPC embedded configuration model from SAP Analytics Cloud, you need a
planning license (minimum SAP Analytics Cloud for planning, standard edition).
For more information, refer to Features by License Type for Planning Models [page 68].
Direct Connection
Note
To make the direct live connection available to users on the internet, see Direct Live Connections in the
Internet Scenario.
To use this connection type, you must configure Cross-Origin Resource Sharing (CORS) support on your SAP
BW system.
Live Data Connection to SAP BPC Embedded Using a Direct CORS Connection via Unified Connectivity [page
417]
Live Data Connection to SAP BPC Embedded Using a Direct CORS Connection via ICM Script [page 429]
You can use the Tunnel connection type if your organization wants to expose some of your data to users outside
of your corporate network, without giving them VPN rights.
Example: Giant Bread Company wants to expand their business, so they hire Very Clever Consultants (VCC)
to do market research for them. VCC prepares a story with market analysis. VCC wants to safeguard this data
inside their own firewall, but provide access to their customer. They don't want to give Giant's executives VPN
access to VCC's network, but by providing a tunnel connection, VCC can give access to the report and its data,
without compromising their network.
Note
• Systems on SAP data centers support only SAML connections, while systems on non-SAP data centers
support Basic and SAML connections. A two-digit number in your SAP Analytics Cloud URL, for
example eu10 or us30, indicates a non-SAP data center.
• Tunnel connection types are not supported on mobile devices in this release.
• This connection type is not supported in SAP Analysis for Microsoft Office.
• Currently, this connection type is only supported in the Cloud Foundry environment.
For BPC specific feature restrictions in a live connection, see SAP note 2969947 .
In addition, there are some usage restrictions and minimum version requirements on the SAP BW side, which
equally apply to the embedded configuration of SAP BPC:
• For details of the support of BW query functionality, please refer to SAP Analytics Cloud Support Matrix for
Live Connectivity to SAP NetWeaver BW and SAP BW/4HANA [page 403]
• For information on usage restrictions to SAP BW, see SAP note 2788384 .
• For information on BW reporting features and their version requirements, see SAP note 2715030 .
Note
HTTP/2 is supported for SAP BW data sources. This backward-compatible protocol optimizes network
and server resource usage to increase performance.
Measurements show that performance gains can be expected for long-running queries with multiple
widgets.
Related Information
Set up cross-origin resource sharing (CORS) between your SAP BPC embedded system and SAP Analytics
Cloud to establish a direct live connection.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
Prerequisites
• Check that you are using a supported version of SAP BPC embedded configuration. For more information,
see System Requirements and Technical Prerequisites [page 2723].
Additional correction notes must be applied for some versions of SAP BW. For more information, see
SAP Note 2541557
• If your SAP NetWeaver ABAP AS version does not meet the specifications in SAP Note 2547381, stop here
and follow the steps in this article instead: Live Data Connection to SAP BPC Embedded Using a Direct
CORS Connection via ICM Script [page 429]
• Configure SSL on your SAP NetWeaver ABAP AS. For more information, see Configuring SAP NetWeaver
AS for ABAP to Support SSL, and SAP Note 510007.
• Configure cross-site cookies: To ensure that Chrome and other browsers allow cross-site access to your
SAP on-premise data source cookies from SAP Analytics Cloud, you must configure your SAP on-premise
data source to issue cookies with specific attributes. Without these settings, user authentication to your
live data connections will fail, and Story visualizations based on these connections will not render.
For steps on how to do this, see SameSite Cookie Configuration for Live Data Connections [page 266].
• Set up SSO (optional): If you want users to have a single sign-on experience to your data, check that you are
using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver. For more information on
setting up your identity provider in SAP Analytics Cloud, see Enable a Custom SAML Identity Provider
[page 2895].
• If you have multiple authentication methods configured on your ABAP AS, see Alternative Logon Order.
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://
<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Make sure
you are prompted for user credentials, and that after login you receive a JSON response. Replace
<Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client
ID.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Cross-Origin Resource Sharing (CORS) is the method you'll use to let your users successfully access live data
in an SAP Analytics Cloud page from their Web browser. Configure CORS on your ABAP AS data source.
Note
If you are using SAP NetWeaver ABAP AS version 7.52 or above, you must apply SAP Note 2531811 or
import ABAP 7.52 SP1 to fix CORS related issues, and then follow the steps below.
Note
For more information on SAP NetWeaver HTTP Allowlists, see Managing HTTP Allowlists.
x-csrf-token
x-sap-cid
authorization
mysapsso2
x-request-with
sap-rewriteurl
sap-url-session-id
content-type
x-csrf-token
sap-rewriteurl
sap-url-session-id
sap-perf-fesrec
sap-system
• Ensure Allow Credentials and Allow Private Network Access are selected.
Allow Private Network Access ensures that your ABAP AS responds with the Access-Control-
Allow-Private-Network: true header to Google Chrome and other browsers when they send
a CORS preflight request ahead of any private network request for a subresource.
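As a sketch of what a correct preflight response must contain for this setup (the header names come from this section; the checking logic itself is illustrative, not an SAP tool):

```python
# Required response headers for a credentialed, private-network CORS preflight,
# per the settings described above (values are the expected literal strings).
REQUIRED = {
    "access-control-allow-credentials": "true",
    "access-control-allow-private-network": "true",
}

def preflight_ok(response_headers: dict) -> bool:
    # HTTP header names are case-insensitive, so normalize before comparing.
    lowered = {k.lower(): v for k, v in response_headers.items()}
    return all(lowered.get(name) == value for name, value in REQUIRED.items())

print(preflight_ok({
    "Access-Control-Allow-Credentials": "true",
    "Access-Control-Allow-Private-Network": "true",
}))  # True
print(preflight_ok({"Access-Control-Allow-Credentials": "true"}))  # False
```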
Context
If you've set up SAML 2.0 Single Sign-On (SSO) for SAP Analytics Cloud and your data source system with
the same Identity Provider, you must add a dummy HTML file to authenticate your users and follow the SAML
HTTP redirects.
If you are using User Name and Password or None authentication methods, skip this section.
Procedure
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  DATA:
    html_content TYPE string.
  html_content = '<html><script type="text/javascript">window.close();</script></html>'.
  server->response->set_header_field( name = 'Cache-Control' value = 'no-cache,no-store' ).
  server->response->set_cdata( data = html_content ).
endmethod.
9. Under default_host > sap > bw, right-click ina, then choose New Sub-Element.
10. In Service Name, enter auth, then select Input.
The HTML file merely closes the dialog. This is needed because SAP Analytics Cloud triggers this URL
(/sap/bw/ina/auth). As this URL is SAML-protected, the browser first redirects to your IdP. The
IdP then recognizes that the user is already authenticated from SAP Analytics Cloud and has a session.
The browser therefore follows the redirects from the IdP, and finally the dummy HTML content is delivered,
which closes the dialog.
Context
Your users' browsers must allow third-party cookies from the ABAP AS domain and pop-ups from the SAP
Analytics Cloud domain. This can be configured in the browser's settings. As an example, see the steps
below for Google Chrome.
Procedure
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
4. Go back to Privacy and security and click Cookies and other site data.
5. Under Sites that can always use cookies add your ABAP AS domain.
Context
Now that you've configured your data source, you can create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• None - Using the None authentication option allows you to connect to data source systems that use
SSO that are not based on SAML 2.0. For more information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
8. (Optional) Select Let SAP support user sign in using basic authentication for this connection. By enabling
this feature, support users are granted access to the new live data connection using basic authentication.
Note
Advanced features are not available when the Authentication Method is set to None.
To enable an Advanced Feature, you must allow live on-premise data to securely leave your network.
9. Select OK.
Results
Once you've created your live data connection, test it by creating a model.
Use an ICM script to set up cross-origin resource sharing (CORS) for your SAP BPC 10.1 NW system
running an SAP NetWeaver ABAP Application Server (AS) version lower than 7.52.
• Users with Create, Read, Update, Delete and Maintain permissions for Connections.
• Users with Execute permission for Other Data Sources.
• Admin, Application Creator, BI Content Creator, BI Admin, and Planner Reporter standard application roles.
• Setting up a live connection requires working with the SAP Analytics Cloud system owner and different
IT and application stakeholders within your organization. Most configuration steps are done on your SAP
NetWeaver ABAP Application Server (AS) before creating the connection in your SAP Analytics Cloud
tenant.
Prerequisites
• Check that you are using a supported version of SAP BPC embedded configuration. For more information,
see System Requirements and Technical Prerequisites [page 2723].
Additional correction notes must be applied for some versions of SAP BW. For more information, see
SAP Note 2541557
Context
SAP Information Access (InA) is a REST HTTP-based protocol used by SAP Analytics Cloud to query your data
sources in real time. Confirm that your InA package is enabled and services are running on the ABAP AS for
your data source.
Procedure
1. To check if the InA package is enabled, open the following URL in your browser: https://
<Your_ABAP_Server>/sap/bw/ina/GetServerInfo?sap-client=<Your_Client_ID>. Make sure
you are prompted for user credentials, and that after login you receive a JSON response. Replace
<Your_ABAP_Server> with your ABAP system host, and <Your_Client_ID> with your SAP BW client
ID.
2. Check that the required Information Access Services are active in your SAP BW/4HANA or SAP BW
system.
a. Access your system using SAP Logon.
b. Enter transaction code: SICF.
c. Enter the Service Path: /sap/bw/ina and then select Execute.
The following dialog will appear:
BatchProcessing
GetCatalog
GetResponse
GetServerInfo
Logoff
ValueHelp
Context
Note
If your SAP BW landscape is running behind SAP Web Dispatcher, we recommend that you apply these
CORS changes directly to the NetWeaver ABAP application server if possible.
1. Create a file on your ABAP server to contain the CORS rewrite rules. For example, /usr/sap/<SID>/SYS/
profile/<cors_rewrite>.
2. Adjust the ICM parameter to point to the file you created in step 1.
You can find this parameter in the SAP profile parameter settings for your ABAP server.
For example,
icm/HTTP/mod_0 = PREFIX=/,FILE=<Path_To_CORS_Rewrite_File>
Note
Replace <Path_To_CORS_Rewrite_File> with the path to the CORS rewrite file you created.
if %{HEADER:isSACOriginAllowed} = true
setHeader isSACOriginAllowed false
Replace <HOSTNAME> with your SAP Analytics Cloud host. For example,
mytenant.us1.sapanalytics.com.
Note
Multiple hosts can be added to the rewrite file. For more information, see How to Enable CORS on SAP
NetWeaver Platform.
Context
If you've set up SAML 2.0 Single Sign-On (SSO) for SAP Analytics Cloud and your data source system with
the same Identity Provider, you must add a dummy HTML file to authenticate your users and follow the SAML
HTTP redirects.
If you are using User Name and Password or None authentication methods, skip this section.
Procedure
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  DATA:
    html_content TYPE string.
  html_content = '<html><script type="text/javascript">window.close();</script></html>'.
  server->response->set_header_field( name = 'Cache-Control' value = 'no-cache,no-store' ).
  server->response->set_cdata( data = html_content ).
endmethod.
9. Under default_host > sap > bw, right-click ina, then choose New Sub-Element.
10. In Service Name, enter auth, then select Input.
Context
Your users' browsers must allow third-party cookies from the ABAP AS domain and pop-ups from the SAP
Analytics Cloud domain. This can be configured in the browser's settings. As an example, see the steps
below for Google Chrome.
Procedure
2. Under Privacy and security, click Site Settings > Pop-ups and redirects.
3. In the Allow section, add the domains relevant for your SAP Analytics Cloud tenant.
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
4. Go back to Privacy and security and click Cookies and other site data.
5. Under Sites that can always use cookies, add your ABAP AS domain.
Context
Now that you've configured your data source, you can finally create the live connection in SAP Analytics Cloud.
Procedure
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your data source system before adding a language
code. If the language code you enter is invalid, SAP Analytics Cloud will default to the language
specified by your system metadata.
• None - Using the None authentication option allows you to connect to data source systems that use
SSO methods not based on SAML 2.0. For more information, see Using the 'None' Authentication
Option [page 454].
• User Name and Password - Enter a user name and password for your data source system. Only the
user whose credentials you added will have access to the live data connection.
• SAML Single Sign On - Select this option if you've completed the necessary prerequisites and steps for
SSO outlined in the rest of this article.
Note
To enable single sign-on for the mobile app, see the "Cloud Connector-based Mobile Single Sign-On"
topic in the SAP Analytics Cloud Mobile Administration Guide.
8. (Optional) Select Let SAP support user sign in using basic authentication for this connection. By enabling
this feature, support users are granted access to the new live data connection using basic authentication.
Note
Advanced features are not available when the Authentication Method is set to None.
To enable an Advanced Feature, you must allow live on-premise data to securely leave your network.
Results
Once you've created your live data connection, test it by creating a model.
You can create live data connections to SAP BusinessObjects Business Intelligence (BI) Universes and Web
Intelligence documents using the Direct connection type.
Prerequisites
For more information, see System Requirements and Technical Prerequisites [page 2723].
Direct Connection
Note
To make the direct live connection available to users on the internet, see Direct Live Connections in the
Internet Scenario.
Note
The SAP BusinessObjects Live Data Connect component must be deployed on your server. For more
information, see the SAP BusinessObjects Live Data Connect Installation and Security Guide available here.
Note
To prevent errors in your charts associated with session timeouts, we recommend using the latest version
of SAP BusinessObjects Business Intelligence (BI) Universes available on SMP.
Live Data Connection to SAP Universes Using a Direct Connection [page 439]
Live Data Connection to SAP Universes Using a Direct Connection and SSO [page 441]
Related Information
You can create a direct live data connection to SAP BusinessObjects Business Intelligence (BI) Universes.
Prerequisites
Caution
As of Google Chrome version 80, Chrome restricts cookies to first-party access by default, and
requires you to explicitly mark cookies for access in third-party, or cross-site, contexts.
To ensure that Chrome and other browsers allow cross-site access to your SAP on-premise data
source cookies from SAP Analytics Cloud, you must configure your SAP on-premise data source to
issue cookies with specific attributes. Without these settings, user authentication to your live data
connections will fail, and Story visualizations based on these connections will not render.
For details, see SameSite Cookie Configuration for Live Data Connections [page 266].
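In practice, "cookies with specific attributes" means the data source's session cookies must carry SameSite=None together with Secure, so that Chrome 80+ will still send them in a cross-site (third-party) context. A small illustrative sketch of what such a Set-Cookie value looks like (the cookie name is a placeholder, and real configuration is done on the data source as described in the linked topic, not in application code):

```python
def cross_site_cookie(name: str, value: str) -> str:
    """Build a Set-Cookie header value usable in a cross-site context.

    SameSite=None opts the cookie in to third-party contexts; Secure is
    mandatory whenever SameSite=None is used, so the cookie only travels
    over HTTPS.
    """
    return f"{name}={value}; SameSite=None; Secure"
```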
Make sure that you have the following elements in place before installing the SAP BusinessObjects Live Data
Connect component for SAP Analytics Cloud:
• An SAP BusinessObjects Business Intelligence 4.2 SP9, 4.3 SP1 or higher system installed.
• An SAP BusinessObjects Web Intelligence license
• Microsoft Visual C++ 2015 Redistributable Packages.
• A signed SSL certificate in JKS format for the machine running SAP BusinessObjects Live Data Connect.
• One of the supported operating systems listed below:
• Windows Server 2012 R2
• Windows Server 2016
• Windows Server 2019
• SuSE SLES 12 x86_64
• SuSE SLES 15 x86_64
• RedHat EL 7.7 x86_64
• RedHat EL 8 x86_64
• Oracle Linux 7.2 (RHCK) (UEK)
• Oracle Linux 7.5 (RHCK) (UEK)
• Oracle Linux 7.7 (RHCK) (UEK)
• You must configure CORS to work with the SAP BusinessObjects Live Data Connect component. For more
information, see Configuring Cross-Origin Resource Sharing (CORS).
With the 3.0 release, the Java Virtual Machine, the application server and the .war file are shipped as a
standalone executable package.
You can leverage your SAP BusinessObjects Business Intelligence system and get access to SAP Universes
directly and Web Intelligence documents in SAP Analytics Cloud using the SAP BusinessObjects Live Data
Connect component.
Procedure
• Allow pop-up windows from your SAP Analytics Cloud domain, if you'll be choosing the single sign-on
(SSO) authentication method in step 2. For example:
[*.]sapanalytics.cloud
[*.]hanacloudservices.cloud.sap
[*.]hcs.cloud.sap
[*.]analytics.sapcloud.cn
• Allow third-party cookies from the Tomcat server's domain. For example, in Internet Explorer 11, go to
Internet Options > Security > Trusted Sites, add your domain name, then select Enable Protected Mode.
2. Add a remote system to SAP Analytics Cloud:
Note
After creating a connection to a remote system and before creating a model from a remote system,
you must log off and log on to SAP Analytics Cloud again.
Prerequisites
Caution
As of Google Chrome version 80, Chrome restricts cookies to first-party access by default, and
requires you to explicitly mark cookies for access in third-party, or cross-site, contexts.
To ensure that Chrome and other browsers allow cross-site access to your SAP on-premise data
source cookies from SAP Analytics Cloud, you must configure your SAP on-premise data source to
issue cookies with specific attributes. Without these settings, user authentication to your live data
connections will fail, and Story visualizations based on these connections will not render.
For details, see SameSite Cookie Configuration for Live Data Connections [page 266].
Context
Tip
We recommend using SAML authentication if you plan on deploying SAP BusinessObjects Live Data
Connect in production mode.
Procedure
1. In the Central Management Console of your BIP system, click Authentication > Enterprise.
2. In the Trusted Authentication section, check Trusted Authentication is enabled.
3. Click New Shared Secret.
4. Click Download Shared Secret.
Note
6. Set a validity period using the Shared Secret validity Period (days) parameter.
7. Click Update.
For more information on trusted authentication, refer to the Enabling Trusted Authentication section of the
Business Intelligence Platform Administrator Guide.
8. In the ldc.properties file, set the boe.authenticationMode parameter to saml.
a. Run the following command to generate a new keystore:
Tip
If your SAP BI Platform has been configured for HTTPS, rather than creating a new keystore, you
can reuse the same key store for SAP BusinessObjects Live Data Connect.
In the ldc.properties file, add the path to this keystore as the value of the saml.keystore.file
parameter, for example saml.keystore.file=/WEB-INF/samlKeystore.jks.
c. If not in the file already, add the SAML parameters in the ldc.properties file. Refer to Configuring
SAP BusinessObjects Live Data Connect for more details.
9. Download the SAML metadata of the Identity Provider (IDP) and save it in the WEB-
INF\classes\metadata directory as idp_metadata.xml. Refer to the Tenant SAML 2.0 Configuration
to learn how to download the SAML metadata.
10. Restart Tomcat.
11. Go to https://<HOST>:<PORT>/sap/boc/ina/saml/metadata to download the metadata file locally
on your file system.
12. Go to your IDP, create an application and upload the metadata file:
a. In the administration of the IDP, click Applications and Resources > Applications, then + Add.
b. Give a name to the application.
c. Click SAML 2.0 Configuration.
d. In the Define from Metadata section, click Browse and upload the metadata file you have previously
downloaded (see step 3).
13. Click Name ID Attribute and select the attribute (Login Name or User ID) to map and match the Account
Name property value of SAP BI Platform’s users.
14. Create the mapping between the IDP user and the BIP user:
Note
Make sure the name you set in the selected attribute (Login Name or User ID) strictly corresponds
to an existing Account Name in the SAP BI Platform (see step 5 of Defining the trust between the
Identity Provider and SAP BusinessObjects Live Data Connect).
Note
The e-mail corresponds to the e-mail address of the IDP user, and the SAML user mapping
corresponds to the user ID.
The following restrictions apply when you create a live data connection to an SAP BusinessObjects Business
Intelligence (BI) Universe.
You can find the release notes for this version in this SAP Note.
Models
The full list of BOE universe features supported can be found in the SAP BusinessObjects Live Data Connect
Installation and Security Guide.
Planning
• No planning features are supported (data entry, disaggregation, allocations, and so on).
Predictive
Other
Live Data Connection to SAP Universes and Web Intelligence Documents [page 438]
You can create live data connections to your on-premise data sources through the Cloud Connector, to make
use of advanced features.
Some advanced features for live data connections in SAP Analytics Cloud, such as scheduling the publication
of stories, require a connection using the Cloud Connector. The Cloud Connector is a component available with
the SAP Business Technology Platform (BTP). The Cloud Connector is also required for Smart Insights when
connecting to on-premise data sources.
The Cloud Connector is an SAP application that you would install inside your own internal network, which then
allows SAP Analytics Cloud to communicate with your on-premise systems.
Note
• Prerequisites:
• If you plan to use single sign-on (SSO), SAML SSO needs to be enabled between the SAP Analytics
Cloud tenant and the SAP BW, SAP S/4HANA, or SAP HANA system.
• The Cloud Connector is installed and preconfigured.
• At this time, this capability exists only for SAP BW, SAP BPC embedded, SAP S/4HANA, and SAP
HANA data sources.
• This information is intended for the system administrator for both SAP Analytics Cloud and the SAP
BW, SAP BPC embedded, SAP S/4HANA, or SAP HANA systems.
• Only advanced feature workflows are connected via the Cloud Connector. Users who use the web
application are connected without using the Cloud Connector.
• When live connection advanced feature workflows use the Cloud Connector, be aware that your data
will proxy via the BTP (your data won't be stored in SAP's network). You'll need to first agree to allow
your live data to leave your network. Go to Main Menu > System > Administration >
Data Source Configuration and enable the Allow live data to securely leave my network switch in the
Live Data Sources section. This switch is audited, so that administrators can see who switched this
feature on and off. To see the changes in the switch state, go to Main Menu > Security >
Activities, and search for ALLOW_LIVE_DATA_MOVEMENT. For more information, see Track User
Activities [page 2892].
1. Configure Your On-Premise Systems to Use the Cloud Connector [page 446]
2. Then set up trust between those systems and BTP if the live connection uses single sign-on:
• Set Up Trust Between the Cloud Connector and Your On-Premise ABAP Systems (BW, BPC, or S/
4HANA) [page 449]
• Set Up Trust Between SAP Analytics Cloud and Your On-Premise HANA Systems Using a Tunnel
Connection [page 452]
3. Create your live data connections:
• Live Data Connection to SAP BW Using a Direct CORS Connection via Unified Connectivity [page 378]
• Live Data Connections to SAP S/4HANA [page 355]
• Live Data Connections to SAP HANA [page 277]
Related Information
Configure your on-premise data source systems to use the Cloud Connector.
Context
To use the SAP Business Technology Platform (BTP) Cloud Connector for data source connections, you'll need
to complete these configuration steps:
Note
At this time, this capability exists only for SAP BW, SAP S/4HANA, and SAP HANA data sources.
Procedure
Virtual Host: <can use the same host as the internal host>
Virtual Port: <can use the same port as the internal port>
a. Select (Edit).
b. In the Edit Trust Configuration dialog, find the lcs entry in the Description column.
c. Select the Trusted check box for the lcs entry, and save the configuration.
10. Download the Cloud Connector's system certificate:
If the Cloud Connector is newly installed, there is no certificate available to download. The certificate
needs to be either uploaded or generated first. To add a certificate, see Configure a CA Certificate for
Principal Propagation .
In addition to the CA certificate, you'll first need to install a system certificate for mutual
authentication.
Remember
The system certificate needs to be renewed periodically, or else connections that use the Cloud
Connector may stop working.
d. In the System Certificate section, select (Download certificate in DER format), and save the system
certificate file.
11. Generate a Cloud Connector sample certificate based on a valid user's identifier value:
a. In the Principal Propagation section, select the Create a sample certificate icon.
b. Type a valid user identifier.
For example, if you configured “User ID” as the user identifier attribute in your identity provider, use
that User ID value here.
c. Select Generate, and save the sample certificate file.
12. Set the Common Name for the Cloud Connector:
a. In the Principal Propagation section, select Edit.
b. Set the Common Name (CN) field to an assertion attribute. For example, you can set it to ${name}.
For a list of assertion attributes, see Enable a Custom SAML Identity Provider [page 2895].
c. Select Save.
Next Steps
Next, you'll need to set up the trust between SAP Analytics Cloud and your on-premise systems:
• Set Up Trust Between the Cloud Connector and Your On-Premise ABAP Systems (BW, BPC, or S/4HANA)
[page 449]
• Set Up Trust Between SAP Analytics Cloud and Your On-Premise HANA Systems Using a Tunnel
Connection [page 452]
Configure your on-premise SAP ABAP system so that it trusts the Cloud Connector. This step is needed only if
your live connection uses single sign-on.
Prerequisites
Note
Make sure that you've already configured the Cloud Connector. For details, see Configure Your On-Premise
Systems to Use the Cloud Connector [page 446].
Context
• The Cloud Connector needs to trust the identity provider (IdP) that the customer uses (via syncing the
IdPs in the Cloud Connector interface).
• The live system needs to trust the Cloud Connector (via the system certificate).
• The live system needs to be configured to accept a short-lived X.509 certificate that is forwarded by the
Cloud Connector.
• The following steps are for uploading the certificate that you previously downloaded from the Cloud
Connector (see related link) to an SAP BW or BPC on-premise system, and configuring the BW or BPC
system to use principal propagation. For more information, see Configure Principal Propagation to an
ABAP System for HTTPS.
Procedure
1. Establish trust between the ABAP System and the Cloud Connector by importing the CA-issued system
certificate.
a. Start SAP Logon.
b. Log on to your on-premise ABAP system.
c. Open the Trust Manager.
You can type strust to find the Trust Manager.
d. Double-click SSL server Standard.
e. Switch to Edit mode.
f. Select the Import certificate icon at the bottom of the screen.
g. Choose the system certificate file that you previously downloaded from the Cloud Connector (not the
sample certificate file).
Note
The preceding steps describe how to configure one trusted proxy. If you want to configure multiple
trusted proxies, use the parameter icm/trusted_reverse_proxy_0, which can be included
in the profile multiple times, instead of the icm/HTTPS/trust_client_with_issuer and
icm/HTTPS/trust_client_with_subject parameters. (Add the parameter multiple times using an
incremented index at the end.)
For more information, and examples, see this SAP note: 2052899 .
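As an illustrative sketch only (the subject and issuer values are placeholders; confirm the exact syntax against SAP note 2052899), profile entries for two trusted proxies might look like:

```
icm/trusted_reverse_proxy_0 = SUBJECT="CN=<system certificate subject>", ISSUER="CN=<issuing CA>"
icm/trusted_reverse_proxy_1 = SUBJECT="CN=<second proxy subject>", ISSUER="CN=<issuing CA>"
```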
h. When both parameters appear in the parameter list, select the Back icon, and select Yes to update the
profile.
i. Save the profile, and select Yes to activate the profile.
j. Select the Back icon to go back to the SAP Easy Access screen.
k. Open the ICM Monitor (smicm).
l. Select More > Administration > ICM > Exit Hard > Global.
m. Select More > Goto > Parameters > Display.
The two new parameters are visible under HTTPS (SSL) settings.
3. Map the short-lived certificate.
For the Certificate Attr. field, select CN=<valid user identifier>. See Configure Your On-Premise
Systems to Use the Cloud Connector [page 446] for details.
For the Login As field, this setting depends on which attribute you configured in your identity provider
as your user identifier. If you used a user name or email address, you can select those options from the
drop-down list. If you chose any other attribute, select Alias from the list.
l. Select Continue to create the rule.
m. In the Rules list, double-click the check box in the Ext. Attributes or Attr column for the new rule, to
open the Extended Attributes dialog.
n. Select the check box Ignore case sensitivity in certificate entries, and select Continue.
o. Verify that the rule has been added, and then save the change.
p. Check that the user is mapped in the Certificate Status based on Persistence area.
4. Access ICF Services.
a. In the ABAP system, choose transaction code SICF and go to Maintain Services.
b. Select the GetServerInfo service.
c. Double-click the service, and go to the Logon Data tab.
d. Switch to Alternative Logon Procedure, and ensure that the Logon Through SSL Certificate
logon procedure is listed before SAML LOGON.
Related Information
Configure Your On-Premise Systems to Use the Cloud Connector [page 446]
Configure your on-premise SAP HANA system so that it trusts SAP Analytics Cloud. This step is needed only if
your live connection uses single sign-on.
Prerequisites
Note
Make sure that you've already configured the Cloud Connector. For details, see Configure Your On-Premise
Systems to Use the Cloud Connector [page 446].
Context
• The Cloud Connector maps to the SAP HANA system using no authentication.
• The live system needs to trust the metadata that is downloaded from SAP Analytics Cloud.
Procedure
8. In the SAP HANA XS Administration Tool, go to the menu, choose SAML Identity Provider, and select
Add near the bottom of the screen to add a new SAML IdP.
9. Open the metadata file you downloaded and copy the contents into the Metadata input area. Save the
changes.
Related Information
Configure Your On-Premise Systems to Use the Cloud Connector [page 446]
Live Data Connection to SAP HANA On-Premise Using a Tunnel Connection [page 287]
You can set the default language used by all live data connections.
If the data sources you are connecting to are translated to multiple languages, then users in SAP Analytics
Cloud should use the Data Access Language preference in their user profiles to control which language to
retrieve when viewing stories and digital boardrooms.
1. Users should set a Data Access Language in their user profile settings.
For more information on how to edit your user profile, see Edit Your Profile [page 44].
Note
If the language that you select in SAP Analytics Cloud is not available from the data source, the data
source will determine what language it uses as the default.
2. When creating a new live data connection, administrators should not select a language from the Default
Language list in the connection dialog. Any language selected in the connection dialog will override the
Data Access Language users have set in their user profile settings.
If your data sources are available in only one language, administrators should set a language when creating
a live data connection. This ensures that the available language is retrieved regardless of which Data Access
Language users have chosen in their user profiles.
Note
If the language that you select in SAP Analytics Cloud is not available from the data source, the data
source will determine what language it uses as the default.
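The precedence described above can be summarized in one rule: a Default Language set on the connection overrides the user's Data Access Language, and a language that isn't available on the data source falls back to the source's own default. A minimal sketch of that resolution order (the function and parameter names are illustrative, not part of the product):

```python
def resolve_language(connection_default, user_data_access_language,
                     available_languages, source_default):
    """Pick the language a live connection uses, per the precedence above."""
    # A Default Language set on the connection overrides the user preference.
    preferred = connection_default or user_data_access_language
    # If the chosen language isn't installed on the data source,
    # the data source falls back to its own default.
    if preferred in available_languages:
        return preferred
    return source_default
```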
Related Information
For many live data connections, you can choose None as your authentication option.
Single Sign-On (SSO) using SAML 2.0 is recommended when you are creating live data connections, but if your
SSO method does not support SAML 2.0, you can still use it with SAP Analytics Cloud.
For SAML 2.0 authentication, when creating a live connection, you must select SSO as your authentication
method.
Advantages:
• SSO with one central identity management system. For example, a SAML 2 Identity Provider.
• Works across both intranet and Internet.
• Works on all devices with a supported web browser.
Disadvantages:
• A SAML 2 Identity Provider must be set up to manage identities for both SAC and HANA.
For all other types of SSO you must select None as your authentication method. Any authentication type
supported by your SAP HANA system can be used, including X.509 Client Certificate authentication, Kerberos/
SPNego authentication, or SAP Logon Tickets.
Advantages:
Disadvantages:
• SAP HANA needs to be configured with an automated authentication option, if you have not done this already.
• Although providing an SSO user experience, identities on SAP Analytics Cloud and SAP HANA are not
centrally managed. The user logged on to SAP Analytics Cloud may not necessarily be the same user
logged on to your on-premise SAP HANA system.
• The authentication option may not work in all use cases. For example, X.509 Client Certificate
authentication requires that an existing PKI infrastructure must be in place in the corporate network,
and that the user’s browser has access to the user’s certificate.
• Kerberos/SPNego authentication only works in the intranet scenario, as Kerberos is an intranet
authentication protocol.
• SAP Logon Ticket authentication can only be used in embedding scenarios, and the portal that embeds the
SAP Analytics Cloud content must be able to issue SAP Logon Ticket beforehand. Additionally, the portal
and the SAP HANA system must be in the same DNS sub-domain.
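The sub-domain requirement for SAP Logon Tickets can be checked mechanically: the embedding portal and the SAP HANA system must share a common DNS domain suffix. A toy sketch of that comparison (hostnames and the comparison depth are illustrative assumptions, not product behavior):

```python
def same_subdomain(host_a: str, host_b: str, levels: int = 2) -> bool:
    """Check whether two hosts share the same trailing DNS labels.

    levels=2 compares e.g. 'example.com'; raise it to require a deeper
    shared sub-domain.
    """
    return host_a.split(".")[-levels:] == host_b.split(".")[-levels:]
```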
For details on how to set up automated authentication on your SAP HANA system, see the following:
The APOS Live Data Gateway provides live data connectivity from SAP Analytics Cloud to an expanded set of
on-premise data sources and cloud data sources.
Data provided through APOS Live Data Gateway is kept safely behind the firewall, with no data replication or
data upload requirements, and no need to re-create or duplicate data models or data security.
The APOS Live Data Gateway provides live data connectivity for numerous data sources of OLAP, relational,
data appliances, cloud, and legacy SAP data source types:
• Relational Data Sources: Microsoft SQL Server, Oracle, IBM DB2, Sybase IQ, Sybase SQL Anywhere,
MySQL
The process for building and using a live data connection with the APOS Live Data Gateway is simple,
straightforward, and aligned with the process for building and using an SAP HANA live data connection.
However, SAP HANA is not required.
For more information, see APOS Live Data Gateway - Technical Resources.
An APOS View is a semantic layer that identifies measures and dimensions for non-OLAP data sources, as such
data sources do not natively indicate them. APOS Views are simple for DBAs or Analysts to build and shield end
users from the need to write SQL, know how to join database tables, or understand complex data structures.
For more information on how this solution enables live data connectivity and expanded data source options,
see APOS Live Data Gateway - Technical Resources.
About APOS
APOS Systems Inc. is an SAP Partner focused on business intelligence and analytics solutions. APOS develops,
sells and supports solutions that enhance and extend the capabilities of the SAP BusinessObjects and SAP
Analytics Cloud platforms, delivering strong value to hundreds of customers globally. APOS solutions for SAP
Analytics Cloud expand data connectivity options, and provide content bursting and distribution capabilities.
Creating a direct live data connection results in the error Failed to connect to system.
Follow these troubleshooting steps to verify the configuration of your live data connections.
Error message:
• Failed to connect to system. Possible causes: SSL certificate untrusted, network error.
• Failed to connect to system. Please contact your administrator to check possible causes: SSL certificate untrusted, network error.
Resolution:
1. Check that the HTTPS certificate used by your data source is trusted. Open a new browser tab and
connect directly to the data source. For example, for SAP HANA you can use:
https://<xs-host:port>/sap/bc/ina/service/v2/GetServerInfo and verify whether the certificate is
automatically trusted or warnings are displayed. For more information, see KBA 2482807.
2. Check that the system you are trying to connect to is online, and that your host address and port are correct.

Error message:
• Failed to connect to system. Possible causes: CORS settings, third-party cookies blocked.
• Failed to connect to system. Please contact your administrator to check possible causes: CORS settings, third-party cookies blocked.
Resolution:
1. Make sure that your browser is not blocking third-party cookies.
2. Check that CORS is correctly configured. After verifying the two previous steps, you may still have
problems if the backend CORS configuration is incorrect. To verify that the configuration is incorrect,
you can open your Chrome Developer Tools and check that the console shows an error similar to the
following: "https://<xs-host:port>/sap/bc/ina/service/v2/GetServerInfo?timestamp=1507241400382:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin'
header is present on the requested resource. Origin 'https://yoursapanalytics.cloud' is therefore
not allowed access." If this error appears, it means that your data source has incorrect CORS settings
and the request to allow your SAP Analytics Cloud tenant to get data is rejected.

Error message:
• Failed to connect to system. More information can be found on the troubleshooting page.
• Failed to connect to system. Please contact your administrator to check possible causes.
Resolution:
Ask your SAP Analytics Cloud administrator to investigate possible causes. For more information, see
Find Help in SAP Analytics Cloud [page 41].
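When you inspect a failing request in the browser's developer tools, the decisive detail is the preflight response header. A small sketch of the check a browser performs on that response (the header name is the standard CORS one; the function itself is an illustration, not browser source code):

```python
def passes_cors_check(response_headers: dict, origin: str) -> bool:
    """Mimic the browser's access-control check on a preflight response."""
    allowed = response_headers.get("Access-Control-Allow-Origin")
    # The browser requires an exact origin match; for credentialed
    # requests (cookies), a wildcard "*" is also rejected.
    return allowed == origin
```

If this check fails for your tenant's origin, the data source's CORS configuration is the place to look.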
You can create connections to data source systems to allow data acquisition by SAP Analytics Cloud. Data is
imported (copied) to SAP Analytics Cloud, and changes made to the data in the source system don't affect the
imported data.
• For an overview of connection types and guidelines for system administrators, see the SAP Analytics
Cloud Connection Guide.
• The Salesforce connection will be deprecated in QRC4 2024. Please use a SQL Database connection
using JDBC or APOS Live Data Gateway for Acquired Connections. See SAP Note 3402695.
Workflow Overview
You must first install servers and components required by your connections using one of the following options:
Note
Some connection types do not require the installation or configuration of the Cloud Connector or the SAP
Analytics Cloud agent. Check the connection prerequisites to see which components are required.
1. Using the SAP Analytics Cloud Agent Simple Deployment Kit: Choose this option if you want to get
started with basic import data connections quickly. The deployment kit does not provide full customization
options.
For more information, see SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
Note
It is recommended that the Cloud Connector, SAP Analytics Cloud agent, and the SAP Java Connector
(JCo) are installed together on a dedicated server, and not a personal computer. This helps to ensure that
multiple users can use an import data connection without experiencing slowness or downtime.
1. For SAP Analytics Cloud to connect to on-premise backend systems using the Cloud Connector, it
is not necessary to create inbound holes in your firewalls. The Cloud Connector doesn't need to be
deployed in your DMZ.
2. Placing the Cloud Connector and agent in the DMZ may cause connectivity issues, because there are
network filters on the DMZ’s inner firewall.
3. The Cloud Connector and agent should be deployed in a network segment that allows outgoing
connections to the Internet, and direct (unblocked) connections to on-premise backend systems. Such
a network segment may exist in your corporate network instead of the DMZ.
4. If you choose to deploy the Cloud Connector and agent on separate servers, connections from the
Cloud Connector to the agent's Tomcat port must be allowed.
Once you have set up the required components, you must configure each connection in SAP Analytics Cloud.
Note
All import data connections may be viewed, edited, deleted or shared by users with roles that include the
Manage permission on Connections, except SAP ERP, Concur, Fieldglass and Salesforce connections. For
more information, see Permissions [page 2844].
The SAP Analytics Cloud Agent Simple Deployment Kit allows you to quickly get your import data connections
working.
The SAP Analytics Cloud Agent Simple Deployment Kit is ideal if you do not already have a Tomcat Server
deployment running, or an existing SAP Business Technology Platform (BTP) Cloud Connector configured.
Note
The simple deployment kit is only supported on Windows 64-bit operating systems. If you have a different
server, such as Linux, you'll need to install and configure the agent manually. See Installing SAP Analytics
Cloud Agent [page 472].
Note
The SAP Analytics Cloud agent deployment kit prior to version 1.0.269 has the Apache Tomcat memory
allocation set to 1GB out of the box. For better handling of large data volumes, ensure that Apache Tomcat
has at least 4GB of memory. Refer to SAP Note 2732879 for information on how to increase Tomcat
memory allocation.
The deployment kit has limited configuration options. If you would like all available options, perform a
full manual installation of all required components. If you have already installed the deployment kit, and
require more options, uninstall the kit before performing a full manual installation. For more information,
see Import Data Connection Overview Diagram [page 458].
When you run the deployment kit, a Tomcat server, the Cloud Connector, and the SAP Analytics Cloud agent
will be installed. Environment variables and folders for JDBC and File Server will also be created or configured.
If your connection type requires the SAP Java Connector (JCo) or JDBC drivers, you must manually deploy
them following the Post-Setup Guide included in the deployment folder. All components will be installed on
the same machine.
The deployment kit is available at SAP Software Downloads. Go to Support Packages and Patches, and search
for the CLOUD KIT. Follow the instructions in the README.txt included in the .zip and the Post-Setup
Guide available in the deployment folder.
The file parameters-example.ini is included in the deployment kit .ZIP file. To overwrite the preset user
settings used by the deployment kit, rename this file to parameters.ini and provide the following custom
parameters:
Clean Install
• Default installation: performs the installation using a Tomcat heap size of 4096 MB.
• Custom installation: choose one of the options for Tomcat heap size, or type a custom value between 256 MB and 32768 MB.
• Installs required software components unless you already have the Cloud Connector installed on your machine.
• Depending on what types of import data connections you plan to use, you may need to add the SAP Java Connector or JDBC drivers. Instructions are available in the Post-Setup Guide included in the deployment folder.
• If an existing parameters.ini file is found, choose whether to import the custom parameters from that file, or proceed with the clean installation, overwriting the parameters.ini file.
Modify SAP Analytics Cloud agent settings
• Choose one of the options for Tomcat heap size, or type a custom value between 256 MB and 32768 MB.
Repair/Upgrade
• Performs a repair on the existing setup or an upgrade to a newer version of the deployment kit. All user settings are preserved.
Remove Everything
• Removes all software components installed with the kit; user settings will not be preserved.
• If you choose to reinstall the deployment kit at a later time, user settings from the previous installation will not be available.
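The heap-size bounds stated above can be checked with a small helper before you type a custom value. This is an illustrative sketch, not part of the deployment kit:

```python
def validate_heap_mb(value: int) -> int:
    """Check a custom Tomcat heap size against the documented bounds."""
    # The deployment kit accepts custom values between 256 MB and 32768 MB.
    if not 256 <= value <= 32768:
        raise ValueError("Tomcat heap size must be between 256 and 32768 MB")
    return value
```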
Related Information
Prerequisites
• You must use a System Owner account to perform the following steps. If you don't know who the
system owner is, log on to SAP Analytics Cloud and from the side navigation, choose Security
Users . If the system owner has left your company you can transfer ownership to another user. For
more information, see Transfer the System Owner Role [page 2887]
• SAP Analytics Cloud can be hosted either on SAP data centers or on non-SAP data centers. Determine
which environment SAP Analytics Cloud is hosted in by inspecting your SAP Analytics Cloud URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
• You must know your S-User account ID. If your SAP Analytics Cloud tenant is hosted on a non-SAP data
center, and an S-User account has already been set up, your account ID is an email address. If you do not
know your S-User account ID, contact your SAP Administrator or SAP support. Or, if your tenant is on a
non-SAP data center and an S-User account hasn't been set up yet, you can follow these steps to create
one:
1. Log on to SAP Analytics Cloud.
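The URL-based data-center check described in the prerequisites can be sketched as a small helper. This is illustrative only; the prefixes shown are the examples from the text above:

```python
import re

def is_sap_data_center(tenant_prefix: str) -> bool:
    """Classify a tenant URL prefix: a single-digit suffix (e.g. 'us1', 'jp1')
    indicates an SAP data center; a two-digit suffix (e.g. 'eu10', 'us30')
    indicates a non-SAP data center."""
    match = re.fullmatch(r"[a-z]+(\d+)", tenant_prefix)
    if not match:
        raise ValueError(f"unexpected tenant prefix: {tenant_prefix!r}")
    return len(match.group(1)) == 1
```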
Context
In order to create an import data connection to an SAP Business Planning and Consolidation (BPC) system, an SAP Business Warehouse (BW) system, an SAP Universe, an SAP Enterprise Resource Planning (ERP) system, an OData service, or an SAP S/4HANA On-Premise system, you must set up and configure the Cloud Connector. To learn more about the Cloud Connector, see Cloud Connector.
Procedure
b. Select (Edit).
c. In the SAP Business Technology Platform (BTP) Account section, enter your S-User account ID,
and then select Add S-User. When the S-User account ID is accepted, account information will be
displayed.
Note
If your SAP Analytics Cloud tenant is hosted on a non-SAP data center, your account ID is an email
address.
d. Record this account information, because you will need it for setting up the Cloud Connector in step 5.
e. Select (Save).
2. Download or locate an existing SAP JVM.
For supported SAP JVM versions, see Prerequisites. You can download the SAP JVM here.
3. Extract the SAP JVM on the server where the Cloud Connector will be installed, and note the folder
location.
Note
It is recommended, but not mandatory, that the Cloud Connector, SAP Analytics Cloud agent, and
the SAP JCo are installed together on a dedicated server, and not a personal computer. This helps
to ensure that multiple users can use an import data connection without experiencing slowness or
downtime.
You can download the Cloud Connector here. You must specify the SAP JVM folder during the installation.
Note
Ensure that you select the option to start the Cloud Connector after installation.
The following information should be used in the Set Up Initial Configuration dialog:
• Region Host: The region host where your SAP Analytics Cloud tenant is running. For example:
us1.hana.ondemand.com.
• Subaccount: The SAP BTP account name for your SAP Analytics Cloud tenant.
Note: Don't use any personal subaccount you may have created in your SAP BTP global account.
• Subaccount User: Enter your <S-USER username>.
• Password: <S-USER password>.
• Location ID: Leave empty to use the default location, or add the ID of the location you set in SAP
Analytics Cloud.
If you enter Default as the location ID, this will not connect to the default location set in SAP
Analytics Cloud. We recommend that you leave the Location ID empty if you don't plan to set up
multiple Cloud Connectors in your system landscape.
Each Cloud Connector instance must use a different location, and an error will appear if you choose a location that has already been used. You can click the Edit icon to change the location.
• HTTPS Proxy: If you access the internet through a proxy, specify your proxy host and port number,
otherwise leave blank.
6. Confirm that you are connected in the Connection Information area. You may need to select the Connect
button to start the connection.
Next Steps
To connect to SAP BPC MS, SAP BW, SAP UNX, or SAP ERP, you must also install and configure the SAP
Analytics Cloud agent. For more information, see SAP Analytics Cloud Agent [page 471].
To connect to SAP BPC NW, SAP BPC for BW/4HANA, OData, and SAP S/4HANA, you only need to configure the Cloud Connector. For more information, see Configuring the Cloud Connector [page 465].
Related Information
How to configure the Cloud Connector to work with SAP Analytics Cloud.
Prerequisites
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The SAP Analytics Cloud agent is installed. For more information, see Installing SAP Analytics Cloud Agent
[page 472].
Note
The SAP Analytics Cloud agent is required for all import data connections except SAP BPC NW, SAP
BPC for BW/4HANA, OData Services, and SAP S/4HANA.
Context
To enable import data connections, you must add mappings to your SAP Analytics Cloud system, and all
remote systems, to the Cloud Connector administration page. For additional setup information, see Configure
access control for HTTP.
Procedure
1. Launch the Cloud Connector administration page and log on: https://<HCC HOST>:8443.
Note
Replace <HCC HOST> with the host name of the system where the Cloud Connector is installed.
Note
To find the Region Host, Subaccount Name, and Subaccount user, in SAP Analytics Cloud, from the side navigation, choose System Administration Data Source Configuration.
Note
If you enter Default as the location ID, this will not connect to the default location set in SAP
Analytics Cloud. We recommend that you leave the Location ID empty if you don't plan to set up
multiple Cloud Connectors in your system landscape.
Each Cloud Connector instance must use a different location, and an error will appear if you choose a location that has already been used. You can click the Edit icon to change the location.
Note
You can choose HTTPS if SSL is configured on the Apache Tomcat instance where the SAP
Analytics Cloud agent is deployed.
• (Optional) Virtual Host: The default virtual host and port are the internal host and port. You can rename
the host and port so that the internal host name and port are not exposed. The virtual host name and
port will be used when configuring the SAP Analytics Cloud agent.
• (Optional) Virtual Port: The port number used by the virtual host.
• Internal Host: If you are connecting to an SAP BPC NW or SAP S/4HANA On-Premise system, add the
host name of your SAP BPC NW or SAP S/4HANA On-Premise system. For all other connections, add
the internal host name of the Tomcat server where the SAP Analytics Cloud agent is running.
• Internal Port: The port number used by the internal host.
Note
The default Tomcat port is 8080 and the default HTTPS port is 8443.
Next Steps
For all connections except SAP BPC NW, SAP BPC for BW/4HANA, SAP S/4HANA, and OData Services, you
must configure the SAP Analytics Cloud agent. For more information, see Configuring SAP Analytics Cloud
Agent [page 475].
Related Information
The SAP Analytics Cloud, on-premise access agent (SAP Analytics Cloud agent) is a connectivity component.
SAP Analytics Cloud agent is an on-premise data connectivity component that is used to establish import data
connections for these data sources:
• SAP HANA.
• SAP Business Planning and Consolidation, version for Microsoft Platform (BPC MS).
• SAP Business Warehouse (BW).
• SAP BusinessObjects Business Intelligence Universes (UNX).
• SAP Enterprise Resource Planning (ERP).
• SQL databases.
• File servers.
Note
We recommend that you use the SAP Analytics Cloud Agent Simple Deployment Kit. For details, see SAP
Analytics Cloud Agent Simple Deployment Kit [page 461].
Tasks
You must install the SAP Analytics Cloud agent for some import data connections to work.
Prerequisites
• Apache Tomcat and the Java Standard Edition Runtime Environment (JRE) must be installed. For more
information, see System Requirements and Technical Prerequisites [page 2723].
• The Cloud Connector is installed on the same server or a different server. For more information, see
Installing the Cloud Connector [page 463].
Note
It is recommended, but not mandatory, that the Cloud Connector, SAP Analytics Cloud agent, and
the SAP JCo are installed together on a dedicated server, and not a personal computer. This helps
to ensure that multiple users can use an import data connection without experiencing slowness or
downtime.
Modifying the Cloud Connector’s embedded web application and deploying the agent to it is not
supported.
Context
To create an import data connection to one of the data sources listed here [page 471], you'll need to set up and configure the SAP Analytics Cloud agent.
Note
As SAP Analytics Cloud is updated frequently, the SAP Analytics Cloud agent may also require updates,
and upgrading the agent will require a complete Apache Tomcat restart. To avoid impacting an existing
production application server, it is recommended to set up a separate application server for SAP Analytics
Cloud agent deployment. You may install Tomcat and deploy the agent on the same machine the Cloud
Connector is installed. For more information, see Updating SAP Analytics Cloud Agent [page 477].
1. Download the SAP Analytics Cloud agent from the SAP Support Portal.
The agent is available on the SAP Software Downloads page. Expand By Category, select SAP
Cloud Solutions SAP ANALYTICS CLOUD CONN SAP ANALYTICS CLOUD CONN 1.0 SAP ANALYTICS
CLOUD AGENT 1.0 and download the latest version.
2. Unzip the downloaded file and rename the WAR file to C4A_AGENT.war.
3. Extract the package and copy the file C4A_AGENT.war to your Tomcat webapps directory.
The agent will automatically deploy when Tomcat is restarted. For more information, see Tomcat Web
Application Deployment .
Note
If you plan to use Tomcat Web Application Manager to deploy the WAR file, you need to
update web.xml under \webapps\manager\WEB-INF\ and change <max-file-size> and <max-
request-size> to a limit higher than 52428800.
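For reference, these limits live in the manager application's multipart configuration. A sketch of the edited section might look like the following (values are in bytes; 52428800 is 50 MB, and the 104857600 shown here is simply an example of a higher limit):

```xml
<!-- \webapps\manager\WEB-INF\web.xml (excerpt) -->
<!-- Raise both limits above 52428800 so the C4A_AGENT.war upload succeeds. -->
<multipart-config>
  <max-file-size>104857600</max-file-size>
  <max-request-size>104857600</max-request-size>
  <file-size-threshold>0</file-size-threshold>
</multipart-config>
```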
4. Create a user for the SAP Analytics Cloud agent and assign the Services role to the user.
The user credentials will be needed later for setting up the connection to SAP Analytics Cloud. For
more information about how to create the user and assign the role, see Configuring Manager Application
Access .
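A minimal tomcat-users.xml entry for this step might look like the following. The user name and password here are placeholders, not values from the product documentation; choose your own and secure them as described in the note below:

```xml
<!-- \conf\tomcat-users.xml (excerpt) -->
<tomcat-users>
  <role rolename="Services"/>
  <!-- Placeholder credentials: replace with your own and encrypt the password
       per the Apache Tomcat documentation. These credentials are used later
       when configuring the agent in SAP Analytics Cloud. -->
  <user username="sac_agent" password="ChangeMe!" roles="Services"/>
</tomcat-users>
```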
Note
We recommend you follow the Apache Tomcat documentation on encrypting and securing your Tomcat
user password.
Alternatively, you can use the SAP Analytics Cloud Agent Simple Deployment Kit to set up your
SAP Analytics Cloud Agent and Cloud Connector. The installation script automatically encrypts your
Tomcat user password. For more information, see SAP Analytics Cloud Agent Simple Deployment Kit
[page 461].
5. Restart the Tomcat application server for the settings to take effect.
Note
If you are also creating a connection to SAP BW or SAP ERP, it is recommended to install the SAP JCo before restarting the Tomcat application server. For more information, see Installing the SAP Java
Connector (JCo) [page 474].
6. Test whether the installation was successful by opening the following URL in your browser: http://<Host>:<Port>/C4A_AGENT/deploymentInfo
The version of the SAP Analytics Cloud agent installed is displayed.
Next Steps
For connections to SAP BW or SAP ERP, you must install the SAP Java Connector. For more information, see
Installing the SAP Java Connector (JCo) [page 474].
Refer to SAP Note 2902697 for frequently asked questions and more information about the SAP Analytics
Cloud agent.
Related Information
How to install the SAP JCo to enable SAP Business Warehouse (BW) and SAP Enterprise Resource Planning
(ERP) connectivity via the SAP Analytics Cloud agent.
Prerequisites
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The SAP Analytics Cloud agent is installed. For more information, see Installing SAP Analytics Cloud Agent
[page 472].
Procedure
1. Download the SAP Java Connector (SAP JCo). For supported versions, see System Requirements and
Technical Prerequisites [page 2723].
The SAP JCo is available on the SAP Software Download Center. Search for Java connector and download the 64-bit Java connector for the appropriate operating system. Using a 64-bit Java connector requires 64-bit Apache Tomcat.
Note
You must ensure that the version of the SAP JCo you download is compatible with your on-premise
SAP system.
• On a Windows server, put the sapjco3.jar and sapjco3.dll files in the lib folder of Apache
Tomcat.
Note
It is recommended that the Cloud Connector, SAP Analytics Cloud agent, and the SAP JCo are installed
together on a dedicated server, and not a personal computer. This helps to ensure that multiple users
can use an import data connection without experiencing slowness or downtime.
Next Steps
You must configure the Cloud Connector. For more information, see Configuring the Cloud Connector [page
465].
How to configure the SAP Analytics Cloud agent settings in SAP Analytics Cloud.
Prerequisites
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The SAP Analytics Cloud agent is installed. For more information, see Installing SAP Analytics Cloud Agent
[page 472].
Note
The SAP Analytics Cloud agent is required for all import data connections except SAP BPC NW and
SAP BPC for BW/4HANA.
3. (For SAP BW and SAP ERP connections): The SAP JCo is installed. For more information, see Installing the
SAP Java Connector (JCo) [page 474].
4. The Cloud Connector is configured. For more information, see Configuring the Cloud Connector [page
465].
Procedure
2. From the side navigation, choose System Administration Data Source Configuration .
3. In the On-premise data sources section, do one of the following:
You can connect to multiple locations that have on-premise data sources. Each location that you add
should already have a Cloud Connector installed and configured on it.
Note
You can edit the Default location but you can't delete it.
4. If you selected Add a new location, enter the ID for the Cloud Connector installed at the location that you
want to use.
5. Make sure Enable Agent is set to ON.
6. Enter the following information:
• Host: Virtual host specified during the SAP Analytics Cloud configuration. For more information, see
Configuring the Cloud Connector [page 465].
• Port: Virtual port specified during the SAP Analytics Cloud configuration. For more information, see
Configuring the Cloud Connector [page 465].
• Username: Agent user name specified in tomcat-users.xml file. For more information, see Installing
SAP Analytics Cloud Agent [page 472].
• Password: Agent password specified in tomcat-users.xml file. For more information, see Installing
SAP Analytics Cloud Agent [page 472].
7. Select Create.
Next Steps
You must set up connections to your on-premise systems. For the list of systems that require configuration, see
Import Data Connection Overview Diagram [page 458].
Related Information
Prerequisites
• Apache Tomcat is installed. For supported versions, see System Requirements and Technical Prerequisites
[page 2723].
• Java Standard Edition Runtime Environment (JRE) is installed. For supported versions, see System
Requirements and Technical Prerequisites [page 2723].
Context
Note
Upgrading the SAP Analytics Cloud agent requires a complete Apache Tomcat restart.
Procedure
1. Download SAP Analytics Cloud agent from the SAP Support Portal.
The agent is available on the SAP Software Downloads page. Expand By Category, select SAP
Cloud Solutions SAP ANALYTICS CLOUD CONN SAP ANALYTICS CLOUD CONN 1.0 SAP ANALYTICS
CLOUD AGENT 1.0 and download the latest version.
2. Unzip the downloaded file and rename the WAR file to C4A_AGENT.war.
3. Undeploy the existing SAP Analytics Cloud agent running on Apache Tomcat.
If Apache Tomcat cannot undeploy the contents successfully, do the following:
a. Stop Tomcat.
b. Under the Apache Tomcat installation folder, delete \webapps\C4A_AGENT\ and any subfolders and
files.
c. Restart Tomcat.
4. Copy the file C4A_AGENT.war to your Tomcat webapps directory.
The agent will automatically deploy when Tomcat is restarted. For more information, see Tomcat Web
Application Deployment .
5. Restart the Tomcat application server for the settings to take effect.
Learn how to enable and collect more detailed trace log information for the SAP Analytics Cloud agent to help you troubleshoot data acquisition connections to your on-premise systems.
The following trace log information can be collected with the SAP Analytics Cloud agent and used to troubleshoot errors between SAP Analytics Cloud and the data source you're acquiring data from.
You can increase the Apache Tomcat log level in one of several ways to enable more detailed logging on your
SAP Analytics Cloud agent.
Increasing the log level can affect performance. It's recommended to do this for troubleshooting
connection issues, and then reduce the log level back to its previous settings once trace files are collected.
Procedure
1. Edit the \conf\logging.properties file under the Apache Tomcat folder. Refer to the Apache Tomcat
documentation for details.
2. Restart Apache Tomcat.
Procedure
handlers = com.sap.fpa.logging.CoreLogger
com.sap.fpa.logging.CoreLogger.level=WARNING
2. Optionally, you can also enable extra logging of request and response details by doing the following. Important: these extra details may contain credentials and other secrets, so log cleanup is highly recommended after debugging:
a. Start the Apache Tomcat client tool.
b. Go to the Java tab.
c. Under Java Options, add the following line to the end:
-DSAP_CLOUD_AGENT_LOG_REQUEST_DETAILS=true
Results
After restarting Tomcat, trigger the workflow that causes your error in SAP Analytics Cloud a few times, and then go to the Apache Tomcat log folder to get the latest logs:
Then reduce the log level back to its previous settings once you've collected the more detailed trace logs.
Related Information
KBA 3406144
KBA 2630653
You can create a connection that allows you to import data and models from an SAP Business Planning and
Consolidation (BPC) system.
Prerequisites
To connect to the standard configuration of SAP BPC for BW/4HANA, version 11.0 SP01 or higher, or version 11.1 SP00 or higher, you have completed ONE of the following setup options:
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
• Performed a manual setup of the following required components:
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The Cloud Connector is configured. For more information, see Configuring the Cloud Connector [page
465].
To connect to SAP BPC for SAP NetWeaver, version 10.0 SP12 or higher, or the standard configuration of 10.1
SP2 or higher, you have completed ONE of the following setup options:
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
• Performed a manual setup of the following required components:
To connect to SAP BPC for Microsoft Platform, version 10.0 SP12 or higher, or 10.1 SP2 or higher, you have
completed ONE of the following setup options:
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
• Performed a manual setup of the following required components:
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The SAP Analytics Cloud agent is installed. For more information, see Installing SAP Analytics Cloud
Agent [page 472].
3. The Cloud Connector is configured. For more information, see Configuring the Cloud Connector [page
465].
4. The SAP Analytics Cloud agent is configured in SAP Analytics Cloud. For more information, see
Configuring SAP Analytics Cloud Agent [page 475].
Note
Procedure
There are two types of authentication methods: Basic Authentication and OAuth Authentication.
e. If you choose Basic Authentication, enter the User ID and Password for your BPC system.
f. If you choose to use OAuth Authentication, you need to create your OAuth client in the BPC system following the prerequisites and detailed steps described in Creating an OAuth Client for BPC Data Acquisition with SAP Analytics Cloud [page 482].
g. Then go back to SAP Analytics Cloud and enter the OAuth Client ID, Secret, and Authentication URL you defined in BPC.
h. Choose the Go Authenticate button. A dialog will pop up for you to enter your BPC user and password. If you've configured SAML SSO, you can sign in directly to your BPC authorization page with single sign-on.
• You are using the same Identity Provider (IdP) for SAP Analytics Cloud and SAP NetWeaver.
For more information on setting up your identity provider in SAP Analytics Cloud, see Enable a
Custom SAML Identity Provider [page 2895].
• Ensure that SAML authentication related settings have been configured for BPC in SAP
NetWeaver. For more information on enabling SAML on SAP NetWeaver, see Enabling the
SAML Service Provider.
i. In the next dialog, choose Allow to confirm assigning the OAuth scope to the SAP Analytics Cloud user.
5. After you are authenticated successfully, choose Create.
The new connection is added to the list of connections on the Connections screen.
6. If you use an OAuth connection and the OAuth token expires, you need to re-authenticate.
7. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
Related Information
Compared to basic authentication, OAuth provides a more secure way to separate the client credentials of your BPC system from SAP Analytics Cloud by configuring authorization in the OAuth authorization server.
Prerequisites
2. In the SAP Business Technology Platform (BTP) Cloud Connector, add "/sap/bc" as an accessible
resource URL to corresponding BPC hosts.
3. Minimum BPC support packages for SAP_BASIS: upgrade BPC 740 to SP22, BPC 750 to SP12, BPC 751 to SP06, BPC 752 to SP02; or apply SAP Note 2602370.
4. Apply SAP Note 2687977 to register the OAuth scope in BPC.
5. When requesting an authorization code, an SAP Analytics Cloud user either needs to be on the same intranet as BPC or needs to maintain a reverse proxy.
Unsupported Features:
Context
Previously, when you entered your BPC credentials in SAP Analytics Cloud and the BPC connection
authorization dialog popped up for the first time, the credentials were stored in SAP Analytics Cloud. Now
with the support of OAuth, BPC user credentials won't be stored directly in SAP Analytics Cloud; instead an
OAuth token is generated and used in subsequent calls to BPC.
The token can also be revoked if the user credentials are leaked accidentally; the life cycle of the token is
decided by the authorization server. You can configure in BPC how frequently the SAP Analytics Cloud client
should refresh the token. After the token expires, SAP Analytics Cloud users need to re-authenticate to access
BPC.
If you combine OAuth with SAML, users no longer need to enter their BPC credentials again after single sign-on to the system.
Procedure
Related Information
You can create a connection that allows you to import data from SAP Business ByDesign Analytics.
Prerequisites
SAP Analytics Cloud needs a Data Service URL to create a connection to SAP Business ByDesign. Before creating a new connection, check that the Data Service URL uses the right format in SAP Business ByDesign. For more information, check SAP Note 2913312.
Note
While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda operators
“any” and “all”, which can reduce the collection of children to a single Boolean value. For this to work, both
the server and SAP Analytics Cloud must support OData v4.0.
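As an illustration of the lambda operators mentioned above, an OData v4.0 request can reduce a one-to-many navigation to a single Boolean per parent. The entity and property names here are hypothetical, not from the product documentation:

```
GET /odata/v4/Orders?$filter=Items/any(i: i/Quantity gt 100)
```

The `any` operator returns true for an Order if at least one of its Items satisfies the condition, so the parent-level measures are not distorted by expanding the child collection.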
Procedure
Note
All the parameters entered in the Data Service URL field are ignored in both authentication and data
queries.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
Related Information
How to create a connection that allows you to import models from an SAP Business Warehouse (BW) system.
Prerequisites
• You are connecting to an SAP BW system, version 7.3x or higher, an SAP BW/4HANA 1.0 system, SP4 or
higher, or an SAP BW/4HANA 2.0 system, SP0 or higher.
• You have completed ONE of the following setup options:
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information,
see SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
Note
The SAP Java Connector (JCo) must be installed using the instructions in the Post-Setup Guide
included in the kit.
RSBOLAP_BICS_STATISTIC_INFO
RSOBJS_GET_NODES
RSOBJS_GET_SUPPORTED_TYPES
RSOBJS_INIT
RSBOLAP_BICS_PROVIDER_VAR
Authorization Object | Authorizations | Value | Comments
Procedure
Note
If you want to share the credential details, select the option Share these credentials when sharing
this connection. Otherwise, users will need to enter their own credentials in order to use the
connection. If you don't share your credentials, users will be able to edit their credentials at any
time without having to start a data acquisition process.
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
Note
To export data back to an SAP BW or SAP BW/4HANA System, see the OData Services procedure in
Export Models and Data [page 750].
4. Choose Create.
The new connection is added to the list of connections on the Connections screen.
Related Information
See what SAP NetWeaver BW and SAP BW/4HANA data-import functionality is compatible with SAP Analytics
Cloud.
Note
For information on supported versions of SAP BW and SAP BW/4HANA, see System Requirements and
Technical Prerequisites [page 2723].
Characteristics | Yes |
Support for Characteristic Data Type NUMC | Restriction | Characteristics members with type NUMC are interpreted as String type in SAP Analytics Cloud, and not as numbers.
Support for Characteristic Data Type DATS | Restriction | Characteristics members with type DATS can be acquired in SAP Analytics Cloud. However, the data type must be manually set by the end user as type Date.
Variable Processing Type: Replacement Path | Restriction | For the variable processing type Replacement Path, variables with replacements from the query result are not supported when the first query again has variables that are configured as ready for input.
Condition in Rows (from BEx Query) | Restriction | Conditions along the Rows only.
Conditions for fixed set of characteristics (from BEx Query) | Restriction | Conditions along the Rows only. It is possible to acquire BW data results based upon a BEx condition rule at the time of data import. The condition is determined by the underlying rules defined in the BEx query, and the "structure of the query".
Condition for independent characteristics (from BEx Query) | Restriction | Conditions along the Rows only. It is possible to acquire BW data results based upon a BEx condition rule at the time of data import. The condition is determined by the underlying rules defined in the BEx query, and the "structure of the query".
Constant Selection | No |
Exception Aggregation | No |
Scaling Factor | No |
Number of Decimals | No |
Sort Characteristics | No |
Exchange of hierarchies | No |
Scheduling of a Story | No |
Context
Note
While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda operators
Procedure
Note
All the parameters entered in the Data Service URL field are ignored in both authentication and data
queries.
Note
If you want to share the credential details, select the option Share these credentials when sharing this
connection. Otherwise, users will need to enter their own credentials in order to use the connection. If
you don't share your credentials, users will be able to edit their credentials at any time without having
to start a data acquisition process.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
8. Choose Create.
The new connection is added to the list of connections on the Connections screen.
9. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Related Information
You can create a connection that allows you to import data from SAP Concur.
Prerequisites
A valid SAP Concur account is required to import data into SAP Analytics Cloud. The user must have Web
Services Administrator and Expense User roles assigned in SAP Concur.
To enable the connection, the SAP Concur web admin must generate a Company Request Token under
Administration Company Authentication Company Request Token using SAP Analytics Cloud’s App
ID: 8d70b41a-a89e-4a8a-834a-8399d38bff3e, and record the Company UUID and Company Request
Token.
If the Authentication menu is not available, please raise a case in the SAP Concur support portal with the case subject SAP Analytics Cloud to get the Company UUID and Company Request Token.
The Company Request Token is valid for 24 hours only. Make sure you create the Concur Connection in SAP
Analytics Cloud within 24 hours or repeat the steps to generate a new one.
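The 24-hour validity window above can be tracked with a trivial helper when coordinating between the Concur admin and the SAP Analytics Cloud admin. This is an illustrative sketch only:

```python
from datetime import datetime, timedelta

def token_deadline(generated_at: datetime) -> datetime:
    """Return the time by which the Concur connection must be created:
    the Company Request Token is valid for 24 hours only."""
    return generated_at + timedelta(hours=24)
```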
Note
Procedure
You can create a connection that allows you to import data from an SAP ERP system.
Prerequisites
You use a supported version of SAP ERP Central Component. For information on supported versions, see
System Requirements and Technical Prerequisites [page 2723].
If you are connecting to a Message Server, the SAP GUI must be installed.
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
Note
The SAP Java Connector (JCo) must be installed using the instructions in the Post-Setup Guide
included in the kit.
Note
If you're connecting to a Message Server, the “services” file on the computer where your SAP Analytics
Cloud agent is deployed may need to be configured. The machine that runs Tomcat must have SAP GUI
installed and corresponding connection entries must be created in SAP GUI. Have your IT department
perform the following steps:
1. Contact your SAP BASIS Administrator for the port number of the Message Server you are connecting
to. If you are the SAP BASIS Administrator, you can find the Message Server name and port number
by logging into the SAP system, and using transaction SMMS. In the Message Server Monitor, the
Message Servers and their port numbers are shown. For example: Message (3601).
2. Edit the “services” file, located in the following directory:
C:\Windows\system32\drivers\etc\services
3. Add this Message Server entry:
sapms<SID> <Port Number>/tcp #SAP System Message Server Port
For example, if the SAP Message Server <SID> is R79, and the port number is 3601, the entry to add to
the services file is:
sapmsR79 3601/tcp #SAP System Message Server Port
If the Message Server entry has already been added, confirm that the port number is correct.
4. Save the file.
5. Enter your SAP ERP system information and logon credentials.
6. Choose Create.
The new connection is added to the list of connections on the Connections screen.
7. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Related Information
How to create a connection that allows you to import models from SAP HANA views.
Prerequisites
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
• Performed a manual setup of the following required components:
1. The Cloud Connector must be installed. For more information, see Installing the Cloud Connector
[page 463].
2. The SAP Analytics Cloud agent must be installed. For more information, see Installing SAP Analytics
Cloud Agent [page 472].
3. The SAP JCo must be installed. For more information, see Installing the SAP Java Connector (JCo)
[page 474].
4. The Cloud Connector must be configured. For more information, see Configuring the Cloud Connector
[page 465].
5. The SAP Analytics Cloud agent must be configured in SAP Analytics Cloud. For more information, see
Configuring SAP Analytics Cloud Agent [page 475].
6. (Optional) For TLS security options, the certificate must be registered in the keystore of the JVM instance (Tomcat) that runs the SAP Analytics Cloud agent and bundles the SAP HANA JDBC driver ngdbc.jar. Some things to consider:
• Tomcat is not required to be secured with the same SSL certificate used to import data, nor does it
require HTTPS.
• SSL Certificates pointing to multiple DNS locations need to be further secured by having the
certificate imported to the local SAP Analytics Cloud agent JVM keystore.
• Refer to the SAP HANA Client documentation and KBA on TLS requirements and configuration:
Server Certificate Authentication and KBA 2487698
Procedure
Self-signed certificate: All communication with the SAP HANA database is encrypted. However, this option is less secure than the certificate authority option.
Signed by a certificate authority: This is the recommended option for best security. If you choose this option, you can then enter a different host name if you want to override the host name in the certificate.
Note
If you want to share the credential details, select the option Share these credentials when sharing
this connection. Otherwise, users will need to enter their own credentials in order to use the
connection. If you don't share your credentials, users will be able to edit their credentials at any
time without having to start a data acquisition process.
4. Choose Create.
The new connection is added to the list of connections on the Connections screen.
Related Information
You can create a connection that allows you to import data from SAP S/4HANA Cloud Edition or On-Premise
exposed OData V2 services.
Prerequisites
To connect to SAP S/4HANA On-Premise OData services, you must do the following:
Note
The Cloud Connector is not required for a connection to SAP S/4HANA Cloud Edition.
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page 463].
2. The Cloud Connector is configured. For more information, see Configuring the Cloud Connector [page
465].
Note
If you encounter issues configuring the SAP Gateway, see SAP Note 1797736 (SAP Gateway
Troubleshooting Guide) for possible solutions.
To connect to SAP S/4HANA Cloud OData services, the necessary configuration is documented in scope item
1YB https://rapid.sap.com/bp/#/scopeitems/1YB .
Note
The SAP Analytics Cloud agent doesn't need to be installed during the configuration process.
Context
An import data connection allows you to bring data from your SAP S/4HANA system into SAP Analytics Cloud.
For additional integration information and detailed configuration steps, see Integrating SAP Analytics Cloud
with SAP S/4HANA.
Note
• To export data back to SAP S/4HANA Cloud, see Exporting Data to SAP S/4HANA.
• While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda
operators “any” and “all”, which can reduce the collection of children to a single Boolean value. For
this to work, both the server and SAP Analytics Cloud must support OData v4.0.
• SAP Analytics Cloud supports OData Version 4.0. Logical operators (such as Equal, Not Equal, Greater
than, Greater than or equal, Less than, Less than or equal, Logical and, Logical or) are supported. The
Not operator (logical negation), arithmetic operators, and functions are not supported.
• Embedded Complex types are not supported.
• SAP Analytics Cloud does not consume CDS views from SAP S/4HANA; it consumes OData services
only. CDS-based analytical queries can be consumed only through a live connection to SAP S/4HANA,
not through an import data connection.
Note
Only the server address and port are required for both SAP S/4HANA Cloud and SAP S/
4HANA On-Premise. For example, https://<S/4HANA SERVER> for SAP S/4HANA Cloud or
https://<S/4HANA SERVER:PORT> for SAP S/4HANA On-Premise.
For SAP S/4HANA Cloud, SAP Analytics Cloud will connect to https://<S/4HANA
SERVER>/sap/opu/odata/IWFND/CATALOGSERVICE;v=2/ if the gateway service is
enabled. Otherwise, it will connect to https://<S/4HANA SERVER>/sap/opu/odata/sap/
API_CA_ODATA_CATALOG_SRV/. To verify that the page can be loaded in the browser without
error, please load the page with your credentials from the browser.
For SAP S/4HANA On-Premise, SAP Analytics Cloud will connect to https://<S/4HANA
SERVER:PORT>/sap/opu/odata/IWFND/CATALOGSERVICE;v=2/. To verify that the page can
be loaded in the browser without error, please load the page with your credentials from the
browser.
In the Data Service URL field, you can add extra URL parameters to accommodate data-source-
specific constraints on authentication. For example, you may pass in the saml2=disabled
parameter to disable SAML, or sap-system-login-basic_auth=X to disable custom login. All
parameters entered in the Data Service URL field (anything following "?" in the URL) are used
only for authentication and are ignored in data queries.
Depending on how your on-premise system is set up, you might need to add ?sap-
client=<client-number> to the end of the URL when creating a connection to route to the
OData client. If the OData client is the default one, then the connection will be created successfully
without specifying the client number. Otherwise, you'll need to specify the client number.
Note
If you want to share the credential details, select the option Share these credentials when sharing
this connection. Otherwise, users will need to enter their own credentials in order to use the
connection. If you don't share your credentials, users will be able to edit their credentials at any
time without having to start a data acquisition process.
You can create a connection that allows you to import data from SAP Fieldglass.
Prerequisites
A valid SAP Fieldglass account is required to import data into SAP Analytics Cloud.
Procedure
If SSO is set up for the user, the user will be taken to their own IDP for authentication in order to create the
connection. Otherwise, enter the password associated with the username in Fieldglass.
The new connection is added to the list of connections on the Connections screen.
6. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Prerequisites
A valid SAP SuccessFactors account is required to import data into SAP Analytics Cloud. A proper setup on
SAP SuccessFactors is also required to enable the connection. For more information, see the HXM Suite OData
API documentation in the SAP SuccessFactors platform product page.
When registering the OAuth Client Application on the SAP SuccessFactors site, after Certificate Generation you
need to download the certificate before selecting register. If you miss this step, you need to regenerate the
X.509 Certificate and download the .pem file.
When editing an SAP SuccessFactors connection, you have to re-enter the certificate and other secure
credentials.
Context
Note
While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda operators
any and all, which can reduce the collection of children to a single Boolean value. For this to work, the
server must support OData v4.0. Freehand queries can be used to apply the Lambda expression to the
corresponding columns.
Procedure
The service URL is based on the data center you are connecting to and can be found in the SAP
SuccessFactors documentation: About HCM Suite OData APIs.
Note
Remove the text /odata/v2 from the URL. For example, if your data center is Chandler1/HCM
(DC4) you would use the following text in the service URL field: https://api4.successfactors.com
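The rule above, dropping the trailing /odata/v2, can be sketched as a small normalization helper. The helper name is an illustrative assumption; only the transformation itself comes from the note.

```python
# Illustrative sketch: normalize an SAP SuccessFactors API endpoint into
# the value expected in the service URL field (drop the trailing /odata/v2).

def service_url(api_endpoint):
    return api_endpoint.rstrip("/").removesuffix("/odata/v2").rstrip("/")

print(service_url("https://api4.successfactors.com/odata/v2"))
```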
c. Enter the SAP SuccessFactors User ID of the user you want to import data from.
The User ID must belong to an administrative user in SAP SuccessFactors and have Admin access to
OData API and Manage OAuth2 Client Applications permissions.
d. Enter the API Key used to connect to SAP SuccessFactors.
The API Key is the OAuth API Key. To locate the API Key, log on to SAP SuccessFactors, and go to
Manage OAuth2 Client Applications.
e. Under Private Key, select and upload the certificate file that you generated during the SAP
SuccessFactors registration process.
You can create a connection that allows you to import models from Workforce Analytics.
Prerequisites
A valid Workforce Analytics account is required to import data from Workforce Analytics to SAP Analytics
Cloud. A proper setup on Workforce Analytics is also required to enable the connection. For more information,
see the SAP Analytics Cloud Connector for SAP SuccessFactors Workforce Analytics guide.
When registering the OAuth Client Application on the Workforce Analytics site, after Certificate Generation you
need to download the certificate before selecting register. If you miss this step, you need to regenerate the
X.509 Certificate and download the .pem file.
When editing a Workforce Analytics connection, you have to re-enter the certificate and other secure
credentials.
Procedure
The service URL is based on the data center you are connecting to and can be found in the Workforce
Analytics documentation: Environments Supporting the Integration.
c. Enter the Workforce Analytics User ID of the user you want to import data from.
The User ID must belong to an administrative user in Workforce Analytics, and have Admin access to
OData API and Manage OAuth2 Client Applications permissions.
d. Enter the API Key used to connect to Workforce Analytics.
The API Key is the OAuth API Key. To locate the API Key, log on to Workforce Analytics, and go to
Manage OAuth2 Client Applications.
e. Under Private Key, select and upload the certificate file that you generated during the Workforce
Analytics registration process.
Prerequisites
• You are connecting to a supported version of SAP BusinessObjects Business Intelligence platform. For
information on supported versions, see System Requirements and Technical Prerequisites [page 2723].
• You have completed ONE of the following setup options:
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information,
see SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
• Performed a manual setup of the following required components:
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector [page
463].
2. The SAP Analytics Cloud agent is installed. For more information, see Installing SAP Analytics
Cloud Agent [page 472].
3. The Cloud Connector is configured. For more information, see Configuring the Cloud Connector
[page 465].
Procedure
How to create a connection that allows you to import models from a Google BigQuery system.
Prerequisites
You need to have a Google account that has access to a BigQuery project.
Note
External data sources, including Google Drive, Google Cloud BigTable, and Google Cloud Storage, are not
supported as a BigQuery data source. For Google Drive data, you can choose the Google Drive data source
instead.
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Note
If you're unable to create a connection when using Microsoft Edge, please check your Trusted Sites
(Control Panel > Internet Options > Security > Trusted Sites > Sites), and do one of the following:
Or
You may need to restart your computer after making this change.
5. Choose a BigQuery project from the list, and then select OK.
The new connection is added to the list of connections on the Connections screen.
6. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Note
(Beta) Select the Enable users to schedule for story publishing option if you want to let your users
schedule the publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
How to create a connection that allows you to import models from Google Drive.
Prerequisites
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
If your Google account doesn't appear in the list, select Use another account.
Note
If you're unable to create a connection when using Microsoft Edge, please check your Trusted Sites
(Control Panel > Internet Options > Security > Trusted Sites > Sites), and do one of the following:
Or
You may need to restart your computer after making this change.
4. In the Connect to Google Account dialog, enter a name and a description for the connection.
5. Select OK to create the connection.
The new connection is added to the list of connections on the Connections screen.
6. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
You can create a connection that allows you to import data from both on-premise and cloud data sources using
generic OData services. It is possible to request a customized OData data source solution.
Prerequisites
Context
Note
• SAP Analytics Cloud supports OData Version 4.0. Logical Operators (such as Equal, Greater than or
equal, and logical Or) are supported. The Not operator (logical negation), arithmetic operators, and
arithmetic functions are not supported.
• While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda
operators “any” and “all”, which can reduce the collection of children to a single Boolean value. For
this to work, both the server and SAP Analytics Cloud must support OData v4.0.
• Supported OData Authorization request formats:
Authorization Code Request
The data service must accept the token request as URL encoded.
The data service must accept an Authorization header with the client ID and client secret encoded as a
base 64 string.
The data service must accept as a body parameter grant_type: authorization_code.
Scopes are optional, but if supported, the data service must accept them as a body parameter scope:
<SCOPE1 SCOPE2 SCOPE3...>. The scopes are sent as a string.
Example:
curl -X POST \
<Token URL> \
-H 'Accept: */*' \
-H 'Authorization: Basic <BASE 64 Encoded String>' \
-d grant_type=authorization_code \
-d code=<Authorization code> \
-d redirect_uri=<Redirect URI>
The response must be of content type JSON, and include in the response body the following
parameters as top level JSON objects: access_token, refresh_token, and token_type. It may
contain other parameters.
Example:
{
"access_token": "<ACCESS TOKEN>",
"refresh_token": "<REFRESH TOKEN>",
"token_type": "Bearer",
"expires_in": 0,
"scope": ""
}
Note that if the required parameters are returned in arrays or as nested parameters, they may not be
processed correctly.
Incorrect Example:
{
"authorization":
{
"access_token": "<ACCESS TOKEN>",
"refresh_token": "<REFRESH TOKEN>",
"token_type": "Bearer",
"expires_in": 0,
"scope": ""
}
}
Client Credentials
The data service must accept the token request as URL encoded.
The data service must accept an Authorization header with the client ID and client secret encoded as a
base 64 string.
The data service must accept as a body parameter grant_type: client_credentials.
Scopes are optional, but if supported, the data service must accept them as a body parameter scope:
<SCOPE1 SCOPE2 SCOPE3...>. The scopes are sent as a string.
Example:
curl -X POST \
<Token URL> \
-H 'Accept: */*' \
-H 'Authorization: Basic <BASE 64 Encoded String>' \
-d grant_type=client_credentials
The response must be of content type JSON, and include in the response body the following
parameters as top level JSON objects: access_token, and token_type. It may contain other
parameters.
Example:
{
"access_token": "<ACCESS TOKEN>",
"token_type": "Bearer",
"expires_in": 0,
"scope": ""
}
Note that if the required parameters are returned in arrays or as nested parameters, they may not be
processed correctly.
Incorrect Example:
{
"authorization":
{
"access_token": "<ACCESS TOKEN>",
"token_type": "Bearer",
"expires_in": 0,
"scope": ""
}
}
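The top-level-parameter requirement above can be illustrated with a small parser sketch. This is not product code; it simply shows why the nested shape in the incorrect example fails: the required parameters are not found at the top level of the JSON response.

```python
import json

# Illustrative sketch: accept a token response only when the required
# parameters are top-level JSON objects, as the contract above demands.

def extract_token(response_body, required=("access_token", "token_type")):
    payload = json.loads(response_body)
    missing = [k for k in required if k not in payload]
    if missing:
        # Nested or array-wrapped parameters end up here
        raise ValueError(f"missing top-level parameters: {missing}")
    return payload["access_token"]

good = '{"access_token": "abc", "token_type": "Bearer", "expires_in": 0}'
bad = '{"authorization": {"access_token": "abc", "token_type": "Bearer"}}'

print(extract_token(good))
try:
    extract_token(bad)
except ValueError as e:
    print(e)
```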
Basic
The data service must accept an Authorization header with the user name and password encoded as a
base 64 string.
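The Basic header described above is the standard scheme: user name and password joined with a colon and base64-encoded. A minimal sketch (the helper name is illustrative):

```python
import base64

# Illustrative sketch: build the Basic authentication header described
# above (user name and password, colon-joined, base64-encoded).

def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("user", "secret"))
```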
Procedure
Note
Advanced features of customized OData data sources, such as SAP Hybris Cloud for Customer and
SAP Business ByDesign Analytics, are only available using customized data source types. These
features are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
5. If you are connecting to an on-premise OData service, select the Location of your Cloud Connector from
the list.
6. Enter the Data Service URL published during your configuration.
Note
You may add extra URL parameters in the Data Service URL field, to accommodate data-source-
specific constraints on authentication. For example, you may pass in the saml2=disabled parameter
to disable SAML, or sap-system-login-basic_auth=X to disable custom login. All the parameters
entered in the Data Service URL field (anything following "?" in the URL) are only used in the
authentication flow, and are ignored in the data queries.
To add parameters to the URL, use the & separator between parameters.
Note
To successfully create a connection, a document that describes the services metadata must exist and
be accessible at {the URL you enter in the Data Service URL field}/$metadata.
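A sketch of the $metadata location rule above: because anything after "?" in the Data Service URL is used only for authentication, /$metadata belongs on the path, not after the query string. The helper name and the decision to preserve the query are illustrative assumptions.

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative sketch: derive the metadata document URL for a given
# Data Service URL, appending /$metadata to the path (not the query).

def metadata_url(data_service_url):
    parts = urlsplit(data_service_url)
    path = parts.path.rstrip("/") + "/$metadata"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))

print(metadata_url("https://host/odata/service?saml2=disabled"))
```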
• For Basic Authentication, enter the User Name and Password of the user you want to import data from.
• For OAuth 2.0 Client Credentials, enter the OAuth Client ID, Secret, Token URL and Scope (if required)
of the application you want to access.
• For OAuth 2.0 Authorization Code, enter the OAuth Client ID, Secret, Token URL, Authorization URL and
Scope (if required) of the application you want to access. Before the connection is created, you will be
redirected to the Authorization URL of your application where you can authorize SAP Analytics Cloud
to access your data.
9. Choose Create.
The new connection is added to the list of connections on the Connections screen.
10. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
You can create a direct OData connection that allows you to import data from SAP Integrated Business
Planning.
Prerequisites
To allow SAP Analytics Cloud to import data from SAP Integrated Business Planning (IBP), you need to perform
several setup steps in IBP, like creating a communication arrangement. For a complete list of the prerequisites
in IBP, please follow the prerequisites section described on the IBP help page.
Context
Note
While OData exposes one-to-many navigation, SAP Analytics Cloud cannot follow these relationships
because doing so would distort the measures at the parent level. OData v4.0 supports Lambda operators
“any” and “all”, which can reduce the collection of children to a single Boolean value. For this to work, both
the server and SAP Analytics Cloud must support OData v4.0.
Note
You may add extra URL parameters in the Data Service URL field, to accommodate data-source-specific
constraints on authentication. For example, you may pass in the saml2=disabled parameter to
disable SAML, or sap-system-login-basic_auth=X to disable custom login. All the parameters
entered in the Data Service URL field (anything following "?" in the URL) are only used in the
authentication flow, and are ignored in the data queries.
Note
Select the Enable users to schedule for story publishing option if you want to let your users schedule the
publishing of stories.
You can create a connection that allows you to import data from salesforce.com (SFDC).
Prerequisites
Caution
The Salesforce connection will be deprecated in QRC4 2024. Please use a SQL Database connection using
JDBC or APOS Live Data Gateway for acquired connections instead. See SAP Note 3402695 .
A valid SFDC account is required to import data from SFDC to SAP Analytics Cloud. If you want to use OAuth
for authentication and authorization, a proper setup on SFDC is required to enable the OAuth connection. For
more information, see the OAuth 2.0 on Force documentation.
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
There are two types of authentication for the Salesforce Connector: Basic Authentication and OAuth2
Authentication.
c. Enter the information required for the type of authentication you chose to configure.
• To configure Basic Authentication, enter the Authorization URL, Username, and Password for
your SFDC account. The authorization URL is the web service endpoint through which the third-party
application accesses your user's data.
• To configure an OAuth2 Authentication, enter the Authorization URL, Client ID, and Client Secret.
Copy the Callback URI and paste it in your SFDC setup page.
If you chose to configure an OAuth2 Authentication, follow the steps on the screen to finish creating the
connection.
The new connection is added to the list of connections on the Connections screen.
5. If you want to create a model based on this connection, see Create a New Model [page 645].
Note
(Beta) Select the Enable users to schedule for story publishing option if you want to let your users
schedule the publishing of stories. For details on scheduling, see Schedule a Publication [page 219].
You can create a connection that allows you to import data from an on-premise SQL database (JDBC +
Freehand SQL).
Prerequisites
• Performed setup using the SAP Analytics Cloud Agent Simple Deployment Kit. For more information, see
SAP Analytics Cloud Agent Simple Deployment Kit [page 461].
JDBC drivers must be installed using the instructions in the Post-Setup Guide included in the kit.
Note
You must use a Simba JDBC driver for Amazon Redshift, Amazon EMR, Apache Hadoop Hive,
Cloudera Impala, Apache Spark, and Hortonworks Data Platform.
2. Create a properties file that specifies the paths to JDBC drivers you want to use, and place it in the
same file system where the SAP Analytics Cloud agent is installed. The properties file can have any
name, such as DriverConfig.properties.
Example of a properties file:
#This file is to specify where your JDBC drivers are on the file system of the machine where the SAP BusinessObjects Cloud Agent is installed
#For each database you have a driver for, you can remove the # from before the name of the database and specify the path to the jar file
#If a driver requires multiple jar files you can separate the paths with semicolons
#This list corresponds to BOE 4.2 SP5
#Amazon EMR 5.6 (Hive 2.1)="path_to_JDBCdriver"
#Amazon EMR Hive 0.11="path_to_JDBCdriver"
#Amazon EMR Hive 0.13="path_to_JDBCdriver"
#Amazon Redshift="path_to_JDBCdriver"
#Apache Hadoop HIVE="path_to_JDBCdriver"
#Apache Hadoop Hive 0.10="path_to_JDBCdriver"
#Apache Hadoop Hive 0.12="path_to_JDBCdriver"
#Apache Hadoop Hive 0.13 HiveServer2="path_to_JDBCdriver"
#Apache Hadoop Hive 0.14 HiveServer2="path_to_JDBCdriver"
#Apache Hadoop Hive 0.7="path_to_JDBCdriver"
#Apache Hadoop Hive 0.8="path_to_JDBCdriver"
#Apache Hadoop Hive 0.9="path_to_JDBCdriver"
#Apache Hadoop Hive 0.x HiveServer1="path_to_JDBCdriver"
#Apache Hadoop Hive 0.x HiveServer2="path_to_JDBCdriver"
#Apache Hadoop Hive 1.0 HiveServer2="path_to_JDBCdriver"
#Apache Hadoop Hive 1.x HiveServer2="path_to_JDBCdriver"
#Apache Hadoop Hive 2.x HiveServer2="path_to_JDBCdriver"
#Apache Spark 1.0="path_to_JDBCdriver"
#Apache Spark 2.0="path_to_JDBCdriver"
#BusinessObjects Data Federator Server XI R3="path_to_JDBCdriver"
#BusinessObjects Data Federator Server XI R4="path_to_JDBCdriver"
#Cloudera Impala 1.0="path_to_JDBCdriver"
#Cloudera Impala 2.0="path_to_JDBCdriver"
Note
Any JDBC driver can be added as a Generic JDBC datasource as long as it uses queries
that are supported by MySQL.
3. Modify your properties file by un-commenting the lines of the databases you want to connect to,
and enter the path to the driver as the property value. If the driver requires more than one jar file,
the paths can be separated by a semicolon.
Caution
The names of databases in your properties file must EXACTLY match the names shown in
this example. If you change the names, the SQL connections will fail.
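As a quick sanity check for the properties-file rules above (exact database names, semicolon-separated jar paths, # for disabled entries), here is an illustrative parser sketch. It is not an SAP tool; the function name and the sample entry are assumptions for demonstration.

```python
# Illustrative sketch: parse a DriverConfig.properties file as described
# above. Lines starting with "#" are comments or disabled entries; active
# entries map a database name to one or more jar paths separated by ";".

def parse_driver_config(text):
    drivers = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and disabled entries
        name, _, value = line.partition("=")
        # jar paths may be quoted and semicolon-separated
        drivers[name.strip()] = [p for p in value.strip().strip('"').split(";") if p]
    return drivers

sample = 'Amazon Redshift="C:/drivers/redshift.jar;C:/drivers/common.jar"\n#Apache Spark 2.0="path_to_JDBCdriver"'
print(parse_driver_config(sample))
```

Checking that each listed jar path actually exists on the agent machine before restarting the agent can save a failed-connection round trip.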
7. You must restart the SAP Analytics Cloud agent, using either the Java option
-DSAP_CLOUD_AGENT_PROPERTIES_PATH (if the agent is installed on Linux), or an environment
variable SAP_CLOUD_AGENT_PROPERTIES_PATH (if the agent is installed on Windows), to specify
the complete path up to and including the properties file. Example path: C:\<full path to
file>\DriverConfig.properties.
• If you choose to use a Java option, restart the agent via the command line by navigating to the
tomcat/bin directory and doing the following:
1. Run the shutdown.bat or shutdown.sh script.
2. Open the catalina.bat or catalina.sh file in the tomcat/bin directory
and find the line where Java options are set. It should look similar
to this: set "JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG% -Xms1024m
-Xmx10246m -XX:NewSize=256m -XX:MaxNewSize=356m -XX:PermSize=256m
-XX:MaxPermSize=4096m"
3. Modify this line so that the -DSAP_CLOUD_AGENT_PROPERTIES_PATH option is included and
points to your .properties file.
Example: set "JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG% -Xms1024m
-Xmx10246m -XX:NewSize=256m -XX:MaxNewSize=356m -XX:PermSize=256m
-XX:MaxPermSize=4096m -DSAP_CLOUD_AGENT_PROPERTIES_PATH=C:\<path to
driver config file>\DriverConfig.properties"
Procedure
1. In the Modeler app, while creating a new model or importing data into an existing SQL Database model,
choose Get from a Data Source or Data source.
2. Select SQL Databases.
The server name must be used. If you use a named installation of MS SQL Server, the instance name is also
required. If the SQL Browser service is running, the port number is not required.
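To illustrate how the server name, optional instance name, and optional port combine, here is a sketch building a typical Microsoft SQL Server JDBC URL. The URL shape follows the Microsoft JDBC driver convention (server\instance:port); the helper itself is an illustrative assumption, not the connector's internal logic.

```python
# Illustrative sketch: compose a Microsoft SQL Server JDBC URL from the
# pieces described above.

def sqlserver_jdbc_url(server, instance=None, port=None):
    url = f"jdbc:sqlserver://{server}"
    if instance:
        url += f"\\{instance}"   # named instance of MS SQL Server
    if port:
        url += f":{port}"        # optional when the SQL Browser service runs
    return url

print(sqlserver_jdbc_url("dbhost", instance="PROD"))
```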
Note
If you want to share the credential details, select the option Share these credentials when sharing this
connection. Otherwise, users will need to enter their own credentials in order to use the connection. If
you don't share your credentials, users will be able to edit their credentials at any time without having
to start a data acquisition process.
You can create a connection that lets you import data from Qualtrics.
Prerequisites
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Procedure
For information about the Qualtrics data center, see the Qualtrics website: https://www.qualtrics.com/
support/integrations/api-integration/overview/#GettingStarted .
For information about how to generate the API token, see the Qualtrics website: https://
www.qualtrics.com/support/integrations/api-integration/overview/#GeneratingAnAPIToken .
5. Select Create.
6. If you want to create a model based on this connection, see Import Data to Your Model [page 702].
Use SAP Integration Suite Open Connectors capabilities with SAP Analytics Cloud to be able to import data
from several third-party cloud applications, such as Dropbox and OneDrive.
Prerequisites
Before you use SAP Integration Suite Open Connectors, you will need to get an SAP Business Technology
Platform (BTP) account and enable the Open Connectors service. To learn more about Open Connectors, see
the SAP Open Connectors documentation. Contact your SAP representative to purchase and set up Open
Connectors.
Context
Follow these steps to use the SAP Integration Suite Open Connectors with SAP Analytics Cloud. Once
you complete the integration, you and other users in your organization will be able to create import data
connections to Open Connectors data sources. At the moment, the following data sources are available:
Close.io
Infusionsoft REST
Insightly
Box
Dropbox
Egnyte
Microsoft OneDrive
SFTP Element
Note
Keep in mind that Open Connectors uses cloud credits and may incur additional charges. These are the API
requests that use cloud credits:
Refreshing the model with an Open Connectors data source: 1+ requests (depending on the number of rows of data)
Opening folders: 1+ requests (about 1 request for every 200 files per folder opened)
You can review your Open Connectors cloud credit usage in your SAP Integration Suite overview page.
Note
After you've integrated Open Connectors, you can pause or delete the account at any time to stop
incurring charges.
When the account is paused, users in your workspace won't be able to use any Open Connectors data
sources to refresh their models. They can continue using data they've already acquired though. For
example, they can still see a model in a story, but can't refresh the data.
If the account is deleted, users won't be able to use existing connections to Open Connectors data sources,
or models based on Open Connectors data sources.
Procedure
1. In SAP Analytics Cloud, from the side navigation, choose System Administration Data
Source Configuration .
Results
From the Connections screen, users can now create import data connections to the available third-party data
sources.
Related Information
Import Data Connection to SAP Open Connectors Cloud Storage Data Sources [page 529]
Import Data Connection to SAP Open Connectors Query-Based Data Sources [page 532]
You can create import data connections to several file-based data sources using SAP Integration Suite
Open Connectors capabilities. Follow these steps to create a connection to data stored in a file-based cloud
application.
Prerequisites
Note
This feature is not available in the following cases:
• Your SAP Analytics Cloud system is hosted on a data center located within China.
• Your SAP Analytics Cloud system is hosted on an SAP data center. Your SAP Analytics Cloud system
must be hosted on a non-SAP data center (and therefore running in a Cloud Foundry environment).
Determine which environment SAP Analytics Cloud is hosted in by inspecting your SAP Analytics Cloud
URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
To create connections to file-based data sources powered by Open Connectors, please integrate SAP Open
Connectors with SAP Analytics Cloud first. For more information, see Use SAP Integration Suite Open
Connectors [page 526].
You may need to refer to the following documentation before creating the connection (see Step 3 below):
• Amazon S3: For more information on how to obtain the Access Key ID, Secret Access Key, Bucket Name,
and Region Endpoint, see Amazon S3 API Provider Setup.
• Box: For more information on the connection details, see Box API Provider Setup.
• Dropbox: For more information on how to obtain the API Key and Secret, see Dropbox API Provider Setup.
• Egnyte: For more information on how to obtain the Egnyte Subdomain, API Key, and Secret, see Egnyte
API Provider Setup.
• Microsoft OneDrive: For more information on how to obtain the API Key and Secret, see Microsoft
OneDrive API Provider Setup.
Note
Two redirect URLs are needed:
• Microsoft OneDrive for Business: For more information on how to obtain the API Key and Secret, see
Microsoft OneDrive for Business API Provider Setup.
• SharePoint and OneDrive for Business can share the application setup through the Microsoft Azure
portal.
• API permission requires Delegated permissions, SharePoint > MyFiles.Read.
Note
Two redirect URLs are needed:
• SharePoint: For more information on how to obtain the API Key and Secret, see SharePoint API
Provider Setup.
• SharePoint and OneDrive for Business can share the application setup through the Microsoft Azure
portal.
Note
The SharePoint user will need sufficient permissions for accessing the OAuth
application; otherwise the message “An error occurred while retrieving the
authentication code from SharePoint” may appear, or access_denied
may appear in the redirected URL.
To check whether a user has the required permissions, paste this URL in a
browser, replacing the variables with the information that was entered in the
connection dialog:
One way to grant the required permissions is to assign the user the Admin
role for the main SharePoint site, or assign Full Control for the team sites.
Please consult with Microsoft if you need to fine tune the SharePoint user
settings.
Context
You can create a connection to import cloud storage data from the following file-based data sources:
• Amazon S3
Procedure
Note
In the steps below, if you need to use a redirect URL, use the following URL instead of the ones shown in
the screenshots: https://bocauth.us1.sapbusinessobjects.cloud:443.
a. Amazon S3: Enter a unique Connection Name and your Amazon Access Key ID, Secret Access Key,
Bucket Name, and Region Endpoint, and then log on to Amazon using your account credentials.
b. Box: Enter a unique Connection Name, and the other required connection details, and then log on to
Box using your account credentials.
c. Dropbox: Enter a unique Connection Name, the Dropbox API Key, and the Dropbox API Secret, and then
log on to Dropbox using your account credentials.
d. Egnyte: Enter a unique Connection Name, the Egnyte Subdomain, the OAuth API Key, and the OAuth
API Secret, and then log on to Egnyte using your account credentials.
e. Microsoft OneDrive: Enter a unique Connection Name, the OneDrive OAuth API Key, and the OneDrive
OAuth API Secret, and then sign in to Microsoft using your account credentials.
f. Microsoft OneDrive for Business: Enter a unique Connection Name, OneDrive for Business Site Address,
OAuth API Key, and the OAuth API Secret, and then log on to OneDrive for Business using your account
credentials.
Note
Example: CompanyName-my.sharepoint.com
g. SFTP Element: Enter a unique Connection Name, and then enter a user name, password, and host
name.
4. Choose Create.
The new connection is added to the list of connections on the Connections screen.
You can create import data connections to several third-party data sources using SAP Integration Suite Open
Connectors capabilities. Follow these steps to create a connection to query-based data stored in a third-party
cloud application.
Prerequisites
Note
This feature isn't available in the following cases:
• Your SAP Analytics Cloud system is hosted on a data center located within China.
• Your SAP Analytics Cloud system is hosted on an SAP data center. Your SAP Analytics Cloud system
must be hosted on a non-SAP data center (and therefore running in a Cloud Foundry environment).
Determine which environment SAP Analytics Cloud is hosted in by inspecting your SAP Analytics Cloud
URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
To create connections to third-party data sources powered by Open Connectors, please integrate SAP Open
Connectors with SAP Analytics Cloud first. For more information, see Use SAP Integration Suite Open
Connectors [page 526].
You can create a connection to import query-based data from the following third-party data sources:
• Autotask CRM
• Close.io
• ConnectWise CRM REST Beta
• Infusionsoft REST
• Insightly
• Microsoft Dynamics CRM
• NetSuite CRM 2018 Release 1
Procedure
Note
In the steps below, if you need to use a redirect URL, use the following URLs instead of the ones shown
in the screenshots:
• If your SAP Business Technology Platform (BTP) landscape is operating in European Union (EU) Access
mode (for data protection within the EU), use https://bocauth.eu1.sapbusinessobjects.cloud/.
• Or, for an SAP Analytics Cloud host name that ends with "hana.ondemand.com",
"analyticscloud.sap.com", "sapanalytics.cloud", "sapanalytics.cn", "hanacloudservices.cloud.sap", or
"sapbusinessobjects.cloud", use https://bocauth.us1.sapbusinessobjects.cloud:443.
• Or, for an SAP Analytics Cloud host name that ends with "int.sap.hana.ondemand.com" or others,
use https://oauth-r.cnry.projectorca.cloud:443.
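The redirect URL selection above can be sketched as a small helper. This is an illustrative reading of the note, not an SAP API; checking the "int.sap.hana.ondemand.com" suffix first matters because it also ends with "hana.ondemand.com":

```python
US1_SUFFIXES = ("hana.ondemand.com", "analyticscloud.sap.com", "sapanalytics.cloud",
                "sapanalytics.cn", "hanacloudservices.cloud.sap",
                "sapbusinessobjects.cloud")

def redirect_url(host, eu_access=False):
    """Pick the OAuth redirect URL for an SAP Analytics Cloud host name,
    following the rules in the note above (EU Access mode takes precedence)."""
    if eu_access:
        return "https://bocauth.eu1.sapbusinessobjects.cloud/"
    # "int.sap.hana.ondemand.com" also ends with "hana.ondemand.com",
    # so it must be checked before the us1 suffix list.
    if host.endswith("int.sap.hana.ondemand.com"):
        return "https://oauth-r.cnry.projectorca.cloud:443"
    if host.endswith(US1_SUFFIXES):
        return "https://bocauth.us1.sapbusinessobjects.cloud:443"
    return "https://oauth-r.cnry.projectorca.cloud:443"
```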
a. Autotask CRM: Enter a unique Connection Name, the Server URL, and a Username and Password, and
then select Create.
b. Close.io: Enter a unique Connection Name, enter an API Key, and then log on to Close using your
account credentials.
For more information, see Close.io API Provider Setup.
c. ConnectWise CRM REST Beta: Enter a unique Connection Name, and the other required connection
details, and then log on to ConnectWise using your account credentials.
For more information on the connection details, see ConnectWise CRM REST API Provider Setup.
d. Infusionsoft REST: Enter a unique Connection Name, enter the OAuth API Key and the OAuth API
Secret, and then sign in to Infusionsoft using your account credentials.
You can share import data connections with other users, which will allow them to create models based on the
connection, as well as delete and refresh data sources that are based on the connection.
• SAP BW
• SAP BPC
• Google Drive
• Google BigQuery
• OData Services
• SAP Cloud for Customer
• SAP Business ByDesign Analytics
• SAP SuccessFactors
• SAP S/4 HANA
• SQL
• File Server
Note
Connections can be shared or unshared by the owner of the connection and any user with a role that
includes a Manage permission on Connections. For more information, see Permissions [page 2844].
1. From the connection page, select an existing connection that supports sharing.
The connection is shared with the users you selected. They will only be able to use the connection to create
models and data sources. When using the connection to connect to the database or external system where the
data is stored, your credentials will be used. In some cases, you can share a connection without sharing your
credentials, prompting users to enter their own credentials in order to use the connection.
The (Share) icon appears next to the connection that has been shared.
When the connection is unshared, users can no longer see the connection and can no longer refresh or delete
data sources associated with that connection.
Related Information
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Your SAP Analytics Cloud system includes a default export connection to Google Drive. This connection is used
when exporting a story to Google Slides, allowing your users to sign in to their Google account and save the
story to their personal Google Drive. See Export a Story as Google Slides (Classic Story Experience) [page 259]
for steps to help your users export their stories.
• When your tenant system is created, the System Owner, Admin roles, and any existing user with Manage
permission on Public File and Read permission on User items can see, manage, and share the Google Drive
connection. This also happens on existing tenant systems that receive the 2019.1 update with the Google
Drive connection.
• The System Owner is the default owner of the connection. Note that if the System Owner is changed, the
owner of the connection is not automatically changed.
• Only users who have the Maintain permission on the Connection item can see the export to Google Slides
option in their story's Export Story As dialog. However, these users will not be able to perform the export
unless an administrator also shares the connection with their user account. They can request permission
from an administrator in the Export Story As dialog.
• No user can delete or edit this connection, and the connection can't be exported from the Deployment
area.
There are two cases where you need to share the Google Drive connection with new users:
• A new user who is not an administrator must have the connection shared with them before they can use
the export to Google Slides option in the Stories area. These users do not have permission to share or
manage the connection.
• A new administrator can only manage the connection in the Connection area if the connection is first
shared with them.
Related Information
You can create a connection that allows you to export data to OData services.
Procedure
Note
Make sure to complete the prerequisites found in the OData Services procedure in Export Models
and Data [page 750] before connecting to an on-premise source.
c. Enter the Data Service URL that was created during your configuration.
Note
The server address and port are required for SAP BW on-premise products, in the form
https://<BW SERVER:PORT>. The full URL should look like: https://<BW
SERVER:PORT>/sap/opu/odata/sap/RSBPC_ODATA_EXPORT_SRV
Note
If you want to share the credential details, select Share these credentials when sharing this
connection. Otherwise, users will need to enter their own credentials in order to use the connection.
4. Choose Create.
The new connection is added to the list of connections on the Connections screen.
Learn how to prepare, enhance, and optimize your data using SAP Analytics Cloud with advanced data
wrangling and data preparation workflows for both datasets and models.
• About Preparing Datasets for Stories and Analytic Applications [page 551]
• About Preparing Datasets for Predictive Scenarios [page 553]
• Create Standalone (Public) Datasets from Files and Data Sources [page 565]
Managing Models
Depending on your business case, you can choose between preparing your data using a model or a dataset.
Before building your story, you need to make sure that your data is prepared for scenario analysis. With SAP
Analytics Cloud’s wrangling experience, data preparation can be done using either a dataset or a model.
Both offer the same level of functionality and don’t impact the way the data is consumed in stories.
However, the way you create and manage these data objects is different, as they address different purposes.
Note
Datasets and remote models are not supported in story features using Smart Insight on a variance chart
and cross-calculations on tables and charts.
The graphic below summarizes the main differences between a model and a dataset:
Datasets store data in a single table, and the semantic structure is defined by the metadata. Go for a dataset if
you’re only looking to upload data, using a .csv or .xlsx file, and analyze it in a story straight away.
Models store data as a star schema, and the structure of the model is reflected in the database. Go for a model
if the structure of the data is already set, or if you already have a structure in mind before importing the data to
fit into the model. A model is preferred when you need to govern data processing, as in planning.
Models guarantee that the data they hold follows a series of business rules that certify that workflows such as
planning can be run. Changes made to the structure of the model can be done either at the structure level, if
the fact table is empty, or by rebuilding the model from the original data preparation session.
Models also support row-level security, fine-grained data management of dimensions, and fact tables.
Datasets are reactive to change: any modifications you make to the data or data structure are made simply by
editing the dataset, without data loss or restriction.
Example
Say you want to change the data type of a field from a Dimension to a Measure. If you're using a dataset,
only the metadata definition of that column needs to be changed. Whereas if you are using a model,
it's more time-consuming: you need to delete the dimension table and update the fact table to include an
additional column.
Related Information
A dataset is a simple collection of data, usually presented in a table. You can use a dataset as the basis for your
story, and as a data source for Smart Predict.
Datasets are the first choice when you want to create a story or visualization quickly and don't want to get
into structure definition during data processing, or when development doesn't demand IT governance. In SAP
Analytics Cloud, you can encounter different types of datasets:
Embedded Dataset
When you create a story and import data from a file or other data source, but not from an existing saved model
or dataset, that data is saved as an embedded dataset (also called a private dataset) within the story, and this
dataset doesn't appear in the Files list. However, if you want others to be able to use this dataset, you can
convert it to a public dataset:
Note
After you've converted an embedded dataset to a public one, you become the owner of the public dataset,
and you may need to consider setting permissions and sharing on the new dataset, because the security
settings for the new dataset are independent of the story that contained the embedded dataset.
Standalone Datasets
This type of dataset is stored in SAP Analytics Cloud, and you can find it in a folder location (for example,
public, private, or workspace) on the Files page. You create standalone datasets either by importing a data
file or by acquiring data from other systems.
Datasets can be used as a data source for Smart Predict. However, they must have a certain structure and must
contain some mandatory information depending on the type of predictive scenario you are creating and where
you are in the modeling process. Each row represents an observation (which is the object of your interest),
and each column represents information corresponding to this observation. One of the columns represents the
target variable.
The graphic below summarizes which dataset is used depending on the step of the predictive process:
Note
There are sizing restrictions for acquired datasets. For more information, refer to System Sizing,
Tuning, and Limits [page 2743].
Input Datasets
In SAP Analytics Cloud, you can use one of the following types of input datasets:
• Acquired: Data is imported (copied) and stored in SAP Analytics Cloud. Acquired datasets have already
been prepared on your computer (supported formats are .TXT, .CSV, and .XLSX).
• Live: Data is stored in the source system. It isn't copied to SAP Analytics Cloud, so any changes in the
source data are available immediately if no structural changes are brought to the table or SQL view. You
can connect to live data and create a live dataset.
Restriction
For live datasets, any data changes you make to your tables and SQL views in your SAP HANA on-premise
system appear immediately in live datasets. However, to update your predictive model, you need to do a
retraining.
Depending where you are in the predictive model lifecycle, your input dataset can be a training or an application
dataset (in the case of a classification or regression predictive model) or both (in case of a time series
predictive model as only one dataset is used).
An input dataset is used to train the predictive model (training dataset) or is used to apply the predictive model
(application dataset).
Training Dataset
The training dataset contains the past observations that will be used to generate the predictive model. In this
set, the values of the target variable, which is the variable corresponding to your business issue, are known. By
Application Dataset
You apply a predictive model on an application dataset (for classification and regression predictive models).
This dataset must contain the same information structure as the corresponding training dataset as follows:
Note
Empty values in your dataset remain empty and they appear in the Blank Count column in the Dataset
Preview.
Generated Datasets
When you click the Apply button to get your predictions, a dataset containing your predictions is generated.
You can choose in which directory you want to save your dataset. By default, they are saved in this folder:
Main Menu > Browse > Files.
Note
When a dataset already exists with the same name as the dataset you are saving, then the following rules
apply:
• If both datasets have identical variables, the new dataset will automatically replace the existing one.
• If the datasets are different, you receive an Apply Failed message. To continue, save your dataset under
a different name.
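A minimal sketch of these save rules, assuming datasets are compared by their variable lists (the function and its names are illustrative, not an SAP API):

```python
def save_generated_dataset(existing_vars, new_vars):
    """Sketch of the rules above: if both datasets have identical variables,
    the new dataset replaces the existing one; otherwise the save fails
    with an Apply Failed message."""
    if list(existing_vars) == list(new_vars):
        return "replaced"
    raise RuntimeError("Apply Failed: save your dataset under a different name")
```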
The generated dataset contains the predictions and any additional columns you have requested.
Note
You can then use this generated dataset to create a story or an SAP Analytics Cloud model. However, if
you intend to get updates in your generated dataset, SAP recommends using it in a story: If you reapply
your predictive model and erase the generated dataset with an updated one, the story will be updated. For
example, if you have added rows to your apply dataset, the generated predictions for these new rows will be
added to the story. However, if you decide to use the generated dataset in an SAP Analytics Cloud model,
note that the SAP Analytics Cloud model won't be updated.
In this video, you will create a standalone dataset, perform data wrangling, review the measure and dimension
properties, review the data transformation and enrichment options, and see how to create an embedded
dataset in a story.
Access any dataset you are authorized to view and check how the data is distributed.
A dataset is a simple collection of data, usually presented in a table. You can use a dataset as the basis for your
story, and as a data source for Smart Predict.
In this area, you can see all datasets that you've created or that you are authorized to view.
During the initial data import, a data structure is created; the data type of each column is inferred. The results
are displayed in the right-hand Details panel.
• Dataset Overview: lists all the available columns under separate Measures and Dimensions headings in the
Output tab. You can view a list of all available columns in the Columns tab.
• Details: includes information for a given column, such as histograms (rectangles indicating frequency of
data items in successive numerical intervals), and the column's inferred Data Type.
Numerical or textual histograms are displayed under Data Distribution.
Text histograms are horizontal, and the values are clustered by count. The number of clusters can be
adjusted by dragging the slider displayed above the histogram. When a cluster contains more than one
value, the displayed count is the average count for the cluster. The count is prefaced by a tilde symbol
(~) if there are multiple different occurrences. Expand the cluster for a more detailed view of the values in
the cluster along with individual counts. Use the search tool to look up specific column values, and press
Enter to initiate the search. When you select a value in the histogram, the column is sorted, and the value
is highlighted in the grid.
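A rough sketch of how such a clustered text histogram could be computed (illustrative only; this is not how SAP Analytics Cloud implements it): values are counted, grouped into clusters, and each cluster shows the average count, prefixed with a tilde when the counts within the cluster differ.

```python
from collections import Counter

def text_histogram(values, clusters=3):
    """Cluster value counts into at most `clusters` groups; each group's
    displayed count is the average count, prefixed with ~ when the
    grouped counts are not all identical."""
    counts = Counter(values).most_common()
    size = max(1, -(-len(counts) // clusters))  # ceiling division
    out = []
    for i in range(0, len(counts), size):
        chunk = counts[i:i + size]
        avg = sum(c for _, c in chunk) / len(chunk)
        approx = "~" if len({c for _, c in chunk}) > 1 else ""
        out.append(([v for v, _ in chunk], f"{approx}{avg:g}"))
    return out
```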
Note
The displayed histogram is determined by the column's data type. A column containing numbers could
still be considered as text if Data Type is set to String.
Numerical histograms are vertical and represent the range of values along the x-axis. Hover over any bar
to show the Count, Min, and Max values for the data in the bar. The number of bars can also be adjusted
by using the slider above the histogram. Toggling the Show Outliers box includes or removes outlier values
from the histogram.
Note
Below a numerical histogram, there is a box and whisker plot to visualize the histogram's distribution of
values as well as Min, Median, and Max values.
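The numerical histogram described above could be computed along these lines (again, an illustrative sketch, not the product's implementation): the value range is split into equal-width bars, and each bar reports its Count, Min, and Max.

```python
def numeric_histogram(values, bars=4):
    """Split the value range into `bars` equal-width bins and report
    the Count, Min, and Max of each non-empty bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bars or 1  # avoid zero width when all values are equal
    bins = [[] for _ in range(bars)]
    for v in values:
        i = min(int((v - lo) / width), bars - 1)  # clamp the max value into the last bin
        bins[i].append(v)
    return [{"count": len(b), "min": min(b), "max": max(b)} for b in bins if b]
```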
If you cannot see the options to create a dataset, it's because you don't have the right role or permission
to do this. See Standard Application Roles [page 2838].
If you want to create a new dataset that's similar to an existing dataset, you can copy an existing dataset. You
can also move datasets to other folders or delete them.
Context
Datasets are treated like other files in SAP Analytics Cloud. To learn how to copy, move, or delete datasets, see
Manage Files and Folders [page 50].
Note
• You can delete datasets only if your user role has the privilege for deleting datasets.
• Datasets that are in use in stories can't be deleted.
• Datasets that are used in predictive scenarios can be deleted, but the scenarios would become
unusable.
Dataset dimensions and measures are created when you import the data.
Data structures and data types are inferred during the initial data import, letting you immediately start working
on story layout. For example, if you import this data:
2020/04/20 Blueberry 1
2020/04/20 Grapefruit 2
The resulting dataset could have a Date dimension, a String dimension, and an Integer measure, which you
could start using in a story. If those inferred data types aren't what you want though, you can change them first.
For example, if a column of dates is initially set to the String data type, you can change it to the Date type.
Columns of numbers are usually defined as measures, while other columns are defined as dimensions.
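Type inference during import could be sketched as follows. This is a simplified illustration (the date format shown is just one example), not the actual inference logic of SAP Analytics Cloud:

```python
from datetime import datetime

def infer_type(values):
    """Minimal sketch of data-type inference on import: try Date, then
    Integer, then Decimal, falling back to String."""
    def all_parse(fn):
        try:
            for v in values:
                fn(v)
            return True
        except ValueError:
            return False
    if all_parse(lambda v: datetime.strptime(v, "%Y/%m/%d")):  # one sample format
        return "Date"
    if all_parse(int):
        return "Integer"
    if all_parse(float):
        return "Decimal"
    return "String"
```

For the sample rows above, the date column would infer as Date, the fruit column as String, and the number column as Integer; as the text notes, you can still override an inferred type afterwards.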
For measures, you can define the number of Decimal Places, and the Measure Units; for example, bottles or
kilograms.
01-001 Pants
01-002 Shirts
01-003 Dresses
You could set the Product Description column to be the Description property for the Product dimension.
If you don't want to include all of the imported data columns as dimensions and measures in your dataset,
you can delete them in the Dataset Overview panel, in the Output list (on a dimension token, select
Delete). The raw data columns will still be included in the dataset, but dimensions and measures won't be
created for them, and therefore won't be available to use in stories.
If you later decide that you do want one of those deleted columns to be made into a dimension or measure,
select the column, and then in the Details panel, select Use as dimension or Use as measure.
You can easily create level hierarchies based on the imported dimensions. For details, see Create Level-Based
Hierarchies in Datasets [page 594].
• For Measures: there are currently two supported data types – Decimal or Integer
• For Dimensions:
• Date
• Integer
• Decimal
• String
• Time
• Date and Time
• Boolean
Note
When you change the Data Type for a dimension, you may have to specify a Conversion Format for the
new selection.
Note
For Smart Predict, also see this topic: Variable Data Types [page 2052].
Acquired Datasets
Standalone (public) datasets can be shared the same way that stories and folders can be shared. In the sharing
dialog, you can choose the access level for the users or teams that the dataset is shared with: View, Edit, Full
Control, or a Custom access level. Datasets that aren't shared can't be viewed or modified by anyone but the
dataset owner.
For information about sharing files, see Share Files or Folders [page 193].
You can also apply security settings based on user roles. In the Security Roles area, you can assign
general permissions for datasets, but you can't assign permissions for individual datasets. For information on
user roles, see Standard Application Roles [page 2838].
Users should be assigned a role with the appropriate dataset permission level. For example, someone assigned
only Read access to datasets in their role won't be able to create datasets.
Think of it like a combination: to read a dataset that another user shares with you, you'll need three things:
• Rights to read the dataset via the sharing rights that are set by the user when they share it.
• Read rights on the Dataset application privilege.
• Read rights on the Private Files application privilege.
If you don't have one of these three rights, you won't be able to read (open or use) the dataset.
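The "combination" above is simply a conjunction of the three rights, which can be made explicit in a few lines (illustrative only):

```python
def can_read_dataset(shared_with_me, dataset_read_right, private_files_read_right):
    """All three rights described above are required to read (open or use)
    a dataset another user shares with you; missing any one denies access."""
    return shared_with_me and dataset_read_right and private_files_read_right
```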
Note
For each role, an equivalent Team is created, and all users assigned to a role are also assigned to that team.
When you share a dataset, you can choose to share it with one of these role-based teams. For example, the
role BI Admin has full access to datasets. You can share a dataset with the team BI_Admin. Then, all users
who are assigned to the BI Admin role would have full access to that dataset.
Note
Embedded datasets inherit their permission settings from the containing story. For more information, see
Create Standalone (Public) Datasets from Files and Data Sources [page 565].
• Restrictions Using Live Dataset With Smart Predict in Restrictions [page 2040]
Related Information
Create a dataset and go through an analytical cycle of accessing your data, manipulating it, cleansing it,
creating analytics, validating different scenarios, and sharing your insights – easily and flexibly to give you an
empowered experience.
With the Smart wrangling capabilities in SAP Analytics Cloud, you prepare your data in a flexible way without
worrying about getting the data prepared perfectly from the start. With datasets, you decide to take care of
the data first: you collect, prepare, and analyze your data in a few steps.
• About Preparing Datasets for Stories and Analytic Applications [page 551]
Step 1 - Gather Your Data: Create Your Dataset and Import Your Data
You can create a standalone dataset (also called a public dataset) and save it as a separate object that appears
in the Files list.
Alternatively, when you create a story and import data from a file or other data source, but not from an existing
saved model or dataset, that data is saved as an embedded dataset (also called a private dataset) within the
story, and this dataset doesn't appear in the Files list. For more info, see About Datasets and Dataset Types
[page 544].
You can create a dataset from a file or a data source. For more information, see Create Standalone (Public)
Datasets from Files and Data Sources [page 565].
Step 2 - Improve Your Data Quality: Clean up, Redefine, Organize, Enrich,
Transform Your Data
From the dataset overview panel, you can easily do some simple wrangling actions to improve your data quality.
For more information, see About Data Wrangling, Mapping, and Transformation in Datasets [page 587].
Create custom transformations using the Wrangling Expression Language. For more information, see About
Data Wrangling, Mapping, and Transformation in Datasets [page 587].
Use your dataset in a story and explore your data using Augmented Analytics features. For more information,
see Data Visualization (Stories) [page 951] and Augmented Analytics (Smart Features) [page 1985].
Predictive Scenarios supports acquired and live datasets. However, datasets for predictive scenarios
must have a certain structure and must contain some mandatory information depending on the type of
predictive scenario you are creating and where you are in the modeling process.
To create Predictive Scenarios, you can use one of the following types of input datasets:
• Acquired: Data is imported (copied) and stored in SAP Analytics Cloud. Acquired datasets have already
been prepared on your computer (supported formats are .TXT, .CSV, and .XLSX).
• Live: Data is stored in the source system. It isn't copied to SAP Analytics Cloud, so any changes in the
source data are available immediately if no structural changes are brought to the table or SQL view. You
can connect to live data and create a live dataset.
Dataset Prerequisites
Depending on the type of predictive model that you are creating, you need to provide some essential
information for the input dataset.
Note
While using live datasets, both live datasets (training and apply datasets) must come from the same
SAP HANA system: you cannot train a predictive model with a live dataset with data from SAP HANA
system 1 and then apply this predictive model on a live dataset with data coming from SAP HANA
system 2.
Note
where YYYY stands for the year, MM stands for the month, DD stands for the day of the month, hh
stands for hour, mm stands for minutes, and ss stands for seconds.
Example
January 25, 2018 will take one of the following supported formats:
• 2018-01-25
• 2018/01/25
• 2018/01-25
• 2018-01/25
• 20180125
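The supported date formats shown in the example can be checked mechanically, which is useful when preparing an input dataset (a sketch; the format list mirrors the patterns above):

```python
from datetime import datetime

SUPPORTED_FORMATS = ["%Y-%m-%d", "%Y/%m/%d", "%Y/%m-%d", "%Y-%m/%d",
                     "%Y%m%d", "%Y-%m-%d %H:%M:%S"]

def is_supported_date(value):
    """Return True if the string matches one of the supported date formats."""
    for fmt in SUPPORTED_FORMATS:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    return False
```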
• Forecast accuracy can be improved when the training dataset includes influencers. These are other
variables that you think may have an influence on the signal variable. While values would normally be
available for these over the observation period, you must ensure that values for influencers are also
provided for the period you want to forecast. If values for influencers to cover the forecasted periods
are not available, the predictive model won’t be successful, as you need observations to cover all of the
requested forecast dates.
For example, if you want to forecast chocolate sales for the year, you could add the specific dates of festive
occasions to your dataset, for example, Easter, Christmas, Mother’s Day, or Valentine's day.
Note
The predictive forecasts will be influenced by the quality of the data you provided: both the historical
data but also the future values for the influencers can impact your predictive model's accuracy. In
some cases, it can be difficult to provide data with high quality for future values. For example: Future
values for weather information can only be forecasts.
Dataset Structure
If you want to create a time series predictive model, the training dataset should contain:
• A column with the date variable (one date per period of time). The date formats must be one amongst:
• YYYY-MM-DD
• YYYY/MM/DD
• YYYY/MM-DD
• YYYY-MM/DD
• YYYYMMDD
• YYYY-MM-DD hh:mm:ss
where YYYY stands for the year, MM stands for the month, DD stands for the day of the month, hh stands
for the hour, mm stands for the minutes and ss stands for the seconds.
Example
January 25, 2018 will take one of the following supported formats:
• 2018-01-25
• 2018/01/25
• 2018/01-25
• 2018-01/25
• 20180125
• A column with the value that you want to forecast, being the signal variable.
• While this is not mandatory, we highly recommend also including influencer variables as part of the training
dataset for the past dates and for the forecasted period, in order to improve the forecast accuracy.
If your goal is to create a segmented time series predictive model, make sure that one column includes the
segment information.
Example
For example, say that you work for an energy supplier, and you want to estimate the energy consumption
over the next 24 months, by dwelling sector of a given district. Your dataset should then contain at least
three columns, including the following data:
To summarize, your dataset must contain the right data in the right format, depending on the type of predictive
scenario.
To create an acquired dataset, from the Datasets start page, select From a CSV or Excel File.
You can create a live dataset using data from your SAP HANA on-premise system. However, you must ensure
that the following prerequisites are met:
• Ensure that a connection has been established between your SAP HANA on-premise system and SAP
Analytics Cloud. For more information, see Live Data Connection Overview Diagram [page 265].
• Ensure that you have one of the following security roles: Predictive Admin, BI Admin, or Admin.
1. From the Datasets start page, select Create Dataset, and then select Data from a data source.
2. From the Connect to Live Data area, select your existing SAP HANA Live Data Repository and complete the
following steps:
1. Select a Data Repository: Select the Data Repository that has been set up to create your live datasets.
2. Select a table: The existing schemas available for this Data Repository are displayed. Select the one that
points to the actual data stored in your SAP HANA on-premise system.
3. Give a name to your dataset in SAP Analytics Cloud and save it.
The dataset is now created and appears in the Files menu of SAP Analytics Cloud.
Note
As the data stays in your on-premise SAP HANA system, the data is available in real time: you don't need
to manually upload the dataset each time you update your data. Any data changes you make to your tables
and SQL views in your SAP HANA on-premise system appear immediately in the live dataset stored in SAP
Analytics Cloud, as long as you do not change the list of columns used. To update the predictions, you need
only two steps:
2. Re-apply the updated predictive model to update the output SAP HANA table. The updated predictions
will be immediately visible in your stories.
Note
When applying classification and regression predictive models, make sure to select the exact same
list of predictive outputs so that the existing output table can be updated in SAP HANA.
In SAP Analytics Cloud you can use one of the following types of input datasets:
• Acquired: Data is imported (copied) and stored in SAP Analytics Cloud. Changes made to the data in the
source system don't affect the imported data.
• Live: Data is stored in the source system. It isn't copied to SAP Analytics Cloud, so any changes in the
source data are available immediately if no structural changes are brought to the table or SQL view.
When you select an input dataset to create a predictive model, you won't see any distinction between acquired
and live datasets, although it is a good practice for the administrator creating the connection to a dataset to
include "live" in the dataset name, or its description. You can also see live datasets in the list of objects available
to you on the page Browse Files <User> as they include (Live) in the file type.
There are some differences between the two types of input datasets that concern mainly the display of results
in the predictive model debrief, and how some types of data are handled, when training or applying a predictive
model. These differences will be resolved over time, as predictive models are improved with each release of
SAP Analytics Cloud, but for the immediate future, please take into account the following differences that can
be observed when using either an acquired or live input dataset:
In the column Storage under Edit Variable Metadata, you can specify the data type for each variable. If
necessary, the physical data type (for example, a string) is automatically converted to the data type specified
by the user (for example, a number).
Note
Training a predictive model won't complete if a user-specified data type doesn't match the physical
data type.
Caution
The input datasets used to train and apply a predictive model must come from the same data source type.
You can't apply a predictive model on a live dataset if it was trained with an acquired dataset, nor can
you apply a predictive model on an acquired dataset if it was trained using a live one. However, you can
have several predictive models trained and applied with live and acquired datasets in the same predictive
scenario.
Note
When using live datasets, both the training and the apply dataset must come from the same SAP
HANA system: you cannot train a predictive model with a live dataset whose data comes from SAP
HANA system 1 and then apply this predictive model on a live dataset whose data comes from SAP
HANA system 2.
Datasets - Examples
Your dataset must have a specific structure so that it can be relevant for Smart Predict.
Once you have created your predictive model, you apply it to an application dataset. This dataset contains the
same information about the customers you want to target with the new product. The target variable column
("Will_Buy?") is empty or even doesn't exist because this is what you are expecting to predict:
You apply the regression predictive model to a new dataset which contains the same influencers. The values of
the target variable for this week ("number of complaints this week") are unknown.
Note
To give you a quick and clear overview of the structure, this example shows a 2:1 ratio, or 6 months of
historical data for 3 months of forecast. When creating your own time series predictive scenarios with a
month granularity, we recommend you use a 5:1 ratio, or 5 months of historical data for each month of
forecast. This ratio ensures that the engine can detect enough cycles to create a forecast.
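As a quick sanity check, the recommended ratio translates into simple arithmetic; the helper below is a hypothetical illustration, not a product API:

```python
def recommended_history_months(forecast_months: int, ratio: int = 5) -> int:
    """Months of historical data recommended for a forecast horizon,
    using the 5:1 history-to-forecast ratio suggested for month granularity."""
    return forecast_months * ratio
```

For example, forecasting 3 months ahead calls for 15 months of history.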
The generated dataset will look like the one displayed below:
Of course, you can combine entities and influencers. In fact, this is recommended if you want to increase the
accuracy of your time series models.
You can create a standalone dataset (also called a public dataset) and save it as a separate object that appears
in the Files list.
(When you create a story and import data from a file or other data source, but not from an existing saved
model or dataset, that data is saved as an embedded dataset (also called a private dataset) within the story,
and this dataset doesn't appear in the Files list. For details, see About Adding Data to a Story [page 1054]. Also
see About Datasets and Dataset Types [page 544].)
To create a dataset, you can import data from a data file or from other systems: SAP sources and external
sources.
Related Information
This section acts as a reference for all procedures to create datasets for all supported data sources.
Personal Files
You can import data from an external file, such as an Excel spreadsheet or comma-separated-values file, into a
new dataset. The data columns in the dataset will be exposed as dimensions or measures that you can use in
stories.
Context
First, the source data is analyzed, and then the data is shown with proposed dimensions for the new dataset.
You then refine the proposal by specifying dimension types and fixing any data-quality problems.
Data files can be in your local file system or in your network. The source data can be an Excel spreadsheet
(.xlsx) or a delimited text file (.csv or .txt). If you import data from Microsoft Excel, and if the data is saved in
1. From the Datasets start page, select From a CSV or Excel File.
2. In the Create Dataset From File dialog, choose whether you want to import data from a file on your local
system, or from a file server.
If you don't see the options to choose a local system or file server, see Allow Data Import and Model Export
with a File Server [page 712].
Tip
If you import a file from a file server, you can also schedule imports from that file. For more information,
see Update and Schedule Models [page 772].
3. If you're importing from a file server, choose a file server connection, or select Create New Connection.
If you create a new file server connection, specify the path to the folder where the data files are located. For
example: C:\folder1\folder2 or \\servername\volume\path or /mnt/abc.
4. Choose the file you want to import.
5. If you are importing from a local Excel workbook containing multiple sheets, select the Sheet you want to
import.
Note
If you are importing from an Excel file on a file server, the first sheet is automatically imported.
6. If you're importing a .csv file, select which delimiter is used in the file, or select Auto-detect.
7. Select Import.
For small data files, the data is imported and displayed in the data integration view, which shows the
imported data columns that you'll define as dimensions, measures, and attributes.
Larger data files may take some time to upload. You can work on other tasks while the data is being
uploaded in the background.
When the draft data is finished uploading, the data appears in the data integration view.
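SAP Analytics Cloud's Auto-detect option works internally; as a rough illustration of how delimiter detection can work, Python's standard library offers csv.Sniffer (assuming the sample is plain text with a consistent delimiter):

```python
import csv

def detect_delimiter(sample: str) -> str:
    """Guess the delimiter used in a CSV sample, restricted to the
    common candidates comma, semicolon, tab, and pipe."""
    dialect = csv.Sniffer().sniff(sample, delimiters=",;\t|")
    return dialect.delimiter
```

Checking a file this way before importing can tell you which delimiter to select in step 6 if Auto-detect does not give the expected result.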
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Context
For more information about extending SAP Business ByDesign using SAP Analytics Cloud, see Extending SAP
ByDesign Analytics using SAP Analytics Cloud . Information about available data sources in SAP Business
ByDesign can be found here.
From the acquire data panel, select the filter icon to narrow down the number of data sources in the
list. You can filter by data source type or by category.
3. Select an existing connection, or select Create New Connection to create a new connection.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
SAP BW
Prerequisites
You use an SAP Business Warehouse (BW) system, version 7.3x or higher release, or an SAP BW/4HANA
system, SP4 or higher, and both the SAP Business Technology Platform (BTP) Cloud Connector and SAP
Analytics Cloud agent are installed and configured.
Note
If you're importing data via a BEx query using an SAP BW import data connection, see SAP Notes
2416705 and 2408693.
5. You can select the icon to open the Select Presentations dialog, and then choose which
presentations you want to see. Depending on the characteristic, you could have these presentations
available:
• Key (DISPLAY_KEY)
• Key (Internal) (KEY)
• Long Text (LONG_TEXT)
• Medium Text (MIDDLE_TEXT)
• Short Text (SHORT_TEXT)
• Text (TEXT)
6. If one of the selected dimensions has a hierarchy, you can click (Select Hierarchy and Drill Level)
for more options, such as specifying which hierarchy to use. Note that No Hierarchy / Flat Presentation
(Default) is the default.
You can also specify the drill level. The default drill level is 2.
7. Select Create.
You can continue to work on other tasks while the data is being uploaded in the background.
When the draft data is finished uploading, the data appears in the data integration view.
5. Complete the following substeps if your SAP BW data source contains date dimensions that you want to
enrich with time-hierarchy information.
If you enrich the date dimensions, you can use date-related features such as sorting by date, and date
range sliders for filtering.
1. Select the column that contains the SAP BW date dimension data.
2. In the right hand Details side panel, in the Column Details section, choose the Date data type.
3. Choose the appropriate Conversion Format.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Related Information
Context
Note
SAP Cloud for Customer was formerly named SAP Hybris Cloud for Customer.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
SAP HANA
Prerequisites
Note
This help topic refers to acquiring SAP HANA data into an SAP Analytics Cloud dataset. For Smart Predict,
you can create a live SAP HANA dataset (the data is not acquired; it remains in an SAP HANA table). For
details, see: Connecting to Live Data in SAP HANA - Overview [page 338].
Note
Also for Smart Predict, you cannot consume the live dataset generated with predictions directly in a
story. You need to create a calculation view and then create an SAP Analytics Cloud model on top of this
calculation view. Then you can use this model in a story. For details, see Creating Calculation Views to
Consume Live Output Datasets [page 347] and Using Your Live Generated Dataset in an SAP Analytics
Cloud Model [page 2148].
• The HANA database must first be set up by the system administrator. The HANA views in the database
(analytic or calculation-type views) are then available for creating new models and datasets.
• You need to install the SAP Analytics Cloud agent, with location ID. This location ID is configured through
the Cloud Connector, and the agent needs to be allowlisted there. For more information, see:
Restriction
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
If your HANA data contains location information, you will need to add location dimensions. For more
information on live HANA data see Creating Geo Spatial Models from HANA Calculation Views [page 1483].
For information on acquired HANA data see Creating a Model with Coordinate or Area Data for Geospatial
Analysis [page 1479].
Related Information
Note
To connect to an On-Premise OData service, ensure that the following tasks are completed:
1. The cloud connector is installed. For more information, see Installing the Cloud Connector
[page 463].
2. The cloud connector is configured. For more information, see Configuring the Cloud Connector
[page 465].
The SAP Analytics Cloud agent doesn't need to be installed during the configuration process.
For more information, see: Import Data Connection to SAP Integrated Business Planning [page 518].
4. Type a name for your query.
5. Choose whether you want to build a query, or type in a query manually.
1. Select Build a Query to build a query using the query builder, or Freehand Query to manually type a
query using V2 OData query syntax.
2. If you chose to use the query builder, select a table, and then select Next.
Build your query by moving data elements into the Selected Data and Filters areas. For more
information, see Building a Query [page 704].
3. If you selected the Freehand Query option, type a query in the box and select OK.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
SAP S/4HANA
Prerequisites
You are using a supported version of SAP S/4HANA. For more information, see System Requirements and
Technical Prerequisites [page 2723].
Context
Note
• SAP Analytics Cloud supports OData Version 4.0. Logical operators (such as Equal, Not Equal, Greater
than, Greater than or Equal, Less than, Less than or Equal, Logical and, Logical or, Startswith, and
substringof) are supported for S/4HANA. Logical negation, arithmetic operators, and functions are not
supported.
The following operators need to be supported for each numeric data type, together with the literal
format used in filters:
• Number (Edm.Decimal): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]m
• Number (Edm.Double): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]d
• Number (Edm.Single): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]f
• Number (Edm.Int64): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]L
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Context
For more information on the data you can access, see the SAP SuccessFactors HCM Suite OData API:
Reference Guide in the SuccessFactors Product Page.
• Select (Refresh list) to display the list of reports from SuccessFactors in real time. Note: The list of
templates cannot be refreshed.
6. Select Create.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
SAP Universe
Prerequisites
• The Cloud Connector and SAP Analytics Cloud agent are installed and configured.
• You use a supported version of SAP BusinessObjects Business Intelligence platform. For information on
supported versions, see System Requirements and Technical Prerequisites [page 2723].
Note
When creating a dataset based on data from an SAP universe, data is acquired from the database and
retrieved into SAP Analytics Cloud. The acquired data can be altered by the succession of serialization
processes or by the business logic of the semantic layer.
The table below details the mapping strategy between the different layers involved:
• REF_CURSOR: N/A
• UNDEFINED: N/A
• XML: N/A
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Related Information
Context
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Note
The login prompt for Google BigQuery is displayed in a popup dialog. You'll need to disable the popup
blocker in your browser before trying to connect.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Google Drive
Context
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
When you connect to Google Drive, you can import any of the following file formats: Google Sheets, comma-
separated-values text files (csv), and Microsoft Excel files (xlsx).
The login prompt for Google Drive is displayed in a popup dialog. You'll need to disable the popup blocker in
your browser before trying to connect.
Note
You can also import files from Google Drive in stories when adding data to a new story.
The data is imported from Google Drive, and is displayed in the data integration view.
Note
You are still signed in to Google. When you are finished using SAP Analytics Cloud, it is recommended to
sign out of your Google account. To sign out, you can select Sign Out in the Select Google Drive File dialog,
or sign out of Google in your browser.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
OData Services
Context
Note
• SAP Analytics Cloud supports OData Version 4.0. Logical operators (such as Equal, Not Equal, Greater
than, Greater than or equal, Less than, Less than or equal, Logical and, Logical or) are supported.
Logical negation, arithmetic operators, and functions are not supported.
The following operators need to be supported for each data type, for a generic OData service to
integrate with SAP Analytics Cloud:
• String (Edm.String): "eq", "ne", "startswith", "toLower"
• Number (Edm.Decimal): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]m
• Number (Edm.Double): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]d
• Number (Edm.Single): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]f
• Number (Edm.Int64): "gt", "ge", "lt", "le", "eq", "ne"; literal format: [value]L
• To integrate with SAP Analytics Cloud your OData service must support these query parameters and
paging capabilities:
Query parameters:
• $select: Filters properties (columns). Lets you request a limited set
of properties for each entity.
• $filter: Filters results (rows). Lets you filter a collection of
resources that are addressed by a request URL.
• $expand: Retrieves related resources. Specifies the related
resources to be included with retrieved resources.
• $skip: Specifies the number of items in the queried collection that
are to be skipped and not included in the result.
• $top: Sets the page size of the results. Specifies the number of items
in the queried collection to be included in the result.
• $orderby: Orders the results. Lets you request resources in either
ascending or descending order using asc and desc.
• $inlinecount: OData V2 only. Specifies that the response to the
request includes a count of the number of Entries in the Collection of
Entries identified by the Resource Path section of the URI.
• $count: OData V4 only. Lets you request a count of the matching
resources included with the resources in the response.
• If you want to use the query builder in the step below when you create a new query, the data service
must support the select system query option. Example: https://services.odata.org/OData/
OData.svc/Products?$select=Price,Name.
Key-as-Segment isn't supported by the query builder, and should only be used with freehand queries.
Also, the $skip parameter must be supported by the data service.
• Embedded Complex types are not supported.
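As an illustration of how these system query options combine into a request URL, here is a hedged Python sketch (the function name and parameters are hypothetical; SAP Analytics Cloud builds such requests internally):

```python
from urllib.parse import urlencode

def build_odata_query(base_url, entity_set, select=None, filter_expr=None,
                      top=None, skip=None, orderby=None):
    """Assemble an OData query URL from the system query options
    described above ($select, $filter, $top, $skip, $orderby)."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_expr:
        params["$filter"] = filter_expr
    if top is not None:
        params["$top"] = top
    if skip is not None:
        params["$skip"] = skip
    if orderby:
        params["$orderby"] = orderby
    # Keep "$" and "," readable instead of percent-encoding them.
    query = urlencode(params, safe="$,")
    return f"{base_url}/{entity_set}" + (f"?{query}" if query else "")
```

With select=["Price", "Name"], this reproduces the documentation's own $select example URL for the Products entity set.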
Note
To connect to an On-Premise OData service, ensure that the following tasks are completed:
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector
[page 463].
2. The Cloud Connector is configured. For more information, see Configuring the Cloud
Connector [page 465].
The SAP Analytics Cloud agent doesn't need to be installed during the configuration process.
• Connect to an SAP OData service When you select this option, specific SAP metadata is respected.
This metadata specifies default behaviors based on SAP OData services guidelines.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
Note
When using freehand queries, the columns (including expanded entity columns) need to be
specified, otherwise they won't be picked up in SAP Analytics Cloud.
For example, the following queries let you get the product by rating, or find a nearby airport:
OData V2 syntax:
KeyPredicate:
Categories(1)/Products?$format=json&$select=Name,Category/Name,Supplier/
Name,Supplier/Address&$expand=Category,Supplier&$filter=Supplier/ID eq 0
FunctionImport:
GetProductsByRating?rating=3&$format=json&$select=Name,Rating,Category/
Name,Supplier/Name,Supplier/Address&$expand=Category,Supplier
OData V4 syntax using the expand parameter:
KeyPredicate:
Products?
$select=ID,Name,Description,ReleaseDate,DiscontinuedDate,Rating,Price&$expand
=ProductDetail($select=ProductID),ProductDetail($select=Details)
FunctionImport:
GetNearestAirport(lat=80, lon=90)
For more information on the OData query syntax, refer to the OData documentation .
SAP Analytics Cloud has the following validation rules for freehand queries:
• Duplicated parameters ($select, $expand, $format, $top, $skip, $inlinecount, $filter) in the query
are not allowed.
• Only entity set and function import are supported.
• For function import, entity set is only supported as a return type.
• If $select contains the Nav property but without $expand property, the query is invalid.
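The first validation rule can be checked mechanically; the following Python sketch (a hypothetical helper, not part of the product) flags duplicated system query parameters in a freehand query:

```python
from urllib.parse import urlparse, parse_qsl

# System query parameters that must not be duplicated, per the rules above.
SYSTEM_PARAMS = {"$select", "$expand", "$format", "$top",
                 "$skip", "$inlinecount", "$filter"}

def has_duplicate_system_params(query_url: str) -> bool:
    """Return True if any system query parameter appears more than once."""
    keys = [k for k, _ in parse_qsl(urlparse(query_url).query)
            if k in SYSTEM_PARAMS]
    return len(keys) != len(set(keys))
```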
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About
Adding Data to a Story [page 1054].
Qualtrics
Context
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Here are some suggestions to help you get the best out of analytics on Qualtrics surveys.
Note that questions in the Trash section of your survey shouldn't be used when creating your model or
dataset.
During the data preparation step while creating a model or dataset, the following basic initial steps will
help in getting better insights:
Qualtrics offers several options for question types that may be included in a survey. Here are suggestions for
handling some question types in SAP Analytics Cloud.
For a question of type NPS, for example Q1, Qualtrics generates a field Q1_NPS_GROUP with the value
Promoter or 3, Detractor or 1, or Passive or 2 for each response. To calculate an aggregated NPS
in SAP Analytics Cloud, calculate the counts of Promoters and Detractors, and calculate the NPS as:
(Promotercount/count - Detractorcount/count)*100.
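The aggregated NPS formula above can be sketched in Python (an illustration of the arithmetic only, not an SAP Analytics Cloud calculation definition):

```python
def net_promoter_score(groups):
    """Compute NPS from a list of Q1_NPS_GROUP values using
    (promoter_count/count - detractor_count/count) * 100."""
    count = len(groups)
    promoters = sum(1 for g in groups if g == "Promoter")
    detractors = sum(1 for g in groups if g == "Detractor")
    return (promoters / count - detractors / count) * 100
```

In SAP Analytics Cloud itself, you would build the equivalent counts as calculated measures and combine them with this formula.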
Matrix Table
For a question of type Matrix Table, columns are generated based on the number of statements. For example, if
Q2 has 3 statements, then the generated columns are Q2_1, Q2_2, and Q2_3, and the descriptions are available
as mentioned in basic step 2 above. You can create a model or dataset for each matrix question, by selecting
the columns ResponseID and Q2_1 to Q2_3, doing an unpivot on the Q2 columns, and then adding a count as
mentioned in basic step 1 above.
Link this dataset to the other dataset for this survey through ResponseID.
A similar approach can be followed for the Rank Order question type.
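The unpivot step described above can be sketched in plain Python (the Q2 column names follow the example; in practice you would perform this in the dataset's data preparation tools):

```python
def unpivot(rows, id_col, value_cols):
    """Turn wide matrix-table columns (e.g. Q2_1..Q2_3) into long format:
    one (id, statement, answer) record per original cell."""
    long_rows = []
    for row in rows:
        for col in value_cols:
            long_rows.append({id_col: row[id_col],
                              "statement": col,
                              "answer": row[col]})
    return long_rows
```

Each response row becomes one record per statement, which can then be counted and linked back to the other dataset through ResponseID.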
Text Entry
For these questions, if there is a TextIQ license, then performing text analysis generates some sentiment
related fields. It is useful to have calculated measures based on the Sentiment score field, which can have
Positive, Negative, or Neutral values.
Highlight
Avoid importing this question type during model creation because it generates a large number of columns,
which could exceed the maximum number of columns supported by SAP Analytics Cloud. For these kinds of
questions, use datasets instead.
The question numbers are available as dimension names. To have dimensions look more meaningful, add
a description for each dimension in the model or dataset, based on your Qualtrics survey. For dimension
members, the code values are provided. You'll need to provide descriptions based on the Qualtrics survey. You
can get all of this information using the “Export Survey to Word” option in Qualtrics.
Restrictions
• CSV export size limit = 1.8 GB: Currently, response exports that exceed 1.8 GB will fail. To prevent your
export from failing, use limits and filters to limit the size of your final export file.
• Each API token can run a maximum of 15 jobs per minute.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
SQL Databases
Prerequisites
• The SAP Business Technology Platform (BTP) Cloud Connector and SAP Analytics Cloud agent are
installed and configured.
• You have installed a JDBC driver. For details, see Import Data Connection to an SQL Database [page 520].
4. If two or more tables are joined, you can select the (Inner Join) icon to change the type of join to
one of the following options:
• Remove matched data (Exception) – This option removes rows from the left-hand side table
that have a match in the right-hand side table.
5. Select Preview to review the data; if two or more tables are joined, the Joined Table Quality chart will
show the number of accepted rows from the first table as well as matched values, duplicated values,
and omitted values from other joined tables. The values are displayed on the next page.
6. Select Hide Preview.
7. Select View SQL to show the SQL query generated for joined tables. You can then select Save as
Freehand SQL Query to create a freehand SQL query based on the content of the View SQL dialog.
Note
When creating a Freehand SQL query, if the query contains parameters that are shown with
question marks, the Freehand SQL query cannot be edited.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Related Information
Prerequisites
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Note
Only files that have been loaded will appear in the search results. To see more results, open more
folders.
Note
If you're importing an Excel workbook containing multiple sheets, the first sheet is automatically
imported.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
Prerequisites
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Next Steps
After importing the raw data, continue with data preparation before completing your dataset: About Adding
Data to a Story [page 1054].
From the grid view, you can edit the contents of your dataset. Clean, structure, enrich, or transform your data
using a range of tools, to suit your needs: for example, manage dimensions and measures, filter your data,
resolve data quality issues, set hierarchical relationships, define transforms on cells, rows, or columns,
geo-enrich your data, and so on.
Context
When you create a story and import data from a file or other data source, but not from an existing saved model
or dataset, that data is saved as an embedded dataset (also called a private dataset) within the story, and this
dataset doesn't appear in the Files list. For details, see About Adding Data to a Story [page 1054].
With your dataset open in SAP Analytics Cloud, you make changes directly to the data in the grid. For example,
you can:
• Modify the name of the (embedded) dataset: In Dataset Overview, select the (Edit) icon to the
right of the current dataset name. In the displayed dialog, provide a new name under Dataset Name, and
then select OK.
• Resolve data quality warnings in your dataset: Anomalies in your data could result in omissions from your
visualizations. Typically, these data quality issues involve missing data or measures containing non-numeric
values.
Note
To undo the transformation, open the Transformation Log and delete the associated entry.
Note
You can only perform one transpose transformation per session on a dataset. The table resulting from
the transformation cannot contain more than 50 million cells.
• Add geo data in your dataset: Before you perform geospatial analysis in stories, you must first enrich your
dataset with coordinate or area data. For more information, see Add Geomap Support to Datasets [page
596].
• Generate new custom transforms with the Custom Expression Editor: Type in a custom expression directly,
or use keyboard shortcuts to build an expression. See Apply Data Transforms to Cells, Rows, and Columns
in Datasets [page 590].
• Reimport data in your dataset: Reimport Data into a Dataset [page 598].
• Associate a scaling factor with your measures: Because large numbers can be difficult to read, on charts for
example, you can associate a scaling factor with your measures. Select the measure you want to scale, go
to Measure Details > Measure Properties, and select one of the available scaling options: thousand,
million, billion, or percent.
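The effect of a scaling factor on displayed values can be sketched as follows. This is a hypothetical illustration of the display logic only, not product code; the factor names and the `scale_measure` function are assumptions (the percent option is omitted here):

```python
# Hypothetical sketch of how a scaling factor changes the displayed value.
SCALE_FACTORS = {"thousand": 1_000, "million": 1_000_000, "billion": 1_000_000_000}

def scale_measure(value, factor_name):
    """Return the value as it would read under the given scaling factor."""
    return value / SCALE_FACTORS[factor_name]

# A revenue of 12,500,000 shown with a "million" scaling factor reads as 12.5.
```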
Related Information
Open your dataset and update measures and dimensions using a wide range of tools.
For every entity listed in Dataset Overview, select the icon to display a list of actions. For example:
Currently you can perform the following on both measures and dimensions:
Note
The underlying column will not be deleted from the dataset, but consumption workflows will no longer
consider it as part of the dataset.
your dimension, and in the Column Details panel, choose a column from the list under Description.
• Geo-enrich your data with coordinates or by areas. If your dataset contains geographical data, you can:
• Enrich a specific dimension with coordinates or area data.
• Use columns containing geographic data to create a location dimension.
For more information see Add Geomap Support to Datasets [page 596].
• Create your own hierarchies using the available dimensions in Dataset Overview. For detailed
information on creating hierarchies, see About Data Wrangling, Mapping, and Transformation in Datasets
[page 587].
• Change the data type: Although the data type for each column is inferred during the initial import, the
assigned data type can be modified. Select the icon to view Details
Define the default aggregation type of a measure so that you don't have to create new calculations every time
you want to aggregate a measure in your story.
By default, in the Dataset Overview panel, all measures have an aggregation type of SUM. However, you can
change the default behavior either in the Dataset Overview panel, or the Measure Details panel:
Or
• Click to open the Measure Details panel and under Measure Properties, select the relevant aggregation
type.
Note
Type Description
To transform your data, you can use the Context-Sensitive Editing Features and the Transform Bar. When you
select a column, a cell, or content in a cell, a menu appears with options to perform transforms:
• Quick Actions: allows you to perform actions such as removing duplicate columns, hiding columns, or
deleting columns or rows. The table below lists all the available actions:
• Delete Column: Delete a column. Use the Shift key to select and then delete multiple columns.
• Create a Transform: lists suggested transformations to apply to the column, such as replacing the
value in a cell with a suggested value. You can also select Create a Transform and choose from the options
listed under the transformation bar displayed above the grid. The transformation bar is used to edit and
create transforms.
As you hover over a suggested transformation, the anticipated results are previewed in the grid. To apply,
simply select the transform. You can manually define your transformation in the transformation bar; as the
transform is built, a preview is provided in the affected column, cell, or content within the cell. The table
below lists the available transformations you can apply to selected columns, cells, or content within cells.
• Remove duplicate rows: Remove all duplicate rows, across all columns of a dataset, when creating or
adding data to a model. Available only as a task bar icon.
• Concatenate: Combine two or more columns into one. An optional value can be entered to separate the
column values. Syntax: Concatenate [<Column1>], [<Column2>]… using "value"
• Split: Split a text column on a chosen delimiter, starting from left to right. The number of splits can be
chosen by the user. Syntax: Split [<Column>] on "delimiter" repeat "#"
• Extract: Use this transform to extract a block of text specified as numbers, words, or targeted values
within a column to a new column. Syntax: Extract [<what to extract>] [<where to extract>]
[<which occurrence>] [""] from [<column name>] [<include value option>]
Note
You must specify two target values when using between. Use occurrence when there are multiple
instances of the target.
• Replace: Replaces either an entire cell, or content that could be found in multiple different cells.
Syntax: Replace (<cell/content>) in [<Column>] matching "value" with "value"
• Filter: Filter a column to include or exclude values or date ranges. Syntax: Filter [<Column>] Matching,
Not Matching, Between value.
Note
For Between, a date range is required.
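As an illustration of what the Concatenate and Split transforms do to the data, here is a rough Python sketch. The function names and data shapes are hypothetical, not the product API:

```python
# Rough Python equivalents of the Concatenate and Split transforms.
# Function names and shapes are hypothetical, not the product API.
def concatenate(rows, col1, col2, sep=""):
    """Combine two columns into one; `sep` is the optional separator value."""
    return [row[col1] + sep + row[col2] for row in rows]

def split(values, delimiter, repeat):
    """Split a text column on a delimiter, left to right, at most `repeat` times."""
    return [v.split(delimiter, repeat) for v in values]
```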
Open the Custom Expression Editor and access the Wrangling Expression Language (WEL) to define your own
transformations using predefined functions and a scripting editor for specific wrangling capabilities.
Once your dataset is created, you need to model your data for further needs. SAP Analytics Cloud offers a
wide range of tools to help you transform your data. However, the available transforms might not cover all
your needs. In such cases, you can access the Wrangling Expression Language and define your own
transformations:
Note
Refer to the Custom Expression Help panel for a detailed description, syntax, and examples for each of the
available functions.
Example
Your dataset contains information on 30 years and you want to group the years into bins of 5 years. Use
binByBinWidth: [column] = binByBinWidth([Year], 5).
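Conceptually, fixed-width binning works like the following Python sketch. This is an illustration of the idea only; the bin-label format is an assumption, not the WEL implementation:

```python
# Conceptual sketch of fixed-width binning; the label format is an assumption.
def bin_by_bin_width(value, width):
    """Assign a value to a fixed-width bin and return the bin's label."""
    lower = (value // width) * width
    return f"{lower}-{lower + width - 1}"

# With a width of 5, the year 1997 falls into the bin "1995-1999".
```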
Example
You have a large dataset containing data on key social and economic indicators in the United States.
You want to see only the tax indicator: Use startsWith and filter on strings starting with "Tax":
[column] = startsWith([Indicator Name], 'Tax').
• Date & Time Functions let you work with date and time values.
Example
You want to calculate the difference in months between two dates. Use dateDiff: [column] =
dateDiff(2016, 2020, 'month').
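The idea behind a month-level date difference can be sketched as follows. This is a simplified illustration that treats dates as (year, month) pairs; it is not the WEL dateDiff implementation:

```python
# Simplified sketch of a month-level date difference; treats dates as
# (year, month) pairs. Not the WEL dateDiff implementation.
def date_diff_months(start, end):
    """Number of whole months between two (year, month) dates."""
    (y1, m1), (y2, m2) = start, end
    return (y2 - y1) * 12 + (m2 - m1)

# January 2016 to January 2020 is 48 months.
```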
Example
You want to calculate the distance between two locations, based on specified latitudes and
longitudes. Use distance: [column] = distance(latitude_1, longitude_1, latitude_2,
longitude_2).
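A distance between two coordinates is typically computed with the great-circle (haversine) formula; the sketch below illustrates the idea. The product's exact method and units are not documented here, so treat the formula choice and the kilometer unit as assumptions:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an assumption for this sketch

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two decimal-degree coordinates (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```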
• Other Functions
Example
You want to add unique row numbers to each row of your dataset, which can be used to see
individual measures instead of aggregated ones in your story. Use rowNumber: [Row Number] =
rowNumber(). A new dimension containing the row numbers is added to your dataset.
Row numbers can differ between sample and full data, or across data refreshes. Moreover, if you later use
a transformation that duplicates rows, this can result in duplicated row numbers as well.
Each time you create a custom expression, a transform log is created. Using this log, you can edit your
expression as many times as you want.
Note
You can access the transform log by clicking the Transform Log button.
Use level-based hierarchies when your data is organized into levels, such as Product Category, Product Group,
and Product. When the data is displayed in a story, hierarchies can be expanded or collapsed.
Context
You can create a level-based hierarchy when you're working on a story based on a dataset, or when you're
working directly on a dataset.
Procedure
Before you perform geospatial analysis in stories, you must first enrich your dataset with coordinate or area
data.
Prerequisites
You must use acquired data from supported sources such as SAP BW or a file (.xlsx or CSV). The data must
contain a location ID column with unique data, as well as either latitude and longitude columns, or country
(required if your regions and subregions are in different countries), region, and subregion columns.
Context
2. Select (Geo Enrichment) in the toolbar, and then choose either of the following options:
• By Coordinates if you want to use latitude and longitude data to enrich a dimension or create a location
dimension.
• By Area Name if you want to enrich a dimension based on country, region, and subregion data. The
country data can be imported as ISO3 and ISO2 codes, or the country names in English.
3. Provide the following information:
• By Coordinates:
In the Geo Enrich by Coordinates pane choose:
• The Enrich Dimension tab if you want to enrich an existing dimension with geographical data. If you
select this option, you need to provide the following information:
• Dimension ID: use the list of available dimensions to select the dimension you want to enrich.
• Specify the corresponding dimensions for Latitude and Longitude.
• The Create New Dimension tab if you want to create a new location dimension with geographical
data. If you select this option, you need to provide the following information:
• Dimension Name: This is the name of your new location dimension. You will need to specify
this name when creating geo maps in your stories.
• Specify the corresponding dimensions for Latitude and Longitude for the new location
dimension.
Note
When specifying coordinates, SAP Analytics Cloud only supports the decimal degrees
format (with the degree symbol omitted). For example: A latitude of 38.8897 and a
longitude of -77.0089.
• By Area Name
Choose how you want to specify country for your Geo Hierarchy:
• Select a Column in your Dataset: if you want to select the column containing the country data.
• Specify from a list of countries: if you want to select a specific country from a dropdown list.
Note
If you select a country from the list, after the data is geo-enriched, it will be limited to
areas within the selected country. To view a list of supported area names, use the provided
Supported Locations link.
When working with stories, the location dimension will be available to add to geo maps.
Note
In your geo maps, use the choropleth/drill layer to navigate through location dimensions enriched By Area
Name.
Related Information
For any local dataset, either a public dataset or an embedded dataset in a story, you can reacquire the data
from the original data source to replace the contents of the dataset.
Context
If the original data source was an uploaded file, you can upload another compatible file. For any other data
source, the original query is re-executed.
SAP Analytics Cloud automatically matches the columns of the newly acquired data to the columns of the
existing data.
Note
Datasets created prior to SAP Analytics Cloud release 2020.12 don't support the reimport feature.
Procedure
Note
You can only upload a compatible file: a file that has the same number of columns as the original file,
and with the same column names and data types as in the original file.
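The compatibility rule can be pictured with the following sketch. This is hypothetical illustration code; the actual validation is internal to SAP Analytics Cloud:

```python
# Hypothetical sketch of the reimport compatibility rule: same number of
# columns, same column names, and same data types as the original file.
def is_compatible(original_schema, new_schema):
    """Each schema is a list of (column_name, data_type) pairs, in order."""
    return list(new_schema) == list(original_schema)
```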
A model is a representation of the business data of an organization or business segment. You can use a model
as the basis for your story. The Modeler is the place where you can create, maintain, and load data into models.
In SAP Analytics Cloud, models can either have acquired data, or live data:
• Acquired: Data is imported (copied) and stored in SAP Analytics Cloud. Changes made to the data in the
source system don't affect the imported data.
• Live: Data is stored in the source system. It isn't copied to SAP Analytics Cloud, so any changes in the
source data are available immediately, provided no structural changes are made to the table or SQL view.
Models complement datasets. Datasets are more suitable for ad-hoc analysis, while models are more suitable
for governed-data use cases.
If you're not sure whether to use a dataset or a model for your story, this might help you decide: Choose
Between Datasets and Models [page 541].
If you're more interested in datasets, not models, see About Datasets and Dataset Types [page 544].
What's in a Model?
A model is a representation of large amounts of business data that uses common business terminology. For
example, if you run a retail clothing business, your data might include:
Typically, your business would generate thousands of rows of data. To avoid dealing with these large amounts
of data that would require using a database query language such as SQL, you create a model with dimensions
and measures to represent your business data and manipulate them directly in the application. The creation of
a model is a crucial part of the data preparation.
Measures hold numeric values and typically represent quantities that provide meaning to your data. For
example, sales revenue, salary, or number of employees. Dimensions on the other hand represent categories
that provide perspective on your data. For example, product category, date, or location.
Before getting into further details about dimensions and measures, you should know that two types of models
are available: analytic models, and planning models.
When creating your first model, you first need to decide between the two model types that the application has
to offer: planning models and analytic models. You can create analytic models from a live data source, but you
can't create planning models from a live data source. However, live models based on a live connection to
SAP BPC embedded configuration support planning functionality.
Planning Models
Planning models are preconfigured to support and streamline business planning tasks such as forecasting,
with many off-the-shelf features to give you a quick start in the planning process. When working with this type
of model in a story, planning users can use a variety of features to update values in the model and create new
values.
Analytic Models
Analytic models are full-featured models that contain your business data. They're the more flexible type
of model. You'd typically use an analytic model if you want to analyze your data, looking for trends and
anomalies. Unlike the planning model, analytic models are not preconfigured with categories (for budget and
forecast data), and although a Date dimension is available, it's not required, and you can remove it from the
model during the design stage.
Analytic models can also be used with live data connections. For more information, check out Live Data
Connection Overview Diagram [page 265].
In SAP Analytics Cloud, dimensions and measures are data objects that represent categorical, transactional
and numerical data in a dataset; for example, Products, Sales or Revenue.
• The Dimension type is a generic, free-format dimension. For example, a dimension could be based on
products, channels, or sales representatives.
Whenever you create generic dimensions, MemberID and Description attributes are automatically
added.
Values related to dimension members appear in different columns in the dimension grid.
A model can have any number of dimensions.
Note
The Description and Member ID attributes of dimensions must be of data type Text.
• A Measure is a dimension that represents transactional data. Measures aren’t represented visually in the
modeler graph, but the associated data is visible in the data foundation preview.
In a classic account model, model values are stored in a single default measure, and you use the account
structure to add calculations, specify units, and set aggregation types for all the data.
In the new model type, measures are exposed as single entities and you can add and configure multiple
measures with aggregation and units to fit your data. In the Calculations screen, switch to the Graph view
to see a visual representation of all the measures, calculated measures, conversion measures and account
members, and their dependencies to all calculated objects in the model.
• Date is a built-in dimension that defines the start and end dates of the model's timeline. The date
dimension also specifies the granularity – the smallest time units that will be used in the model: years,
quarters, months, weeks, or days. The week granularity must be enabled in the model preferences. See
Planning on Data on a Weekly Basis [page 686] for more information.
Note
You can specify a default time hierarchy to display in stories, and optionally configure the date dimension
to organize data by fiscal year instead of calendar year. You can also have more than one date dimension
Note
• Version is a built-in dimension that defines the data versions available in stories: Actual, Budget, Planning,
Forecast, and Rolling Forecast.
Versioning is a financial planning concept that represents a way of providing calculations such as actual vs.
budget.
In a model that uses a currency conversion table with Rate Versions, you can use the Rate Version column
for the Version dimension to set specific conversion rates for each version.
• The Timestamp dimension type is similar to the Date dimension type, except that it includes hours,
minutes, seconds, and milliseconds, and isn’t hierarchical.
The Timestamp dimension type is useful for data that is recorded very frequently, like sensor data.
Along with their name, type and description, dimensions also have additional settings. In the application,
you’ll find them in the Details panel, under the Dimensions Settings pane. They’re listed in the table below for
reference.
Name The dimension's name. You can't change a dimension's name after you save the model.
Description The dimension's description. You can modify the description at any time.
Data Access Control Switch on this option to enable security for the dimension.
The Read and Write columns are added to the dimension grid.
For more information, see Set Up Data Access Control [page 803].
Hide Parents Switch on this option to choose how you want hidden hierarchy nodes to be handled, when using
Data Access Control.
If this option is on, users who don't have either Update or Maintain Rights will see only the
dimension members that they have at least Read access to in the Modeler.
For more information, see Set Up Data Access Control [page 803].
Responsible Switch on this option to add a Person Responsible column to the dimension and select a person
responsible for an organization member.
For more information, see Gather Planning Data with Input Tasks [page 2249].
Data Locking Ownership Switch on this option to add the Data Locking Owner column to the dimension. This column lets
you specify owners for data locks applied to each member.
Each property that you add has an associated column in the dimension. When creating a property,
you define the ID and a description, and select the property type. Note the following:
• If you select the property type Currency, you add a Currency column to the dimension.
Using three-character currency codes, this column identifies the source currency for data
that belongs to each leaf member. For models that use currency conversion, you can rename
the column header (for example, Source Currency or Local Currency) to help users identify it
in a story.
The Currency column is enabled by default for the Organization dimension, and can also be
added to generic dimensions.
For a classic account model, a single dimension is set as the currency dimension. For
more information, see Set Up Model Preferences [page 664]. For models with measures,
each monetary measure can have a currency dimension.
For more information about currency settings, see Work with Currencies in a Classic Account
Model [page 778] for classic account models, or Work with Currencies in a Model with
Measures [page 779] for models with measures.
Note
For models with currency conversion enabled, the name of the Currency column appears
in stories as a currency calculation. You should give this column a meaningful name such
as Source Currency or Local Currency.
• When you add new properties of type Text to generic, account, organization, and version
dimensions, you can set the maximum text length for the property’s values, from 1 up to 5000
characters.
Note
In the user interface, very long property values may be displayed in abbreviated form.
• When you edit existing properties of type Text on generic, account, organization, and version
dimensions, you can increase the maximum text length for the property’s values, up to 5000
characters.
Note
You can only increase the maximum text length but not decrease it. This is for safety
reasons to avoid potential data loss.
Make this a Public Dimension Select this option to create a new dimension as a public dimension.
This option is available only when you're creating a new generic or organization dimension.
For more information, see Adding Dimensions to a Model [page 755] and Public and Private
Dimensions [page 612].
Translation Select Request Dimension Translation to enable a public dimension for translation.
• Dimension descriptions (for live connections, the dimension descriptions are taken from the
live data source)
• Dimension property descriptions
• Dimension hierarchy descriptions
See the following topics for information about translating public dimension metadata, the Transla-
tor role, the Translation Dashboard, and translation tasks that you can perform there.
Because live data models can contain many dimensions, the dimensions are listed on the All Dimensions
tab, where you can manage the dimensions. For example, you can edit dimension descriptions, show or hide
dimensions, or group dimensions.
Note
SAP BW hierarchies and other attributes aren't visible in SAP Analytics Cloud, and must be viewed in the
underlying BW system.
You can edit the descriptions of the dimensions. You can also edit dimension descriptions for “import data”
models.
Show/Hide Dimensions
If you don't need to use all of a model's dimensions in your stories, you can use the All Dimensions tab to select
which dimensions will be available for the model.
With a live data model open, select the All Dimensions tab. All dimensions in the model are shown in the list.
You can filter out any dimensions that aren't relevant by selecting the Hide check boxes. The selected
dimensions will then not be available when you work with the data in stories.
Note
Required Dimensions
Some dimensions may be required for the calculation of a measure; for example, in a Lookup formula. If not
all required dimensions are in the current drill state, users are notified that the displayed numbers could be
incorrect.
If a dimension is a required dimension, it's not possible to hide this dimension. Also, if a dimension is hidden, it
isn't possible to select this dimension as a required dimension.
Note
Group Dimensions
If there are many dimensions, you can manage them more easily by adding them to groups. Simply type in the
same group name beside the dimensions that should be categorized together.
The Account dimension defines the set of account members and the format of the account data.
In addition to the basic columns of Member ID, Description, and Account Type, a set of technical properties is
automatically created when the dimension is first set up.
Account Types
The Account dimension uses an Account Type attribute to automatically handle positive and negative values.
From an accounting perspective, account members belonging to the Profit and Loss statement and the
Balance Sheet have to be correctly stored in the database with either a positive or negative value so that
the accounts balance correctly. In SAP Analytics Cloud, you can enter all values as positive numbers, and the
switching of signs from positive to negative is handled automatically on the basis of the Account Type setting.
There are four financial account types: Income (INC) and Expense (EXP) items are included in the Profit and
Loss account, and Assets (AST) and Liabilities (LEQ) are Balance sheet items. Automatic switching is applied
to the account types INC and LEQ. Note that all formulas work on the displayed value, not the value saved in the
database.
When importing data from an external system, a mapping feature is available to ensure that imported data also
fits into this schema. This feature is switched on using the Reverse the Sign of the Data Based on Account Type
check box in the Details panel when importing data. When this check box is activated, imported data will also
be identified by account type and stored and handled correctly.
The signs (+/-) of INC and LEQ account values in a model are automatically reversed during data analysis when
displayed in tables, charts, and so on. If you want your INC and LEQ values to show up as positive, import them
as negative values, and vice-versa. Before importing data, make sure to check the signs of the original source
values first, and then set the Reverse the Sign of the Data Based on Account Type option accordingly.
If you’re importing account values with the same signs and want to keep them unchanged during analysis,
select this option. Your INC and LEQ values will be imported with reversed signs, too. For example, a positive
gross sales value (an INC account value) in your draft data will be imported to the model as a negative value,
and displayed for analysis as a positive value.
A model with measures lets you use calculated measures to customize which accounts show negative and
positive values. This way, you can set up different views for different tasks like controlling, accounting, and
Attribute Details
Account Type Select the account type for this type of data:
The asset and liability types are aggregated over time and must be linked to an aggregation dimension in
the model (such as the built-in Date dimension).
Rate Type This attribute column appears when you switch on Currency Conversion in the model preferences. With
currency conversion switched on, set the Rate Type to Average for INC and EXP accounts, and to Closing for
AST and LEQ accounts. This setting corresponds to the rate type for exchange rates in your currency table.
It lets you distinguish between the average exchange rate over a period and the closing rate at the end of the
period. For more details, see Learn About Currency Conversion Tables [page 788].
Units & Currencies Use this attribute to set the value type and display units. Select one of the following from the list:
• Currency: Use this option for all monetary values. In this case, the unit defined in Scale is shown in all
data output and the appropriate currency symbol is displayed after the numerical value.
Note
For the currency conversion feature, each Account has to have Currency set for this attribute. For
more information, see Work with Currencies in a Classic Account Model [page 778] or Work with
Currencies in a Model with Measures [page 779].
• Label: You can enter a text label (up to 30 characters in length) for this member to define your own
display units; this can be a unit of measure or a packaging unit such as Bottles. The label you enter
is displayed after the numeric value; for example, 25 Bottles.
• %: The percentage option works in the same way, showing the percentage symbol after the value. In
this case, the Scale attribute is also automatically set to percent.
The attribute can also be left blank. In this case, the abbreviated unit from Scale is displayed (see below).
Aggregation Type For account members that are parent nodes, the aggregation type determines how values are accumulated
from the leaf nodes. These aggregation types don't relate to any dimension.
• SUM: This is the default aggregation type for income and expense values; this simply adds all values
together.
• NONE: If the value types of numerical data cells are different, aggregation may not be possible; this
may be the case, for example, for price information or cells containing different currencies. For these
account members, set the aggregation type to NONE. In tables in stories, cells that have not been
aggregated are shown with a diagonal line drawn through them. The cell will be either empty, or, if all
child values for a node in the hierarchy are the same, this single value is shown at the node level.
• LABEL: Set the aggregation type to Label for any dummy nodes on the hierarchy where you want a
text label to appear in the data grid without any calculation of values. In this case, the Description
Text for the member is displayed as a label, and a simple dash character is used where otherwise
an aggregated value would normally be displayed. You cannot set leaf nodes to type Label – this is
immediately flagged on screen as an error.
Note
• Accounts with a formula can't have an aggregation.
• For models based on HANA views, a different set of aggregation types is available: SUM, MIN, MAX,
AVG, and COUNT.
Exception Aggregation Type Use exception aggregation when you want to aggregate non-cumulative quantities. For example, if you have
the quantity Inventory Count, you might want to aggregate the inventory count across all products, but not
across time periods, because it doesn't make sense to add up inventory counts from multiple time periods.
In this case, you would choose the aggregation type SUM for Inventory Count, because you want to add up
the inventory counts for all products. But if you don't specify an exception aggregation type, the inventory
counts will also be summed across time. To prevent summing inventory counts across time periods, specify
an exception aggregation type for the time periods.
For example, you might want to choose just the most recent set of Inventory Count values. In this case, you
would choose the exception aggregation type LAST, and the exception aggregation dimension Date.
Exception aggregations relate to one or more dimensions. For example, for the AVG and LAST exception
aggregations, a Date dimension is appropriate. If you select an exception aggregation type, you must also
select an exception aggregation dimension.
You can also use exception aggregation when you've included aggregated quantities in formulas, to ensure
that the formula calculations are performed before the aggregation. For more information, see Price *
Volume Formulas [page 893] and Aggregations in Formulas [page 897].
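The Inventory Count example above can be sketched as follows, assuming SUM across products and LAST over the Date dimension; the records and values are invented for the example:

```python
# Illustration of exception aggregation: SUM across products, LAST over Date.
# The records and values are invented for the example.
records = [
    {"date": "2024-01", "product": "A", "inventory": 10},
    {"date": "2024-01", "product": "B", "inventory": 5},
    {"date": "2024-02", "product": "A", "inventory": 8},
    {"date": "2024-02", "product": "B", "inventory": 7},
]

def total_inventory(records):
    """Apply LAST over the Date dimension first, then SUM across products."""
    last_date = max(r["date"] for r in records)
    return sum(r["inventory"] for r in records if r["date"] == last_date)

# Only the most recent period (2024-02) counts: 8 + 7 = 15.
```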
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes them in the
backend.
NULL cells are retrieved directly from the database, and can also be created using formulas. For
example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode, for example, or to create a
cross-join of the row and column axes.
The AVG and COUNT functions only account for NULL cells in the aggregation, and discard the
NO_DATA cells.
The following exception aggregation types are available (the value in parentheses indicates whether a formula is allowed):
• COUNT excl. NULL (Yes, mandatory): Counts all the entries, excluding null values. This exception aggregation type is available for models based on “import data” connections (where the data is replicated into SAP Analytics Cloud), including models created only in SAP Analytics Cloud (for example, models created from Excel files).
• COUNT excl. 0, NULL (Yes, mandatory): Counts all the entries, excluding zero and null values. This exception aggregation type is available for models based on “import data” connections (where the data is replicated into SAP Analytics Cloud), including models created only in SAP Analytics Cloud (for example, models created from Excel files).
• AVG (Yes): Calculates the average of all aggregated values, including null values. Select one to five exception aggregation dimensions.
• AVG excl. NULL (Yes, mandatory): Calculates the average of all aggregated values, excluding null values. This exception aggregation type is available for models based on “import data” connections (where the data is replicated into SAP Analytics Cloud), including models created only in SAP Analytics Cloud (for example, models created from Excel files).
• AVG excl. 0, NULL (Yes, mandatory): Calculates the average of all aggregated values, excluding zero and null values. This exception aggregation type is available for models based on “import data” connections (where the data is replicated into SAP Analytics Cloud), including models created only in SAP Analytics Cloud (for example, models created from Excel files).
• FIRST (Yes): Shows the first (oldest) value in the selected time period; it could be used, for example, to show the number of employees on the first day of a month. Select only one exception aggregation dimension.
• LAST (Yes): Shows the last (most recent) value in the selected time period; it could be used, for example, to show the number of employees on the last day of a month. Select only one exception aggregation dimension.
• FIRST QUARTILE (Yes): Calculates the first quartile value (25% of the data is less than this value).
• FIRST QUARTILE excl. NULL (Yes): Calculates the first quartile value (25% of the data is less than this value), ignoring null values.
• FIRST QUARTILE excl. 0, NULL (Yes): Calculates the first quartile value (25% of the data is less than this value), ignoring null and zero values.
• MEDIAN (Yes): The median (middle) value (half of the data lies below the median value, and half lies above).
• MEDIAN excl. NULL (Yes): The median (middle) value (half of the data lies below the median value, and half lies above), ignoring null values.
• MEDIAN excl. 0, NULL (Yes): The median (middle) value (half of the data lies below the median value, and half lies above), ignoring null and zero values.
• THIRD QUARTILE (Yes): Calculates the third quartile value (75% of the data is less than this value).
• THIRD QUARTILE excl. NULL (Yes): Calculates the third quartile value (75% of the data is less than this value), ignoring null values.
• THIRD QUARTILE excl. 0, NULL (Yes): Calculates the third quartile value (75% of the data is less than this value), ignoring null and zero values.
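The effect of these exception aggregation types can be sketched in plain Python. This is an illustrative sketch, not SAP code; in particular, the assumption that plain AVG counts NULL cells in the denominator is ours, not stated in the table above.

```python
from statistics import median

values = [4.0, 0.0, None, 8.0]  # None stands for a NULL cell

booked = [v for v in values if v is not None]   # drop NULL cells
nonzero = [v for v in booked if v != 0]         # also drop zeros

count_excl_null = len(booked)        # COUNT excl. NULL
count_excl_0_null = len(nonzero)     # COUNT excl. 0, NULL

# AVG includes null cells (assumed here to count in the denominator)
avg_incl_null = sum(booked) / len(values)        # 12.0 / 4
avg_excl_null = sum(booked) / len(booked)        # 12.0 / 3
avg_excl_0_null = sum(nonzero) / len(nonzero)    # 12.0 / 2

median_excl_null = median(booked)    # MEDIAN excl. NULL
```

Comparing `avg_incl_null` (3.0) with `avg_excl_null` (4.0) shows why the choice of type matters for sparsely booked data.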
Restriction
• When the exception aggregation type COUNT is chosen, no aggregation type can be used.
• If the exception aggregation dimension has a hierarchy, and this hierarchy is in the drill state, the
aggregation along this hierarchy will not be shown (the cell will be crossed out).
• If an account with a formula has an exception aggregation, no direct or indirect referenced base
account is allowed to have an exception aggregation.
Exception Aggregation Dimension: If an exception aggregation dimension is required for the member (depending on the account type or aggregation type), enter it here. Choose a dimension from the dimension selector dialog, which shows all dimensions in the model and the built-in Date dimension.
Scale: To improve the presentation of numbers in stories, and hide numbers that are not significant, you can set this attribute to show just integers plus the specified number of decimal places. The unit value is then shown by the appropriate word or by an abbreviation. You can select one of the available options.
This feature is related to the setting of the Unit attribute, which determines whether the Scale word or just the abbreviated Scale letter is used (see also the example following this table):
• If Unit is set to Currency, the word selected as the Scale value is used in the output.
• If Unit is undefined (blank), the abbreviated Scale letter is used.
Decimal Places: This setting defines the number of digits displayed after the decimal point; select a value from 0–7.
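The interplay of Scale, Decimal Places, and Unit can be sketched as follows. This is a hypothetical illustration of the rule above, not the SAC implementation; the scale words and abbreviations are assumptions.

```python
# Sketch: format a raw value using a Scale word or abbreviation,
# depending on whether Unit is set to Currency or left blank.
def format_value(value, scale="Million", decimal_places=1, unit=None):
    factors = {"Thousand": 1e3, "Million": 1e6, "Billion": 1e9}
    abbreviations = {"Thousand": "k", "Million": "M", "Billion": "B"}
    scaled = value / factors[scale]
    # Unit = Currency -> full Scale word; Unit undefined -> abbreviation
    suffix = f" {scale}" if unit == "Currency" else abbreviations[scale]
    return f"{scaled:.{decimal_places}f}{suffix}"

with_unit = format_value(12_345_678, unit="Currency")  # "12.3 Million"
no_unit = format_value(12_345_678)                     # "12.3M"
```

With Decimal Places set to 0, the same value would render as "12 Million" or "12M".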
Formula: Calculations and predefined formulas can be used for any value of the Account dimension.
Calculate On: This column is still available, but deprecated, because it doesn't work in all cases. Instead, we recommend that you use exception aggregation. For details, see Aggregations in Formulas [page 897].
Dimensions can either be public or private. Public dimensions can be shared between models, while private
dimensions exist only in the current model.
When creating dimensions, you may want to define some dimensions as public so that they're available to
other models, and define other dimensions as private so that they remain local to the model you're working on.
Public dimensions are saved separately from models. This means that models only reference public
dimensions. You’ll find them under the Public Dimensions tab on the Modeler home page. When you create
a new model or modify an existing one, you can add these public dimensions.
As a best practice, we recommend defining dimensions as public if they’re likely to be reused by others in your
organization; for example, Country, Product, and Organizational Unit. They’re standalone entities that you can
reuse in multiple models, and they're not impacted by the model lifecycle (copy, deletion, and so on). Make sure you give
them a unique name so you recognize them easily. Conversely, a dimension such as Account_1234 might not
be used by others and could be defined as a private dimension.
Private dimensions are saved with the model, and don't appear in the Public Dimensions list. When you delete
a model, private dimensions in that model are also deleted. If you copy a model, private dimensions are also
copied. They can’t be reused in multiple models and don’t necessarily require a unique name.
Note
Version and Category dimensions are private dimensions that can’t be shared across models.
A public account dimension is beneficial for ensuring that the common accounts (both base accounts without
formulas or exception aggregation, and calculated accounts) behave the same across all models that share
that account dimension.
However, you can still edit a public account dimension within the context of a model if needed, while keeping
the original public account dimension in its original state.
Let’s say your public account dimension is reused by models A, B and C. Editing the public account dimension
on its own, with no model context, would impact all three models. However, editing the public account
dimension within model B has no impact, be it on models A and C, or on the public account dimension itself
outside of model B.
You can work with models individually in their context in the Modeler with no impacts on other models. The
default data and settings of the public account dimension are displayed in italics. In the Modeler view, you
overwrite them without affecting other models based on that public account dimension, or the original public
account dimension. You can work with formulas that use functions such as LOOKUP, RESTRICT, LINK,
ResultLookup, CAGR, YOY, SMA, and so on. Minimum drillstate and Exception Aggregation can also be defined
within the model context.
You can’t add, delete, or change hierarchies in the public account dimension from the context of a model. If
you need to make these types of edits, make sure to open the public account dimension separately using the
side panel.
Tasks
You can create hierarchies during the data preparation stage, or within the modeler after you’ve created a
model. Creating hierarchies during the data preparation stage instead of in the Modeler is faster and more
automated. However, to make adjustments, or to add or delete members from the hierarchy, you'll need to use
the Modeler.
Level-based hierarchies
A level-based hierarchy organizes the members of a dimension into levels, such as Country, State, and City.
You can add level-based hierarchies to generic dimensions and organization dimensions by selecting a
dimension, and then from the Dimension Settings panel, selecting Create Hierarchy Level-Based
Hierarchy .
You then define your level-based hierarchy in the Hierarchy Builder. Click to create a new hierarchy. Name
it, and then select the dimension columns to use for creating the hierarchy. Select again to create more
hierarchies. At any time, you can select the icon beside Hierarchies in the Dimension Settings panel to
create, edit, and delete level-based hierarchies.
Parent-child hierarchies
A parent-child hierarchy organizes the members of a dimension into a set of parent-child relationships.
You can add parent-child hierarchies to generic dimensions and organization dimensions by selecting a
dimension, and then from the Dimension Settings panel, selecting Create Hierarchy Parent-Child
Hierarchy .
When you add a parent-child hierarchy, a new column is inserted into the grid. Add the parent ID values to
the hierarchy column ( ). Select beside Hierarchies in the Dimension Settings panel to create more
hierarchies.
To see a parent-child hierarchy's structure, open the Hierarchy Maintenance view by selecting one of your
parent-child hierarchies in the Dimension Settings panel. Here, you can drag members to build the parent-child
relationships visually. If more than one hierarchy has been defined, you can select which one to work with from
the drop-down list.
In the Hierarchy Maintenance view you can filter the members that are currently displayed in the tree to those
matching the entered value (either in ID or description) and you can use the action menu to sort all direct
children of a parent member by their ID or description (ascending or descending).
Having multiple account hierarchies offers more flexibility in terms of reporting and data entry, and is more
practical to group accounts differently. That's also useful if you're looking to forecast next year's budget in a
dedicated updated hierarchy while still working on the current budget in another account hierarchy.
In the Modeler, each parent column in the account dimension corresponds to an account hierarchy. Visually,
each hierarchy you add to the account dimension is matched with a new parent column, as illustrated in the
screenshot below.
You can add additional hierarchies in the Modeler via the Dimension Settings panel. Once you've added a
hierarchy, you can move on to the member assignment and enter values for each member of the hierarchy, just
like you would for parent-child hierarchies in generic dimensions. All cells that you leave empty are
automatically considered not part of the hierarchy, and the corresponding members aren't exposed in
consumption features when this hierarchy is selected.
If your account dimension has multiple hierarchies, the application sets the first hierarchy of the account as the
default hierarchy. However, you can switch to any hierarchy within the account dimension and set it as default.
Your default hierarchy is important, as it's also taken into account for allocation or data locking, for instance.
Note that changing the default hierarchy is only allowed if there are no stories that depend on the model.
In stories, tables and charts also benefit from having multiple hierarchies, and you can switch from one to
another depending on the data you want to display.
Formulas and calculated accounts that you add in the Modeler are supported by all hierarchies. However,
if a calculated account doesn't make sense in one hierarchy from a business perspective, make sure that the
calculation is not part of that hierarchy so that it isn't exposed in stories.
In some situations, you might want to create a parent-child hierarchy for a dimension, but not want to include
all of the dimension members in the hierarchy. When you add a parent-child hierarchy to a dimension, by
default, members that don’t have a parent node specified (that is, they don't have a value in the hierarchy
column) are not part of the hierarchy and will be shown neither in result sets nor in stories, filters, applications,
and so on.
However, if you want to show these orphaned members in the hierarchy, you can do so by using the option
Group and show under "Not in Hierarchy" in the Empty Hierarchy Values dropdown in the Dimension Settings
panel. This adds a Not in Hierarchy member to the dimension that acts as a parent for all orphaned
members. To remove orphaned members from the hierarchy, choose the option Remove from Hierarchy in the
Empty Hierarchy Values dropdown.
Depending on which parent-child hierarchy is being shown in a table, different members will be under the Not In
Hierarchy parent member.
For more information, see Entering Values with Multiple Hierarchies [page 2211].
Note
The following behavior applies depending on the option you have chosen:
• If you have chosen Group and show under "Not in Hierarchy" and you remove a member from the
hierarchy, that member and all of its descendants are moved to the Not In Hierarchy member, as a flat
list. Any previously existing sub-trees are discarded.
• If you have chosen Group and show under "Not in Hierarchy" and you delete a member from the
dimension, and it has descendants in a parent-child hierarchy, the descendants are moved to the Not In
Hierarchy member.
• You can adjust the properties of the Not In Hierarchy member, except its position in the hierarchy (it's
always assigned to root).
• If you have chosen Remove from Hierarchy, the Not In Hierarchy member is removed.
• If you have chosen Remove from Hierarchy, the totals (for example, in tables) will not be equal when
displaying a dimension with and without the hierarchy.
• If you have chosen Remove from Hierarchy and you remove a member from the hierarchy, that member
and all of its descendants are removed from the hierarchy and not shown anymore. Any previously
existing sub-trees are discarded.
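The two behaviors for removing a member, described above, can be sketched in plain Python. This is an illustrative model only; the member names and the `parents` mapping (child to parent, `None` for a root) are invented.

```python
def remove_member(parents, member, option):
    """Remove `member` from a parent-child hierarchy.

    With option "group" the member and all of its descendants become a
    flat list under 'Not In Hierarchy' (sub-trees are discarded); with
    option "remove" they are dropped from the hierarchy entirely.
    """
    # Collect the member plus all of its descendants.
    affected = {member}
    changed = True
    while changed:
        changed = False
        for child, parent in parents.items():
            if parent in affected and child not in affected:
                affected.add(child)
                changed = True
    result = {}
    for child, parent in parents.items():
        if child not in affected:
            result[child] = parent
        elif option == "group":
            result[child] = "Not In Hierarchy"  # flattened, no sub-trees
        # option == "remove": the member is no longer shown at all
    return result

h = {"Opex": None, "Travel": "Opex", "Flights": "Travel", "Hotels": "Travel"}
grouped = remove_member(h, "Travel", "group")
removed = remove_member(h, "Travel", "remove")
```

After grouping, Travel, Flights, and Hotels all sit directly under Not In Hierarchy; after removing, only Opex remains.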
When you create level-based hierarchies with dimension members that do not have all properties maintained,
you can control if and how members with empty property values are displayed, for example, in stories. The
following options are available in the Empty Hierarchy Values dropdown box in the Hierarchies section of the
Dimension Details panel:
Your choice not only has implications for the consumption in stories (for example, in a table) but also impacts
formulas.
Example
In the following example you see different visualizations (for example in a table) in a story for the dimension
with an active hierarchy that includes the Quarter property - depending on which option you have chosen in
the Empty Hierarchy Values dropdown box:
As you can see in the example, the second measure column changes because a calculation with a dynamic time
navigation function (Restrict and Lookup [page 906]) has been used there. Depending on where the
members are located within the hierarchy, the values of the used formula vary. See, for example, June 2023.
Managing hierarchies
You can set or change the default hierarchy for any type of dimension, for both level-based and parent-child
hierarchies, in the hierarchy builder.
• For the Date dimension, hierarchies are predefined based on the model granularity, and whether you've
enabled fiscal time for the model. You can specify a default hierarchy in the settings for the Date
dimension.
• The Version dimension doesn't have a hierarchy, since different versions are separate and don't have
parent-child relationships.
The Hierarchy Maintenance screen lets you manage hierarchies visually. It displays an icon ( ) next to
members that belong to the selected parent-child hierarchy. You can also use the All/In Use switch to display
either all members of the dimension, or only members of the selected hierarchy.
Use parent-child hierarchies to structure your data into parent-child relationships. When the data is displayed
in a story, hierarchies can be expanded or collapsed.
1. Start creating your model. After the data import, you can see the data integration view where you perform
data preparation before creating a model.
2. Select the column that you want to be the child in the parent-child hierarchy.
For example, if you're creating a geography hierarchy, with Country as the parent and City as the child,
select the City column. Note that you can change the column name by double-clicking the column header.
3. In the Details panel, select Add Dimension Attributes.
6. Repeat the above steps if you want to create more parent-child hierarchies.
The parent and child columns don't need to represent levels in a hierarchy. For example, if you're
setting up a hierarchy similar to a company org chart, the columns could contain data like this:
Child Parent
Note
If a column is mapped to a hierarchy parent of a dimension, an empty cell in that column means that
the corresponding dimension member doesn't have a parent in that hierarchy. In other words, that
member is a top-level node of the hierarchy.
For example, if you have the following columns, the resulting hierarchy will have A as the top-level node,
with two child nodes A1 and A2:
A1 A
A2 A
The dimension member A still belongs to the hierarchy. If you want a newly created dimension member
to be excluded from a hierarchy of that dimension, don't map the hierarchy to a column.
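The mapping rule above (an empty parent cell makes the member a top-level node that still belongs to the hierarchy) can be sketched from the example's child/parent columns:

```python
# Child/parent column values from the example above.
rows = [("A1", "A"), ("A2", "A")]
parents = dict(rows)
members = set(parents) | set(parents.values())

# "A" never appears in the child column, so it has no parent value:
# it is a top-level node of the hierarchy, yet still a member.
roots = sorted(m for m in members if m not in parents)
children_of_a = sorted(c for c, p in rows if p == "A")
```

Running this yields `roots == ["A"]` with `A1` and `A2` as its children, matching the hierarchy described in the text.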
Use level-based hierarchies when your data is organized into levels, such as Product Category, Product Group,
and Product. When the data is displayed in a story, hierarchies can be expanded or collapsed.
1. Start creating your model. After the data import, you can see the data integration view where you perform
data preparation before creating a model.
City Country
Vancouver Canada
Vancouver USA
Seattle USA
When you create the hierarchy, a new column is generated, with unique values that serve as the dimension
member IDs:
Note that when you subsequently import data into this model, this dimension member ID needs to be
mapped from the imported source data as well. If the source data doesn't already have a column that
contains such values, you can generate the dimension IDs again by selecting the card that represents the
dimension member IDs of the dimension, and selecting Generate and Map the Unique ID.
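Why generated IDs are needed becomes clear with the City/Country sample above: "Vancouver" appears under both Canada and the USA, so the city name alone cannot serve as a member ID. One possible scheme (an assumption for illustration; SAC's actual generated values may differ) qualifies each leaf with its parent level:

```python
# (city, country) rows from the example; two cities share a name.
rows = [("Vancouver", "Canada"), ("Vancouver", "USA"), ("Seattle", "USA")]

# Concatenate the level values so same-named leaves stay distinct.
member_ids = [f"{country}_{city}" for city, country in rows]
```

Every row now gets a unique member ID, which is why subsequent imports must map this ID column as well.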
7. If you want to add more level-based hierarchies, or copy, edit, or delete a hierarchy, select (Level Based
Hierarchy) to open the Hierarchy Builder again.
8. When you've finished creating hierarchies, select Create Model.
9. In the Modeler, switch to the Model view.
If you want to create a hierarchy or make any changes to the hierarchy within the Modeler, switch to the Model
view, open a generic or an organization dimension and open the Dimension Settings to create a new hierarchy.
In the Hierarchy Builder, give a name to the hierarchy and select columns to build the hierarchy. In the Hierarchy
Management screen, you can also add more level-based hierarchies, or copy, edit, or delete a hierarchy. If other
level-based hierarchies already exist, select one of them to open the Hierarchy Builder.
In this video you will drill down between hierarchical levels in a story, create a level-based hierarchy in a dataset,
create a level-based hierarchy in a model, create multiple parent-child hierarchies in a model, and choose how
to display a non-default hierarchy in a story.
The Modeler has been extensively redesigned to align with the changes made to models in SAP Analytics
Cloud.
Models are now more flexible, and you can select how you’d like to build your model. When creating a new
model in the Modeler, the application lets you decide whether you want to focus on the data first to build the
structure of the model or build the structure first and then add the data.
At the heart of this redesign is the new model type, whose capabilities have been extended to offer more
flexibility to your modeling workflows. While you can still create classic account models and datasets, we
encourage you to try out the new model type to benefit from the latest functionalities available.
The home page of the Modeler now offers a single entry point to create both blank and acquired data models
(1).
For the first version of the new modeler, you’ll be able to experience the new data first approach, which allows
you to create new models while solely focusing on the data you want to work with. There is no wrangling
step, the application generates the model automatically. You import the data and carry out all the necessary
transformation, while the application handles the structure of the model automatically.
For more information, see Create a Model from a File [page 652].
After you’ve created your new model, you’re greeted by the new Model Structure space.
• Dimensions and measures now have their separate sections so they can be identified easily (1, 2).
• The Graph view can be accessed via a dedicated toggle (3).
• The data foundation can still be accessed at the bottom of the workspace (4).
• You can find a new section with your calculated measures up top, conversion measures underneath, and
finally your measures and account members (1).
• The formula editor, previously located in the main toolbar, has been integrated to the properties pane (2).
• The calculation preview pane can still be accessed at the bottom of the space (3).
The new model now comes with single-column dimensions straight out of the box. These are dimensions that are
simple columns in your model data. However, they can be enriched with dimension tables that allow you to
unlock the properties management of dimensions in your model.
Using dimension tables, you’ll be able to add master data to your dimensions.
For more information about dimension tables, see Add Master Data to Your Model Using Dimension Tables [page
689].
You can also add properties to existing single-column dimensions to enrich them.
For more information, see Add Properties to Single Column Dimensions [page 692].
The wrangling functionality is no longer tied to the import process. Now, you can wrangle data and run
transformations via the data foundation at any point in time. Open the Data Transformation panel and run quick
actions or smart transformations to start preparing your data right away.
Besides the enhancements mentioned above, the new flexible modeling approach adds the following
functionalities:
A new model type is available in SAP Analytics Cloud. It lets you create measures within the model, which is a
fundamental extension of the structure used by a classic account model. Known as a model with measures, the
new model type provides more flexibility and adds key benefits for both analytics and planning.
The main difference between the model types is how they handle measures, which are the structures in your
model that hold numeric values.
This example shows two modeling approaches for simple sales planning data. The new model supports either approach.
Refer to the Choosing a Model Configuration [page 631] section to figure out the best configuration for your
data, or read the Benefits of Models with Measures [page 628] section to get a full list of benefits.
Models with measures offer the following main benefits, both for analytic and planning use cases:
• Flexible model structure: Since both accounts and measures are available as structures for your data
(holding calculations and setting aggregation, units, and so on), you can display your data more precisely
for a variety of use cases. You can decide whether to include an account dimension, and whether to
prioritize the settings for measure or accounts. See Set Structure Priority and Create Custom Solve Order
[page 701] for details.
With an integer measure, you can avoid decimals while still distributing the entire value.
• Optional account dimension: When you structure your data with measures, you can add your accounts
to a generic dimension instead of an account dimension. This setup lets you avoid restrictions on the
account dimension. See Choosing a Model Configuration [page 631] for details.
• Improved calculations:
• Calculated measures in models: Since measure calculations can be added to your model, you can
reuse them across different stories and analytic applications. You can also add them to some data
action steps.
• Calculations on numeric dimension properties: Using this feature, you can change account values from
positive to negative based on dimension properties. For example, you can set up calculations that show
either positive or negative values for expenses, which makes it easier for different types of users like
controllers or accountants to analyze and plan on the same data. Refer to Changing the Signs of Your
Account Values Using Numeric Properties [page 915] for details.
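A calculation on a numeric dimension property, as described in the last bullet, can be sketched as follows. The member names and the property are invented for illustration; the actual setup is described in Changing the Signs of Your Account Values Using Numeric Properties [page 915].

```python
# Hypothetical numeric property per account member: +1 keeps the
# booked sign, -1 flips it (e.g. show expenses as negative).
sign_property = {"Revenue": 1, "Expenses": -1}
booked_values = {"Revenue": 1000.0, "Expenses": 400.0}

# Controller-style view: multiply each value by the property.
controller_view = {
    account: value * sign_property[account]
    for account, value in booked_values.items()
}
```

Accountants can keep reading the booked values as-is, while controllers see Expenses as -400.0 on the same data.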
However, you might need to consider usage restrictions, both for planning and analytics models. See
Restrictions for Models with Measures [page 634].
With the new model type you can pick different configurations for your model, including whether to use an
account dimension or not. It's optional because each measure sets many of the same properties of an account
dimension member, such as aggregation, scale, units, and whether there are currency values or not. Check the
table below for an overview.
Configuration Description
Model with measures and an account dimension This model configuration has two main structures, which
can give more flexibility. Consider this setup in the following
cases:
Model with measures and no account dimension A simpler configuration, this setup removes the account
properties and formulas. It can also make additional features
available, though.
Classic account model In some cases, you may want to keep using a classic account
model:
With the new model, you can import additional currency measures. In this case, an account dimension
is helpful. Without it, your model might need many different measures, making the data more difficult to
manage and analyze.
With the new model you can use different measures instead of a flow dimension, which lets you set the
correct aggregation for each one.
New Model Data with Separate Measures for Balances and Changes Over Time
Entity Date Account Expenses Opening Transfers Closing
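The benefit of separate measures with their own aggregation types can be sketched with invented sample data (the figures below are illustrative, not from the documentation): an Expenses measure that should sum over time next to a Closing balance that should take the last value.

```python
# Illustrative fact rows: a flow measure (Expenses) and a balance
# measure (Closing) stored side by side, one row per month.
records = [
    {"Date": "2024-01", "Expenses": 100.0, "Closing": 500.0},
    {"Date": "2024-02", "Expenses": 120.0, "Closing": 530.0},
    {"Date": "2024-03", "Expenses": 90.0,  "Closing": 560.0},
]

total_expenses = sum(r["Expenses"] for r in records)  # SUM aggregation
year_end_closing = records[-1]["Closing"]             # LAST aggregation
```

With a single measure and a flow dimension, both columns would be forced into the same aggregation behavior; separate measures let each aggregate correctly.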
Migrate your classic account model into a new model type to benefit from the extended functionalities it
offers.
The new model type, also referenced as the model with measures, offers much more possibilities and flexibility
than a classic account model, allowing you to integrate both an account and multiple measures in a single
model.
To benefit from these extended functionalities, you can convert your classic account model into a new model
type using the Migrate to New Model Type button in the modeling area.
Caution
The migration is not possible if your classic account model has dependent objects or if the currency
conversion is turned on. If your classic account model has currency conversion enabled, you need to turn
it off before migrating the model. The new model is created with the currency conversion turned off. Then
it's up to you to turn it on again and redefine the currency conversion measures in your target model. The
migration cannot be undone.
Restriction
• Data import/export jobs of the new model type are not compatible with the old data import jobs
for a classic account model. Therefore, there is no migration from the old job type to the new job type. Job
scheduling based on future execution no longer exists, but previous job execution history can still be
viewed. So, after the conversion, you will have to create new data import/export jobs for the new
model type.
• When you migrate a classic account model to a new model type, datapoint comments are not
migrated.
Caution
If you migrate a classic account model already used with SAP Analytics Cloud, add-in for Microsoft Office,
you need to re-insert that model into an existing workbook created with the add-in.
Some of these Modeler features may not be available, or may look different, with live data models.
Model Structure
There are three areas in the Modeler: Model Structure, Calculations and Data Management. When you open
an existing model, its contents are shown on the Model Structure screen, composed of the dimensions list (1),
the structure view (2), the data foundation view (3), and the Model Details pane (4). By default, objects are
presented in a list. You can also toggle the Graph view to get a star schema diagram, and visualize how fact
data, attributes, and properties all relate to each other.
The Model Structure screen contents can vary depending on the model type. If you're working with a traditional
account model, it shows you the dimensions of the model. If you're working with the new model type, it shows
you both dimensions and measures. As standard, for both model types, it also shows you the number of
members and hierarchies in each dimension, and any attributes that have been defined for the dimensions.
On the right, the Model Details pane shows you the model settings, data sources, and other information about
the model. When you select a dimension or a measure, the view changes to display the Dimension Settings
or Measure Details pane on the right, with detailed information about the selection, which lets you define
hierarchies, create calculations, and change properties and settings.
You can also access the grid view for dimensions that have a dimension table. A dimension table allows you
to add master data manually to your dimension using a grid view, where you can find information about the
dimension's attributes, members, and properties.
The Data Foundation view helps you understand the data you have access to. Depending on your user settings
and rights, the data you see might differ from what another user sees.
The data you can see in the data foundation view and the number of results are filtered based on the data access
control and security settings, as well as active filters within the table. You can also select the version you want
to view. The data foundation view supports up to 100 million published facts. Above that limit, a message is displayed
to let you know that the data cannot be previewed. As a workaround, try adding the model data to a story to
view the whole dataset.
For more information about the Model Structure and Data Foundation, see https://blogs.sap.com/
2020/04/24/were-improving-modeling-in-sap-analytics-cloud/ .
Calculations
The Calculations screen is where you can create additional calculations for your model. It follows the same
layout as the Model Structure screen. You can find your calculated measures up top, conversion measures
underneath, and finally your measures and account members.
On the Calculations screen, you'll find the objects list (1), the formula input field (2), the preview pane (3), and
the properties pane (4).
The objects list can vary depending on the model type. If you're working with a traditional account model, it
shows you the dimensions and account members of the model. If you're working with the new model type, it
shows you both dimensions and measures, as well as account members, calculated measures and conversion
measures if there are any in the model.
With the new model type, you can create calculations on existing accounts and measures by clicking the + icon
next to the Calculated Measures, Conversion Measures and Account Members labels to open the central pane
with the formula editor up top, and the Properties pane on the right.
On the left side of the editor, you’ll find the complete list of operators, functions and formulas available, as well
as their descriptions at the bottom of the dialog.
The calculations preview pane lets you see your aggregated and calculated data within the Modeler before
moving on to a story. It takes all the structure information coming from the Model Structure screen to create a
preview, and provides you with real-time calculation results without having to save the measures and accounts
first.
If you add calculated measures, measures, accounts, or conversion measures to the model, the preview gets
updated. You can add the objects used in the calculations to the preview table to see how the calculated
values are derived and how calculated data impacts the model.
However, the preview table changes automatically depending on your selection in the left panel, which resets
the table configuration. You can freeze the table configuration by pinning rows, columns or filters by clicking the
pin icon so that they’re always visible regardless of the member selected in the left panel.
You can organize the preview table using drag and drop: move rows, columns, and filters around and rearrange
them to preview data as you see fit, just like you would in a story. You can also switch axes by clicking the
corresponding icon.
The properties pane gives you information about the selected object in the objects list.
Graph View
The object dependency graph provides a visual understanding of object relationships within a model and helps
to simplify the understanding of complex object dependencies. It is available for account models and models
containing both accounts and measures. The graph is updated in real time as you make changes to a formula
within the editor.
The graph displays measures, conversion measures, calculated measures, and account members. Once you
select a measure, the graph automatically displays all objects directly connected to the one you have selected.
Each object type has a dedicated icon for better clarity. If needed, you can also zoom in and out to focus on
specific areas of the graph.
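Conceptually, the "directly connected objects" behavior described above works like an adjacency lookup over measure dependencies. The sketch below is an illustrative assumption only: the measure names (GrossMargin, Revenue, COGS, RevenueUSD) are hypothetical, and the Modeler builds the real graph internally.

```python
# Hypothetical calculated/conversion measures and the base measures they depend on.
dependencies = {
    "GrossMargin": ["Revenue", "COGS"],   # calculated measure -> its inputs
    "RevenueUSD": ["Revenue"],            # conversion measure -> its base measure
}

def connected_objects(measure):
    """Return the objects directly connected to the selected measure,
    in either direction, mirroring what the graph shows on selection."""
    inputs = set(dependencies.get(measure, []))
    dependents = {m for m, deps in dependencies.items() if measure in deps}
    return inputs | dependents

print(sorted(connected_objects("Revenue")))  # ['GrossMargin', 'RevenueUSD']
```

Selecting a base measure surfaces every calculation that uses it; selecting a calculated measure surfaces its inputs.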
The search functionality helps you find specific objects in models that contain many objects and have a large
graph. The search results are populated as you type. You can also filter the results by object properties such
as member type, account type, aggregation type, exception aggregation type, unit, currency, or hierarchy if
needed. Click the filter icon to access filters.
Note
• The graph is an interactive visualization tool you can use to view data and dependencies. It doesn't
feature any editing capabilities, and you cannot create or edit any existing dependencies.
• Calculations containing compounded formulas are currently not supported on the dependency graph.
Hierarchical Filters
In the Preview panel, you can decide whether you want to have dimension members displayed in a flat list,
or hierarchically using hierarchical filters. Hierarchical filters are filters for which you can select a hierarchy
available in the dimension. You can switch hierarchies depending on the way you want to display the results,
just like you would in a story. With these filters, you can see all dimension members and their booked values.
Also, for account dimensions, when a measure is selected, there's no longer a need to have default filters
defined. By default, the application displays all members hierarchically.
You can select a hierarchy either by clicking a filter in the Filters section, or by clicking Select Hierarchy
next to a dimension in the Rows or Columns section.
For account dimensions, there are two ways to filter. If you select a measure and try to filter on an account
member, the application displays a hierarchical list, and you can select the hierarchy you want to show. In
that case, you can see all account members. However, if you select an account member first, you can select
between Used Members and Other Members. That’s useful if you only want to check the account members that
are used, in formulas for instance, or simply to check the member info.
Data Management
When you create a new model, the Data Management screen appears. The Data Management screen is the
place where you can import and export model data, and also schedule data refreshes for specific data sources
and dimensions. To edit the name of a data refresh job, select (edit) next to the job's name. For more
information about refreshing data in models, see Update and Schedule Models [page 772].
The Data Management screen is composed of two main tabs: Import / Export Jobs, and API Subscriptions (1).
The Import / Export Jobs tab lists all draft sources, imports and exports to specific data sources (2).
The API Subscriptions tab lets you view and manage your data export service subscriptions, the preferred
way of exporting models and their associated metadata. API subscriptions can be managed directly within
the Modeler. For more information, see Manage API Subscriptions for Data Exports [page 753]. For detailed
information on the delta functionality, check out the Data Export Service documentation.
In both tabs, a data timeline (3) displayed on the right provides a history of the data changes, import, export
and refresh jobs depending on the data source or subscription you select. It also indicates whether a job has
run smoothly or if it has been rejected.
Draft Sources
The draft data sources correspond to data that has been uploaded to SAP Analytics Cloud, but hasn't been
saved to a model yet.
Import Jobs
Import jobs are connections between your model and a data source that you imported data from. In some
cases, for example if you import data from a file on your local computer, the import job just provides
information about the data import. But in many other cases, the import job can be re-run, and scheduled,
so that you can re-upload data periodically from that same data source to your model. For more information,
see Update and Schedule Models [page 772].
You can also view an import job's data mapping and transform history after importing data into a public model
or public dimension, so that you know how to map or transform data if you want to create another import job
for that model or dimension.
Export Jobs
Export jobs allow you to reuse model data in other systems such as SAP S/4HANA or SAP Integrated Business
Planning. By default, however, only the file and SAP BPC options are available. To be able to export to other
systems like SAP S/4HANA, SAP Integrated Business Planning, or SAP BW and SAP BW/4HANA using OData
services, make sure to enable the Enable Legacy Export option first under the Data and Performance tab in the
model preferences.
In some cases, you can also schedule export jobs to run periodically. For more information, see Export Models
and Data [page 750].
Note
You cannot export custom date dimensions to SAP S/4HANA, SAP Integrated Business Planning or OData
Services when using the Enable Legacy Export option. Note that for data exports in general, we recommend
using the Data Export Service.
API Subscriptions
The API Subscriptions tab lists the Data Export Service subscriptions, which correspond to the delta changes in
a given planning model.
Here, you can create, track, and manage delta subscriptions for a given model, but also generate delta links
that you can reuse elsewhere if needed. The list displayed corresponds to a subscription chain, but clicking the
View Details options takes you to a detailed list of individual subscriptions of a chain.
In the Subscription Details dialog, you can find the export history of the subscription, and copy delta links
for future reuse. For more information on how to manage subscriptions in the Modeler, see Manage API
Subscriptions for Data Exports [page 753].
At the bottom of the list, the View All Subscriptions button takes you to the Subscriptions Overview tab on the
Connections page. Here, administrators with the BI Admin and Admin roles and Create and Delete permissions
for connections can view all subscriptions for any model available on the SAP Analytics Cloud tenant and
access further information using the View Details option. Note that it isn't possible to delete subscriptions from
the Subscriptions Overview tab on the Connections page.
• Hide: Hide the column. You can also select Show Columns to show or hide any columns.
• Pin: Pin the column. Pinned columns stay displayed at the left side of the grid when you scroll to the
right. To unpin a column, select the icon again, or drag the column out of the pinned area.
• Data validation and error highlighting: As you work, the model is continually scanned for data irregularities.
Any errors are reported by the Validation icon in the toolbar:
When grid cells contain errors, you can also select the cell for more information:
• Add a formula: In the Formula column, select the icon to open the Advanced Formula Editor.
• Moving columns: You can rearrange columns in the grid by dragging their headers.
• Context-sensitive copy: If you have cells containing the values Cell1 and Cell2, when you select both cells
and copy to the cells below, the cells will be populated with the values Cell3, Cell4, and so on.
• Keyboard navigation: You can use these keys while working in the grid:
Keys — Action
Space, t, and f — In columns with check boxes, for example the Hide column, the Space key toggles the
check box, while t (true) selects it, and f (false) deselects it.
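The context-sensitive copy behavior in the list above (Cell1, Cell2 continuing as Cell3, Cell4) can be sketched in a few lines. The trailing-number rule below is an assumption inferred from that example, not a description of the grid's internals:

```python
import re

def continue_series(values, count):
    """Continue a series of labels ending in numbers, incrementing by the
    step between the last two labels (assumed behavior, for illustration)."""
    prefix, last = re.match(r"(.*?)(\d+)$", values[-1]).groups()
    last = int(last)
    step = last - int(re.match(r"(.*?)(\d+)$", values[-2]).group(2)) if len(values) > 1 else 1
    return [f"{prefix}{last + step * i}" for i in range(1, count + 1)]

print(continue_series(["Cell1", "Cell2"], 2))  # ['Cell3', 'Cell4']
```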
Related Information
You can start from scratch, import data from a data source or a dataset, or use a live data connection.
The image below details every possible scenario. Click one of these scenarios to see the detailed procedure.
You can create and design a new empty model so you can focus on the structure first, and work on the data at a
later stage.
Context
Procedure
Note
By default, the application creates a model with measures, with planning capabilities enabled. If you'd
like to create a classic account model, click Want to create a classic model (accounts only) instead?.
For more information on the differences between the two models, check out Get Started with the New
Model Type [page 627].
3. In the left panel, select to set up the model preferences. For more information on model preferences,
see Set Up Model Preferences [page 664].
a. Planning features are enabled by default, but if you want to create an analytics model, select Planning,
and then switch Planning Capabilities off.
For information about planning and analytics models, see Learn About Models [page 599].
b. For a planning model, the From and To dates to define the timeline for the data can be changed in the
side panel when selecting the date dimension.
c. If your business fiscal year isn't the same as the calendar year, select the Date Settings tab, switch
Fiscal Year on, and then select the other parameters as follows:
• Choose the starting month of your fiscal year.
• Choose whether you denote your fiscal year by the calendar year when it starts, or the calendar
year when it ends.
Model data in stories, including tables, charts, value driver trees, input controls, and filters, will display
fiscal years instead of calendar years.
d. Enable any other features you need, such as data auditing or locking, data access control, or currency
conversion.
Note
Some settings cannot be changed after you've saved the model. For more information, see Set Up
Model Preferences [page 664].
Note
The week granularity is only available for the new model type, and must be enabled first under Date
Settings in the model preferences. Also, by default, the Date dimension is system-managed for the
new model type.
b. Choose the default date hierarchy. New story tiles will display this hierarchy by default.
6. If you're creating a classic account model, click to add the mandatory account dimension. Select either
Add New Dimension to create a new one from scratch, or Add Existing Dimensions to add an account
from a list of existing dimensions. If you're creating a new model type, the account dimension is optional.
For more information on the configuration possibilities for the new model type, check out the Choosing a
Model Configuration section in Get Started with the New Model Type [page 627].
If you create a new account dimension, you can define it as a public dimension, if you want it to be reusable
in other models.
7. Optionally, you can add an organization dimension and also other generic dimensions to your model.
See Learn About Dimensions and Measures [page 601] for details of all dimension types.
8. If you're creating a new model type, add one or multiple measures to your model. Click in the Measures
section to open the Measure Details panel, and fill in the different fields within the General, Data Type,
Aggregations, Units & Currencies and Formatting sections to add a measure.
Parameter — Description
• Name: Type a unique ID for the measure using letters and numbers only.
• Data Type: Choose the data type for your measure. The data type restricts the values that the measure
can hold.
• Required Dimensions: Select and choose the dimensions that you want to be required for the measure.
• Exception Aggregation Type: Use exception aggregation when you want to aggregate non-cumulative
quantities. For example, if you have the quantity Inventory Count, you might want to aggregate the
inventory count across all products, but not across time periods, because it doesn't make sense to add up
inventory counts from multiple time periods. For example, you might want to choose just the most recent
set of Inventory Count values. In this case, you would choose the exception aggregation type LAST, and the
exception aggregation dimension Date.
• Exception Aggregation Dimension: Select and choose the dimensions that will use the selected exception
aggregation type.
• Units & Currencies: Use these options to set the currencies for a monetary measure, or the unit displayed
for a non-monetary measure.
• Decimal Places: This setting defines the number of digits displayed after the decimal point; select a value
from 0–7. Values that have fewer decimal places will match this setting by adding zeros to the end of the
value, for example 1.100.
9. If you're creating a new model type, click in the toolbar to open the model preferences, and under
Structure Priority, use the Prioritize properties and calculations from drop-down list to indicate whether
account settings or measure settings take priority.
If you decide to prioritize the account dimension, then all the aggregation definitions, formatting and unit
handling are taken from the account dimension. But if you decide to prioritize the measures, everything is
taken from the measures. For more information, check out Set Structure Priority and Create Custom Solve
Order [page 701].
10. Save the model.
a. Select a location (for example, a private, public, or workspace folder).
b. Provide a unique name for your model and an optional description.
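The fiscal year denotation choice from step 3 (labeling a fiscal year by the calendar year in which it starts, or the one in which it ends) can be sketched as follows. This is a simplified illustration of the rule, not SAP Analytics Cloud's implementation; the function name and signature are made up.

```python
import datetime

def fiscal_year(date, start_month, denote_by_start=True):
    """Map a calendar date to its fiscal year label.
    start_month: first month of the fiscal year (1-12).
    denote_by_start: label by the calendar year the fiscal year starts in;
    otherwise by the calendar year it ends in."""
    starts_this_year = date.month >= start_month
    start_year = date.year if starts_this_year else date.year - 1
    return start_year if denote_by_start else start_year + 1

# Fiscal year starting in April: March 2024 still belongs to the fiscal year
# that started in April 2023.
print(fiscal_year(datetime.date(2024, 3, 15), 4, denote_by_start=True))   # 2023
print(fiscal_year(datetime.date(2024, 3, 15), 4, denote_by_start=False))  # 2024
```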
Related Information
Context
In this section, we’ll explore the data first approach by creating our first model using an Excel file. The same
principles apply when creating a model based on a different data source.
Note
For more information on how to create a model from scratch using the structure first approach, see Create
a Model from Scratch [page 647].
Data files can be in your local file system or in your network. The source data can be an Excel spreadsheet
(.xlsx) or a delimited text file (.csv or .txt). If you import data from Microsoft Excel, and if the data is saved in
separate sheets in the Excel workbook, either you can choose which sheet to import (if from a local file system)
or the first sheet is automatically imported (from a network file).
After you’ve imported your data, the application generates a model automatically. You can check the data on
the Modeler home page, in the Model Structure space.
When using a personal file such as an Excel file, the application analyzes the data first to suggest dimensions
for the model you want to create. You can then refine the suggestions by specifying dimension types and fixing
data-quality related problems.
There is no wrangling step; the application generates the model automatically. You can carry out data
transformations if needed, such as converting dimensions to measures, cleaning data, or fixing values.
In the Model Structure page, clicking a dimension opens the Dimension Details panel, and changes the focus to
that dimension in the data foundation. From here, you can either change general information in the Dimension
Details panel or run data transformations, such as converting a dimension to a measure.
Next to each object in the model, the More (…) menu gives you access to the following options:
Option Description
Delete Delete
Delete Dimension Table Delete the dimension table and revert to a single column
dimension.
Procedure
Note
By default, the application creates a new model with measures. If you’d like to create a classic account
model, click Want to create a classic model (accounts only) instead?.
3. In the data source list, select File (Local File or File Server).
4. Click Select Source File, and in the browser window, select the file you want to use as a data source.
5. Optional: If you are importing from a local Excel workbook containing multiple sheets, select the sheet you
want to import.
If you are importing from an Excel file on a file server, the first sheet is automatically imported.
6. Optional: If you're importing a .CSV file, select which delimiter is used in the file, or select Auto-detect.
7. Click Import.
For small data files, the data is imported and displayed in the data integration view, which shows the
imported data columns that you'll define as dimensions, measures, and attributes.
Larger data files may take some time to upload. You can work on other tasks while the data is being
uploaded in the background.
8. If you have created a classic account model, run transformations if needed, and click Create Model.
Allow Data Import and Model Export with a File Server [page 712]
Context
Procedure
Context
Before you can use a data source, a connection to it must be ready so that data can be imported. For
more information on the available import data sources and how to set up a connection, see Data Connections
[page 263].
Procedure
• If you're creating a query, give it a name, a description, select data, and build a query.
• If you're reusing an existing query, select whether you want to reuse it as is, or modify it using the
Modify existing query option.
6. If needed, build a query by dragging data from the Available Data list to the Selected Data and Filters areas.
Moving data to the Selected Data and Filters areas allows you to create new measures and dimensions to
be imported to your model, and apply filters to them. For more information regarding query building and
filters, see Building a Query [page 704].
After you've built the query, the application imports the data and redirects you to the data integration view,
where you can complete the mapping of your new data to the model's dimensions.
Prerequisites
A live data connection to another system is required. The system administrator will create live data
connections using the Connections page in SAP Analytics Cloud. For a complete list of supported live data
connections, see Data Connections [page 263].
If you want to perform geospatial analysis in your model, then you must manually prepare your data for
geospatial analysis in the connected system using SAP HANA Studio.
Context
Procedure
Note
If you're creating a model from an SAP S/4HANA live data connection, choose the SAP BW system
type.
If you know the name of the data source, you can type or paste it in. Or, you can select the icon to
search for a data source or choose one from a list. The first 100 views are preloaded in the list.
Note
The model must be created based on a query or view that contains a measure.
5. Drag data from the Available Data list to the Selected Data and Filters areas to build a query.
Moving data to the Selected Data and Filters areas allows you to create new measures and dimensions to
be imported to your model, and apply filters to them. For more information regarding query building and
filters, see Building a Query [page 704].
6. If the data from your data source comes in different versions (Actuals, Budget, Planning, Forecast, Rolling
Forecast):
a. Select the (Version Information) icon on the toolbar to specify the version information.
b. Select the column from your data source that contains the version information, and then specify the
relevant category for each version of your data, such as Actuals, Budget, Planning, Forecast, or Rolling
Forecast.
If you're using this model in a story, the data is automatically filtered on Actuals, and you can create
variance charts with the different categories.
7. Save the model.
a. Select a location (for example, a private, public, or workspace folder).
b. Provide a unique name for your model and an optional description.
Next Steps
If you have imported a model containing geographical data from an SAP HANA system, some geospatial
analysis features such as map filters will be automatically disabled.
If you import a location-enabled model from a live HANA calculation view, you will need to add multiple location
dimensions to the new model. For more information, see Creating Geo Spatial Models from HANA Calculation
Views [page 1483].
You can also create a model with coordinates and area data. For more information, see Creating a Model with
Coordinate or Area Data for Geospatial Analysis [page 1479].
Context
Before you can use a version in a variance chart, the version has to be visible. To make the version visible in the
user dialog, it needs to be mapped in the model dialog for a data source (either BW, BPC, HANA, or Universe),
because this mapping isn't done automatically. In the live data model, the versions Actual, Budget, Planning,
Forecast, and Rolling Forecast have to be mapped to use the version-based variance chart functionality.
The result is a comparison of a measure restricted by version, based on the report designer's choice, for
example, Sales Volume Actual compared with Sales Volume Planning.
Map the SAC versions of the drop-down list box in the model to the corresponding version instances of the
source system:
Procedure
Related Information
You can create models by connecting to SAP HANA database views. The functionality available for models
based on HANA views differs slightly from that of other models.
With analytics models based on HANA views, you can use your existing data with SAP Analytics Cloud. Many of
the features used in planning-type models are not relevant for this type of model, including financial data types,
and currencies. These are some of the main differences and features:
In HANA models, only a single Measures dimension is initially visible in the model, but all other dimensions
are available on the All Dimensions tab. Because the number of dimensions may be large, and many of these
dimensions won't be relevant in SAP Analytics Cloud, you can use the Hide check boxes to make dimensions
unavailable in stories based on this model.
For models based on HANA views, version management functionality is also available. You can use this feature
to map any of the imported dimensions to selected SAP Analytics Cloud planning categories such as Budgets,
Actuals, or Forecasts.
You can define variables for all types of models, but if variables have been defined in HANA you will also be
prompted to enter the required values in SAP Analytics Cloud when you open the data output.
Calculated Columns
When using calculated columns in SAP HANA live data connection models, you'll need to keep in mind the
output type of any formulas and operations in use. For data correctness, it is important to ensure the following:
For example, if you use a function that returns an integer, and divide the result by another integer, the produced
output will be an integer, resulting in the loss of any digits to the right of the decimal point. This loss could
cause critical data correctness issues.
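The pitfall described above can be reproduced in a few lines. This is a generic sketch in Python, where `//` mimics the integer division an integer-typed calculated column would perform; the actual behavior depends on the SAP HANA column and function types in use.

```python
revenue, units = 7, 2

# Integer division: the fractional part is silently dropped, just as when a
# calculated column combines two integer-typed expressions.
int_result = revenue // units

# Casting to a decimal/float type before dividing preserves the fraction.
float_result = revenue / units

print(int_result)    # 3  -> digits right of the decimal point are lost
print(float_result)  # 3.5
```

The fix is to ensure at least one operand has a decimal output type before the division takes place.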
Security
Analytics models that are based on underlying HANA database views will inherit the security restrictions
applied in HANA. These restrictions can be modified in SAP Analytics Cloud by assigning specific user role
security to them.
The Data Access option that is available for other model types to restrict input to specific named cells is not
available for this type of model.
Note that access to models may be secured, and you may be prompted for credentials before opening the
model.
When you create a model based on a live data connection HANA view, if your HANA view contains string-based
time dimensions or date dimensions that you want to enrich with time-hierarchy information, complete the
following steps.
If you enrich the string-based time dimensions or date dimensions in your HANA view, you can use time-related
features such as Difference From story calculations, trend series charts, and time range sliders for filtering.
Note
As a best practice, we don't recommend creating string-based dimensions derived from another string-
based time dimension that only exists in SAP Analytics Cloud and not in the HANA Calculation View. For
more information, see SAP Note 3256812 .
Before you can follow these steps, you'll need these prerequisite configurations:
Restriction
This feature isn't available with SAP HANA as a Service (HaaS) deployments.
Note
Choose the granularity that matches the granularity of the dimension you're creating.
4. In the Field in Time View field, select the column in the Calculation View that will be used as the join
dimension for enriching the dimension. Its format should exactly match the format of the dimension being
enriched.
5. Choose the Default Hierarchy for the new dimension.
Note
You'll be able to work with these enriched dimensions the same way you work with date dimensions in
import data models.
Enabled Features
Some SAP Analytics Cloud features are enabled for models based on HANA views only if the SAP HANA
system meets certain criteria:
• EXP
• GRANDTOTAL
• %GRANDTOTAL
• ISNULL
• NOT
• Story calculations:
• Difference From
• Search to Insight
Live HANA baseline level 2
Required SAP HANA 2.0 version:
• SAP HANA 2.0 SPS02 rev 024.05 or higher, or
• SAP HANA 2.0 SPS03 rev 033.00 or higher
• The matching EPMMDS plug-in must be installed. For more information on EPMMDS plug-in versions, see
SAP Note 2444261 .
All of the above features that are enabled with Live HANA baseline level 1, plus the following additional
features:
• Enriched time dimensions
Note
You will need to download a Delivery Unit (DU) and import it to your HANA live system. The DU will
be named “SAC HANA Model Date Enrichment - SP0 for HANA_MODEL_DATE_ENRICHMENT” or
similar.
• Calculated dimensions
• Measure-based filters
• Blending between import data models and live data models
• Date Difference story calculations
• Custom sort order
• Timestamp dimension support in time series charts
• Histogram support for live data models
• Natural language querying on models with timestamp dimensions
• Story and modeling formulas:
• LENGTH
• LIKE
• SUBSTR
Live HANA baseline level 3
Required SAP HANA 2.0 version:
• SAP HANA 2.0 SPS02 rev 24.06 or higher, or
• SAP HANA 2.0 SPS03 rev 34.00 or higher
• The matching EPMMDS plug-in must be installed. For more information on EPMMDS plug-in versions, see
SAP Note 2444261 .
All of the above features that are enabled with Live HANA baseline level 2, plus the following additional
features:
• ToNumber and ToText calculations
• Improved Count and Average exception aggregation
• The display of units and currency in charts, tables, and tooltips based on the user profile
Live HANA baseline level 4
Required SAP HANA 2.0 version:
• SAP HANA 2.0 SPS02 rev 24.10 or higher, or
• SAP HANA 2.0 SPS03 rev 38.00 or higher, or
• SAP HANA 2.0 SPS04 rev 40.00 or higher
• The matching EPMMDS plug-in must be installed. For more information on EPMMDS plug-in versions, see
SAP Note 2444261 .
All of the above features that are enabled with Live HANA baseline level 3, plus the following additional
feature:
• Performance improvements when filtering across models
Live HANA baseline level 5
Required SAP HANA 2.0 version:
• SAP HANA 2.0 SPS03 rev 37.02 or higher, or
• SAP HANA 2.0 SPS04 rev 41.00 or higher
All of the above features that are enabled with Live HANA baseline level 4, plus the following additional
exception aggregation types:
• FIRST QUARTILE
• FIRST QUARTILE excl. NULL
• FIRST QUARTILE excl. 0, NULL
• MEDIAN
• MEDIAN excl. NULL
• MEDIAN excl. 0, NULL
• THIRD QUARTILE
• THIRD QUARTILE excl. NULL
• THIRD QUARTILE excl. 0, NULL
Related Information
You can create models based on Web Intelligence documents. By doing so, you can leverage your BI investment
and reuse Web Intelligence dimensions, measures, and calculations (variables).
To make sure you get the right data for your model, and later for your story, we’ve prepared a list of things you
should know about document lifecycle, refreshes, and synchronization.
When creating a model using a Web Intelligence document as a data source, the data and the semantics are
retrieved from the document’s cube.
Also, make sure that the document you want to use as a data source is stored in a corporate folder and
assigned to the category defined by the boe.webi.category.cuid parameter. For more info, please refer to
the SAP BusinessObjects Live Data Connect Installation and Security Guide.
Also, note that the security defined for the document in Web Intelligence applies in SAP Analytics Cloud,
including view-time security.
Refer to the SAP BusinessObjects Live Data Connect Installation and Security Guide to get the full list of
universes features supported in SAP Analytics Cloud.
Object Mapping
In the query builder, objects appear as a flat list under the “All Objects” node, grouped by type: dimensions
or measures. There’s no distinction between dimensions coming straight from a universe and dimensions that
are the result of variable calculations. The same applies to measures.
Existing measures in the Web Intelligence document used as a data source don't require setting local
aggregations in SAP Analytics Cloud.
Data Lifecycle
The Web Intelligence document controls the data that is displayed in SAP Analytics Cloud. When you refresh
a story, the application fetches the latest instance of the Web Intelligence document. We recommend that
administrators schedule regular refreshes to make sure the data stays up to date.
If a Web Intelligence user edits the document and its objects, the SAP Analytics Cloud story displays a warning
message on the visualizations that leverage these objects, prompting you to edit the model and the underlying
query.
If Web Intelligence documents are consumed in SAP Analytics Cloud, make sure you don’t delete the
document or its objects, to avoid breaking the model that relies on them.
When you're designing a new model, you can set preferences for features such as security, auditing, and
currency conversion. These settings are available from the Model Preferences ( ). Dimension-specific settings
are available from the Dimension Settings panel.
Note
Not all preferences apply to analytics-type models, and models based on HANA views have different
options, as described in the Options for HANA Models section within this topic.
This section will cover the following settings available in the model preferences:
The General Settings section provides overall details about the model.
The Description field is here to help you provide details or a summary of the model.
Switch on model translation to have the model metadata (descriptions and other user-specified text)
translated by a translation service.
For more information, see Learn About the Translation Process [page 2965] and Enable Translation in a Model
[page 777].
The Access and Privacy section includes a range of settings to secure data in your model.
Data Audit
Auditing within SAP Analytics Cloud is available at either of two levels: high-level auditing or transactional-level
Data Changes auditing. If Data Audit is switched on, all changes for this model will be logged.
The audit logs are available from the Users menu ( Users Data Changes ).
Validation Rule
For planning models, validation rules let you define the allowed member combinations across multiple
dimensions to prevent improper data entry and planning operations in stories and analytic applications. Turn
this switch on to validate the data in the model according to the validation rules you define for this model.
Planners are only allowed to enter data or use planning functions for the specified member combinations. For
more information, see Define Valid Member Combinations for Planning Using Validation Rules [page 2547].
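Conceptually, a validation rule acts as an allowed-combination lookup over dimension members, as in this sketch. The member names are hypothetical and SAP Analytics Cloud evaluates the rules server-side; this only illustrates the idea.

```python
# Allowed (Product, Region) combinations defined by a hypothetical validation rule.
allowed = {
    ("ProductA", "RegionEU"),
    ("ProductB", "RegionUS"),
}

def is_valid_entry(product, region):
    """A planner may only enter data for member combinations the rule allows."""
    return (product, region) in allowed

print(is_valid_entry("ProductA", "RegionEU"))  # True  -> data entry allowed
print(is_valid_entry("ProductA", "RegionUS"))  # False -> entry rejected
```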
Data Locking
For planning models, data locking lets you set up locks on the model data, which prevent the locked values
from being changed by data entry and other planning operations in stories.
Turn this switch on to enable data locking, and set the Default Lock State for the model. This state will apply to
all data in the model's public versions, unless otherwise specified in the Data Locking page:
This setting lets you disable the export of model data to a CSV file. For details, see Export Models and Data
[page 750].
Note
If this option is enabled, you cannot export data using the Data Export Service.
Index Model
This setting lets you enable model indexing for SAP HANA, SAP BW, SAP Universe, or SAP S/4HANA live
models to leverage Search to Insight capabilities in stories.
After the indexing process is completed, a timestamped entry will appear under the Search to Insight section.
See Live Data Restrictions for Search to Insight [page 2038] for more information on working with live data. You
have the option to update or delete the index entry. Any user working with a story that includes the indexed
model can access quick insights based on the model through Search to Insight.
If needed, you can also specify which dimensions to include or exclude in the indexed live data model. This
is useful when you want to exclude dimensions that may contain sensitive data. Select the text button under
Search to Insight. The Dimension Configuration for Index dialog lists all the available dimensions. Select or
unselect dimensions as required. All the selected dimensions will be searchable by Search to Insight queries.
By default, data models that don't have Model Privacy enabled, and dimensions that don't have Data Access
Control activated, are indexed. You can choose to index or unindex a (local) acquired model using the toggle
button.
Note
Don't refresh or close your browser window before the indexing process is completed. Also, you should
index only one live model at a time.
• Make sure your systems have optimal access to memory and CPU resources for better indexing.
• Use the Variable settings to limit the number of indexed members.
If you change the data source for an existing model used by Search to Insight, any index will be deleted when
you save the updated model. For more information on using Search to Insight, see Search to Insight [page
2027].
Note
Only HANA live models based on SAP HANA 1.0 SPS12 rev 122.14 or higher can work with the Search to
Insight feature. When a HANA live model is indexed, metadata such as dimension and measure names, and
dimension members, is stored on SAP servers.
This setting determines whether the model is visible to users other than the owner. Note that this setting can
also be changed later after the model has been saved.
If you switch on Model Data Privacy, only the owner of the model and user roles that have specifically been
granted access can see the data. Disable this switch if you want the model and data to be public.
This setting lets you specify in the dimension settings which users will have access to each member in the
dimension.
7.6.3.4 Planning
The Planning section lets you set the planning capabilities on your model and define the time range used for
planning.
If you want to use this model for planning, switch Planning Capabilities on, and then choose start and end dates
for the timeline for the data.
Note
If you're working with the new model type, you can enable or disable planning capabilities at any point in
time in the Model properties pane, in the Model Structure workspace.
You can use the Delete private versions button to delete private versions belonging to a model. An example of
when you might want to do this is when you need to remove model dependencies so that users can upload and
delete model data.
Note
• You can switch Planning Capabilities on and off, but not after the model has been saved.
• If the Planning Capabilities switch is disabled, a tooltip explains why.
• If planning is enabled in the Data Integration step after importing data (see Import and Prepare Fact
Data for a Classic Account Model [page 714], and Import and Prepare Fact Data for a Model with
Measures [page 728]), Planning Capabilities is automatically switched on, and can't be switched off.
• Planning Capabilities can be switched on only if there are no facts in the model.
• When you switch Planning Capabilities on, if the model contains multiple date dimensions, the first one
is used by default for planning, but you can choose a different one:
Data Disaggregation
When you change data in a cell on a planning model, the value is automatically spread to the leaf members that
aggregate up to it. This process is called disaggregation.
The data disaggregation settings provide more control over how values are spread from parent members
to leaf members. Enabling the following options will change how data disaggregation will occur and prevent
disaggregation to those locked (by data locking) or invalid (due to validation rules) leaf members that would
otherwise normally receive values.
You might want to enable these options to ensure that the values on leaf members restricted by these settings
remain unchanged, even when changing values on the parent members.
Note
The disaggregation algorithm currently makes a clear distinction between booked and unbooked data. As a
direct consequence, on booked data edit, if all disaggregation candidates are locked due to data locking or
invalid due to validation rules, no unbooked candidates will be taken into consideration even if there might
be some that are editable.
• Based on Data Locking: Enabling this option stops values from spreading to locked leaf members,
redirecting the disaggregation to unlocked leaf members only. When this setting is disabled, data entry on
mixed members, in addition to locked members, is not allowed. Note that any changes to locked leaf
members are rejected when you publish.
• Based on Validation Rules: Enabling this option stops values from spreading to leaf members with invalid
member combinations. If this option is disabled and values of invalid member combinations are changed,
such changes are rejected on publish, similar to data locking.
These options impact the behaviour of data entry, delete, and copy and paste operations, including the mass
data entry and fluid data entry modes.
Enabling data disaggregation may impact performance: while speed might be gained by a more focused
disaggregation and a reduced number of facts changed, the evaluation of locks and validation rules during
disaggregation might add to the overall run-time due to increased dependency checks.
If you enable the data disaggregation settings in the model preferences, you can refine these options from the
Builder panel on a table. If data disaggregation settings are disabled in the model preferences, they will not be
customizable from the Builder panel for tables.
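The disaggregation behavior described above can be sketched as follows. This is an illustrative simplification, not SAC's implementation: it spreads a new parent value proportionally over the current values of unlocked leaves, while locked leaves keep their values, as with the Based on Data Locking option.

```python
# Illustrative sketch: proportional disaggregation that skips locked leaves.
def disaggregate(new_parent_value, leaves, locked):
    """Spread new_parent_value over unlocked leaves, proportionally to their
    current (booked) values; locked leaves keep their values unchanged."""
    locked_total = sum(v for m, v in leaves.items() if m in locked)
    open_leaves = {m: v for m, v in leaves.items() if m not in locked}
    open_total = sum(open_leaves.values())
    remainder = new_parent_value - locked_total  # amount left to distribute
    result = dict(leaves)
    for m, v in open_leaves.items():
        # Proportional share; equal split if all open leaves are unbooked (0)
        share = v / open_total if open_total else 1 / len(open_leaves)
        result[m] = remainder * share
    return result

leaves = {"Jan": 10.0, "Feb": 20.0, "Mar": 30.0}
# Parent changed to 120; Mar is locked at 30, so 90 is spread over Jan/Feb:
print({m: round(v, 2) for m, v in disaggregate(120.0, leaves, {"Mar"}).items()})
# → {'Jan': 30.0, 'Feb': 60.0, 'Mar': 30.0}
```

Note that the real engine also distinguishes booked from unbooked data and honors validation rules, as described in the note above.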
Input Enablement
Some restrictions automatically make cells appear as read-only in a table, while other restrictions allow you to
edit the values in restricted cells but won’t allow you to publish changes to restricted members.
• Based on Data Locking: Enabling this option changes which cells you can edit depending on whether Data
Disaggregation based on Data Locking is also enabled.
• Based on Validation Rules: This option is read-only and cannot currently be changed in the Builder panel.
• Based on Data Access Control: This option is automatically enabled and cannot currently be changed in
the Builder panel.
You can also control the data locking options for Data Disaggregation and Input Enablement from the context
menu:
• If data locking for both data disaggregation and input enablement is disabled, you can select Enforce Data
Locks in the context menu to enable them.
• If both data locking options are enabled, you can select Ignore Data Locks in the context menu to disable
them.
The Date Settings section lets you fine-tune Date dimensions settings and enable the week granularity.
For models with measures, you can enable the week granularity using the dedicated Enable Week-Based Date
Patterns toggle. You can then select the week pattern, the first day of the week, and the first week of the year.
You can also decide whether to display months as periods.
You can choose from two patterns: 445/454/544, or 13x4. With these patterns, data is stored at the week level
and is then aggregated to months or quarters. For more information on patterns, see Planning on Data on a
Weekly Basis [page 686].
Restriction
If your business fiscal year isn't the same as the calendar year, switch Fiscal Year on, and then select the other
parameters as follows:
7.6.3.6 Currency
The Currency section lets you set the currency and currency conversion settings on your model.
For information about different currency scenarios, see Selecting the Right Model for Your Data [page 600].
If your model doesn't use currency conversion, set one of the following options:
• Default Currency: The currency for all monetary values. For example, if your model only contains monetary
values in US Dollars, you can set the default currency to USD.
• Currency Dimension: The dimension that separates model data into different currencies. Model data will
not be aggregated across the different currencies. For more information about adding a Currency column
to a dimension, see Learn About Dimensions and Measures [page 601].
Enabling currency conversion allows model data from different source currencies to be converted into a single
target currency and then aggregated.
To enable currency conversion, activate the Currency Conversion switch. Then, specify the following settings:
Default Currency: Type the three-character code for the default currency. Monetary data will be converted to
this currency by default.
Currency Dimension: Select the dimension that separates model data into different currencies. For more
information about adding a Currency column to a dimension, see Learn About Dimensions and Measures [page
601].
Currency Rates Table: Select a currency conversion table. The drop-down list shows all tables that have already
been created on the Currency Conversion tab of the Modeler.
Preconverted Actuals: When this setting is on, both source currency data and preconverted currency data can
be imported. For more information, see Learn About Preconverted Currency Data [page 792].
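The convert-then-aggregate behavior enabled by these settings can be sketched as follows. This is a minimal illustration with assumed structures (a sample rates table and fact list), not SAC's conversion engine:

```python
# Illustrative only: convert each fact to the target currency using a rates
# table, then aggregate. Without conversion, values in different currencies
# must not be summed together.
rates_to_usd = {"EUR": 1.10, "GBP": 1.25, "USD": 1.00}  # sample rates

facts = [
    {"amount": 100.0, "currency": "EUR"},
    {"amount": 200.0, "currency": "GBP"},
    {"amount": 50.0, "currency": "USD"},
]

# With currency conversion enabled, all values are expressed in the target
# (default) currency before aggregation:
total_usd = sum(f["amount"] * rates_to_usd[f["currency"]] for f in facts)
print(round(total_usd, 2))  # → 410.0
```

In the real model, the rates come from the table selected under Currency Rates Table, and the source currency of each value is determined by the Currency Dimension.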
Note
• You can increase this limit after creating the model, but it cannot be decreased.
• When you increase the limit after creating the model, the new limit does not apply to existing stories.
Each story keeps the limit that was valid when the story was created.
• Displaying data in its source currency does not count towards the Maximum Currency Conversion
Limit.
• If the model uses currency conversion, each Account member has to have Currency set for the Units &
Currencies attribute (column) of the Account dimension.
In a model with measures, you can use a currency variable to drive conversion measures bound to that variable
and switch target currencies on the fly to update calculations both within the Modeler, via the Calculations
workspace, and in stories via the filter bar or story and widget prompts.
To set up a currency variable, currency conversion must be turned on. Click Create Currency Variable, and
specify the following settings:
The Structure Priority section lets you set priorities in case of conflicting definitions.
The new model type can combine both accounts and measures, which can cause conflicting properties for
formatting, unit handling and aggregations. To prevent such conflicts, you can specify which structure should
take precedence over the other with the dedicated Prioritize properties and calculations from field.
The Custom priority for calculations toggle can help you define custom solve order for aggregations.
For more information, see Set Structure Priority and Create Custom Solve Order [page 701].
The Data and Performance section includes a variety of settings to help optimize your model performance.
Switch on this option to stop the automatic update that happens when changes are made in a chart or table.
This reduces the load on the system and improves performance.
Switch on this option if the hierarchies in your SAP HANA live data models depend on variables. When enabled,
this allows the levels of the parent-child hierarchies to be reloaded after each variable submit. The levels
change if the variable values are changed.
When this option isn't enabled, the levels of the parent-child hierarchies are only loaded when you first load the
story.
Switch on this option to optimize performance by consuming a limited set of metadata for live models based on
SAP BW connections.
The application normally works on a limited set of attributes, but still has to process additional
attributes transferred with the metadata. Per query, these additional attributes can number in the
thousands, greatly increasing the size of the metadata. This option limits the attributes to the
ones that are explicitly defined at the query level.
This option can be enabled at the model level, and has impacts in stories, App Design and Data Analyzer.
This option optimizes performance in a table widget by consuming a reduced set of metadata. This limits the
number of attributes to the ones that are explicitly defined at the query level.
When this option is activated, the only attributes that will be available in an SAP Analytics Cloud story are the
attributes that were marked as visible in the query.
Whenever you make edits to a private or public version, the application uses snapshots to support fast
interactive planning and ensure a stable data region during planning. By default, snapshots are taken straight
from the source public version, with potential restrictions (see below). With version management, you can also
create a private version with an empty or filtered snapshot.
Note
Large version sizes can impact the initial snapshot creation, interactive queries during planning, and the
publish operation. Although planning on private versions isn’t restricted, you might want to consider other
ways to better control the performance for these versions:
Working with limits in private versions offers better control over model performance. One benefit of working
with limits is that, as a planner, you get notified whenever you create a data snapshot that is above the
configured limit. You can then either proceed or create a private version with more filters. For
more details, see Creating Versions [page 2182].
Every planning version has a default limit of 5 million facts. You can adjust this limit, because planning
performance depends on multiple factors, such as the number and complexity of the dimensions, calculations,
concurrent users, and more. If necessary, adjust that limit in the Model Preferences, under
Data & Performance Size Limits for Planning Performance , with the Overwrite Default Limit option.
You can think of the planning area as a version’s data used for all planning actions, and a basis for
creating private versions and editing public versions. By optimizing the planning area, you keep the data size
manageable and can work on data that’s the most relevant to you.
Optimizing the planning area is especially useful if you’re trying to limit the data size of a model with a large
public version that’s not limited. When you edit a public version, or create a private one, the application
stores the recommended planning area as a reduced data snapshot but still shows locked data outside of that
snapshot.
You can limit the size of the recommended planning area either using data access control, data locking, or
both. Data access restrictions give you access to the data for which you have write access, while data locking
restrictions give you access to the data regions that are unlocked.
You can optimize the size of the planning area in the Model Preferences, under the Data and Performance tab.
Click the toggle under the Optimize Recommended Planning Area section, and using the dedicated options,
select whether you want to limit the planning area based on data locking, data access control, or both.
For more information about the planning area, see Optimize Planning Models Using the Planning Area [page
695]. For more information about working with the recommended planning area when editing or creating
versions, see Planning on Public Versions [page 2188] and Creating Versions [page 2182].
The SAP HANA processing engine used by SAP Analytics Cloud has been optimized and made available for
models created from version 2021.02+ to provide better supportability, performance, and precision. As a
result, you might notice different data results between models created before and after version 2021.02. These
differences are expected and solely due to the update.
However, if you notice unexpected results in new models, or errors that didn’t occur before, you can decide
to switch back to the legacy processing engine instead of the optimized processing engine. For more
information, see SAP Note 2994816.
Comment Statistics
Comment statistics give you information on comments associated with a planning or analytic model at the
model level. Information such as the number of data point comments, dimension comments, and threads in
the model are all part of the statistics.
You can view comment statistics from the (Model Preferences) dialog: under the Data & Performance tab,
select the View Comment Statistics option. The Comment Statistics dialog opens.
Note
You must have permission to manage and read comments in the tenant and read permissions on planning
and analytic models in the tenant to view comment statistics.
By default, only the file and SAP BPC options are available for export jobs in the Data Management space.
To be able to export to other systems like SAP S/4HANA, SAP Integrated Business Planning, or SAP BW and
SAP BW/4HANA using OData services, make sure to enable this option.
For performance reasons, after a system upgrade, models are upgraded consecutively over time. You can use
the Redeploy Model option if you want to make sure that a specific model is upgraded after a system upgrade.
You can also use that option if you have incorrect data or exception errors in widgets in stories and analytic
applications. This usually means that the model is corrupted. The Redeploy Model option regenerates all
runtime artifacts and fixes these errors.
Using the Redeploy Model option also regenerates data locks. In the rare case that the data locking
configuration cannot be recovered, you need to go to Model Preferences, turn data locking on and off, and
re-set the data locks manually. Another option is to contact your system administrator and ask if they can help
reconfigure the data locking and recover the data locks. For more information on configuring data locks, see
Configuring Data Locking [page 2525].
The Options for HANA Models section is available for models based on HANA views.
For models based on HANA views, version management functionality is available so that you can map any of
the imported dimensions and values to selected versions (categories: Actual, Planning, Forecast, and so on).
The mapped values are displayed in stories.
To access the feature, select the (Version Information) icon on the toolbar.
The following example shows that the Color dimension has been chosen to be the Version ID column. Different
categories have been selected for the different versions (Blue, Green, and so on).
Note that not all dimensions from the HANA data source are immediately visible in Modeler. To see all
dimensions, select the All Dimensions tab. For more information, see Dimensions in Live Data Models [page
605].
You can customize date dimension members, hierarchies, and properties to make time more flexible and
better suited to your planning and analytical needs.
Flexible time hierarchies help you plan and analyze data along an individual time hierarchy. They also allow for
company specific fiscal year variants.
In this section, you will learn about the following topics related to customizing date dimensions.
Date dimensions can be maintained both manually and automatically by the application.
Leaving the management to the application means that the dimension's values are generated from the
parameters in the Dimension Settings panel in the Modeler, and in the model preferences. Conversely,
maintaining the date dimension manually allows you to edit the master data and manage all dimension
members freely.
You can edit the dimension parameters just like you would with a standard date dimension. However, managing
dimension members manually unlocks hierarchy management, the ability to create additional properties, and
editing properties within the dimension view.
If you’ve set the date dimension to user-managed, you can edit the predefined hierarchies that come with the
time granularity applied to the dimension.
Note
You cannot revert a date dimension back to system-managed after you've saved the model. Also, setting a
date dimension to user-managed automatically locks all fiscal settings that are applied to the dimension.
These settings are then no longer available in the Dimension Settings panel.
You can select amongst two booking behaviors to decide how values are processed: Distribute Values, or Book
to Unassigned.
Distributing values means that the application assigns and spreads values across all levels available within the
hierarchy. Visually, within a table, values are distributed within each level of the default hierarchy in stories.
If values are booked to unassigned, the application creates an unassigned member in which all values are
distributed. Visually, within a table, values are distributed under an # member in stories. You can also see this in
the Modeler when switching to the grid view where "#" members are added to each property of the dimension,
as illustrated in the image below. Note that for each property set with the day granularity, the member is shown
as empty with no value.
For more information about the booking behavior, check out Disaggregation of Values During Data Entry [page
2207].
Member Management
If the date dimension is user-managed, you can edit its members just like you would with any other generic
dimension, and add additional members to the dimension both forward and backward in time using the
Generate Additional Members option. It's also possible to add, edit and delete members manually within the
grid view.
7.6.4.3 Properties
You can add properties to date dimensions and specify dates of interest for your reporting or planning
purposes, and account for specific periods quickly, like Holidays, Easter or special sales periods for instance.
Within a hierarchy, each property you create is assigned to a semantic type. A semantic type is a time unit
assigned to a property within the hierarchy: Year, Half-Year, Quarter, Month, Period, Week, Day, and Other.
The semantic type Other is reserved for additional properties that you want to attach to other properties as
linked descriptions. In the example below, you can see the Month, Day, Week, and Month, Day properties used
as descriptions and linked to their respective properties. In the Grid view and the Dimension Settings panel, you
can identify ID ( ), unique ( ), and description ( ) properties thanks to their dedicated icons.
The unique property is generated automatically, and corresponds to the lowest level in the hierarchy. Because
it must be unique, you can't create a second one or declare another property as unique, but you can delete it.
The ID property is defined according to the dimension granularity and pattern settings. It can't be swapped
with another property, and its semantic type is locked. If needed, you can still edit its members and description.
Linked descriptions are especially valuable in stories for reporting purposes. If a hierarchy is used in charts and
tables, you see the actual description of a member rather than its ID. In the example below, the month names
are shown rather than the month IDs.
Easter doesn't fall in the same week from year to year, and if you’re working in sales, reporting on the Easter
sales peak can be complex. Using additional properties, you can align the campaigns across the years using
time series and make the comparison easier. Rather than comparing Easter sales with Easter falling in
different weeks, you can align the years on the weeks leading up to and following Easter, giving you a single
reference and aligning the data points on a single timeline.
By flagging periods that are crucial to your business, you’ll be able to pinpoint them promptly and speed up
your analysis.
Context
When working with user-managed date dimensions, you can add properties for specific members and then add
them to time hierarchies.
1. In the Modeler, click a date dimension to access the Dimension Settings panel.
2. Within the Dimension Settings panel, scroll down to the Properties section and click .
3. Give a unique ID and description to the property.
4. Select a semantic type.
A semantic type is a time unit assigned to a property within the hierarchy: Year, Half-Year, Quarter, Month,
Period, Week, Day, and Other.
5. Specify a data type.
Note
For properties of semantic type Generic and data type Text of user-managed date dimensions, you can
set the maximum text length for the property’s values from 1 up to 5000 characters.
In the user interface, very long property values may be displayed in abbreviated form.
6. If you want to attach another property to the one you're creating as a description, select it in the Linked
Description dropdown. Otherwise, leave it set to (None).
Note
To link a property, make sure it has the Other semantic type assigned to it. If not, you will not see it in
the Linked Description dropdown.
7. Click Create.
Editing a Property
Procedure
1. In the Modeler, click a date dimension to access the Dimension Settings panel.
2. Within the Dimension Settings panel, scroll down to the Properties section and click .
3. Click Edit.
4. Change the description, linked description or both.
Note
For properties of semantic type Generic and data type Text of user-managed date dimensions, you can
increase the maximum text length of the property’s values up to 5000 characters. Note that you can
only increase the maximum text length, not decrease it; this restriction prevents potential data loss.
In the user interface, very long property values may be displayed in abbreviated form.
5. Click OK.
Procedure
1. In the Modeler, click a date dimension to access the Dimension Settings panel.
2. Within the Dimension Settings panel, scroll down to the Properties section and click .
3. Click Delete.
Unlinking a Property
Context
You can unlink properties that are attached to other properties as linked descriptions.
Procedure
1. In the Modeler, click a date dimension to access the Dimension Settings panel.
2. Within the Dimension Settings panel, scroll down to the Properties section and click .
3. Click Unlink.
You can create hierarchies so that they align with your planning and reporting needs, and shape the year so
that it reflects your reporting structure.
The hierarchies you can build depend on the properties available in the date dimension. For user-managed
dimensions, you must have at least one hierarchy.
Note
• Within a hierarchy, the Month and Period semantic types correspond to the same level, and can't be
used at the same time.
• You can't use an ID property and a unique property at the same time.
Once all properties have a semantic type assigned, you can add them as levels within the hierarchy. Make sure
to follow these guidelines:
• The hierarchy has the mandatory Year semantic type as the highest level.
• The semantic types within the hierarchy are ordered from the biggest to the smallest time unit. For
example, Month must be below Quarter, and above Week.
You can model the hierarchies to shift quarters, for example. In the example below, March rolls up to Q2,
June rolls up to Q3, and September rolls up to Q4.
Month / Standard Quarter / Shifted Quarter
January Q1 Q1
February Q1 Q1
March Q1 Q2
April Q2 Q2
May Q2 Q2
June Q2 Q3
July Q3 Q3
August Q3 Q3
September Q3 Q4
October Q4 Q4
November Q4 Q4
December Q4 Q4
Here’s another example with the same quarter shifts, and an additional period rolling up to Q4.
Period / Quarter
P1 Q1
P2 Q1
P3 Q2
P4 Q2
P5 Q2
P6 Q3
P7 Q3
P8 Q3
P9 Q3
P10 Q4
P11 Q4
P12 Q4
P13 Q4
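A quarter-shift hierarchy like the one above can be sketched as an explicit month-to-quarter mapping that aggregation then follows. This is an illustrative model only (the names shifted_quarters and quarter_total are not SAC APIs):

```python
# Illustrative only: each month is assigned an explicit parent quarter, so
# March can roll up to Q2 instead of the standard Q1.
shifted_quarters = {
    "January": "Q1", "February": "Q1", "March": "Q2",
    "April": "Q2",   "May": "Q2",      "June": "Q3",
    "July": "Q3",    "August": "Q3",   "September": "Q4",
    "October": "Q4", "November": "Q4", "December": "Q4",
}

def quarter_total(facts, quarter):
    """Aggregate monthly values along the shifted hierarchy."""
    return sum(v for month, v in facts.items()
               if shifted_quarters[month] == quarter)

facts = {"January": 10, "February": 10, "March": 10, "April": 10}
print(quarter_total(facts, "Q1"))  # → 20 (March no longer rolls up to Q1)
print(quarter_total(facts, "Q2"))  # → 20 (March + April)
```

In the Modeler, this mapping comes from the hierarchy levels you define on the date dimension rather than from a hard-coded table.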
Context
You can create additional time hierarchies and have them reflect your reporting structure. Hierarchies you
create can be reused later in stories. Make sure you follow these guidelines:
• The hierarchy has the mandatory Year semantic type as the highest level.
• The semantic types within the hierarchy are ordered from the biggest to the smallest time unit. For
example, Month must be below Quarter, and above Week.
• Each semantic type within the hierarchy is only used once, except for the Other semantic type.
• If the hierarchy uses the semantic type Month, do not include the semantic type Period, and vice versa.
• The lowest level of the hierarchy is the ID property if there is one. Otherwise, it must be a unique property.
• Description properties cannot be used in the hierarchy.
Procedure
1. In the Modeler, click a date dimension to access the Dimension Settings panel.
2. Within the Dimension Settings panel, scroll down to the Hierarchies section and click .
3. Give a unique ID and description to the hierarchy.
4. Using the guidelines listed above, drag and drop the properties to the Levels section of the dialog to build
the hierarchy.
By default, the application lists all properties available in the dimension. Use the Show eligible
properties only toggle to filter on the properties that can be added to the hierarchy.
5. When you're done, click OK, or click Create Hierarchy to add another hierarchy and repeat these steps.
The week granularity allows you to refine date dimensions and expand the time hierarchies available.
Week Granularity
If you’re in the retail industry, for instance, data is likely processed at a week level due to seasonality. With
sales performance being highly seasonal, the week granularity helps you track and analyze that data precisely.
It’s also highly beneficial if you’re responsible for staffing, need to plan special time-sensitive initiatives for
specific weeks, organize replenishment work, or manage weekly discounts. Other use cases include product
launches, inventory receipts, weekly promotions, and variance analyses.
You can choose from two distinct week patterns and adapt your date dimension to your business case. With
these patterns, data is stored at the week level, and is then aggregated to months or quarters:
• 445/454/544: splits a period (typically a quarter) into sets of four weeks, four weeks, and five weeks.
Each set represents a month. This adds up to 52 weeks, just like in a typical calendar year.
• 13x4: splits a year into 13 periods rather than 12 months, with each period consisting of four weeks.
This also adds up to 52 weeks, just like in a typical calendar year, with each period having the exact
same number of weeks.
A 53rd week is automatically added to each pattern every five to six years.
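The week-to-month grouping under a 445-style pattern can be sketched as follows. This is a simplified illustration (the function month_of_week is hypothetical and ignores the 53rd-week case):

```python
# Illustrative only: under a 445 pattern, each quarter's 13 weeks split into
# months of 4, 4, and 5 weeks. Passing (4, 5, 4) or (5, 4, 4) gives the
# 454 and 544 variants.
def month_of_week(week, pattern=(4, 4, 5)):
    """Return the 1-based month for a 1-based week number."""
    quarter, week_in_q = divmod(week - 1, sum(pattern))  # 13 weeks/quarter
    month_in_q = 0
    for length in pattern:
        if week_in_q < length:
            break
        week_in_q -= length
        month_in_q += 1
    return quarter * 3 + month_in_q + 1

print(month_of_week(4))   # → 1 (last week of the first 4-week month)
print(month_of_week(5))   # → 2
print(month_of_week(13))  # → 3 (the 5-week month closes the quarter)
print(month_of_week(14))  # → 4 (a new quarter starts)
```

Data booked on weeks then aggregates to months or quarters along exactly this kind of mapping.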
These patterns are available in the Model Preferences, via the Enable Week Settings toggle under the Time
Settings tab. You can tune the week further and select the first day of the week, the first week of the year, and
decide how you would like the month or period to be labeled. For period planning, you might want to show
periods as P1-P12 rather than month names, especially if you add periods to a year.
Context
Enable the week granularity if you need to plan and store data at a week level.
Note
The week settings apply to all date dimensions with the week or day granularity.
Procedure
This topic lists various restrictions for the Flexible Time feature.
Formulas
• The ResultLookup function is not supported for non-standard time periods such as week-level granularity
and user-managed time dimensions.
• The IF function is not supported for non-standard time periods such as week-level granularity and user-
managed time dimensions.
• The IF and ResultLookup functions only work if a member is unique below a single parent. If the
member is present under multiple parents, the formulas are invalid.
• For the CAGR function, years must be integers. 2021.3 or YearOne, for instance, are invalid.
• Hierarchies that include non-standard user-managed time dimensions (that use the semantic type Other)
won't be available for date ranges.
• You won't be able to edit the custom current date if you change to a hierarchy that doesn't include the
granularity that was previously used. For example, you set a date based on Quarter and then choose a
hierarchy that doesn't have Quarter. You can delete the custom current date, but you can't edit it.
• When you have dates with no value, they will still be included in the date range lookback.
For example, your user-managed time dimension has values for weekdays only, not weekends. Your
lookback will include the weekend days.
Planning
• If you copy and paste values in a table based on a model that uses custom time hierarchies, only the
overall values can be pasted to cells with different time hierarchies.
• When creating copy rules for a copy step in data actions, copying to non-leaf target members on user-
managed time dimensions is not supported at this time.
• If you use a model with user-managed time dimensions to define the context of a calendar event, the filters
of the time dimension won't be applied to the attached story.
• The Difference From formula won't be supported for the initial release.
• Variances don't work as expected for hierarchies that include non-standard user-managed time
dimensions that use the semantic type “Other”. You will need to choose a hierarchy that uses only
standard time dimensions.
• Smart Insights that show users how data points change over time are not supported for user-managed
dimensions, or dimensions that support week granularity and/or week patterns. Flexible time
dimensions are supported for top contributor and calculation insights.
• Search to Insight will ignore models containing user-managed dimensions, or dimensions that support
week granularity and/or week patterns.
Mobile (iOS)
• Stories containing flexible time dimensions, if set to Optimized iOS mode, can be consumed on supported
iOS devices. Stories set to the non-optimized mode are not supported.
The flexibility of the model allows you to enrich your data even after you've imported it.
There are two ways to add semantic value to your model data: using dimension tables, and properties.
• Add Master Data to Your Model Using Dimension Tables [page 689]
• Add Properties to Single Column Dimensions [page 692]
Dimension tables allow you to add master data to your facts, complete your dataset, and input data manually.
You can think of dimension tables as a key to unlock the management of dimension members and properties
you add to a single column dimension. They allow you to go beyond the capabilities of a single column
dimension and add flexibility to your model.
When you create a dimension table, the application checks for potential validation issues.
You can open dimension tables at any point in time using the Edit Dimension Table option in the data
foundation in the Model Structure space.
Dimension tables can be added automatically when creating a new dimension, or manually when editing a
dimension.
In the dimensions list, dimension tables are represented with a gray background.
The following features are also available when using dimension tables:
Create a dimension table to unlock master data management of properties and enrich your dimension.
Procedure
1. In the Model Structure view, select a dimension from the dimensions list.
2. In the Dimension panel, click Generate Dimension Table.
3. Select whether you’d like the dimension table to be public using the Make Public toggle.
4. Click + Description to attach a property as a description and select the dimension.
Results
The dimension table is now ready. Use it to enrich your dimension using properties and maintain values
manually. To edit the dimension table you’ve just created, click Edit Dimension Table in the data foundation.
You can add master data to dimension tables at any point in time.
Procedure
1. In the Model Structure view, select a dimension that has a dimension table from the dimensions list.
2. In the Dimension panel, in the Dimension Table section, click .
The dimension table opens in the grid view. From here, you can add properties, hierarchies, and maintain
values manually.
3. To add a property, click Add Property, and fill in the different fields.
A new column has been added to the dimension table. You can now maintain it manually.
4. To add a hierarchy, click Create Hierarchy, and select Parent-Child Hierarchy or Level-Based Hierarchy and
proceed to build the hierarchy.
Delete a dimension table to remove all the master data associated with a dimension.
Procedure
1. In the Model Structure view, select a dimension that has a dimension table from the dimensions list.
2. In the Dimension panel, in the Dimension Table section, click next to the dimension table.
Another way of adding semantic meaning to single column dimensions, besides dimension tables, is to convert
other existing single column dimensions to properties.
When you convert a single column dimension to a property, the application asks you to select a target
dimension. If you create multiple properties for the same target dimension, you effectively create a group
that is represented visually within the Modeler.
Note
Only single column dimensions can be converted to properties. Dimensions with dimension tables cannot be
converted.
As opposed to dimension tables, converting dimensions to properties and grouping them doesn’t change the
underlying structure of the data. In groups, all the information is contained in the fact table, but cannot be
maintained or edited. More importantly, unlike dimension tables, groups don't need to be managed.
Compared to dimension tables, groups are visually represented in the data foundation.
Also, the Dimension Details panel is automatically updated with the properties that have been enriched with
new semantic meaning.
Use properties to add semantic meaning to your model and enrich single column dimensions.
Procedure
Note
Except for text and description properties, all other property types require the application to create a
dimension table automatically.
5. Optional: If you're changing to the integer or decimal property type, set the conversion format.
6. Optional: If you have selected a property type other than text or description, select whether you would like
the dimension table created to be public using the Make Public toggle.
7. Click OK.
Deleting a Property
You can remove a property from a group and revert it back to a measure or single column dimension.
Procedure
Performance in planning models is largely impacted by queries and planning actions. These two factors can be
optimized to reduce processing time when working with large public versions.
By optimizing the planning area, you keep the data size manageable and can work on the data that's most
relevant to you. You can think of the planning area as a version’s data used for all planning actions, which you
can filter and refine for a private version. Its size may vary depending on the model settings. Optimizing
the planning area is especially useful if you’re trying to limit the data size of a model with a large public version
that’s not limited.
You can optimize the size of the planning area in the Model Preferences, under the Data and Performance tab.
Click the toggle under the Optimize Recommended Planning Area section, and using the dedicated options,
select whether you want to limit the planning area based on data locking, data access control and model
privacy, or both. Note that for both options to be effective, data access control and data locking must be
enabled and configured. For more information, see Configuring Data Locking [page 2525] and Set Up Data
Access Control [page 803].
You can limit the size of the planning area using data access control, data locking, or both. This
corresponds to the recommended planning area, which decreases the size of a private version when you
create it based on data access and data locking. Data access restrictions give you access to the data for which
you have write access, while data locking restrictions give you access to the data regions that are unlocked.
With the planning area, when you edit a public version, or create a private one, the application stores a reduced
data snapshot but still shows locked data outside of that snapshot.
When using data access and/or data locking options, the application generates a recommended planning area
based on the data access and data locks configuration. While using data access control works best when users
have more read than write privileges, using data locking configuration is the most effective for model owners
and users with administration privileges, or when most of the data is stored in locked data regions.
When working with a public version in a story, the planning area filters the data based on what data you have
permission to edit. Only data that you have edit permissions for will be put into edit mode. When creating a
private version, you have the option to only copy the recommended planning area to the private version. Data
actions and multi actions only affect data within your planning area.
1. This area is outside of the recommended planning area for this version. This data will not be put into edit
mode on a public version, or be copied to a private version unless otherwise specified.
2. This area is inside the recommended planning area. You can plan on this data in a table, or by running a
data action or multi action.
The planning area is disabled in existing private versions if a dimension referenced in the planning area is
deleted. For more information, see Deleting Dimensions from a Model [page 761].
If you're looking to further optimize the planning area when editing public versions, you can also choose
a customized planning area. This option will allow you to choose the data that will be put into edit mode.
However, publishing permissions will still impact which data can be published.
You can customize the planning area by selecting Start Edit Mode on a public version in the version
management panel and then choosing Customized Planning Area. For more information about editing public
versions, see Planning on Public Versions [page 2188].
If you're looking to further restrict the size of the recommended planning area defined by model settings when
editing a public version, you can first select the Auto-generate based on the table context checkbox in the
Builder panel for the table you're working on. After you select this checkbox and put the public version into Edit
Mode, the recommended planning area that's applied takes into account data access control, data locking, or
both, as well as the current table context that covers story, page, and table filters. For more information about
editing public versions, see Planning on Public Versions [page 2188].
The filter in a private version, or public edit, represents the planning area. Although you’re explicitly limiting
the planning area using the filter, the application merges the public version’s data to display data outside of
the filtered data region. In a myBudget private version for instance, if the filter is set to Region Germany, it’s
processed as Region.ID not in (‘Germany’) in the Budget public version that’s mapped to myBudget.
This way, data actions and multi actions are limited and contained within the filter and planning area, and the
application provides better guidance in stories for planning actions. If a data action or multi action is outside
the planning area, the application throws an error.
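The partitioning described above can be sketched in a few lines. This is an illustrative model of the behavior, not SAP code; all names are assumptions:

```python
def merge_for_display(private_rows, public_rows, in_area):
    """Display data for a private version with a planning area filter.

    private_rows: the version's own (editable) data snapshot
    public_rows:  the full mapped public version (e.g. Budget)
    in_area:      predicate returning True for rows inside the
                  planning area filter (e.g. Region == 'Germany')
    """
    editable = [r for r in private_rows if in_area(r)]
    # The negated filter (e.g. Region.ID not in ('Germany')) runs
    # against the public version to merge in read-only context.
    read_only = [r for r in public_rows if not in_area(r)]
    return editable, read_only
```

Only rows returned as editable can be changed by data entry, data actions, or multi actions; the rest is displayed but protected.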
Having a small subset of data for a private version can speed up performance significantly. However,
there are additional costs to consider when a private version has a defined planning area:
• Each query must process the negated filter for data that’s outside of the data region.
• Each query evaluates the filter to prevent data entry and planning actions for data that’s outside of the data
region.
• Each data action and multi action must process the filter to be consistent with the planning area.
The performance enhancements brought by the planning area largely depend on the complexity of the filter
(based on either data locking, data access control and model privacy, or both).
• Dimensions are treated independently so that filters like “(Product = Running Shoes and Time = 2020) or
(Product = Tennis Shoes and Time = 2021)” are simplified to “Product in (Running Shoes, Tennis Shoes)
and Time in (2020, 2021)”.
• Filters on dimensions are ignored if the number of filter values becomes too high (for example, after the
hierarchies are unfolded).
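Because dimensions are treated independently, the simplified filter is a per-dimension superset of the original one. A minimal sketch of this simplification (illustrative only, not SAP code):

```python
def simplify(disjunction):
    """Per-dimension simplification of a planning area filter.

    Input: a list of AND-clauses, each a dict {dimension: member},
    combined with OR. Output: one 'dimension in (members...)' set per
    dimension, a superset of the original filter.
    """
    simplified = {}
    for clause in disjunction:
        for dim, member in clause.items():
            simplified.setdefault(dim, set()).add(member)
    return simplified
```

For the example above, "(Product = Running Shoes and Time = 2020) or (Product = Tennis Shoes and Time = 2021)" becomes "Product in (Running Shoes, Tennis Shoes) and Time in (2020, 2021)", which also admits the cross combinations.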
The application marks cells that are (partially) outside of the planning area as not ready for input. In the above
example only ‘Running Shoes’ is unrestricted by data locks. When the public version is set into edit mode or a
private version is created based on the public version and the planning area is used, the planning area will be
limited to ‘Running Shoes’, too. Other products like ‘Tennis Shoes’ will not be part of the planning area and thus
not ready for input. Also, the common parent in the hierarchy, ‘Footwear’, will not allow input.
In this example, the planning area coincides with the data locks. But there are differences: You can choose
to ignore data locks in the table. If the private version was created without the recommended planning area,
you can then change values that are usually locked. However, you cannot publish these changes. If the private
version was created with the recommended planning area, you cannot change these values, even if you choose
to ignore the data locks. ‘Tennis Shoes’ is outside of the planning area and thus cannot be changed.
• For a cell to allow input, all member combinations that may be aggregated into its value must be inside
the planning area. This means, the dimensions used in the filter of the planning area need to obey the
following:
• For a dimension with a hierarchy on an axis, all descendants of the dimension member must be inside
the planning area.
• For a dimension without a hierarchy on an axis, the dimension members belonging to this cell must be
inside the planning area.
• For a dimension that is not on an axis, a filter must limit the members to a subset of the members
inside the planning area or all members of the dimension must be inside the planning area.
• Dimension attributes that are on an axis are only considered if the dimension itself is not on an axis.
In that case, a cell allows data entry only if all dimension members that have the cell's attribute value
are inside the planning area. If the dimension itself is on an axis, cells are only marked as input
enabled according to the dimension, and an additional dimension attribute is ignored. Note that in this
layout you can easily generate invalid cells, where the dimension members and the attribute values do
not match.
Tip
You can view the filter of the planning area in the Details section of the version in the Version Management
panel.
Before starting edit mode, it is not yet determined whether edit mode or the copy to a private version will
be done with or without planning area filters. Especially when the recommended planning area is based on
data access control, this can lead to more restricted use than without a planning area. If you have DAC
enabled on a dimension that is neither on an axis nor in a filter of a table, data entry and publishing may work
without planning area. If the data is distributed in the right way, you might only need to change the member
combinations that you have Write access to. When you enable the planning area based on data access control,
however, the cells will not allow input anymore because the dimension is neither on an axis nor in the filter.
To avoid such situations, it is recommended to have all dimensions that are potentially used in the planning
area filter on an axis or in a filter in your stories. You can influence which dimensions that includes by basing the
recommended planning area either on data locks or on data access control or on both.
In planning models, data is stored in the leaf members of each dimension hierarchy. Parent members only show
the aggregated values of their children, and generally don't contain values on their own. Because of this, values
entered on parent members must be disaggregated down to the leaf members.
Based on different settings, such as the aggregation mode, validation rules, and the state of the data,
disaggregation can behave differently.
Additionally, in the Measure Details panel, you can have even more control by selecting the disaggregation type.
There are two options: Standard, or Reference to Another Measure.
Restriction
• Data disaggregation is only available in the Measure Details panel for models with planning capabilities
enabled.
• Reference to Another Measure is only available if there is no exception aggregation, and the aggregation
mode is either empty or set to SUM.
The standard disaggregation mode is the default disaggregation behavior, derived from the measure's
aggregation type and/or state of the data. For example, if you have a measure that already contains data
across the leaf members and you change the value of a parent member, the value is then disaggregated
proportionally based on the values of the leaf members.
The Reference to Another Measure disaggregation mode adds flexibility and provides you with the ability to
drive the disaggregation process based on business rules proportions from another measure. It also unlocks
access to two disaggregation modes:
Parent   Child   Reference Measure   Target Measure
P-1      C-1     60                  6
P-1      C-2     30                  3
P-1      C-3     10                  1
The application takes data locking settings into account, regardless of the mode you select. Locked records
aren’t overridden by data disaggregation.
Unbooked members are also disaggregated as long as the reference measure is booked.
There are also situations where the target measure (the measure you want to disaggregate) and the reference
measure (the measure whose proportions you use for the target measure disaggregation) are not equally
booked.
If the reference measure is unbooked, then the application switches back to the standard disaggregation mode.
• The value is disaggregated on every node down to the leaf nodes according to the weights given by the
reference measure values of the leaves.
• Unbooked reference values are handled as weight 0 if the target cell is booked.
If a reference cell is 0 (booked) and the target cell is unbooked, then the target cell remains unbooked.
If the sum of all weights is 0, then disaggregation is performed equally across all (in the reference) booked cells.
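The proportional logic above can be sketched as follows. This is an illustrative simplification (not SAP code) that models unbooked reference cells as None and omits the target cell's own booked state and data locking:

```python
def disaggregate(total, reference):
    """Split `total` across leaf cells in proportion to the reference
    measure values. None models an unbooked reference cell.

    If the sum of all weights is 0, the total is split equally across
    the cells that are booked in the reference measure.
    """
    weights = [0 if w is None else w for w in reference]
    s = sum(weights)
    if s == 0:
        booked = [w is not None for w in reference]
        n = sum(booked)
        return [total / n if b else None for b in booked]
    return [total * w / s for w in weights]
```

With reference values 60, 30, and 10, entering 10 on the parent yields 6, 3, and 1 on the leaves, matching the table above.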
Procedure
Note
With the new model type, there can be conflicting properties between an account dimension and measures.
If the units or formatting settings are different for instance, you need to indicate which structure should take
precedence over the other. This is especially true for aggregations.
In the Model Preferences, under the Structure Priority tab, use the Prioritize properties and calculations from
dropdown to indicate whether account settings or measure settings should get priority. If you decide
to prioritize the account dimension, then all the aggregation definitions, formatting and unit handling are
taken from the account dimension. But if you decide to prioritize the measures, everything is taken from the
measures.
Setting a structure priority is especially useful because it allows you to reuse global account dimensions across
multiple models, while still being able to fine-tune the aggregation behavior, formatting, and unit handling
per account or measure for each individual model. The structure priority applies to all query processing, for both
planning and reporting.
Note
Calculations have a higher priority than accounts, measures, restrict and lookup formulas. They are always
processed first with or without a custom solve order, regardless of the structure you decide to prioritize.
You can go further with the aggregation processing and create custom priorities for calculations when working
with calculated accounts and calculated measures. In the Model Preferences, under the Structure Priority tab,
toggle Custom priority for calculations on. To access the priority mode, in the Calculations workspace, click
in the toolbar.
The global priority setting still prevails for conversion measures, and calculated accounts or calculated
measures that use either the Restrict or Lookup formula.
In the Calculations workspace, if your model has both calculated accounts and calculated measures, by default,
the application creates two separate dedicated groups. Within these groups, using drag and drop, you can:
Order priorities and groups from top to bottom to set the different priorities, from the most to least important.
The process to import data differs depending on your model type, the data source, and the platform you are
working on.
This section only describes how to import data to a model. After you import data, it must be prepared so that
it's ready to be consumed both in models and stories. For more information about data preparation, check out
Import and Prepare Fact Data for a Classic Account Model [page 714], or Import and Prepare Fact Data for a
Model with Measures [page 728].
In the image below, you can see all the data sources supported when importing data to a model. Click any of
them to access a detailed procedure.
Build a query to import data from an app or a data source into a new or existing model.
To build a query, you move data elements from the Available Data area into the Selected Data and Filters areas
to create new measures and dimensions to be imported to your model, and apply filters to them.
An entity ( ) can be expanded to select one or more of its members. The following icons represent types of
data you can select, and their properties:
Icon Type
Entities
Numerical properties
Complex properties
In SAP Universe queries – the set of data that can be selected from and/or filtered on (similar to the
Entity in other queries)
Select the icon to display the list of reports from your source data in real time, or select the icon to change the
way values are displayed:
Filters
When you add a data element to the Filters area, you can use operators, such as equal to, except, greater than,
and so on, and select the icon to open the Select Values From dialog. In this dialog, you can search and
select the values you want to filter on. The list displays the unique values from the first 1000 records. It is
possible to search for a value not displayed in the list using the search function.
Some entities, such as Boolean values, can have only a single filter value. Entities such as strings or integer
values can have multiple values in a filter. To select multiple values, select the check box of each value you want
to add to the filter or Select All to select all the values in the list. Selecting multiple values for a single filter
treats the filter as a large OR function, filtering out values that do not match any of the selected values.
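The OR semantics of a multi-value filter can be sketched as follows (illustrative only; names are assumptions, not SAP code):

```python
def apply_filter(rows, field, selected):
    """Multiple selected values act as one large OR: a row passes if
    its value matches ANY of the selected values; rows matching none
    of them are filtered out."""
    allowed = set(selected)
    return [r for r in rows if r[field] in allowed]
```

Selecting both "DE" and "FR" for a Country filter therefore keeps rows with either value and drops all others.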
When you create a query, you can set an Incremental Load. An incremental load is based on time or numeric
data fields and lets you bring the newest set of data since the last import when you refresh. For example, you
can set an incremental load based on the "Release Date" field of a "Product" entity to bring in the newest data
based on the release date.
You can set an incremental load while creating a new data source or on an existing data source.
Note
When you set an Incremental Load for the first time on a data source, the first refresh will not use the
Incremental Load because the system doesn't know what the "newest" data is yet.
To set an incremental load, in the Filters area, select Set Incremental Load and then drag and drop an
incremental-load-enabled property to the incremental load box that appears.
To ensure that duplicate data is not added during refresh, there are a few restrictions on the data field that
you can use for an Incremental Load:
Updates for previously imported data must be brought in by a new import using a specific query for that
data region.
These restrictions are not enforced by the application, so you must ensure your data follows the
restrictions.
After you create the model, the incremental load appears in the Import Settings section of the Data
Management screen.
Note
The incremental load will only take effect if there is a valid value recorded for the filter in the query
builder or the import settings panel. Once an effective incremental load filter is present for your query, it
is recommended to use Append as the import method for the data refresh job running that query. You can
change the import method from the Import Settings section on the Data Management screen.
If you want to use the import method Update or Clean and replace subset of data, don't remove the
Incremental Load field during wrangling. Otherwise, rows with the same dimensionality will be aggregated.
In addition, if you use Clean and replace subset of data, pay special attention to the granularity of the date
dimension used in your model. For more information on import methods, see Update and Schedule
Models [page 772].
The next time the source is refreshed, the latest value will be recorded for the Incremental Load Filter. Once the
Incremental Load Filter has a value, you should switch the import method to Append.
If you change the Incremental Load Filter to a different field, the new filter will not take effect until after the next
refresh.
Example:
You set an Incremental Load based on the CreateDate field, and the latest row contains the date "Jan 14, 2018",
so the system saves that value. The next time a data refresh occurs, a filter will be applied to grab only data that
is later than "Jan 14, 2018".
If you change the Incremental Load to a different field, for example TransactionID, the system does not
overwrite the old value until the next refresh. If you change the Incremental Load Filter back to CreateDate
before a refresh, the load will still have the old value and, if you refresh now, it will still grab anything after "Jan
14, 2018".
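One refresh cycle of the behavior described above can be sketched as follows (an illustrative model, not SAP code; names are assumptions):

```python
def incremental_refresh(source_rows, field, last_value):
    """Model one refresh cycle of an incremental load.

    Returns the rows newer than last_value (or all rows on the very
    first refresh, when no value has been recorded yet) together with
    the newly recorded latest value of the filter field.
    """
    if last_value is None:
        new_rows = list(source_rows)  # first refresh: full load
    else:
        new_rows = [r for r in source_rows if r[field] > last_value]
    # the latest value is recorded only after the refresh runs
    recorded = max((r[field] for r in source_rows), default=last_value)
    return new_rows, recorded
```

This also shows why switching the filter field before a refresh has no effect: the recorded value only changes when a refresh actually runs.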
To ensure the data is correct when you change an Incremental Load filter, make sure you perform the following
steps:
• These steps need to be performed manually, without being interrupted by an automatically scheduled
refresh job running on the same query.
• If the data source was using the Append import method before using the Incremental Load Filter, then
you can skip steps two to four.
You can import master data from an external data source into a public dimension.
Context
If you update model data by importing data into a model using the “Update dimension with new values” option,
public dimensions are not updated. Follow this procedure to import data into public dimensions. Dimensions
and their attributes can be imported:
• Descriptions
• Properties
• Parent-child hierarchies
You can use this procedure to import data into public account dimensions as well. The following account
dimension properties are supported at this time:
• ID
• Description
• Account Type
• Hierarchy
• Unit Type
• Measure Unit
• Scale
• Decimal Places
• Rate Type
Note
Unit Type and Measure Unit both correspond to the Units & Currencies column in the Modeler, for account
dimensions.
The Unit Type value should be either Currency or Unit. When the value is Currency, the Measure Unit
column is blank. When the value is Unit, the Measure Unit value can be a unit of measure or a packaging
unit, such as Bottles or Pieces.
• Read
• Write
• Data Locking Owner
• Person Responsible
When you finish the column mapping and click Finish Mapping, a job will be submitted and shown in the
Data Management tab. This job will start to run automatically after a few seconds, so you don't need to
(and shouldn't) execute the job directly. Also, while the job is running, you can navigate away from the Data
Management page; the job will continue to run in the background. When the job completes, a completion status
will be displayed.
Note
• This feature is available for generic and organization dimensions, but not account dimensions.
• Be sure that the user and team names to be imported are valid, because the information isn't validated
during the import process.
• Both user and team names are supported, but team names need to be prefixed with the keyword
“TEAM:”. For example: TEAM:ABCTEAM.
Note
The following procedure describes how to import data from a file into a public dimension. If you import
data from other supported data sources, refer to the following topic for data-source-specific details:
Import Data to Your Model [page 702]
Procedure
If you create a new file server connection, specify the path to the folder where the data files are located. For
example: C:\folder1\folder2 or \\servername\volume\path or /mnt/abc.
7. Choose the file you want to import.
8. If you are importing from a local Excel workbook containing multiple sheets, select the Sheet you want to
import.
If you are importing from an Excel file on a file server, the first sheet is automatically imported.
9. Specify whether you want to use the first row of the data as column headers.
10. If you're importing a .csv file, select which delimiter is used in the file.
11. If the dimension specified in the Target Dimension field isn't correct, choose the correct one from the list.
12. Select Import to begin the initial import of the source data.
After the import completes, the data integration view is displayed, where you can complete the mapping of
your new data to the public dimension.
Note
If you import a large-volume dataset, you'll be informed that a data sample will be displayed rather than
the entire dataset. Choose OK to continue.
13. In the Dimension Mapping section of the Details panel, map the new data columns to your dimension's
properties and attributes.
14. If there are any remaining issues shown in the Mapping Requirements section, resolve them.
15. If you want to omit validation for specific hierarchies, to allow non-leaf members to contain fact data, click
Select hierarchies in the Conditional Validation section of the Details panel, and then select the hierarchies
that you want to omit from validation.
For details about storing data in non-leaf members, see Entering Values with Multiple Hierarchies [page
2211].
16. If any cells in the grid appear with red highlighting, those cells have data quality issues, and those rows will
be omitted from the model if you don't resolve the issues. Select a highlighted cell to see information about
the issue in the Details panel.
When you select a column or cell, a menu appears, with options for performing transforms. There are two
parts to this menu:
• Choose the Quick Actions option to perform actions such as deleting rows that contain the
selected value.
• Select the (Smart Transformations) icon to list suggested transformations to apply. You can also
select Create a Transform to customize a transform in the transformation bar.
17. Select Finish Mapping.
Caution
A dimension data import job never deletes existing members, but only adds additional ones. For
example: You have a dimension with members A, B and C. You trigger a data import job that imports
members B, C, and D. You'd expect the dimension to then have only members B, C, and D, but in fact
the dimension now has members A, B, C, and D.
The SAP Analytics Cloud agent must be configured in order to allow importing data from a file server or
exporting a model to a file server.
Prerequisites
The SAP Analytics Cloud agent must be installed. For more information, see Installing SAP Analytics Cloud
Agent [page 472].
Note
If you have used the SAP Analytics Cloud Agent Simple Deployment Kit to install the SAP Analytics Cloud
Agent, you must update the file server import allowlist file following the instructions in the Post-Setup
Guide instead of the instructions below.
Data import from a file server, or model export to a file server, assumes the file permission level of the
user that started the Tomcat process. For import, read permission is required. For export, write permission
is required. The user depends on your Tomcat setup. This may be the Windows system user, a specific user
account that started Tomcat as a service in Windows, or the user that executes startup.bat for Windows or
startup.sh for Linux. The system administrator must ensure this user has permissions to access the local
share or the network share on a different machine.
Note
Any users under the specific tenant will have access to all files defined within the allowlist.
Context
For importing data, a file server location allowlist must be configured. Only file server paths allowed by this
allowlist can be accessed when creating a model from file, or importing data from files.
For exporting models, another file server location allowlist must be configured. Only file server paths allowed by
this allowlist can be accessed when exporting a model.
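Conceptually, the allowlist acts as a prefix filter on file server paths. The sketch below illustrates that idea only; the path names are hypothetical and the agent's actual matching logic may differ:

```python
import ntpath

# Hypothetical allowlist entries, one allowed location per line,
# as they might appear in an allowlist.txt file.
allowlist = [r"\\MyHost\Import", r"C:\SACExport"]

def is_allowed(path: str) -> bool:
    """Return True if the path falls under an allowlisted location.

    Simplified prefix check for illustration; the real agent's
    matching rules may be stricter.
    """
    normalized = ntpath.normpath(path).lower()
    return any(
        normalized.startswith(ntpath.normpath(entry).lower())
        for entry in allowlist
    )

print(is_allowed(r"\\MyHost\Import\data.csv"))   # allowed location
print(is_allowed(r"\\OtherHost\Secret\x.csv"))   # not allowlisted
```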
Procedure
2. Define the file server allowlist using one of the following methods:
Note
You must specify the complete path up to and including the allowlist file. For example, C:\<full path to
file>\allowlist.txt.
Example allowlist.txt:
\\<YourHostName>\Import
Note
• If you’re running Tomcat via command line: Shut down the existing Tomcat process, then add the
environment variable, and then start Tomcat from a new command line window.
• If you’re running Tomcat as a Windows service: Restart the Tomcat service by using the Tomcat
configuration manager.
Note
Updates to the allowlist environment variable will not take effect until the agent is restarted, and after
restart, may require up to one minute to take effect.
5. Select (Edit).
6. For data import, turn on Allow model import from File Server. For model export, turn on Allow model export
to File Server.
The Folder Path should match the path listed in your allowlist file.
10. Select Create.
Results
The environment variables and allowlist files are polled once a minute.
Related Information
You are working with a classic account model and want to import fact data to your model.
In this section, we describe each step required to import data acquired from an external data source into the
fact table of your classic account model. There are five main steps in the import process: creating an import
job, preparing the data, combining acquired data if needed, mapping the data, and running the import. The
process can be run once on demand, or scheduled to run on a recurring basis. Use the links below to
navigate straight to a specific part of the process.
For a complete view of the workflow, check out the video below.
In this video, you will import data into an existing model, map your data to the model dimensions and
measures, review the available mapping options and import methods, and validate your data.
Context
Before being able to go to the data preparation step, you first have to import data using the Import Data
from option. You can import data either from a file, or from a data source. If you’re importing data from a data
source, make sure to select an existing connection, or create a new one. For more information on how to do so,
make sure to check out Import Data to Your Model [page 702].
Restriction
The following data sources are not supported by the new model type:
• SAP BPC
• SAP Concur
• SAP ERP
• SAP Fieldglass
• Workforce Analytics
• Dow Jones
Also, for the Salesforce data source, importing data into an existing model isn't supported.
Procedure
1. Select Import Data from File or Import Data from Data Source, depending on where your
data is stored.
2. Select your file if your data is stored in a local file or connect to your data source.
3. Select the data you want to import.
The import job is added to the Draft Sources or Import Jobs list, depending on whether you're working with
a classic account model or a model with measures (i.e., the new model type). At this point, you can save
and exit the process and come back later by clicking . The uploaded draft data expires 7 days after the
upload.
The draft source is visible to anyone with imports rights to the model.
Results
Now that you've imported data to your model, you're ready for the data preparation step. See Preparing the
Data [page 716] for more information.
The data preparation step is where you can resolve data quality issues before the mapping step, and also
wrangle data and make edits such as renaming columns or creating transformations.
First, make sure to resolve data quality issues if there are any. After the initial data import, the Details panel
gives you a summary of the characteristics of the model with general information about the imported data,
including any data quality issues.
For tenants based on an SAP Data Center, issues can be listed and described under the Model Requirements
section if they are related to the model itself, or under the Data Quality section when you click a particular
dimension.
For tenants based on a non-SAP Data Center, the Details panel lists all the dimensions and measures of the
model. You can click the icon to access more information. If the application detects issues with
a dimension, the number of issues with that particular dimension is indicated right next to its name, and
clicking that number takes you straight to the Validation tab, where you can see a description of the issue.
Select a message to see the options available to resolve any identified quality issue. Use the context-sensitive
editing features and the transform bar to edit data in a cell, column, or card. When selecting a column, a cell, or
content in a cell, a menu appears with options for performing transforms. This menu has two parts:
• Choose the Quick Actions option to perform actions such as duplicate column, trim whitespace, hide
columns, or delete columns or rows. The table below lists all the available actions.
• Select the (Smart Transformations) icon to list suggested transformations to apply to the column,
such as replacing the value in a cell with a suggested value. You can also select Create a Transform
and choose from the options listed under the transformation bar displayed above the columns. The
transformation bar is used to edit and create transforms.
As you hover over a suggested transformation, the anticipated results are previewed in the grid. To apply
the transformation, simply select the transform. You can manually enter your own transformation in the
transformation bar; as the transform is built, a preview is provided in the affected column, cell, or content
• Delete Rows: Delete rows in the data, either by selecting individual members, or by specifying a range
(not possible for text columns; only numerical and Date columns). Provided only as a quick action.
• Trim Whitespace: Remove spaces, including non-printing characters, from the start and end of strings.
Provided only as a quick action.
• Duplicate Column: Create a copy of an existing column. Provided only as a quick action.
• Delete Column: Delete a column. Use the Shift key to select and then delete multiple columns. Provided
only as a quick action.
• Remove duplicate rows: Remove duplicate rows when creating or adding data to a model. Provided only
as a task bar icon.
• Concatenate: Combine two or more columns into one. An optional value can be entered to separate the
column values. Syntax: Concatenate [<Column1>], [<Column2>]… using "value"
• Split: Split a text column on a chosen delimiter, starting from left to right. The number of splits can be
chosen by the user. Syntax: Split [<Column>] on "delimiter" repeat "#"
• Extract: Use this transform to extract a block of text specified as numbers, words, or targeted values
within a column to a new column. Syntax: Extract [<what to extract>] [<where to extract>]
[<which occurrence>] [""] from [<column name>] [<include value option>].
The options for where to extract are:
• before
• after
• between
Note
You must specify two target values when using between.
The options for which occurrence are:
• first
• last
• occurrence: Allows you to specify the position of the target from one to ten.
Note
Use occurrence when there are multiple instances of the target.
• Replace: Replaces either an entire cell, or content that could be found in multiple different cells. Syntax:
Replace (<cell/content>) in [<Column>] matching "value" with "value"
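The transform syntax above maps onto familiar string operations. The following sketch mimics the behavior of Concatenate, Split, and Replace in plain Python (an illustration only, not the transformation engine):

```python
# Two invented source columns.
col1, col2 = ["North", "South"], ["2023", "2024"]

# Concatenate [Column1], [Column2] using "-"
concatenated = [a + "-" + b for a, b in zip(col1, col2)]

# Split [Column] on "-" repeat "1": one split, applied left to right
split_parts = [v.split("-", 1) for v in concatenated]

# Replace content in [Column] matching "North" with "N"
replaced = [v.replace("North", "N") for v in concatenated]

print(concatenated)  # values joined with the separator
print(split_parts)   # each value split back into two parts
print(replaced)      # matching content replaced
```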
A history of all transforms you implement is displayed in Transform Log. There are two levels of transformation
logs recorded: transforms on the entire dataset, and transformations on the currently selected column. Hover
over a specific transform in the Transform Log to highlight the impacted column. You can roll back the change
by either deleting the entries in the history or using the (Undo/Redo) buttons on the toolbar. You can remove
transforms in the Transform Log panel out of sequential order provided that there are no dependencies.
When selecting a dimension, measure, or attribute, the Data Distribution section gives contextual information
about a dimension with numerical or textual histograms:
• Numerical histograms are vertical, and represent the range of values along the x-axis. Hover over any bar
to show the count, minimum, and maximum values for the data in the bar. The number of bars can also
be adjusted by using the slider above the histogram. Checking the Show Outliers (SAP Data Center) or
Note
The displayed histogram is determined by the data type of the column. A column with numbers could
still be considered as text if the data type is set to text.
Once you have checked and fixed all errors, if you're working with a dataset sample, click Validate Data to apply
your transforms across the entire dataset and check for data quality errors.
Note
Validating data in the data preparation step is only possible for tenants based on a non-SAP Data Center.
For tenants based on an SAP Data Center, you can only validate the full dataset when reviewing the import.
See Reviewing and Running the Import [page 725] for more information.
When you create an import job, the application automatically qualifies the data. Typically, columns containing
text are identified as dimensions, and columns containing numeric data are identified as measures. You can
still change the data qualification to another type if needed. For example, you can change a Date
dimension to an Organization dimension.
If you’re unsure about dimension types, make sure to check out Learn About Dimensions and Measures [page
601].
After you've selected your dimensions make sure to follow the best practices described in the sections below.
For non-planning-enabled models only, you can import dimensions with more than the maximum number of
members. However, the following restrictions apply:
• In the data integration view, these dimensions cannot have any dimension attributes added to them, such
as description or property.
• Once the dimensions are imported into the Modeler, they have only one ID column, and are read-only.
• The dimensions can't be used as exception aggregation dimensions or required dimensions.
• The dimensions can't be referenced in formulas.
Click from the menu toolbar, and use the Create Calculated Column interactive dialog to build up your
calculated column. Add a name, and build the formula for your column in the Edit Formula space. You can either
select an entry from Formula Functions as a starting point, or type "[" to view all the available columns. Press
Ctrl + Space or Cmd + Space to view all the available functions and columns.
Click Preview to view a 10-line sample of the results of the formula. Click OK to add the calculated column to the
model. If necessary, you can go back and edit the calculated column’s formula by clicking Edit Formula in the
Designer panel.
For a listing of supported functions, see Supported Functions for Calculated Columns [page 745].
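A calculated column is simply a formula evaluated for each row. The sketch below illustrates the idea with an invented Margin formula and invented column names; it does not use the product's formula language:

```python
# Invented rows with two numeric columns.
rows = [
    {"Revenue": 120.0, "Cost": 80.0},
    {"Revenue": 95.0, "Cost": 60.0},
]

# Hypothetical calculated column: Margin = Revenue - Cost,
# evaluated once per row, like a calculated column formula.
for row in rows:
    row["Margin"] = row["Revenue"] - row["Cost"]

print([row["Margin"] for row in rows])
```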
Dimension Attributes
Dimension attributes hold information that is not suitable for standalone dimensions. You can use them to
create charts, filters, calculations, input controls, linked analyses, and tables (with “import data” models only).
For example, if you have a Customers column, and a Phone Numbers column, you could set Phone Numbers to
be an attribute of the Customers dimension.
• Description: The column can be used for descriptive labels of the dimension members when the member
IDs (the unique identifiers for the dimension members) are technical and not easily understandable.
For example, if your imported data contains a pair of related columns Product_Description and Product_ID
with data descriptions and data identifiers, you can set Product_Description to be the Description attribute
for the Product_ID dimension. Note that the Product_ID column would then need to contain unique
identifiers for the dimension members.
• Property: The column represents information that is related to the dimension; for example, phone
numbers.
• Parent-Child Hierarchy (Parent): The column is the parent of the parent/child hierarchy pair.
For example, if your imported data contains the two columns Country and City, you can set the Country
column to be the parent of the Country-City hierarchy.
The hierarchy column is a free-format text attribute where you can enter the ID value of the parent
member. By maintaining parent-child relationships in this way, you can build up a data hierarchy that is
used when viewing the data to accumulate high-level values that can be analyzed at lower levels of detail.
You can also create level-based hierarchies when your data is organized into levels, such as Product
Category, Product Group, and Product. When the data is displayed in a story, hierarchies can be expanded
or collapsed. In the toolbar, select (Level Based Hierarchy). For more information about hierarchies, see
Learn About Hierarchies [page 613].
• Currency: If you set a column to be an Organization dimension, the Currency attribute is available. The
Organization dimension offers an organizational analysis of the account data, based, for example, on
geographic entities. You can add the Currency attribute to provide currency information for the geographic
entities.
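A parent-child hierarchy maintained this way lets leaf-level values accumulate up to their parents. A minimal sketch of that rollup, using the Country-City example with invented values:

```python
# The parent attribute holds the ID of each member's parent,
# as in the Country-City example above. France is the root.
parent = {"Paris": "France", "Lyon": "France", "France": None}
values = {"Paris": 100, "Lyon": 50}  # fact data on leaf members

def rollup(member: str) -> int:
    """Sum a member's own value with the values of all its descendants."""
    total = values.get(member, 0)
    for child, p in parent.items():
        if p == member:
            total += rollup(child)
    return total

print(rollup("France"))  # parent accumulates its cities' values
print(rollup("Paris"))   # a leaf just returns its own value
```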
During data import, anomalies in your data can prevent the data from being imported properly, or prevent
the model from being created. If issues are found in your data, the impacted data cells are highlighted, and
• In account dimensions, dimension member IDs cannot contain the following characters: , ; : ' [ ] =.
• Numeric data cells in measures cannot contain non-numeric characters, and scientific notation is not
supported.
• When importing data to an existing model, cells in a column that is mapped to an existing dimension must
match the existing dimension members. Unmatched values will result in those rows being omitted from the
model.
• For stories, if any member IDs are empty, you can type values in those cells, or select Delete empty rows in
the Details panel to remove those rows.
• When creating a new model, if member IDs are empty, they are automatically filled with the “#” value if you
select the Fill applicable empty ID cells with the "#" value option. Otherwise, those rows are omitted from
the model.
Note
This option is only available for tenants based on an SAP Data Center (Neo).
• In dimensions and properties, a single member ID cannot correspond to different Descriptions in multiple
rows (but a single Description can correspond to multiple member IDs).
For example, if member IDs are employee numbers, and Descriptions are employee names, you can have
more than one employee with the same name, but cannot have more than one employee with the same
member ID.
• In a Date dimension column, cell values must match the format specified in the Details panel.
The following date formats are supported: dd-mm-yy, dd.mm.yy, dd/mm/yy, dd-mm-yyyy, dd.mm.yyyy,
dd/mm/yyyy, dd-mmm-yyyy, dd-mmmm-yyyy, mm-dd-yy, mm.dd.yy, mm/dd/yy, mm-dd-yyyy,
mm.dd.yyyy, mm/dd/yyyy, mm.yyyy, mmm yyyy, yy-mm-dd, yy.mm.dd, yy/mm/dd, yyyq, yyyy, yyyymm,
yyyy-mm, yyyy.mm, yyyy/mm, yyyymmdd, yyyy-mm-dd, yyyy-mm/dd, yyyy.mm.dd, yyyy/mm-dd,
yyyy/mm/dd, yyyy.mmm, yyyyqq.
Examples:
• mmm: JAN/Jan/jan
• mmmm: JANUARY/January/january
• q: 1/2/3/4
• qq: 01/02/03/04
• Latitude and longitude columns, from which location dimensions are created, must contain values within
the valid latitude and longitude ranges.
• For planning-enabled models, in a hierarchy measure, non-leaf-node members are not allowed.
Note
For analytic (non-planning-enabled) models only, non-leaf node members are allowed, but be aware
of the effects of this behavior. For example: in an organizational chart that includes employee salaries,
the manager has her individual salary, and her staff members have their own salaries as well. In a
visualization, do you expect the manager’s data point to reflect her individual salary, or the sum of her
staff members' salaries?
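Several of these rules can be expressed as simple predicates. The sketch below mimics two of them, the forbidden characters in account member IDs and the unique-Description rule (illustrative only, not the product's validation code):

```python
# Account dimension member IDs cannot contain , ; : ' [ ] =
FORBIDDEN = set(",;:'[]=")

def valid_account_id(member_id: str) -> bool:
    """True if the ID contains none of the forbidden characters."""
    return not (set(member_id) & FORBIDDEN)

def unique_descriptions(rows) -> bool:
    """A member ID may not map to different Descriptions,
    but one Description may map to multiple member IDs."""
    seen = {}
    for member_id, desc in rows:
        if seen.setdefault(member_id, desc) != desc:
            return False
    return True

print(valid_account_id("ACC_100"))                         # valid ID
print(valid_account_id("ACC;100"))                         # forbidden ';'
print(unique_descriptions([("1001", "Ada"), ("1002", "Ada")]))
print(unique_descriptions([("1001", "Ada"), ("1001", "Grace")]))
```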
After importing raw data into a new or existing model, or into a story, you may need to perform some data
preparation.
You can combine data from another source with your acquired data by using up to three matching columns to
join the two datasets.
1. In the Actions section of the toolbar, select the (Combine Data) icon.
The Let's add some data! dialog is displayed.
2. Select whether you want to add data using a file, or using a data source.
Once the new data is loaded, the Combine Data dialog is displayed. Under Combine Settings, a table
representing all columns in the original data is displayed on the left, and a table listing columns from the
new dataset is displayed on the right.
3. Choose the most appropriate columns in each table to combine data. Click to select, or drag columns to
the field provided under Combine Column for each table.
Note
The columns should be the best match and should not contain duplicates. You can use up to three
columns to join the two datasets.
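Combining on matching columns behaves like a relational join keyed on the chosen combine columns. A sketch with invented column names:

```python
# Original acquired data and the new data to combine with it.
original = [{"id": "A", "region": "EMEA"}, {"id": "B", "region": "APJ"}]
new_data = [{"id": "A", "sales": 100}, {"id": "B", "sales": 200}]

# Join the two datasets on the combine column ("id").
lookup = {row["id"]: row for row in new_data}
combined = [
    {**row, **lookup[row["id"]]}
    for row in original
    if row["id"] in lookup
]

print(combined)  # each original row enriched with matching new columns
```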
When you upload acquired data containing cross tables to the data integration view, the Transpose Columns
into Rows action enables you to change selected columns into rows. The new table format can then be used to
either import data or create a new model.
Note
You can only perform one transpose transformation per session on a dataset. The table resulting from the
transformation cannot contain more than 50 million cells.
1. Once the acquired data (or sample) is loaded in the data integration view, select the icon from the
Actions menu.
The Transpose Columns into Rows panel appears in Details.
2. Under Transpose Columns, select columns you want to include in the new table format. Choose Select All
to include all the listed columns.
Note
To undo the transformation, open the Transformation Log and delete the associated entry.
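Transposing columns into rows is the classic unpivot of a cross table: each selected column becomes a key-value pair on its own row. A sketch with invented data:

```python
# Cross table: one column per year (invented column names).
cross = [{"product": "P1", "2023": 10, "2024": 12}]
transpose_cols = ["2023", "2024"]  # the columns selected to transpose

# Each selected column becomes a (year, value) row pair.
rows = [
    {"product": r["product"], "year": c, "value": r[c]}
    for r in cross
    for c in transpose_cols
]

print(rows)  # one row per product/year combination
```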
Now that the data is prepared, you can start the mapping process. The application automatically pre-maps
some parts of the data, and you can map the remaining data manually.
Context
Restriction
Restrictions apply when mapping dimensions and attributes. These issues are listed in the Review Import
step just before running the import. To avoid having issues during the import, make sure to follow these
best practices:
Procedure
1. Switch to the Card View and determine what remains to be mapped. Cards display as either mapped or
unmapped entities. A card represents mapped data if it is shaded solid and has defined borders. Cards that
must be mapped appear transparent and borderless.
Dimensions with attributes appear stacked and can be expanded. Mapping attributes is optional.
2. Drag and drop an unmapped imported dimension, measure or attributes on the associated card.
The date dimension can either be matched, or set with a default value. To set a default value, select the
date dimension and click Set a default value in the Details panel. After you've set a default value, you can
change it by clicking Change Default Value.
Note
Check Apply fiscal settings to column values in Details to map imported fiscal period data into a date
dimension in a model enabled to support fiscal year. Review the format listed under Format and change
it if required.
3. Repeat step 2 until you have mapped all dimensions, at least one measure, and the attributes.
You can check the mapping progress using the dedicated Dimensions, Attributes, and Measures
headings. When each bar is fully colored grey, the mapping is complete.
4. In the Details panel, specify the version for which you're importing data by checking either Existing Version
or New Version. If you have multiple versions available, select the source version dimension card and click
Map Versions in the Details panel to map each version to the desired version and category.
5. Once you have mapped all cards, check for mapping issues. If there are any, fix them before moving to the
next step.
Results
Once you've fixed remaining issues, you're ready to validate the data, review and run the import. For more
information, check out Reviewing and Running the Import [page 725].
Once the mapping is complete, you're ready to review the import options and method before running the
import job.
Context
Procedure
1. Check whether there are pending data issues after the mapping.
All issues must be solved to make sure all data is imported into your model. If you decide to create the
model ignoring some of the detected errors, either entire rows or invalid cells will be omitted from the new
model.
2. Review the import option. For tenants based on an SAP Data Center, click View all options under Mapping
Options in the Details panel. For tenants based on a non-SAP Data Center, click to access the
preferences.
• Update dimensions with new values (SAP Data Center only): Select this option if you're importing data
to a model that already contains data and want to update the data with new dimensions members,
attributes, and hierarchy changes.
• Convert value symbol by account type (SAP Data Center only): Select this option to match the value
symbol, positive or negative, to each account type in the model, for when values are stored as positive
regardless of whether they represent income or expense accounts.
• Fill applicable empty ID cells with the "#" value (SAP Data Center only): Select this option to fill empty
ID cells with a "#" value to preserve rows without a value in the Dimension ID column. Otherwise,
empty ID cells are deleted.
• Reverse Sign by Account Type (Non-SAP Data Center only): Select this option to import your INC
and LEQ account values with reversed signs (+/-). This option is only available if your model has an
account dimension. If you want your INC and LEQ values to show up as positive, import them as
negative values, and vice-versa. Before importing data, make sure to check the signs of the original
source values first, and then set the option accordingly.
• Update Local Dimensions with New Members (Non-SAP Data Center only): Select this option to map
your source data to dimension properties to update their members during the import.
• Conditional Validation: Select which hierarchies to validate against. Validating against selected
hierarchies will prevent data from being imported to non-leaf members. This option is only available
if you have at least one parent-child hierarchy in your model. For more details about storing data in
non-leaf members, see Entering Values with Multiple Hierarchies [page 2211].
3. Select an import method.
Note
For tenants based on an SAP Data Center, import methods are listed under Import Method in the
Details panel. For tenants based on a non-SAP Data Center, import methods can be found in the
preferences .
• Update: The target model’s measure values for the dimension member combinations specified by the
source data are updated by the corresponding measure values in the source data. If the particular
dimension member combination has no measure values in the model prior the import, new value is
inserted.
• Append: The target model’s measure values for the dimension member combinations specified by the
source data are summed with the corresponding measure values in the source data. If
the particular dimension member combination has no measure values in the model prior to the import,
a new value is inserted. For a more refined scope, use either the Clean and replace selected version data
or Clean and replace subset of data update options.
• Clean and replace selected version data: Deletes the existing data and adds new entries to the target
model, only for the versions that you specify in the import. You can choose to use either the existing
version or specify a new version under Version. If you specify to import data for the "actual" version,
only the data in the "actual" version is cleaned and replaced. Other versions, for example "planning",
are not affected.
• Clean and replace subset of data: Replaces existing data and adds new entries to the target model for
a defined subset of the data based on a scope of selected versions using either the Existing Version
or New Version buttons. You can also limit the scope to specific dimensions. To define a scope based
on a combination of dimensions select + Add Scope and use the Select a dimension field to specify a
dimension.
When a Date dimension is defined in the scope, the time range in the source data (determined by the
minimum time value and maximum time value of the source data, as well as the granularity of the date
Note
These options affect measures and dimensions. To include both measures and dimensions, see Update
and Schedule Models [page 772].
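The difference between Update and Append comes down to whether matching fact values are overwritten or summed. A sketch of both methods, with fact data modeled as a dictionary keyed by dimension member combination (an illustration, not the import engine):

```python
# Existing fact data in the model, keyed by (Date, City); values invented.
model = {("2023", "Paris"): 100}
source = {("2023", "Paris"): 40, ("2023", "Lyon"): 25}

# Update: matching combinations are overwritten; new ones are inserted.
updated = {**model, **source}

# Append: matching combinations are summed; new ones are inserted.
appended = dict(model)
for key, value in source.items():
    appended[key] = appended.get(key, 0) + value

print(updated[("2023", "Paris")])   # overwritten with the source value
print(appended[("2023", "Paris")])  # model and source values summed
print(appended[("2023", "Lyon")])   # new combination inserted
```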
4. For tenants based on an SAP Data Center, once you have checked and fixed all errors, if you're working with
a dataset sample, click Validate Data to apply your transforms across the entire dataset and check for data
quality errors.
Note
Validating data right before running the import is only possible for tenants based on an SAP Data
Center. For tenants based on a non-SAP Data Center, you can only validate the full dataset during the
data preparation step. See Preparing the Data [page 716] for more information.
5. Click Run Import (non-SAP Data Center tenant) or Finish Mapping (SAP Data Center tenant).
After the import job has run, you can find statistics about that job in the data timeline. Within these statistics,
you can find the number of rows rejected. If you want to analyze the job, you can click the Rejection Summary
link to download a .CSV file and take a closer look at the rows that have been rejected. The rejection summary
helps you understand the reason why rows have been rejected, so you can then figure out how to optimize your
job settings.
You might notice a discrepancy between the number of rows referenced in the data timeline, and the number
of rows listed in the rejection summary file. To optimize performance, the rejection summary only provides
you with a sample of the rejected rows and is capped at 2000 rows. The data import process keeps a limited
number of rejected rows to avoid overloading the system with temporary table data.
Note that the sample is not based on unique rejected values, but rather on validation error type per dimension combination.
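A quick way to investigate a downloaded rejection summary is to group its rows by rejection reason. The sketch below assumes hypothetical column names and reason texts; the real .CSV columns may differ:

```python
import csv
import collections
import io

# Hypothetical rejection summary content; the actual file's
# column names and reason texts may differ.
summary_csv = io.StringIO(
    "row,reason\n"
    "12,Unmatched dimension member\n"
    "57,Unmatched dimension member\n"
    "98,Invalid date format\n"
)

# Count rejected rows per reason to see what to fix first.
counts = collections.Counter(
    row["reason"] for row in csv.DictReader(summary_csv)
)

print(counts.most_common())
```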
Schedule a data import job if you want to refresh data against the original data source. You can import data
from multiple queries and data sources into a model, and each of these imports can be separately scheduled.
Context
Procedure
Note
A grouping can include jobs from public dimensions as well as the model. Running the grouping
refreshes the public dimensions and model together. You can ungroup your import at any time by clicking
• None: Select this option when you want to update the data manually.
• Once: The import is performed only once, at a preselected time.
• Recurring: The import is executed according to a recurrence pattern.
Once your new model is created, you might want to import fact data to your model.
In this section, we describe each step required to import data acquired from an external data source into the
fact table of your new model type. There are four main steps in the import process: creating an import job,
preparing the data, mapping the data, and running the import. You can also schedule import jobs to import data on
a one time or recurring basis. Depending on whether you're working with an SAP data center (Neo) or non-SAP
Context
Before being able to go to the data preparation step, you first have to import data using the Import Data
from option. You can import data either from a file, or from a data source. If you’re importing data from a data
source, make sure to select an existing connection, or create a new one. For more information on how to do so,
make sure to check out Import Data to Your Model [page 702].
Restriction
The following data sources are not supported by the new model type:
• SAP BPC
• SAP Concur
• SAP ERP
• SAP Fieldglass
• Workforce Analytics
• Dow Jones
Also, for the Salesforce data source, importing data into an existing model isn't supported.
Procedure
1. Select Import Data from File or Import Data from Data Source, depending on where your
data is stored.
2. Select your file if your data is stored in a local file or connect to your data source.
3. Select the data you want to import.
The import job is added to the Draft Sources or Import Jobs list, depending on whether you're working with
a classic account model or a model with measures (i.e., the new model type). At this point, you can save
Note
The draft source is visible to anyone with imports rights to the model.
Results
Now that you've imported data to your model, you're ready for the data preparation step. See Preparing the
Data [page 716] for more information.
The data preparation step is where you can resolve data quality issues before the mapping step, and also
wrangle data and make edits such as renaming columns or creating transformations.
First, make sure to resolve data quality issues if there are any. After the initial data import, the Details panel
gives you a summary of the characteristics of the model with general information about the imported data,
including any data quality issues.
For tenants based on an SAP Data Center, issues can be listed and described under the Model Requirements
section if they are related to the model itself, or under the Data Quality section when you click a particular
dimension.
For tenants based on a non-SAP Data Center, the Details panel lists all the dimensions and measures of the
model. You can click the icon to access more information. If the application detects issues with
a dimension, the number of issues with that particular dimension is indicated right next to its name, and
clicking that number takes you straight to the Validation tab, where you can see a description of the issue.
Select a message to see the options available to resolve any identified quality issue. Use the context-sensitive
editing features and the transform bar to edit data in a cell, column, or card. When selecting a column, a cell, or
content in a cell, a menu appears with options for performing transforms. This menu has two parts:
• Choose the Quick Actions option to perform actions such as duplicate column, trim whitespace, hide
columns, or delete columns or rows. The table below lists all the available actions.
• Select the (Smart Transformations) icon to list suggested transformations to apply to the column,
such as replacing the value in a cell with a suggested value. You can also select Create a Transform
and choose from the options listed under the transformation bar displayed above the columns. The
transformation bar is used to edit and create transforms.
• Delete Rows: Delete rows in the data, either by selecting individual members, or by specifying a range (not possible for text columns; only numerical and Date columns). Provided only as a quick action.
• Trim Whitespace: Remove spaces, including non-printing characters, from the start and end of strings. Provided only as a quick action.
• Duplicate Column: Create a copy of an existing column. Provided only as a quick action.
• Delete Column: Delete a column. Use the Shift key to select and then delete multiple columns. Provided only as a quick action.
• Remove Duplicate Rows: Remove duplicate rows when creating or adding data to a model. Provided only as a task bar icon.
• Concatenate: Combine two or more columns into one. An optional value can be entered to separate the column values. Syntax: Concatenate [<Column1>], [<Column2>]… using "value"
• Split: Split a text column on a chosen delimiter, starting from left to right. The number of splits can be chosen by the user. Syntax: Split [<Column>] on "delimiter" repeat "#"
• Extract: Use this transform to extract a block of text specified as numbers, words, or targeted values within a column to a new column. Syntax: Extract [<what to extract>] [<where to extract>] [<which occurrence>] [""] from [<column name>] [<include value option>]
The <where to extract> option can be before, after, or between.
Note
You must specify two target values when using between.
The <which occurrence> option can be first, last, or occurrence, which allows you to specify the position of the target from one to ten.
Note
Use occurrence when there are multiple instances of the target.
• Replace: Replaces either an entire cell or content that could be found in multiple different cells. Syntax: Replace (<cell/content>) in [<Column>] matching "value" with "value"
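To make the behavior of the text transforms concrete, the following Python sketch mimics what Concatenate, Split, and Replace do on a sample column. This is illustrative only: the column names, separator, and sample values are invented, and the product applies these transforms through its own engine, not through code like this.

```python
# Illustrative only: Python equivalents of the Concatenate, Split, and
# Replace transforms; all column names and values are made up.
rows = [
    {"City": "Paris", "Country": "France", "Code": "FR-75"},
    {"City": "Lyon",  "Country": "France", "Code": "FR-69"},
]

# Concatenate [City], [Country] using "-"
for r in rows:
    r["City_Country"] = r["City"] + "-" + r["Country"]

# Split [Code] on "-" repeat "1": one split, applied left to right
for r in rows:
    left, right = r["Code"].split("-", 1)
    r["Code_1"], r["Code_2"] = left, right

# Replace content in [Country] matching "France" with "FR"
for r in rows:
    r["Country"] = r["Country"].replace("France", "FR")

print(rows[0])
```

Note that each transform produces or rewrites a column, which is why the Transform Log can highlight the impacted column for every entry.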
A history of all transforms you implement is displayed in Transform Log. There are two levels of transformation
logs recorded: transforms on the entire dataset, and transformations on the currently selected column. Hover
over a specific transform in the Transform Log to highlight the impacted column. You can roll back the change
by either deleting the entries in the history or using the (Undo/Redo) buttons on the toolbar. You can remove
transforms in the Transform Log panel out of sequential order provided that there are no dependencies.
When selecting a dimension, measure, or attribute, the Data Distribution section gives contextual information
about a dimension with numerical or textual histograms:
• Numerical histograms are vertical, and represent the range of values along the x-axis. Hover over any bar
to show the count, minimum, and maximum values for the data in the bar. The number of bars can also be adjusted.
Note
The displayed histogram is determined by the data type of the column. A column with numbers could
still be considered as text if the data type is set to text.
Once you have checked and fixed all errors, if you're working with a dataset sample, click Validate Data to apply
your transforms across the entire dataset and check for data quality errors.
Note
Validating data in the data preparation step is only possible for tenants based on a non-SAP Data Center.
For tenants based on an SAP Data Center, you can only validate the full dataset when reviewing the import.
See Reviewing and Running the Import [page 725] for more information.
When you create an import job, the application automatically qualifies the data. Typically, columns containing
text are identified as dimensions, and columns containing numeric data are identified as measures. You can
still change the qualification to another type if needed. For example, you can change a Date
dimension to an Organization dimension.
If you’re unsure about dimension types, make sure to check out Learn About Dimensions and Measures [page
601].
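The default qualification rule described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the application's actual heuristic (which also considers data types, headers, and other signals):

```python
# Minimal sketch of the default qualification rule: columns whose sample
# values all parse as numbers become measures, the rest become dimensions.
# This is an illustration, not the product's actual detection logic.
def qualify(column_values):
    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False
    return "measure" if all(is_number(v) for v in column_values) else "dimension"

print(qualify(["12.5", "7", "3.1"]))   # measure
print(qualify(["Paris", "Lyon"]))      # dimension
```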
After you've selected your dimensions make sure to follow the best practices described in the sections below.
For non-planning-enabled models only, you can import dimensions with more than the maximum number of
members. However, the following restrictions apply:
• In the data integration view, these dimensions cannot have any dimension attributes added to them, such
as description or property.
• Once the dimensions are imported into the Modeler, they have only one ID column, and are read-only.
• The dimensions can't be used as exception aggregation dimensions or required dimensions.
Calculated Columns
While you're preparing data in the data integration view, you can create a calculated column based on input
from another column and the application of a formula expression.
Click from the menu toolbar, and use the Create Calculated Column interactive dialog to build up your
calculated column. Add a name, and build the formula for your column in the Edit Formula space. You can either
select an entry from Formula Functions as a starting point or type “[” to view all the available columns. Press
Ctrl + Space or Cmd + Space to view all the available functions and columns.
Click Preview to view a 10-line sample of the results of the formula. Click OK to add the calculated column to the
model. If necessary, you can go back and edit the calculated column’s formula by clicking Edit Formula in the
Designer panel.
For a listing of supported functions, see Supported Functions for Calculated Columns [page 745].
Dimension Attributes
Dimension attributes hold information that is not suitable for standalone dimensions. You can use them to
create charts, filters, calculations, input controls, linked analyses, and tables (with “import data” models only).
For example, if you have a Customers column, and a Phone Numbers column, you could set Phone Numbers to
be an attribute of the Customers dimension.
• Description: The column can be used for descriptive labels of the dimension members when the member
IDs (the unique identifiers for the dimension members) are technical and not easily understandable.
For example, if your imported data contains a pair of related columns Product_Description and Product_ID
with data descriptions and data identifiers, you can set Product_Description to be the Description attribute
for the Product_ID dimension. Note that the Product_ID column would then need to contain unique
identifiers for the dimension members.
• Property: The column represents information that is related to the dimension; for example, phone
numbers.
• Parent-Child Hierarchy (Parent): The column is the parent of the parent/child hierarchy pair.
For example, if your imported data contains the two columns Country and City, you can set the Country
column to be the parent of the Country-City hierarchy.
The hierarchy column is a free-format text attribute where you can enter the ID value of the parent
member. By maintaining parent-child relationships in this way, you can build up a data hierarchy that is
used when viewing the data to accumulate high-level values that can be analyzed at lower levels of detail.
You can also create level-based hierarchies when your data is organized into levels, such as Product
Category, Product Group, and Product. When the data is displayed in a story, hierarchies can be expanded
or collapsed. In the toolbar, select (Level Based Hierarchy). For more information about hierarchies, see
Learn About Hierarchies [page 613].
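The parent-child rollup described above can be sketched as follows. This is a conceptual illustration of how leaf values accumulate into higher-level members; the member names and figures are invented, and the product performs this aggregation internally:

```python
# Hypothetical parent-child hierarchy: each member's parent is given by ID,
# and leaf values accumulate into every ancestor. Names/values are invented.
parent = {"Paris": "France", "Lyon": "France", "France": "Europe"}
values = {"Paris": 100.0, "Lyon": 50.0}

totals = dict(values)
for member, value in values.items():
    node = parent.get(member)
    while node is not None:                  # walk up to each ancestor
        totals[node] = totals.get(node, 0.0) + value
        node = parent.get(node)

print(totals["France"], totals["Europe"])    # 150.0 150.0
```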
• Currency: If you set a column to be an Organization dimension, the Currency attribute is available. The
Organization dimension offers an organizational analysis of the account data, based, for example, on
geographic entities. You can add the Currency attribute to provide currency information for the geographic
entities.
During data import, anomalies in your data can prevent the data from being imported properly, or prevent
the model from being created. If issues are found in your data, the impacted data cells are highlighted, and
messages in the Details panel explain the issues. You'll need to resolve the following issues before the data can
be fully imported or the model can be created:
• In account dimensions, dimension member IDs cannot contain the following characters: , ; : ' [ ] =.
• Numeric data cells in measures cannot contain non-numeric characters, and scientific notation is not
supported.
• When importing data to an existing model, cells in a column that is mapped to an existing dimension must
match the existing dimension members. Unmatched values will result in those rows being omitted from the
model.
• For stories, if any member IDs are empty, you can type values in those cells, or select Delete empty rows in
the Details panel to remove those rows.
• When creating a new model, if member IDs are empty, they are automatically filled with the “#” value if you
select the Fill applicable empty ID cells with the "#" value option. Otherwise, those rows are omitted from
the model.
Note
This option is only available for tenants based on an SAP Data Center (Neo).
• In dimensions and properties, a single member ID cannot correspond to different Descriptions in multiple
rows (but a single Description can correspond to multiple member IDs).
For example, if member IDs are employee numbers, and Descriptions are employee names, you can have
more than one employee with the same name, but cannot have more than one employee with the same
member ID.
• In a Date dimension column, cell values must match the format specified in the Details panel.
The following date formats are supported: dd-mm-yy, dd.mm.yy, dd/mm/yy, dd-mm-yyyy, dd.mm.yyyy,
dd/mm/yyyy, dd-mmm-yyyy, dd-mmmm-yyyy, mm-dd-yy, mm.dd.yy, mm/dd/yy, mm-dd-yyyy,
mm.dd.yyyy, mm/dd/yyyy, mm.yyyy, mmm yyyy, yy-mm-dd, yy.mm.dd, yy/mm/dd, yyyq, yyyy, yyyymm,
yyyy-mm, yyyy.mm, yyyy/mm, yyyymmdd, yyyy-mm-dd, yyyy-mm/dd, yyyy.mm.dd, yyyy/mm-dd,
yyyy/mm/dd, yyyy.mmm, yyyyqq.
Examples:
• mmm: JAN/Jan/jan
• mmmm: JANUARY/January/january
• q: 1/2/3/4
• qq: 01/02/03/04
• Latitude and longitude columns, from which location dimensions are created, must contain values within
the valid latitude and longitude ranges.
• For planning-enabled models, in a hierarchy measure, non-leaf-node members are not allowed.
Note
For analytic (non-planning-enabled) models only, non-leaf node members are allowed, but be aware
of the effects of this behavior. For example: in an organizational chart that includes employee salaries,
the manager has her individual salary, and her staff members have their own salaries as well. In a
visualization, do you expect the manager’s data point to reflect her individual salary, or the sum of her
staff members' salaries?
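Several of the checks listed above are mechanical. As an illustration, the date-format rule might be checked like this, using hypothetical Python strptime equivalents for a few of the listed patterns (quarter-based formats such as yyyyq have no direct strptime equivalent and are omitted):

```python
from datetime import datetime

# Illustrative only: a few of the supported date patterns expressed as
# Python strptime formats, to show how a cell is checked against the
# format specified for its column.
patterns = {
    "dd/mm/yyyy": "%d/%m/%Y",
    "yyyy-mm-dd": "%Y-%m-%d",
    "mmm yyyy":   "%b %Y",
}

def matches(cell, sac_format):
    try:
        datetime.strptime(cell, patterns[sac_format])
        return True
    except ValueError:
        return False

print(matches("23/03/2012", "dd/mm/yyyy"))  # True
print(matches("2012-03-23", "dd/mm/yyyy"))  # False: wrong format for the column
```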
Now that the data is prepared, you can start the mapping process. The application automatically pre-maps
some parts of the data, and you can map the remaining data manually.
Context
Restriction
Restrictions apply when mapping dimensions and attributes. These issues are listed in the Review Import
step just before running the import. To avoid having issues during the import, make sure to follow these
best practices:
Procedure
1. Map all the unmapped dimensions and at least one measure: drag a dimension or measure from the Source Column
and drop it on a dimension or measure in the Unmapped column to map it. You can also hover over an
unmapped source column and click to open the Quick Map menu and select a target column. If needed,
click to filter the source columns and show only the mapped or unmapped columns.
If at any time, you want to discard the mapping and restart from scratch, click in the toolbar to reset the
mapping.
2. Optional: If needed, you can also set default values to dimensions. In the Unmapped column, click
in the Source section next to an unmapped dimension, and select Set default value to "#" or Set default
value....
If you leave the data type set to Date, you can change the date format in the data preparation. If you set
the data type to String, you can choose from formats allowed by the target date dimension, especially
if the target dimension has fiscal period settings and week granularity. Lastly, if you set the data type to
Integer, your values are assumed to be dates formatted without separators.
b. If the target date dimension has fiscal settings and you want to automatically replicate the fiscal
settings to the source date dimension, click Apply fiscal settings to column values.
4. Map the version:
• If you have a single version, you can select a default value and map all rows to that version. Click in
the Source section next to the version and click Set default value, and select a version. Each version is
described with its version category.
• Alternatively, if the data you're importing has a version, you now have to handle multiple versions in the
dataset. Map the version you have just imported to the version dimension in the Target column. This
allows you to assign rows to multiple versions.
Note
If you have multiple versions, you must map the Category attribute in the next step. If you
don't have one, go back to the data preparation step and create one.
5. Once you have mapped all the dimensions and at least one measure, click Next.
6. Map the attributes.
If you’re looking for specific attributes to map, use the expand and collapse functions, or the search
function. You can also filter on unmapped or mapped attributes. For each dimension, if the data you’re
importing has new members, the application indicates the exact number.
7. Click Next.
You're now in the Review Import screen.
Results
Once you've fixed remaining issues, you're ready to validate the data, review and run the import. For more
information, check out Reviewing and Running the Import [page 725].
Context
Restriction
Restrictions apply when mapping dimensions and attributes. These issues are listed in the Review Import
step just before running the import. To avoid having issues during the import, make sure to follow these
best practices:
Procedure
1. Switch to the Card View and determine what remains to be mapped. Cards display as either mapped or
unmapped entities. A card represents mapped data if it is shaded solid and has defined borders. Cards that
must be mapped appear transparent and borderless.
Note
Check Apply fiscal settings to column values in Details to map imported fiscal period data into a date
dimension in a model enabled to support fiscal year. Review the format listed under Format and change
it if required.
3. Repeat step 2 until you have mapped all dimensions, at least one measure, and the attributes.
You can check the progress of the mapping using the dedicated Dimensions, Attributes, and Measures
headings. When each bar is fully colored grey, the mapping is complete.
4. In the Details panel, specify the version for which you're importing data by checking either Existing Version
or New Version. If you have multiple versions available, select the source version dimension card and click
Map Versions in the Details panel to map each version to the desired version and category.
5. Once you have mapped all cards, check for mapping issues. If there are any, fix them before moving to the
next step.
A red dot in the top right corner of the card indicates there are mapping errors that need to be addressed.
A blue dot indicates that new values have been added from the import to the existing data.
Results
Once you've fixed remaining issues, you're ready to validate the data, review and run the import. For more
information, check out Reviewing and Running the Import [page 725].
After the import job has run, you can find statistics about that job in the data timeline. Within these statistics,
you can find the number of rows rejected. If you want to analyze the job, you can click the Rejection Summary
link to download a .CSV file and take a closer look at the rows that have been rejected.
You might notice a discrepancy between the number of rows referenced in the data timeline, and the number
of rows listed in the rejection summary file. To optimize performance, the rejection summary only provides
you with a sample of the rejected rows and is capped at 2000 rows. The data import process keeps a limited
number of rejected rows to avoid overloading the system with temporary table data.
Note that it doesn’t look at unique rejected values, but rather at validation error types per dimension combination.
Once the mapping is complete, you're ready to review the import options and method before running the
import job.
Context
Procedure
1. Check whether there are pending data issues after the mapping.
All issues must be solved to make sure all data is imported into your model. If you decide to create the
model ignoring some of the detected errors, either entire rows or invalid cells will be omitted from the new
model.
2. Review the import option. For tenants based on an SAP Data Center, click View all options under Mapping
Options in the Details panel. For tenants based on a non-SAP Data Center, click to access the
preferences.
• Update dimensions with new values (SAP Data Center only): Select this option if you're importing data
to a model that already contains data and want to update the data with new dimensions members,
attributes, and hierarchy changes.
Note
• Convert value symbol by account type (SAP Data Center only): Select this option to match the value
symbol, positive or negative, to each account type in the model, for when values are stored as positive
regardless of whether they represent income or expense accounts.
• Fill applicable empty ID cells with the "#" value (SAP Data Center only): Select this option to fill empty
ID cells with a "#" value to preserve rows without a value in the Dimension ID column. Otherwise,
empty ID cells are deleted.
• Reverse Sign by Account Type (Non-SAP Data Center only): Select this option to import your INC
and LEQ account values with reversed signs (+/-). This option is only available if your model has an
account dimension. If you want your INC and LEQ values to show up as positive, import them as
Note
For tenants based on an SAP Data Center, import methods are listed under Import Method in the
Details panel. For tenants based on a non-SAP Data Center, import methods can be found in the
preferences .
• Update: The target model’s measure values for the dimension member combinations specified by the
source data are updated with the corresponding measure values in the source data. If a particular
dimension member combination has no measure values in the model prior to the import, a new value is
inserted.
• Append: The target model’s measure values for the dimension member combinations specified by the
source data are added to by the corresponding measure values in the source data (summed together). If
a particular dimension member combination has no measure values in the model prior to the import, a
new value is inserted. For a more refined scope, use either the Clean and replace selected version data
or Clean and replace subset of data update options.
• Clean and replace selected version data: Deletes the existing data and adds new entries to the target
model, only for the versions that you specify in the import. You can choose to use either the existing
version or specify a new version under Version. If you specify to import data for the "actual" version,
only the data in the "actual" version is cleaned and replaced. Other versions, for example "planning",
are not affected.
• Clean and replace subset of data: Replaces existing data and adds new entries to the target model for
a defined subset of the data based on a scope of selected versions using either the Existing Version
or New Version buttons. You can also limit the scope to specific dimensions. To define a scope based
on a combination of dimensions select + Add Scope and use the Select a dimension field to specify a
dimension.
When a Date dimension is defined in the scope, the time range in the source data (determined by the
minimum time value and maximum time value of the source data, as well as the granularity of the date
dimension) combined with other dimensions in the scope, will determine what existing data is cleaned
and replaced.
If, for example, Date and Region dimensions are defined as part of a scope, only entries that fall within
the time range and match Region from the source data will be replaced in the target model. Existing
data that does not match the scope will be kept as is. Other dimensions that are not part of the scope
will be cleaned and replaced regardless of whether the dimension members are in the source data or
not.
Note
These options affect measures and dimensions. To include both measures and dimensions, see Update
and Schedule Models [page 772].
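The difference between Update and Append can be sketched on fact data keyed by the dimension member combination. This is a conceptual illustration with invented keys and values, not the product's import engine:

```python
# Sketch of Update vs Append semantics on fact data keyed by the
# dimension member combination; all data here is invented.
model = {("2024", "EMEA"): 100.0}
incoming = {("2024", "EMEA"): 30.0, ("2024", "APAC"): 20.0}

def run_import(model, incoming, method):
    result = dict(model)
    for key, value in incoming.items():
        if method == "Update":
            result[key] = value                         # overwrite, or insert if new
        elif method == "Append":
            result[key] = result.get(key, 0.0) + value  # sum, or insert if new
    return result

print(run_import(model, incoming, "Update"))  # EMEA becomes 30.0, APAC inserted
print(run_import(model, incoming, "Append"))  # EMEA becomes 130.0, APAC inserted
```

In both methods, combinations absent from the source data are left untouched; only the clean-and-replace methods delete existing data.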
Note
Validating data right before running the import is only possible for tenants based on an SAP Data
Center. For tenants based on a non-SAP Data Center, you can only validate the full dataset during the
data preparation step. See Preparing the Data [page 716] for more information.
5. Click Run Import (non-SAP Data Center tenant) or Finish Mapping (SAP Data Center tenant).
In the Data Management space, you can edit the data source and the query definition of an import job for
models with measures if needed.
Context
This flexibility is helpful to adapt to possible changes on the external data source, or simply to repurpose an
existing job for another data source without redoing all the upfront configuration.
If the target model itself has changes, changing the data source of an import job is also beneficial because it
allows you to run the data transformation steps and redo the data mapping to the target model.
When doing so, you replace the existing data source with a new one, create a whole new query, and run data
preparation operations. After the change, you can also review the import settings if needed.
Although all columns from the new data source are considered, they are not always automatically mapped by
the application:
• Columns from the new data source with the same name as a column from the original data source are
mapped automatically.
• Any pre-existing mapping attached to a column from the original data source is automatically removed if
there isn’t a matching column in the new data source.
If needed, you can edit the import settings at any time to review the mapping and map columns that haven’t
been mapped automatically.
Procedure
Schedule a data import job if you want to refresh data against the original data source. You can import data
from multiple queries and data sources into a model, and each of these imports can be separately scheduled.
Context
Procedure
Note
A grouping can include jobs from public dimensions as well as the model. Running the grouping
refreshes the public dimensions and model together. You can ungroup your import at any time by clicking
• None: Select this option when you want to update the data manually.
• Once: The import is performed only once, at a preselected time.
• Recurring: The import is executed according to a recurrence pattern.
Context
Some dimensions may be required for the calculation of a measure; for example, in a Lookup formula. If not
all required dimensions are in the current drill state, users are notified that the displayed numbers could be
incorrect.
You can set certain dimensions for a measure to be required, so that in stories and in the Explorer, users are
informed if those dimensions are missing.
Procedure
Results
Note
• Dimensions that are marked as hidden on the All Dimensions tab are not displayed in the Select
Dimensions dialog.
• Dimensions that are set as required can't be marked as hidden on the All Dimensions tab.
Related Information
Use predefined functions to build up a calculated dimension while preparing data. All the functions listed below
appear under Formula Functions in the Create Calculated Column dialog.
String Functions
• TRIM(string): Returns a copy of a specified string, with leading and trailing spaces removed.
Example: TRIM(" cloud ") = "cloud"
• CONCAT(string, string): Concatenates one specified string with another specified string to form a single string.
Example: CONCAT( 'ABC', 'DE' ) = 'ABCDE'
Conversion Functions
• MAKEDATE(year, month, day): Returns the date of the specified year, month, and day in a yyyy-mm-dd format.
• year is a number representing the year (1 - 9999).
• month is a number representing the month.
• day is a number representing the day.
Example: MAKEDATE( 1995, 2, 28 ) = '1995-02-28'
• DATEDIFF(date1, date2, unit): Calculates the time difference between date1 and date2, expressed in a given unit.
• unit is a string constant, and can be one of "day", "month", or "year".
Example: DATEDIFF( [Date1], [Date2], "day" ) returns the number of days between Date1 and Date2 by calculating the difference: Date1 - Date2 = No. of days.
• DATEADD(date, interval, unit): Returns the result of adding a time interval specified in a certain granularity to a date value. Use negative values for subtraction.
• date: date value to be operated on.
• interval: number of time intervals to be added.
• unit is a string constant, and can be one of "day", "month", or "year".
Example: DATEADD( [Date], 3, "Day" ) returns a date where 3 days have been added to [Date]. If [Date] is '2015-04-07', the result is '2015-04-10'.
• DayOfWeek(date): For a specified date, returns the day of the week as a number (1 for Monday to 7 for Sunday).
Example: DayOfWeek( #2012-03-23# ) returns 5. (#2012-03-23# is a date constant.)
Logical Functions
• and: Takes two Booleans. Returns true if both values are true; otherwise false.
Examples: false and true returns false; 1 = 1 and 2 = 2 returns true
• <, <=, >, >=, =, !=: Returns the comparison result between two operands with compatible types.
Examples: 1 = 1 returns true; 1 = 2 returns false
Mathematical Functions
• MIN(number1, number2, ...): Returns the minimum value between two or more numbers.
Example: MIN(2014, 2016) = 2014
• MAX(number1, number2, ...): Returns the maximum value between two or more numbers.
Example: MAX(2014, 2016) = 2016
• LOG(number): Returns the natural logarithm of a number.
Example: LOG( 100 ) = 4.605
• FLOOR(number1, number2): Returns the largest value, to the specified number of decimal places, that is less than or equal to the entered number.
• number2: the number of decimal places (optional).
Examples: FLOOR(14.8) returns the value 14; FLOOR(14.82, 0) returns the value 14; FLOOR(14.82, 1) returns the value 14.8; FLOOR(14.82, -1) returns the value 10
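As a rough cross-check, the documented examples for a few of these functions can be reproduced with ordinary Python equivalents. These are illustrative re-implementations with invented helper names, not the product's own code:

```python
import math
from datetime import date

# Illustrative Python equivalents of TRIM, CONCAT, DATEDIFF (in days),
# and FLOOR with decimal places; helper names are invented.
def trim(s):
    return s.strip()

def concat(a, b):
    return a + b

def datediff_days(d1, d2):
    return (d1 - d2).days            # Date1 - Date2 = number of days

def floor2(number, places=0):
    factor = 10 ** places            # negative places floor to tens, hundreds, ...
    return math.floor(number * factor) / factor

print(trim(" cloud "))                                      # 'cloud'
print(concat("ABC", "DE"))                                  # 'ABCDE'
print(datediff_days(date(1995, 3, 1), date(1995, 2, 28)))   # 1
print(floor2(14.82, 1))                                     # 14.8
print(floor2(14.82, -1))                                    # 10.0
```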
You can export models and their data for reuse in other systems.
You can export models directly to text file formats, or export models' transaction data to SAP BPC, SAP
S/4HANA, SAP Integrated Business Planning, or OData services. For text files and SAP S/4HANA, you can set
up a schedule so that exports recur automatically on a daily, weekly, or monthly basis.
You may want to disable the ability to export data from a particular model. In this case, use the model
preference Restrict Export to CSV. When this setting is switched on, users won't be able to export data from
that model to a file, either in stories or from the Files list.
Note
If this setting is switched on for a model, and that model is moved to a second system, in the second
system the setting will also be initially on.
You can also export model data using the Data Export Service API. For more information, check
out the dedicated documentation in the SAP Analytics Cloud REST API Developer Guide.
You can use data integration to replicate your data to and from SAP Analytics Cloud to other SAP products or
3rd party applications.
Learn more about how data integration features allow you to retrieve the following content from your model:
• Audit data
• Master data
• Fact data
• Currency data
• Public dimension data
• Unit data
Export Data
Import Data
You can manage your Data Export Service API subscriptions in the Modeler.
In the Data Management screen, under the API Subscriptions tab, you can find a list of all the subscriptions of a
model. It offers a visual interface to the Data Export Service for subscription management as part of the delta
functionality.
Note
This section doesn't cover the delta functionality, and instead focuses on the subscriptions management
capabilities available in the API Subscriptions tab within the Modeler. For more information on the delta
functionality and subscriptions, please refer to the Data Export Service documentation.
Context
You can create a subscription to a model to track data changes. The tracking starts as soon as you create a
subscription and ends when you run a delta export. Within that time frame, every data change is monitored.
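Conceptually, a subscription records a baseline so that a later delta export returns only the changes made since the subscription was created. The following sketch illustrates that idea only; the function names and data are invented and do not reflect the Data Export Service API itself:

```python
# Conceptual sketch (not the real API): a subscription snapshots a baseline,
# and a delta export returns entries that changed or appeared afterwards.
# All names and values here are invented for illustration.
model_data = {("2024", "EMEA"): 100.0}

def create_subscription(data):
    return {"baseline": dict(data)}      # tracking starts here

def delta_export(subscription, current):
    # entries that are new or different since the baseline
    return {k: v for k, v in current.items()
            if subscription["baseline"].get(k) != v}

sub = create_subscription(model_data)
model_data[("2024", "EMEA")] = 120.0     # changes made after subscribing
model_data[("2024", "APAC")] = 40.0
print(delta_export(sub, model_data))
```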
Procedure
The subscription you’ve created is now listed in the API Subscriptions tab.
Context
You can copy a subscription URL for future reuse in target systems to retrieve data.
Procedure
Delete a Subscription
Procedure
Results
The subscription has been deleted and no longer appears in the subscriptions list.
In this section, you will learn how to manage dimensions and measures in models:
You can create dimensions, and add existing dimensions to new or existing models.
Context
When you create a model by importing data, dimensions are created from the imported data. However, you can
still add new or existing dimensions manually to the model.
You can add Account, Organization (only one), Date, or Generic dimensions, to both new and existing models.
Note
The user interface can differ depending on the model type you're working with.
Procedure
2. Optional: If you're working with a classic account model, choose Add New Dimension or Add
Existing Dimensions in the toolbar.
• If you’re creating a dimension, give it a name, select a dimension type, like Account or Organization
for example, and specify whether the new dimensions should be public or private. If you’re adding an
existing dimension, select the type directly.
• If you're adding an existing dimension, select a dimension type, and then select a dimension.
• If you're creating a new dimension and want to add a dimension table, give it a name and select Add
a Dimension Table. Then, select the dimension type, and, using the dedicated toggle, indicate whether
you'd like the dimension to be public.
• If you're creating a new dimension but don't want to add a dimension table, give it a name and select
Don't Add a Dimension Table. Then, specify the data type.
• If you want to add an existing dimension, select Add Existing Dimension Table. Click , and select a
dimension.
4. Click Add to add it to a model.
5. Select either the new dimension you’ve created or the existing dimension you want to add in the dimension
list to see its contents and settings.
Note
An initial # member is added to each dimension (but not the account dimension). You can't delete
this member manually. For more information, see the glossary entry for “Unassigned member”. In the
Dimension Settings panel, you can set some of the basic properties of the dimension.
During the design stage, you can delete dimensions from the model. To delete a dimension, select it in the
dimension list, and select . Once you've saved the model and exited the design stage, you won't be able
to delete dimensions.
Note
You can also delete public dimensions that aren't being used in any models. To do this, select
Modeler, and then on the Public Dimensions tab, select one or more dimensions and choose Delete.
Related Information
In the new model type, you can add multiple base measures to hold different types of data.
Context
You might have a single base measure in a simple model, such as monetary amount. A more complex model
will often have more measures like price, unit quantity, number of hours, headcount, amounts in local and
global currencies, and so on.
Base measures let you set a number of options to make sure that different measures display their fact data
correctly, including data type, aggregation, currencies and units, and formatting.
Note
Other than data type, these settings are similar to those available for account members. If your model has
an account dimension, each model value will belong to both an account and a measure, and sometimes the
settings will contradict each other.
You can decide whether to apply the settings from accounts or measures in these cases by changing
the structure priority. See Set Structure Priority and Create Custom Solve Order [page 701] for more
information.
After your base measures are set up, you can create calculated measures on top of them, including conversion
measures if you’ve set up currency conversion in your model.
Procedure
1. Create or open a model, and in the Model Structure screen, click in the toolbar. Or if you want to base
your measure on an existing measure, select it and choose Copy.
2. Select Add Measure.
3. In the Measure Details panel, fill in the following settings:
Parameter Description
Name Type a unique ID for the measure using letters and num-
bers only.
Data Type Choose the data type for your measure. The data type
restricts the values that the measure can hold:
• Decimal: Up to seven decimal places and up to 31
digits total.
Note
In Stories, only 15 digits total are supported and
displayed. You can also use formatting if you
want to limit the number of digits displayed in
stories.
Required Dimensions Select and choose the dimensions that you want to be
required for the measure.
Exception Aggregation Type Use exception aggregation when you want to aggregate
non-cumulative quantities. For example, if you have the
quantity Inventory Count, you might want to aggregate the
inventory count across all products, but not across time
periods, because it doesn't make sense to add up inven-
tory counts from multiple time periods.
For example, you might want to choose just the most re-
cent set of Inventory Count values. In this case, you would
choose the exception aggregation type LAST, and the ex-
ception aggregation dimension Date.
Exception Aggregation Dimension Select and choose the dimensions that will use the
selected exception aggregation type.
Units & Currencies Use these options to set the currencies for a monetary
measure, or the unit displayed for a non-monetary meas-
ure:
• Unit: Use this option for a non-monetary value where
you want to display a fixed unit. Type the unit in Fixed
Unit.
• None: Select this option if you don’t need to include
any units.
• Currency: Use this option for all monetary values,
then specify the currencies settings for the measure.
For example, for a measure that shows only US dol-
lars, set the Currency to Fixed and type USD.
These settings don’t just change the units displayed; they
also can change how the values are aggregated and which
conversion rates apply to the measure.
Decimal Places This setting defines the number of digits displayed after
the decimal point; select a value from 0–7.
Values that have fewer decimal places will match this set-
ting by adding zeros to the end of the value, for example
1.100.
This example illustrates the effect of the Scale and Unit settings.
For a measure where the Scale has been defined as Million, Unit is blank, and Decimal Places is set to 2, the
number 92624530 will be displayed in the data grid with an abbreviation as 92.62M.
If, on the other hand, Unit is set to Currency, the scale is displayed as the full word: 92.62 Million.
If no Scale value is selected, the full number is shown formatted by appropriate separators; for example:
92,624,530.00
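The exception aggregation and formatting behavior described above can be sketched in a few lines of Python. The fact rows, measure name, and `display` helper below are hypothetical illustrations, not SAP Analytics Cloud code.

```python
# Hypothetical fact data; "Inventory Count" is the non-cumulative measure
# from the example above. Not SAP Analytics Cloud code.
facts = [
    ("Bikes",  "2024-Q1", 120),
    ("Bikes",  "2024-Q2", 100),
    ("Skates", "2024-Q1",  80),
    ("Skates", "2024-Q2",  90),
]

# Standard aggregation: SUM across products within each time period.
by_date = {}
for product, date, count in facts:
    by_date[date] = by_date.get(date, 0) + count  # {"2024-Q1": 200, "2024-Q2": 190}

# Exception aggregation LAST on the Date dimension: instead of summing the
# periods, keep only the most recent period's total.
last_date = max(by_date)              # "2024-Q2"
inventory_total = by_date[last_date]  # 190, not 390

def display(value, scale=1_000_000, decimals=2, suffix="M"):
    """Format a value the way the Scale and Decimal Places settings do."""
    return f"{value / scale:.{decimals}f}{suffix}"

display(92624530)  # "92.62M"
```

Summing the periods would report 390 units of inventory that never existed at the same time, which is why LAST on Date is the appropriate exception aggregation here.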
Results
The model now has a new measure. You can add values to it by importing data in the Modeler, or by entering
planning data.
When deleting dimensions from a model, make sure that you check for potential dependencies to ensure that
your model is still valid.
Select a dimension, either in the group view or the list view, and select in the toolbar.
You can delete a dimension from a model after the model has been saved. If the dimension still has related
objects such as stories or data actions, you can still delete it, and the application informs you about these
related objects so you can review them and take appropriate action. If the dimension has a dimension table,
the table is also deleted.
Modeler Impacts
Account Formulas: When you delete a dimension from a model, calculations that reference that dimension
become invalid.
Variables: If any variables have a value domain based on the deleted dimension, make sure to fix the value
domain.
Currency Conversion: If the deleted dimension is a currency conversion dimension, check the model settings
to reset the currency conversion.
Data Locking: If the deleted dimension is part of data locking, it is removed from the driving dimensions, and
the data locking settings of the model are reverted to the default locking state. Make sure to check the data
locking settings in the model after you’ve deleted the dimension.
Validation Rule: If the deleted dimension is the reference dimension that serves as the basis of a validation
rule, the rule is deleted. If the dimension is the only matched dimension, the validation rule is deleted; if it is
one of multiple matched dimensions, the validation rule is updated with the remaining dimensions.
Advanced Modeler Impacts
Value Driver Tree: If the deleted dimension is used in a value driver tree, the value driver tree becomes
invalid. Warning messages are displayed in the builder panel to help you resolve the issues.
Data Action: If the deleted dimension is used in a data action, you can still save the model. However, make
sure to consider the following impacts: after you’ve disabled the planning area, data outside of the planning
area is no longer visible, but you can still query, publish, or delete the version.
Data Auditing: After you’ve deleted a dimension from a model, the dimension is also removed from the data
audit table.
Commenting: After you’ve deleted a dimension from a model, all comments entered on members of that
dimension are deleted.
Data Acquisition: Any import job with a mapping based on the deleted dimension is no longer valid. Make
sure to redesign the import job manually.
Story Widgets
Charts, Input Controls, and Story Filters: After you’ve deleted a dimension, the application displays
validation errors for charts, dynamic texts, input controls, and story filters that relied on that dimension,
along with warnings to help you resolve the issue.
Hyperlinks: Hyperlinks with dimension context are still displayed, but they’re inactive. Make sure to fix them
manually.
Tables: After you’ve deleted a dimension, tables are updated automatically and show data on an aggregated
level. Table filters based on the dimension are no longer valid. Styling rules adapt to the aggregated view but
may need to be adjusted.
You can change a dimension's description, and also add properties (columns), members (rows), and
hierarchies to a dimension.
To make changes to a dimension, open a model that contains it, and then select the dimension. You can also
open a public dimension directly from the Modeler start page, under the Public Dimensions tab.
Task Steps
Note
Dimension members can't be deleted if there is transac-
tional data, or data in a private version, belonging to the
underlying model.
Copy a member
Choose (Copy) in the Edit section of the toolbar.
Add multiple member descriptions Select a dimension, and in the Dimension Details panel,
scroll down to the Languages and Translation section, and
switch on the Enable Multiple Languages toggle. For more
information, see Adding Member Descriptions in Multiple
Languages [page 764].
The Account dimension includes several properties specific to account dimensions. List boxes help you enter
data in each cell of these properties. For example, in the Account Type column, when you select a cell in the
grid, you can choose between the account types INC, EXP, AST, LEQ, and NFIN.
• You can insert a new property (column): select in the Properties section of the Dimension Settings
panel.
• You can delete a column: find the corresponding property in the Properties section of the Dimension
Related Information
By default, each member within a dimension has a description property that provides additional information
about the member. You can edit this description manually and add multiple descriptions in different languages.
Context
Depending on your business needs, you might have to add translated descriptions to make them more
accessible. If you’re a global customer, this allows you to cater for all end users in their local language. Models
that leverage dimensions with multiple language descriptions can then be consumed as usual in stories.
This functionality is available for all public and private dimensions (except date and version dimensions) with
up to 49 languages. Each language can be maintained either manually, or via the master data upload that
leverages wrangling features. In the Modeler, you can add, edit, update, and delete descriptions, which provides
more flexibility for public and private dimensions in a model. The default language is derived from the user
preferences, according to the Data Access Language setting.
For the descriptions to be effective, you need to update the dimension in the Modeler first and specify all the
languages you would like to use. Once that’s done, you can manually add the description per language. Each
language you add has a corresponding column in the dimension.
Note
This section is about adding new member descriptions manually, and isn't related to the translation
service that you can request to have model metadata translated automatically. If you're looking for more
information on how to translate model metadata, see Enable Translation in a Model [page 777].
Procedure
1. In the Modeler, create a public or private dimension from scratch, or open an existing one.
2. In the Dimension Details panel, scroll down to the Languages and Translation section, and switch on the
Enable Multiple Languages toggle.
Once you enable the option, the application lets you know how many languages are active in the
dimension. Also, note that the Description column now indicates the language code in parentheses.
3. In the toolbar, click the translation icon to open the dropdown menu, and display both active, and available
languages.
Active languages are languages that are already maintained in the dimension and whose descriptions have
been added. Available languages are languages that haven’t been added to the dimension yet.
If you switch between the active languages, the application automatically updates the language code in
parentheses as well as the member descriptions.
4. In the Available Languages section of the dropdown menu, select a language you’d like to add to the
dimension.
5. Add the translated description manually for each member of the dimension.
Context
You can set members of the account dimension (measures or accounts) to be visible or invisible to other
users. For example, if you're creating intermediate calculations based on measures, you may not want those
intermediate calculations to be available for other users to add to stories, so you would hide those calculated
account members.
Procedure
Note
In the Modeler, you'll always see all accounts and measures, including any that are hidden.
Context
When you delete a model, the built-in dimensions Date and Version, and also any private dimensions, are
automatically deleted, but public dimensions are kept, and can be reused in other models.
Note
• You can delete models only if your user role has the privilege for deleting models.
• Models that are in use in stories can't be deleted.
Procedure
If you want to create a new model that's similar to an existing model, you can copy an existing model.
Context
Private dimensions are copied with the model, while public dimensions are referenced from the copied model.
Transaction data is copied in all public versions. Private versions are not copied.
Note that models based on BPC data sources can't be copied (either basic models, or models with write-back
to BPC enabled).
You can rebuild a model to modify public models created from acquired data.
Context
After a model is created from acquired data, you can rebuild the model if you need to fix issues such as missing
dimensions or measures. This option lets you revert the model to the state based on the first set of acquired
data.
Note
When you rebuild a model, any data updates that happened after the model was created will be removed.
You can't rebuild a model if the model is missing some dependencies needed for rebuild. This can happen if:
• The model was created from a blank model, which can't be rebuilt because no data preparation was done.
• The model was created before the rebuild feature was introduced.
• The model was exported or imported without the object that is created along with a public model (called
a wrangler) and contains the information needed to rebuild the model.
Note
• If the Rebuild Model icon isn't available in the menu bar, the model cannot be rebuilt.
• Even if sharing settings are applied, only the owner of the model can rebuild the model.
Note
Once you select Rebuild, the process will start and all the dependencies may be impacted.
Next Steps
You should check that all the stories associated with the original model work as expected.
Related Information
Context
Note
You can clear fact data only if your user role has the Maintain privilege for models.
Procedure
Note
• The Delete Facts icon is enabled only when you select one model (not including live data models),
for which you have the Maintain privilege.
• Only dimensions that you have write access to are shown.
• If Data Access Control was activated for a dimension, you will see only the members for which you
have write access.
Context
You can apply conditional formatting by defining thresholds in a story, or you can apply the formatting to all
stories based on a model by defining thresholds in that model. For example, in your model you could define a
threshold for Revenue at $1,000,000, so that all stories based on the model would have a Revenue threshold at
$1,000,000. Note that thresholds can be defined only on account dimensions.
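The inheritance behavior described above can be sketched roughly as follows; the dictionary names and lookup function are invented for illustration and are not SAC internals.

```python
# Hypothetical sketch of threshold inheritance; names are illustrative.
model_thresholds = {"Revenue": 1_000_000}  # defined once in the model
story_thresholds = {}                      # conditional formatting in a story

def threshold_for(account):
    """Story-level conditional formatting overrides the model's threshold."""
    return story_thresholds.get(account, model_thresholds.get(account))

threshold_for("Revenue")                 # 1000000, inherited from the model
story_thresholds["Revenue"] = 2_000_000  # the story applies its own formatting
threshold_for("Revenue")                 # 2000000, the story's value wins
```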
Procedure
Note
Thresholds that you define for a model are applied to stories based on that model, but if you apply
conditional formatting in the stories, that formatting overrides the thresholds you define for the model.
Related Information
You can change the data source for a live data model whose original data source is no longer available.
Context
For example, if you have multiple environments, such as a Development environment, a QA environment, and a
Production environment, you can create your queries, connect to SAP Analytics Cloud, and create your stories,
all in the Development environment. Then, once your development is finished, you can transport the queries to
your QA or Production environment, and change the model's data source so that the queries work in the new
environment.
Note
• If a story doesn't load properly after changing the data source, you can save the story as a template,
create a new story from the template, and then update the new story's elements by choosing a model
based on the new data source.
• The IDs of dimensions and members in the original query used in a story need to be the same in
the new query for that story. Otherwise, further actions are required before the story can be loaded
correctly.
• If you copy an SAP BW query from an existing model, different IDs may be generated, and therefore
dependent stories may not display correctly.
Procedure
Results
All existing dimensions and prompts will be overwritten by the new data source.
When you create a model from a data source, or import data into an existing model, an import job is
automatically created as well, so that you can schedule data imports.
Context
Schedule a data import if you want to regularly refresh data against the original data source.
See “Import Data Options” in step 3 of this help topic for details.
You can import data from multiple queries (and data sources) into a model, and each of these imports can be
separately scheduled.
Note
SAP Open Connectors uses cloud credits and may incur additional charges. You can review your Open
Connectors cloud credit usage in your SAP Integration Suite overview page. For more information, see Use
SAP Integration Suite Open Connectors [page 526].
Procedure
2. If you want to run an Import Data job immediately, select (Refresh Now), or for BPC jobs, (Update
Model Now).
If the Import Data job was created when creating a model from a data source, the import method used
is "Update". If the Import Data job was created when importing data into an existing model, the import
method used is the same method that was used when the Import Data job was created. See “Import Data
Options” in step 3 of this help topic for the available import methods.
Note
When refreshing a Salesforce (SFDC)-based model, the Update Model job's import setting
“Clean and Replace” is used. This setting cleans up and re-creates the model, so the
hierarchies defined in the model are lost when the data is refreshed.
To receive an email notification when a refresh job fails, select Notify me of refresh failures by email.
3. If you want to run an Import Data job using different data-handling settings, select a job and then select
(Import Settings).
a. Choose how you want existing data to be handled.
Note
These options affect both measures and dimensions. For related information, see Combine Data
with Your Acquired Data [page 722].
Append Keeps the existing data as is and adds new entries to the target model.
Note
When importing data into a planning model using the Append option, you can
choose between aggregating duplicated rows, or rejecting them.
Clean and replace selected Deletes the existing data and adds new entries to the target model, only for the
version data versions that you specify in the import. You can choose to use either the existing
version or specify a new version under Version. If you specify to import data for the
"actual" version, only the data in the "actual" version is cleaned and replaced. Other
versions, for example "planning", are not affected.
When using this import method, the values of the incoming data for the dimensions
that are defined as the scope of replacement determine the portion of the model’s
transaction data that will be cleaned up and then replaced by the incoming data. The
members of dimensions that are not in the replacement scope are not considered.
For example, when Date and Region dimensions are defined as part of a scope,
transaction data entries in the model that match the specific dates and regions in the
source data will be replaced. Existing data that does not match the scope will be kept
as is.
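The scope-based replacement described above can be sketched in plain Python; the row structure, dimension names, and values below are invented for illustration, not taken from the actual import implementation.

```python
# Hypothetical sketch of "Clean and replace selected version data";
# row structure, dimension names, and values are invented for illustration.
existing = [
    {"Date": "2024-01", "Region": "EMEA", "Version": "actual", "Value": 10},
    {"Date": "2024-01", "Region": "APJ",  "Version": "actual", "Value": 7},
    {"Date": "2024-02", "Region": "EMEA", "Version": "actual", "Value": 12},
]
incoming = [
    {"Date": "2024-01", "Region": "EMEA", "Version": "actual", "Value": 11},
]
scope = ("Date", "Region")  # dimensions defined as the scope of replacement
version = "actual"          # only this version is cleaned and replaced

# Scope-member combinations that occur in the incoming data decide which
# existing rows are cleaned; everything outside the scope is kept as is.
incoming_keys = {tuple(row[d] for d in scope) for row in incoming}

result = [
    row for row in existing
    if row["Version"] != version
    or tuple(row[d] for d in scope) not in incoming_keys
] + incoming
# ("2024-01", "EMEA") is replaced; the APJ and 2024-02 rows are untouched.
```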
b. If you want to configure this Import Data job to delete the existing data and add new entries to the
target model, switch on Reset Model.
When Reset Model is on, all data in the model is cleaned before it can be replaced with the newest set.
This cleans any existing data in all dimensions, excluding public dimensions.
Note
• Only the model owner can update the Reset Model setting.
• If Reset Model is switched on, only the owner of the model can run that job or change its
settings.
4. For the following connection types, you can select (Edit Query) to make changes to the query,
including parameter values.
• SAP Business Warehouse (BW)
• SuccessFactors HCM suite
• An SAP BusinessObjects BI platform universe (UNX) query
• SQL Databases
Note
Query filters can be modified, but columns cannot be added, removed, or changed. In the case of SQL
databases, freehand SQL queries can't be edited.
5. To schedule the execution of jobs, select (Schedule Settings), choose between the following options,
and then select Save to save your scheduling settings:
Frequency Description
None Select this option when you want to update the data manually. (Use the (Refresh
Now) icon in the Sources list.)
Repeating The import is executed according to a recurrence pattern. You can select a start and end
date and time as well as a recurrence pattern.
Start Date Set the date on which the task is triggered for the first time.
Time Zone Set the time zone to make sure that the task is triggered at the correct local time.
Start Time Set the time at which the task starts.
Note
The scheduled times are adjusted for Daylight Saving Time according to the time zone you select.
6. If you want to run multiple Import Data jobs together, in a specified order, create a source grouping.
A grouping can include jobs from public dimensions as well as the model. Running the grouping refreshes
the public dimensions and model together.
a. Select the jobs that you want to group together, and then select (Schedule Settings).
b. Review the order of the jobs in the grouping, and change it if necessary.
Alternatively, you can select the job that you want to run first, select (Schedule Settings), and then
select Set a Dependency to add the other jobs to the grouping in the correct order.
c. (Optional) Edit the source grouping name.
d. Choose one of these group processing options:
Stop if any query fails If any of the import jobs fails, the group processing stops. You can then
cancel the remaining jobs, or try to fix the cause of the failure, and later
resume execution of the grouping from the same point where execution
stopped.
Skip any failed query If any of the import jobs fails, the remaining jobs are still processed.
Also use the None option if you have an existing schedule for a source grouping, and want to turn off
the scheduled updates but not delete the grouping.
f. (Optional) Specify the Model Refresh period for a public dimension.
Specifying a value prevents a public dimension from being continually updated by various models that
contain that dimension. For example, if you specify a value of 2 hours, the dimension can be refreshed
a maximum of once every 2 hours.
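The refresh throttle described above amounts to a simple elapsed-time check; the class and method names below are invented for illustration, not part of any SAC API.

```python
# Hypothetical sketch of the Model Refresh period throttle; the class and
# method names are invented for illustration.
from datetime import datetime, timedelta

class PublicDimension:
    def __init__(self, refresh_period):
        self.refresh_period = refresh_period
        self.last_refreshed = None

    def maybe_refresh(self, now):
        """Run a refresh only if the configured period has elapsed."""
        due = (self.last_refreshed is None
               or now - self.last_refreshed >= self.refresh_period)
        if due:
            self.last_refreshed = now
        return due

dim = PublicDimension(timedelta(hours=2))
t0 = datetime(2024, 5, 1, 8, 0)
dim.maybe_refresh(t0)                       # True: first refresh runs
dim.maybe_refresh(t0 + timedelta(hours=1))  # False: still inside the window
dim.maybe_refresh(t0 + timedelta(hours=2))  # True: 2 hours have elapsed
```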
g. Save your grouping and schedule settings.
7. If you chose the None option for scheduling, select (Refresh Now) to run the import job now.
To receive an email notification when a refresh job fails, select Notify me of refresh failures by email.
8. You may want to see which models are scheduled to refresh a public dimension. Select a job, select
(Schedule Settings), and then select View Models Using this Source.
9. If you want to run an Update Model job immediately, select (Refresh Now), or for BPC jobs,
(Update Model Now).
10. If you want to run an Update Model job using different data handling settings, select a job and then select
Note
These options affect both measures and dimensions. For related information, see Combine Data with
Your Acquired Data [page 722].
Update Options
Clean and Replace Deletes the existing data and adds new entries to the target model.
Append Keeps the existing data as is and adds new entries to the target model.
Results
After a scheduled job runs, the result is shown on the Schedule Status tab. If any errors or warnings occurred,
select the job to see the job details in the Data Timeline panel.
If any rows in the dataset were rejected, you can select Download rejected rows to save the rejected rows as
a .csv file. You can then examine this file to see which data was rejected. You could fix the data in the .csv
file and then import the .csv file into the model using the Import Data workflow, or fix the data in the source
system.
Note
If a connection was shared with you without sharing credentials, you'll need to enter your own credentials
when you run a data refresh.
Related Information
Context
See the following topics for information about translating model metadata, the Translator role, the Translation
Dashboard, and translation tasks that you can perform there.
When you enable a model for translation, the following model metadata is translated, both for live data models
and import data models:
• Model description
• Dimension descriptions (for live connections, the dimension descriptions are taken from the live data
source)
• Dimension property descriptions
• Dimension hierarchy descriptions
• Variable descriptions
Note
Within the Modeler, the content of the model will still be displayed in the default language set in the model's
preferences. Translations are only visible in stories and in the file repository.
Procedure
3. Select Language .
4. Click Request Model Translation.
Note
An administrator will need to switch on the Allow translation of user content toggle to activate
Note
The Data access language selected by you at the time of sending the model for translation becomes the
source language of the model.
Caution
If you disable translation for a model that has been enabled for translation, all of its translations will be
permanently deleted.
Related Information
Classic account models support several different scenarios for working with monetary data from different
currencies.
Note
Different features are available for models with measures, including multiple source currency measures,
converting currencies in a data action step, and planning on multiple currency conversions. Refer to Work
with Currencies in a Model with Measures [page 779] for details.
If your model only contains data in a single currency, you can specify it as the default currency in the model
preferences.
• Ensure that your system contains a currency table with conversion rates for the different currencies, dates,
categories, and rate versions in the model. For more information, see Learn About Currency Conversion
Tables [page 788].
• In the Modeler, enable and configure the currency column for the dimension that separates the data into
different currencies. For example, this dimension could be an organization dimension, or a generic source
currency dimension.
• In the model preferences, enable currency conversion and specify settings for the default currency,
currency dimension, and currency conversion table. For more information, see Set Up Model Preferences
[page 664].
• In the model's account dimension, each account member has to have Currency set for the Units &
Currencies attribute (column). For more information, see Learn About Dimensions and Measures [page
601].
In tables in stories, you can choose which currencies to display. Monetary values are then converted and
displayed according to the specified rates for the currencies you selected. For more information, see Displaying
Currencies in Tables [page 1559].
Related Information
In a model with measures, each financial measure can have data from a single currency or multiple currencies.
You can apply currency conversion too, letting you aggregate data from multiple currencies, simulate shifts in
exchange rates, and plan on or across different currencies.
You can use currency conversion if you want to compare data across different currencies, or view or plan on
aggregated values that include more than one currency.
For example, if you have a separate measure for each currency but you want to compare them, you’ll need to
convert them into the same currency.
Or if your model includes sales revenue for several countries in different currencies, currency conversion lets
you see the regional and global values.
Currency conversion lets you view your base measure values (such as local currencies or transaction
currencies) as well as converting them to other currencies. This lets you aggregate data from multiple source
currencies, as well as enter data on it for a planning model. You can also simulate the effects of different
exchange rate shifts.
You can apply currency conversion in two ways:
• Conversion measures that recalculate values from a base measure. In this case, entering data on one
measure changes the other.
• Conversion steps in data actions that copy data from a base measure, convert it, and paste it to a target
member. In this case, the data is stored separately.
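The idea behind a conversion measure can be sketched as follows; the rate table, organizations, and amounts are invented for illustration, not taken from SAC.

```python
# Hypothetical sketch of a conversion measure; the rate table and fact
# rows are invented for illustration, not taken from SAC.
rates_to_usd = {"EUR": 1.10, "JPY": 0.007, "USD": 1.0}  # illustrative rates

facts = [
    {"org": "Germany", "currency": "EUR", "amount": 1000.0},
    {"org": "Japan",   "currency": "JPY", "amount": 50000.0},
    {"org": "US",      "currency": "USD", "amount": 900.0},
]

# The base measure holds local-currency values, which can't be summed as is.
# The conversion measure recalculates each value into the target currency,
# so aggregation across the currency dimension becomes meaningful.
converted = [f["amount"] * rates_to_usd[f["currency"]] for f in facts]
global_total = sum(converted)  # about 2350.0 USD
```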
For currency configuration in a classic account model, refer to the Work with Currencies in a Classic Account
Model [page 778] page.
There are a few cases where you don’t need to use currency conversion for monetary values:
• You only have one currency in your model and you don’t need to see the values in any other currency.
• Your model has a separate measure for every different currency, and you don’t need to make comparisons
between values for different currencies.
• Your model has different currency values within one or more measures, but you don’t need to see
aggregated values across multiple currencies.
For measures with fixed currencies, you just need to specify the currency so that your data shows the correct
units. Refer to the Currency Settings for Base Measures [page 783] section for more information.
For different currency values within a measure, you’ll also need to add a currency column to one of your
dimensions. Refer to the Learn About Dimensions and Measures [page 601] page for more information.
The workflow for adding currency conversions to your model includes a few steps:
Note
Keep in mind that currency conversion won't be applied within the base measures; you're just
identifying the currencies that exist in each base measure's data. Afterward, you can create currency
conversions based on these measures.
When you finish these steps, your model is ready to display the base measures and conversion measures from
the model, and you can add more currency conversions in charts and tables if needed. For more information,
refer to the Adding a Currency Conversion Row or Column [page 1561] page.
For planning models, users can enter data on any measure that’s input enabled, and the other currencies will
update immediately. Refer to Plan with Currency Conversion [page 2239].
When your rate table is set up, you can switch on the currency conversion toggle in the Model Preferences ( ).
• Currency Rates Table: This table contains exchange rates that will be used to convert currencies in your
model. Multiple models can use the same table, but it needs to have exchange rates for the different
currencies, dates, rate types, and rate versions required for your currency conversions. Refer to the Learn
About Currency Conversion Tables [page 788] page for more information.
• Date Dimension: You can set up conversion rates for different dates to reflect changes in currency values,
and this date dimension will determine which conversion rate applies to each fact.
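The date-dependent rate selection described above is essentially an effective-dated lookup; the rates and dates below are invented for illustration, not SAC's implementation.

```python
# Hypothetical effective-dated rate lookup; rates and dates are invented.
from bisect import bisect_right

# Illustrative EUR-to-USD rates, sorted by their valid-from date.
eur_usd = [("2024-01-01", 1.08), ("2024-04-01", 1.10), ("2024-07-01", 1.07)]
valid_from = [d for d, _ in eur_usd]

def rate_for(date):
    """Return the latest rate whose valid-from date is on or before `date`."""
    idx = bisect_right(valid_from, date) - 1
    return eur_usd[idx][1]

rate_for("2024-05-15")  # 1.10: the rate valid from April is in effect
```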
Currency conversion enables the following features and settings for your model:
Next, you can add the currency property to one or more dimensions. Refer to the Learn About Dimensions and
Measures [page 601] page. When that's done, check the currency settings of your base measures.
After you’ve set up currency, you can create a variable that drives currency conversion for measures that are
bound to that variable.
With a currency variable, you can manually select any currency that’s part of a conversion table as the target
of a currency conversion. When you select a target currency, any conversion measure that’s bound to the
variable is automatically updated with the target currency set by the variable. It’s also possible to build
calculations on top of conversion measures using that variable.
This provides more flexibility for currency conversion workflows. That’s especially useful if you need to update
multiple measures in one go. While the variable default value is set in the Modeler, you can still overwrite the
value in stories later on, either for the whole story, or for individual charts and tables.
If you want to delete that variable, you can only do so by disabling currency conversion.
If you want your currency conversion to be driven by a currency variable, specify the Currency Variable Settings:
Whether you enable currency conversion or not, you can store different currencies in your model using the unit
type and currency settings when setting up a base measure.
For details about adding base measures, refer to the Adding Measures in the New Model Type [page 757] page.
• Unit Type: Set this option to Currency for all monetary values.
• Rate Type: This option corresponds to the rate type setting in the currency rate table. The measure will be converted using rates with a matching rate type (blank, Average, or Closing).
Note
If you have an account dimension as the primary structure, the account member sets this value, so it's not necessary for the measure. For more information, refer to the Set Structure Priority and Create Custom Solve Order [page 701] page.
When your base measure is set up, you can use it as the source for currency conversion measures and
conversion steps in data actions.
After your model has currency conversion set up and at least one base measure with currency, you can add
conversion measures on top of it. These measures convert the source measure’s data into target currencies.
It’s common to use conversion measures to convert local currencies into a fixed currency so that you can
work with data that is aggregated across the currency dimension. However, both the source measure and the conversion measure can have either a fixed currency or currencies set by a dimension property. For example,
you can convert currencies set by a transaction currency dimension to local currencies set by an organization
dimension. Additionally, a conversion measure can be linked to a variable and derive its target currency from
that currency variable.
You can plan on conversion measures, and also base calculations on them. However, they're not supported as
source or target measures in data actions.
In the modeler, currency conversions are set up in the Calculations workspace. (You can also add them in
stories and analytic applications.)
1. To get started, select Add Currency Conversion Measure from the Add ( ) menu in the toolbar, or from
the list of conversion measures.
2. In the Properties panel, set the following options:
• Name: Type a unique ID for the measure using letters and numbers only.
• Source Measure: Select the base measure whose values you want to convert. (Calculated measures aren't available here.)
Tip
As you pick these settings, the Preview pane shows the values and formatting of your conversion
measure. Use it to make sure that the conversion measure has the expected results.
3. If needed, specify Advanced Settings for the conversion. If you leave the default settings, each measure
value will be converted with the rates that correspond to its date and category or rate version. Use these
options to control which conversion rates to use.
For more information about currency conversion rate settings, refer to Learn About Currency Conversion
Tables [page 788].
• Dynamic (default): Use rates that apply to each value's version. If the source version has a rate version, only those rates can be used. Otherwise, the conversion will use rates for the source version's category if they exist, or generic rates with no category setting.
• Fixed category (for example, Actual or Forecast): Apply the rates from the category that you choose to all the measure values.
• Specific: Apply the rates from a specific rate version to all the measure values. Choose the rate version from the list.
Note
Specific versions and calculations will only use rates from that rate version.
• Booking Date (default): Convert each value with the rate that applies to the value's date.
• Booking Date + 1 or Booking Date - 1: Convert each value with a rate that is offset from the booking date by the specified period. For example, convert a value for March 5 with the rate for April 5.
• Fixed Date: Convert all values with rates from a specific date.
If the model has multiple date dimensions, remember that booking dates come from the one that's
specified in the currency conversion settings of your model preferences.
4. If necessary, set the Formatting options. If your model uses an account dimension as the primary
structure, you can ignore these options since the formatting from accounts will be applied instead.
This measure will convert source values from the base measure to the specified target currencies using values
from the rate table.
Now that it's ready, you can create more conversions, add calculated measures based on the conversion, or
work with the converted values in your stories and analytic applications.
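The Dynamic rate selection described in the advanced settings above can be sketched as follows. This is an illustration only: the data structures and function are invented, and SAP Analytics Cloud resolves this lookup internally.

```python
# Illustrative sketch only: rate table rows and the lookup below are
# invented structures; SAP Analytics Cloud performs this internally.

def pick_rates(rates, source_version):
    """Dynamic setting: prefer rates for the version's rate version,
    then rates for its category, then generic rates with no category."""
    if source_version.get("rate_version"):
        return [r for r in rates
                if r.get("rate_version") == source_version["rate_version"]]
    by_category = [r for r in rates
                   if r.get("category") == source_version.get("category")]
    if by_category:
        return by_category
    return [r for r in rates if r.get("category") is None]

rates = [
    {"source": "EUR", "target": "USD", "category": "Actual", "rate": 1.08},
    {"source": "EUR", "target": "USD", "category": None, "rate": 1.10},
    {"source": "EUR", "target": "USD", "category": "Specific",
     "rate_version": "Plan2025", "rate": 1.05},
]

# A Forecast version has no Forecast-specific rates, so generic rates apply.
print(pick_rates(rates, {"category": "Forecast"})[0]["rate"])  # 1.1
```

A version assigned the rate version "Plan2025" would only ever pick the 1.05 rate, matching the note that specific versions use only rates from their rate version.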
Restrictions
The following usage restrictions apply when currency conversion is turned on in the model preferences:
Related Information
Currency conversion tables let you take source data from different currencies in a model and convert it, either
into a single currency for aggregation or into a set of currencies from a dimension.
Currency conversion tables are defined independently of models; you can then apply a selected table to any
model you create. Currency tables are listed on a separate Currency Conversion page, where you can add, copy,
and delete tables as required.
Note
Tenants based on the Cloud Foundry environment benefit from the latest user experience improvements
brought to the currency rate tables, with a redesigned toolbar and an improved Add Missing Rates dialog
with a search functionality. The user experience on tenants based on the Neo environment might differ.
The same currency table can apply to both models with measures and classic account models. For background
information about currency conversion features in each model type, see Work with Currencies in a Model with
Measures [page 779] and Work with Currencies in a Classic Account Model [page 778].
Exchange Rates
A currency conversion table defines exchange rates for all currencies that are in use. The currency conversion
is calculated as a source currency multiplied by an exchange rate to determine the equivalent value in a target
currency. Rates tables can be accessed straight from the Modeler, or from the Calculations workspace if your
conversion measure is missing rates.
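As a minimal sketch of the calculation just described: a source value multiplied by an exchange rate yields the target value. The helper function below is invented for illustration and is not an SAP Analytics Cloud API; Decimal is used to avoid floating-point artifacts with monetary values.

```python
# Minimal sketch (not an SAP Analytics Cloud API): a conversion multiplies
# the source value by the exchange rate. Decimal avoids floating-point
# artifacts when working with monetary values.
from decimal import Decimal

def convert(source_value: Decimal, exchange_rate: Decimal) -> Decimal:
    # source currency value x exchange rate = target currency value
    return source_value * exchange_rate

# 100.00 EUR at a EUR->USD rate of 1.10 yields 110.00 USD.
print(convert(Decimal("100.00"), Decimal("1.10")))  # 110.0000
```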
From here, you can maintain the rate table manually. On Cloud Foundry tenants, you’ll find a toolbar that helps
you add, copy, or delete rows, undo/redo actions, and open the Add Missing Rates dialog. On Neo tenants,
you’ll be able to add missing rates using the dedicated panel on the right.
Each row in a currency conversion table represents a rate, and has the following settings:
Mandatory Settings
• Source Currency: The three-letter currency code of the source currency. For example, EUR or USD.
• Valid From: This is the start date when the rate becomes effective. For example, if the latest rate is set for
January 1, 2020, the 2020 data will use this rate, but data for 2019 would use older rates. If there are no
rates with Valid From dates that precede the data, it won't be converted.
• Target Currency: The three-letter currency code of the target currency.
Caution
For each row of the table, make sure that source and target currencies are different to avoid generating
errors within the table.
• Exchange Rate: The source value is multiplied by this rate to convert it into the target currency.
Note
Currency conversions can be inverted. For example, if you have a rate for converting USD to CAD, the
inverse rate can be used to convert CAD to USD in case an explicit rate doesn't exist. However, rates
• Rate Type: This setting corresponds to the Rate Type property of an account or measure. Use it to set
different conversion rates for different types of data.
Caution
To convert data from accounts or measures, the rate type must match this setting in one of the valid
rates. A blank rate type in the currency conversion table won't match either closing or average rate
types in your accounts or measures.
When you enable currency conversion in a model, a Rate Type property is added to any accounts and
measures that exist in your model.
Accounts: For any accounts with an Account Type set, you’ll have to set a corresponding rate type before
you can save the model. INC and EXP accounts use the Average rate, that is, the average conversion rate
over a time period. That's because they include transactions that take place over the course of the period.
AST and LEQ accounts represent what your organization owns and what it owes at the end of the period.
They use the Closing rate, that is, the exchange rate at the end of the period.
For accounts without an account type, the setting is optional.
Measures: You can set a rate type for any base measures with currency data. For a model with measures,
the structure priority determines whether the rate type for the account or for the source measure is used
during currency conversions. The other structure's rate types have no effect. For details, see Set Structure
Priority and Create Custom Solve Order [page 701].
If you are using an older model, which was created before Rate Types were introduced in the Modeler, you'll first
need to migrate the model before you enable currency conversion. In this case, an additional Migrate option is
available in the Model Preferences. Select this option to start the automatic migration procedure.
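The rate-type rule for accounts described above can be restated as a small lookup. The mapping restates the rule from the text; the function itself is invented for illustration.

```python
# Sketch of the rate-type rule for accounts (the mapping restates the rule
# from the documentation; the helper function is invented for illustration).
ACCOUNT_RATE_TYPE = {
    "INC": "Average",  # income: transactions over the period
    "EXP": "Average",  # expense: transactions over the period
    "AST": "Closing",  # assets: balance at period end
    "LEQ": "Closing",  # liabilities & equity: balance at period end
}

def required_rate_type(account_type):
    # Accounts without an account type have no mandatory rate type.
    return ACCOUNT_RATE_TYPE.get(account_type)

print(required_rate_type("INC"))  # Average
```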
Optional Settings
You can use optional settings to maintain specific rates for different categories and versions of data. These
settings can help you compare scenarios with different exchange rate forecasts, for example.
• Category: If you want to create separate rates for one or more categories (such as actuals, budget,
forecast, and so on), select a category for the rates. Leave this setting blank to create a generic conversion
rate. Generic rates will apply if there aren't any rates for the specific category of the data.
• Rate Version: You can create a rate for specific versions and scenarios using rate versions. Set the Category
to Specific and then type a name for the rate version. You can assign public versions to a rate version in
the Modeler, overriding the rate for the category. Users can also apply these rates when creating their own
currency conversions, or versions in a classic account model.
Note
Specific versions and calculations will only use rates from that rate version. Generic rates without a
category are not applied to the Specific category.
You can create a currency conversion table by copying and pasting data from a file, or by importing data from
a data source. For details on how to import and schedule currency conversion data from a data source, see
Import Currency Conversion Data [page 790].
1. From the Modeler start page, go to the Currency Conversions tab and select Currency Conversion Table.
2. Provide a name for your table, and select Create.
3. Type currency conversion data into the table, or copy and paste data from a file such as a spreadsheet.
Caution
For each row of the table, make sure that source and target currencies are different to avoid generating
errors within the table.
Related Information
Instead of copying and pasting currency data into a currency conversion table, you can import currency data
from supported data sources, and schedule updating of the currency data.
1. From the Modeler start page, go to the Currency Conversions tab and select Currency Conversion
Table.
2. Provide a name for your table, and select Create.
3. Save the empty table.
4. Switch back to the Currency Conversions tab, and select the check box beside the empty table.
Note
Scheduling an import job from a rate table isn't supported by SAP BPC.
1. From the Modeler start page, go to the Currency Conversions tab and select From an SAP BPC
Import.
2. Choose a BPC connection type, and select a BPC system from the list of connections. You can choose from
the following connection types:
• SAP BPC for SAP BW/4HANA
• SAP BPC 10.1 for Microsoft platform
• SAP BPC 10.1 for SAP NetWeaver
Note
You can create a new connection on the fly by selecting Create New Connection and following the
instructions in Import Data Connection to an SAP BPC System [page 480].
3. When importing a currency table from BPC, you can only create a new query. The option Create a new query is selected by default. Select Next.
4. Enter a name and description for the currency conversion table and choose Next.
5. Specify the source BPC rate model by choosing the environment and model. Then name the query, and
choose Next.
6. In the Data Selection page, filter the source BPC dimension members to be imported for each dimension,
and choose Next.
7. In the Member Mapping page, map the source BPC rate account members and category members to the
corresponding SAP Analytics Cloud rate type and version members, respectively.
8. When you've finished mapping, select Finish.
The new conversion table is created. Save the table, and view it in the list on the Currency Conversion tab.
5. Select .
Related Information
For a classic account model, you can enter measures that have been previously converted to a different
currency, if you don't want SAP Analytics Cloud to convert these measures again.
Note
In a model with measures, you can just create a separate base measure for the preconverted data. See
Work with Currencies in a Model with Measures [page 779] for details.
Typically, you would use a currency conversion table when you want to analyze data from multiple regions
with multiple currencies, or from multiple transaction currencies. But if your system has already calculated the
currency conversions, and you'd like to use those conversions instead of a conversion table in SAP Analytics
Cloud, you can import both the source currency data and the converted data.
The source currency data and converted data appear as separate measures that you can use in stories and
tables.
Note
If you select currencies in a story or table that are not the default currency, the data is converted using the
selected currency conversion table.
Note
• This feature applies only to data in the Actual category, not to Budget, Planning, Forecast, or other data
categories.
• Copying Actuals into Budget, Planning, or Forecast allows you to lock down, or fix, the default or local
currency, with the non-locked measure calculated based on currency rates.
• This feature is available with new blank models or when importing a file from your computer.
• The Preconverted Actuals switch is enabled only when Currency Conversion is switched on.
• Preconverted actuals cannot be edited in SAP Analytics Cloud after they have been loaded using data
entry or advanced formulas.
Context
Note
The steps below are geared towards classic account models, and are valid both when creating a blank model and when creating a model from a file, unless stated otherwise. In a model with measures, these steps don't
apply. You can create a separate base measure to hold your preconverted currency data. See Work with
Currencies in a Model with Measures [page 779] for details.
For information about currency conversion in a classic account model, see Work with Currencies in a Classic
Account Model [page 778].
Procedure
1. Create a new model, either from scratch or from a data file such as an Excel spreadsheet.
Note
The data should not yet contain preconverted actuals, because the model is not yet configured to
accept preconverted actuals. If the Excel worksheet contains a column of preconverted actuals data,
temporarily delete that column before importing the file.
For more information on how to create a model, see Create a New Model [page 645].
2. If you're creating a model using a data file, do the following steps. If you're creating a blank model, skip to
step 3.
a. Before saving the model, during the data preparation, select Enable Planning if you want the option to
add currency conversion later in stories, and select a Date dimension.
b. Select Create Model before doing any mapping.
c. Switch to the Model Structure view.
Note
If you're creating a model using a data file, choose the currency that your preconverted actuals
have been converted to.
Note
If you have many members in the Account dimension, it can be impractical to set the Currency value
for each member individually.
An alternative method is to prepare your data file by adding Account Type, Rate Type, and Units
& Currencies columns in your file, and then when you import the file, map those columns to the
Account dimension. As long as the Update dimension with new values mapping option is selected (it's
selected by default when importing data), you can load those values for all members at once. For more
information, see Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and
Prepare Fact Data for a Model with Measures [page 728].
Note
Once Currency Conversion is switched on, the model needs a currency dimension, which defines the
organization of the data in the model. For example, the Organization dimension might include a list of
regions, each of which corresponds to a particular currency. These currencies are the Local Currency
(see step 11). For more information, see Work with Currencies in a Classic Account Model [page 778].
8. Click Import Data , choose the data file that contains the preconverted currency data, and select
Import.
9. Select the imported data from the Draft Sources list, and map the data to the model's dimensions.
First, map the account dimension (must be actuals). After you've mapped the account dimension, the
Measures card can be expanded to two cards: Local Currency (also known as source currency) and Default
Currency. Drag the preconverted actuals data to the Default Currency card, and the standard measure data
to the Local Currency card.
See Combine Data with Your Acquired Data [page 722] for details on mapping data.
10. Select Finish Mapping, and then save your model.
After creating the model, you'll typically create a story with a table to include the preconverted actuals.
14. Select the table, and then open the (Builder) panel.
17. Point to the calculation you just added, and select (Manage Filters).
Related Information
As planning users work with model data in stories, they may try to analyze currency data that depends on rates
that do not exist in the rate table.
Context
For example, if a user creates a Forecast version in a story that uses currency conversion, but the currency
table only has rates that apply to the Actuals and Budget versions, the forecast data cannot be converted.
The way you add missing rates differs depending on whether your tenant is based on a Cloud Foundry or Neo
environment:
• On Neo tenants, you add missing rates using the right panel within the Modeler. There is no search
functionality.
• On Cloud Foundry tenants, you add missing rates using a dedicated dialog. There is also a search
functionality to help you look for missing rates and add them quickly to the rate table.
When working with currency conversion in the Calculations space, the application lets you know in the
calculation preview area if rates are missing in a given rate table. When that’s the case, an Add missing rates link
is displayed to take you straight to the rate table. The same link is also displayed in stories. This link takes you
to the Add Missing Rates panel on Neo tenants, or Add Missing Rates dialog on Cloud Foundry tenants.
The Add Missing Rates dialog helps you search for missing rates and add exchange rates quickly. Depending on
the number of rates and currency combinations in the table, it can be useful to run a quick search to look for missing rates, especially if the rate table has a large number of rates.
For classic account models, any currency you add to a rate table applies to every measure in the table. The new model type, however, offers more flexibility and allows you to select the base measure you would like to maintain. As a result, the Add Missing Rates panel can be slightly different depending on the model type you're working with.
The list shows a log entry for each time a currency conversion in a story has failed due to a missing rate in the
currency table.
If the error occurred in a public version, you can select Analyze next to the error to add the rate using the
Add Missing Rates panel.
Note
In the current release, the Analyze function is not available for private versions.
Procedure
1. In the Add Missing Rates panel, select a model that uses the rate table you want to update.
2. Select an optional Base Measure.
This helps in case you’re looking to update a rate for a given measure rather than for the whole model, and
also limits the number of results returned when running a search. Note that this option is only available for
the new model type, and isn't available for classic account models.
3. Under Target Currency, if you want to manually add a fixed target currency, select Fixed and type the
currency code. Or, if you want the target currency to be derived from a currency dimension, select From
Dimension, and select the currency dimension.
Note
The From Dimension option is only supported by the new model type, and isn't available for classic
account models.
• To search for missing rates that apply to a particular rate version, click Specific Rate Version and select
a Rate Version.
• If you’re looking for generic rates that don’t apply to a particular category, leave this field blank, and
make sure Add category-dependent rates is unchecked.
• If you’re looking for missing rates that apply to category specific rates, leave the field blank, and make
sure Add category-dependent rates is checked.
5. Specify how you’d like to run the search:
• Select Booking Date to find missing rates on the booking date only.
• Select Booking Date - 1 and then select a time period, to find missing rates during the time period
preceding the booking date.
• Select Booking Date + 1 and then select a time period, to find missing rates during the time period
following the booking date.
Note
If the application doesn’t return any results, reset or change the search filters, and run the search
again.
7. Click Add to update the rate table with the new rates or Cancel if you don’t want to keep the new rates.
After you’ve added the new rates, the application automatically highlights them in the rate table.
8. In the toolbar, click Save to save the rate table.
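Conceptually, the search above looks for required currency and date combinations that no rate row covers. A rough sketch, with invented data structures (a rate is "missing" when no row's Valid From date is on or before the date that needs converting, consistent with the Valid From behavior described for rate tables):

```python
# Invented sketch: a rate is "missing" when a required (source, target, date)
# combination has no row whose Valid From date is on or before that date.
from datetime import date

def find_missing_rates(required, rate_table):
    missing = []
    for source, target, day in required:
        covered = any(
            r["source"] == source
            and r["target"] == target
            and r["valid_from"] <= day
            for r in rate_table
        )
        if not covered:
            missing.append((source, target, day))
    return missing

rate_table = [{"source": "USD", "target": "EUR", "valid_from": date(2020, 1, 1)}]
required = [
    ("USD", "EUR", date(2020, 6, 1)),  # covered by the 2020-01-01 rate
    ("USD", "EUR", date(2019, 6, 1)),  # predates every USD->EUR rate
    ("CAD", "EUR", date(2020, 6, 1)),  # no CAD->EUR rate at all
]
print(len(find_missing_rates(required, rate_table)))  # 2
```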
Related Information
Procedure
Note
Only base measures with currency enabled are available in the dropdown menu.
5. Under Target Currency, if you want to manually add a fixed target currency, select Fixed and type the
currency code. Or, if you want the target currency to be derived from a currency dimension, select From
Dimension, and select the currency dimension.
Restriction
The From Dimension option is only supported by the new model type, and isn't available for classic
account models.
Results
Back in the grid view, the rates you have added are highlighted in yellow.
Depending on your business case or the sensitivity of your data, you might need to restrict access to your data. Data can be secured at different levels in the Modeler. You can apply security settings to models and dimensions,
and you can also apply more detailed restrictions.
This section provides an overview of the security settings available. You'll find links in the sections below that
redirect you to more detailed content.
The foundation of security comes from the user role assignment. For each role, you can define permissions. In
the Security Roles area, you can assign general permissions for all models, but you can't assign permissions for
individual models. For information on user roles, see Standard Application Roles [page 2838].
Users must be assigned a role with the same overall model permission level as the model type they want to
access. For example, someone assigned only Read access to Analytic Models in their role (and not also granted
Update, Delete, or Maintain) will only ever be able to view data from the models they are allowed to read, even if
they are additionally granted Update, Delete, or Maintain permissions on those models.
To read a model, you need at least one of the following rights:
• Rights to read the model via the sharing rights that are set by the user when they share it.
• Read rights on the Planning Model or Analytics Model application privilege.
• Read rights on the Private Files application privilege.
Note
Depending on where the model is saved, you might need read rights on either the Private Files or Public Files application privilege. For example, if the model is saved in the Models folder under Public, you need read rights on the Public Files application privilege. For more information, see Permissions [page 2844].
If you don't have one of these three rights, you won't be able to read (open or use) the model.
Note
Users with an SAP Analytics Cloud for planning, standard edition license must be assigned a role with
Maintain permissions on planning models and analytic models. Users also need the Read and Maintain
permissions granted via the Share settings directly on the model itself to upload or change data. For more
information, see Create Roles [page 2877] and Share Files or Folders [page 193].
Another security level comes from the sharing settings and the file location of models. On the Files page, you
can select the location for where to save your model.
• When a model is saved in a private folder (or in the root of the My Files view), only the user who creates it
can see it. However, they can share the model with other teams or users.
• When a model is saved in a public folder, it is automatically shared with everyone who can access the folder, based on the individual sharing settings given to those users.
• When a model is saved in a workspace folder, only the user who creates it can see it. However, they can
share the model with teams and users who are members of that workspace.
Note
When sharing a file, for example a story, that depends on underlying objects, like models or datasets, you'll also need to share those other objects with the same access level as the file. Furthermore, if the dependencies are from different workspaces, you must also request access to the workspaces and objects on behalf of those users.
Models can be shared the same way that stories and folders can be shared. In the sharing dialog, you can
choose the users and teams and give them a predefined access level (View, Edit, or Full Control) or custom
access. For information about sharing files, see Share Files or Folders [page 193].
Adding version security to a model lets you restrict read, write, and delete access to public versions, to prevent
other users or teams from changing them. Users who have read-only permission for public versions can still
copy data to a private version that they can edit. Users who don't have write permissions can't publish into
a public version. With delete permissions for a public version, a user can read, publish to, and delete a public
version.
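The permission implications just described can be sketched as a small rule (the helper is invented for illustration): Write on a public version implies Read, and Delete implies both Read and Write.

```python
# Sketch of the implication described above (helper invented for
# illustration): Delete on a public version implies Read and Write,
# and Write implies Read.
def effective_permissions(granted):
    perms = set(granted)
    if "delete" in perms:
        perms |= {"read", "write"}
    if "write" in perms:
        perms.add("read")
    return perms

print(sorted(effective_permissions({"delete"})))  # ['delete', 'read', 'write']
```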
This setting determines whether the model is visible to users other than the owner. If you switch on Model Data
Privacy, only the owner of the model and user roles that have specifically been granted access can see the
data. Disable this switch if you want the model and data to be public.
You can set permissions for individual dimensions using Data Access Control.
Example
• To ensure that product managers can see the financial results only for their products, you enable the
DAC for the dimension Product.
• To prevent some planners from deleting a public version, you enable data access control for the version
dimension and don't give them delete access for that version.
For more information, see Set Up Data Access Control [page 803].
Validation Rules
For planning models, validation rules let you define the allowed member combinations across multiple
dimensions to prevent improper data entry and planning operations in stories and analytic applications. The
system validates the data in the model according to the validation rules you define for this model, and planners
are only allowed to enter data or use planning functions for the specified member combinations.
Validation rules do not impact data import and data deletion. To prevent planners from deleting public versions,
use data access control.
For more information, see Define Valid Member Combinations for Planning Using Validation Rules [page 2547].
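A validation rule can be thought of as a set of allowed member combinations against which each data entry is checked. The sketch below is invented for illustration; the member names are hypothetical.

```python
# Invented sketch of a validation rule: data entry is only allowed for
# member combinations that the rule permits. Member names are hypothetical.
ALLOWED = {
    ("EMEA", "Book"),
    ("EMEA", "Pen"),
    ("APJ", "Book"),
}

def entry_allowed(region, product):
    return (region, product) in ALLOWED

print(entry_allowed("APJ", "Pen"))  # False
```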
Unlike most other data security features, data locks are designed to change frequently over time. For planning
models, data locking lets you prevent changes to specific data at different stages of the planning process, while
also delegating control over the lock state to other users.
Except by users with special permissions, locked values can’t be changed by importing or deleting values in the
modeler, or by data entry or other planning operations. Data locking doesn’t prevent public version deletion,
though; use data access control instead.
Context
When working in the Modeler, the application triggers user validation to check on data access and rights on
dimensions linked to your model. You can then easily detect and unassign missing or invalid users.
Note
The validation is automatically triggered when you save or open a model, or when you save or open a public
dimension outside of a model (for example from a public dimension list).
Procedure
Note
If the validation finds no missing or invalid users with data access or rights on a dimension, the icon is
grey and is not accessible.
You can restrict access to data in your model by setting read and write permissions for individual dimension
members. You can activate this security feature for any dimension in the model.
While setting up DAC, you need to consider that the roles and permissions assigned to a user, and the security set at the model level, take priority over DAC:
• A user with a role that has all permissions, like a BI Admin role, can see all data of all models even if you set DAC on your dimensions. A role with the "Full Data Access" privilege provides unrestricted access to all models and dimensions. For more information, see Standard Application Roles [page 2838].
• At role level, you can define for every dimension whether a user has UPDATE (change the structure), MAINTAIN (change the data), and READ (read data and metadata) permissions. For more information, see Permissions [page 2844].
• On an individual dimension, you can then define data access control, which lets you specify at member level whether users can READ or WRITE the corresponding facts. With the Hide Parents option, the dimension data is protected as well. For more information, see the next sections below.
• At model level, you can set the Model Data Privacy option, which lets you specify data security (READ and WRITE permissions) at role level. For more information, see Learn About Data Security in Your Model [page 799].
• In addition, the file repository lets you create private models, which other users cannot access by default. Those models can then be shared with others. For more information, see Share Files or Folders [page 193].
• Hierarchy support is only available for dimension-based DAC.
There is no priority between roles and access control set at the model level: you access data based on your role AND the DAC that has been set at the model or global dimension level. In a nutshell, you can see the data of a model only once all the role and access control permissions have been satisfied.
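The combined rule can be sketched as follows. Everything here is invented for illustration (user and member structures, field names); the point is only that access requires both the role permission and the member-level DAC grant, while the "Full Data Access" privilege bypasses DAC.

```python
# Invented sketch: reading a fact requires the role permission AND the
# member-level DAC grant; the "Full Data Access" privilege bypasses DAC.
def can_read(user, member):
    if "Full Data Access" in user["privileges"]:
        return True
    return "READ" in user["role_permissions"] and user["id"] in member["dac_read"]

analyst = {"id": "ana", "privileges": set(), "role_permissions": {"READ"}}
admin = {"id": "adm", "privileges": {"Full Data Access"}, "role_permissions": set()}
product_book = {"dac_read": {"ana"}}

print(can_read(analyst, product_book), can_read(admin, product_book))  # True True
```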
You can restrict data access to individual values in the model to specific users using the Data Access Control
(DAC) setting. When DAC is on, two extra columns, Read and Write, are added to the dimension grid so that you
can apply individual settings to each row. To enable dimension security, switch on Data Access Control in the
Dimension Settings.
You can select one or more users (or simply all users) who will have access to the data.
Note
Restrictions created using Data Access Control apply only to transaction data (fact data). Master data
(members in member selection dialogs) will still be visible.
Saving DAC settings can happen in the background, depending on the number of dimension members with DAC entries. After changing DAC settings, you are notified twice: once right after you've saved the DAC settings
Note
The Read and Write columns are added to the dimension grid.
Note
When you switch Hide Parents on, you restrict which dimension members can be seen in the Modeler
or in Stories: If this option is enabled, users will see only the members that they have at least Read
access to. For more information, see Example 2 at the end of this topic.
4. You can now use the two new columns Read and Write to control access to all rows of the grid by selecting
one or more users in either or both of the columns.
Note
Each user who is granted Write access for a member automatically receives permission to read
the data as well. Likewise, a user who receives the Delete permission for a member of the Version
dimension also receives Read and Write permissions for it.
Note
If data access control is enabled for a dimension in the model, it restricts changes to data in public
versions, but not in private versions. In a private version, you may want to simulate a scenario that involves
changing dimension members for which you do not have Write permissions, for example. In this case, you
cannot save the changes to those members to a public version. Only members for which you have Write
permissions will be updated.
Note
• Only users with the Update privilege (defined in Security Roles) can set DAC for a version dimension.
• Version security applies only to planning-enabled models.
• The default read/write/delete permission is "none". You must explicitly enable read/write/delete
access to users or teams, including yourself.
You can restrict access to data contained in your model with a filter, and set read and write permissions based
on a custom role.
1. Enable Model Data Privacy for your model. For example, my model name is <Model1>.
2. Create a custom role <A>.
3. Open the created role <A> and add the model <Model1> using Select Model tab.
4. Select the access type you want to allow for this role: Full Access or Limited Access.
Note
If you select Limited Access you can choose between Add Read Access or Define Write Access. You can
then define a filter: For example, you add a filter on <Product = Book>.
5. As a result, users assigned to the custom role <A> can read only the data allowed by the filter <Product = Book> when opening <Model1>.
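The effect of a role-level read filter such as <Product = Book> can be sketched like this. The data and helper names are illustrative assumptions, not part of SAP Analytics Cloud:

```python
# Simplified sketch of Model Data Privacy: a role's read filter restricts
# which fact rows a user assigned to that role can see.
facts = [
    {"Product": "Book", "Revenue": 120},
    {"Product": "Pen", "Revenue": 40},
]

def visible_rows(rows, read_filter):
    """Keep only rows matching every key/value pair of the role's filter."""
    return [r for r in rows if all(r.get(k) == v for k, v in read_filter.items())]

print(visible_rows(facts, {"Product": "Book"}))
# [{'Product': 'Book', 'Revenue': 120}]
```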
Example 1: How the data permissions restrict what users can do with a model:
Example
The model P&L Planning has the following permission on its dimensions:
The user who created the model has defined data access for the Account dimension as follows:
The user who created the model has defined data access for the Organization dimension as follows:
[Table: Read and Write access settings per Organization member (Germany, France, US, China), and the model's fact data (for example, US: 200 and 300)]
When Martin Brody opens his story and adds the organization to the row and the account to the column, he
will see only the following data:
EMEA 300
Germany 200
France 100
Note
Don't forget that what Martin Brody can see also depends on his role and permission.
Example
When you switch Hide Parents on, you restrict which dimension members can be seen in the Modeler or in
Stories: If this option is enabled, users will see only the members that they have at least Read access to:
• BI Content Viewer: will be able to see only the members that they have at least Read access to.
• Planner Reporter: will be able to see only the members that they have at least Read access to.
• Viewer: will be able to see only the members that they have at least Read access to.
Example
If you switch on the Hide Parents setting, Alberta and Lethbridge are not shown, and British Columbia,
Calgary and Edmonton are moved up to the top hierarchy level:
If you turn off the Hide Parents setting, Calgary and Edmonton are displayed below their parent member
Alberta:
Example
If you’ve been granted access to a node that is a parent in more than one hierarchy, you won’t be able to view all members of the second hierarchy: you’ll see only the members you were given access to in the first hierarchy. Let’s take this example with the parent “Locations”, which is the parent of two hierarchies, “Markets” and “Offices”:
In the first hierarchy, the data access control cascades down: if you have access to the parent, you can see
all children. But in the second hierarchy, this does not apply. Even if you have access to the parent, you will
only see the children that are part of the first hierarchy and for which you have at least read authorizations.
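The cascading behavior in the first hierarchy can be sketched as follows. The hierarchy and member names are made up for illustration; this models the described behavior, it is not SAP code:

```python
# Simplified sketch: in the first hierarchy, read access cascades from a
# parent down to all of its descendants.
hierarchy = {
    "Locations": ["Markets", "Offices"],
    "Markets": ["EMEA", "Americas"],
    "Offices": ["Paris", "Boston"],
}

def readable_members(granted, hierarchy, root="Locations"):
    """Return all members readable given direct grants plus cascading."""
    readable = set()
    def visit(node, inherited):
        allowed = inherited or node in granted
        if allowed:
            readable.add(node)
        for child in hierarchy.get(node, []):
            visit(child, allowed)
    visit(root, False)
    return readable

print(sorted(readable_members({"Markets"}, hierarchy)))
# ['Americas', 'EMEA', 'Markets']
```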
Related Information
Another security level comes from the sharing settings and the file location of models. By default, models saved to a private location (for example, My Files) are only visible to the user who created the model. However, you can share your model with other teams or individual users. You can also make the model accessible to others by saving it to a public folder. Saving a file to a public folder automatically shares it with everyone who can access that folder, as long as their roles give them permissions.
Note
Model permissions take priority over story permissions. For example, if you want a user to view all data in a story, you need to share the story and give the user at least Read permissions for each of the models that are used in the story.
Models can be shared the same way that stories and folders can be shared. In the sharing dialog, you can
choose the access level for the users or teams that the model is shared with: View, Edit, Full Control, or a
Custom access level. Models that aren't shared can't be viewed or modified by anyone but the model owner.
This section acts as a reference for all procedures to import data in a model for all supported data sources. Use
the links below to jump straight to a specific data source.
Personal Files
Note
The data is imported and analyzed, and then mapped to the dimensions of the target model. Note that the data
to be imported must fit the structure of the existing target model.
The workflow to import data from a file into an existing model is:
Data files can be in your local file system or in your network. The source data can be an Excel spreadsheet
(.xlsx) or a delimited text file (.csv or .txt). If you import data from Microsoft Excel and the data is saved in separate sheets in the Excel workbook, you can either choose which sheet to import (from a local file system), or the first sheet is automatically imported (from a network file).
The source data must include columns that can be matched to the existing dimensions in the selected
model. It must also include transactional data (measures) in numeric format (for example, 1,587,370.50 or
1.587.370,50 ) but not scientific notation.
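As a rough illustration of the accepted measure formats, a pre-check like the following (hypothetical — not part of SAP Analytics Cloud) accepts both decimal conventions shown above and rejects scientific notation:

```python
import re

# Hypothetical pre-check for measure values in a source file:
# accepts 1,587,370.50 (comma thousands, dot decimal) and
# 1.587.370,50 (dot thousands, comma decimal), rejects scientific notation.
DOT_DECIMAL = re.compile(r"^-?\d{1,3}(,\d{3})*(\.\d+)?$")
COMMA_DECIMAL = re.compile(r"^-?\d{1,3}(\.\d{3})*(,\d+)?$")

def is_valid_measure(value: str) -> bool:
    if "e" in value.lower():          # reject scientific notation like 1.5e6
        return False
    return bool(DOT_DECIMAL.match(value) or COMMA_DECIMAL.match(value))

print(is_valid_measure("1,587,370.50"))  # True
print(is_valid_measure("1.587.370,50"))  # True
print(is_valid_measure("1.5e6"))         # False
```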
4. Select File .
5. Choose whether you want to import data from a file on your local system, or from a file server.
If you don't see the option to import data from a file server, see Allow Data Import and Model Export with a
File Server [page 712].
Tip
If you import a file from a file server, you can also schedule imports from that file. For more information,
see Update and Schedule Models [page 772].
6. If you're importing from a file server, choose a file server connection, or select Create New Connection.
If you create a new file server connection, specify the path to the folder where the data files are located. For
example: C:\folder1\folder2 or \\servername\volume\path or /mnt/abc.
7. Choose the file you want to import.
The data appears in the data integration view, where you can complete the mapping of your new data to
the model's dimensions.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Related Information
Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479]
Allow Data Import and Model Export with a File Server [page 712]
SAP BPC
Prerequisites
• You use SAP Business Planning and Consolidation, version for BW/4HANA, and the SAPCP cloud
connector is installed and configured on premise, or
• You use SAP Business Planning and Consolidation, version for SAP NetWeaver (BPC NW) of version 10.1
and the SAPCP cloud connector is installed and configured on premise, or
• You use SAP Business Planning and Consolidation, version for Microsoft of version 10.1 and both the
SAPCP cloud connector and SAP Analytics Cloud agent are installed and configured on premise.
Note
For SAP BPC 11.0/11.1, version for BW/4HANA and SAP BPC 10.1, version for SAP NetWeaver, only
importing data from a BPC standard configuration is supported.
During the data import, if a member doesn't belong to any hierarchy, it is added to Hierarchy1 by default as a root member with no parent or child members.
For SAP BPC 11.0/11.1, version for BW/4HANA and SAP BPC 10.1, version for SAP NetWeaver, if the BPC
environment does not have the HANA Accelerator on (this can be checked via the environment level IMG
parameter ACCELERATOR_ON in the BPC system), make sure you refer to the SAP Note 1858257 to
enable member set functions in BPC first.
If you import data from BPC 10.1, version for SAP NW on BW 740, make sure you apply the SAP Note
2550738 on the BPC system beforehand.
If you want to import a larger volume of transaction data from BPC to SAP Analytics Cloud without
changing your memory configuration, you can apply the SAP Note 2755379 or upgrade your BPC
system to a minimum support package specified in the note.
1. From the Modeler start page, select Start with data SAP BPC .
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources in the
list. You can filter by data source type or by category.
Note
If the connection you select is an OAuth connection that you haven't authenticated, or your OAuth
authentication has expired, then a dialog will pop up for you to enter your BPC credentials and
complete the OAuth authentication process. If SSO is enabled, you can directly sign in without
entering your BPC credentials again.
You can also create a new connection on the fly by selecting Create New Connection and following
the instructions in Import Data Connection to an SAP BPC System [page 480].
2. Choose to create a new query or copy a query from an existing BPC model for mapping later.
Click Next after completing all the settings on this page.
3. On the next page, enter a Model Name for the new target model, and optionally a Description.
4. If you want users to be able to write data back to the BPC model, choose Write-back model. Otherwise,
choose the Basic model.
Note
In a write-back model, transaction data is synchronized from time to time based on your model settings. Master data imported from BPC is read-only and cannot be changed.
5. When the model type is Basic model, you can choose to create new dimensions as public dimensions
or private dimensions.
Note
When currency conversion is enabled, only the local currency is imported from BPC. When
currency conversion is disabled, all data values of local currencies and pre-converted currencies
are imported without any currency unit.
7. To create a new conversion table by importing it from BPC, select Add New Currency Conversion Table
from the Currency Conversion Table list. For detailed steps, refer to Import Currency Conversion Table
From SAP BPC [page 791].
Click Next after completing all the settings on this page.
3. In the next Select Model from BPC dialog, select an Environment and then select a Model.
A name is automatically generated for the new BPC query, but you can rename it freely.
4. To build the BPC query you just created:
1. If you have chosen a basic type model, drag dimensions from Available Data to the Selected Data area.
2. To restrict the data to be imported, drag dimensions to the Filters area. Set a filter by selecting the filter
icon. In the filtering dialog, select members to be included, and then select OK.
Click Next after completing all the settings on this page.
5. In the dimension mapping dialog, map the source BPC dimensions to the corresponding new dimensions
in SAP Analytics Cloud.
1. Select a type for each dimension from the Dimension Type list.
2. For the Category dimension, an additional Version Mapping dialog is displayed to let you map the BPC
category names to the SAP Analytics Cloud category names.
3. If you didn't toggle on the Currency Conversion setting in step 3, BPC's original data values are
imported into SAP Analytics Cloud.
In this case, if there's only a single currency used in your BPC system, you can select the Default
Currency option and specify the currency; if different currencies are defined for different entities in
BPC, you can select the option Local Currency is taken from:, and choose an entity attribute that
defines the local currency.
When you have completed the mapping, select Finish.
6. The new model is created with BPC imported data.
About Synchronizing BPC Delta Data to SAP Analytics Cloud in a Write-back Model
By directly clicking the (Refresh) icon in the (Version Management) panel for the version that you want
to refresh from BPC, you can bring back only delta data from BPC in BPC 10.1 NW on BW 750 and BPC, version
for BW/4HANA.
Important Information:
• If you are using a BPC 10.1 NW on BW 7.50 version lower than SP09, make sure you first install the SAP
Note 2504877 . Otherwise, full data will be loaded into SAP Analytics Cloud when syncing data.
• Syncing delta data to SAP Analytics Cloud is currently not supported in BPC 10.0 Microsoft version, BPC
10.1 Microsoft version, BPC 10.0 NW version and BPC 10.1 NW on BW740.
• Please don't activate BPC data on the BW DataStore Objects (advanced) or compress BPC data (by
running a Light_Optimize data manager package) on the BW InfoCubes/InfoProviders at BPC side.
Note
When a user clicks the Refresh button, other users won't be able to refresh or schedule to refresh the same
model at the same time.
During this process, only transaction data are synchronized. Master data imported from BPC is read-only.
Performance Tips:
When synchronizing BPC delta data to a write-back model, data calculated by BPC member formulas is retrieved by default. This may add an extra performance burden.
To avoid this, we recommend setting the BPC model-level IMG parameter "SKIP_GET_FORMULA_DATA" to 'X'. BPC-calculated data will then not be retrieved by the delta query for a write-back model.
In a standard configuration of BPC 10.1 on NW 7.50 or higher, most InfoProviders are InfoCubes, but there
might be some rare cases in BPC 10.1 on NW 7.52 that are advanced DataStore objects (aDSO).
You can customize the standard data size for an InfoCube based on your environment settings, which is not
supported in an aDSO.
This defines the threshold for the total number of requests to be imported or exported each round in an
InfoObject in NW 10.1. To do so, set the following parameter in table RSADMIN:
Note
You can improve deployment performance by increasing that number; however, this depends on
the maximum number of table partitions allowed in your RDBMS. For more information, see the
recommendations of the RDBMS engine used by your NetWeaver platform.
Example
For example, in SQL Server the maximum number of partitions is 1,000. This number is fixed by the database engine. You can calculate the limit as follows: 10,000 * (maximum number of partitions per table).
For SQL Server 2005, 2008, and 2008 R2, the limit is 1,000 partitions per table => 10,000 * 1,000 = 10,000,000 rows per InfoCube.
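The calculation above can be expressed directly. The 10,000 factor and the SQL Server partition count come from the text; other partition counts are examples:

```python
# Rows per InfoCube = requests-per-round threshold * maximum number of
# table partitions allowed by the RDBMS (e.g., 1,000 for SQL Server
# 2005/2008/2008 R2 — check your RDBMS documentation for your engine).
ROWS_PER_PARTITION = 10_000

def max_rows_per_infocube(max_partitions: int) -> int:
    return ROWS_PER_PARTITION * max_partitions

print(max_rows_per_infocube(1_000))  # 10000000 rows for SQL Server
```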
About Import Data to a Basic Model or an Existing Model Created in SAP Analytics Cloud
To import data to a basic model or an existing model created in SAP Analytics Cloud:
Note
You can create a new connection on the fly by selecting Create New Connection and following the
instructions in Import Data Connection to an SAP BPC System [page 480].
6. Choose to create a new query or copy a query from an existing BPC model for mapping later.
Click Next after completing all the settings on this page.
7. In the next Select Model from BPC dialog, select an Environment and then select a Model.
A name is automatically generated for the new BPC query, but you can rename it freely.
8. To build the BPC query you just created:
1. If the SAP Analytics Cloud model is a basic type model, drag dimensions from Available Data to the
Selected Data area.
2. If you want to restrict the data to be imported, drag dimensions to the Filters area. Set a filter by
selecting the filter icon. In the filtering dialog, select members to be included, and then select OK.
Click Next after completing all the settings on this page.
9. In the dimension mapping dialog, check and map the source BPC dimensions to the corresponding new
dimensions in SAP Analytics Cloud. When you've completed the mapping, select Next.
10. In the Import Method dialog, specify how any existing data in the model is to be treated, by choosing one of the following Import Methods:
• Clean & Replace: First deletes the existing data in the default replace scope, then updates existing SAP Analytics Cloud data with data imported from BPC. When there are new records from BPC that don't exist in SAP Analytics Cloud, the new records are added directly to the SAP Analytics Cloud model. In this case, you can use the check box for each dimension to decide whether to include or exclude specific dimensions in or from the update.
• Update: Updates the existing SAP Analytics Cloud data, and when there are new records from BPC that don't exist in SAP Analytics Cloud, the new records will be added directly to the SAP Analytics Cloud model.
11. If you choose the Clean & Replace method, you need to further define the default replace scope of the
import.
12. Choose Next. The model is updated with BPC imported data.
Results
Note
Values of calculated members in BPC are not directly imported into SAP Analytics Cloud. Instead, the BPC
formulas that produced the calculated members are imported into an unused FORMULA column of the
account dimension. You must recreate the formulas in the proper syntax for SAP Analytics Cloud in the
Formula column. For instructions on creating formulas, see All Formulas and Calculations [page 873].
Note
In the date dimension of the imported BPC model, if the number of the BASE_PERIOD, which represents
the fiscal month doesn't match the number of the calendar month of PERIOD, then the fiscal year doesn't
start with January. For example, in the following case, the fiscal year of this BPC model starts in April:
In the Modeler, you can still choose to disable fiscal year or change to another starting month.
Note that during data import and export, BPC denotes the fiscal year only by the calendar year it ends with. So if you plan to sync or export data back to BPC later, we recommend that you retain this Denote fiscal year setting; otherwise, the fiscal year will become inconsistent between the two systems.
Save your model. The BPC transaction data is imported into your model.
On the Data Management screen, your model is displayed in the Import Jobs section.
If you want to schedule the execution of import jobs, follow these steps:
• None: Select this option when you want to import the data manually.
• Repeating: The import is executed according to a recurrence pattern. You can select a start and end date and time as well as a recurrence pattern.
If you've enabled write-back to BPC, users can write data back to BPC directly from stories. For detailed
information, refer to the SAP BPC procedure.
• SAP Business Planning and Consolidation, version for Microsoft Platform (BPC for Microsoft)
• Standard configuration of SAP Business Planning and Consolidation, version for SAP NetWeaver (BPC for
NW)
• Standard configuration of SAP Business Planning and Consolidation, version for SAP BW/4HANA
Prerequisites
• To import from a BPC system, the on-premise BPC system must be specifically set up to connect with SAP
Analytics Cloud. Refer to Import Data Connection to an SAP BPC System [page 480] for more information.
• For SAP BPC, version for BW/4HANA and SAP BPC 10.1, version for SAP NetWeaver, if the BPC
environment does not have the HANA Accelerator on (this can be checked via the environment level
IMG parameter ACCELERATOR_ON in the BPC system), make sure you refer to the consulting SAP Note
1858257 to enable member set functions in BPC first.
• If you import data from BPC 10.1, version for SAP NW on BW 740, make sure you apply the SAP Note
2550738 on the BPC system.
• The imported BPC data should have at least month granularity. Before importing data, make sure that the
time dimension in your BPC model has the following properties: BASE_PERIOD, PERIOD, YEAR, LEVEL.
• The LEVEL property only supports the values YEAR, QUARTER, MONTH, DAY on base members. The
level of base members in a time dimension should be consistent.
Activities
The workflow to create a model from BPC data or to import BPC data into an existing model is:
You first need to select the connection to the BPC system. This can be an existing connection or one that you create on the fly during the import. For more information on creating connections, see Data Connections [page
263].
If you're creating a new model, you'll need to provide a name for the target model and optionally a description.
Note that the model name can't start with a numeral or any other non-alphabetical symbol, such as a space.
Next, specify the environment, which is a set of BPC models, and the specific model to be imported.
Note
SAP Analytics Cloud reads only periodic data from BPC because it doesn't have a YTD measure by default,
regardless of whether the BPC model is periodic or YTD. There's one exception: because the AST and LEQ
account types in SAP Analytics Cloud aggregate the latest data in the Date dimension, these two account
types will store and display YTD values correctly in SAP Analytics Cloud.
If the BPC model is the YTD type, for example, a consolidation-type model, then only importing into an SAP
Analytics Cloud Basic model is supported.
Mapping
• Dimension Type (when creating a new model): The type for each dimension. A type is automatically
suggested for some dimensions, but you can change or set the type by choosing one of the following
options from the list:
• Account: This dimension represents the accounts structure of your organization. You can have only
one dimension of this type per model.
• Organization: This dimension represents the business units that drive your business. Depending on
your organization, these could be operating units, geographic entities, cost centers, and so on. You can
have only one dimension of this type per model.
• Date: Specifies the smallest time period to be applied to the model. You can use year, quarter, month,
or day. It is created automatically when you create a model.
• Version: This is for the Category dimension.
• Generic: Any kind of business dimension that isn't an organizational one. This can include products,
channels, or sales representatives. You can add multiple dimensions of this type to a model.
In a write-back model, SAP Analytics Cloud can be used as a client extension for BPC models. You can perform
planning and analysis, write your planning data back to the BPC model, and bring the newest data in from the
BPC model at any time.
You can edit planning data in a public version of SAP Analytics Cloud, and when you save and publish the data,
it is automatically written back to the BPC model.
Note that a public version in edit mode can't be updated from the BPC model.
Disable currency conversion in SAP Analytics Cloud. If the BPC data contains pre-converted currency data, it is
imported into the model together with local currency values. In stories, you'll need to add the RPTCURRENCY
dimension to the drill state to see data for all the different currencies.
Enable currency conversion in SAP Analytics Cloud, and choose a rate table. If the BPC data contains pre-
converted currency data, it is not imported into the model. Only the local currency data is imported.
Related Information
Prerequisites
You are using a supported version of SAP S/4HANA. For more information, see System Requirements and
Technical Prerequisites [page 2723].
Context
Note
• SAP Analytics Cloud supports OData Version 4.0. Logical operators (such as Equal, Not Equal, Less than, Less than or Equal, Greater than, Greater than or Equal, Logical and, Logical or, Startswith, and substringof) are supported for S/4HANA. Logical negation (not), arithmetic operators, and functions are not supported. The following table lists the minimum requirements for S/4HANA OData
services:
• Number (Edm.Decimal): "gt", "ge", "lt", "le", "eq", "ne" — literal suffix "M"; value format: [value]m
• Number (Edm.Double): "gt", "ge", "lt", "le", "eq", "ne" — literal suffix "d"; value format: [value]d
• Number (Edm.Single): "gt", "ge", "lt", "le", "eq", "ne" — literal suffix "f"; value format: [value]f
• Number (Edm.Int64): "gt", "ge", "lt", "le", "eq", "ne" — literal suffix "L"; value format: [value]L
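Based on the operators and literal suffixes above, a numeric $filter expression appends the type's suffix to the value. As a rough sketch (the entity and property names are made up; this is not an SAP helper):

```python
# Hypothetical helper building OData $filter expressions with the typed
# numeric literal suffixes listed above (m = Decimal, d = Double,
# f = Single, L = Int64).
SUFFIX = {"Edm.Decimal": "m", "Edm.Double": "d", "Edm.Single": "f", "Edm.Int64": "L"}

def odata_filter(prop: str, op: str, value, edm_type: str) -> str:
    """Compose a $filter clause such as '$filter=NetAmount gt 100m'."""
    return f"$filter={prop} {op} {value}{SUFFIX[edm_type]}"

print(odata_filter("NetAmount", "gt", 100, "Edm.Decimal"))  # $filter=NetAmount gt 100m
print(odata_filter("Quantity", "le", 5, "Edm.Int64"))       # $filter=Quantity le 5L
```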
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
You can connect to services in the discovery service without having to create connections to all of
the individual services.
Build your query by moving data elements into the Selected Data and Filters areas. For more
information, see Building a Query [page 704].
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
SAP HANA
Prerequisites
• The HANA database must first be set up by the system administrator. The HANA views in the database
(analytic or calculation-type views) are available to create new models and datasets from.
• You need to install the SAP Analytics Cloud agent, with location ID. This location ID is configured through
the Cloud Connector, and the agent needs to be allowlisted there. For more information, see:
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Restriction
Next Steps
If your HANA data contains location information, you will need to add location dimensions. For more
information on live HANA data see Creating Geo Spatial Models from HANA Calculation Views [page 1483].
For information on acquired HANA data see Creating a Model with Coordinate or Area Data for Geospatial
Analysis [page 1479].
Related Information
SAP BW
Prerequisites
You use an SAP Business Warehouse (BW) system, version 7.3x or higher release, or an SAP BW/4HANA
system, SP4 or higher, and both the SAP Business Technology Platform (BTP) Cloud Connector and SAP
Analytics Cloud agent are installed and configured.
Note
If you're importing data via a BEx query using an SAP BW import data connection, see these SAP Notes:
2416705 2408693 .
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
You can select the filter icon to narrow down the number of data sources in the list. You can
filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
• To copy and edit an existing query, follow these steps:
1. Select Copy a query from a model.
2. Select a query from the list, and then select Next.
3. Make your changes, and then select Create.
• Or, to create a new query, follow these steps:
1. Select Create a new query and select OK.
2. Type a name for your query.
3. Select a BW query from the list, or search for a query by name, and then select Next.
4. If the data source contains prompts, select values for them. If you're using a variable that is filled
by customer exit, you can select the Use Backend Default option. Then when you refresh the data,
the data is based on the current value filled by the customer exit instead of the value saved when
the query initially ran.
5. Select dimensions and measures for your query.
(If you're importing data via a BEx query using an SAP BW import data connection, see the note in
the prerequisites above.)
6. You can select the icon to open the Select Presentations dialog, and then choose
which presentations you want to see. Depending on the characteristic, you could have these
presentations available:
Key (DISPLAY_KEY)
Key (Internal) (KEY)
Long Text (LONG_TEXT)
Medium Text (MIDDLE_TEXT)
Short Text (SHORT_TEXT)
Text (TEXT)
2. In the right hand Details side panel, expand Modeling and choose the Date dimension type.
3. Choose the appropriate Date Format.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Related Information
SAP Universe
Prerequisites
• The Cloud Connector and SAP Analytics Cloud agent are installed and configured.
• You use a supported version of SAP BusinessObjects Business Intelligence platform. For information on
supported versions, see System Requirements and Technical Prerequisites [page 2723].
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
You can select the filter icon to narrow down the number of data sources in the list. You can
filter by data source type or by category.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Related Information
SAP ERP
Prerequisites
• The SAP Business Technology Platform (BTP) Cloud Connector and SAP Analytics Cloud agent are
installed and configured.
• You use a supported version of SAP ERP Central Component. For information on supported versions, see
System Requirements and Technical Prerequisites [page 2723].
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
You can select the filter icon to narrow down the number of data sources in the list. You can
filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
• To edit an existing query, follow these steps:
1. Select a query from the list.
2. Select Modify existing query.
3. Make your changes, and then select Next.
4. If the data source contains prompts, select values for them.
5. Review the Preview data, and then select Done.
• Or, to create a new query, follow these steps:
1. Select Create a new query.
2. Type a name for the query.
3. Select an ERP object from the list, or search for an object by name, and then select Next.
4. Select columns from the Available Columns list to add to the Selected Columns list, and then select
Next.
5. If the data source contains prompts, select values for them.
6. Review the Preview data, and then select Done.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Related Information
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection. If you're
creating a new connection, check out Import Data Connection to SAP Integrated Business Planning [page
518].
Note
Sample Code
KeyPredicate:
Categories(1)/Products?$format=json&$select=Name,Category/
Name,Supplier/Name,Supplier/
Address&$expand=Category,Supplier&$filter=Supplier/ID eq 0
FunctionImport:
GetProductsByRating?
rating=3&$format=json&$select=Name,Rating,Category/Name,Supplier/
Name,Supplier/Address&$expand=Category,Supplier
For more information on the OData query syntax, refer to the OData documentation.
Note
SAP Analytics Cloud has the following validation rules for freehand queries:
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Context
For more information about extending SAP Business ByDesign using SAP Analytics Cloud, see Extending SAP ByDesign Analytics using SAP Analytics Cloud. Information about available data sources in SAP Business ByDesign can be found here.
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant customized data source types, if available, for your data service to leverage full capability of the data connector.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model. See
Import and Prepare Fact Data for a Classic Account Model [page 714], and Import and Prepare Fact Data for a
Model with Measures [page 728].
Context
Note
SAP Cloud for Customer was formerly named SAP Hybris Cloud for Customer.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
SAP Concur
Context
Note
When importing Concur data, SAP Analytics Cloud leverages the Concur Expense Report and Expense
Entries API. Please be aware that the Concur Expense Entries API has restrictions on the set of fields
which can be filtered on. Here's a list of available Expense Entries Filter Parameters:
• attendeeTypeCode
• expenseTypeCode
• hasAttendees
• hasVAT
• isBillable
• paymentTypeID
• reportID
Because of the above restrictions on filtering data, and data volume restrictions of the Concur API, the
amount of data that can be retrieved from Concur into SAP Analytics Cloud is limited. For more information
on data import thresholds, see System Requirements and Technical Prerequisites [page 2723] and Expense
Entries.
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP Concur [page 499].
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
• If you're using an existing query, select it from the list.
1. If you want to make changes to the query, select Modify existing query.
2. Make your changes, and then select OK.
• If you're creating a new query, enter a name for your query.
1. Select a table, and then select Next.
Build your query by moving data elements into the Selected Data and Filters areas. For more
information, see Building a Query [page 704].
4. Select OK.
The data appears in the data integration view, where you can complete the mapping of your new data to the
model's dimensions.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP Fieldglass [page 508].
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
1. If you're using an existing query, select it from the list, and then select OK.
2. If you're creating a new query, enter a name for your query, select a report, and then select Next.
In the list of reports, you can search by report ID or report description. When hovering over a report,
the tooltip shows the report ID.
Note
Selecting a report starts the data import for a report in Fieldglass. Currently, it is not possible to
filter or to select a subset of properties for Fieldglass data.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
SAP SuccessFactors
Context
For more information on the data you can access, see the SAP SuccessFactors HCM Suite OData API:
Reference Guide in the SuccessFactors Product Page.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP SuccessFactors [page 509].
3. Select your SuccessFactors query or select Create a new query to create a new query.
4. In the Create New Query dialog, enter a name and a description for your query.
5. In the Select Data area, select Build a Query from a Table or Start from a Template Query, select a table or a
template query to build your query, and select Next. For more information, see Building a Query [page 704].
• The Build a Query from a Table option lets you build a query from a table by moving data elements into
the Selected Data and Filters areas.
• The Start from a Template Query option automatically selects the table and data elements to be
included in the Selected Data and Filters areas based on the template query. For example, you can
select the Headcount template query to build your query; you can then add or remove elements to
finish building your own query.
• Select (Refresh list) to display the list of reports from SuccessFactors in real time. Note: The list of
templates cannot be refreshed.
6. Select OK.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP Open Connectors Cloud Storage Data Sources
[page 529].
3. Choose a file from the list, or search for a file, and select Next.
Note
Only files that have been loaded will appear in the search results. To see more results, open more
folders.
Note
If you're importing an Excel workbook containing multiple sheets, the first sheet is automatically
imported.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model. See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP Open Connectors Query-Based Data Sources
[page 532].
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
4. If you're using an existing query, select it from the list.
1. If you want to make changes to the query, select Modify existing query.
2. Make your changes, and then select OK.
5. To create a new query, follow these steps:
1. Select Next.
2. Type a name for your query and select Create.
3. Select a table, and then select Next.
Build your query by moving data elements into the Selected Data and Filters areas. For more
information, see Building a Query [page 704].
4. Select Create.
You can continue to work on other tasks while the data is being uploaded in the background.
6. When the draft data is finished uploading, open it from the Draft Data list (for a new model or dataset), or
the Draft Sources list (for an existing model or dataset).
Context
Note
• If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
When you connect to Google Drive, you can import any of the following file formats: Google Sheets,
comma-separated-values text files (csv), and Microsoft Excel files (xlsx).
• The login prompt for Google Drive is displayed in a popup dialog. You'll need to disable the popup
blocker in your browser before trying to connect.
• You can also import files from Google Drive in stories when adding data to a new story.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
2. Select a Google account, or create a new Google connection, or type or paste the URL for your file directly
into the dialog.
For your first access to Google Drive, you'll need to sign in to your Google Drive account.
3. Choose a file from the list, and select Next.
4. If you're importing from Google Sheets, and there are multiple sheets, select the sheet you want to import,
and select Next.
5. If you're importing a .csv file, select which delimiter is used in the file, or select Auto-detect.
6. Select OK to begin the import procedure.
7. Select the appropriate option.
• If creating a new model: In the Draft Data dialog, select the data that you just uploaded.
• If importing to an existing model: In the Draft Sources list, select the data that you just uploaded.
Results
The data is imported from Google Drive, and is displayed in the data integration view.
Note
You are still signed in to Google. When you are finished using SAP Analytics Cloud, it is recommended to
sign out of your Google account. To sign out, you can select Sign Out in the Select Google Drive File dialog,
or sign out of Google in your browser.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Context
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, this feature is not
available.
Note
The login prompt for Google BigQuery is displayed in a popup dialog. You'll need to disable the popup
blocker in your browser before trying to connect.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
SQL Data
Prerequisites
• The SAP Business Technology Platform (BTP) Cloud Connector and SAP Analytics Cloud agent are
installed and configured.
• You have installed a JDBC driver. For details, see Import Data Connection to an SQL Database [page 520].
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources
in the list. You can filter by data source type or by category.
Note
When creating a Freehand SQL query, if the query contains parameters that are shown with
question marks, the Freehand SQL query cannot be edited.
4. If two or more tables are joined, you can select the (Inner Join) icon to change the type of join
to one of the following options:
• Remove matched data (Exception) – This option removes rows from the left-hand side
table that have a match in the right-hand side table.
5. Select Preview to review the data; if two or more tables are joined, the Joined Table Quality chart
will show the number of accepted rows from the first table as well as matched values, duplicated
values, and omitted values from other joined tables.
The values are displayed on the next page.
6. Select Hide Preview.
7. Select View SQL to show the SQL query generated for joined tables.
You can then select Save as Freehand SQL Query to create a freehand SQL query based on the
content of the View SQL dialog.
Note
When creating a Freehand SQL query, if the query contains parameters that are shown with
question marks, the Freehand SQL query cannot be edited.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Related Information
OData Services
You can create a connection that allows you to import data from both on-premise and cloud data sources using
generic OData services. It is possible to request a customized OData data source solution. You can also save
your OData connection details, and schedule model updates and data imports from OData sources.
Context
• SAP Analytics Cloud supports OData Version 4.0. Logical operators (Equal, Not Equal, Greater than, Greater than or equal, Less than, Less than or equal, Logical and, Logical or) are supported. Logical negation (not), arithmetic operators, and functions are not supported.
The following table shows which operators need to be supported for each data type, for a generic
OData service to integrate with SAP Analytics Cloud:
Data Type | Required Operators | Literal Suffix
String (Edm.String) | "eq", "ne", "startswith", "toLower" | (none)
Number (Edm.Decimal) | "gt", "ge", "lt", "le", "eq", "ne" | "M", as in [value]m
Number (Edm.Double) | "gt", "ge", "lt", "le", "eq", "ne" | "d", as in [value]d
Number (Edm.Single) | "gt", "ge", "lt", "le", "eq", "ne" | "f", as in [value]f
Number (Edm.Int64) | "gt", "ge", "lt", "le", "eq", "ne" | "L", as in [value]L
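To illustrate the typed numeric literals, here is a minimal Python sketch that assembles $filter clauses using the suffixes listed above. The property names (Price, Weight, and so on) are hypothetical, chosen only for illustration; they don't belong to any specific service.

```python
# Hypothetical property names, used only to illustrate the literal
# suffixes from the table above.
def decimal_filter(prop, value):
    """Build a $filter clause for an Edm.Decimal property (suffix m)."""
    return f"{prop} gt {value}m"

clauses = [
    decimal_filter("Price", 20.5),   # Edm.Decimal -> Price gt 20.5m
    "Weight lt 3.14d",               # Edm.Double literal suffix
    "Rating ge 4.0f",                # Edm.Single literal suffix
    "UnitsSold eq 100L",             # Edm.Int64 literal suffix
    "startswith(Name, 'Pro')",       # Edm.String operator
]
query = "Products?$filter=" + " and ".join(clauses)
print(query)
```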
• To integrate with SAP Analytics Cloud, your OData service must support these query parameters and
paging capabilities:
Query parameters • $select: Filters properties (columns). Lets you request a limited set of
properties for each entity.
• $filter: Filters results (rows). Lets you filter a collection of resources that
are addressed by a request URL.
• $expand: Retrieves related resources. Specifies the related resources to be
included with retrieved resources.
• $skip: Specifies the number of items in the queried collection that are to be
skipped and not included in the result.
• $top: Sets the page size of the results. Specifies the number of items in the
queried collection to be included in the result.
• $orderby: Orders the results. Lets you request resources in either ascending
or descending order using asc and desc.
• If you want to use the query builder in the step below when you create a new query, the data service
must support the select system query option. Example: https://services.odata.org/OData/
OData.svc/Products?$select=Price,Name
Key-as-Segment isn't supported by the query builder, and should only be used with freehand queries.
Also, the $skip parameter must be supported by the data service.
• Embedded Complex types are not supported.
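To make the paging requirement concrete, here is a small sketch (not product code) of how a client builds paged request URLs with the $top and $skip parameters the data service must support. The service root is the public OData sample service already used in the example above.

```python
def page_urls(service_root, entity_set, page_size, pages):
    """Yield paged OData request URLs using the $top/$skip
    query parameters the data service must support."""
    for page in range(pages):
        yield (f"{service_root}/{entity_set}"
               f"?$top={page_size}&$skip={page * page_size}")

urls = list(page_urls("https://services.odata.org/OData/OData.svc",
                      "Products", page_size=20, pages=3))
for url in urls:
    print(url)
```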
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
Note
You can select the filter icon to narrow down the number of data sources in the list. You can
filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
If you create a new connection, you can select the following options:
• Connect to an On-Premise OData service
Note
To connect to an On-Premise OData service, ensure that the following tasks are completed:
1. The Cloud Connector is installed. For more information, see Installing the Cloud Connector
[page 463].
2. The Cloud Connector is configured. For more information, see Configuring the Cloud
Connector [page 465].
The SAP Analytics Cloud agent doesn't need to be installed during the configuration process.
• Connect to an SAP OData service
When you select this option, specific SAP metadata is respected. This metadata specifies default behaviors based on SAP OData services guidelines.
Note
Advanced features of customized OData data sources, such as SAP Cloud for Customer and SAP
Business ByDesign Analytics, are only available using customized data source types. These features
are not available using generic OData services. It is highly recommended to use the relevant
customized data source types, if available, for your data service to leverage full capability of the data
connector. It is possible to request a customized OData data source solution.
Note
When using freehand queries, the columns (including expanded entity columns) need to be
specified, otherwise they won't be picked up in SAP Analytics Cloud.
For example, the following queries let you get the product by rating, or find a nearby airport:
OData V2 syntax:
Sample Code
KeyPredicate:
Categories(1)/Products?$format=json&$select=Name,Category/
Name,Supplier/Name,Supplier/
Address&$expand=Category,Supplier&$filter=Supplier/ID eq 0
FunctionImport:
GetProductsByRating?rating=3&$format=json&$select=Name,Rating,Category/
Name,Supplier/Name,Supplier/Address&$expand=Category,Supplier
Sample Code
KeyPredicate:
Products?
$select=ID,Name,Description,ReleaseDate,DiscontinuedDate,Rating,Price&$e
xpand=ProductDetail($select=ProductID),ProductDetail($select=Details)
FunctionImport:
GetNearestAirport(lat=80, lon=90)
For more information on the OData query syntax, refer to the OData documentation.
Note
SAP Analytics Cloud has the following validation rules for freehand queries:
• Duplicated parameters ($select, $expand, $format, $top, $skip, $inlinecount, $filter) in the
query are not allowed.
• Only entity set and function import are supported.
• For function import, entity set is only supported as a return type.
• If $select contains the Nav property but without $expand property, the query is invalid.
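As a rough sketch of the first validation rule (duplicated parameters), and assuming nothing about the product's actual validator, such a check could look like this:

```python
from urllib.parse import urlparse, parse_qsl

# Parameters that may appear at most once in a freehand query,
# per the validation rules above.
RESTRICTED = {"$select", "$expand", "$format", "$top",
              "$skip", "$inlinecount", "$filter"}

def duplicated_params(freehand_query):
    """Return the restricted parameters that appear more than once
    in a freehand query, which would make the query invalid."""
    names = [name for name, _ in
             parse_qsl(urlparse(freehand_query).query)]
    return {n for n in names if n in RESTRICTED and names.count(n) > 1}

print(duplicated_params("Products?$select=Name&$select=Price"))
print(duplicated_params("Products?$select=Name&$top=10"))
```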
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model: See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Workforce Analytics
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to SAP SuccessFactors Workforce Analytics [page 510].
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
4. If you're using an existing query, select it from the list.
1. If you want to make changes to the query, select Modify existing query.
2. Make your changes, and then select OK.
5. If you're creating a new query, build your query by moving data elements into the Selected Data and Filters
areas.
Select a measure to display a list of dimensions related to that measure. Then, select at least one
dimension. For more information, see Building a Query [page 704].
Note
You can only use dimensions as filters. Filters only support EQUALS and AND operations.
1. Select OK.
2. Enter a name for your query.
6. Select OK.
The data appears in the data integration view, where you can complete the mapping of your new data to
the model's dimensions.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model. See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
Qualtrics
Context
SAP Analytics Cloud can access data that has been provided through responses to a Qualtrics survey. When
you're creating a model or dataset, it will be helpful if you have access to that survey in Qualtrics, so that you
understand the structure of the survey.
Here are some suggestions to help you get the best out of analytics on Qualtrics surveys.
Note that questions in the Trash section of your survey shouldn't be used when creating your model or dataset.
During the data preparation step while creating a model or dataset, the following basic initial steps will
help in getting better insights:
Qualtrics offers several options for question types that may be included in a survey. Here are suggestions for
handling some question types in SAP Analytics Cloud.
For a question of type NPS, for example Q1, Qualtrics generates a field Q1_NPS_GROUP with the value
Promoter or 3, Detractor or 1, or Passive or 2 for each response. To calculate an aggregated NPS in SAP
Analytics Cloud, calculate the counts of Promoters and Detractors, and calculate the NPS as: (PromoterCount/Count - DetractorCount/Count)*100.
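The NPS formula above can be sketched in Python. This is a minimal illustration of the arithmetic, not a replacement for the calculated measures you would define in SAP Analytics Cloud:

```python
def nps(groups):
    """Net Promoter Score from a list of Q1_NPS_GROUP values.
    Equivalent to (PromoterCount/Count - DetractorCount/Count)*100."""
    promoters = groups.count("Promoter")
    detractors = groups.count("Detractor")
    return 100 * (promoters - detractors) / len(groups)

responses = ["Promoter", "Promoter", "Passive", "Detractor", "Promoter"]
print(nps(responses))  # 3 promoters, 1 detractor out of 5 -> 40.0
```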
Matrix Table
For a question of type Matrix Table, columns are generated based on the number of statements. For example, if
Q2 has 3 statements, then the generated columns are Q2_1, Q2_2, and Q2_3, and the descriptions are available
as mentioned in basic step 2 above. You can create a model or dataset for each matrix question, by selecting
the columns ResponseID and Q2_1 to Q2_3, doing an unpivot on the Q2 columns, and then adding a count as
mentioned in basic step 1 above.
Link this model to the other model for this survey through ResponseID.
A similar approach can be followed for the Rank Order question type.
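The unpivot itself is performed in the data preparation step, but the reshaping can be sketched in plain Python to show the target long form. The column names follow the Q2 example above:

```python
def unpivot_matrix(responses, question_cols):
    """Turn wide matrix-question columns (e.g. Q2_1..Q2_3) into long
    form: one row per (ResponseID, statement), plus a count measure."""
    rows = []
    for resp in responses:
        for col in question_cols:
            rows.append({"ResponseID": resp["ResponseID"],
                         "Statement": col,
                         "Answer": resp[col],
                         "Count": 1})  # the count from basic step 1
    return rows

survey = [
    {"ResponseID": "R_1", "Q2_1": 5, "Q2_2": 3, "Q2_3": 4},
    {"ResponseID": "R_2", "Q2_1": 2, "Q2_2": 4, "Q2_3": 1},
]
long_rows = unpivot_matrix(survey, ["Q2_1", "Q2_2", "Q2_3"])
print(len(long_rows))  # 2 responses x 3 statements = 6 rows
```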
Text Entry
For these questions, if there is a TextIQ license, then performing text analysis generates some sentiment
related fields. It is useful to have calculated measures based on the Sentiment score field, which can have
Positive, Negative, or Neutral values.
Highlight
Avoid importing this question type during model creation because it generates a large number of columns,
which could exceed the maximum number of columns supported by SAP Analytics Cloud. For these kinds of
questions, use datasets instead.
The question numbers are available as dimension names. To have dimensions look more meaningful, add
a description for each dimension in the model or dataset, based on your Qualtrics survey. For dimension
members, the code values are provided. You'll need to provide descriptions based on the Qualtrics survey. You
can get all of this information using the “Export Survey to Word” option in Qualtrics.
Restrictions
Note
From the acquire data panel, select the filter icon to narrow down the number of data
sources in the list. You can filter by data source type or by category.
2. Select an existing connection, or select Create New Connection to create a new connection.
For more information, see Import Data Connection to Qualtrics [page 525].
3. Copy a query from an existing model (which you can edit before saving), or create a new query.
• If you're using an existing query, select it from the list.
1. If you want to make changes to the query, select Modify existing query.
2. Make your changes, and then select OK.
• To create a new query, follow these steps:
1. Select Create a new query, and then click Next.
2. Type a name for your query.
3. Select a table from the list, or search for one.
Next Steps
After the initial import of raw data, continue with the data preparation task before completing your model. See
Import and Prepare Fact Data for a Classic Account Model [page 714] and Import and Prepare Fact Data for a
Model with Measures [page 728].
This section acts as a reference for all procedures to export data from a model for all supported data sources.
Use the links below to jump straight to a specific data source.
To be able to export to SAP S/4HANA, SAP Integrated Business Planning, SAP BW and SAP BW/4HANA,
go to the Data Management tab in the Modeler, and make sure to enable the Enable Legacy Export option
first under the Data and Performance tab in the model preferences.
Personal Files
Context
Note
Both Planning and Analytics models can be exported to a flat CSV file. However, you cannot export an
Analytics model based on a HANA view.
Note
Only data imported to SAP Analytics Cloud can be exported to a CSV file. When using live data, such as
from a HANA source, the live data is not exported.
Note
If data export has been restricted for this model, you won't be able to proceed. For details, see Export
Models and Data [page 750].
5. Choose whether you want to export data to a file on your local system, or to a file server, and then select a
folder location to save the exported model to.
If you don't see the option to export data to a file server, see Allow Data Import and Model Export with a File
Server [page 712].
Note
• For Fact Data Only export, you can reorder the CSV output columns by clearing the Dimensions
check box, and then selecting columns in the order you want.
• For Fact Data Only export, calculated measures can’t be exported.
1. Optional: Choose to filter on a dimension. This allows you to reduce the number of returned
records.
By default, dimension members you select are the ones included in the results. Choose Exclude
selected members to filter out selected members. For example, assume there are four quarters Q1, Q2,
Q3, and Q4 in your Date dimension for a particular year, and you select Q1 and Q2 in the filter:
• By default, Q1 and Q2 are included, whereas Q3 and Q4 are excluded.
• Choosing Exclude selected members will exclude Q1 and Q2, and include Q3 and Q4.
Date dimensions allow you to Filter by Member or Filter by Range. You can add multiple ranges to your
filter to specify the exact time periods you want to include. Note that Month is the lowest level of a Date
dimension that can be exported. A timeline visualizes the complete range of your filters.
3. Optional: Select Use hierarchy levels from, and then choose a dimension whose hierarchy information
you want to see in the exported results.
A column is exported that includes the parent ID and shows the parent-child relationships between
members of that dimension.
11. In the Scheduling section, choose when you want to export your model.
• Choose Export Now if you want to export only once now.
• Choose Scheduling Recurrence to define a recurring export schedule.
When setting up a recurrence, create a name for your schedule and determine how frequently you want the
export to occur.
12. Choose Schedule to save your export schedule or Export if you chose to export now.
Results
Note
If you share a model with other users, those users can access your export jobs and schedules. If you share
the model with View access, other users can view, but not edit or run the export jobs and schedules. If you
share the model with Full Control access, other users can edit and run the export jobs and schedules.
Successfully exported models are found in the Files area under the folder you specified. From the ( ) Main
Next Steps
The export limit depends on the type of data being exported. If calculations aren't included in the export job,
the default export limit is 6 million rows.
If account calculations are included, the default limit is 1 million rows, but think of this as 1 million rows of
calculation input.
Tip
For example, if your export consists entirely of calculations, with each output cell being the sum of four
other cells, the maximum number of rows that could be exported would be 1 million divided by 4, which is
250,000 rows.
Or, if your query is divided into months (or years), and you selected five months (or years), the maximum
number of rows that could be exported would be 200,000 rows.
The actual result can vary depending on the model and filter complexity.
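The arithmetic in the tip can be sketched as follows; treat the result as an upper bound only, since, as noted above, the actual result varies with model and filter complexity:

```python
DEFAULT_LIMIT = 6_000_000      # rows when no calculations are exported
CALC_INPUT_LIMIT = 1_000_000   # rows of calculation input

def max_export_rows(inputs_per_output_cell=0):
    """Estimate the export row limit per the rules above: with no
    calculations the default limit applies; otherwise the 1 million
    calculation-input limit is divided among each output cell's inputs."""
    if inputs_per_output_cell == 0:
        return DEFAULT_LIMIT
    return CALC_INPUT_LIMIT // inputs_per_output_cell

print(max_export_rows())    # no calculations -> 6000000
print(max_export_rows(4))   # each cell sums four cells -> 250000
print(max_export_rows(5))   # five months/years selected -> 200000
```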
If you think your export will be bigger than this, try filtering out some data before you begin. The export can
take a long time, but you can use other SAP Analytics Cloud features while you're waiting.
Related Information
SAP BPC
• SAP Business Planning and Consolidation 10.0, version for SAP NetWeaver (BPC for NW)
This means, for example, that you could import a model from BPC, perform planning activities and updates on
the model, and then export the planning data back to the original model in BPC.
When doing this, we recommend keeping the Denote fiscal year setting unchanged as By year in which it ends, because this is the only setting supported for BPC's fiscal year during data import and export.
If currency conversion was set to ON when importing BPC data, only local data will be written back to BPC. If currency conversion was set to OFF when importing BPC data, all values in the RPTCURRENCY dimension will be written back to BPC.
To export to BPC, the system must be set up correctly to connect to BPC. For more information, see Data
Connections [page 263].
If you've enabled write-back to BPC, users can write data back to BPC from stories:
2. Click the (Save) icon for the version that you want to write back to BPC.
In an SAP Analytics Cloud write-back model, if the connection is an OAuth connection that you haven't authenticated, or your OAuth authentication has expired, a dialog will pop up for you to enter your BPC credentials and complete the OAuth authentication process. If SSO is enabled, you can sign in directly without entering your BPC credentials again.
After data is written to BPC, a refresh is automatically performed, so that the model in SAP Analytics Cloud
shows all the latest data in BPC.
Avoid activating BPC data on the BW DataStore Objects (advanced) or compressing BPC data on the BW InfoCubes/InfoProviders on the BPC side. Otherwise, you might not be able to retrieve delta data from the corresponding BPC model.
Users can also retrieve up-to-date data from BPC without first writing data back. For detailed information, refer
to the section "Importing into an Existing Model" after clicking BPC Data in Import Data to Your Model [page
702].
You first need to select the connection to the BPC system. This can be an existing connection or one that you
create on the fly during the export. For more information on creating connections, see Data Connections [page
263].
You must specify the environment and then select the model to be exported. A query is automatically
generated with a predefined name. You can rename it freely.
Mapping Data
You map the source model data to the target model data in the mapping dialog.
Note
If the BPC column names aren't the same as the column names in your SAP Analytics Cloud model, you
can create an extra property column, with names that match the names in BPC. Then when you export the
data, you can map the new property column to the BPC columns.
• Target Dimension: In this area, you can select the target BPC dimensions to map to the source dimensions.
Matches are automatically suggested by the system, but you can change them.
• Source Dimension: Lists the dimensions in the source model. If you choose the option Not Mapped, you
need to specify a default member for the target dimension to write data to.
• Filter: You can set a filter for each column to control how much data is exported.
Select the Export blank values check box if you want to include in the export any members where the name
given by the property column is blank. If you select this check box, also choose what you want to replace the
names of those members with from the Replace with field.
After you complete the data export, if you want to schedule the execution of export jobs, follow these steps:
1. Before you start to schedule the export jobs, we recommend that you first check the BW lock server settings configured in BPC, because export jobs can sometimes result in a large amount of data being written back to BPC.
• None: Select this option when you want to export the data manually.
• Repeating: The export is executed according to a recurrence pattern. You can select a start and end date and time as well as a recurrence pattern.
Related Information
Data Connections [page 263]
SAP S/4HANA
Prerequisites
You are using a supported version of SAP S/4HANA. For more information, see System Requirements and
Technical Prerequisites [page 2723].
If you want to export your quantity-related data to S/4HANA's quantity measure, make sure your target SAP
S/4HANA is version 1911 or higher.
Note
To be able to export to SAP S/4HANA, go to the Data Management tab in the Modeler, and make sure to
enable the Enable Legacy Export option first under the Data and Performance tab in the model preferences.
Context
Make sure the date dimension of the SAP Analytics Cloud model to be exported has either Month or Year
granularity.
Make sure the SAP Analytics Cloud model has fiscal year settings turned on. Its date dimension will be mapped
to S/4HANA's fiscal year or fiscal year period field by default.
• SAP S/4HANA: Exporting Data from SAP Analytics Cloud to SAP S/4HANA
• SAP S/4HANA Cloud: Exporting Data from SAP Analytics Cloud to SAP S/4HANA Cloud
Note
To be able to export to SAP Integrated Business Planning, go to the Data Management tab in the Modeler,
and make sure to enable the Enable Legacy Export option first under the Data and Performance tab in the
model preferences.
Context
4. In the Export Jobs area, choose the (Export) button and select Export Data to SAP Integrated Business
Planning.
Note
You can also choose to export data to IBP via a generic OData service connection. In this case, select
Export Data To OData Services from the list.
5. Choose a connection.
If no connections have been created, you can choose Create New Connection from the drop-down list. For
detailed information, see Import Data Connection to SAP Integrated Business Planning [page 518].
Click Next.
6. On the left side of the Target Data Selection dialog, choose an IBP Planning Area.
7. Select the corresponding target fields in the IBP system that you want to export data to.
8. Click Next.
On the Mapping page, map SAP Analytics Cloud dimensions and measures to the corresponding IBP fields you selected in the previous step. To map measures, select a measure card, and in the Source Measure
Note
Currently, we don't support exporting data to the target time dimension with Week granularity and a
Timestamp type when the selected source dimension is Date.
Note
For any target IBP field, if you have no source dimension to map data to and choose the option Not
Mapped, you need to specify a default value for the target field.
Note
If the IBP attribute names aren't the same as the column names in your SAP Analytics Cloud model,
you can create an extra property column, with names that match the names in IBP. Then when you
export the data, you can map the new property column to the IBP attributes.
Mapping Tips:
Currently, you can only export one measure at a time. If you want to export multiple measures to IBP, you need to perform a separate data export for each measure.
The date dimension in SAP Analytics Cloud should have the same granularity as the date-relevant field in IBP.
Regardless of whether SAP Analytics Cloud has fiscal year settings turned on, only the calendar year will be exported to IBP.
9. In the Source Filters area, filter members of the SAP Analytics Cloud dimensions that you want to export. If
you choose a single base member, only that value will be exported; if you specify more than one member or
a parent member, the value will be aggregated during the export; if left empty, all the data in the dimension
will be aggregated.
Note
IBP currently doesn't support version mapping during data acquisition. If you maintain multiple versions in SAP Analytics Cloud, make sure you set a filter for a specific version. Otherwise, transaction data from different versions will be aggregated.
If you export data with multiple UoMs or currencies to IBP, make sure you filter for a specific UoM or currency.
10. Select the Export blank values check box if you want to include in the export any members where the name
given by the property column is blank.
If you select this check box, also choose what you want to replace the names of those members with from
the Replace with field.
11. Click Finish to complete the export.
You can continue to work on other tasks while the export runs in the background.
12. If you want to schedule the execution of export jobs, follow these steps:
1. Select an export job.
• None: Select this option when you want to export the data manually.
• Repeating: The export is executed according to a recurrence pattern. You can select a start and end date and time as well as a recurrence pattern.
Prerequisites
Exporting data to OData Services is currently supported with BW/4HANA 2.0 and BW 750. For more information, see SAP Note 2838883. Note that only write-interface-enabled BW Standard ADSOs, optionally with a write change log, are supported.
Note
We do not support exporting data to sources other than BW/4HANA 2.0 and BW 750.
Generally, you use one single connection for importing and exporting data. In this case, however, you need to create two connections: one for importing data to SAP Analytics Cloud and one for exporting data from SAP Analytics Cloud, which is described here. To import data to SAP Analytics Cloud, see Import Data Connection to an SAP BW or SAP BW/4HANA System [page 486].
Note
To be able to export to SAP BW and SAP BW/4HANA, go to the Data Management tab in the Modeler, and
make sure to enable the Enable Legacy Export option first under the Data and Performance tab in the model
preferences.
Context
The roles you are assigned to must include the following permissions:
Note
If you have no source dimension to map data to and have chosen Not Mapped, specify a default value for the target field. Every measure must be mapped to a source measure. If the source measure unit type doesn't match the target measure unit type, the application shows a warning message.
Tip
If the target attribute names don't correspond to the column names in your SAP Analytics Cloud
model, you can create an extra property column with names that match the ones in the target attribute.
When exporting the data, you can map the new property column to the target attributes.
Tip
Currently, you can only export one measure at a time. If you want to export multiple measures to the target, you need to perform a separate data export for each measure. For more tips on analyzing BW data integration issues, see SAP Note 3026199.
Note
The date dimension in SAP Analytics Cloud should have the same granularity as the date relevant field
in the target. Only the calendar year will be exported to the target, regardless of the fiscal year settings.
9. In the Source Filters area, filter the members of the dimensions that you want to export. If you choose a single base member, only that value will be exported. If you specify more than one member or a parent member, the value will be aggregated during the export; if left empty, all the data in the dimension will be aggregated.
Note
If you select this check box, choose a new name for these members using the Replace with field.
Note
Note that if no target dimension is selected, you will enable a replacement, but no cleanup of exported
data.
Tip
You can continue to work on other tasks while the export runs in the background.
13. If you want to schedule the execution of export jobs, follow these steps:
1. Select an export job.
• None: Select this option if you want to export the data manually.
• Repeating: The export is executed according to a recurrence pattern. You can select a start and end date and time as well as a recurrence pattern.
In the Modeler and in stories, create formulas to perform calculations on constant values and members of the
account dimensions.
See this topic for details on formulas: All Formulas and Calculations [page 873].
Related Information
In SAP Analytics Cloud, use predefined formulas, functions, conditions, and operators to build up a formula in
the Modeler or in stories.
Note
Some functions are specific to the Modeler and some are specific to formulas that can be created in the
calculation editor in stories.
Formulas
You can press Ctrl + Space in the formula bar to display a list of suggestions, or type [ for a list of valid
measures and dimensions.
Note
Function names, keywords such as AND, and time navigation granularities (for example, "year" and "Month") are case-insensitive.
Note
While preparing data in the Data Integration view, you can create a calculated column based on input from
another column and the application of a formula expression. For a list of functions available in this scenario
see Supported Functions for Calculated Columns [page 745].
Mathematical Operators
These operators can be used either with constants or when referring to members. For example, [A1000]/3 is
the value of account member A1000 divided by 3.
• Addition: N1+N2 — Adds two numbers.
Note
If a summing formula in a table includes a row with no data, the sums won't appear, because when a null value is added to other values, the result is null. For example, if one cell is empty: A1+A2+A3 = null, but SUM(A1:A3) = 150.
• Subtraction: N1-N2 — Subtracts the second number from the first one. 1337 - 42 returns 1295
• Division: N1/N2 — Divides the first number by the second one. 12 / 3 returns 4
• Unary Minus: -N1 — Changes the sign of the following number. -3 returns -3; 5 + -3 returns 2
Conditional Operators
• AND: and — Takes two Booleans. Returns true if both values are true; otherwise false. false and true returns false; 1 = 1 and 2 = 2 returns true
• Greater Than: > — Compares two numbers or other values. Returns true if the first value is greater than the second one. 1 > 1 returns false
• Less Than: < — Compares two numbers or other values. Returns true if the first value is less than the second one. 1 < 2 returns true; 1 < 0 returns false
• Greater Than or Equal: >= — Compares two numbers or other values. Returns true if the first value is greater than or equal to the second one. 1 >= 1 returns true; 2 >= 1 returns true; 1 >= 2 returns false
• Less Than or Equal: <= — Compares two numbers or other values. Returns true if the first value is less than or equal to the second one. 1 <= 1 returns true; 1 <= 2 returns true; 1 <= 0 returns false
Note
Comparisons using operators like Greater Than or Less Than with null as one operand will always evaluate
to false.
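The null semantics described above can be illustrated with a short sketch. This is plain Python standing in for the behavior of the operators, not SAC code:

```python
# Illustrative sketch (not SAC code) of the null semantics described above:
# "+" propagates null, SUM ignores null cells, and a comparison with null
# as one operand always evaluates to false.

def add(a, b):
    """Cell addition: any null operand makes the result null."""
    return None if a is None or b is None else a + b

def cell_sum(values):
    """SUM-style aggregation: null cells are skipped."""
    return sum(v for v in values if v is not None)

def greater_than(a, b):
    """Comparison: null on either side yields false."""
    if a is None or b is None:
        return False
    return a > b

cells = [100, None, 50]                         # one row has no data
print(add(add(cells[0], cells[1]), cells[2]))   # None (A1+A2+A3 = null)
print(cell_sum(cells))                          # 150  (SUM(A1:A3) = 150)
print(greater_than(None, 1))                    # False
```

This is why a summing formula over a row with no data shows no total, while SUM over the same range does.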
Business Functions
Restriction
For formulas that use week-level granularity time dimensions, the year must be an integer value.

• Month To Date (MTD)
Account models: MTD([<account member>],[d/<date dimension>])
New model type: MTD([<measure>],[d/<date dimension>])
Month To Date sums the aggregated values over a given month period. This function applies to both classic account models and models with measures. Make sure to use the relevant syntax depending on the model type, and refer to an account member for a classic account model, or a measure for the new model type.
Example: MTD([Quantity], [d/Date]) returns the sum of the values from the beginning of the month to the current date.

• Quarter To Date (QTD)
Account models: QTD([<account member>],[d/<date dimension>])
New model type: QTD([<measure>],[d/<date dimension>])
Quarter To Date sums the aggregated values over a given quarter period. This function applies to both classic account models and models with measures. Make sure to use the relevant syntax depending on the model type, and refer to an account member for a classic account model, or a measure for the new model type.
Example: QTD([Quantity], [d/Date]) returns the sum of the values from the beginning of the quarter to the current date.

• Year Over Year (YoY)
Account models: YoY([<account member>],[d/<date dimension>])
New model type: YoY([<measure>],[d/<date dimension>])
The rate showing the difference between the value of a member in the current year compared to the previous year. This function applies to both classic account models and the new model type. Make sure to use the relevant syntax depending on the model type, and refer to an account member for a classic account model, or a measure for the new model type.
Example: Revenue_YoY = YoY([Revenue]). For a detailed example, see YoY Formula Details [page 914].

• Year To Date (YTD)
Account models: YTD([<account member>],[d/<date dimension>])
New model type: YTD([<measure>],[d/<date dimension>])
Year To Date sums the aggregated values over a given year period. This function applies to both classic account models and models with measures. Make sure to use the relevant syntax depending on the model type, and refer to an account member for a classic account model, or a measure for the new model type.
Example: YTD([Quantity], [d/Date]) returns the sum of the values from the beginning of the year to the current date.
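The "to date" semantics above can be roughly illustrated with a running sum. This is a sketch only, not the engine's implementation, and the monthly values are made up for illustration:

```python
# Hypothetical sketch of YTD-style accumulation over monthly values.
from itertools import accumulate

monthly_quantity = [10, 20, 30, 5]  # Jan..Apr values of an example measure

# YTD for each month is the running sum from the beginning of the year.
ytd = list(accumulate(monthly_quantity))
print(ytd)  # [10, 30, 60, 65]

# MTD and QTD behave analogously, resetting at month and quarter
# boundaries respectively instead of at the start of the year.
```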
• AccountLookup
ACCOUNTLOOKUP([<measure name>],[<account member>])
Returns a cell value based on a measure name filtered by an account dimension. For detailed syntax information, with examples, see Restrict and Lookup [page 906].
Note: This function is only available with the new model type.

• Link
Link([<linked model name>], [<linked member>], <POV>)
Model linking (blending) can be used to display data from more than one model in a single story. Model linking means adding facts (actual values) from Model A into Model B, by linking to the facts in Model A. For detailed syntax information, with examples, see Link Formula Details [page 903].

• Lookup
LOOKUP([<account member>], [<POV>], [<Ignore Dimension>])
Returns a value by referring to an account, filtering on dimension and member pairs. Optionally, specific dimensions that should be ignored can be specified. For detailed syntax information, with examples, see Restrict and Lookup [page 906]. Also see ResultLookup vs. Lookup [page 913].

• MeasureLookup
MEASURELOOKUP([<account member>],[<measure name>])
Returns a cell value based on an account dimension filtered by a measure name. For detailed syntax information, with examples, see Restrict and Lookup [page 906].
Note: This function is only available with the new model type.

• Restrict
RESTRICT([<account member>], [<POV>])
Returns a measure value restricted to an account and a list of dimension members. For detailed syntax information, with examples, see Restrict and Lookup [page 906].
Restriction
• ResultLookup is not supported for non-standard time periods such as week-level granularity and user-managed time dimensions.
• ResultLookup functions only work when the member has a single parent. If the member is present under multiple parents, the formula is invalid.
IF(condition, then, else)
The IF function returns the first value <then> if the specified condition is TRUE, and the second value <else> if the condition is FALSE. The IF function can also be used with Boolean values. For example, if you wanted to treat cells with NULL values as cells with zero (0) values, you could have the following formula: IF(EQUAL(B10,NULL), 0, B10)
Example: IF([SALES]>100, [SALES], [SALES]+10) returns the following:
• If [SALES] is greater than 100, returns [SALES]
• If [SALES] is less than or equal to 100, returns [SALES]+10

IF(condition, then)
The IF function returns a value if the specified condition is TRUE. Data points that don't match the condition will be assigned a NULL value.
Example: IF([SALES]>100, [SALES]) returns the following:
• If [SALES] is greater than 100, returns [SALES]
Note
This syntax is not supported for live data models.
Restriction
• The IF function is not supported for non-standard time periods such as week-level granularity and
user-managed time dimensions.
• IF functions only work when the member has a single parent. If the member is present under multiple
parents, the formula is invalid.
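The behavior of the two IF syntaxes can be sketched in plain Python. This is an illustrative stand-in for the formula semantics, not SAC code:

```python
# Illustrative sketch (not SAC code) of the two IF syntaxes described above.

def sac_if(condition, then, else_=None):
    """Three-argument IF returns <then> or <else>; the two-argument form
    (else_ omitted) assigns NULL (None) to data points that don't match."""
    return then if condition else else_

sales = 80
print(sac_if(sales > 100, sales, sales + 10))  # 90   (else branch taken)
print(sac_if(sales > 100, sales))              # None (two-argument form)

# Treating NULL cells as zero, as in IF(EQUAL(B10,NULL), 0, B10):
b10 = None
print(sac_if(b10 is None, 0, b10))             # 0
```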
Inverse Functions
| INVERSE ([<target member 1>]:= Formula 1) OR INVERSE ([<target member 2>]:= Formula 2)
Add one or more inverse formulas to the end of a formula to specify how the formula should be reversed. This function allows data entry on the formula in a story. The number of inverse formulas is limited to the number of operands in the base formula for which the inverse formulas are applied. For detailed syntax information with examples, see Inverse Formulas [page 917].

| INVERSEIF (<condition>, [<target member>]:= Formula)
This type of inverse formula is only applied when the specified condition is TRUE. Otherwise, it functions similarly to the INVERSE function. For detailed syntax information with examples, see Conditional Inverse Formulas [page 921].
Mathematical Functions
Restriction
These functions can be used to create formulas in the Calculation Editor in stories; they can't be used in
formulas created directly in a table widget.
• %GrandTotal(Account): Returns the percentage of the grand total that each value represents. %GrandTotal(Sales) returns the percentage of the grand total that each value represents.
• ABS(number): Returns the absolute value of a number (the number without its sign). ABS(-11) returns 11
• EXP(number): Natural exponential function. Returns the value of e (2.718) raised to a power. EXP(3) returns 20.0855
• GrandTotal(Account): Returns the grand total of all the Account values in the result set. Filters are included in the calculation of the grand total. GrandTotal(Sales) returns the aggregated value of Sales.
• SUBTOTAL(Measure, Dimension1, ..., Dimension10): Returns the subtotal of a member broken down by one or multiple dimensions. This function applies to both classic account models and models with measures. Make sure to use the relevant syntax depending on the model type. SUBTOTAL([Sales], [d/Location]) returns the aggregated value of Sales broken down by Location.
• %SUBTOTAL(Measure, Dimension1, ..., Dimension10)
Note (applies to both SUBTOTAL and %SUBTOTAL):
• The function does not aggregate values over non-aggregable dimensions (e.g. Account dimension, Category/Version dimension) and generates a separate subtotal for each dimension member of these dimensions.
• The function supports up to three hierarchical dimensions.
• The function supports up to 10 dimensions.
• You cannot nest multiple subtotal formulas.
• MAX(number1, number2, ...): Returns the largest of two or more numbers. MAX(10,20,15) returns the value 20
• MIN(number1, number2, ...): Returns the smallest of two or more numbers. MIN(10,20,15) returns the value 10
• MOD(number1, number2): Returns the remainder of dividing a specified number <number1> by another specified number <number2>. MOD(15,2) returns the value 1
• POWER(number, power): Returns the result of a number raised to a power. POWER(2,3) returns the value 8
String Functions
• FINDINDEX: FINDINDEX('This is only a string.', 'is', 6) returns -1
• ENDSWITH(string, string): Returns true (1) if the string ends with the specified substring. It is case-sensitive, and doesn't ignore trailing white spaces. ENDSWITH('This ends with this.', 'this.') returns 1
Conversion Functions
• CEIL(number1, number2): Returns the smallest number that is greater than or equal to a specified number <number1>. The optional <number2> argument specifies the number of decimal places. CEIL(14.8) returns the value 15; CEIL(14.82,0) returns the value 15; CEIL(14.82,1) returns the value 14.9
• DECFLOAT(string): Converts string to a decimal floating point number. DECFLOAT("14.6") returns 14.6
• DOUBLE(arg): Converts arg to a high precision floating point number. DOUBLE(14) returns 14.0
• FLOOR(number1, number2): Returns the largest number that is not greater than <number1>. The optional <number2> argument specifies the number of decimal places. FLOOR(14.8) returns the value 14; FLOOR(14.82,0) returns the value 14; FLOOR(14.82,1) returns the value 14.8
• INT(number): Returns the integer portion of a number. INT(9.5) returns the value 9
• ROUND(number1, number2): Rounds argument <number1> to the specified number <number2> of decimal places. ROUND(14.82,1) returns the value 14.8; ROUND(14.82,0) returns the value 15
• TRUNC(number1, number2): Returns a specified numeric value <number1>, truncated to a specified number <number2> of decimal places. TRUNC(12.281,1) returns the value 12.2
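The decimal-place behavior of these conversion functions can be reproduced with small Python equivalents. These helpers are assumptions for illustration (SAC's internal rounding may differ in edge cases); they match the documented examples:

```python
# Illustrative Python equivalents of CEIL, FLOOR, ROUND, and TRUNC with an
# optional decimal-places argument, as described above. Not SAC internals.
import math

def ceil_n(x, n=0):
    f = 10 ** n
    return math.ceil(x * f) / f

def floor_n(x, n=0):
    f = 10 ** n
    return math.floor(x * f) / f

def round_n(x, n=0):
    # Round half away from zero for positive x (avoids Python's
    # built-in banker's rounding).
    f = 10 ** n
    return math.floor(x * f + 0.5) / f

def trunc_n(x, n=0):
    f = 10 ** n
    return math.trunc(x * f) / f

print(ceil_n(14.82, 1))    # 14.9
print(floor_n(14.82, 1))   # 14.8
print(round_n(14.82, 0))   # 15.0
print(trunc_n(12.281, 1))  # 12.2
```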
Note
The date dimensions used as arguments
in the function are automatically added as
exception aggregation dimensions.
Related Information
Formulas perform calculations on either constant values or members of the account dimension. References to
members are enclosed in square brackets (see following illustration). Some formulas are designed for use in
models and others are designed for use in tables in stories.
In the Modeler, when you create a new account dimension, the Formula column is automatically added. You
can enter a formula directly in the formula column, or select a formula cell and click to open the Advanced
Formula Editor. In the Modeler, formulas apply to individual account members (rows).
In stories, you can create calculations in tables using the Calculation Editor, or you can type formulas in grid
cells outside of a table using the formula bar. For more information, see Create Custom Calculations for Your
Tables [page 1439] and The Formula Bar [page 1398].
Related Information
You can type formulas manually, or use the formula editor if you're not sure how to construct the formula you
want.
Context
These are the formula syntaxes you can use when entering formulas in the Modeler, or when using the
calculation editor in stories, or when creating formulas in data actions.
Note
In stories, the identifiers are qualified with the model ID. For example:
A dimension: [d/"BusinessPlanning_BASE":Product]
Procedure
These are the steps you'd follow to enter a formula in the Modeler:
1. In the Modeler, open a model and select the account dimension.
2. Scroll to the right until you see the Formula column.
3. Select the cell or row in the Formula column where you want to add a formula.
4. In the Member Details panel, if you're already familiar with formula syntax, you can begin typing your
formula in the Formula field.
As you begin to type, a hint list shows all available options (including formula and account members) that
match the text you've typed. The list shows values for both account member IDs and account member
descriptions.
Instead of typing formulas manually, you can press Ctrl + Space to choose from a list of values that are
valid for that location in the formula, or type [ for a list of valid measures and dimensions.
Some formulas available in the Modeler, for example YoY (year over year), take parameters. Press Ctrl +
Space to choose from a list of values that are valid for that location in the formula.
5. If you're not very familiar with formula syntax, you can select Open the advanced formula tool to help you to
enter, format, and validate your formula.
The formula editor lists all the available functions (functions, conditions, operators) that can be used to
build up a formula. You can select the functions you need from the lists and also type additional values in
the editor.
Note
The formula editor is available only if a formula can be defined for that account. For example, for
analytic-type HANA models, you can't define formulas for existing measures.
Related Information
You can create a revenue formula (price * volume) when using a planning model, or other formulas based on
lookup functions.
• The total product sales quantity (by number) is planned by region and product (and also Time and
Version). A sales revenue forecast would be calculated based on this formula:
Sales Revenue = (Sales Quantity of Product in Region) * (Price of the Product in Currency)
When prices are identical for all regions, you need to enter the price only once for all regions.
To accomplish this, you'll need to create an interim account to store the prices of the different products.
In our scenario, the interim account is named "storedPrice/Stored Price". In the Region dimension, use the
unassigned region member ("#") to store the price. The interim account is hidden so that users of the model
won't see it.
The account that is used to show the price to users is named "price/Price". It retrieves its values from
the "storedPrice" account using a lookup formula. This formula looks in the "storedPrice" account for the
unassigned location ("#") and uses it for all regions.
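The effect of that lookup can be sketched as follows. The data layout is a hypothetical stand-in for the model's storage, but the account names (storedPrice, the "#" member) come from the scenario above:

```python
# Hypothetical sketch: a Price account that looks up the stored price from
# the unassigned region member "#" of an interim storedPrice account.

stored_price = {
    # (account, region, product) -> value; prices live only on region "#"
    ("storedPrice", "#", "White"): 1,
    ("storedPrice", "#", "Whole Grain"): 2,
}

def price(region, product):
    """LOOKUP-style formula: ignore the requested region, read region '#'."""
    return stored_price[("storedPrice", "#", product)]

# Every region sees the same price, even though it was entered only once.
print(price("Switzerland", "Whole Grain"))  # 2
print(price("Germany", "Whole Grain"))      # 2
```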
No Aggregation of prices
(Example table: the stored price per product is identical in each region, for example White = 1 and Whole Grain = 2 in both regions.)
In this scenario, choose the aggregation type NONE for the accounts that include Price data:
However, for the parent item Bread, Revenue should not be calculated as Price * Volume, because there is
no aggregation type for Price that would provide a meaningful value in a Price * Volume calculation. Instead,
Revenue should be calculated as the sum of the child items:
(Example table with columns Region, Product, Price, Volume, and Revenue: for Switzerland, Bread has Volume 49 and Revenue 81, made up of White with Price 1, Volume 17, Revenue 17 and Whole Grain with Price 2, Volume 32, Revenue 64; further rows show Whole Grain with Price 2, Volume 45, Revenue 90 and Rye with Price 3, Volume 8, Revenue 24.)
The calculation of Revenue by multiplying the Price with Volume needs to be done before summing the
volumes for the different bread types. When summing the different contributions of the regions for one
product, the Volumes for these regions can be aggregated before multiplying with the price for the product:
(Example rows: White with Price 1, Volume 17, Revenue 17; Rye with Price 3, Volume 8, Revenue 24.)
You can define two aggregation types for an account: Aggregation Type and Exception Aggregation Type. The
exception aggregation type is optional, and requires one or more exception aggregation dimensions that are
associated with the exception aggregation type. Using both aggregation types allows for flexible definition of
aggregations and calculations.
SAP Analytics Cloud first attempts to aggregate the data as much as possible, and then processes any
calculations. If no exception aggregation is defined, this process applies to all dimensions of the model. If
exception aggregation is defined, the dimensions that are defined as exception aggregation dimensions are
excluded from this processing. The exception aggregation is then processed after the calculations.
Note that defining an exception aggregation and choosing exception aggregation dimensions is also useful if
the account is not calculated by a formula. However, in this case, the exception aggregation type should be
different from the standard aggregation type.
In our example, we define the exception aggregation type to be SUM, and the formula to be [price] * [volume].
This means that the other dimensions that may be aggregated before calculating the formula are Region and
Time.
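This calculation order can be sketched in plain Python. The data rows are hypothetical, but they mirror the Price * Volume scenario with SUM exception aggregation:

```python
# Illustrative sketch: with SUM exception aggregation on Revenue = Price * Volume,
# Price * Volume is computed per member first, and the results are then summed.
# Summing prices before multiplying would give a meaningless value.

rows = [
    # (product, price, volume) for one region
    ("White", 1, 17),
    ("Whole Grain", 2, 32),
]

# Correct: calculate per member, then aggregate (exception aggregation SUM).
revenue = sum(price * volume for _, price, volume in rows)
print(revenue)  # 81

# Wrong: aggregate prices and volumes first, then multiply.
naive = sum(p for _, p, _v in rows) * sum(v for _, _p, v in rows)
print(naive)    # 147, not a meaningful revenue
```

This is why Price needs aggregation type NONE and Revenue needs SUM as its exception aggregation type: the multiplication must happen before the summing.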
You can create conditional calculations in Modeler based on dimensions and their properties.
IF ([d/sap.fpa_data:REGION]="DE",[SALES] *2 , [SALES] )
IF ([d/sap.fpa_data:PRODUCT]=("COKE", "SPRITE"),[REVENUE] *1.5, [REVENUE] )
• properties of a dimension
Examples:
Note
The operators = and != allow multiple values. If using multiple values, join them with commas and enclose
them in parentheses: ("CA", "US"). Operators >, <, >=, and <= allow only one value. Use AND and OR to
join conditions.
IF([d/REGION]="US",[SALES]*2,[SALES]*3)
To create this calculation, use the IF function and add the Region dimension to the formula. As you type in the formula bar, dimension names that match the text you have typed are shown. A lower-case "d" indicates a dimension.
Note
• Variables can be included in the <then> and <else> parts of the IF formula.
Example:
IF ([d/sap.fpa_data:Time].[h/YM] = "2015",[RevenueLastYear]
*['scaling_factor'], [RevenueLastYear])
• Nested IF statements can be used in the <then> and <else> parts of the IF formula.
Examples:
Related Information
SAP Analytics Cloud first attempts to aggregate the data as much as possible, and then processes any
calculations. If no exception aggregation is defined, this process applies to all dimensions of the model. If
exception aggregation is defined, the dimensions that are defined as exception aggregation dimensions are
excluded from this processing. The exception aggregation is then processed after the calculations.
For example, if you have the formula Total Income = Price * Volume, you might want to ensure that
the Total Income amounts are summed after the Price * Volume calculations are done. In this case, use the
SUM exception aggregation type for Total Income, and specify all of the available dimensions as exception
aggregation dimensions:
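The difference between aggregating before and after the calculation can be sketched in Python (not SAC formula syntax; the price and volume values are hypothetical):

```python
# Illustrative sketch: why Total Income = Price * Volume needs SUM exception
# aggregation. Without it, values are aggregated first and then multiplied,
# which gives a wrong total; with it, the formula runs per record and the
# results are summed afterwards.
records = [  # (region, price, volume) — hypothetical sample data
    ("DE", 10, 5),
    ("US", 20, 3),
]

# Wrong: aggregate first, then apply the formula once.
aggregated_first = sum(p for _, p, _ in records) * sum(v for _, _, v in records)
# (10 + 20) * (5 + 3) = 240

# Right: calculate Price * Volume per record, then SUM over the
# exception aggregation dimensions (here, Region).
calculated_first = sum(p * v for _, p, v in records)
# 10*5 + 20*3 = 110
```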
In the new model type, you can optimize formulas containing an exception aggregation by specifying
dimension properties. When you create a formula using a dimension property, both the Exception Aggregation
Type and Exception Aggregation Dimensions fields are automatically pre-filled. By default, the application sets
the Exception Aggregation Type to SUM, and populates the Exception Aggregation Dimensions field with the
dimension properties referenced in the formula.
These changes can also be seen within the Advanced Formula Editor in the Model Structure screen, with
Exception Aggregation Type set to SUM, and the Exception Aggregation Dimensions field populated with the
dimension properties. This allows for better performance and simplifies the overall exception aggregation
workflow.
While this enhancement provides more flexibility to add properties in the exception aggregation, having too
many properties can potentially hinder performance. As a best practice, we recommend having exception
aggregations on the level you need with relevant properties only.
Tip
When creating the formula, hit Ctrl + Space or Cmd + Space to bring up the dimension properties
and select one from the list.
In the Calculations screen, after you’ve created the formula, you should see the updated aggregated values
with the exception aggregation without having to add the dimension property to the drill. If needed, you can
add additional dimension properties using the multi-selection dialog icon next to the Exception Aggregation
Dimensions option.
Related Information
You can define variables and their default values, and then enter the variable names in formula definitions. In
tables in stories, variable values are calculated and displayed.
In the modeler, the variables you enter are placeholders that you define with a default value. In stories, you can
then select the variable values so that the formula can be used to operate with different numbers or different
sets of data.
Select (Variables) to define variables. In this example, one variable has been defined:
You can then use the variable in a formula by referring to the Variable ID:
In tables in stories, you'll be prompted to set the variable values when you first add the model to the table, but
at any other time, you can change a variable setting by selecting the (Edit Prompts) icon on the story toolbar.
Restriction
Variables work only on booked data. For example, in an Account member that uses the Restrict formula,
if you define a variable to point to a hierarchy member that doesn't have booked data, the values of that
hierarchy member's child members don't aggregate into the chosen hierarchy member.
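The idea of a variable as a named placeholder with a default value, which a story prompt can override, can be sketched in Python (not SAC syntax; the variable and measure names are hypothetical):

```python
# Illustrative sketch: a model variable is a placeholder with a default
# value; story prompts can override it before the formula is evaluated.
def evaluate(formula_inputs, variables, overrides=None):
    """Resolve variable placeholders, preferring story-prompt overrides."""
    resolved = {**variables, **(overrides or {})}
    return formula_inputs["RevenueLastYear"] * resolved["scaling_factor"]

variables = {"scaling_factor": 1.0}     # default defined in the modeler
inputs = {"RevenueLastYear": 200.0}

default_result = evaluate(inputs, variables)                       # 200.0
prompted_result = evaluate(inputs, variables, {"scaling_factor": 1.5})  # 300.0
```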
To define one or more variables for the model, follow these steps.
Procedure
2. Select (Add).
3. Enter a Variable ID for the new variable.
4. (Optional) You can enter a description of the variable.
5. Select a Value Domain.
These are the options available for the value domain:
If the variable relates to an existing dimension, you can choose to select a member of the dimension
the variable relates to.
7. To use the variable in a formula, simply enter the formula in the normal way.
The variables you have defined will be included in the drop-down list boxes.
The Iterate formula is designed to work with embedded formulas for rolling calculations, such as cumulative
sum and compound growth.
Using the Prior function, you can also retrieve the value that was calculated for the previous member. The base
member in the second argument is always the starting value of the iteration.
Iterate results are reflective of the hierarchy order of the dimension and do not reflect the sorting
appearance on the front end. For flat dimensions, where no hierarchy exists, results are reflective of the
dimension ID ordering within the model. For more information, see SAP Note 3293548 .
Syntax
Examples
In this example, the function calculates a rolling sum along the date dimension split in quarters based on the
Balance measure. The second value is the starting value, and corresponds to the value of the base measure.
For each hierarchy level, every next value is calculated by adding the previous value to the current value.
The rolling sum is aggregated at each hierarchy level. Here, year, quarters, and months.
In this example, the second member Balance is the starting value, and the next value is calculated using the
([Growth]-[Expenses]) formula. The result of that formula is then added to the results of Balance.
In this example, the first value corresponds to the Balance value. Every next value is the previous value,
increased by a growth factor (usually expressed in percentage). The iteration moves along the date dimension,
and for each date, along the product dimension.
This example is the same as above, except that the iteration only moves along the Date dimension. The product
dimension isn’t taken into account.
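The growth-factor iteration described above can be sketched in Python (not SAC formula syntax; the balance values and rate are hypothetical):

```python
# Illustrative sketch of the Iterate pattern: the first value is the base
# measure (Balance); every following value is the prior iteration result
# increased by a growth factor, in hierarchy/member order.
def iterate_growth(balances, growth_rate):
    results = []
    for i, balance in enumerate(balances):
        if i == 0:
            results.append(balance)  # starting value = base member's value
        else:
            # Prior(...) retrieves the result calculated for the previous member
            results.append(results[-1] * (1 + growth_rate))
    return results

rolling = iterate_growth([100, 0, 0], 0.10)
# -> approximately [100, 110, 121]
```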
The Iterate formula can be used in conjunction with inverse formulas. Use inverse formulas to specify the first
argument of the Iterate formula and specify how exactly to inverse the Iterate formula.
Where SUM is the measure defined by the formula. Prior is also supported in inverse formulas. In this case, it
evaluates the SUM value of the previous date member.
This writes the SUM value to the previous date member, which iteratively invokes inverse formulas again. If
there’s no other inverse formula, this iterates until the first date member is reached. Note that a data entry on
the first member dismisses the inverse formulas and always writes to the base measure.
Balance Sheet
A balance sheet typically consists of three accounts or measures:
• Opening, which denotes the value at the start of the month or any other time period.
• Transactions, which denotes the changes within the months or other time period.
• Closing, which denotes the value at the end of the month or other time period.
Iterative formulas allow you to model this in multiple ways. The first way is to make Closing a growing sum
over Transactions, and Opening the difference between Closing and Transactions:
[Closing] - [Transactions]
Opening has the inverse formula INVERSE([Closing] := [Opening] + [Transactions]) and writes to
Closing, which is a calculated account or measure. Transactions is a non-calculated account or measure.
By doing this, the value of Transactions is adjusted accordingly when entering data. However, when entering
data on Opening, the formula will try to adjust the value of Closing, which cannot adjust the value of
Transactions since it’s a dependency of Opening. This would create a circular dependency. As a result, the
second inverse formula is used and writes to the previous value of Closing. This will evaluate all inverse
formulas again, and therefore write to the previous value of Transactions.
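The Opening/Transactions/Closing relationships above can be sketched in Python (not SAC syntax; the monthly values are hypothetical):

```python
# Illustrative sketch of the balance sheet pattern: Closing is a running sum
# over Transactions, and Opening = Closing - Transactions.
transactions = [50, -20, 30]    # hypothetical monthly movements

closing = []
for t in transactions:
    prior = closing[-1] if closing else 0
    closing.append(prior + t)   # Closing = prior Closing + Transactions

opening = [c - t for c, t in zip(closing, transactions)]
# closing -> [50, 30, 60]; opening -> [0, 50, 30]

# Inverse direction: entering a new Closing value for a month adjusts
# Transactions, since Transactions := Closing - Opening.
new_closing_month2 = 40
transactions[1] = new_closing_month2 - opening[1]   # -10 instead of -20
```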
Syntax:
Parameter Usage
• The currency conversion setting (enabled or disabled) for the linked model is the same as for
the current model.
• The Date dimension in the linked model has the same granularity as the Date dimension in the
current model, or both models have no date dimension at all.
• Fiscal Year is either disabled in both dimensions, or enabled with the same month shift.
<Point of View> The point of view is a list of filters, concatenated by the "and" operator, containing selected dimensions and their members. The point of view is used to further restrict data in the linked account member.
Each POV filter contains a dimension name followed by a string list. A string list is either a single
string, or a comma-separated list of strings in parentheses.
For this parameter, the all keyword is available to select all members in the dimension.
• Missing Dimensions
These dimensions exist only in the linked model. You can select none, one, many, or all members of each dimension, multiple dimensions, or even all missing dimensions.
• Matching Dimensions
These dimensions exist in both the linked model and current model. You have the same selection options as you have with missing dimensions.
• Additional Dimensions
These dimensions exist only in the current model. If there are any additional dimensions, it is
mandatory to specify them in a point of view, and you must select exactly one member of each
of the additional dimensions.
Example
The following example explains the different dimension types. The tables show the fact tables of the
corresponding models.
The three highlighted rows in Model_B should be linked to Model_A. We can add the following member with its
formula to Model_A to get the following result:
Model_A result
To understand which values are added to Model_A, consider the different dimension types:
• Missing Dimensions
• The missing dimension is CustomerGroup.
• The information about the CustomerGroup dimension is not available in the fact table of Model_A.
• Matching Dimensions
• The matching dimensions are Date and Country. These dimensions are used to map the linked values.
• Filters can be applied to these dimensions. In this example, a filter is applied to retrieve only values
from 2017.
• Additional Dimensions
• The additional dimension is Product.
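The way matching dimensions map rows, the POV filter restricts them, and the missing dimension is aggregated away can be sketched in Python. The fact rows below are hypothetical and not the exact figures from the tables in the original (which are not reproduced in this extract):

```python
# Illustrative sketch: selecting and aggregating rows from a linked model.
# Matching dimensions (Date, Country) map the values, a POV filter keeps
# only 2017, and the missing dimension (CustomerGroup) is summed away
# because Model_A cannot store it.
model_b = [
    # (Date, Country, CustomerGroup, Value) — hypothetical rows
    ("2017", "DE", "Retail",    10),
    ("2017", "DE", "Wholesale", 15),
    ("2017", "US", "Retail",    20),
    ("2016", "DE", "Retail",    99),   # excluded by the 2017 POV filter
]

linked = {}
for date, country, _group, value in model_b:
    if date == "2017":                     # POV filter on a matching dimension
        key = (date, country)              # matching dimensions map the values
        linked[key] = linked.get(key, 0) + value   # missing dim aggregated away
```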
Related Information
The functionality of the Restrict and Lookup formulas is very similar. They differ in the way the results are
displayed when showing a breakdown of aggregated numbers.
Syntax
Note
<Point of View> This is a list containing selected dimensions and their members:
This is used to further restrict data in the account member (see following example 1).
Additionally, you can use time navigation syntax to identify specific periods (see example 2 and the
following examples, and the dynamic time navigation functions below).
[d/Employee]="e1"
[d/Employee]=("e1","e2")
You can use multiple POV parameters and ignored dimensions by using the AND keyword. See
example 3 below.
You can use dimension attributes or properties in the POV. See example 5 below.
Example:
1. RESTRICT([Income],[d/Employee]=("e1","e2"))
2. RESTRICT([Sales],[d/Date]=Previous("Year", 1).Next("Day", 5))
3. LOOKUP([Sales],[d/Employee]="Anne" and [d/Country]="Spain" and [d/
Product]=("Paper","Glue"), [d/Country] and [d/Product])
4. RESTRICT([SalesRevenue],[d/Products]!="Apple Juice")
5. RESTRICT([NetRevenue],[d/ResponsibilityCenter].[p/currency]="CAD")
An extensive set of keywords is available for time navigation. The following examples show how these are used:
LOOKUP([Sales],[d/Date]=Next("Year",1).Current("Month"))
RESTRICT([Sales],[d/Date]=Next("Year",1).Last("Quarter",1).Last("Month",1))
LOOKUP([Sales],[d/Date]=Next("Year",1).LastPeriods("Month",2))
These functions are only supported by the new model type, and are only accessible from within the
Modeler, in both the Model Structure and Calculations workspaces.
Note
• Both functions are only available for the new model type, and in models that have both accounts and
measures.
To avoid errors while running the functions, make sure to pass the arguments in the right order. Also, you can
use Ctrl + Space to open the list of dimensions and measures you can use when writing the function.
These functions can be used only in the POV parameter of the Restrict and Lookup formulas. Use hints after
the [d/Date] = string to access them in the formula bar, and select Set Dynamic Time Filter.
Note
For the ToPeriod formula, hints are available in the formula bar only.
Except for the LastPeriods function, these functions can be combined using periods. Ensure that the
Granularity order is correct.
Example: Previous("Year",4).First("Quarter",3).Last("Month",6)
• ToPeriod — Syntax: ToPeriod("Granularity"). Parameters: (String Granularity). Sums the (aggregated) values over a given period. Example: ToPeriod("Year") returns the sum of the values from the beginning of the year to the current date.
• First — Syntax: First("Quarter",3). Parameters: (String Granularity, Number N). The Nth member relative to the parent granularity; for example, the Nth month of the quarter. Not available for “Year” (because there is no parent). Example: First("Month",2).First("Day",1) refers to 2017.08.01, the first day of the second month of the current quarter of the current year.
• Previous — Syntax: Previous("Quarter",2). Parameters: (String Granularity, Number N). The Nth member before the current one. Example: Previous("Month", 1) refers to 2017.06.14.
• Next — Syntax: Next("Year",4). Parameters: (String Granularity, Number N). The Nth member after the current one. Example: Next("Year", 3) refers to 2020.07.14.
Note
• If you use the dynamic time navigation functions, the time dimension needs to be part of the drill state.
Consider adding the time dimension to the list of required dimensions so that a story builder adds the
time dimension to the drill state.
• In stories, use the default date hierarchy when working with accounts that use dynamic time
navigation. When a widget doesn’t use the default date hierarchy, these accounts don’t show any
data.
• When working with planning model data in a story, users can enter data on restricted measures that
use dynamic time navigation. For more information, see Entering Values with Dynamic Time Filters
[page 2210].
In the case where an aggregated number is based on a dimension where some members in the hierarchy have
been excluded by the filter, there will be gaps in the selected data. When you analyze such an aggregated
number by drilling down to lower levels, how should these gaps be handled and what should be shown? These
formulas provide two solutions:
• Restrict – shows no value for rows or columns where no data has been selected.
• Lookup – simply shows the aggregated value for all lines (including filtered out members).
All the following examples are based on a simple dataset showing revenue for three products and two
countries:
Paper 1$ 4$
Plastic 2$ 8$
Restrict()
In this example, select two of the three products (paper and glue):
Example
RESTRICT([MyAccount],[d/Product]=("Paper","Glue"))
Total value returned: 53$. That is, an aggregated value for all paper and glue from all countries.
Further analysis: using Restrict, if you drill down further in the product dimension, any lines for products that
were not selected will show empty cells:
Lookup()
Lookup can be used with exactly the same parameters for the same purpose. But the behavior for handling
lower-level values is different. This example formula makes exactly the same selection as the previous Restrict
example:
Example
LOOKUP([MyAccount],[d/Product]=("Paper","Glue"))
Total value returned: 53$. That is, an aggregated value for all paper and glue from all countries.
Further analysis: using Lookup, if you drill down further in the product dimension, Lookup will show a value for
all lines:
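The two drill-down behaviors can be sketched in Python. The per-country Glue values below (20 and 28) are hypothetical, chosen so that Paper + Glue totals 53$ and all products total 63$ as in the text; the country names are also placeholders:

```python
# Illustrative sketch of Restrict vs. Lookup when drilling into Product.
data = {("Paper", "A"): 1, ("Paper", "B"): 4,
        ("Plastic", "A"): 2, ("Plastic", "B"): 8,
        ("Glue", "A"): 20, ("Glue", "B"): 28}   # Glue values assumed
selected = ("Paper", "Glue")
total = sum(v for (p, _), v in data.items() if p in selected)   # 53

def drill_by_product(mode):
    rows = {}
    for product in ("Paper", "Plastic", "Glue"):
        if mode == "lookup":
            # Lookup: the POV filter overrides the drill context, so every
            # line shows the aggregated value.
            rows[product] = total
        elif product in selected:
            rows[product] = sum(v for (p, _), v in data.items() if p == product)
        else:
            rows[product] = None   # Restrict: empty cell for filtered-out rows
    return rows
```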
The next examples follow on from the previous scenario to show the behavior of Lookup using the additional
parameter IgnoreDimension.
Example
In this first example, the same dimension filter for Paper and Glue is applied, and the IgnoreDimension
parameter is used to disregard the country dimension:
LOOKUP([MyAccount],[d/Product]=("Paper","Glue"),[d/Country])
Total value returned = 53$. That is, values for all paper and glue from all countries (country is ignored).
• When drilling down to show product detail, no value is shown for products that have been filtered out (not
been selected).
• When drilling down to show country detail, the same accumulated value is shown for both countries.
Example
If the filter (POV) parameter is omitted from Lookup, then all products are selected:
LOOKUP([MyAccount],,[d/Country])
Total value returned = 63$. That is, values for all products from all countries.
Further analysis:
• When drilling down to show product detail, all values are available (no filter was applied).
• When drilling down to show country detail, the same accumulated value for all products is shown for both
countries.
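The effect of the IgnoreDimension parameter can be sketched the same way, again with hypothetical Glue values (20 and 28) and placeholder country names:

```python
# Illustrative sketch of LOOKUP([MyAccount],[d/Product]=("Paper","Glue"),
# [d/Country]): Country is ignored, so both countries show the same 53$,
# while a Product drill shows values only for the selected products.
data = {("Paper", "A"): 1, ("Paper", "B"): 4,
        ("Plastic", "A"): 2, ("Plastic", "B"): 8,
        ("Glue", "A"): 20, ("Glue", "B"): 28}   # Glue values assumed
selected = ("Paper", "Glue")

def lookup_ignore_country(drill):
    if drill == "Country":
        # Country is ignored: the same accumulated value for each country.
        return {c: sum(v for (p, _), v in data.items() if p in selected)
                for c in ("A", "B")}
    # Product drill: filtered-out products show no value.
    return {p: (sum(v for (q, _), v in data.items() if q == p)
                if p in selected else None)
            for p in ("Paper", "Plastic", "Glue")}
```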
To maintain a consistent sign throughout the usage of the account dimension for account types INC and
LEQ, the correct account type needs to be set.
For example, if you have an account with account type INC, the signs of the account values are
automatically switched, as they are already stored with a reversed sign. If you create a calculated account
that uses a Lookup formula based on that INC account, the calculated account should also be set to the
INC account type if you want incomes to appear with the correct sign (usually positive).
For more information on account dimension attributes, see Attributes of an Account Dimension [page
606].
Related Information
In SAP Analytics Cloud, the dimensions that are defined in the ResultLookup formula need to be in the current
drill state. Also, you can filter on only one value.
• ResultLookup is not supported for non-standard time periods such as week-level granularity and
user-managed time dimensions.
• ResultLookup functions only work when the member has a single parent. If the member is present
under multiple parents, the formula is invalid.
Because value driver trees only include information about the Account and Time dimensions, ResultLookup
formulas often don't show any data in value driver tree nodes.
With Lookup, the dimensions defined in the Lookup formula don't need to be in the current drill state, and you
can maintain more than one member in the filter expression.
Detailed explanation and examples for the YoY (Year over Year) formula.
Year over Year returns a percentage showing the difference between the value of a member in the current year
compared with the previous year.
Syntax:
YoY([<account member>])
For example, if the value for 2008 is 0 while the previous year has booked data, the YoY result for 2008 is -100%.
Example
Revenue_YoY = YoY([Revenue])
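The YoY calculation can be sketched in Python (not SAC syntax; the revenue figures are hypothetical):

```python
# Illustrative sketch of YoY: percentage difference between the current
# year's value and the previous year's value.
def yoy(current, previous):
    if previous == 0:
        return None   # undefined when there is no prior-year base
    return (current - previous) / previous * 100

revenue = {2007: 500, 2008: 0, 2009: 250}
# A current value of 0 against a booked prior year gives -100%.
yoy_2008 = yoy(revenue[2008], revenue[2007])   # -100.0
```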
To change which accounts show positive values and which show negative values for different types of users,
you can create a calculated measure using a numeric property from the account dimension.
Context
In charts and tables, automatic sign switching is applied to measures for income accounts and liabilities and
equity accounts. These accounts usually hold negative values in your model so that they balance out expenses
and assets in your financial statements. However, in stories and analytic applications they’ll show up as positive
values to make the data easier to work with. For details, see Attributes of an Account Dimension [page 606].
However, you might want to change the signs for different user types. For example, an accountant may want
to see income and expense values as negative, while a manager may want to see negative income values and
positive expense values.
To customize your account values in this way, you can add numeric properties to the account dimension and
create calculated measures based on them.
Note
This functionality isn’t available for classic account models, since they don’t support calculated measures
in the modeler.
Procedure
1. From the Model Structure screen in the modeler, select the account dimension from the list of dimensions
to open the grid display.
2. In the Dimension Settings panel, select next to Properties to add a custom property.
3. Type an ID and description, and set the Data Type to Integer. Select Done.
The value should be -1 for accounts where you want to change the sign, and otherwise 1.
(Remember that automatic sign switching is still applied to the INC and LEQ account values, so in general
use -1 for accounts that should show negative values. You can also use the calculation preview to make sure
that your results are correct when setting up the calculated measures.)
Fill in values for all account members to avoid null values in the calculation results.
5. If needed, add more properties to set up different representations of the data.
When all the properties are ready, you can add a calculated measure for each property.
6. Switch to the Calculations screen by selecting (Back) and choosing Calculations from the screen list.
Usually you’ll want to match the settings of the base measure for this calculation.
10. Type the calculation in this format:
[d/AccountDimension].[p/PropertyName]*[BaseMeasure]
If your model has data, you can see a preview of the results.
This function lets you change the base measure value by entering data on the formula.
[d/AccountDimension].[p/PropertyName]*[BaseMeasure]|
INVERSE([BaseMeasure]:=[CalculatedMeasure]/[d/AccountDimension].[p/PropertyName])
For details about inverse formulas, see Inverse Formulas [page 917].
When you’re finished, you can add the calculated measures to tables and charts. If you added inverse
formulas to the calculations for a planning model, users can enter data on the calculations instead of the
base measure.
A table showing a base measure and two calculations with sign switching
By default, planning models do not support planning operations for cell values that are calculated by formulas.
If you want to allow data entry for a formula, you can add one or more inverse functions to specify how the
formula should be reversed.
For example, consider an account dimension that has a Profit member with the following calculation:
[Revenue] – [Cost]
With this formula, users cannot enter data for the Profit account until you define an inverse formula for it:
Users can now enter values for profit, and cost will be adjusted while revenue remains constant.
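The behavior described here can be sketched in Python. The inverse formula itself is not reproduced in this extract; the sketch assumes it assigns value to Cost, i.e. INVERSE([Cost] := [Revenue] - [Profit]), consistent with the text (values are hypothetical):

```python
# Illustrative sketch: Profit = Revenue - Cost, with an assumed inverse
# formula that adjusts Cost when a user enters data on Profit.
revenue, cost = 1000, 700
profit = revenue - cost            # base formula -> 300

entered_profit = 400               # user enters data on Profit
cost = revenue - entered_profit    # inverse function adjusts Cost -> 600
# revenue remains constant at 1000
```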
Note
• |
A vertical bar marks the end of the base formula and the beginning of one or more inverse functions.
• :=
Add a colon before the equals sign in inverse functions.
Note
• Inverse formulas work for classic account models, and for both measures and accounts in a model with
measures.
• You can't change inverse formula values using a data action.
As you begin to type the inverse formula, the hint list suggests complete inverse formulas that you can select.
To add multiple inverse formulas in this way, type or INVERSE at the end of the formula and then select the
next inverse formula from the hint list.
If you need to create an inverse formula that is only applied under certain conditions, see Conditional Inverse
Formulas [page 921].
In some cases, it may not be possible to change the value for cost. For example, a cell lock may be applied to
cost, or values may be entered for cost and profit simultaneously. To allow data entry in these cases, you can
define a secondary inverse formula that assigns value to revenue:
When the first inverse formula cannot be applied, the data entry can be carried out using the secondary
formula. As a result, the revenue value is adjusted.
For multiple inverse functions, the priority is determined by the order in which they are typed.
Inverse formulas can assign value to another member or measure calculated by a formula that includes inverse
functions. For example, after you add an inverse function to the profit formula, you can define an inverse
function for an account that is calculated based on profit, such as profit margin:
Inverse formulas can also assign values to aggregated accounts, including accounts that use exception
aggregation.
Inverse formulas can use data from a specific version, calculated using the Restrict or Lookup functions, as
an operand or as the target of the inverse formula.
To enable the data entry on the difference, you can add an inverse formula to the calculation of
RevenueVsActual:
Inverse formulas can assign value to calculations that use dynamic time navigation, and they can also use such
calculations as operands.
RESTRICT([IncomeStatement],[d/Date] = Previous("Quarter",1))
Data entry is already enabled for this restricted measure. However, you also want to allow planning users
to simulate different values for Quarter Over Quarter Growth (QQGrowth), which is calculated based on
IncomeStatementPreviousQuarter:
([IncomeStatement] - [IncomeStatementPreviousQuarter]) /
[IncomeStatementPreviousQuarter] | INVERSE([IncomeStatement] := [QQChange] *
[IncomeStatementPreviousQuarter] + [IncomeStatementPreviousQuarter])
For more information about dynamic time navigation, see Restrict and Lookup [page 906].
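The QQGrowth inverse above can be sketched in Python (not SAC syntax; the quarterly values are hypothetical):

```python
# Illustrative sketch: entering a growth value solves for IncomeStatement
# from the previous quarter's value, per
# INVERSE([IncomeStatement] := [QQChange] * [IncomeStatementPreviousQuarter]
#         + [IncomeStatementPreviousQuarter]).
prev_quarter = 200.0               # IncomeStatementPreviousQuarter
income = 230.0                     # IncomeStatement

qq_growth = (income - prev_quarter) / prev_quarter          # 0.15

entered_growth = 0.25              # user simulates 25% growth
income = entered_growth * prev_quarter + prev_quarter       # 250.0
```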
When you create an inverse formula for an account that uses exception aggregation, you can enter data on leaf
members of the exception aggregation dimension.
For example, consider the following formula for a revenue account that uses sum exception aggregation along
the Product dimension:
In this case, data entry is possible on leaf members of the product dimension for this account, but not on
parent members of the product dimension.
Data entry is also supported when the account data is filtered to a single leaf member of the exception
aggregation dimension. If the account uses multiple exception aggregation dimensions, each of them must be
filtered or drilled down to a single leaf member in order to perform data entry.
For more information about exception aggregation in revenue calculations, see Price * Volume Formulas [page
893].
• For first aggregation, you can enter data on a parent member if its first leaf member is already booked.
• For last aggregation, you can enter data on a parent member if its last leaf member is already booked.
In these cases, the values that you enter are booked to the parent member's first or last leaf member,
respectively.
You can also use exception aggregation accounts as operands in other inverse formulas.
Note
This does not apply to the exception aggregation types COUNT; COUNT excl. NULL; or COUNT excl. 0,
NULL. Inverse formulas do not support accounts that use these aggregation types.
Inverse formulas can be added to accounts calculated using the GrandTotal and %GrandTotal functions.
For example, the following inverse function can be defined for a TotalRevenue account:
Data entry on TotalRevenue is booked to the aggregated value of the revenue account.
Restrictions
• The inverse function assigns value to more than one operand. For example: INVERSE([Revenue] :=
1.5*[Profit], [Cost] := 0.5*[Profit])
• The base formula uses a Link or ResultLookup function, or the inverse function uses an operand that is
defined by a ResultLookup function. Inverse functions can be applied to formulas that use Restrict or
Lookup functions, including Restrict and Lookup functions that use dynamic time navigation.
• The inverse function uses specific cell values as inputs, instead of members.
Related Information
You can use the INVERSEIF function to define a conditional inverse formula.
The inverse formula is valid when the condition is met; otherwise, it will not be executed. The condition uses the
same syntax as the condition in an IF function, and the inverse formula uses the same syntax as an INVERSE
function. For more information, see Conditional Formulas Using Dimension Members [page 896] and Inverse
Formulas [page 917].
Consider the following example, where a Revenue account uses an IF function to take its value either from the
Actuals data or from a Forecast calculation, depending on the date:
You want to enable data entry on this account when it shows forecast data, but not when it shows actuals data.
Therefore, you can define the following conditional inverse function:
The account now supports data entry on forecast data from 2018 onwards.
Multiple INVERSEIF functions can be added to a formula, as well as multiple INVERSE functions. These
functions are evaluated according to their order in the formula, and the first valid function is applied. Data entry
may still be prevented if each inverse function has an unfulfilled condition or invalid target cells.
Related Information
Learn how to start ad-hoc analyses in SAP Analytics Cloud based on Live SAP BW queries, Live SAP HANA
views, SAP Analytics Cloud models, and SAP Datasphere models, and save your drill-down data state and
ad-hoc analysis as an insight.
Note
Previously, you created ad-hoc analysis and saved insights with classic data analyzer. You can still see
and access these classic insights in the Files repository. To transform existing classic insights to the new
insights, see Migrate Classic Insights [page 949]. For new ad-hoc analyses and insights, we recommend
using the new data analyzer, which you can access on the data analyzer start page.
If you work with new data analyzer, see the following chapters for more details:
If you want to open an existing classic insight and use classic data analyzer, see the following chapter for more
details:
• Launch Classic Data Analyzer and Start Ad-Hoc Analysis [page 943]
If you want to migrate classic insights to new insights, see the following chapter:
Data analyzer is a predefined ready-to-run application for Live SAP BW queries, Live SAP HANA views, SAP
Analytics Cloud models and SAP Datasphere models for ad-hoc analysis. It can be accessed by using the side
navigation of SAP Analytics Cloud and it enables you to save your drill-down data state and analysis as insight.
All Live SAP BW queries, Live SAP HANA views and models can be accessed directly in the selection dialogs for
data source and existing models and no additional model is created.
Note
Previously, you created ad-hoc analysis and saved insights with classic data analyzer. You can still see and
access these classic insights in the Files repository. To transform existing classic insights to new insights,
see Migrate Classic Insights [page 949].
Note
Before you can work with data analyzer, you need to have the corresponding permission. Many standard
application roles already include the permission to work with data analyzer. For further information, contact
your application designer and see Standard Application Roles [page 2838].
Learning Tutorial
Click through the interactive tutorial illustrating how to launch data analyzer from the data analyzer start page
and create a data analysis based on a SAP HANA view in step-by-step instructions (5:00 min); the tutorial is
captioned exclusively in English:
Context
As an integrated part of SAP Analytics Cloud, you find data analyzer in the side navigation on the SAP Analytics
Cloud start page.
Procedure
1. To open data analyzer and create a new ad-hoc analysis, go to Data Analyzer in the side navigation
of SAP Analytics Cloud. The start page of data analyzer is displayed with the following URL: https://
<hostname>/sap/fpa/ui/app.html#/dataanalyzer
2. On the data analyzer start page, choose one of these options:
• Create New From a Data Source — creates an ad-hoc analysis based on a Live SAP BW query or Live SAP HANA view. Please ensure the following prerequisites are met before proceeding: to use an SAP BW query or an SAP HANA view, at least one live connection to an SAP BW system or a live SAP HANA system needs to be configured in SAP Analytics Cloud.
Note
• You can work with only a single data source in data analyzer.
• Once you've opened data analyzer for a new ad-hoc analysis following the two options
described above, the URL will get updated containing the data source information, for example
in SAP BW Data Analyzer https://<hostname>/sap/fpa/ui/app.html#/dataanalyzer&/dz/?
connection=BIX&dataSourceName=0BICS_C03_BICSTEST_Q0010
• When you start a new ad-hoc analysis and leave data analyzer without saving any changes, you
won't be asked to save your changes. Only if you open an existing insight, change it, and leave
without saving are you asked whether your changes should be saved. Be aware that users with the
viewer role and without the private insight privilege can open and change insights but can't save
the changes.
As a user of a story in optimized design and view mode, you can open data analyzer from a table and a number
of supported chart types via the actions menu item Open Data Analyzer....
You can choose whether data analyzer should be opened in the current tab ( Open Data Analyzer... Open in
Current Tab ) or in a new tab ( Open Data Analyzer... Open in New Tab ). To be able to use the Open Data
Analyzer... menu item, you need to do one of the following:
• enable data analyzer on a table or chart via the builder panel (by checking the property Enable Data
Analyzer)
• enable data analyzer for all tables and all supported chart types via the Story Details dialog (by selecting
Story Details Data Exploration Settings Enable Data Analyzer for all supported widgets in the File
menu of the story)
When the action menu item Open Data Analyzer... is selected, data analyzer is opened, depending on your
choice, either in the current tab or in a new tab (while the story remains in the current tab) with transferred
table or chart information like the following:
• data source
• navigation state (dimensions/measures on rows and columns)
• filters
• thresholds
• variables
Note
In some cases the Open Data Analyzer... action menu item will be disabled:
• If the user does not have the Execute privilege for data analyzer.
• For more information on other use cases where this option is not available, see SAP Note 3278772.
Note
For opening data analyzer from a chart, the following applies: you can open data analyzer from any chart
type except the histogram chart.
You can use the URL in data analyzer to open and view the data analysis or share it with other users.
The URL uses a fixed structure and you can add parameters to it.
• You can open data analyzer via the URL parameters that specify the connection and the data source name.
For further information on how to open data analyzer via URL parameters that specify the connection and
data source, please see the corresponding sections in the tables below.
Example
Usage with the system type parameter:
https://<hostname>/sap/fpa/ui/app.html?#/dataanalyzer&/dz/?systemType=BW&connection=<connectionName>&dataSourceName=<BWQueryName>
Example
Usage without the system type parameter:
https://<hostname>/sap/fpa/ui/app.html?#/dataanalyzer&/dz/?connection=<connectionName>&dataSourceName=<BWQueryName>
Note
It is important that you use dataanalyzer.
Example
https://<hostname>/sap/fpa/ui/app.html#dataanalyzer&/dz/?systemType=HANA&connection=<connectionName>&dataSourceName=[<SchemaName>][<PackageName>][<HANAViewName>]
Note
• It is important that you use dataanalyzer.
• An SAP HANA view ID should follow the format [<SchemaName>][<PackageName>][<HANAViewName>],
where the square brackets are required.
• For [<SchemaName>], always enter [_SYS_BIC] to signify a remote SAP HANA system.
Note
You can open data analyzer by creating an analytic application with the following script in the
onInitialization event handler of your application:
NavigationUtils.openDataAnalyzer(undefined, undefined, undefined, false);
Example
https://<hostname>/sap/fpa/ui/app.html#/dataanalyzer&/dz/?systemType=MODEL&dataSourceName=<PackageName>:<ModelName>
Example
https://<hostname>/sap/fpa/ui/app.html#/dataanalyzer&/dz/?systemType=MODEL&dataSourceName=<resourceId>
Note
It is important that you use dataanalyzer.
Example
/sap/fpa/ui/app.html#/dataanalyzer&/dz/?dataSourceName=[spaceName][][modelName]&systemType=DATASPHERE&connection=connectionName
Note
It is important that you use dataanalyzer.
Note
Additionally, you can use filter and variable URL parameters to open a specific view on your data. To do
this, follow the syntax described in the SAP Analytics Cloud URL API Developer Guide.
• It is also possible to open a saved insight via a URL parameter that specifies the insight ID. Additionally,
you can also use URL filter parameters to open the insight with a specific filtered view on your data.
• To compose the URL with the insight ID, follow this syntax:
https://<hostname>/sap/fpa/ui/app.html#dataanalyzer&/dz/<InsightCUID>
• To set the filter URL parameters, follow the syntax for setting filters described in the SAP Analytics
Cloud URL API Developer Guide.
Note
Variable parameters in the URL are currently not applied to the insight. For more information on other
issues you may encounter when using URL parameters, see SAP Note 3408228.
The filter URL parameters are preserved in the URL after being applied. Non-relevant URL
parameters are removed.
Besides starting a new ad-hoc analysis from the data analyzer start page, you can start with opening a saved
insight, for example from the Recent Files section on the data analyzer start page or from a private, public, or
workspace folder location on the Files page.
You have several options to work on your ad-hoc analysis that you started in data analyzer.
You can set filters and variable prompts, work with the table's context menu, and use the builder and styling
panel to explore your data.
Adding Filters
If you want to add filters, click (Add Filter) and choose one of the listed items. In the Set Filter dialog, you
can add filters for dimensions and filter by member or by range. After you have added filters, the filter
tokens are displayed next to (Add Filter). By clicking (Delete Filter), you can easily remove a filter.
In the Settings section you can globally turn on the Cascading Filter Behavior for all dimensions added to the
filter line.
• With Always cascade members turned on, only members that are relevant in the current filter context will
be available for selection in the filter dialog.
• With Default turned on, you can switch back to the default dimension setting.
• for SAP BW data sources the default is the setting in the SAP BW query.
• for SAP HANA data sources, the default setting is “booked”.
You can individually change the cascading behavior of a single dimension in the Set Filter for <your item>
dialog. This turns off the global setting within the filter line.
Note
There are cases when the filter of a compounded dimension defined in the SAP BW query causes an
incompatible filter state in data analyzer. This is the case if the higher-level dimension of the compounded
dimension is not filtered uniquely. This scenario cannot be mapped to the dimension filter dialog; hence
you are asked to first remove the current backend filter before you can open the filter dialog.
Note
If your data source is designed for setting variable prompts, click (Edit Prompts...) in the toolbar to set
the variable prompts.
The Builder panel is displayed at the right side of the application. Under Rows and Columns, you see all
measures and dimensions that are displayed in the table. In the Available Objects panel that is displayed next
to the Builder Panel by default, all available measures and dimensions of the data source are displayed. Here
you can select dimensions and assign them directly to the table's rows or columns by clicking the
corresponding icons or by dragging and dropping dimensions or measures from the Available Objects to the
Builder panel. Or you can simply drag and drop objects like dimensions, measures, and structures from the
Available Objects list into the table. To display more measures in the table, check the measures in the list, or
uncheck them if you want to remove them from the table. In the same way, you can remove dimensions or
measures from the Builder panel and thus from the table.
For the Rows and Columns, you are offered a context menu that you can open by clicking (More). Here you
can use the following functions:
Each dimension in the Builder panel also has a context menu that you can open and use by clicking next to
the corresponding dimension.
In the Available Objects panel, you can also search for objects. Additionally, in the upper right corner of the
Available Objects panel, you can change the objects' display as well as their sort order.
You can resize the width of the left side panel for Available Objects. This way you can display long dimension
and long measure names.
In the Builder panel you can also switch the visualization of your data from table to chart by choosing a chart
type in the dropdown box at the top of the panel. Each chart type offers a specific set of settings that you can
change. The following chart types are available:
• bar
• column
Note
The data visualization switch is supported for SAP BW live and SAP HANA live connections as well as for all
SAP Analytics Cloud models (except the classic account model) and all SAP Datasphere models.
Learning Tutorial
Click through the interactive tutorial illustrating how to use and work with charts in data analyzer (2:00 min);
the tutorial is captioned exclusively in English:
In the Measures section of the Available Objects list you can add calculations that are based on the listed
measures by clicking Add Calculation. In the Calculated Measure dialog, you create formulas by adding
functions, operators, and conditional operators listed in the left-hand side panel or by typing the formula
directly into the editor.
The formula editor supports the following functions, operators and conditional operators:
• Functions: IF, ISNULL, NOT, ABS, EXP, SQRT, GrandTotal, %GrandTotal, MOD, and POWER
• Operators: +, -, *, /, and **
• Conditional Operators: AND, OR, =, !=, <, <=, >, and >=
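For illustration only, a calculated-measure formula combining some of these elements might look like the following; the measure names are hypothetical placeholders, and the exact measure-reference syntax is suggested by the editor itself:

```
IF([Revenue] > 0, [Profit] / [Revenue], 0)
```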
You can use the newly created calculated measures in the table or chart. In the table, you can create thresholds
based on the calculated measures.
In the Styling panel you can use the Default or Basic template as styling option for your table and switch on
Alternating Rows. To change the number formatting of your measures, you can use different number formatting
options such as Scale, Scale Format, Decimal Places, and so on.
For SAP BW data sources you can set the position of the Display Scale/Currencies/Unit in the Styling panel.
Depending on the context menu of a measure or dimension, you are offered different options:
• filter member
• filter
• exclude member
• sort options
• table functions
• select conversions
• freeze with various options
• swap axis
• show/hide with various options
• full screen
• create top N
Additional Options: Create New and Show Information about the Data
Source
In the Additional Options menu of the toolbar you can use the following items.
• By clicking Create New... you can create a new ad-hoc analysis with a new data source.
• By clicking Show Information... in the dropdown, you can get some general information about the query or
view and the date of the Last Data Update.
You can use the toolbar menus Undo, Redo, and Reset in data analyzer. These menu functions support all
actions that manipulate the data source in the table (like, for example, sorting, swapping axes, and more). They
do not support table styling settings and general appearance settings of user interface elements.
With the toolbar menu Auto Refresh you can set the auto refresh of the data.
You can use custom formatting options to highlight data information such as low sales in an area.
Context
Use thresholds to compare measures to a fixed range or to another measure. You can create thresholds and
assign styles and colors to them.
Procedure
1. In the side navigation of your ad-hoc analysis, open the Styling panel and choose (Create Threshold) in
the Conditional Formatting section.
2. In the Create Threshold dialog perform the following steps:
1. Optional: Enter a name for your threshold (by overwriting the default name).
2. Choose the account or measure you want to create the threshold for.
Note
• Depending on the model/data source of your analysis, either Account or Measure is displayed.
• For universal account models you can specify a measure or a list of measures in the Measure
field.
• For SAP BW two structures queries you can specify a structure member or multiple structure
members in the Structure field.
• A threshold is defined on a specific hierarchy in an account dimension. If you have multiple
hierarchies in an account dimension, you will need to create different thresholds for each
hierarchy.
• You can use as many or as few ranges as you like, and you can change the label names and
colors.
• When you add ranges, the displayed icons and labels will keep cycling through the default
choices. This can be modified as you wish. You can change the icon or label for your range.
• You do not need to set both an upper and a lower bound if you have only one range. When you
add more ranges, you can leave either the upper or lower bound empty.
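The behavior of ranges with an optional upper or lower bound, as described above, can be pictured with a small sketch. The labels, bounds, and the inclusive boundary handling here are illustrative assumptions, not the product's exact evaluation logic.

```python
def classify(value, ranges):
    # Each range is (label, lower, upper); a bound of None means the
    # range is open on that side. Boundary inclusiveness is an assumption.
    for label, lower, upper in ranges:
        if (lower is None or value >= lower) and (upper is None or value <= upper):
            return label
    return None

# Hypothetical threshold with three ranges, open at both ends.
ranges = [("Poor", None, 50), ("OK", 50, 80), ("Good", 80, None)]
```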
5. Under Style you can choose any style from the following options:
• Color Values
• Color Background
• Color Background (non-Transparent)
• Color Background (Without Values)
• Symbol
6. Under the Filters section, choose a dimension in the Select Dimension dropdown box to add filters
based on this dimension.
After you have selected a dimension from the dropdown, the filter dialog for dimension members
is displayed, where you can select one or more members. The threshold will only be applied to the
selected dimension members related to the chosen overarching measure. This will create a filter
token below the Filters section for each dimension.
3. Click OK.
Results
You have created one or more thresholds that are displayed in the table with the respective style you have
selected.
Next Steps
Once you have created a threshold, the threshold card is added to the Conditional Formatting section and set
active by default. For each threshold you have the following options to work with:
• Edit
You can edit an existing threshold by choosing (Edit Threshold). This opens the Edit Threshold dialog
where you can edit every option except the selected measure.
• Delete
By choosing (Delete) the threshold is deleted. To revert the deletion choose (Undo) in the toolbar.
After you have drilled down to the data details according to your needs and analyzed your data, you might want
to save this insight.
About Insights
An insight represents the displayed data source with filters, the navigation state as well as the presentation
state of the table.
Note
• If you are assigned to the creator or admin role, you can save an insight in the public or your private
files.
• If you are assigned to a viewer role, you can save an insight in your private files only. To enable this, an
administrator can grant the private insight privilege to users with no create rights on the public or
private files by creating a custom role and assigning the private insight privilege. With the private
insight privilege, these users can also share and delete their insights; however, they can't edit or
rename them.
Saving Insights
After saving an insight, the insight name is displayed in the header of data analyzer. Saving an insight also
creates an insight ID that is displayed in the URL, for example, https://<hostname>/sap/fpa/ui/app.html#/
dataanalyzer&/dz/1A6006079B878D23EF65D16ACCBB65C4
Opening Insights
To open an existing insight and make changes to the data, select Files from the side navigation and select the
folder where you have saved your insight. In the folder, click the respective insight to open it in data analyzer.
Alternatively, you can open insights that you have worked on and saved recently in the Recent Files section on
the data analyzer start page.
Note
Insights are labeled with the symbol in front of the insight name and can be found under the file type
Insight.
To save your changes, click Save Save to overwrite the insight. Or, to save your changes as a new insight,
select Save Save As , choose the file location (public or private) for your insight, and enter a name
and description for it.
Note
Changes to existing insights can be made depending on permissions for your user role and for the file
repository.
In the folder view you can also share your insight, add it to your favorites or delete it.
For the time being you can share insights by email but you can’t create a custom link.
Note
If you open a classic insight from the Files repository, your insight will be displayed in classic data analyzer.
You can migrate your classic insight to a new insight. For more details, see Migrate Classic Insights [page
949].
In data analyzer you have different options to export data from your table. With the export options, all data in
the table is exported at the time of the export.
To start the export of your table's data, choose (Export) in the data analyzer toolbar and select one of the
export options:
• Export Excel...
• Export PDF...
• Export CSV...
You find the export files in the download area of your local PC.
In the Export Data as Microsoft Excel File dialog you can overwrite the proposed name of your export file (data
source name as default) and select Expand Hierarchy to expand all the hierarchies (if any available in the data
source) for the export. Note the following when exporting data as an Excel file:
• The table is exported as you see it in the resultset (with the exception being the hierarchy – if you choose
the setting Expand Hierarchy)
• Formatting (not styling) is included by default in the export. Number formatting that is also exported
includes scaling, units and currencies that were defined in data analyzer.
• Currency/Units are exported as represented in the table (Cells, Row, Column) and you can perform
calculations in the exported Excel file and click in the cell to see the raw data value. You can also sort and
filter in the Excel file.
• If your table has a hierarchy, all the nodes of the hierarchy will be exported, even if you have not drilled
down on the data in your table.
• Hierarchies are indented and exported as seen in the table (unless you choose Expand Hierarchy)
• You can choose the option to print repetitive members of a dimension (Show Repetitive Member Names) in
the export dialog, regardless of the table or chart setting.
• Value exceptions are represented in the export.
• Metadata sheet is exported with the following information:
For more information on specifics, points to note and possible limitations when exporting data to Microsoft
Excel in data analyzer, see SAP Note 3302128
How it is exported depends on the size of the table when compared to the PDF page dimensions. Page
orientation, paper size, location of the page number and the name of the file you are exporting are some other
properties you can set:
Paper Size:
• A2
• A3
• A4 (default)
• A5
• Legal
• Letter
• Formatting (not styling) is included by default for the PDF export. Number formatting that is also exported
includes scaling, units and currencies that were defined in data analyzer.
• Pages are indexed and split if necessary and the headers are repeated.
• Hierarchies are indented and exported as seen in the table.
• Expansion symbols for indented hierarchies in the exported table output to PDF are not displayed
• You can choose the option to print repetitive members of a dimension (Show Repetitive Member Names) in
the export dialog, regardless of the table or chart setting.
• Value exceptions are represented in the export.
• Non-ASCII characters are not supported for the PDF file export.
For more information on specifics, points to note and possible limitations when exporting data to PDF in data
analyzer, see SAP Note 3302109
In the Export Data as CSV File dialog you can overwrite the proposed name of your export file and you can
choose the CSV delimiter. The following delimiters are available:
• Comma ,
• Semicolon ;
• Tab
• Space
• Dot .
• Colon :
• Dash -
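When processing an exported file further, the consumer has to be configured with the same delimiter that was chosen in the dialog. For example, in Python (the sample rows are hypothetical):

```python
import csv
import io

# Hypothetical export content, assuming Semicolon was chosen as the delimiter
# in the Export Data as CSV File dialog.
sample = "Product;Revenue\nBikes;1000\nHelmets;250\n"
rows = list(csv.reader(io.StringIO(sample), delimiter=";"))
```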
You can use key combinations (keyboard shortcuts) instead of the mouse to move around in SAP Analytics
Cloud Data Analyzer as well as to navigate the table.
The following table gives an overview of the most common key combinations for navigation in data analyzer.
Note
• Depending on your keyboard layout and type, some key combinations could be different and won't
work as described in the list.
• Combinations of the Ctrl key and letters like CTRL + S , CTRL + C , CTRL + V are
currently not supported.
Expand object +
Collapse object -
The following table gives an overview of the most common key combinations for navigating the data analyzer
table.
Expand an object +
Classic data analyzer is a predefined ready-to-run application for live SAP BW queries, live SAP HANA views,
and SAP Analytics Cloud models for ad-hoc analysis. If you open a classic insight, it is displayed in classic
data analyzer. You can edit the classic insight with the classic data analyzer and save it, or you can migrate the
classic insight to an insight that can be edited in the new data analyzer. For new insights we recommend using
the new data analyzer, which you can access, for example, via the data analyzer start page.
Classic data analyzer contains a table, a filter area, and a builder panel with navigation capabilities to add and
remove dimensions and measures from the table. In addition, you find a menu bar with a Refresh possibility and
an Edit Prompts dialog (in case your data source is designed for setting prompts).
Note
Before you can work with data analyzer, you need to have the corresponding permission. Many standard
application roles already include the permission to work with data analyzer. For further information, contact
your application designer and see Standard Application Roles [page 2838].
Note
• You can work with only a single data source in data analyzer.
• When you start classic data analyzer through a classic story (by selecting Open in Data Analyzer)
and leave classic data analyzer without saving any changes, you won't be asked to save your
changes. Only if you open an existing classic insight, change it and leave without saving, you are
asked whether your changes should be saved. Be aware that users with viewer role and without the
private insight privilege can open and change insights but can't save the changes.
With the introduction of the new data analyzer, you can't open classic data analyzer from the data analyzer
start page any longer. For new ad-hoc analyses and for the creation of new insights, we recommend using the
new data analyzer. However, you can still work in classic data analyzer when you open existing classic insights.
• You can start with opening a saved classic insight from the Files start page.
Your classic insight will open in classic data analyzer where you can make changes and save your insight.
Once you have opened a classic insight in classic data analyzer you can also manually migrate this classic
insight to an insight that you can edit and change with the new data analyzer. For more information, see
Migrate Classic Insights [page 949] .
• As a story user of a story that has been created in the classic mode, you can open classic data analyzer
from a table via the actions menu item Open Data Analyzer. To be able to use this action menu item, first
the data analyzer property has to be enabled by one of following options:
• enable data analyzer on a table via the builder panel (by checking the property Enable Data Analyzer)
• enable data analyzer for all tables via the Story Details dialog (by selecting Story Details Data
Exploration Settings Enable Data Analyzer for all Tables in the File menu of the story)
You have several options to work on your ad-hoc analysis that you started in classic data analyzer.
You can set filters and variable prompts, work with the table's context menu, and use the builder and styling
panel to drill down into your data.
Setting Filters
If you want to set filters, click to display the filter line and click beneath the menu bar to choose one
of the items. To collapse the filter line, click again . Depending on the amount of filters you have set, the
Note
If your data source is designed for setting variable prompts, click Edit Prompts in the menu bar to set the
variable prompts. When you have set variable prompts, they are displayed in the Information menu under
Variables.
The Builder panel is displayed at the right side of the application. Here you can see the data source of the
application and change it if needed. Under Rows and Columns, you see all measures and dimensions that are
displayed in the table.
In the Available Items area that is displayed on the left-hand side of the navigation panel by default, you can see
all available items for the data source. Here you can select dimensions and measures and assign them directly
to the table's rows or columns by clicking the corresponding icons. To close the Available Items area, click the button.
You can also drag and drop dimensions from the Available Items panel into the Builder panel.
In the Available Items panel, you can also search for items. Additionally, in the upper right corner of the
Available Items area, you can change the items' display between ID and Description as well as their sort order.
You can resize the width of the left side panel for Available Items. This way you can display long dimension
names.
To change the number formatting of your measures, switch to the Styling panel. Here you can find different
number formatting options.
Note
Depending on the context menu of a measure or dimension, you are offered different options. For example, you
can add calculations to the table by using the context menu entry Add calculation.
In the Information popup window, you can get some general information about the query or view in the General
tab, like the Query name, the Info Provider name, and the date of the Last Data Update, as well as the used
variables in the Variables tab. Next to Last Data Update, you can click the Refresh button to refresh the data.
After the refresh, the time stamp changes according to the last data update.
In classic data analyzer you have different options to export data from your table.
• a CSV file
• an XLSX file
• a PDF file
To start the export, choose Export in the table's context menu and select under File Type your preferred export
file.
Note
For more information about exporting data as a CSV or XLSX file, see Export Table Data as a CSV or XLSX
File [page 248].
When selecting Export from the table's context menu you can choose to export the data to PDF. On selection of
PDF within the dialog, you can configure the layout of the table exported to PDF.
How it exports depends on the size of the Table when compared to the PDF page dimensions. Page orientation,
paper size, location of the page number and the name of the file you are exporting are some other properties
you can set.
You can choose whether to display an appendix. If you choose to include an appendix, metadata will be
displayed within the output on the appendix page.
The following restrictions apply to table exports that have the Scope set to All:
Note
• Formatting (such as cell color, font styles, and so on) will not be exported.
• Hyperlinks are removed
If the chosen PDF page size and orientation is not sufficient to accommodate all the data, the data is split over
multiple pages, both horizontally and vertically as required. For the PDF creation the following pre-conditions
are checked:
• The chosen PDF page size should be of sufficient height to accommodate all the rows that contain column
dimension headers and attributes, plus at least one more row containing either data cells or row dimension
headers.
• The width must be sufficient to accommodate all columns containing row dimension headers/attributes
and at least one column containing data cells.
If either of these pre-conditions is not met, an error message is displayed suggesting that you choose a bigger
page size or a different orientation.
If this occurs, the table will be split from left to right and top to bottom across multiple PDF pages by default. If
the table does split, an indication is displayed above each table section showing which rows and columns are
being displayed, for example, 1 - 1. The table row and column headers will be repeated when a table split occurs.
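The two pre-conditions listed above can be summarized as a simple check. This is a sketch in abstract layout units, not a function of any SAP API:

```python
def pdf_export_fits(page_height, page_width,
                    header_rows_height, data_row_height,
                    header_cols_width, data_col_width):
    # Height must fit all column-header/attribute rows plus one data row;
    # width must fit all row-header/attribute columns plus one data column.
    min_height = header_rows_height + data_row_height
    min_width = header_cols_width + data_col_width
    return page_height >= min_height and page_width >= min_width
```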
Hierarchies
When you choose scope All, all hierarchy nodes are shown as expanded within the PDF document, whereas with
scope Point of View, the nodes are exported as they are displayed in the table from which the data is exported.
To illustrate hierarchy levels, cell texts are indented for hierarchies in row dimension members and vertically
aligned for hierarchies in column dimension members.
The data exported with scope Point of View is considered live data from the table, whereas with scope All, the
data is acquired data.
Note
Expansion symbols for indented hierarchies in the exported table output to PDF are not displayed.
Note
Formatting is included by default for PDF. Number formatting includes scaling, units and currencies that
were defined in the model and story.
All: Exports everything that the data source has, with the same filters applied and with hierarchies expanded.
Point of View: Default export. Exports what you see in the table grid (all the visible rows and columns).
5. Choose the page orientation to set how the export output is displayed:
Option Description
6. Choose the paper size to set the paper size of the export output:
• A2
• A3
• A4 (default export)
• A5
• Letter
• Legal
7. Choose the page number location:
• None (default export)
• Header
• Footer
8. Decide whether you want an appendix in your PDF export. By default no appendix is displayed.
Note
The appendix must be included to have metadata displayed in the PDF export.
The following metadata will be displayed on the appendix page if you have included the appendix in
your PDF export:
• Tenant URI
• Query Name
• Info Provider Name
• Created By
• Created On
• Variables
• Filters
You can open already existing classic insights from the Files repository.
To open an existing classic insight and make changes to the data, select Files from the side navigation and
select the folder where you have saved your insight. In the folder, click the respective insight to open it in data
analyzer. To save your changes, click Save Save . For creating new insights, we recommend using the new
data analyzer.
Note
Changes to existing insights can be made depending on permissions for your user role and for the file
repository.
Note
If you want to migrate your classic insight to an insight that can be edited with the new data analyzer,
please see Migrate Classic Insights [page 949].
In the folder view you can also share your insight, add it to your favorites or delete it.
Note
For the time being you can share insights by email but you can’t create a custom link.
Learn how to migrate classic insights to insights that you can then edit in data analyzer
Context
You used to create and save insights in classic data analyzer. You can still access these classic insights from
your Files repository. If you want to migrate existing classic insights to new insights, follow this procedure.
Note
• Classic insights can be accessed and opened from the Files repository and they won't be displayed in
your Recent Files on the data analyzer start page.
• Table calculations are not available in data analyzer and will be dropped when you open a classic insight
in data analyzer. For more information, see SAP Note 3280828.
1. In the Files repository open your classic insight. It is displayed in classic data analyzer.
2. In the File dropdown, click Open Data Analyzer.
3. In the confirmation dialog Open Data Analyzer, click Open to open your classic insight in data analyzer.
4. For the migration of the classic insight to a data analyzer insight to take effect, save your insight.
Results
By saving your classic insight in data analyzer, you have migrated the insight.
Note
• After saving the classic insight in data analyzer using the same name, you cannot revert the migration
and you won't be able to open the insight in classic data analyzer again. If you save the classic insight
in data analyzer under a different name, then you will be able to open the classic insight in classic data
analyzer again.
• There are some instances when Open Data Analyzer is disabled:
• When you use an insight that has a model with a custom date dimension.
• When you open classic data analyzer through a classic story (which has been designed in the classic
design experience), you first need to save the insight.
• When you edit a classic insight without saving. You need to save the insight first.
An SAP Analytics Cloud story is a presentation-style document that uses charts, visualizations, text, images,
and pictograms to describe data.
Once you create or open a story, you can add and edit pages, sections, and elements as you like. The story
toolbar is divided into different categories such as File, Insert, Data, and Tools to help you find options and
perform tasks more efficiently.
As the owner of stories that you've created, you can share them with other users and grant permissions for
these stories.
When you share your stories, users with view permissions can analyze the data by navigating within the stories.
Setting up a Story
If you are using input tasks with your tables, you can get more information from the following links:
Use Custom Add-Ons on Your Widgets (Optimized Story Experience) [page 1648]
Related Information
Learn about SAP Analytics Cloud stories and create a sample story.
What is a Story?
Stories are at the center of the SAP Analytics Cloud experience. They let you explore data interactively to find
insights, visualize information with charts and tables, and share, present, and comment on your findings with
colleagues. Before you get started, it's helpful to know a few basic things.
A story offers two views:
• A Data view where you can explore data in real time, with dynamic visualizations changing on the fly.
• A Story view where you can design beautiful, interactive dashboards for yourself or others.
You can create new pages and add items such as charts, tables, and other graphics that visualize your
data. Items on a page, such as a bar chart, are arranged as tiles that you can move around, resize, and style
to your liking.
Whichever view of a story you are using, the key to the underlying data lies in the measures and dimensions
defined in the story's model or dataset.
• Measures (or Accounts in planning models) represent quantities that provide meaning to your data. For
example, sales revenue, salary, or number of employees.
• Dimensions represent categories that provide perspective on your data. For example, product category,
date, or location.
Together, they are the framework for viewing data in interesting ways, whether it be a trend line of revenue over
time, or a comparison of gross margin across different regions.
Dimensions contain attributes – think of them as columns in a dimension table. For example, a product
dimension's attributes could include an ID and description.
There is an underlying model or dataset for every story, which defines the dimensions, measures, and other
relationships in your data, such as hierarchies or currency conversions.
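For instance, a model built from a flat file pairs one measure column with several dimension columns. The following hypothetical rows (the column names echo the sample file used later in this guide; the values are invented for illustration) show Value as the measure and the remaining columns as dimensions:

```
AccountId,ProductsId,RegionId,CurrencyId,TimeMonth,Value
REVENUE,PRD01,NA,USD,2024-01,12000
REVENUE,PRD02,EMEA,EUR,2024-01,8500
EXPENSES,PRD01,NA,USD,2024-01,7300
```

Slicing the Value measure by any of these dimensions, such as RegionId or TimeMonth, is exactly the kind of analysis that charts and tables in a story are built on.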
Tip
Looking for a deeper dive on models? If so, see Learn About Models [page 599].
Or are you more interested in datasets? If so, see About Datasets and Dataset Types [page 544].
With SAP Analytics Cloud, you can bring in data from a wide variety of cloud-based and local data sources.
For this guide, let's use a simple CSV file containing revenue and expenses for a set of products in a fictional
company.
Download the CSV file here to follow along (unzip to find SampleData.zip and then unzip again).
Let's begin from the Home screen and import the sample data.
Note
I'm Feeling Lucky automatically maps the column data to measures or dimensions, and dives right into
the Data Exploration mode ( ). You can always change this model.
After the data is loaded into a model, you'll see the Data Manipulation view in your newly created story.
Your column data has automatically been mapped to measures and dimensions. In our sample data, the
column Value should be a measure, identified by the icon, and the other columns should be set to dimensions.
Tip
If the data was imported as agile data instead of basic data, the dimensions won't have the icon.
To change to using basic data preparation, select File Save Open With Basic Data Preparation .
4. Select Add Dimension Attributes Description, and choose ProductsDescription from the drop-down list.
Note that the icon next to the ProductDescription column has changed to indicate it is
no longer a dimension. Repeat these steps for AccountID/AccountDescription, RegionId/
RegionDescription, CurrencyId/CurrencyDescription, and ChannelId/ChannelDescription.
5. Choose on the Details panel to switch to an overview of the data as a whole. Review the Model
Requirements section to confirm that mapping is complete and that no data quality issues are detected.
6. Save your changes by selecting Save to generate the model in the story with your data.
Now that the model is saved and your data imported, let's start exploring your data.
In the Data view, charts are auto-suggested as you select and deselect measures and dimensions:
1. Select Show Dimensions in the facet panel and choose AccountId, ProductsId, RegionId, and
TimeMonth.
2. Select the Value measure and REVENUE from the AccountId dimension.
4. In the Display section of the toolbar, you can show or hide dimensions from the (Show or Hide)
menu. Select CurrencyId to show this dimension and de-select TimeMonth and RegionId to hide them.
5. Now select CurrencyId in the facet panel to change the suggested chart again.
If you don't like the chart that is suggested, you can manually select the chart type instead:
Try it for yourself! The Data view allows you to explore insights into your data at your own pace.
The Data view is a good way to get quick insights into your data, but SAP Analytics Cloud provides a robust
designer where you can build more permanent pages with visualizations and other interactive elements. To get
started, let's add a table to the page that shows the revenue breakdown over time.
You can also reposition tiles by grabbing an edge and moving the tile around the canvas. Surrounding
tiles on the page auto-arrange as needed.
The table now shows actuals and forecast values for the individual accounts. Select the arrow next to (all) to
expand the TimeMonth dimension that was set up as a hierarchy earlier to see a more detailed breakdown by
year, quarter, or month.
The Canvas provides a variety of tools that let you control how the charts on your page behave and are
displayed. Let's try a couple of these out.
Filters let you focus on a specific set of data. You can apply filters to an entire story, a single page, or a specific
chart on a page. Let's add a currency filter that restricts the data for the entire story to show only US dollars.
2. Select (Add Story Filter/Prompt) and choose Dimensions CurrencyId from the drop-down list.
A dialog will appear allowing you to set a filter on this dimension.
Note
For Trial users, the CurrencyId is directly accessible from the drop-down list.
Both the chart and table in the story should refresh automatically to show only US dollar values. The new filter applies to the entire story.
The Styling panel on the Canvas allows you to change the fonts, colors, and other styling attributes of tiles on
your page. You can also make use of pre-configured style templates. Let's apply a Report-Styling template to
the table and give it a different look.
You can also change individual fonts and other styling options.
Now that you've created your story, select (Save) to save your changes.
Note
Trial users won't have other team members to share with. Skip ahead to Pin to Your Home Screen.
2. In the Share Story dialog, select (Add usernames or teams) and select a colleague's username.
When your colleague logs on to SAP Analytics Cloud, they will see a notification containing a link to open your
story and see what you've done.
You can also pin individual story tiles to your Home screen. Pin different story tiles to create a dashboard
of important charts and other visualizations that you can view immediately after signing into SAP Analytics
Cloud.
Go to the Home screen and you'll see the chart. You can resize and reposition the chart on the Home screen as
you would on the story Canvas.
Congratulations! You've created your first story. Take a look at the following resources to find out what else you
can do.
• Learn About Models [page 599]: find more information on creating and using models.
• Video Tutorials [page 2980]: watch our video tutorials to get started on additional topics.
• Welcome to the SAP Analytics Cloud Help [page 20]: search the SAP Analytics Cloud help to find step-by-
step procedures on how to use the product.
A list of help topics that explain how to use the keyboard to navigate SAP Analytics Cloud, and help topics
describing best practices when designing stories for accessibility.
There are various topics in the SAP Analytics Cloud help that explain the keyboard shortcuts that are available
for navigating stories, tables, and so on.
• Keyboard navigation for story viewers: Content Viewing Accessibility - Use the Keyboard to Access Story Pages [page 183]
• Keyboard navigation for story designers: Keyboard Shortcuts for Story Pages (Edit Mode) [page 1025]
• Best practices for making accessible stories: Accessibility - Best Practices for Making Stories Accessible [page 1017]
• Keyboard navigation within tables or in the planning panel: Keyboard Shortcut List for Tables [page 1395]
• Keyboard navigation within the SAP Analytics Cloud calendar: Keyboard Shortcut List for Calendar Navigation [page 2676]
• Keyboard navigation within the script editor: Use Keyboard Shortcuts in Script Editor [page 1662]
Use a combination of SAP Analytics Cloud story and SAP Analytics Cloud, analytics designer features to build a
story.
Optimized Story Experience integrates elements of both story pages and analytics designer scripting
capabilities.
Assets panel and Outline panel: Quickly build out your dashboard by dragging widgets from the Assets panel and managing them on the Outline panel. See Create a New Story (Optimized Story Experience) [page 969] and Manage Your Widgets (Optimized Story Experience) [page 976].
Copy and paste: Copy and paste widgets, scripting elements, and pages across optimized stories or within the same story. Copying and pasting items between composites and optimized stories is also supported. See Copy and Paste Items (Optimized Story Experience) [page 978].
Responsive layout: Use responsive pages and define responsive rules to design stories that can adapt to different screen sizes. See Design Responsive Layout (Optimized Story Experience) [page 1624].
Theming improvements: Creating themes is now more intuitive with widget previews, letting you see the design changes before you commit to them. We've also improved the contrast on the default color palettes. See Configure Preferences and Use Themes (Optimized Story Experience) [page 1609].
CSS capabilities: Define CSS classes for the whole story or for any widget, page, or popup, which brings more flexibility and granularity to the styling in the story. See Customize Your Themes Using CSS (Optimized Story Experience) [page 1621].
Linked Widgets Diagram: View and manage widget relationships on the Linked Widgets Diagram after creating linked analyses, cascading effects, or filter lines in your story, for example. See Work with Linked Widgets Diagram (Optimized Story Experience) [page 1555].
Toolbar updates: The restructured story toolbar better reflects the more commonly used actions. See Navigate Within Stories (Optimized Story Experience) [page 980].
View time toolbar customization: Customize the toolbar so that specific quick access functions and submenu functions are displayed to the story viewers. See Customize View Time Toolbar for Your Story (Optimized Story Experience) [page 1014].
Filter panel: Add filters that are applied to all pages in the story. You can display your story filters in a vertical format, which makes it easier to view hierarchies and see long lists of members. See Change the Filter Bar Orientation (Optimized Story Experience) [page 175] and Edit Filter Bar Settings (Optimized Story Experience) [page 1549].
Data Change Insights: Enable and subscribe to top N data change insights in the story to keep informed of data changes of specific charts on a regular basis. See Subscribe to and View Data Change Insights [page 177] and Enable Data Change Insights and Use Related APIs [page 1697].
Composite: Use composites, which are combinations of widgets with configured data and scripting elements that can be reused across stories. See Use Composites in Your Story Design (Optimized Story Experience) [page 1638].
Custom widget: Use custom widgets, which extend the functionalities of SAP Analytics Cloud and complement the existing set of widgets according to your needs. See Use Custom Widgets (Optimized Story Experience) [page 1645].
Custom add-on: Use custom add-ons on your widgets, which extend the predefined set of widget add-ons provided by Optimized Story Experience. See Use Custom Add-Ons on Your Widgets (Optimized Story Experience) [page 1648].
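To illustrate the CSS capability described above, here is a minimal, hypothetical class that you might define in the story's CSS editor and then assign to a widget via its Styling panel. The class name and properties are examples only; the set of supported CSS properties varies by widget, so refer to Customize Your Themes Using CSS (Optimized Story Experience) for what your version supports:

```css
/* Hypothetical class for highlighting a chart widget;
   property support depends on the widget type. */
.sales-highlight {
  background-color: #f5f7fa;
  border: 1px solid #0a6ed1;
}
```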
If you’re a story developer who is assigned to the Application Creator role, you can turn on or off advanced
mode as you like to have different views for building your story. Advanced mode lets you add custom logic to
your story with more widgets, functionalities and scripting capabilities.
Advanced mode in Optimized Story Experience integrates features from SAP Analytics Cloud, analytics
designer, where you can create customized interactions and add custom logic with a variety of widgets,
functionalities and scripting capabilities.
Note
To use advanced mode, make sure that you've been assigned to the role Application Creator.
To switch on advanced mode, go to the View section in the toolbar and choose (Advanced Mode). To switch it
off, choose the same option again. You're notified both when entering and exiting advanced mode.
Rather than creating your story in a self-service workflow with laid-out widgets and configured functionalities,
in advanced mode you can use the following additions:
SAP Analytics Cloud stories combine your data with visual elements that help you to describe and analyze your
data.
Context
A story is a document that uses charts, visualizations, images, and other objects to describe and analyze data.
You can share your stories with other users and grant permissions for these stories. When you share your
stories, users with view permissions can analyze the data by navigating within the stories.
Click through the interactive tutorial, which shows step by step how to create a story in the Optimized Story
Experience (4:00 min); the tutorial is captioned exclusively in English:
Procedure
Add a Responsive page: Responsive pages allow you to create layouts that automatically resize and reflow when viewed on different screen sizes.
Add a Canvas page: Use one or more canvas pages to explore and present your data. Lay out charts, tables, geo maps, images, and other objects. Use the Designer panel to format and manipulate the data in your canvas elements.
Use Templates: You can apply formatting to your story by using a template, which provides predefined layouts and placeholders for objects to help you build a story. For more information, see Create and Use Story Templates [page 1047].
For more information on the story pages, see Story Pages [page 1026].
The story opens with the Assets panel displayed on the left. In the Assets panel, you can find a full list of
available widgets organized in categories, and you can directly drag and drop a widget onto a page to build your
story. The widgets here are the same as the ones in the Insert menu in the toolbar.
If the Assets panel is not open, from the toolbar, select View Left Side Panel .
4. From the Assets panel, expand Widgets, select a Chart or Table widget, and then drag that widget to the
body of the story page.
The data source is added to your story and the Builder panel (Right Side Panel) is displayed.
6. In the Builder panel, select dimensions and accounts to configure your widget.
You've added a chart or table with data to your story. You can find it in the Outline panel on the left, which
shows the structure of your story. From here, you can also locate the added widget on the page, edit its ID,
show or hide it, prevent it from being resized or repositioned, and add scripting.
7. Save your story.
Your story name can include valid alphanumeric characters and underscores. Spaces can be
included but not at the start or end of a name.
Results
The story is saved in the location you selected. If you saved the story to a public folder, it is visible to all users. If
you saved the story to a private folder, only you can see and edit it until you share it with other users. For more
information, see Share Files or Folders [page 193].
You can view your story by choosing View in the top-right corner. If your story doesn't contain scripting
elements, you can choose to view in the current browser tab or a new one via (View Story In...). If it does,
you can only view in a new tab.
Note
To view the newly created story in a new tab, you need to save it first.
When you view in a new tab, only saved states are displayed.
By default the story is opened in view mode. In Files you can also directly open it in edit mode by selecting
Next Steps
As a story developer, you can add scripts to your widgets to implement custom-specific behavior in addition to
basic user interaction in view time.
1. In the Outline panel, hover over the widget you'd like to add scripts to.
You can also trigger scripting by selecting Edit Scripts in the widget's (More Actions) menu.
4. In the script editor, write scripts for the event of the widget. They are executed when, for example, a viewer
changes the selections on the widget.
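As a sketch of what such a script can look like, the following hypothetical handler for a chart's onSelect event copies the viewer's selection into a table filter. The widget names (Chart_1, Table_1) and the dimension ID ("Location") are placeholders that depend on your own story, and this runs only inside the SAP Analytics Cloud scripting host, not as standalone JavaScript:

```javascript
// onSelect event of Chart_1 (widget names and dimension ID are examples)
var selections = Chart_1.getSelections();
if (selections.length > 0) {
    // Read the member the viewer selected for the "Location" dimension
    var member = selections[0]["Location"];
    // Apply that member as a filter on the table's data source
    Table_1.getDataSource().setDimensionFilter("Location", member);
}
```

Check the analytics designer API reference for the exact signatures available in your version before reusing a pattern like this.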
Related Information
Create a new SAP Analytics Cloud story in a few steps. Use a story to describe your data with charts, tables,
text, images, and more.
Context
A story is a document that uses charts, visualizations, images, and other objects to describe and analyze data.
You can share your stories with other users and grant permissions for these stories. When you share your
stories, users with view permissions can analyze the data by navigating within the stories.
Procedure
Tip
You can create a story from the Files page by selecting Create Story .
Add a Responsive page: Responsive pages allow you to create layouts that automatically resize and reflow when viewed on different screen sizes.
Add a Canvas page: Use one or more canvas pages to explore and present your data. Lay out charts, tables, geo maps, images, and other objects. Use the Designer panel to format and manipulate the data in your canvas elements.
Add a Grid page: Grid pages allow you to add data to an empty grid, or you can add a table based on an existing model.
From a Smart Discovery: Use Smart Discovery to clearly define the context of the business question you want to ask your data.
Use Templates: You can apply formatting to your story by using a template, which provides predefined layouts and placeholders for objects to help you build a story. For more information, see Create and Use Story Templates [page 1047].
For more information on the story pages, see Story Pages [page 1026].
Note
You can add data at a later stage from the Data view. To switch to this view, select Data in the Story/
Data control.
4. Add data to your story: from the toolbar, select Data, or on a Canvas page you can select Add Data from the
page.
Import data from an Excel or CSV (comma-separated-values) file, a data source, or an existing dataset or
model.
Note
Your story name can include valid alphanumeric characters and underscores. Spaces can be
included but not at the start or end of a name.
Results
The story is saved in the location you selected. If you saved the story to a public folder, it is visible to all users. If
you saved the story to a private folder, only you can see and edit it until you share it with other users. For more
information, see Share Files or Folders [page 193].
By default the story is opened in view mode. In Files you can also directly open it in edit mode by selecting
Related Information
You can add a variety of widgets to your story and perform further actions on them, such as setting their size,
position, and edit time and view time visibility.
To show or hide a widget in edit time, go to the Outline panel to find the widget first. When hovering over it, you
can see (Visible) or (Hidden), which indicates its visibility status. Choose the icon to change the status.
The visibility setting on Outline applies to view time as well.
Additionally, as a story developer, you can set the widget's initial view time visibility and write scripts that
change the visibility dynamically at view time. You can find the View Time Visibility setting in the Actions
section of the widget's Styling panel; it overwrites the setting on the Outline panel.
• Go to the Outline panel to find the widget. When hovering over it, you can see or , which indicates its
status. Choose the icon to change the status.
• On the page or popup, open the (More Actions) menu of the widget and select Enable Mouse Actions or Disable Mouse Actions.
When a widget is locked, you can see an icon over it indicating the status. You can still resize or remove it using
keyboard shortcuts or the Styling panel.
You can configure the view time context menu of a widget, including the data points, in the Quick Menus section
of its Styling panel. For example, here's the context menu configuration of a chart:
You can then select the actions to be available in view time so that viewers can only do these specific actions.
You can resize or position a widget under Size and Position in its Styling panel. The following settings are
available:
Align a Widget
You can align the position of your widgets to improve the look of your story. For more information, see Align
Your Widgets (Optimized Story Experience) [page 1606].
As a story designer, you can copy and paste widgets, scripting elements, and pages across optimized stories
or within the same optimized story, either in the same browser tab or across browser tabs. Copying and pasting
items between composites and optimized stories is also supported.
Prerequisites
For example, you've created a story with widgets and scripting elements, such as script variables, script objects
and OData services. Now you'd like to use one of the widgets with the corresponding scripts in another story,
another page within the story or a composite.
Context
• Copy and paste between optimized stories and classic stories or analytic applications
• Copy and paste items between optimized and classic responsive pages
• Copy and paste responsive pages to another optimized story with different responsive layout
• Copy and paste pages from optimized stories to composites
Restriction
The following components and widget elements in optimized stories won't be copied and pasted:
The following story-level settings in optimized stories won't be copied and pasted:
• Story details
• Query settings
• View time toolbar settings
• Story filters and filter panel
• Bookmark settings
• Auto refresh configuration
• Loading optimization settings
• Preferences and themes
• CSS
Procedure
1. On your story page, select the item you want to reuse, and then Copy Copy from its (More
Actions) menu or the toolbar.
If you want to reuse the item on the same page, select Duplicate, and it's done.
2. Paste the item.
You can also use the keyboard shortcuts CTRL + C and CTRL + V to copy and paste.
You can copy and paste either in the same browser tab or across browser tabs. You can also copy and
paste without opening another browser tab by closing the current story and moving to the target story or
composite.
Results
You've copied and pasted an item to the target story or page. For a widget with scripts assigned, you can find
an active script indicator next to the pasted widget in the Outline panel; select it to check the copied script.
Note
The corresponding script variables and technical components won’t be copied and pasted with the page.
You need to copy and paste them individually.
Did someone share a story with you? Explore the menu options and objects on the story page in SAP Analytics
Cloud's Optimized Story Experience. You can use these tools to make sense of your data.
Here's an overview of the main menu options that you can find within a story in SAP Analytics Cloud:
Note
Your choice of menu options changes depending on your role, the way the story was created, and the data it
contains.
In Edit mode, there is a View Story In dropdown button next to the View mode toggle button. By selecting the
dropdown button, you can choose from one of the following options:
The following table is a comparison of the collapsed and expanded toolbar layouts. Compared to the collapsed
layout, the options may be in a different order when the toolbar is expanded.
Collapsed
Expanded
Note
The options displayed in the expanded toolbar may vary. For example, you may see the Insert (2) options instead of the
File (1) options on the toolbar.
Legend
Key Option
1 File
2 Edit
3 Insert
4 Tools
7 Undo
8 Redo
9 Publish Data
10 Version Management
11 Allocate
14 Filter Panel
15 Advanced Mode
17 Refresh
18 Edit Prompts
19 Edit CSS
The following table shows the submenu options for the File, Edit, Insert, Tools, View, and Format menus when
the toolbar is collapsed.
Note
Options that have “planning models only” next to their names are only available if a planning model is
added to your story.
Menu/Functionality Description/Submenu
File (1)
Save
Save As
Save As Template
Export
Share
Publish to Catalog
Edit Story:
• Story Details
Edit (2)
Undo
Redo
Refresh:
• Refresh
• Configure Auto Refresh
• Loading Optimization Settings
Copy
Copy To
Duplicate
Paste
Paste Overall Values Only
Note
The text on the right side of the dialog (A) shows keyboard shortcuts for the Undo, Redo, Copy, Paste, and Paste Overall Values Only options.
Insert (3)
Chart
Table
Input Control
Text
Button
Filters/Controls:
• Checkbox Group
• Dropdown
• List Box
• Radio Button Group
• Range Slider
• Slider
• Switch
Text Inputs:
• Input Field
• Text Area
Containers:
Others:
• Geo Map
• Image
• Shape
• Symbol
• Comment
• RSS Reader
• R Visualization
• Value Driver Tree
• Web Page
Custom Widgets
Tools (4)
Add New Data
Publish Data
Edit Prompts:
• Link Variables
Link Dimensions
Chart Scaling
Note
Chart Scaling is currently not supported in Optimized
Design Experience.
Conditional Formatting
Note
Create Input Task is currently not supported in Opti-
mized Design Experience.
Formula Bar
Predictive Forecast
Note
Predictive Forecast is currently not supported in Opti-
mized Design Experience.
Smart Discovery
Note
Smart Discovery is currently not supported in Optimized
Design Experience.
Performance Optimization:
Format (5)
Layouts
Theme:
Edit CSS
• Assets
• Outline
• Builder
• Styling
Info Panel:
• Errors
• Reference List
Filter Panel
Note
For information on editing the filter panel settings, see
Edit Filter Bar Settings (Optimized Story Experience)
[page 1549].
Advanced Mode
Comment Mode
Legend
Key Option
1 File
2 Edit
3 Tools
4 Display
6 Version Management
7 Allocate
8 Edit Prompts
9 Filter Panel
10 Data Refresh
11 Bookmark
12 Undo
13 Redo
14 Page navigation
15 Fullscreen
Note
Options that have “planning models only” next to their names are only available if a planning model is
added to your story.
File (1)
Save
Save As
Save As Template
Export
Share
Publish to Catalog
Schedule Publication
Note
The text on the right side of the dialog (A) shows keyboard shortcuts for the Save and Save As options. For
more information on keyboard shortcuts, see Keyboard Shortcuts for Story Pages (Edit Mode) [page 1025].
Edit (2)
Undo
Redo
Reset
Data Refresh:
• Refresh
• Auto Refresh
• Loading Optimization Settings
Copy
Paste
Note
The text on the right side of the dialog (A) shows keyboard shortcuts for the Undo, Redo, Copy, Paste, and Paste Overall Values Only options.
Tools (3)
Predictive Forecast (planning models only)
Edit Prompts
Bookmark:
Formula Bar
Tab Bar
Show/Hide:
• Filter Panel
Note
For information on changing the filter bar to the
filter panel, see Change the Filter Bar Orientation
(Optimized Story Experience) [page 175].
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
(Allocate) (7)
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
(Undo) (12)
(Redo) (13)
Page navigation (14) This functionality shows the page number. You can use the
arrows to navigate through the pages in your story.
(Fullscreen) (15)
Toolbar Options
You can find the following menu options on the toolbar that is directly above the story page. The options you
see will vary depending on the way the story was created and the data that it contains.
Toolbar Options
Functionality Description
Save a) Save As: save the story with a different name to create a new version of it. Keep in mind that this
copy is not linked or synchronized with the original one.
b) Save As Template: save the story as a template that can be used to create a new story.
Reset Reset any changes that you made to filters and input controls to get the original view of the story.
Your comments and any new versions of the story you created are saved.
Filter Panel Apply filters for all charts that are based on the same data source. The filter is applied in all the
pages of a story.
Formula Bar Calculate values in empty table rows and columns, or cells outside a table.
Refresh a) Refresh: reload the story to get the most recent data.
b) Auto Refresh: automatically refresh the story at an interval set by the story creator.
Edit Prompts If this option is enabled, you can filter story data from the data source level by setting story
variables that you want to display in the story. Charts or tables that are built from that data source
will get updated based on the details you entered.
For example, if you want the story to show only full-time employees who have worked in the
company for at least 7 years, you can set these variables and change the details.
Comment Mode Switch Comment Mode on and off to hide or display comments in the story. You can add com-
ments to story pages as well as to charts and other objects in the story. After a comment is added,
everyone who has access to the story will get notified.
Fullscreen Display the story in full-screen, without toolbars. To switch out of full-screen view, hover at the top
of the screen.
Bookmark Current State a) Bookmark Current State: create Personal bookmarks to save different states of a story as
private bookmarks that are visible only to you. Story and page filters, input controls, prompts,
explorer views, and variances are saved in your bookmarks.
b) Open Saved Bookmarks: access your Personal bookmarks (created by you), Global bookmarks
that are shared with anyone who has access to the story, and other bookmarks that are shared
with you. When a Personal bookmark is shared with you, you have the option to remove it from
your list of bookmarks.
Click anywhere within the area of a chart or table to open the Action Menu. Expand (More Actions) to see
all the options available in the menu. Note that these options vary depending on how the table or chart was
created and the data it contains. Just make a choice and the selected chart or table will get updated right away.
Click on a data point of a chart or table to open the Context Menu. Note that the menu options you get vary
depending on how the chart or table was created and the data that it contains.
For more information on context menu options, see Chart Context Menu Options [page 1239] or Table Menu
Options on Story Pages [page 1374].
Additional Options
You can find more options on the story page, depending on the story that is being shared with you. Once
you have made a choice, either the entire story page or selected elements on the story page will get updated
accordingly.
Functionality Description
Pages A story can contain multiple pages. Use the tabs to move from one page to the next.
Page Filter If a story page contains page filters, you can filter all charts in the story page that are
based on the same data source.
Select the data you want to display in your charts or tables and see the updates in
real time.
• Slider: control what data you want to see on your charts and tables.
• Radio Buttons: let you pick one measure to display in your charts or tables.
• Checkboxes: let you choose one or more measures to display in your charts or
tables.
See the images below for some examples of input controls [page 1012].
Linked Analysis When multiple charts are connected to one chart through Linked Analysis, the
charts will update simultaneously based on the selection you made in that chart.
Once you are done, click a blank area within the chart to remove the filter. See the
images below for more details on how to use and reset Linked Analysis [page 1013].
Input Control
Slider
Radio Buttons
Checkboxes
You can use a chart as a filter for other charts if they are connected through linked analysis.
Example
Let's say you want to know the average gross margin of carbonated drinks in California in 2014. Select the
highlighted area on the vertical bar chart. Now, the numbers on the horizontal bar chart tell you the
average gross margin of carbonated drinks in 2014, rather than of all products over time.
This is what it looks like before and after you use linked analysis on two bar charts that are linked together.
Here’s what it looks like before and after you reset linked analysis on two bar charts that are linked together.
Do you want to reset your linked analysis? Just click on a blank space within your chart.
In the example above we applied a filter to see the average gross margin of carbonated drinks in 2014.
Related Information
Did someone share a story with you? Explore the menu options and objects on the SAP Analytics Cloud story
page. You can use these tools to make sense of your data!
Here's an overview of the main menu options that you can find within a story in SAP Analytics Cloud:
Note
Your choice of menu options changes depending on the way the story was created and the data it contains.
The following table is a comparison of the collapsed and expanded toolbar layouts. Compared to the collapsed
layout, the options may be in a different order when the toolbar is expanded.
Collapsed
Expanded
Note
The options displayed in the expanded toolbar may vary. For example, you may see the Insert (2) options instead of the
File (1) options on the toolbar.
Legend
Key Option
1 File
2 Insert
3 Tools
4 Data
5 Format
6 Display
7 More
The following table shows the order of the options when the toolbar is collapsed.
Note
Options that have “planning models only” next to their names are only available if a planning model is
added to your story.
File (1)
Edit Story:
• Story Details
• Preferences
• Query Settings
• Smart Insights Settings
• Convert to Optimized Design Experience
Save:
• Save
• Save As
• Save As Template
• Export
• Enable Optimized View Mode
• Copy
• Copy To
• Duplicate
• Paste
• Paste Overall Values Only
Share:
• Share
• Publish to Catalog
Insert (2)
Chart
Table
Input Control
Geo Map
Section
Image
Shape
Text
Clock
Comment Widget
RSS Reader
Web Page
Planning Trigger
R Visualization
Symbol
Formula Bar
Chart Scaling
Conditional Formatting
Smart Discovery
Data (4)
Publish Data
Refresh:
• Refresh
• Configure Auto Refresh
Edit Prompts:
Link Dimensions
Examine
The following table is a comparison of the collapsed and expanded toolbar layouts. Compared to the collapsed
layout, the options may be in a different order when the toolbar is expanded.
Collapsed
Expanded
Legend
Key Option
1 File
2 Mode
3 Data
4 Display
5 Actions
Note
This option will appear on your toolbar if you added a dataset.
The following table shows the order of the options when the toolbar is collapsed. The options may be in a
different order when the toolbar is expanded.
Options that have “Grid View for datasets only” next to their names are only available if your added data
source is a dataset and you have enabled Grid View.
Menu/Functionality Description/Submenu
File (1)
Edit Story:
• Story Details
• Preferences
• Query Settings
• Smart Insights Settings
• Convert to Optimized Design Experience
Save:
• Save
• Save As
• Open With Basic Data Preparation (Grid View for datasets only)
• Enable Optimized View Mode
Mode (2)
Data Exploration
Grid View
Note
If you do not have a dataset added and you have added
a public or embedded live connection model, you will
not see the Grid View option. Instead, you will see the
following message: Data preparation is unavailable for
public and embedded live connection models.
Data (3)
Add New Data
Edit Prompts:
Link Dimensions
Edit Query
Display (4)
Show/Hide:
• Show/Hide Data
Sort Facets
Note
If your added data source is a dataset and you have enabled Grid View, you will see Sort with the
following options in your Display dialog instead:
Actions (5)
Toggle Transform Bar
Note
Toggle Custom Expression Editor
This option appears on your toolbar if you added a
dataset and enabled Grid View.
Geo Enrichment:
Legend
Key Option
1 File
2 Edit
3 Tools
4 Display
6 Version Management
7 Allocate
8 Filter
9 Refresh
10 Bookmark
11 Page name
12 Page navigation
13 Fullscreen
Note
Options that have “planning models only” next to their names are only available if a planning model is
added to your story.
Menu/Functionality Description/Submenu
File (1)
Save
Save As
Save As Template
Export
Share
Publish to Catalog
Note
The text on the right side of the dialog (A) shows keyboard
shortcuts for the Save and Save As options. For more
information on keyboard shortcuts, see Keyboard
Shortcuts for Story Pages (Edit Mode) [page 1025].
Edit (2)
Reset
Refresh:
• Refresh
• Auto Refresh
Copy
Paste
Tools (3)
Predictive Forecast (planning models only)
Edit Prompts
Bookmark:
Formula Bar
Story Filter/Prompt
Comment Mode
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
(Allocate) (7)
Note
If you are using a planning model in your story, this
option is displayed in your toolbar.
(Filter) (8)
(Refresh) (9)
Page name (11) This functionality shows the name of the current page. You
can choose another page using the dropdown menu.
Page navigation (12) This functionality shows the page number. You can use the
arrows to navigate through the pages in your story.
Fullscreen (13)
Menu/Functionality Description/Submenu
File (1)
Save:
• Save
• Save As
Note
The text on the right side of the dialog (A) shows keyboard
shortcuts for the Save and Save As options.
Edit (2)
Refresh
Display (3)
Story View Mode
Tab Bar
(Refresh) (4)
Page name (6) This functionality shows the name of the current page. You
can choose another page using the dropdown menu.
Page navigation (7) This functionality shows the page number. You can use the
arrows to navigate through the pages in your story.
(Fullscreen) (8)
Toolbar Options
You can find the following menu options on the toolbar that is directly above the story page. The options you
see will vary depending on the way the story was created and the data that it contains.
Toolbar Options
Functionality Description
Save a) Save As: save the story with a different name to create a new version of it. Keep in mind that this
copy is not linked or synchronized with the original one.
b) Save As Template: save the story as a template that can be used to create a new story.
Reset Reset any changes that you made to filters and input controls to get the original view of the story.
Your comments and any new versions of the story you created are saved.
Story Filter Apply filters for all charts that are based on the same data source. The filter is applied in all the
pages of a story.
Formula Bar Calculate values in empty table rows and columns, or cells outside a table.
Refresh a) Refresh: reload the story to get the most recent data.
b) Auto Refresh: automatically refresh the story at an interval set by the story creator.
Edit Prompts If this option is enabled, you can filter story data from the data source level by setting story
variables that you want to display in the story. Charts or tables that are built from that data source
will get updated based on the details you entered.
For example, if you want the story to show only full-time employees who have worked in the
company for at least 7 years, you can set these variables and change the details.
Comment Mode Switch Comment Mode on and off to hide or display comments in the story. You can add
comments to story pages as well as to charts and other objects in the story. After a comment is added,
everyone who has access to the story will get notified.
Fullscreen Display the story in full-screen, without toolbars. To switch out of full-screen view, hover at the top
Bookmark Current State a) Bookmark Current State: create Personal bookmarks to save different states of a
story as private bookmarks that are visible only to you. Story and page filters, input controls, prompts,
explorer views, and variances are saved in your bookmarks.
b) Open Saved Bookmarks: access your Personal bookmarks (created by you), Global bookmarks
that are shared with anyone who has access to the story, and other bookmarks that are shared
with you. When a Personal bookmark is shared with you, you have the option to remove it from
your list of bookmarks.
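The Edit Prompts behavior described in the table above — viewers set data-source variables, and every chart or table built on that data source is re-evaluated against them — can be sketched generically in Python. The record fields and prompt names here are hypothetical illustrations, not an SAP Analytics Cloud API:

```python
# Hypothetical data source records, mirroring the help text's example
# of full-time employees with at least 7 years in the company.
employees = [
    {"name": "Ana",  "employment": "full-time", "years": 9},
    {"name": "Ben",  "employment": "part-time", "years": 12},
    {"name": "Cara", "employment": "full-time", "years": 4},
]

# Variables the viewer sets through Edit Prompts.
prompts = {"employment": "full-time", "min_years": 7}

# Each visualization on this data source only shows rows that
# satisfy the prompt values.
visible = [e for e in employees
           if e["employment"] == prompts["employment"]
           and e["years"] >= prompts["min_years"]]

print([e["name"] for e in visible])  # ['Ana']
```

Changing a prompt value re-runs this evaluation, which is why every dependent chart updates at once.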
At edit time, you can customize the view time toolbar of the current story for desktop, phone, and tablet
respectively, so that specific actions are displayed to the story viewers.
Learning Tutorial
Click through the interactive tutorial illustrating how to customize the view time toolbar in step-by-step
instructions (3:00 min); the tutorial is captioned exclusively in English:
You can configure the quick access menu and submenu in the view time toolbar for view, present or embed
mode on desktop.
Procedure
1. Under File in the edit time toolbar, select (Edit Story) View Time Settings Toolbar .
You can see a dialog with a sample toolbar and available actions.
If you choose to customize the toolbar for embed mode, select Enable toolbar in embed mode so that the
toolbar is visible to viewers.
To remove an entry, hover over it in the bar, and choose . You can also drag and drop another
icon onto an existing one in the bar to replace it.
4. To customize the options in the submenu:
1. Find the menu, File, Edit, Tools or Display, and then the action.
Note
2. To the right of the action, select Hide from submenu from the dropdown.
3. Repeat this to hide more actions from the submenu.
If you later want to show the action, select Show in submenu from the dropdown.
Under the sample toolbar, you can see how many actions are displayed with quick access and how many
you've hidden from the submenu.
5. To finish the toolbar customization, choose Apply.
You can configure the quick access menu in the view time toolbar for SAP Analytics Cloud mobile app or web
on phone or tablet.
Procedure
1. Under File in the edit time toolbar, select (Edit Story) View Time Settings Toolbar .
The View Time Toolbar Settings dialog opens.
2. Choose Phone or Tablet.
3. Select SAP Analytics Cloud for Mobile App or for Web.
For mobile app on phone, you can also configure the bottom toolbar. Make sure to select Enable bottom
toolbar on mobile app.
To configure the toolbar for web, make sure to select Enable toolbar in web browser.
4. To add or remove an action:
• For an icon representing an action, drag and drop it to the blank entry in the sample toolbar, or choose
Use these guidelines to help make the SAP Analytics Cloud stories and visualizations that you create
accessible for everyone.
You can make your stories more accessible with changes to contrast and colors, text and font, chart design,
and table formatting.
• These guidelines are intended for story designers who have a good understanding of SAP Analytics Cloud
features.
Topic Sections
Viewing
Encourage your story viewers to maximize their browser and select full screen in the toolbar to get a better
view.
If individual widgets are too small, viewers can also select full screen mode from each widget.
To display the menu, in the widget either right-click or select (More Actions), and then select
Fullscreen.
To ensure that your story is accessible to all audiences, here are some options to test the clarity of your
visualizations.
• Low light test: Turn the screen brightness down on your device and try to view the charts in your story.
• Squint test: Squint your eyes and read your story to see what your visualizations might look like out of
focus.
• Contrast Checker: Use a contrast-checking tool to ensure that you are meeting the minimum contrast ratio
requirements.
• Contrast Settings: Enable high-contrast display settings in your operating system to confirm if your
content and charts are still visible.
To update your display settings, refer to the Apple and Windows support guides.
• Ask a colleague: Ask a colleague to critique the readability of your story.
Getting a second opinion can help identify potential improvements.
• Device Preview (Only available for Responsive pages.): Navigate to the toolbar and select Format
Device Preview to see how your story might look on different devices.
Try a variety of device types to ensure that clarity is not compromised.
You can also customize the font scaling for specific platforms (iOS small phone, iOS tablet, and so on) to
better fit smaller screen sizes.
As a story designer, you need to pay careful attention to color choices to ensure that your visualizations can be
easily read.
You can use global standards such as the Web Content Accessibility Guidelines (WCAG) to help determine the
contrast ratio between the object and the background.
Images, shapes, and large text (18pt or 14pt bold and larger) Minimum contrast ratio of 4.5:1
Small text (smaller than 18pt or 14pt bold) Contrast ratio of 7:1
To create your own design, use story preferences to configure colors and apply them to the entire story.
Many of these changes can also be applied at the chart or table level, however, it is best practice to apply them
all at once using story preferences.
Page Settings
Element Option Color Setting
3. Update the tile settings to ensure high contrast between the background and font colors.
For a high-contrast ratio of 21:1, use a black (000000) background with white (FFFFFF) text.
Tile Settings
Object Element Color Setting
Tip
At this point, you may want to set border colors and thickness for specific tiles to increase contrast.
However, this can also be completed at an individual widget level in the styling panel.
4. Once the settings have been configured, apply them to all pages, lanes, and widgets.
For some individuals, certain color combinations make it difficult to understand visualizations. They need
visualizations that are created from a color palette that guarantees the minimum color contrast to the
background.
You can use story preferences in SAP Analytics Cloud to create your own personalized color palettes to meet
your accessibility requirements.
Note
For a high-contrast black theme, avoid using any dark shades that do not fulfill the 4.5:1 contrast ratio when
creating a story with a black background. Use nine distinct light colors with a minimum contrast ratio of 4.5:1
against black.
For a high-contrast white theme, all chart elements can be set to black by choosing a color palette and then
either setting all nine color selectors in the palette to black, or deleting all but one color selector.
Example
You can use the following examples to create a high-contrast black or white color palette. The tables include
a color swatch, the associated hex code, and the contrast ratio for each color.
The typical color palette in SAP Analytics Cloud contains 9 colors; these examples have 10 colors.
High-Contrast Black
Hex value FFC847 ED884A DB9292 E269C9 8CA7D5 6BD3FF 7FC6C6 B2E484 B995E0 B0BCC5
Ratio 13.6:1 8.2:1 8.5:1 7.1:1 8.6:1 12.4:1 10.8:1 14.3:1 8.4:1 10.8:1
High-Contrast White
Hex value 5F5800 5E4101 973333 961D7C 365892 004CCB 105B5B 26340B 6C32A9 4A5964
Ratio 7.3:1 9.4:1 7.4:1 7.6:1 7.1:1 7.3:1 7.9:1 13.3:1 7.9:1 7.2:1
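The ratios quoted in these examples follow the WCAG 2.x definition: each sRGB channel is linearized, a relative luminance is computed, and the ratio (L_lighter + 0.05) / (L_darker + 0.05) is taken. A minimal Python sketch of that calculation (illustrative only, not part of SAP Analytics Cloud):

```python
def srgb_to_linear(channel_8bit):
    """Linearize one 8-bit sRGB channel (WCAG 2.x relative-luminance formula)."""
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """Relative luminance of a color given as a hex string such as 'FFC847'."""
    r, g, b = (srgb_to_linear(int(hex_color[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(color_a),
                              relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("FFFFFF", "000000"), 1))  # 21.0, the maximum ratio
print(round(contrast_ratio("FFC847", "000000"), 1))  # 13.6, matching the first palette entry
```

You can run any candidate palette color through `contrast_ratio` against your story background to check it meets the 4.5:1 (or 7:1) threshold before adding it to a custom palette.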
SAP Analytics Cloud has four different color palettes, but three of them use color scales with subtle color
differences that are difficult for visually impaired users to perceive.
Whenever possible, use the Standard color palette. However, if you need to use one of the other palettes, make
sure that there is a good contrast between the background and chart colors.
For information about the color palettes, see Story Preferences and Details [page 1035].
While titles and explanations may be provided to support visual elements, those descriptive texts may also
need to be changed so that they are readable.
To improve readability, maximize font size as much as you can without compromising story spacing. Set the
header text font to be at least size 18, and the descriptive text font to be at least size 14.
Note
Because every story is different, you'll need to consider the consumption device (monitor, mobile device)
as well as design themes.
Avoid any decorative or script font styles as they may be difficult to read. Instead, use simple sans serif
typefaces for improved legibility (for example, use 72-Web/Default and Arial).
You can update font styles and sizes for your entire story in the story preferences or at an individual widget level
in the styling panel.
Titles and labels provide key context to help viewers understand visualizations. Including dynamic text in your
story will ensure that titles remain up-to-date regardless of filters or linked analysis.
Note
Whether using text labels or dynamic text, it's important to avoid using any unnecessary jargon or
acronyms that may confuse your audience.
To add dynamic text in SAP Analytics Cloud, select the chart title text or some other text element, and then
To help viewers who use screen readers, add an alternative text description.
Alternative text descriptions are short labels that describe the context of visual elements, allowing you to
describe key information to your audience. They can be put in a text box above your chart visualization or in the
chart footer.
Note
These text boxes and footers need to be updated when your data changes.
You could also use smart insights to provide supplementary contextual descriptions and insights into your
data; the smart insights adapt as your data changes.
Currently, there are several limitations with what screen readers can read. Microsoft's Narrator extension can
have some difficulty identifying text boxes. However, Google Chrome's screen reader works fairly well.
Chart Design
Charts can become complex and difficult to understand, especially for individuals who cannot distinguish
between fine details or colors. When designing charts, your formatting plays a crucial role in presenting
information in an easily digestible manner.
Patterns
Combining colors and patterns in your charts can help users with color vision deficiencies differentiate
between various measures. This conveys information through two distinguishable sensory characteristics.
Note
In SAP Analytics Cloud, you can apply patterns to measures, but not to dimensions.
For example, if you set sales revenue to an orange striped pattern, that setting will be applied to all charts with
the sales revenue measure.
For example, when you apply a sort, you can quickly determine the worst- or best-performing sales managers.
Number Formatting
In some cases, less is more. With number formatting, changing your scaling or decimal places may help limit
complexity. You can format individual calculated measures from the Builder panel, or you can use the Styling
panel to format numbers for the entire widget.
For specific chart types, you can also format the X and Y axis scale in the Styling panel.
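The effect of scaling and decimal-place settings on a raw value can be sketched generically in Python. This is an illustration of the idea, not SAP Analytics Cloud's actual formatter, and the scale names are hypothetical:

```python
def format_number(value, scale="unformatted", decimals=1):
    """Apply a scale factor and fixed decimal places to a raw value,
    similar in spirit to number formatting in a widget's Styling panel."""
    factors = {
        "unformatted": (1, ""),
        "thousand": (1_000, "k"),
        "million": (1_000_000, "M"),
        "billion": (1_000_000_000, "B"),
    }
    factor, suffix = factors[scale]
    return f"{value / factor:.{decimals}f}{suffix}"

print(format_number(1_234_567, "million", 1))  # 1.2M
print(format_number(98_765, "thousand", 0))    # 99k
```

Scaling large values down and trimming decimals like this is what keeps axis labels and data labels short enough to stay readable.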
Show/Hide Settings
The Show/Hide settings give you the choice to present only the most relevant information.
Consider your audience and if a widget is too busy, consider hiding some information.
Data Labels
Data labels can be formatted in the Styling panel to maintain contrast levels and reduce overlap.
For charts with data labels inside of bars, you may need to enable data label background colors and select the
avoid overlap and round values check boxes.
Spacing
The size of your chart plays a crucial role in showing all relevant details. If a chart is too small, certain details
may be distorted or automatically hidden.
Try to maximize chart size while balancing the spacing between other visualizations to avoid overlap.
When possible, limit the number of dimensions and measures in a single chart to avoid overcrowding.
Chart Components
Adding reference lines, thresholds, or variances to your visualizations can provide valuable information to your
audience. It is essential to format these components to maintain a high-contrast ratio to improve visibility.
Reference Lines
When adding a reference line, alter the label and line color to maximize contrast for your theme.
Thresholds
Threshold ranges can be configured with a variety of shapes and custom colors that allow for unique
combinations to help distinguish them from one another.
Choose colors that fulfill the minimum contrast to your background. If the threshold is shown as a symbol, it
would be 4.5:1; if it is shown as the text color, it can be up to 7:1.
Variances
Variances that are added to charts will be shown as a separate bar, an integrated bar, or a data label.
To add or edit a variance, navigate to the context menu, select Compare To and open the variance panel to edit
the display options.
The separate bar display is easier to see than the other two options. However, even that display can cause
problems because the bar's default red and green colors can't be changed.
The default colors may not work for high-contrast white or other templates with light backgrounds, but they are
acceptable for a high-contrast black template.
Input Controls
Interactive elements like input controls need to be consistent and easily identifiable.
Specific input control elements like button content and border colors can be updated in the styling panel.
To remain consistent, the input control should be positioned in the same location throughout all your pages.
Table Design
In addition to configuring table styling and font colors in story preferences, you can take other steps to improve
the clarity of your tables.
• When inserting a table, ensure that it is large enough to limit any horizontal scrolling so that all columns are
visible.
• In new stories, Optimized Presentation is the default table type for all tables. Continue to use this option as
it will help with pixel-level resizing of columns and rows.
• If the ability to resize rows is needed, you can set the row heights automatically by using the styling panel,
or manually by dragging and dropping.
Related Information
When you are creating or editing your SAP Analytics Cloud story pages, you can use some key combinations
(keyboard shortcuts) to move around and to move objects around.
On Windows, use the Ctrl key and on Mac use the Command key.
As in other programs, you can also select ranges (using the Shift key with the cursor) or select specific
values (using the Ctrl or Command key).
Note
For some countries, the shortcuts might not work if the keyboard layout (key placement) is different. For
example, Show Grid ( Ctrl + Alt + G ) may not work with an “English (India)” keyboard layout.
Save Ctrl + S
Command + S
Command + Shift + S
Command + A
Command + G
Command + Up
Command + Down
Command + Alt + Up
Command + Option + G
Underline Ctrl + U
Command + U
Bold Ctrl + B
Command + B
Italics Ctrl + I
Command + I
Related Information
You can add multiple pages to your story to help you explore and present your data.
A story page can be a blank canvas, a responsive page, or a grid. Use a blank canvas or responsive pages to lay
out tables and charts, or use a grid to work with numbers and formulas on a sheet. Responsive pages let you
create lanes to section the page content into groups. Tiles within a lane stay together when the responsive page
is resized.
On the page tab bar, you can select a page's drop-down menu to delete, duplicate, rename, move, hide, or add
comments to that page. You can also copy and paste story pages from one story to another.
Restriction
When you copy or duplicate a story page, the input controls (measure, dimension, and cross-calculation)
won't be copied to the new page.
To move a page, you can drag and drop the tab to the desired location, or from the drop-down menu, select
Move left or Move right to move the page one level in either direction.
Hiding a page makes it visible only to users that have edit rights; it is not visible in present mode. To hide a page,
from the page's drop-down menu, select Set as hidden. You can tell if a page has been set as hidden because its
name is struck-through. To make a hidden page visible, from the page's drop-down menu, select Set as visible.
Note
The preview screen shows an approximation of how the widgets will appear, but it won't be an exact match
to how the widgets actually appear on a specific mobile device.
In some cases, widgets resize and flow to fit smaller resolution screens when space becomes limited.
However, if the screen that you're previewing is larger than the screen you're working on (for example,
trying to preview a 4K UHD device on a laptop monitor), then the preview may show the widgets very small,
possibly too small to be readable. On the appropriate device, the widgets will be the correct size.
Responsive Lanes
You can use responsive lanes to section the content of a responsive page into groups. Tiles within a responsive
lane stay together when the responsive page is resized.
By default, a responsive page starts with two lanes. To add more lanes, highlight an existing lane in your
responsive page, select (More Actions) Add lane , and then select Add lane to left, Add lane to
right, Add lane above, or Add lane below. Reorder lanes by dragging and dropping them in another section of
the page.
You can enter lane titles to help you organize the lanes in your page. (Lane titles can include dynamic text.)
Lane titles resize when the responsive page is shared on smaller or larger screens.
You can copy lanes from a responsive page and paste them with all their contents to other pages.
You can also change lane styling properties such as the background color by selecting (Edit
Styling) . Select the lane title to see styling options for the title such as adding a border, setting the title's
background color, setting text properties, and inserting shapes or images to help you enhance the visual style
of your presentation.
In Optimized Story Experience, you can add lanes, define responsive rules for each of them, and preview in view
time so that your stories can automatically adapt to and be viewed on different devices. For more information,
see Design Responsive Layout (Optimized Story Experience) [page 1624].
Find out supported and unsupported features on classic responsive pages after conversion to Optimized Story
Experience.
Supported Features
• Outline
• Linked Widgets Diagram
• Theme
• Custom Widget
• Bookmark
• Viewport Loading and Background Loading
Unsupported Features
• All scripting-related widgets and technical objects
• Show or Hide actions on Outline
• Settings in widget's Styling panel:
• Size and position
• View time visibility
• Enabling or disabling allowing mouse actions to resize or reposition widgets
• Data Refresh options
• CSS creation
• Device Preview Bar
Canvas Pages
Use canvas pages to bring your story to life. Add charts, tables, a geo map, or some other objects that make
your data visually appealing.
Don't like where the objects were placed on the canvas page? Rotate, resize, or move the objects around to help
you tell your story better. You can even copy the objects and paste them elsewhere on the same page or paste
them to a different canvas page.
Want to change the appearance of some objects? You can apply styling changes to individual objects or group
them together and then apply the styling changes.
You can also use canvas pages to create a reporting layout. For more information, see Create a Reporting
Layout with Header, Footer and Margins [page 1038].
The grid page is a space where you can create and work with formulas in a table that has been generated from
existing data.
You can freeze parts of the table to stop them from scrolling, and you can hide table elements such as the grid
or the freeze lines.
Restriction
When you freeze up to a column or row, the static column or row number is used, even if you add or remove
dimensions.
For example, if you freeze up to column 3 and then add a dimension to the rows so that column 3 becomes
column 4, the freeze stays at column 3.
Before changing dimensions in the rows or columns, you need to turn off the freeze feature.
1. In the Display menu in the menu bar, select (Show/Hide Grid Elements).
Grid Pages: Create Custom Calculations Based on Data from Multiple Models
(Classic Story Experience)
Grid pages are useful for when you want to create your own custom formulas based on data from different
models. There's no need to link the models together: just pick an empty cell and create your formula.
When you swap the axes of a table or drill down, the formula automatically updates with the new cell reference.
Note
If you change the model after you create the formula, you'll see REF_ERR instead of the formula. Select the
cell containing the formula to see which referenced cells are no longer valid.
• The referenced cell is part of a hierarchy; collapsing the hierarchy hides the cell.
• Add more dimensions or measures to the table.
When you change the table back to its original layout, the formula shows the correct result.
Example
You're discussing purchases with your colleagues. Someone wonders if there's a way to show the combined
sales per month for dissimilar products.
You create a grid page in your story and add two models to it: drinks and athletic gear. You limit both models
to the same time period (first half of 2015, all months) and to one specific part of the country.
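The combined-sales scenario above can be sketched with Excel-style grid formulas. The cell addresses below are purely illustrative (they depend on where each model's table lands on your grid page), and the annotations in parentheses are explanatory, not part of the formula:

```
=B5+E5         (combined January sales: B5 from the drinks table, E5 from the athletic gear table)
=SUM(B5:B10)   (total drinks sales across all six months of the first half of 2015)
```

Because both formulas reference cells rather than models, no model linking is needed; the cell references update automatically if you swap the table axes or drill down.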
As a story developer, you can use popups and dialogs to design interactive stories.
Context
A popup helps viewers quickly enter information, perform configurations or make selections. You can also use
it to display more specific data for a selected item on a story page. A popup acts as a container, so you can put
any other widgets into it, such as table, button or checkbox.
Procedure
1. Open the Outline panel. From View in the toolbar, make sure that Left Side Panel is selected.
2. In the Outline panel, hover over the page from which you want viewers to open the popup.
You can see that an empty popup opens with the default name Popup_1.
4. If you want to use a dialog for the popup, in the Builder panel of the popup, select Enable header & footer.
Otherwise, skip to step 7.
5. Optional: You can change the title and configure buttons for the dialog.
6. Choose Apply.
7. Add widgets to the popup, such as dropdown boxes, tables or charts.
For dialogs, you can only add widgets to areas other than the header and footer.
Note
If there are no widgets in the popup, viewers won't see it when they trigger it, even if you save the story
with the empty popup.
8. Edit the styling of the popup in the Styling panel. For example, you can change its height and width.
Results
Furthermore, in the Outline panel, you can select to the right of the popup or a widget in the popup to add
scripts to it. For example, a script can trigger events when viewers click a button in the popup.
To return to a page, select it from either the page tabs or Outline. You can also reopen the popup by selecting it
from Outline.
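As a sketch of the scripting mentioned above: story scripts use a JavaScript-like syntax, and a popup is typically opened and closed from widget event handlers. The widget names below (Button_1, Popup_1) are illustrative placeholders, not names from this documentation:

```javascript
// onClick event of Button_1 (hypothetical button on the story page):
// open the popup so the viewer can make their selections.
Popup_1.open();

// onClick event of a hypothetical button inside the popup:
// close the popup again once the viewer is done.
Popup_1.close();
```

Treat this as a sketch of the event-driven pattern; the exact methods available on your popup depend on your SAP Analytics Cloud version, so check the script editor's code completion for the verified API.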
To delete the popup, find it in the Outline panel, and choose (More) Delete .
Set your SAP Analytics Cloud story preferences to specify default formatting options in a story (such as the
page size and background color). You can also set the story details to specify how to open the story.
Default Page Background: Select a default background color for the pages in your story.
Default Page Size: Select On to set the default page size in your story. You can select a predetermined size from
the list (Letter, Legal, Tabloid, A3, A4, B4, B5) or set a custom size by entering the page width and height in
pixels.
Note
Before you can change the page size, you may need to rearrange tiles in your canvas to fit the new size.
Default Page Properties: Select a default background color and the grid spacing for the pages in your story.
Default Lane Title Properties: Select On to show the page title. You can also select the title style, font, size,
color, and alignment.
4. Select if you want to apply the new page settings to new pages, lanes, and tiles only or to all pages, lanes,
and tiles currently in your story.
5. Select OK.
Charts/Geo:
• Default Tile Background – select a default background color for the chart tiles in your story.
• Default Text – select a default font and text color for the chart tiles in your story.
• Default Color Palettes – select the default color palettes for charts and geomaps in your story.
• Standard: determines the colors available in all styling options. The standard palette affects dual
measure comparisons and all dimensions in charting.
• Continuous: can have any number of colors. Continuous palettes are commonly used in geospatial
analysis charting.
• Diverging: has a range based on three different colors. You can set the colors for the left, center,
and right values, typically using a neutral color for the center value. A diverging palette affects heat
maps, tree maps, and geospatial analysis in charting.
• Sequential: has a range based on two different colors. You can only select the end values to change
the color. A sequential palette affects heat maps, tree maps, and geospatial analysis in charting.
• Other color options: Geo Single Color, Geo Point of Interest, Geo Cluster, Waterfall Standard, Null
Color.
• Default Axis Line Color – select the default axis line color for charts in your story.
Tables:
• Default Tile Background – select a default background color for the table tiles in your story.
• Default Text – select a default font and text color for the following text in table tiles: All Text, Title,
Subtitle.
• Default Styling Template – select a styling template, for example Standard, Report-Styling, or
Alternating Rows.
• Default Threshold Style – select a default format for threshold values.
• Symbol (default) – use the story-defined symbols and colors for the threshold ranges.
• Color Values – the threshold colors are applied to the cell data.
• Color Background – the threshold colors are applied to the cell background.
• Color Background Without Values – the threshold colors are applied to the cell background and the
values are hidden.
Text:
• Default Tile Background – select a default background color for the text tiles in your story.
• Default Text – select a default font and text color for the text tiles in your story.
Shapes:
• Default Tile Background – select a default background color for the shape tiles in your story.
Input Controls:
• Default Tile Background – select a default background color for the Input Control tiles in your story.
• Default Text – select a default font and text color for the Input Control tiles in your story.
Others:
• Default Tile Background – select a default background color for the other tiles in your story.
• Default Text – select a default font and text color for the other tiles in your story.
4. Select if you want to apply the new tile settings to new pages and tiles only or to all pages and tiles
currently in your story.
5. Select OK.
Story Details
There are some options that can be set for the entire story.
Translation: You can create a request to have a translation service translate the story descriptions. For more
information on story translation, see Enabling a Story for Translation [page 1041].
The optimized view mode includes several usability improvements and it can also reduce story loading times in
certain scenarios. For more information on this view mode, see Optimized Story Experience Restrictions [page
1160].
To get the full benefit of optimized view mode, use SAP Analytics Cloud in a Google Chrome browser.
Viewer Settings: Stories usually open in Story View Mode but you can change it to open in Explorer instead.
Page Navigation: You can navigate story pages using a tab bar that shows each page as a tab, or using a
dropdown list. The system administrator sets a default navigation option, but you can change it for your story.
Override Default Settings: Allows you to change the default navigation option for your story pages.
Explorer Settings: Enable Explorer for all Charts and Tables overrides the state of any Enable Explorer settings
that are already applied to charts and tables. However, it does not reset those settings. When you turn off the
story-level setting, the charts and tables will go back to using the Enable Explorer setting.
Mobile Support for Canvas Pages (Optimized Design Experience): If your story contains mobile-friendly canvas
pages, you can enable them to be viewed on mobile devices.
Note
The option Enable mobile support for canvas pages is only available to story developers.
Bookmarks (Optimized Story Experience): You can choose whether to let viewers save last changed dynamic
variable values to override model-level ones, or let them modify the variable behavior when they create
bookmarks.
You can split the canvas into three distinct areas to create a reporting layout, and have different formatting for
each of them if needed.
In the Styling panel of the canvas, in the Page Layout section, use the two dedicated toggles to show or hide the
header and the footer. The story then adds areas dedicated to the header and the footer, so that you can apply
different formatting to each of them.
Note
In Optimized Story Experience, you need to switch on Enable Reporting Layout first. Note that some
features won't be supported in the reporting layout, and once you've enabled it, you won't be able to revert
to the previous layout, even if you later hide the header and footer.
The borders are highlighted in orange so you can quickly visualize them. To edit the size of the footer or the
header, click the corresponding border and drag the mouse up or down to increase or decrease the size of the
area you’ve selected.
The header and the footer are available for both fixed and dynamic canvas pages, and can be combined with a
paginated canvas. If the canvas page is set to fixed, you can edit the body margins in Edit mode using the
dedicated dropdown. The margins can be hidden, or set to normal, narrow, or wide. There are no margins in a
dynamic canvas, but you can still have a header and a footer.
Only Text, Image, Shape and Clock widgets are allowed in the header or footer. When inserted or moved to the
header or footer, these widgets are automatically resized if they are too big to be displayed.
Before converting your optimized stories to reporting layout, check what features will and won't be available.
You can open, create, edit, and share stories from the Stories side navigation. When you open a story, it opens
in view mode by default. You can switch to edit mode by selecting the Edit button.
You can create a new Story from the story side navigation. For more information, see Create a New Story
(Classic Design Experience) [page 974].
• From the side navigation, select Files. Then select the check box for the story and select (Edit
Details).
• With a story open in Edit mode, choose (Edit Story) Story Details .
2. Find and select the story you want to copy and then select (Copy to).
3. (Optional) Select the location for where you want to copy the story to (for example, a different private,
public, or workspace folder).
Tip
If you want all users to have Read permission to your story, you can copy it to the Public folder. Also, if
you want to make your story available to all users as a template, you can copy it to the Samples folder.
Note
Your story name can include valid alphanumeric characters and underscores. Spaces can be included
but not at the start or end of a name.
Share Stories
You can share a story, and change the sharing settings. For more information, see Share Stories or Global
Bookmarks [page 205].
Note
If the story you are sharing uses content from different workspaces or folders, you must also give (or
request) access permission to those users so that they can access the content associated with the
workspace or folders where the content is saved.
You can enable a story for translation, and import, export, and delete translated content in the Translation
Dashboard. For more information on SAP Analytics Cloud translation services, see Enabling a Story for
Translation [page 1041].
Related Information
Context
See Content Administration [page 2908] for help on translating story metadata, the Translator role, the
Translation Dashboard, and translation tasks that you can perform there.
Procedure
1. To access a story that you want to enable translation for, do one of the following:
• From the Home Screen, under Recent Stories, select the story.
• From the ( ) Main Menu, select Files and then select the story.
2. Next to Controls, click Edit.
Note
Only an administrator can turn the translation toggle on or off. (To find this toggle, go to the Main Menu,
select System Administration System Configuration , and turn the Allow translation of user content
toggle on or off.)
Any changes made to the story will be sent for translation every time the story is saved.
Note
The Data access language you select when sending the story for translation becomes the source language
of the story.
Caution
If you disable Mark for translation for a story that has been enabled for translation, all of its translations
will be permanently deleted.
You can examine the data behind a chart or view a chart based on selected table data.
While designing your story, you can open the Examine panel to explore certain types of tiles:
• Select a chart. The data that makes up the chart is shown in the Examine panel. If you filter the chart by
selecting data points or dragging a rectangle around data points, only the selected data is shown in the
Examine panel.
• Select a table, or select cells within the table. A visualization based on the selected data in the table is
shown in the Examine panel. You can also change the visualization type, or select the icon to apply
operations such as sorting and ranking (for example, show top 5 or bottom 5 values).
Enable (Synchronize Visualization Automatically) to update the chart as you select cells (you can disable
this option by selecting the icon again).
If you want to save charts and tables from the Examine panel, first copy them to story pages.
In SAP Analytics Cloud, a data source for a chart or table may prompt you to add variables before data can be
displayed.
If the data source you select to create a chart or table requires variables to be set, a prompt will appear when
you create the first chart or table that uses the data source. After the variables are set, the information you
provide will be used by all tables and charts that use the same data source.
To display the Set variables dialog when the story opens (prompting the viewers to set the story variables
before they can do anything else), enable the following option: Automatically open prompt when story opens.
To change story variables, in the menu toolbar under Tools, select (Edit Prompts) and then choose the
data source with the prompts that you want to edit.
To add a variable token to the story filter bar to make it more visible, select Story Filter Variables and
select the variable you want to show.
You can override story variables for individual tables and charts with the following process.
• Charts: right-click or select (More Actions) and then select More Options Variables .
Select Set Chart Variables.
2. Choose a new variable value and then select Set.
Note
As a story designer, you can choose to have the story open with a pre-defined set of mandatory variables or to
let the story viewer set their own variables when the story opens.
1. Open a story.
2. In the menu toolbar under Tools, select (Edit Prompts) Variable Settings , and then select one
of the following options:
• Save Current Variable Selections: saves the variables that have been set.
• Clear Variables and Use Server-Defined Values: variables that have default values will show their default
values; variables that don't have default values will be empty.
3. Save your story.
When using an SAP BW data source, you may want to choose whether a variable value from a Customer Exit
variable comes dynamically from the SAP BW back end, or from a saved value that is manually input by the
user.
For example, if a Customer Exit variable is Current Month, and you don't manually input a value, the value of
the variable is different for each month: if the value this month is 2018 January, then next month it would be
2018 February, and so on.
Related Information
In SAP Analytics Cloud, a data source for a chart or table may prompt you to add variables before data can be
displayed.
If the data source you select to create a chart or table requires variables to be set, a prompt will appear when
you create the first chart or table that uses the data source. After the variables are set, the information you
provide will be used by all tables and charts that use the same data source.
To display the Set variables dialog when the story opens (prompting the viewers to set the story variables
before they can do anything else), enable the following option: Automatically open prompt when story opens.
To change story variables, in the menu toolbar under Data, select (Edit Prompts) and choose the data
source with the prompts that you want to edit.
To add a variable token to the story filter bar to make it more visible, select Story Filter Variables and
select the variable you want to show.
You can override story variables for individual tables and charts with the following process.
Note
When using an SAP BW data source, you may want to choose whether a variable value from a Customer Exit
variable comes dynamically from the SAP BW back end, or from a saved value that is manually input by the
user.
For example, if a Customer Exit variable is Current Month, and you don't manually input a value, the value of
the variable is different for each month: if the value this month is 2018 January, then next month it would be
2018 February, and so on.
In some cases, you may want the value of the Customer Exit variable to remain unchanged. To accomplish that,
select the Disable exit variables option, and select a member for the exit variable. The member you select is
saved with the story, and every time the BEx query is executed, the saved value is chosen.
Related Information
If your story has variables from more than one model, you can link variables so that changing one variable
updates the data from multiple models.
Context
For example, you may have a story that displays data from an Employee_Salary model and an
Employee_Performance model, each with separate variables for Time and Country. When these variables are
linked, both models show data for the same prompt values for Time and Country.
When setting story variables, linked variables are identified by the icon. Hover over this icon to see which
variables are linked. When you change the prompt values for this variable, the new values are applied to all the
linked variables.
Procedure
2. In the Data area in the toolbar, select (Edit Prompts) and select Link Variables.
The Link Variables window appears. If any links already exist for the story, they are displayed. Select Start a
new variable link to create your own.
For models based on live data connections, the following restrictions apply:
• Models based on live data connections to SAP HANA can link to other SAP HANA live data models, or
to models based on imported data.
• Models based on live data connections to SAP BW can only link to other SAP BW live data models.
• Models based on live data connections to an SAP Universe do not support variable linking.
Note
Values for intersecting links are matched based on their ID, not their description. To see the IDs, select
Data Samples for one of the models and choose ID or ID and Description. Then hover over
the ellipses next to the variable values to see a list.
If you’re creating an intersecting link between variables that have hierarchical values, the link only
includes values at the top level of each hierarchy.
The variables appear in the Matched Variables list. A successful link shows a check mark and an
unsuccessful link shows an X icon. Hover over this icon to see the reason for the unsuccessful link.
To change a link, you can select the icon next to one or both variables to remove them.
• All data (default): All prompt values that exist in either model are available.
Note
The All data link type is not supported for SAP BW models.
• Intersecting data only: Only prompt values that exist in both models are available.
7. Select Set.
8. In the Set Variables window, choose values for the linked variables, and select Set.
The Link Variables window appears, showing an overview of the existing links. You can edit or delete
existing links, or create new ones.
9. To return to the story, select Done.
Related Information
In SAP Analytics Cloud, you can create and use templates to help format your story.
Rather than building a story from scratch, you can use a template to speed up your story creation process.
Templates contain predefined page layouts and placeholders for objects such as charts, tables, maps, and
input controls. You can use templates in responsive or canvas pages.
You can work with templates that are already available to use in your story, or you can create your own custom
templates.
To apply a template, you can search for existing templates and add them to your story. You can select a
template that best suits the type of story you are planning to build.
In the SAP Analytics section of the Layouts menu, you can select a template from the following categories
(category labels are below the layout thumbnails):
• Dashboard/Boardroom: These layouts can help you create a story that is ideal for monitoring your data or
create a boardroom monitor. The layouts combine widgets such as maps, content, and key figures.
• Report: These layouts can help you create a story ideal for reporting. The layouts combine widgets such as
charts, tables, and annotations.
• Present/Presentation: These layouts can help you create a story that is ideal for presentations. The layouts
provide the formatting and elements necessary to create a slide deck.
Note
Layouts that you have added to your story appear in the Manually Added section of the Layouts menu. You
can change the layout by selecting it from the menu or add more layouts by selecting + Add Layout from the
Additional Layouts section.
3. In the chart, map, or table placeholder widgets that you want to use, select (Add in place).
You can also add a specific widget by selecting (More Actions) (Add) and then selecting
the widget you want to add.
You will be able to see the chart type (for example, bar or waterfall) that was used when saving the story
as a template or in the predefined template. When turning the chart placeholder into a chart, the original
chart type will be automatically selected.
4. If you want to remove the widget from the layout, select (More Actions) (Remove) in the
top-right corner of the widget.
You can create and use your own custom templates in your stories. To create your own template, you will just
need to save a story as a template.
Additionally, page and widget (tile) story preferences such as background color are saved with your template.
You can view and customize your preferences in the Story Preferences dialog.
Saving the story as a template removes all data and converts charts, tables, maps, input controls, and value
driver trees into empty placeholders. All grid pages and custom formatting are removed. You will be able to
see the chart type (for example, bar or waterfall) that was used when saving the story as a template. When
turning the chart placeholder into a chart, the original chart type will be automatically selected.
In Optimized Story Experience, the following features can also be saved with your template:
• Theme
• Scripting
Scripts on widgets are saved with templates, and widget IDs are kept.
• Widgets exclusive to unified stories, such as dropdown and list box
• Custom widgets
The template removes all data and converts custom widgets into placeholders.
Note
If the widget or custom widget is bound to a data source, saving the story as a template removes all data
and converts it into a placeholder.
Related Information
You can schedule all charts, tables, and geomaps to automatically refresh in a story.
Context
You set the auto refresh option when editing a story, but the refresh only occurs when you are viewing the story.
2. From the Data section of the story toolbar, select (Refresh) and then select Configure Auto
Refresh.
3. On the Configure Auto Refresh screen, select the Enable Auto Refresh option.
4. Enter a numerical value and select the type of interval (for example, minutes).
For example, you can select the data to refresh every 30 minutes.
Note
The auto refresh interval should be shorter than the session timeout value.
Results
When you have the browser tab open that contains your story, your data is automatically refreshed at the
specified time interval.
Note
If auto refresh is turned on but fails to retrieve a data response for 5 consecutive intervals, it is
forced to stop.
Switch to another browser tab. Switch back to the browser tab that contains your story.
Switch to edit mode. Switch from edit mode to view or embed mode.
You can use SAP BW's BEx query dynamic filters for charts and tables in your story.
When you add a BW dynamic filter to your story, it affects both charts and tables.
You can add or update a dynamic filter in two ways: from a chart or table variable dialog or from the story
variable dialog.
Note
Variables may accept a single value, multiple single values, or ranges. However, range or variable values set
with mixed operators are currently not supported. In this case, the dynamic filter is not displayed.
1. With your story open, in the Data area in the toolbar, select (Edit Prompts), and choose the data
source that has the prompts you want to edit.
2. Select (Edit Table Prompts) under the table or chart name, and then select Set Chart Variables or Set
Table Variables.
3. In the Set Variables dialog, scroll to the bottom for the dynamic filter field.
4. Select the member area, and then select a value from the members.
Variables chosen from the chart or table variable dialog will overwrite the chart or table filter whether or not
you modify the chart or table. Also, the chart or table filter values will be projected back to the variables when
the chart or table variable dialog is opened, if you choose to use chart or table level variables.
You can change the filter for either a chart or a table, but once you do that, the object that you updated will
no longer be affected by changes to the “global” dynamic filter.
1. With your story open, in the Data area in the toolbar, select (Edit Prompts), and choose the data
source with the prompts you want to edit.
2. In the Set Variables dialog, scroll to the bottom for the dynamic filter field.
3. Select the member area, and then select a value from the members.
If you don't modify the filter, and you also don't switch to chart or table level variables, the variables chosen
from the story level will update the previously generated dynamic filter.
In the Set Variables dialog, the Apply Variant field provides you with a choice:
There are several options you can use in the member selector of the variable dialog.
You can enter values either by writing them directly into the field or by selecting them from the member
selector.
If you use the member selector, you will see booked values only. To select your unbooked members, turn the
following feature on: Show unbooked members.
The members will stay selected when you turn the feature off, but they will not be displayed in your results.
To use the unbooked members feature with SAP BW, you will need to apply the following additional SAP Notes:
• 2936657
• 2945730
Note
When you open the story, the Show unbooked members toggle may not be set as you left it: the system
may have reset it.
Use query-defined member access: when you are connected to an SAP BW live model, the read mode for the
data is determined by the model. The toggle lets you choose whether to use the read mode settings or the
unbooked members setting.
Attributes help you find the members you want to select as variable values.
In the variable dialog, click (Value Help) to open the member selector. There you can select several
attributes, for example region or country. Selecting these attributes adds columns with the corresponding
information about the members into a table.
Select the members from the table that you want to use as variables in your story.
Note
To use this feature with SAP BW, you need to have version SAP BW 7.50 and apply the following SAP Notes:
• 2906044
• 2839707
• 2847470
Related Information
You can import data as a dataset or model from an external data source into a new story.
When you import data into a story, a “private” or “embedded” entity is created in the background. This
entity contains your data structures: dimensions, measures, and dimension attributes such as descriptions or
hierarchy information. The imported data is embedded in the story as a self-contained entity. It will not appear
in the folder list for use with other stories.
The source data can be your own data from local or server-based files, or from supported data sources.
Note
When you import data from Microsoft Excel on a network, the first sheet is automatically imported. When
importing from a local file system, you can choose which sheet to import.
When importing data into a story, the import process analyzes the source data and creates an initial inferred
data view with proposed dimensions and measures. You then further refine the proposal by geo enriching
dimensions, specifying certain columns as either dimensions or measures, and fixing any data-quality
problems.
The overall workflow for adding data to a story involves the following:
Note
If you are importing a model, please refer to Models in Stories [page 1058].
• An agile experience that is geared to consume datasets for ad hoc and flexible analysis. This is the default
experience for adding data to a story. For a better understanding of datasets, see About Datasets and
Dataset Types [page 544]. You can import and immediately use data to create visualizations within your
stories. Using this experience, you can smoothly move between the data and the story layout views. You
can quickly edit the imported data to suit the changing requirements of your story creation process. Other
benefits of this agile data preparation experience:
• Data structures and data types are inferred during the initial data import, letting you immediately start
working on story layout.
Note
The basic data preparation experience supports the following functionalities that are not available in
the default data preparation experience:
You can import and immediately use data to create visualizations within your stories.
Note
SAP Datasphere analytic models are only available using the Optimized Design Experience.
From the ( ) Main Menu, select Stories, and then select Canvas.
In the dialog, select Classic Design Experience, and then select Create.
Tip
In the classic design experience, this option will add the model and open the Data Exploration view.
The Acquiring Data process is launched. Upon completion, the Data view for the story is displayed.
Note
If you import a large-volume dataset, you are informed that a data sample will be used rather than the
entire dataset. Select Got it to continue. If you want the entire dataset to be validated when performing
any data preparation tasks, select Validate Full Dataset at the bottom of the Dataset Overview panel.
This will give you a better sense of how many issues, such as mismatched values for specific data
types, exist in your dataset.
Once data has been imported, you can perform any of the following tasks:
From the ( ) Main Menu, select Stories, and then select Canvas.
In the dialog, select Optimized Design Experience, and then select Create.
2. From the toolbar, select More Add New Data Add New Data .
The Let's add some data! dialog is displayed.
3. Import your data by selecting one of the following options:
• Data uploaded from a file: use this option for an Excel or CSV (comma-separated-values) file stored
locally or on a file server.
Choose the file you want to import.
• Data from a data source: use this option for data stored in a supported data source (for example, SAP
HANA).
Select the dataset you want to import from your source.
• Data from an existing dataset or model: use this option for data stored in an existing dataset or model.
For data stored in SAP Datasphere:
1. Select an SAP Datasphere live data connection in the model selection dialog to browse SAP
Datasphere spaces.
2. Select an analytic model, a view (analytical dataset), or a perspective.
For more information on how to leverage SAP Datasphere data in stories, check Live Data Connections
to SAP Datasphere [page 412].
Note
To be able to consume SAP Datasphere data in stories, make sure that you have the DW Consumer
role assigned in SAP Datasphere. This role is required for SAP Datasphere data consumption in
stories in SAP Analytics Cloud.
You can now replace any public or private dataset in your story with another public dataset. When you replace
the dataset in your story, the configuration of all the widgets in your story is preserved. This means there's no
need to rebuild your charts and tables.
Your new dataset must contain all the measures and dimensions, and the same measure and dimension names
and types as your original dataset. However, your new dataset can also contain additional measures and
dimensions not found in the original dataset.
4. In the (Search) field, type the name of the new dataset with which you'd like to replace your current
dataset, and select it from the list.
5. A Replace Dataset dialog appears informing you about the potential differences in your story after
replacing the dataset. Select OK.
6. Your dataset is now replaced.
Note
• If you want to base your Smart Discovery results on your new dataset, you need to run Smart Discovery
again, from the Story view.
Related Information
When you import a data file into a new story, a “private” or “embedded” model is created in the background,
containing your data and dimension information.
Unlike models created on the Models page, these private models are saved with the stories, and they don't
appear in the public models list on the Models page.
When you initially acquire raw data, it may need to be cleansed, and it may include many rows and columns
which might not be relevant to your analysis or planning. Data preparation involves pruning, transforming, and
formatting your raw data before it's used in charts and tables.
• Fixing anomalies in the data (such as dates with inconsistent formatting), filling in empty cells, and so on.
You can refresh a private model inside a story by uploading another file. The contents of the uploaded file either
replace, or are appended to, the existing content in the model. Follow these steps to refresh a model in a story:
You can publish a model in a story to create a new public model based on the information in the private model.
Reasons for publishing a private model include:
• If you want a public model that uses the same file and the same column settings as an existing private
model, you don't need to create that public model from scratch.
• You may no longer have the original uploaded file, and therefore wouldn't be able to create a public model
using that file.
A new public model, independent of the original private model, is created, and appears in the public models list
on the Models page.
The original private model remains unchanged, and the story that contains it does not reference the new public
model.
Note
There may be differences between the public model and the original private model in the story:
• The dimension member IDs of a dimension in the public model can be different from the IDs in
the corresponding dimension in the model inside the story. If you create a formula in the story that
references dimension member IDs, the equivalent formula in the public model may refer to different members.
You can now replace any acquired model in your story with another compatible acquired model (replacing the
classic account models with the New Model type is not supported). When you replace the model in your story,
the configuration of all the widgets in your story is preserved. This means there's no need to rebuild your charts
and tables.
Note
When a model is replaced in a story, there will be issues using the Member Selector for input controls.
Note
Model replacement uses identifiers for mapping both columns and dimensions. Make sure the source and
the target identifiers are the same. In some instances, the IDs might be truncated and cause the model
replacement to fail. To avoid truncation of IDs, do not use the following for dimension or measure names:
Your target model doesn't have to be the same model type (planning or analytic) as the original model in your
story. You can replace an analytic account model in your story with a planning model, and vice versa. Your new
model must contain all the measures and dimensions, and the same measure and dimension names and types
as your original model. However, your new model can also contain additional measures and dimensions not
found in the original model.
4. In the (Search) field, type the name of the new model with which you'd like to replace your current
model, and select it from the list.
Note
Models identified as not compatible are not displayed in the Select Model dialog.
5. A Replace Model dialog appears informing you about the potential differences in your story after replacing
the model. Select OK.
Note
• It is currently not possible to replace the classic account type model with the newer New Model Type
and vice versa.
• If you want to base your Smart Discovery results on your new model, you need to run Smart Discovery
again, from the same story.
• If you replace a model using the R visualization widget, the original model name persists. Only the
model ID is replaced.
Related Information
In SAP Analytics Cloud, you can create links between dimensions in multiple models.
You can link dimensions between models to create charts or tables that display data from multiple models.
Linked dimensions also let you create filters that simultaneously update all charts that include linked data.
Filters on linked dimensions can be used at the story, page, and linked analysis level.
Note
• The attributes of a dimension must match the attributes of the dimension it is linked to:
• Description to Description
• ID to ID
If the attributes don't match, information might not be returned when a filter is created, and a “(No
Value)” flag will appear for an unmatched dimension member.
• Chart filters don't affect linked dimensions, because they're applied to a single chart.
• Some dimension links may not apply filters to all charts. This happens when filters applied to one
model can't be translated to meaningful filters on a second model.
• For BW remote models, consider that the Description might be language-dependent.
• Sorting doesn't work with linked dimensions.
1. Use one of the following methods to open the Linked Dimensions dialog:
Note
If there are already linked dimensions in the story, they will be listed. To use the existing link, select
3. (Optional) To change how the dimensions are displayed in the Select a model area, select Data
Samples and then select an attribute:
• Description
• ID
• ID and Description
Note
By default, only descriptions are retrieved. For dimensions that don't have a Description attribute, “No
Values” will be displayed. Selecting ID or ID and Description gives a better indication of the true values
available.
Note
For dimensions that have blank or empty strings for Description (""), the value of the ID attribute will be
displayed. This can sometimes lead to an incorrect perception that two dimensions can be matched on
their Description attribute when in fact the ID attribute should be used.
Tip
When you hover over a dimension, you can preview the dimension values.
The attribute of a dimension must match the attribute of the dimension it is linked to. If it doesn't
match, when a filter is created, information may not be returned.
6. (Optional) In the Matched Dimensions area, select a dimension, select (Link Attribute) and then choose
from the linking options.
Link On: ID or Description
This option is available on non-hierarchical dimensions and imported data source dimensions with level-based
hierarchies.
• Filtering across models: the ID attribute is used to propagate filters from source models to target models.
• Blending: the Description attribute is used to match members between the linked dimensions.
Link On specific level-based hierarchy
This option is available on dimensions with level-based hierarchies.
• Filtering across models: for imported data sources, link on the dimension ID and the individual hierarchy
properties, rather than the hierarchy itself. The dimension can only be linked to another dimension with a
level-based hierarchy that has the same number of levels. For best results, the levels should contain similar
or matching information. For example, it is possible to link a two-level hierarchy representing Country-City
to another two-level hierarchy representing Country-City or to a two-level hierarchy representing
Continent-Country. However, only the first link would yield expected results.
• Blending: not applicable.
Link on matching hierarchies
This option is not available for Live Universe models. The setting is ON by default for any links created
starting with version 2020.15, and OFF by default for any content created prior to 2020.15. Matching is done
based on the hierarchy name/ID and assumes that hierarchies with the same name have matching structures
and members. For cases where this assumption is true, it is recommended to turn the option ON.
• Filtering across models: when the setting is OFF, filters propagate on the default hierarchy, which can
cause poor performance for source filters that use a non-default hierarchy. When the setting is ON, filters
propagate on the matching hierarchy (provided one exists), no matter which hierarchy is used in the source
filter.
• Blending: when the setting is OFF, blending uses the default hierarchy in the secondary sub-query, which
may not match members of the parent-child hierarchy set on the primary sub-query. When the setting is
ON, blending uses the matching hierarchy (provided one exists) in the secondary sub-query, based on the
parent-child hierarchy set on the primary sub-query.
By default, the Description attribute is used to match members between the linked dimensions. However,
for non-hierarchical dimensions it is possible to change the attribute that is used for linking.
Note
Filtering across models on hierarchies can be performance intensive. There are a few things to
consider:
• If the hierarchies aren't identical, then you might need to filter on descendants to get the right filter
result, at the cost of slower performance.
• If the hierarchy is a level-based hierarchy for an import data model, we suggest that you link on ID
and the individual hierarchy properties, rather than the hierarchy itself.
7. Select Set.
8. Review the links and if necessary, edit them or select Add Model Link to add more links.
9. When finished, select Done.
1. Use one of the following methods to open the Linked Dimensions dialog:
Note
If there are already linked dimensions in the story, they will be listed. To use the existing link, select
All charts or tables that include the linked dimension will update simultaneously. In the Builder tab, linked
When blending data, you combine data from more than one model into a chart or table as a part of a story.
In both cases, however, the first step is to link the common dimensions of one model with the other.
For more information about filtering across models, see Filter Across Models [page 1066].
For more information about data blending, see the following topics:
Related Information
Note
Filtering across models on hierarchies can be performance intensive. There are a few things to consider:
• If the hierarchies aren't identical, then you might need to filter on descendants to get the right filter
result, at the cost of slower performance.
• If the hierarchy is a level-based hierarchy for an import data model, we suggest that you link on ID and
the individual hierarchy properties, rather than the hierarchy itself.
• Direct filters:
• If the dimensions that you wish to filter can be linked, create a link that works with a direct filter.
In addition, link as many dimensions as possible to increase the speed of propagation and the
application of filter values across models.
For example, when you create filters for a dimension “Store”, then “Store” has to be a part of the linked
dimensions.
• Indirect filters:
• If the dimensions you wish to filter cannot be linked, link as few dimensions as possible; this will
mitigate the slower performance caused by using indirect filters.
The fewer linked dimensions you use, the faster SAP Analytics Cloud will find the correct combination
of filter values.
For example, when you create filters for a dimension “Product” – which is only present in one model
and can't be linked to the other model – optimize performance by creating as few linked dimensions as
possible.
Related Information
Blending models lets you join a primary data source with secondary data sources that contain common linked
dimensions. For example, you can blend data from a corporate data source with data from a local spreadsheet.
Blending can be done within individual tables and charts. For details, see Link Dimensions Across Multiple
Models [page 1061].
Note
A new model is not created when you blend models, and the original models are not modified. Links
between models that are blended only occur within a story.
Note
These restrictions apply to stories that have been fully converted to the Optimized Design Experience. Some
features may also be unsupported within the Classic Design Experience with Optimized View Mode enabled.
The following features are not supported for blended data in the optimized story experience.
Some blending options are also not supported in the classic story experience, including the following:
For other restrictions on blending data, see SAP Note 3143834 – Blending Restrictions in SAP Analytics
Cloud (SAC).
Typical blending use cases include:
• Comparing actuals data from a corporate SAP source with plans stored in a non-SAP source.
• Comparing sales results against market benchmarks, marketing campaign results available in .csv files, or
data provided by a third-party consulting firm.
• Creating custom calculations based on key performance indicators from different data models like public
census data.
• Using Smart Predict to output predictions in a dataset, and blend these predictions with other data
sources, in the context of a story.
You can create simple calculations using measures or account members from multiple models in your blended
chart or table. When creating the formula in the formula editor, the auto-complete options will include
measures from all your linked models.
Note
You cannot use IF or ResultLookup functions in formulas that use multiple models.
In the Builder, members and dimensions from the primary model are identified with a dot.
When you switch between linked models for your blended table, the list of calculations will contain all the
available calculations for the current model.
You can blend data between models that are based on imported data stored in SAP Analytics Cloud, without
needing any blending-specific prerequisites or setup.
Data blending with models based on live data is also supported for specific SAP data sources, but does require
additional prerequisites and setup.
As an administrator, you can choose to disable blended data for either local or remote data sets, or both. When
you disable blending, the chart or table widget will show a warning message.
Note
The blended definition is preserved. When you turn off the disabling option, the widget is restored and
shows the same contents it previously showed.
To disable or enable blended data, go to System Administration and then set the toggles to ON for the
following settings:
Related Information
Prerequisites
You have already set up linked dimensions between models. For more information on linked dimensions, see
Link Dimensions Across Multiple Models [page 1061].
Note
If you are using live data models for blending, additional prerequisites and setup may be required. For more
information, see Blend Data [page 1066].
Procedure
The primary model is selected by default in the Data Source list. The primary model is the model used to
create the table.
If no other models are listed, select Add Linked Model and then select a model from the list.
4. (Optional) Select a new Link Type from the list. There are three options:
• All primary data – Allows all data in the primary model, and corresponding data in a secondary model,
to appear in the chart.
• All data – Allows all data in the primary and secondary models to appear in the chart. (This option isn't
available when linking on hierarchies.)
• Intersecting data only – Allows only linked data to appear in the chart.
Measures and dimensions that belong to the primary model are indicated by a blue dot.
6. Repeat steps 3 to 5 for all models that you want to use in the chart.
Results
The chart displays data from both the primary and secondary models.
Related Information
Prerequisites
You have already set up linked dimensions between models. For more information on linked dimensions, see
Link Dimensions Across Multiple Models [page 1061].
Note
If you are using live data models for blending, additional prerequisites and setup may be required. For more
information, see Blend Data [page 1066].
Procedure
1. Insert a table and select the data source for the table.
The primary model is selected by default in the Data Source list. The primary model is the model used to
create the table.
If no other models are listed, select Add Linked Model and then select a model from the list.
4. Add measures or dimensions to the table.
5. (Optional) Select a new Link Type from the list. There are three options:
• All primary data – Allows all data in the primary model, and corresponding data in a secondary model
to appear in the table.
• All data – Allows all data in the primary and secondary model to appear in the table. (This option isn't
available when linking on hierarchies.)
• Intersecting data only – Allows only linked data to appear in the table.
Results
The table displays data from both the primary and secondary models.
Related Information
• Blending between a live data model from SAP BW, and a data model based on imported data.
• Blending between a live data model from SAP S/4HANA (on-premise), and a data model based on
imported data.
• Blending between a live data model from SAP HANA (on-premise), and a data model based on imported
data.
To enable browser-based data blending, you'll need to consider these additional requirements:
• If you're running an SAP BW or SAP S/4HANA system, ensure that the latest SAP Notes are applied,
depending on your deployment type:
• 2737952 – SAP Analytics Cloud browser blending with SAP BW 7.5
• 2737953 – SAP Analytics Cloud browser blending with SAP BW/4HANA
No other hardware deployment or special licensing is needed. Browser-based blending for SAP BW doesn't
require SAP HANA.
• If you're running an SAP HANA system, ensure that your system is on the correct version:
Why are there two different approaches, and when should I use which approach?
We typically recommend browser-based data blending to business users or data analysts who want to blend
smaller aggregated datasets together within a story. A system-defined data volume limit ensures that the data
blend operation within the story performs as expected. In general, we recommend trying browser-based data
blending with live models first, because this approach should meet most business users' needs. There are no
extra hardware deployment requirements for using browser-based blending.
When users are linking dimensions for the purpose of browser-based blending, a warning is shown if the data
volume limit is exceeded:
An error message is shown if the blending operation doesn't complete successfully due to the data volume limit
being exceeded:
If this happens regularly, contact your SAP Analytics Cloud system administrator to determine whether SDI-
based data blending should be enabled for your business use case. We recommend SDI-based blending for
larger aggregated datasets.
Note
When using browser-based data blending with a live SAP HANA source, data from the live HANA system is
temporarily cached in SAP Analytics Cloud.
You can perform blending between SAP HANA live data models.
• Blending with imported data models only: A primary model based on imported data can be blended with
one or more secondary models based on imported data.
• Blending between multiple SAP HANA live data models on the same system: A primary live data model
based on an SAP HANA live connection can be blended with one or more secondary live data models from
the same system only.
• Blending SAP HANA live data models with import data models: A live data model based on SAP HANA can
be blended with one or more secondary models based on imported data and live data models on the same
system.
Blending Between SAP HANA Live Data Models on the Same System [page 1075]
SDI-Based Blending Between SAP HANA Live Data Models and Import Data Models [page 1075]
Related Information
You can enable blending between SAP HANA live data models available on the same system.
The system administrator for SAP HANA must perform the following tasks:
Related Information
Data Blending for SAP HANA Live Data Models [page 1074]
Blend Data [page 1066]
SDI-Based Blending Between SAP HANA Live Data Models and Import Data Models [page 1075]
You can enable blending between a primary SAP HANA live data model, and import data models.
The following setup and configuration steps must be performed by the administrator for your SAP HANA
system, and an administrator for SAP Analytics Cloud. These steps are typically not performed by one person
alone; therefore, it is recommended that you make a plan with your SAP HANA and SAP Analytics Cloud
system administrators before proceeding with the steps below.
• You are on SAP HANA 2.0 on-premise SPS02 Patch 5 or higher, with EPMMDS version
1.00.201815.00.1534765831 (Wave 201815) or higher. See SAP Note 2456225 and SAP Note 2444261
for additional setup information.
• Your SAP HANA system must have a full use license. Data blending for SAP Analytics Cloud is not
currently permitted for SAP HANA systems using a Runtime Edition (REAB) license; support for REAB
licenses is planned for the near future.
SAP Analytics Cloud leverages the use of SAP HANA Smart Data Integration (SDI) to enable data blending. You
must have the SAP Data Provisioning Agent installed on a Windows or Linux host from which it can establish
HTTPS connections to SAP Analytics Cloud and your SAP HANA system.
The version of the Data Provisioning Agent required for your SAP HANA system can be determined by checking
the Product Availability Matrix.
For information on installing the agent, see Installing the SAP Data Provisioning Agent. The following steps
must be performed during the agent installation:
1. Using SAP HANA Studio, create the REMOTE_OBJECTS table on your on-premise SAP HANA system. This
table is used by SAP Analytics Cloud to temporarily store data results required for a blend via the agent
connection. You must execute the following SQL and stored procedure statements within the SQL Console
of SAP HANA Studio.
1. Create a user for administering this table. This user will also be used in SAP Analytics Cloud for
creating the connection.
The remaining steps for setting up your on-premise system must be done with the SDI_ADMIN user.
2. Create the REMOTE_OBJECTS table.
3. Set permissions for the REMOTE_OBJECTS table, which allow for storage and access to the temporary
cache data during data blending scenarios.
Caution
If you re-import the INA Delivery Unit (DU), these permissions will be removed from the INA_USER
role, and they will need to be re-added.
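The exact SQL statements are not reproduced in this excerpt, so as a rough illustration only, the flow on the SAP HANA side can be sketched with the hdbsql command-line client. The host, port, passwords, and the privilege list are placeholders; the REMOTE_OBJECTS column list is deliberately omitted because it is not documented here — take the real statements from the official setup guide.

```shell
# Hypothetical sketch of the HANA-side setup -- NOT the official statements.
# 1. Create the administration/connection user described in step 1.
hdbsql -n <hana_host>:<sql_port> -u SYSTEM -p <system_password> \
  'CREATE USER SDI_ADMIN PASSWORD "<initial_password>" NO FORCE_FIRST_PASSWORD_CHANGE'

# 2. As SDI_ADMIN, create the REMOTE_OBJECTS table (column definitions
#    elided here; use the statements from the official guide).

# 3. Grant the INA_USER role access to the table so temporary blend data
#    can be stored and read (privilege list is an assumption).
hdbsql -n <hana_host>:<sql_port> -u SDI_ADMIN -p <sdi_admin_password> \
  'GRANT SELECT, INSERT, DELETE ON SDI_ADMIN.REMOTE_OBJECTS TO INA_USER'
```

As the Caution above notes, re-importing the INA Delivery Unit removes these grants from the INA_USER role, so step 3 would need to be repeated afterwards.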
1. Log onto SAP Analytics Cloud and go to (Main Menu) System Administration Data
Source Configuration .
2. Scroll down to the SAP HANA Smart Data Integration section, and select Register New Data
Provisioning Agent.
3. In the dialog, enter a unique name for your new agent registration.
Note
You will need the information presented in the Data Provisioning Agent dialog for step 2. Either keep
the dialog box open, or note the information provided and give it to the SAP HANA administrator
who will perform step 2.
2. Create a connection between the agent and your on-premise SAP HANA system. Perform the following
steps in the SAP Data Provisioning Agent.
Note
<DPAgent_root> is the directory where the DP agent was installed. By default, on Windows, this is
C:\usr\sap\dataprovagent, and on Linux it is /usr/sap/dataprovagent.
1. Configure dpagentconfig.ini:
In the directory where the agent was installed, open <DPAgent_root>/dpagentconfig.ini, and
edit the following lines. Use the values provided by the Register New Data Provisioning Agent dialog
from step 1.b. The required values are listed under Data Provisioning Agent Details.
Note
If the agent connects to SAP Analytics Cloud through a proxy, also add the following lines:
cloud.useProxy=true
proxyHost=<proxy_hostname>
proxyPort=<proxy_port>
Note
The proxy must be able to handle HTTP long polling. Using a proxy may have a performance
impact on blending.
On Linux:
export DPA_INSTANCE=<DPAgent_root>
On Windows:
set DPA_INSTANCE=<DPAgent_root>
On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
If the agent has not been started already, start it: select option 2. Start or Stop Agent,
and then option 1. Start Agent.
Select q. Quit to exit the script.
This pattern of stopping and then starting the agent is required whenever you make a
configuration change to the agent.
• Set the credentials for the HANA XS user.
On Linux:
<DPAgent_root>/bin/agentcli.sh --setSecureProperty
On Windows:
<DPAgent_root>/bin/agentcli.bat --setSecureProperty
If you have closed and reopened the Register New Data Provisioning Agent dialog from step
2.b, and the password field is empty, you can click the button next to the password box to
generate a new password.
When you click the Save button, the new password is saved to SAP Analytics Cloud and is
ready to be used.
On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
Select option 2. Start or Stop Agent, and then option 2. Stop Agent to stop the agent.
Select option 1. Start Agent to start the agent.
Select option 1. Agent Status to check the connection status.
If the connection succeeded, you should see Agent connected to HANA: Yes.
Select q. Quit to exit the script.
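Collected in one place, the Linux side of the agent configuration described above looks roughly like the following sketch. <DPAgent_root> is a placeholder for your installation directory, and the menu options are the ones named in the steps.

```shell
# Sketch of the Linux command sequence from the steps above.
export DPA_INSTANCE=<DPAgent_root>            # point the CLI at this agent instance

<DPAgent_root>/bin/agentcli.sh --configAgent        # start the agent via the menu
<DPAgent_root>/bin/agentcli.sh --setSecureProperty  # store the HANA XS user credentials

# After any configuration change: re-run --configAgent, stop and then start
# the agent (option 2. Start or Stop Agent), and confirm via
# option 1. Agent Status that "Agent connected to HANA: Yes" is shown.
<DPAgent_root>/bin/agentcli.sh --configAgent
```

On Windows, the same sequence applies with set instead of export and agentcli.bat instead of agentcli.sh.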
You can now close the agent registration dialog from step 1.
3. Connect SAP Analytics Cloud to your on-premise SAP HANA system.
1. Log on to SAP Analytics Cloud and go to (Main Menu) System Administration Data
Source Configuration .
Note
For security reasons, don't select this option if you want to prevent data from being copied and
saved in SAP Analytics Cloud.
Note
The live data connection you select must be the same on-premise SAP HANA system as the remote
source.
1. Click Add Live Data Connection on the remote source created in step 3.
2. In the drop-down box, select the appropriate live data connection.
3. Click Save.
For information about installing the agent, see Installing the SAP Data Provisioning Agent.
If you encounter problems with the Data Provisioning Agent, it can be helpful to examine the logs located in the
<DPAgent_root>/log directory.
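As a quick troubleshooting sketch (the exact log file names vary by agent version, so list the directory first rather than assuming a name):

```shell
# List the agent logs, newest first, then inspect the most recent file.
cd <DPAgent_root>/log
ls -lt | head
tail -n 100 <most_recent_log_file>   # placeholder: pick a file from the listing
```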
Data Blending for SAP HANA Live Data Models [page 1074]
Blend Data [page 1066]
• Blending SAP BW live data models with import data models: A live data model based on SAP BW can be
blended with one or more secondary models based on imported data.
• Blending SAP BW live data models with other SAP BW live data models:
• Multiple BW models from the same BW system
• Multiple BW models from different BW systems
• Blending multiple models (more than two)
• Blending multiple models using acquired data with multiple BW models
• Support both browser-based and SDI blending modes
• Use BW systems with different release cycles (for example, BW 7.5 and BW/4HANA)
• Linking with and without hierarchies (parent-child or time-dependent hierarchies)
• Use BW as the primary data source
(Refer to the configuration steps and prerequisites below for details on SAP BW versions and SAP HANA
versions required for data blending.)
• SAP BW on HANA
• SAP BW/4HANA
SDI-Based Blending Between SAP BW Live Data Models and Import Data Models [page 1086]
Related Information
You can enable blending between a primary SAP BW live data model, and import data models. The actual
blending of data sets is always performed in the SAP HANA database belonging to the SAP Analytics Cloud
system.
The following setup and configuration steps must be performed by the administrator for the SAP HANA system
that your BW or BW/4HANA system is running on, and an administrator for SAP Analytics Cloud. These steps
are typically not performed by one person alone; therefore, it is recommended that you make a plan with your
SAP HANA and SAP Analytics Cloud system administrators before proceeding with the steps below.
Note
Apply all required SAP Notes. Refer to SAP Note 2541557 first to check the compatible versions of SAP BW
that SAP Analytics Cloud supports. Execute the steps described in that SAP Note to determine the required
notes that must be implemented in your BW system to enable SAP Analytics Cloud to work seamlessly with
BW live connections.
Then, refer to SAP Note 2715871 for specific requirements for enabling data blending with your SAP BW
system. It is important that this step be performed after the steps described in the above note.
When blending data from Live connections to SAP BW on-premise systems, data is moved outside the
corporate network to the SAP HANA engine of SAP Analytics Cloud to perform the blending operation.
However, data is not stored persistently in SAP Analytics Cloud; a temporary cache is used. Data from the
cache is removed after the blending operation is completed. Data is transported to SAP Analytics Cloud
through channels secured with industry-standard data protection standards and encryption methods, so
data is secure at all times.
Refer to the following SAP Notes for the versions of SAP HANA that are compatible with SAP BW. It is also
recommended to check the compatible versions of SAP BW that SAP Analytics Cloud supports:
• 1600929
SAP Analytics Cloud leverages the use of SAP HANA Smart Data Integration (SDI) to enable blending with SAP
BW data. You must have the SAP Data Provisioning Agent installed on a Windows or Linux host from which it
can establish HTTPS connections to SAP Analytics Cloud and the SAP HANA system your SAP BW deployment
is running on.
The version of the Data Provisioning Agent required for your SAP HANA system can be determined by checking
the Product Availability Matrix.
For information on installing the agent, see Installing the SAP Data Provisioning Agent. The following steps
must be performed during the agent installation:
Note
The schema referenced in this help topic is the HANA schema. The ABAP schema is not recommended in
this scenario, because you may experience technical difficulties during remote system deployment.
Using SAP HANA studio, create the REMOTE_OBJECTS table on your on-premise SAP HANA system. This table
is used by SAP Analytics Cloud to temporarily store data results required for a blend via the agent connection.
1. In case the SAP HANA system your SAP BW deployment is running on has a full license, perform the
following steps:
1. Create a user for administering this table. This user will also be used in SAP Analytics Cloud for
creating the connection.
The remaining steps for setting up your on-premise system must be done with the SDI_ADMIN user.
2. Create the REMOTE_OBJECTS table.
3. Set permissions for the REMOTE_OBJECTS table, which allow for storage and access to the temporary
cache data during data blending scenarios.
In SAP HANA studio, assign the ABAP communication user (the user with which ABAP accesses the SAP
HANA database) the DELETE, INSERT, and SELECT permissions on the schema SDI_ADMIN under "Object
Privileges", with "Grantable to others" set to No.
2. In case the SAP HANA system your SAP BW deployment is running on does not have a full license (for
example, Runtime Edition (REAB)), perform the following steps:
1. Create a user for administering this table. This user will also be used in SAP Analytics Cloud for
creating the connection.
The remaining steps for setting up your on-premise system must be done with the SDI_ADMIN user.
2. In SAP HANA studio, assign to the ABAP communication user (the user with which ABAP accesses the HANA database), under Object Privileges, the schema SDI_ADMIN with the CREATE ANY, DELETE, INSERT, and SELECT privileges, with Grantable to others set to "No Grant".
3. Create the REMOTE_OBJECTS table by executing the ABAP report RS_BICS_INA_BLE_SETUP (Software Component: DW4CORE, note 2806650; SAP_BW, note 2806682) via transaction SE38. Enter the schema "SDI_ADMIN" and the table name "EHS_REMOTE_OBJECTS".
4. For security reasons, you can deselect the flag for CREATE ANY.
1. Log on to SAP Analytics Cloud and go to (Main Menu) System Administration Data Source Configuration.
Note
You will need the information presented in the Data Provisioning Agent dialog for step 2. Either keep
the dialog box open, or note the information provided and give it to the SAP HANA administrator
who will perform step 2.
2. Create a connection between the agent and your on-premise SAP HANA system. Perform the following
steps in the SAP Data Provisioning Agent.
Note
<DPAgent_root> is the directory where the DP agent was installed. By default, on Windows, this is
C:\usr\sap\dataprovagent, and on Linux it is /usr/sap/dataprovagent.
1. Configure dpagentconfig.ini:
In the directory where the agent was installed, open <DPAgent_root>/dpagentconfig.ini, and
edit the following lines. Use the values provided by the Register New Data Provisioning Agent dialog
from step 1.b. The required values are listed under Data Provisioning Agent Details.
Note
If the agent connects through a proxy, also set the following properties:
cloud.useProxy=true
proxyHost=<proxy_hostname>
proxyPort=<proxy_port>
Note
The proxy must be able to handle HTTP long polling. Using a proxy may have a performance
impact on blending.
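Because dpagentconfig.ini appears from the lines above to be a flat key=value properties file (not a sectioned INI), scripting this edit is straightforward. The following Python sketch is our own illustration, not an SAP tool; it sets or appends the proxy properties shown above (the proxy host value is hypothetical):

```python
def set_properties(text: str, updates: dict) -> str:
    """Replace existing key=value lines, or append keys that are missing."""
    lines = text.splitlines()
    seen = set()
    for i, line in enumerate(lines):
        key = line.split("=", 1)[0].strip()
        if key in updates:
            lines[i] = f"{key}={updates[key]}"
            seen.add(key)
    for key, value in updates.items():
        if key not in seen:
            lines.append(f"{key}={value}")
    return "\n".join(lines)

# Example: enable the proxy settings in the agent configuration text
config = "agent.name=myagent\ncloud.useProxy=false"
config = set_properties(config, {
    "cloud.useProxy": "true",
    "proxyHost": "proxy.example.com",  # hypothetical proxy host
    "proxyPort": "8080",
})
```

Stop the agent before editing the file and start it again afterwards, as with any configuration change to the agent.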
Set the DPA_INSTANCE environment variable. On Linux:
export DPA_INSTANCE=<DPAgent_root>
On Windows:
set DPA_INSTANCE=<DPAgent_root>
Start the agent configuration tool. On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
If the agent has not been started already, start it: select option 2. Start or Stop Agent, and then option 1. Start Agent.
Select q. Quit to exit the script.
This pattern of stopping and then starting the agent is required whenever you make a
configuration change to the agent.
• Set the credentials for the HANA XS user.
On Linux:
<DPAgent_root>/bin/agentcli.sh --setSecureProperty
On Windows:
<DPAgent_root>/bin/agentcli.bat --setSecureProperty
If you have closed and reopened the Register New Data Provisioning Agent dialog from step
2.b, and the password field is empty, you can click the button next to the password box to
generate a new password.
When you click the Save button, the new password is saved to SAP Analytics Cloud and is
ready to be used.
Restart the agent so the configuration changes take effect. On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
Select option 2. Start or Stop Agent, and then option 2. Stop Agent to stop the agent.
Select option 1. Start Agent to start the agent.
Select option 1. Agent Status to check the connection status.
If the connection succeeded, you should see Agent connected to HANA: Yes.
Select q. Quit to exit the script.
You can now close the agent registration dialog from step 1.
3. Connect SAP Analytics Cloud to the on-premise SAP HANA system your SAP BW deployment is running
on.
1. Log on to SAP Analytics Cloud and go to (Main Menu) System Administration Data
Source Configuration .
2. Under SAP HANA Smart Data Integration, select Add New Remote Source.
3. In the Add New Remote Source dialog, provide a name for your SAP HANA system.
4. Provide the fully qualified domain name of your SAP HANA system.
5. Provide the SQL port of your SAP HANA system.
6. Provide the schema and table name of the REMOTE_OBJECTS table you created.
7. If you want to enable SSL encryption between your remote source and the SAP Data Provisioning
Agent, select the Use encryption and Validate certificate check boxes.
8. Provide the credentials for the SDI_ADMIN user that created the REMOTE_OBJECTS table.
9. Select Save.
4. Associate an existing live data connection with your SAP BW on HANA or SAP BW/4HANA system.
Note
The live data connection you select must be the SAP BW system that you're blending data from.
1. Click Add Live Data Connection on the remote source created in step 3.
2. In the drop-down box, select the appropriate SAP BW live data connection.
3. Click Save.
If you encounter problems with the Data Provisioning Agent, it can be helpful to examine the logs located in the
<DPAgent_root>/log directory.
Related Information
Add various elements to your SAP Analytics Cloud story, such as images and text, or enhance your story's
performance.
Whether you are creating a report or just want a way to let your viewers navigate the story more easily, use one
of the following options:
You can add static images or dynamic images stored in a remote database to SAP Analytics Cloud stories. In
Optimized Story Experience, you can also bind the image widget to a script variable in your story, whose value
can be the source URL of any image.
Prerequisites
To create a dynamic image, you will need a model based on a HANA view with the following attributes:
• A column containing BLOB data, used in the dynamic image. It is recommended that the image BLOBs used be less than 5 MB in size.
• A column containing a unique ID used to link the image model to an aggregate model via linked models. When queries are performed on the aggregate model, the image object will update if the aggregate query is filtered down to a single member. This prerequisite is necessary to prevent duplicate image data from being added to the model. For more information on linking models, see Blend Data [page 1066].
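Because the 5 MB recommendation applies per image BLOB, it can help to screen candidate image files before loading them into the HANA view's BLOB column. A small illustrative Python helper (our own sketch, not part of SAP Analytics Cloud):

```python
import os

MAX_BLOB_BYTES = 5 * 1024 * 1024  # recommended upper bound per image BLOB

def oversized_images(paths):
    """Return the file paths whose size exceeds the recommended 5 MB limit."""
    return [p for p in paths if os.path.getsize(p) > MAX_BLOB_BYTES]
```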
Context
Note
• Dynamic image objects are not supported in the mobile application and cannot be embedded in header
widgets.
• Dynamic image objects don't support measure-based filters on the source model.
Procedure
You can upload a static image, which is an image locally uploaded from your computer, or a dynamic image,
which is an image stored in a remote system.
Note
You can delete an image in the library by selecting the image and then selecting Delete
Image.
3. From the Image Library, select the image you want to use.
• To display a dynamic image from a data source, do the following:
1. On the builder panel, select the Dynamic (for classic stories) or Data Source tab (for optimized
stories).
Note
To switch back to adding an image via another option, select the corresponding tab and proceed with that option's steps. This will clear all dynamic data that is currently set.
2. In the Data Source area, select + Add Model, select a model from the list, and select OK.
The model you select will be used to display a dynamic image. Make sure that the model has at
least one image dimension.
Note
When you switch to dynamic mode, the image object will pick up the last used model.
3. In the Image Structure area, select + Add Image Dimension and select an image dimension from
the list.
The members from the selected image dimension will determine the available images to display.
4. (Optional) Select + Add Alternative Image and upload an image from your computer to use when
the remote image can't be displayed. For example, when the image data cannot be rendered.
The alternative image will become the static image source when switching from dynamic to static.
If a static image source is selected, it will become the alternative image when switching from static
to dynamic.
5. Select +Add Filters, and choose one member from the list of available members.
The member you select will be the image that is displayed.
Note
If you select multiple members or no members, either no image will be displayed, or the alternative image will be used instead. You can use a table in list reporting mode to see how many members the current filter configuration returns.
Note
Switching to another tab will clear the image binding to the script variable that is currently set.
2. In the Image Binding area, select a script variable to bind the image to it.
For more information on styling images, see Formatting and Styling Items on a Story Canvas [page 1589].
4. (Optional) To insert a hyperlink on the image, select (Add Image Components) Hyperlink
For more information on hyperlinks, see either Linking to an External URL or to Another Page or Story
(Optimized Story Experience) [page 1104] or Linking to Another Page, Story, or External URL (Classic
Story Experience) [page 1108].
Note
In Optimized Story Experience, you can also use hyperlink-related APIs to customize more scenarios.
Only external URLs are supported for setting hyperlinks via API, and hyperlinks will always open in
a new tab. If you set different hyperlinks for the image in both ways, the hyperlink defined via API
has higher precedence. For more information about the APIs, see Optimized Story Experience API
Reference.
Before you can add your own vector graphics (SVG files) to stories, you must upload the graphics. You can also
choose to use standard shapes or pictograms.
Procedure
When using Adobe Illustrator to generate SVG files, ensure that the option “Preserve Illustrator
Editing Capabilities” is not checked.
3. Select Insert.
Results
In SAP Analytics Cloud, you can add dynamic text to a text box.
Context
Dynamic text automatically updates the text based on either the values from the source input control or filter,
or from any filters applied to dimensions.
Note
If you have emojis in text that you want to copy into your story, you will need to remove them before
copying.
Note
You can add Smart Insight tokens displaying dynamic text to chart footers. From your chart, choose
(More Actions) Add Smart Insights . For more information, see Smart Insights [page 2003].
Procedure
Option Objects
Dimensions: For dimensions, you can also specify which hierarchy and level to display, and whether to display the ID or description.
Input Controls
Cross Calculation Input Controls (Optimized Story Experience): Cross calculation input controls should be based on classic account models.
Story Filters
Tile Filters & Variables: Tile (or widget) filters and variables include the filters and variables that are used in charts and tables, and (Optimized Story Experience) geo map filters.
Note
The tile filters and variables will not appear on mobile devices.
Model Variables
Model Properties (Optimized Story Experience): Last Refreshed Date and Time: You can add the last refreshed date and time of any SAP BW live data model used in your story.
Note
When the story is modified by someone who isn't a member of the team, a warning message is displayed instead of the team name in the dynamic text, marking the story as a trust indicator.
Note
At view time, the global variables will change according to the customized scripts, and their values will be automatically updated in the text widget.
4. Select Create.
5. Open the Designer and select the Styling panel.
6. In the Dynamic Text section, change the following options.
Results
The dynamic text has been added to the story. The text will be updated based on the values from the source or
filter, or from any filters applied to dimensions.
Related Information
Create a linked analysis to drill through hierarchical data or create filters that simultaneously update multiple
charts in your story. You can also create a linked analysis from a table measure to simultaneously update
multiple charts in your story.
Context
With linked analysis, when you create filters or drill through hierarchical data in one chart, the same filters are
applied to other charts that you include in the analysis. For a filter to update other charts, the charts in the
analysis must be based on the same model, or the source models must contain linked dimensions.
Note
Spatial filters created with a geo map can be applied to the story or group through linked analysis.
Linked analysis can also be used to link a table measure or account name to the same measure in one or more
charts. When the table is linked to the chart, selecting a table cell won't change what the chart displays, but
selecting a column header will.
Procedure
Option Description
Only this Widget (default option): Filtering and drilling through hierarchical data on this object will only update this object.
All widgets in the Story: Filtering and drilling through hierarchical data on this object will update all the objects in the story based on the same model or linked dimensions to this object's model.
• Use this table to control measures: When your object is a table, this option lets you use that table to control which measures appear in other objects in the page or story. A new measure input control is automatically created to be used as a chart measure.
All widgets on this Page: Filtering and drilling through hierarchical data on this object will update all the objects in the page based on the same model or linked dimensions to this object's model.
• Use this table to control measures: When your object is a table, this option lets you use that table to control which measures appear in other objects in the page or story. A new measure input control is automatically created to be used as a chart measure.
Only selected widgets: Filtering and drilling through hierarchical data on this chart will update only specific widgets.
• Automatically connect newly created widgets: applies linked analysis from the driver to any new widgets using the same model or linked models.
• Filter on datapoint selection: filters other charts on a selected datapoint.
• Use this table to control measures: When your object is a table, this option lets you use that table to control which measures appear in other objects in the page or story. A new measure input control is automatically created to be used as a chart measure.
Note
When enabling linked analysis on a chart set with a geo map, you can set a filter based on a data point
in the bubble layer or a selected shape in the choropleth layer. A Histogram, Waterfall, or a Time Series
chart can receive links from other charts, but it cannot send them.
4. Select Apply.
Results
Restriction
In the classic design experience, both a story-level linked analysis filter and a widget-level filter are created.
Removing the linked analysis filter does not automatically remove the widget-level filter: you must manually
delete the filter.
In the optimized design experience, only the story-level linked analysis filter is created.
Filtering and drilling through hierarchical data on this object will simultaneously update other objects
depending on the interaction option you selected.
Note
• If the charts in your linked analysis are based on models with linked dimensions, the ID of the linked
dimensions must match. Otherwise, filtering in one chart will not update the linked charts.
• If the charts in your linked analysis are based on models with linked dimensions, and you drill through
the data in one chart, filters are applied on all linked charts, but the level of the data displayed is not
changed.
• Linked analysis filters that apply to All widgets in the Story will replace story filters on the same
dimension.
• When you delete a chart filter using the filter token, the chart interaction is shared by the linked
analysis; therefore, story filters applied on the same dimension will also be deleted.
Import values, such as IDs from an Excel file, via copy and paste.
You want to add variables to your story and have listed IDs of variables or value ranges in an Excel file
beforehand. Now you would like to import the IDs to SAP Analytics Cloud.
1. Open the variable dialog with (Edit Prompts) and click (Clipboard).
2. Select the complete list of IDs in your Excel file and copy it. Paste the IDs into the clipboard.
3. Click OK. You will see the variables listed in the variable dialog.
Note
1. Open the variable dialog with (Edit Prompts) and click (Clipboard).
2. Select the cells in your Excel file that you would like to import. Copy and paste the values into the clipboard.
Note
3. Click OK. You will see the ranges displayed in the variable dialog.
You can add a hyperlink that takes you to an external URL or to another page or story.
You can add a hyperlink to the following places in a chart widget: the chart title and the chart footer.
However, all the other widgets (including tables) can have only one hyperlink per widget.
Create a Hyperlink
Option Action
Select the chart tile or a Select (More Actions) More Options Add Hyperlink
data point.
Option Description
Mobile App URL The mobile app URL allows users to link to a specific loca-
tion inside a native app.
When you choose a mobile app URL, you have the following optional configurations:
• Mobile App URL: from the dropdown list, select one or more dimensions to insert the dimensions into the
URL.
After a dimension has been added, if you want to pass an ID instead of a description, select the dimension
and then select one of the options: Description, ID, or ID and Description.
• Label: provide a label for the URL, or select a dimension from the list to use as a label.
The label will be displayed instead of the full URL.
• Hyperlink Options:
• Escape special characters in dimension values: when the check box is selected, the dimension values
will be encoded.
Example
If the dimension value is called Vancouver/Lower Mainland, the encoded dimension value will be
Vancouver%2FLower%20Mainland.
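The encoding applied by this option is standard percent-encoding, so you can predict the resulting URL outside the tool. For example, Python's standard library reproduces the value above:

```python
from urllib.parse import quote

# Percent-encode a dimension value, including "/" and spaces
encoded = quote("Vancouver/Lower Mainland", safe="")
print(encoded)  # Vancouver%2FLower%20Mainland
```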
When you choose an external URL, you have the following optional configurations:
• External URL: from the dropdown list, select one or more dimensions to insert the dimensions into the
URL.
After a dimension has been added, if you want to pass an ID instead of a description, select the dimension
and then select one of the options: Description, ID, or ID and Description.
Note
For SAP BW live data connections, you can also choose to use the following option: BW Key.
• Label: provide a label for the URL, or select a dimension from the list to use as a label.
The label will be displayed instead of the full URL.
• Hyperlink Options:
• Open in New Tab: open the destination page or URL in a new browser tab.
Note
If the link destination is another page in the same story, the application will switch to the
destination page within the same browser window when the hyperlink is followed.
• Escape special characters in dimension values: when the check box is selected, the dimension values
will be encoded.
If the dimension value is called Vancouver/Lower Mainland, the encoded dimension value will be
Vancouver%2FLower%20Mainland.
When you choose a page, you have the following optional configurations:
• Page
Select a page from the following list:
• First Page
• Next Page
• Previous Page
• Last Page
• Specific Page: choose a page from the list that appears.
• Hyperlink Options
• Apply Selected Dimension as a Filter: use selected dimensions as a temporary filter on the destination
page.
• Cascading Effect: changes that you make to the filter affect all related filters in the same page. For
more information, see Story and Page Filters [page 1518].
The hyperlink creates a story-level filter in read-only mode as an AND filter.
• Filtering Options
• Only Apply Selected Dimension (default): only the dimension context of the selected data point will be
passed through the hyperlink.
• Apply All Filters That Impact Visualization: the dimension context of the selected data point, plus all
other filters that affect the widget, will be passed through the hyperlink.
• Only Apply Specific Filters: the dimension context of the selected data point, and any other specifically
selected applied filters, will be passed through the hyperlink.
• Filters
All the existing filters are listed here; select the ones that you want to include in your hyperlink.
When you choose a story, you have the following optional configurations:
• Story
Select a story.
• Page
Select a page in the story to link to.
• Hyperlink Options
• Open in New Tab: open the destination page or URL in a new browser tab.
If the link destination is another page in the same story, the application will switch to the
destination page within the same browser window when the hyperlink is followed.
• Apply Selected Dimension as a Filter: use selected dimensions as a temporary filter on the destination
page.
Note
The Apply Selected Dimension as a Filter option only works when the source and destination
stories use the same model.
• Cascading Effect: changes that you make to the filter affect all related filters in the same page. For
more information, see Story and Page Filters [page 1518].
• Filtering Options
• Only Apply Selected Dimension (default): only the dimension context of the selected data point will be
passed through the hyperlink.
• Apply All Filters That Impact Visualization: the dimension context of the selected data point, plus all
other filters that affect the widget, will be passed through the hyperlink.
• Only Apply Specific Filters: the dimension context of the selected data point, and any other specifically
selected applied filters, will be passed through the hyperlink.
• Filters
All the existing filters are listed here; select the ones that you want to include in your hyperlink.
Image, pictogram, or text Select the element that contains the hyperlink and then Ctrl + Click ( Cmd +
Click ).
Chart For a hyperlink to an external URL, select one data point (for example, one bar in a bar
chart) and then select the hyperlink.
The dimensions represented by that data point are passed to the destination web page.
Context
You can add a hyperlink to the following places in a chart widget: the chart title, the chart footer, and on the
chart datapoints.
However, all the other widgets (including tables) can have only one hyperlink per widget.
Procedure
Option Action
pictogram, text, chart tile, or chart data point: Right-click within the tile (or on a chart data point), and then select Add Hyperlink.
2. In the Hyperlink panel, under Link to, select one of the following options:
Mobile App URL Mobile app URL allows users to link to a specific location inside a native app.
When you enter a mobile app URL, you have the following optional configurations:
• Under Mobile App URL, select one or more dimensions from the list that appears,
to insert the dimensions into the URL. The dimension will be used as a filter on the
destination.
If you want to pass an ID instead of a description, select a dimension and then
select one of the options: Description, ID, or ID and Description.
• Under Label, enter a label for the URL, or from the list, select a dimension to use as
a label. The label will be displayed instead of the full URL.
You can also choose whether you would like to encode dimension values by selecting or
deselecting the Escape special characters in dimension values check box. If this check
box is selected, dimension values will be encoded.
Example
If the dimension value is called Vancouver/Lower Mainland, the encoded
dimension value will be Vancouver%2FLower%20Mainland.
External URL When you enter an external URL, you have the following optional configurations:
• Under External URL, select one or more dimensions from the list that appears, to
insert the dimensions into the URL.
If you want to pass an ID instead of a description, select a dimension and then
select one of the options: Description, ID, or ID and Description.
Note
For SAP BW live data connections, you can also choose to use the following
option: BW Key.
• Under Label, enter a label for the URL, or from the list, select a dimension to use as
a label. The label will be displayed instead of the full URL.
If you want the destination page or URL to open in a new browser tab, select the Open in
New Tab check box.
Note
If the link destination is another page in the same story, the application will switch
to the destination page within the same browser window when the hyperlink is
followed.
You can also choose whether you would like to encode dimension values by selecting or
deselecting the Escape special characters in dimension values check box. If this check
box is selected, dimension values will be encoded.
Example
If the dimension value is called Vancouver/Lower Mainland, the encoded
dimension value will be Vancouver%2FLower%20Mainland.
Select Cascading Effect so that changes you make to the filter affect all related filters in the same page. For more information, see Story and Page Filters [page 1518].
Story Select a story from the list, and then select a page in the story to link to.
If you want the destination page or URL to open in a new browser tab, select the Open in
New Tab check box.
Note
If the link destination is another page in the same story, the application will switch
to the destination page within the same browser window when the hyperlink is
followed.
Note
The Apply selected dimension as a filter option only works when the source and
destination stories use the same model.
Select Cascading Effect so that changes you make to the filter affect all related filters in the same story. For more information, see Story and Page Filters [page 1518].
Note
For the Mobile App URL and External URL options, not all chart types support using dimensions in
hyperlinks.
3. Select Done.
Results
Image, pictogram, or text Select the element that contains the hyperlink and then Ctrl + Click ( Cmd +
Click ).
Chart For a hyperlink to an external URL, select one data point (for example, one bar in a bar
chart) and then select the hyperlink.
The dimensions represented by that data point are passed to the destination web page.
Add an RSS Reader to your story to present relevant articles from an RSS feed alongside your data and
visualizations.
Procedure
A blank RSS Reader is added to your page and the Builder panel is displayed.
2. Go to the Builder panel and add an RSS URL to the Manage RSS Feeds section.
Example
Title URL
Note
The RSS Reader on the page displays the latest results from the published RSS feed and can be refreshed by
choosing .
Next Steps
You can customize your RSS Reader by selecting it and choosing Designer to open the Builder panel. The
panel will display the following options:
Option Description
Batch Load Number of Articles Specify the number of articles to display per page. The previous and next buttons allow you to change the page.
Show Time Stamp Show or hide the time the article was published.
Truncate Long Articles Select to display only a short preview of the article. Choose
MORE to expand and show the entire article.
Allow Hyperlinking on Article Title If selected, clicking the article title will launch a separate
browser window to display the contents.
You can also change the background color, border, fonts, and icon color themes of the displayed articles.
Embed a web page into your SAP Analytics Cloud story to combine your analytics with additional live, hosted
content.
Context
Once a web page is added to your story, there are a few usage restrictions to consider when viewing and
interacting with the story, or when presenting the story in a Boardroom meeting.
Story Restrictions
Unsupported Features
Embedding another story into the web page widget in a story or digital boardroom is not supported.
Full screen mode is not supported for videos streamed on the web page.
Embedded web pages in a story cannot be exported to PDF. Stories saved to PDF will include a placeholder message where
the embedded web page is located.
Choosing on the widget only refreshes to the original Web Page Address you specified above. If you navigate away from
this page within the widget, you cannot refresh to that new page.
Note
For administrators: Only web pages that are of the same origin (protocol, port, and host) can be used
in a Boardroom agenda (same-origin policy). To allow embedded web pages from a different origin to be
presented in a Boardroom agenda, set up a web proxy to handle requests to and from this web page. Use
this only for websites you trust.
Usage Restrictions
External web addresses are not supported during online meetings (same-origin policy). Contact your administrator if you
see this kind of error message when trying to show an embedded web page during a Boardroom meeting.
Only the meeting organizer can interact with the embedded web page. For example, refreshing or navigating.
Meeting participants may see results different from the meeting organizer on dynamic pages.
Scrolling is not supported in the web page widget. Size the widget appropriately in your story before presenting to meeting
participants.
To embed web pages into SAP Analytics Cloud, the website must grant access to all external sites or
specifically trust the SAP Analytics Cloud domain. Contact your administrator to verify the website you
wish to embed is trusted.
For administrators: Check the HTTP response header of the target website to see if the browser allows
embedding the web page.
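The response headers that most commonly prevent embedding are X-Frame-Options and a Content-Security-Policy containing a frame-ancestors directive. A rough illustrative check over a response-header dictionary (our simplification: a frame-ancestors directive may still allow specific origins, so treat a hit as "needs investigation" rather than a definitive block):

```python
def may_block_embedding(headers: dict) -> bool:
    """Flag headers that commonly stop a page from loading in an iframe."""
    h = {k.lower(): v for k, v in headers.items()}
    # DENY and SAMEORIGIN both prevent cross-origin framing
    if h.get("x-frame-options", "").upper() in ("DENY", "SAMEORIGIN"):
        return True
    # frame-ancestors restricts which origins may embed the page
    return "frame-ancestors" in h.get("content-security-policy", "").lower()
```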
Procedure
A blank web page is added to your canvas and the Builder panel opens and displays the Web Page editing
pane.
2. Add a title and URL, and then click elsewhere in the editing pane.
Example
Note
Web pages using HTTP are not supported; use HTTPS web pages instead.
Results
The Web Page widget on the canvas loads and displays the current URL and can be refreshed by choosing .
To change the web page or its title, select the web page on the canvas and then open the Builder panel.
In SAP Analytics Cloud, set a new chart or table as the initial item that you see when you enter Explorer mode.
Context
When you launch Explorer from the story page, you see the same chart details that are in your page. You can
change the Explorer view to show more detail or different measures and dimensions, and you can set the new
view as your default view.
Note
Explorer is not available when you use a data source that has multiple hierarchies in an account dimension.
Procedure
This setting makes the current view the default view in Explorer Mode.
10. Exit Explorer Mode and then save the story.
Results
Your chart now displays that it has at least one Explorer view. The next time you select Explore from that chart,
Explorer Mode will open with the new default chart or table.
Related Information
Launching the Explorer from a Story Page (Classic Story Experience) [page 1193]
Pagination is designed to display all the data of a table and provide a reporting page-like browsing experience.
In View mode, a dedicated toolbar lets you navigate the report from page to page and quickly access parts of
the report that are of interest to you.
Note
If you're designing the story in Edit mode and want to preview how the report is going to look in paginated
mode, try setting one or more widgets to resize automatically.
Restriction
Don't use this option if you are planning to use a script (Advanced Mode feature) on the same story tab to
switch between visible widgets.
For more information about the script, see Best Practice: Switch Between Chart and Table [page 1778]
1. Select a table.
The canvas generates as many pages as are necessary to display the whole content of these widgets. If a widget spreads across multiple pages, the application renders the data as you navigate through pages.
In both dynamic and fixed canvas, the pagination is generated at view time and duplicates the canvas with the defined size as many times as required to display all data. There is no overlap, and widgets that are set to resize automatically are collapsed.
Pagination is triggered every time it is required: when you're editing a widget and increase or decrease its size,
expand or collapse the widget, add a filter, refresh the data, and so on.
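The resulting page count behaves like a ceiling division of the table's content by what fits on one canvas. An illustrative sketch (our simplification; the real layout engine also accounts for widget sizes, headers, and collapsed widgets):

```python
import math

def page_count(total_rows: int, rows_per_page: int) -> int:
    """Pages needed when the canvas is duplicated until every row is shown."""
    return max(1, math.ceil(total_rows / rows_per_page))

print(page_count(250, 40))  # 7 pages for 250 rows at 40 rows per canvas
```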
Sections are like containers for your SAP Analytics Cloud story.
Sections are useful for reporting, but they're also useful for dashboarding if you're working with a canvas page.
Sections allow you to split report information into smaller, more comprehensible and manageable parts. That's
also a great way to create comparisons quickly by laying out multiple section instances in a paginated canvas.
You can use sections to improve the layout of your stories and break down your analysis per dimension
member.
• The dimension's members are displayed in the section heading, one member per section.
• The widgets in the section based on the same model are filtered on that member.
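Conceptually, sectioning by a dimension splits the data into one filtered slice per member, as this illustrative sketch shows (the rows and member names are hypothetical, not an SAP Analytics Cloud API):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical fact rows for a model: (region, revenue)
rows = [("EMEA", 100), ("APJ", 80), ("EMEA", 60), ("NA", 120), ("APJ", 40)]

# One section instance per dimension member; the widgets inside each
# instance are filtered to that member (illustrative of the behavior only).
rows.sort(key=itemgetter(0))  # groupby requires sorted input
sections = {member: [r[1] for r in group]
            for member, group in groupby(rows, key=itemgetter(0))}

for member, values in sections.items():
    print(member, values)  # e.g. APJ [80, 40]
```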
You can customize sections and section headings in Edit mode in the Styling panel if you’re looking to further
refine the design of your story. Still in the Styling panel, you can also hide the heading and guidelines of the
section.
Restriction
Multiple options are available so you can fine-tune the look of your section. That's especially useful if you must
replicate your company’s corporate look. You can access these options in the Styling panel by right-clicking a
section and selecting Edit Styling. Most of these options are also available in the Story Preferences dialog.
Note
The options available in the Styling panel are contextual and depend on your selection (whether you have
selected the section header or section body).
Along with graphic options, you can also select attributes that impact the behavior of the section. For
example, you can decide whether to paginate the section and have a dedicated page per member, or to browse
members manually using the left and right arrows or the search bar.
Default Body Background: Select a default background color for the section body.
Default Body Border: Select a default border for the section body. You can also edit the Style, Color, and Line Width.
Note
The Corner Radius is only available if you select All Borders.
Show Heading: Show or hide the section heading. The section heading displays the filtered member of the section.
Browse and search instances: Allows you to use the left and right arrows to navigate to different member sections. The navigation wraps around to the beginning when you reach the end of the sections.
Show all instances on pages (default setting): Paginate the section and show each member on a dedicated page.
Start section on new page: You can also select whether you want the section to start on a new page.
Note
When the story is using optimized view mode, you must start each section on a new page.
Default Heading Background: Select a default background for the section heading.
Default Heading Border: Select a default border for the section heading. You can also edit the Style, Color, and Line Width.
Note
The Corner Radius is only available if you select All Borders.
Default Heading Properties: Select a default Font, font Color, and font Size for the section heading.
Additionally, the following options can be found in the Story Preferences dialog:
New pages, lanes, and tiles: Apply styling to new sections only.
Use the comment widget alongside the data point and dimension comments to add more details to your story.
Prerequisites
To add a new comment, view the existing comments, or delete any comment, you must have the following
permissions:
• Read, Create, and Delete permissions for the object-type Comment in the tenant.
• Add Comment, View Comment, and Delete Comment permissions on the model to comment on data
points.
• Add Comment, View Comment, and Delete Comment permissions on the story.
If any permission in the combination is missing, you won't be able to perform the relevant action.
Note
The prerequisites apply to all data point comments on acquired models.
Context
In SAP Analytics Cloud, a comment widget allows you to add comments about your data in the story and also
displays data point comments associated with the model.
You can add comments in the comment widget sourced from calculations defined in the Modeler, including
calculated accounts (accounts with formulas), calculated measures, and conversion measures. The context
of such a comment does not include the specific filters on dimensions or variables used to create the
calculation, but it does include the calculation's ID. Even in other tables and stories that apply the same filter
combinations to these dimensions, the comment won't display unless a calculation with the same ID is
available.
Note
You can’t add comments to a comment widget sourced from calculations defined in the story, except for
restricted measures (or accounts).
When commenting in a comment widget on restricted measures (or accounts) defined in the table, the context
includes the specific filters on dimensions used to create the restricted measure (or account). The comment is
displayed in comment widgets and tables that apply the same filter combinations to the dimension.
For example:
• You put a comment on a restricted measure based on account values that apply a filter on the region
dimension to limit the member to 'North America'.
• The comment is displayed in other tables or stories that apply the same filters to limit the region dimension
to 'North America'.
• The comments in a comment widget are stored against the model and are independent of the story.
• You can add data point comments using the comment widget on models with variables as well.
Variables can be of number, dimension, and currency type.
• When commenting on a comment widget, the data context is taken into account. Only users who share
the data context can view the comment. The data context is composed of the following:
• Comment widget, page, and story level filters.
• Data access control and role-level security.
• When commenting on data points in models that have role-based data security applied through model
privacy and multiple read and write permissions on more than one dimension, consider the following:
• When the Model Data Privacy is enabled, only the owner of the model and user roles that have
specifically been granted access can see the data. For more information, see Learn About Data
Security in Your Model. For information on user roles, see Standard Application Roles.
Restriction
If the model calculation includes the operator "IS CURRENT USER", commenting is not
supported on such data points.
• For a role with multiple read and write permissions, the data aggregation respects the operators
used in the model calculation. If AND is selected, the conditions for all attributes must be met for
the mapping to be applied. If OR is selected, the conditions for only one of the attributes must
be met for the mapping to be applied. Based on the permissions assigned and the resulting data
aggregation, the user can comment on the respective data points. Similarly, based on the read
permissions and the combined data aggregation assigned to the user's role, a user can read the
comments on a certain data point if the data aggregation conditions are met.
• If a dimension property has been used in a filter, which can be a table, page, or story filter, the data
context will take into account the dimension property (and its value) when you add a comment.
• You can add comments in the comment widget for single column dimensions in an analytic model.
• You can add a comment in the comment widget to stories built on SAP BW live or SAP BPC live
connection models.
Restriction
• You can't add comments to a comment widget on stories built on blended models or live data models
(except SAP BW live and SAP BPC live data models).
• You can’t add comments to a comment widget sourced from calculations defined in the story, except
for restricted measures (or accounts).
• Linked analysis is not supported in a comment widget.
• If the text length of a dimension property's value exceeds 4996 characters, comments are not
supported on members with the dimension property set to this value.
1. Create a story.
2. Add a comment widget.
A comment widget is placed on the story page and the Builder panel displays the widget's filters.
b. In an acquired model, select Category, Version, Measure, or Account, and any other filters as per your
preference.
Note
When comments are copied and pasted into the comment dialog from an external text source, rich text
formatting such as Bold, Italics, Underline, Bullet list, and Numbered list is not retained; the pasted text
will be plain text.
For the best comment formatting experience, it is recommended to use the default rich text
formatting options available in the comment dialog.
c. Select Add Comment to post your comment. If you want to cancel adding a comment, select Cancel.
Note
If you are a system administrator, to set or change the comment limit for your model, go to System
Administration System Configuration , and update the value for Limit of comment threads per
Model.
Up to four distinct comment threads can be added to a specific widget in a story. Each comment
thread can have a maximum of 100 comments. As users add multiple threads to a widget, comment
icons will be superimposed on one another. The top icon is associated with the most recent thread.
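The thread and comment limits in the note can be sketched as a simple capacity check (illustrative Python, using the default limits; an administrator can change the thread limit as described above):

```python
# Default limits from the note above (illustrative constants)
MAX_THREADS_PER_WIDGET = 4
MAX_COMMENTS_PER_THREAD = 100

def can_add_comment(threads, thread_index=None):
    """thread_index=None means starting a new thread (hypothetical helper)."""
    if thread_index is None:
        return len(threads) < MAX_THREADS_PER_WIDGET
    return len(threads[thread_index]) < MAX_COMMENTS_PER_THREAD

widget_threads = [["first comment"], [], [], []]  # already 4 threads
print(can_add_comment(widget_threads))     # False: no fifth thread allowed
print(can_add_comment(widget_threads, 0))  # True: thread 0 still has room
```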
Results
The comment widget is added to the story and is now displaying the comments based on the defined filters.
There are a number of ways you can use the comment widget to collaborate with other users, share
insights, or bring attention to key data in your story.
• In the comment widget, select (add comment) to add another comment to the thread.
• To like a comment you agree with, you can use the ( ) functionality.
• To edit an existing comment in the comment widget, select (Edit Comment), enter your comment in the
comment editor and select Save Changes to submit the comment.
The comment is updated to the text you just submitted and the existing comment turns into an entry at
the top of the comment history.
To show the history of a comment, select the comment, select (Show/Hide History) in the comment
widget.
• To delete all the comments in the comment thread, select (Delete Comment Thread) in the top right
corner of the comment widget.
• When you export a story as a PDF, PPTX, or Google Slides file, comments in the comment widget in the
visible parts of the story are also exported.
Note
• Comments on SAP BW and SAP BPC live data are not collaborative. For a comment widget, only one
comment is maintained, with its history displayed if desired.
• To edit an existing comment in the comment widget, select (Edit Comment), enter your comment
in the comment editor and select Save Changes to submit the comment.
The comment is updated to the text you just submitted and the existing comment turns into an entry
at the top of the comment history.
• To show the history of a comment, select the comment, select (Show/Hide History) in the
comment widget.
The comment history, which is indented, is shown below as a list of previous comments, with the
most recent comment at the top. To hide the history, select (Show/Hide History) again.
Related Information
You can add, view and manage your comments on a widget in stories.
• Chart
• Table
• Geo map
• Image
• Shape
• Text
• RSS reader
• Web page
• R visualization
• Clock
Note
In Optimized Story Experience, you can disable commenting on a widget. At edit time, go to the Quick
Menus section of its Styling panel, and deselect Comment.
To add comments to a widget, from its (More Actions) menu, select Add Comment.
To show or hide all comments, from the view time toolbar, select Display Comment Mode.
Related Information
In SAP Analytics Cloud stories, optimize your story building performance by reducing the number of queries.
By default, every modification to your table or chart from the builder panel sends a query to the server,
resulting in an immediate update to the table or chart. For example, when adding or removing a dimension or
measure, you need to wait for the chart or table to be updated before making another change. Depending on
the size of the model, the response time can be slow; this can put a heavy load on the server.
To reduce the number of queries being sent, enable the following option in your model: Optimize Story Builder
Performance.
Once this option is enabled, when you use the model in a story you will see a warning message at the top of the
Builder panel stating that the data source doesn't automatically refresh.
When the optimization option is enabled, and you modify objects in the Builder panel, you will see a refresh
button ( ) in the data source warning message and the main part of the story page is grayed out (has a gray
overlay).
To refresh the data (apply changes to chart or table), do one of the following:
• In the data source warning message in the Builder panel, click the refresh button ( ).
• On the story page, click the chart or table widget.
Either of these actions processes the query (removes the gray overlay) and updates the chart or table widget.
Restriction
In some situations, clicking the story page or the chart or table widget removes the gray overlay, but
doesn't update the data, or you may not see the gray overlay. (For example, when you've added a
dimension input control in the Builder panel.)
In those situations, go to the warning message at the top of the Builder panel and click the refresh button
( ).
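The behavior described above resembles a deferred-refresh pattern: changes accumulate locally, and a single query is issued only when a refresh is explicitly requested. A minimal sketch of that idea (the class and counters are hypothetical, not the SAP Analytics Cloud implementation):

```python
class DeferredBuilder:
    """Accumulate changes; send one query on refresh instead of one per change."""

    def __init__(self):
        self.pending = []
        self.queries_sent = 0

    def modify(self, change: str):
        # No query is sent yet; the widget stays grayed out until refresh.
        self.pending.append(change)

    def refresh(self):
        if self.pending:
            self.queries_sent += 1  # one combined query for all pending changes
            self.pending.clear()

b = DeferredBuilder()
b.modify("add dimension: Region")
b.modify("remove measure: Cost")
b.refresh()
print(b.queries_sent)  # 1 query instead of 2
```

Without the optimization, each `modify` call would trigger its own query; with it, the server sees one query per refresh.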
Related Information
These tips will help you discover ways to optimize the performance of your SAP Analytics Cloud story.
You want your SAP Analytics Cloud story to run smoothly, to have the best performance. The following tips will
help optimize your story's performance.
Topic Sections
When your story has a lot of pages, adding another page could decrease the story's performance.
For faster performance, you can create a new story or rearrange the story's content to use fewer pages.
If you are planning to rearrange the content, try to limit the number of data-related (process-heavy) widgets on
a page.
Story performance can be impacted when your story has a lot of widgets on a page, particularly data-related
(process-heavy) widgets. For example, you have several charts, tables, or filters on a page. Processing data
from those widgets can slow down the story's performance.
To determine the best performance for a story, the widgets are assigned a unit weight, with the maximum ideal
weight per page set to 5 units.
Geo Map: 1
Table: 0.6
Chart: 0.3
Clock/Image/Shape: 0.1
Text: 0.1
Popup: 0.2
Header: 0.1
Clock and Section on page are not available in Optimized Story Experience.
Example
If you have a page that contains the following widgets, you will be close to the maximum weight and may
start to see performance issues.
2 Geo maps (unit weight 1 each): 2 units
For faster performance, consider changing the types of widgets that you are using or reducing the number of
heavy widgets per page.
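The unit weights listed above make the page budget easy to check arithmetically. A small illustrative helper (the widget mix below is hypothetical):

```python
# Unit weights from the table above (maximum ideal weight per page: 5 units)
WEIGHTS = {"geo_map": 1.0, "table": 0.6, "chart": 0.3, "clock": 0.1,
           "image": 0.1, "shape": 0.1, "text": 0.1, "popup": 0.2, "header": 0.1}

def page_weight(widgets: dict[str, int]) -> float:
    """Total weight for a page, given widget counts (illustrative)."""
    return round(sum(WEIGHTS[w] * n for w, n in widgets.items()), 2)

# 2 geo maps, 2 tables, 4 charts, and a header come close to the limit:
page = {"geo_map": 2, "table": 2, "chart": 4, "header": 1}
print(page_weight(page))  # 4.5 of the ideal maximum of 5 units
```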
Expanded page filters or input controls are convenient because you can quickly select members or search for
members in the control. However, an expanded input control refreshes often and that can affect the story's
performance.
When the input control is collapsed, you must select the control to display the member list before you can
search or make your selections.
Input controls for hierarchy dimensions can also affect the story's performance when expanded, even if they
have fewer members.
If you have high cardinality dimensions, consider manually setting the Top N values rather than using Auto
Top N.
When Auto Top N is used, all the data is transferred from the system before the Top N data is rendered. When
you use Top N, the data is sorted and the Top N values are selected in the system. Only the Top N data is sent.
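The data-transfer difference between Auto Top N and manual Top N can be sketched as follows (illustrative Python with made-up values, not the SAP Analytics Cloud implementation):

```python
def top_n_server_side(values: list[int], n: int) -> list[int]:
    """Manual Top N: the system sorts and sends only the top N values."""
    return sorted(values, reverse=True)[:n]

data = [7, 42, 3, 18, 99, 5, 27, 64]

# Auto Top N (illustrative): the full list of 8 values is transferred
# first, and the top values are then selected on the client.
transferred_auto = list(data)

# Manual Top N: only 3 values cross the wire.
transferred_manual = top_n_server_side(data, 3)

print(len(transferred_auto), len(transferred_manual))  # 8 3
print(transferred_manual)  # [99, 64, 42]
```

For a dimension with millions of members, the difference between transferring everything and transferring only N rows dominates response time.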
Whenever possible, upload background images that are 1 MB or smaller. Larger images take longer to load and
will have an impact on your story's performance.
To get a good quality background image, you could use a compressed web image or you might want to use an
SVG image instead of PNG, JPG, or BMP images.
• Limit the number of charts on the page to enhance readability, decrease the number of backend requests,
and improve performance.
• Avoid charts with more than 500 data points.
• Be aware of progressive chart rendering, which enables chart widgets to display more quickly when a story
is opened a subsequent time.
• Apply the Top N feature to charts.
Tables
• Limit the number of tables on the page to enhance readability, decrease the number of backend requests,
and improve performance.
• Limit the number of cells in tables.
• Apply the Top N feature to tables.
Geo Maps
To significantly improve your story's performance at startup and save system resources, as a story developer,
first consider the following four tips.
For more detailed information, see Optimize Story Loading [page 1186].
For more detailed information, see Use Pause Refresh Options and APIs [page 1742].
For more detailed information, see Use Planning Enabled Property and setEnabled API [page 1719].
For more detailed information, see Use MemberInfo Object in setDimensionFilter() API [page 1823].
Related Information
The optimized story experience improves performance within SAP Analytics Cloud stories.
The optimized story experience enables content within an SAP Analytics Cloud story to load faster, but not
all features and story options will be available when you enable the Optimized Design Experience or Optimized
View Mode features.
These usage restrictions will be removed over time or changed into improvements.
The following topics and sections provide more information for the optimized story experience.
Restrictions, features that are not supported: Optimized Story Experience Restrictions [page 1160]
Enabling Optimized Design Experience or View Mode: Enable Optimized Story Experience [page 1172]
Converting classic responsive pages to optimized ones and what happens after conversion: Convert Classic Responsive Pages in Optimized Story Experience [page 1174]
Backwards compatibility for SAP BW: Backwards Compatibility Guidance for Using Optimized Story Experience with SAP BW [page 1135]
Chart Builder panel improvements in Optimized Design Experience: Chart Builder Panel in Optimized Design Experience [page 1176]
Available Objects panel in Optimized Design Experience: Available Objects Panel (Optimized Design Experience) [page 1183]
Menu options for story and page filters in Optimized Story Experience: Menu Options for Story and Page Filters in Optimized Story Experience [page 1523]
Navigate within stories (Optimized Story Experience): Navigate Within Stories (Optimized Story Experience) [page 980]
The following topics provide other story performance tips and tricks.
Reduce the number of queries sent to the server by changes in the Builder panel: Optimize the Story Builder Performance [page 1126]
Initialize only specific widgets at story startup: Initialize Widgets on Startup (Optimized Story Experience) [page 1187]
General best practices for performance: Best Practices for Performance Optimization in Your Story Design [page 1127]
Upgrading and patching your BW InA interface before using Optimized Story Experience gives you the following
benefits:
For more information on the existing SAP Analytics Cloud patches to apply to your BW system, see
SAP Note 2715030: Considerations when using SAP BW and SAP S/4HANA Live Connections in SAC.
To enable optimized story experience for stories with BW live connections, you should also apply the following
patches:
3049412: BICS INA: Incorrect error message for user with no Authorization for BW Query
3041816: INA: Incorrect parent indexes for flat presentation (value instead of -1)
Restriction
If your story’s onInitialization event script runs during the background validation of Persisted InA,
and you receive an error message about the failed script execution accompanied by a reminder that your
story is still loading or being optimized, Persisted InA is probably invalid. Resaving the story
should resolve this.
In SAP Analytics Cloud, the optimized story experience improves the performance of dashboards (in certain
scenarios) and provides usability improvements.
The optimized story experience has two modes: the design mode (optimized design experience, or ODE) and
the view mode (optimized view mode, or OVM).
To enable either mode in your story, see Enable Optimized Story Experience [page 1172].
For information on the optimized story experience supported and unsupported functionality, see Optimized
Story Experience Restrictions [page 1160].
Topic Sections
The following sections list the improved behavior of features when using optimized view mode.
The following are the current updates to performance for Acquired Data and SAP HANA data sources:
• Improved rendering of what you see when you first open the story, before you do any scrolling. (Improved
rendering doesn't apply when the story contains dynamic variables or forced variable prompts.)
• Improved rendering of large complex hierarchies by loading leaf nodes on demand.
• Model information is downloaded based on user interactions (for example, filter interaction, page switch,
linked analysis, and so on).
• Story designers can change rendering behavior: they can use classic rendering or active viewport
rendering (loading content that is in the visible area of the screen, and only loading new content after
scrolling stops).
The following are the current updates to performance for SAP BW data sources:
Upgrading and patching your BW InA interface before using the Optimized Story Experience gives you the
following benefits:
For more upgrade and patch details, see Backwards Compatibility Guidance for Using Optimized Story
Experience with SAP BW [page 1135].
The following section lists usability and behavior improvements shared between Optimized Design Experience
and Optimized View Mode.
Linked Analysis: Story scope linked analysis (shown as story filter) enabled for charts and geographical visualizations.
Link on Matching Hierarchies (Filtering Across Models): There are performance improvements when filtering across models using links on matching hierarchies.
User-Managed Time Dimensions on SAC Models (Measure Model / Measure Model with an Account dimension): There are behavior improvements when using user-managed time dimensions in optimized stories.
Export to PDF: Exporting to PDF has the following usability and accessibility improvements.
Renamed Measures & Dimensions included in Export to CSV (ALL): Previously, when you renamed a measure or dimension, the new names weren't used in the exports to CSV (scope All).
Horizontal Filter Bar: To see the filters that aren't displayed on the horizontal filter bar, select See More Filters.
Collapsed Input Controls Discoverability: A visual update to story filters and collapsed input controls now makes the dropdown easier to find.
• The dropdown menu has been expanded for better readability and higher
discoverability.
Hierarchical Input Controls: In hierarchies, children are loaded on demand and inactive members are hidden.
• Complex hierarchies (multiple levels deep) now load faster, as children are
loaded on demand when the parent node is expanded.
• Hierarchy members that would result in no data, because other story or page
filters were applied, are now hidden by default.
• Provides better discoverability of the active members. Inactive members can
still be loaded on demand.
• Hierarchical input controls now automatically shift left and right when a
viewer navigates through the input control. The corresponding parent node
of a hierarchy will be displayed at the top.
• Improves overall readability and navigation as we maximize the displayed
members under the corresponding parent node and provide a quick and
easy way to collapse hierarchy nodes.
Filtering Across Models: The overall accuracy of filtering across models is improved:
• Indirect time filters function like direct time filters in the way that they find a
match on a secondary model. This behavior ensures higher accuracy of data
when fiscal time is mapped directly to calendar time.
• Indirect multi-dimension links between level-based hierarchies and non-level-based
hierarchies increase the number of links, resulting in more accurate matches.
• Direct filters that are linked on Description are excluded, which improves
generation of target filters and accuracy of data.
• The Exclude descendants setting is now on by default when creating new
dimension links. This configuration is recommended as it improves performance
and accuracy of data.
BW Link Remapping:
• To ensure that it reflects the actual state of the model in all cases, optimized
view mode will check for an active hierarchy in the model before mapping to
a secondary model.
BW Time Links:
Single Select Page Filter: Single Select Page Filter will not show All as default:
• Story designers can choose to show All using the Show/Hide settings.
Custom Current Date: Story viewers can't delete the custom current date: this prevents them from
accidentally resetting dynamic time filters to the system date.
Child Member Selection in Hierarchies: When you have access to only one member out of multiple members in the
hierarchy and you select that one member, the input control selects only that
child member instead of the parent member.
Data Access Language for Static Member Filter: Data access language for story, page, and local filters is now supported.
• If the story is viewed with a different data access language than the one
saved with the story, the user will see the data in their current data access
language.
• For example, if the story was saved with English as a data access language
but the language was changed to French (Français), the filters will show in
Français rather than English.
To use this feature, the following option must be enabled in the System Administration
configuration settings: Refresh member description for filters.
BW Hierarchy Variables:
• Changing Hierarchy using a variable: The drill is removed, and the filter is preserved
with its token. However, a warning will be displayed to indicate that the
filter might conflict with the new hierarchy.
• Changing Hierarchy using a hierarchy dialog (Flat or Hierarchy): The drill and filter
are removed.
• Changing Hierarchy to Flat using a variable: The drill is removed, and the filter is
preserved.
Input Controls (for SAP BW): Hierarchical input controls will not apply the default anymore. However, the
configuration from the input controls is still applied.
Filter Token for Dynamic Filters (for SAP BW): Dynamic Filter tokens are now selectable, which means that items can be
deselected from the token.
Exit Variables on Dynamic Filters in Local Widget Variables (for SAP BW): When the exit variable is reset, the filter is overwritten.
Opening the variable dialog will now show the exit value. The old behavior can be
restored by disabling exits on the local widget.
Hierarchy variables with story filter or page filter (for SAP BW): When the hierarchy variable is submitted, page filters and story filters remain on
the old hierarchy.
There will be a warning on input controls to make the user aware of the fact that the
filter is NOT on the current hierarchy.
Hierarchies with LinkedNodes (for SAP BW): The linkedNode appears in multiple places, but you can't select or unselect it.
(The rest of the nodes can be selected or unselected.)
• When the dimension is expanded, applying a rank will now flatten the list
while preserving the rank that is applied.
Version:
• The version dropdown list is hidden when Version (Category) is not included
in the Color section.
Data order / Sort / Auto limits: When Time is the outermost dimension, auto-limits are applied to the chart and
a descending order is applied on the Time dimension.
In Optimized View Mode, because the inner dimension ordering isn't changed,
there may be a difference between the visualization that is displayed in Edit Mode
versus Optimized View Mode.
Rank and Sort with Color Dimension: You can group or ungroup color dimensions from other dimensions:
• Ungrouping the color dimension from other dimensions will move the color
dimensions to the columns axis, which will improve the color legend sorting
and allow color dimension ranking to ignore the outer dimension grouping.
• Grouping the color dimension with other dimensions will move the color
dimension back to the row axis, where the sorting of color legend reflects the
data point sort order. The color dimension ranking will be applied per outer
dimension grouping.
Rank on Multiple Dimensions: You can rank your charts on multiple dimensions.
• Rank All Dimensions Individually: Allows the user to apply the same rank
(Top/Bottom N) across all dimensions individually.
• Rank per Dimension Separately: Allows the user to apply a different rank
(Top/Bottom N) per dimension.
• The ranking behavior for BW is now aligned with Live HANA where ranking
with multiple dimensions returns the Top / Bottom N results across the
dataset.
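The difference between ranking across the whole dataset and ranking per dimension can be sketched as follows (illustrative Python with hypothetical region/product data, not the SAP Analytics Cloud implementation):

```python
from collections import defaultdict

# Hypothetical (region, product, sales) tuples
data = [("EMEA", "A", 50), ("EMEA", "B", 30), ("EMEA", "C", 20),
        ("NA", "A", 45), ("NA", "B", 60), ("NA", "C", 10)]

def top_n_across_dataset(rows, n):
    """Top N across the whole dataset (the aligned BW / Live HANA behavior)."""
    return sorted(rows, key=lambda r: r[2], reverse=True)[:n]

def top_n_per_dimension(rows, n):
    """Top N applied within each outer dimension member separately."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[0]].append(r)
    return {k: sorted(v, key=lambda r: r[2], reverse=True)[:n]
            for k, v in groups.items()}

print(top_n_across_dataset(data, 2))  # [('NA', 'B', 60), ('EMEA', 'A', 50)]
print(top_n_per_dimension(data, 1))   # top product per region
```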
Drilling: There are multiple improvements to the drilling capabilities in charts, including
the following:
• When drilling on a REST node, the result set contains the REST node and its
children. The chart only displays the children because we've filtered out the
REST node.
• With the BW-InA Patch: The behavior for Chart drilling on a REST node is
identical to the current behavior.
• Without BW-InA Patch: The chart will display the children in the REST node.
Minimum Drill State: There are multiple improvements to the minimum drill state, including the following.
Charts:
• All errors that make a chart invalid are listed under warnings and errors when
the chart can't be displayed.
• Variance, sort, advanced sort, rank, and reference line will all show independent
messages.
• When you have multiple measures, measures and dimensions are not combined.
Measures are separated according to their own set of required dimensions.
• Only missing / unsatisfied required dimensions are listed.
Geo Visualizations:
• When rendering Partial-Layer bindings, the following layers will always render
with at least a valid location dimension as well as any Color/Size bindings
that satisfy minimum drill:
• Bubble
• Heat map
• Choropleth
• Flow
• For Partial-Layer Satisfaction, only missing / unsatisfied required dimensions
are listed.
• Tooltip measure shows label with warning message.
• Required dimension is satisfied if the Location extended dimension is joined
on a dimension set as required dimension (for example, city name1) for
some measure (for example, CM(city name1)).
Tables:
• For zero suppression on rows, when there is no visible combination left between
the outer dimension and inner dimension, the outer dimension members
are hidden.
• The granularity option is merged into the drill workflow and is added to the
context menu.
Drill Menu
Heatmap chart: Sorting behavior may change depending on what data is included in the sort.
Measure sort: measure sorting uses the data intersections as the sort type, but
the sort is disabled when there are two dimensions.
Dimension sort: break grouping is always disabled when using dimension sorting.
Unbooked data: the heat map will respect the sort that is applied even though
unbooked data (nulls) is present.
• Axis remains aligned when one or more participating charts are interacted
with (for example, Drill, Filtered, and so on).
The rendering behavior has been improved to render the chart with the
correct axis position.
• The viewer is limited to resetting a chart's axis position (if moved) back to the
default position.
• Locking and unlocking an axis is now limited to an edit time option.
• The version will be removed from auto-generated titles (for example, “for
Actuals” is removed).
• The version filter will be displayed the same as other widget level filters,
except that it cannot be deleted.
• When attempting to exclude all data points in a chart, the exclude button is
disabled and shows a helper tooltip.
• Within the Rank (Top N Options) menu, the Version (Category) dropdown
menu will no longer be visible unless Version (Category) is within the color
binding of the visualization.
Auto-Generated Chart Titles: Auto-generated titles in a chart will no longer display the Version (Category). This change is because IBCS no longer requires the versions to be shown in the latest specification.
Negative Percentage Formatting with Parentheses: The Optimized Story Experience now follows IBCS formatting for negative percentages with parentheses:
• IBCS formatting places the number and percentage sign inside the parentheses with no spaces in between.
• For example, the number -21% displayed with parentheses in the non-optimized version could have been displayed as (21) %, (21)%, or (21%); it is now always displayed as (21%).
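The rule above can be sketched as a small formatting helper. This is a hypothetical illustration of the IBCS convention described in the text, not an SAP Analytics Cloud API:

```python
def format_percent_ibcs(value: float) -> str:
    """Format a ratio as a percentage following the IBCS parentheses
    convention: for negative values, both the number and the % sign sit
    inside the parentheses, with no spaces in between."""
    pct = abs(round(value * 100))
    if value < 0:
        return f"({pct}%)"  # e.g. -0.21 -> "(21%)", never "(21) %" or "(21)%"
    return f"{pct}%"
```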
Break grouping interaction with sort and the color dimension: When a chart has a measure, a dimension, and a separate color dimension, applying a sort on the measure (ascending or descending) enables the Break Grouping option. This is because the color dimension is now included within the sort.
Default Sort (for SAP BW): Default sorting rules are respected in Charts:
• Charts will respect the SAP BW default sorting rules: this behavior aligns
with the existing behavior in tables.
• The actions are consolidated into a single menu that is shown based on the
interaction with the data point.
• Tooltip will be shown closer to the position of the mouse, leading to easier
access, a cleaner look, and better readability.
• Additional actions are still available via the right-click context menu.
Dimension Tooltip
The measures listed in the tooltip dimension are the measures associated with the data point that is clicked or hovered on. If there is a color measure, it is included.
Multiple Account Hierarchies: There is a UI enhancement that allows you to apply multiple account hierarchies (MAH) per chart:
• You can define the account hierarchy from a new dropdown menu on the
builder panel, which allows you to easily apply various hierarchies into a
chart.
• There is improved error handling through enhanced error validation – if you
switch the hierarchy to another hierarchy, any member that is unassociated
with the newly selected hierarchy will appear in an error state.
• You can use the undo/redo function with the account hierarchies.
Model Changes: When a data model is changed (for example, removing a dimension), charts will show the following behavior or messages. This applies to any public, live, or embedded model.
• The chart query tries to recover based on the last persisted or cached data
query.
• The chart shows an error when a dimension is removed from the model, and
the error disappears when the dimension is added back to the model.
• When the chart contains a missing dimension, a message banner is shown in
the Builder panel. The message warns you to fix the missing dimension issue
before performing further actions on the chart.
Version Mapping for Key Figure Models (not Account models): Existing charts in stories won't include version filters in the visualization definition or query in the following circumstances:
• Charts were created prior to version mapping being defined in the model.
• You add version mapping to the model, save it, and then re-open the story.
In the optimized story experience, a ghost chart is displayed with the following
warning message:
• You must add the version to the dimension bindings or as a single value filter.
Optimized Presentation Table: The Optimized Presentation table is the only table type supported for the optimized story experience.
Hierarchy Loading in Table: Within a table, the children of a hierarchy are loaded on demand.
Chart Builder Panel: The chart Builder panel has been redesigned to improve the design experience.
These improvements include:
For more information, see Chart Builder Panel in Optimized Design Experience
[page 1176].
• The Geo Builder panel (geo map) has been redesigned to stay aligned with
the design experience of the Chart Builder panel.
Location Clustering
• When creating a Geo Map, Location Clustering automatically turns off in the
Builder panel depending on the number of data points.
• If the number of data points is less than the clustering threshold (5000 is the
default), location clustering will automatically be turned off.
• Users can manually enable location clustering: it will display a Performance
Optimization Tip if the number of data points is less than the clustering
threshold.
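The threshold behavior described above can be sketched as follows. This is a hypothetical illustration: the 5000 default comes from the text, while the function name and signature are invented for this example:

```python
CLUSTERING_THRESHOLD = 5000  # default clustering threshold stated in the text

def location_clustering_enabled(data_points: int,
                                user_enabled: bool = False,
                                threshold: int = CLUSTERING_THRESHOLD) -> bool:
    """Sketch of the described Builder panel behavior: location clustering
    turns off automatically when the data point count is below the threshold,
    unless the user enables it manually (in which case the UI would show a
    Performance Optimization Tip)."""
    if user_enabled:
        return True
    return data_points >= threshold
```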
Empty State of Builder Panel: When no widgets are selected on the page, instead of seeing a single line of text at the top of an empty Builder panel, you now see a placeholder image with the text below it.
Available Objects Panel: The Available Objects panel allows Story Designers to see all measures, calculations, and dimensions. They can easily add objects to the Builder panel using drag-and-drop or the quick action menu.
For more information, see Available Objects Panel (Optimized Design Experience)
[page 1183].
Custom Groups (for SAP BW): You can manage custom groups for a dimension in Charts and Geo:
• For dimensions that have grouping options, the more action icon (…) will now
have a submenu option called Manage Custom Groups.
• Manage Custom Groups allows you to create, edit, or delete a custom grouping.
A new custom group that is created from this submenu will not automatically be
applied to the chart.
Story Preferences and Styling Panel Actions: The following actions can't be used in the classic design experience; they are only available in the optimized design experience.
• New Chart Palettes and Color Swatches are added into the Color Picker to
stay aligned with Fiori Horizon color guidelines.
• Customization in the Styling panel overrides Story Preferences, except for
fonts and color palettes.
• Fonts: if there is a user-defined font color or font family for the visualiza-
tion, updates from Story Preferences for those options will be ignored by
the visualization.
If there is no user-defined font color or font family for the visualization,
the updates from Story Preferences for those options will be applied.
• Color palettes: the last update that was applied - whether from Story
Preferences or the Builder panel - is used.
• The choice to apply story preferences to "All" or to new pages/lanes/tiles is
removed; this behavior aligns with Theme Preferences.
• Because the chart types can have different colors for their default axis lines,
the default axis line color in Story Preferences is now transparent, or no color.
• When Axis Color or the widget background/border is updated in the Styling
panel for a chart, those changes override the story preferences settings; sub-
sequent story preference default changes will not change those properties.
Templates and Layouts:
• With an optimized story, users can apply either a classic template or an optimized template.
• Users can also save an optimized story into an optimized template.
• There are new placeholder widgets.
• Story preferences in optimized design experience can also be used with
templates/layouts.
• Templates created in the classic experience will automatically migrate to the
optimized experience to create a seamless transition for users.
Integrated Stories and Analytic Application Module, and Updated Landing Page: With an integrated story and analytic application module, we now support creating new analytic applications with the Optimized Design Experience option. (This option redirects you to the Stories page.)
Also, the Stories landing page has been enhanced to include the following op-
tions:
• Stories
• Bookmarks List
• Custom Widgets (only visible for story developers)
(The Analytic Applications module is still available from the Home or main menu.)
Conversion to the New Optimized Design Experience: Integrating Stories and Analytic Applications is the next step in the Optimized Story Experience process, and provides the following benefits:
Reference Line - New Aggregation Types: The Reference Line now includes additional aggregation types, such as the following:
Custom Sort: When Filter by Range or Filtering Across Models is detected, the visible section for Custom Sort is ignored to prevent data incorrectness.
Legends Visibility: We've improved the visibility of the legend by allowing users to explicitly show or hide the legend using the Show/Hide options.
Chart Title, Subtitle, and Footer Styling: Chart text styling options are unified with the Text widget. With this, more styling options are available, such as text alignment.
For the footer, the right-click option now shows the Add Dynamic Text option.
Hierarchy & Drilling Appearance: The first hierarchy dimension listed in the dropdown will be chosen as default for non-BW models.
For BW models, the hierarchy must be set in the metadata cube and must not be mandatory for the dimension. For other BW models, the dimension will be shown flat.
In addition, to avoid loading all hierarchies at the same time, the hierarchy icon
may appear after the dimension is selected.
For dimensions that contain no hierarchies, only the Flat Presentation option will
be available in the Set Hierarchy drop-down.
Token Actions and Menu Button Visibility: The Display As option will now apply to the entire dimension. This behavior means that if the dimension is switched from hierarchical to flat, then the Display As option that is set won't change.
Tooltip Measures + Dimensions Entities Limit: There is no longer a limit to the number of entities that can be added to the Tooltip Measure or Dimension section.
Note
As soon as more than 5 entities are added, the performance optimization
tooltip is displayed.
Deletion Handling for Measure and Dimension Input Controls: The deletion of a Measure or Dimension Input Control is now aligned with Calculation Input Controls. This means that any chart that consumes the respective input control will display an error state.
Version Filter Creation/Editing: When creating a new version filter or editing an existing version filter, the dialog that appears always allows multiple selections.
• The All Members option will also appear, allowing the user to create a dy-
namic local version filter.
• The new version filter will use the same validation conditions that apply to
deleting version filters.
Version Filter Deletion: Users now have an option to delete the default version filter in charts. Upon deletion, one of the following version conditions should be satisfied to see the visualization:
If none of the conditions apply, then an error state is presented to the user.
Chart Builder Panel Terminology Alignment: With the support of planning-enabled models in the optimized design experience, the chart builder panel terminology has been updated to reflect more accurate and consistent terminology with the Modeler.
• All account models (Classic Account Model, Model with Multiple Account)
now show Accounts as the primary option for numeric values.
All models with measures (Model with Multiple Measures, Acquired Analytic
or Live Model, Classic Measure Model) will continue to show Measure as the
primary option for numeric values.
Hover tooltips in view mode will also reflect this terminology change.
Show / Hide Options: Chart details that were displayed in the subtitles are now displayed in the Applied to Chart dropdown menu:
• Show/hide options for individual chart detail items are removed (model,
currency, drill, filter, rank, variance details, bin token, null token, prompts,
explorer view, and BW conditions).
• The options to show/hide warnings or errors on the chart are now in the
Font Reset on Numeric Point Chart: When you reset the font for a Numeric Point Chart in the Styling panel, and there is only one modified element left, the chart is put back into a dynamic font size state.
Input Control Dropdown Options: The size of the selection token beside the description remains consistent with the size of the text with restyle.
Chart Local Filter – Select All Members: New improvements for the chart local filter include:
• Like story and page filter, designers will be able to select all members in
the chart local filter. The filter selection is dynamic: this behavior means
that if any new member is added to the flat dimension or version, it will
automatically be included as a part of the applied local filter.
Direct filtering of data points on the chart will override the dynamic local
filter created in the builder panel.
Bookmark: Bookmarked information is stored and saved based on user interaction with the story.
The next time the story viewer opens the saved bookmark, the bookmark is
updated based on the following:
• Story Filter A: Appears in the bookmark as the viewer had interacted with it
to narrow down the search results when creating the original bookmark.
• Story Filter B is removed from the bookmark, as it was not interacted with by the story viewer and was removed by the story designer.
Had the story viewer made changes to this story filter, it would not have been removed from the bookmark, like filter A.
• Story Filter C appears in the bookmark.
Even though the viewer had not interacted with it, no changes were made by
the designer that impacted story filter C.
Blended Calculations and Thresholds in Blended Visualizations: There are several behavior changes for blended visualizations that use Blended Calculations and Thresholds.
• There is now a warning for data inaccuracies when Blended Calculations are
used in Linked Analysis while Filtering Across Models.
• In the Classic Design Experience, required dimensions that are linked dimensions are not available for out-of-context blends.
In the Optimized Story Experience, required dimensions that are linked dimensions are available for out-of-context blends.
• In the Classic Design Experience, adding a Blended Calculation to a non-
blended chart will automatically add all the required data models and auto-
matically blend the chart.
In the Optimized Story Experience, adding a Blended Calculation to a non-
blended chart will no longer automatically blend the chart.
The Lite Viewer is an extension of the Optimized Story Experience. It is a viewer that is designed to load the
lightest version of the application.
While it improves the performance of stories that are opened through a URL, it does not support all the
features that are currently offered in the Optimized Story Experience [page 1133].
Listed below are the feature sets that will automatically redirect you to the full Optimized (Unified) Experience
viewer. They will have to be removed to view the story in Lite Viewer.
Legacy Bookmarks: You are automatically redirected to the full Optimized (Unified) Experience viewer. You are required to recreate the bookmark (in Lite Viewer or Optimized Story Experience).
Classic Stories or Analytic Applications: You are automatically redirected to the full Optimized (Unified) Experience viewer. You are required to convert the asset into the Optimized Story Experience. For more information on how to enable Optimized Experience, see Enable Optimized Story Experience [page 1172].
Classic Stories with Optimized View Mode Enabled: You are automatically redirected to the full Optimized (Unified) Experience viewer. You are required to convert the story to the Optimized (Unified) Experience.
Unsupported Modes
Stories opened in either <mode=view> or <mode=present> will launch in the full Optimized Story Experience
Viewer. The Lite Viewer does not currently support these modes. Dashboards must be opened via a URL while using <shellMode=embed>. For example, a URL must be adjusted to:
https://<TENANT>/sap/fpa/ui/tenants/<TENANT_ID>/app.html#/story2?shellMode=embed&/s2/<STORY_ID>
Feature Impact
<mode=view> View Mode: You are automatically redirected to the full Optimized (Unified) Experience viewer.
<mode=present> Present Mode: You are automatically redirected to the full Optimized (Unified) Experience viewer.
Listed below are the feature sets that are not available in Lite Viewer. These features are only available in the full
application.
Feature Impact
Comment Mode (excluding data point commenting): The option to activate comment mode is disabled. You will not be able to view or add comments to story objects such as pages, charts, tables, and so on.
Export to PDF with comments: The Comments option in the export to PDF feature is disabled.
Add Story Filter / Prompt: The Add Story Filter / Prompt option is disabled.
View Conditions for Advanced Filters: The view conditions for Advanced Filtering option are disabled.
Hyperlink – Story / Page (via Table): Selecting the Hyperlink option is disabled.
Export in the Background: The Enable Export in the Background option is disabled.
Export to PDF (via Table): The Export Table to PDF option is unavailable.
Below are the features that can be added to a dashboard, which are unsupported in Lite Viewer. When
these features are detected, the story will display in the full Optimized (Unified) Experience viewer. These
unsupported features need to be removed to open the story in Lite Viewer.
Unsupported Models
Data Models
Model
Geo Visualization
RSS Reader
Web Page
R Visualization
Planning Actions:
Commenting Widget
Script APIs
Table (Forecasting)
Listed below are the feature sets that have known issues when used in Lite Viewer. These features may not
work as intended.
Feature Issue
Measure or Dimension Input Control Used in Table with Bookmarks: Table won't reflect the state of the Input Control.
Dynamic Text on Teams Signature (includes Export to PDF): Dynamic Text on Team Signature will appear as Loading…
Decimal Places in Table: Decimals may not be rounded off as they are in the full Optimized Story Experience.
Image as Background and in Charts (Fullscreen mode): Background image will appear multiple times.
Advanced Filter and Chart Local Filter on the Same Dimension: Chart won't render in Lite Viewer.
Navigate Back to Story from Dashboard: Navigating back to the previous story may not return you to the source story if the target story automatically redirected to the full viewer.
Custom Fonts with Map Labels: Custom Fonts for Map Labels are not supported with ESRI 4.X. The Arial font will be used instead.
In SAP Analytics Cloud, while the optimized story experience improves the performance of dashboards (in
certain scenarios), it doesn't include all the features that are currently supported in the non-optimized (classic)
experience.
The optimized story experience has two modes: the design mode (optimized design experience, or ODE) and
the view mode (optimized view mode, or OVM).
To enable either mode in your story, see Enable Optimized Story Experience [page 1172].
For information on behavior improvements, see Optimized Story Experience Improvements [page 1136].
For more information on the optimized story experience, see Optimized Story Experience [page 1133].
The following sections provide the current optimized story experience restrictions.
The following features are supported in optimized view mode, optimized design experience or both modes
(optimized story experience).
• Variance as a Waterfall Chart (for Bar-Column & Horizontal Waterfall) (new feature)
• Support of Version and Time Dependent Hierarchies (for SAP BW) (new feature)
• Dynamic Time Calculations with Restricted Measures (for SAP BW) (new feature)
• Version Sort in Table (new feature)
• Save Story without Pre-Selecting Mandatory Variable Prompts (new feature)
• Dynamic Text – Geo Local Filters
• Exception Aggregation for Cross Calculations (excludes SAP BW) (new feature)
• Apply Story, Page, Local and Linked Analysis Filter with Internal (Page or Story) Hyperlinks (new feature)
• Using a Text Pool Technical Object to Translate Texts in Scripts (new feature)
• Input Control APIs to Get and Set Unbooked Members (new feature)
The following features may require specific actions before they can be used in the optimized story experience.
Story Editing • If your story contains scripts, it's read-only for story designers.
To be able to edit the story, the story designers need to be assigned to the Application
Creator role.
• If you directly open the story in view mode and switch to edit mode, the story needs to
be reloaded.
• If you change the script variable values in view mode, they won't be kept when you
switch back to edit mode.
• Any actions performed on a widget, popup, or page before it's completely rendered
can't be undone, such as deleting a chart.
Story Viewing • Once a script is executed, the Undo and Redo history is cleared and restarts.
• If your story contains any popups, the Save As… action is not available as this may
break them.
Responsive Page • Not all features are available to responsive pages converted from Classic to Optimized
Story Experience. For more information, see the section Responsive Pages: Feature
Compatibility of Responsive Layout after Conversion from Classic to Optimized Story
Experience in Story Pages [page 1026].
• If your optimized stories contain classic responsive pages, you can only add classic responsive pages, even after conversion.
Reporting Layout Once you've enabled reporting layout, you won't be able to revert back to the previous
layout. In addition, some features aren't supported. See Create a Reporting Layout with
Header, Footer and Margins [page 1038].
Input Control • The following input control types aren't supported on Linked Widgets Diagram: Account
Input Control, Dimension Input Control, and Cross Calculation Input Control.
• Input Control doesn't fully support theming capabilities.
• After converting an analytic application to optimized story, you can no longer set styling
respectively for different types of Input Controls in theme preferences.
Filter Line • In Group Filter mode, dynamic time range filters don't align with the system time when
it’s changed.
• Filters in Group Filter mode aren't shown in the appendix of the PDF export triggered by
the table’s context menu.
Export • If you've included comments in PDF export, page comments and table dimension
comments won’t be exported.
• If you want to export a table to Excel or CSV using script API, the table needs to be on
the current active page.
• Exporting widgets to CSV or tables to XLSX only supports Point of View as scope, while
All isn't supported.
Scripting and APIs • The scripts on one page can't access the widgets and popups from others.
• Value help for dimensions, measures, and all their members isn't supported for geo map related scripting, but you can manually input the parameters.
• Customized dimension descriptions in memberInfo won't be shown on tables.
• The onResultChanged event isn't triggered when the chart or table is invisible and
set to Refresh Active Widgets Only. When the widget becomes visible, the necessary
query is sent and onResultChanged event is triggered.
• The onSelect event is only supported for dimension member input controls and
calculation input controls, while not supported for other types, such as range filters and
dimension input controls.
• The input control APIs, getActiveSelectedMembers,
setSelectedMembers, isAllMembersSelected and
setAllMembersSelected, are only supported for dimension member input con-
trols and calculation input controls.
• The setSelectedMembers and setSelectedMembersWithUnbooked APIs
don't support selecting input control members whose IDs are empty strings.
• When triggered by setVariableValue API, the variable dialog doesn’t display
member description for charts and tables.
• In view mode, setting multiple values on tables and R visualizations using an API like
setDimensionFilter changes the selection mode of a filter even if it was set to
Single Selection in edit mode.
For details of the features that are not supported for the optimized story experience on mobile devices, please
see the following help content: Preparing Stories for Mobile [page 90]
Some features may not be supported in the optimized story experience, or at least not supported right away.
There are different categories of unsupported features:
• Features that block conversion: if your story contains a feature that blocks conversion, you won't be able
to save or convert the story to the Optimized View Mode, Optimized Design Experience, or Optimized Story
Experience.
You will have to remove the features from your story before you can proceed.
X means that the feature is not currently supported in the optimized story experience for the mode.
Batch Export with Story Filter: Unsupported feature; it will remain disabled. (View mode: X; Design mode: N/A)
Device Preview (includes Font Scaling): Unsupported feature; it will remain disabled. (View mode: X; Design mode: X)
Export to XLS using Table (Scope All): Unsupported feature; it will remain disabled. (View mode: X; Design mode: X)
Hyperlink Support for Chart Title and Footer: Unsupported feature; it will remain disabled. (View mode: X; Design mode: X)
PDF Export with Scaled Measure Across Pages: Unsupported feature; it will remain disabled. (View mode: X; Design mode: N/A)
Pin to Home (Replace with Watchlist): Unsupported feature; it will remain disabled. (View mode: X; Design mode: X)
Replace an Acquired Data Model in a Story: Unsupported feature; it will remain disabled. (View mode: N/A; Design mode: X)
SAP HANA Models with Load Parent-Child Hierarchies Independent of Metadata: Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
Section Widget (includes PDF Export with Report Pages and Repeatable Group Widget with Pagination): Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
X means that the feature is not currently supported in the optimized story experience for the mode.
Blended Visualizations (for Chart) (for SAP BW Data Models): Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
Blended Visualizations (for Chart) (for SAP Datasphere Data Models): Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
Blended Visualizations (for Table) (for all data sources): Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
Chart Design Pause Mode (Optimized Story Building Performance): Chart will refresh with every interaction. (View mode: N/A; Design mode: X)
Heatmap Chart: Sort Measures (When Chart has Two Dimensions): Unsupported feature; it will remain disabled. (View mode: X; Design mode: X)
Table – Update Hierarchy for Multiple Account Hierarchy Data Models: Unsupported feature; it will remain disabled. (View mode: X)
Time Series Chart with Timestamp Dimension: Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X)
X means that the feature is not currently supported in the optimized story experience for the mode.
• GeoMap.setContextMenuVisible()
• GeoMap.setQuickActionsVisibility()
• GeoMap.getLayer().getDataSource().getMemberDisplayMode()
• GeoMap.getLayer().getDataSource().setMemberDisplayMode()
• GeoMap.getLayer().getDataSource().getDataExplorer()
• GeoMap.getLayer().getDataSource().getDimensionProperties()
• GeoMap.getLayer().getDataSource().setHierarchy()
• GeoMap.getLayer().getDataSource().expandNode()
• GeoMap.getLayer().getDataSource().collapseNode()
Chart: getEffectiveAxisScale (View mode: X; Design mode: X)
Some features won't be available in the optimized story experience. The story designer will need to change or
remove the following features before they can save or convert their story to the optimized story experience.
X means that the feature is not currently supported in the optimized story experience for the mode.
Each entry below lists the feature, its impact, its support in View Mode and Design Mode, and the action to take before conversion.
Chart Design Pause Mode (Optimize Story Building Performance): Chart will refresh with every interaction. (View mode: N/A; Design mode: X)
Controls Panel: Unsupported feature; it will remain disabled. (View mode: X; Design mode: X) Action: The controls panel has been replaced with the vertical filter panel.
Digital Boardroom and Properties: In the Optimized Story Experience, Digital Boardrooms that reference stories will be in a broken state. (View mode: X; Design mode: X) Action: Digital Boardroom will be integrated into stories. Available view time properties can now be configured for stories.
Explorer, and Explorer-related APIs: Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Recommendation: remove Explorer from existing stories; it will be replaced by Data Analyzer.
Export in Design Time: Unsupported feature; it will remain disabled. (View mode: N/A; Design mode: X) Action: The feature is available in View mode, but not in Design mode.
Grid pages: Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Delete Grid pages before converting your story.
Grouping: New groups can't be created in the Optimized Story Experience. You can ungroup existing groups after conversion. (View mode: N/A; Design mode: X) Action: There is no action required, as the group will be maintained with the conversion.
Input Task: Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Delete the input task.
Legacy Value Driver Tree: Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Use the Builder panel to transform objects to new Value Driver Tree objects.
Linked Analysis: Optimized Presentation Table (Table to Control Measures): Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Recommendation: use the onSelect() for Table.
Local Range Filters with Unrestricted Drilling Off (Chart, Table, Reference Line): Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: The Unrestricted Drilling option needs to be enabled in the Edit Filter panel.
Tip
In the Classic Design Experience, you are warned which objects are impacted when you switch to edit mode.
Non-Optimized Presentation Table: Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: In the Builder panel, enable the Optimized Presentation table.
Tip
In the Classic Design Experience, you are warned which objects are impacted when you switch to edit mode.
Partial Range Filters (Story Filters, Page Filters): Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Reset the selection of the Range Filter to default.
Predictive Forecast (in tables): Unsupported feature; it will remain disabled. (View mode: X; Design mode: X) Action: Recommendation: use predictive scenarios (for example, Predictive Planning) to generate predictive forecasts for planning models.
Search to Insight: Unsupported feature; it will remain disabled. (View mode: X; Design mode: X) Action: It is being replaced by JustAsk (for Cloud Foundry tenants).
Smart Discovery and Smart Discovery APIs: Feature will be permanently lost after conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Recommendation: remove Smart Discovery related widgets before migration.
Thresholds in Tables (New Model Type that has an Account Dimension): After conversion, threshold results may be corrupted or inaccurate. (View mode: N/A) Action: Recommendation: remove thresholds from tables before converting your story to the optimized design experience.
Unsupported SAP BW Versions: Blocks the conversion to the Optimized Story Experience. (View mode: X; Design mode: X) Action: Recommendation: keep the BW version up to date with the latest version and patches. That will help you take advantage of performance and functional improvements.
Unsupported SAP HANA Versions: Some features may not work in an optimized story. (View mode: X; Design mode: X) Action: Recommendation: keep the SAP HANA version up to date with the latest version. That will help you take advantage of performance and functional improvements.
Related Information
The optimized story experience enables content within an SAP Analytics Cloud story to load faster.
Note
Before using the optimized story experience with SAP BW, please verify that you have applied the
applicable upgrades and patches. For more information, see Backwards Compatibility Guidance for Using
Optimized Story Experience with SAP BW [page 1135].
With either approach you can use the classic view mode or enable the optimized view mode. (See How to
Enable the Optimized View Mode [page 1173].)
Note
The conversion to the Optimized Design Experience is permanent and any unsupported features will be
permanently lost.
You have two options for converting your story to the optimized design experience:
Note
The dialog that appears shows the features that will be permanently lost (unsupported or deprecated
features) or the blocking features that will prevent you from converting your story.
• A message appears notifying you that the story is being converted.
• After conversion is successful, a message appears and the design experience story starts to load.

1. Provide a name for your optimized story, or accept the default suggested name.
2. Select OK.

The story is cloned, and a message appears when the cloned story is converted.
Note
If your story has a recurring publication scheduled
before the conversion, when you view or edit this
publication in Calendar after the conversion, the file
settings and story view selection are still from Classic
View Mode. Therefore, we recommend deleting the
scheduled publication and rescheduling one after the
conversion.
The Optimized Design Experience dashboard opens, and you can start adding visualizations.
Remember
Optimized view mode is not enabled by default, and you need to go into each story separately to enable this
view mode.
• Use the Save menu: select (Save) Enable Optimized View Mode .
• Use the Story Details:
4. To save your story, from the file menu select (Save) Save .
However, if the story can't be optimized, or if some items can't be included, a warning message is displayed
explaining the details. Review the details in the warning and, if possible, resolve the issues.
Note
After you enable or disable Optimized View Mode for a story that’s already been scheduled for a recurring
publication, in Calendar the file settings and story view selection for this publication are still from the
previous mode. In such cases, we recommend deleting the scheduled publication and rescheduling one
after the conversion.
As a story designer, you can convert classic responsive pages to optimized ones so that you can use more features on responsive pages that are exclusive to Optimized Story Experience. This topic also describes what happens after the conversion.
When you convert your story to Optimized Story Experience, all the classic responsive pages are converted to
optimized ones.
For stories converted to Optimized Story Experience before version 2023.19, you may have legacy classic
responsive pages. For supported and unsupported features, refer to the section Feature Compatibility of Classic
Responsive Pages in Optimized Story Experience in Story Pages [page 1026]. To use more features exclusive to
Optimized Story Experience, you can convert all the classic responsive pages to optimized ones by selecting
Convert Responsive Pages from Edit in the edit time toolbar.
Note
The conversion is one way and applies to all the classic responsive pages.
Note
The converted responsive pages might not be identical to the original ones. For details, see the following section.
Therefore, we recommend using the device preview bar to test your story and configuring responsive rules to
adjust the layout accordingly after the conversion. For more information, refer to Design Responsive Layout
(Optimized Story Experience) [page 1624].
Here are general descriptions of what happens on different devices after classic responsive pages are converted to optimized ones:
Devices of width between 768px and 1024px: Lanes don't reflow in optimized responsive layout, so lane arrangements might be different in some special cases. You might need to adjust the widget positions.

Devices of width between 480px and 768px: Devices of such sizes aren't listed in the device preview bar, but you can add a custom device and define responsive rules for it.

Tablets (iOS, Android):
• For large tablets of width more than 1024px, no major adjustment is needed.
• For small tablets of width no more than 1024px, lanes don't reflow in optimized responsive layout, so lane arrangements might be different in some special cases. You might need to adjust the widget positions.

Phones (iOS, Android): There's a default responsive rule where widgets auto-flow by row. Each widget occupies the full width like before, while the height might be slightly different. You can further adjust and customize the rule.
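The width ranges above act like ordinary responsive breakpoints. As a product-independent sketch only, mapping a device width to the behavior described above might look like this (the function and rule descriptions are illustrative, not a product API):

```python
def layout_behavior(width_px: int) -> str:
    """Illustrative mapping of device width to the converted-page behavior
    described in this section. Not an actual product setting."""
    if width_px < 480:
        return "phone: widgets auto-flow by row at full width"
    elif width_px < 768:
        return "custom device: not in the default preview bar; define your own rules"
    elif width_px <= 1024:
        return "small tablet: lanes do not reflow; check widget positions"
    else:
        return "large device: no major adjustment needed"

print(layout_behavior(390))   # a typical phone width
print(layout_behavior(1366))  # a typical laptop width
```

This is only meant to make the breakpoint boundaries in the table concrete; in the product you configure the equivalent behavior through responsive rules in the device preview bar.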
Header Widgets
The height and font might be slightly different compared to classic responsive layout. The height won’t be
automatically resized based on contents. You can define responsive rules for different devices to adjust the
height.
The chart Builder panel in SAP Analytics Cloud’s Optimized Design Experience provides features for working
with data in your chart or customizing your chart’s appearance.
The chart Builder panel is organized into different sections where you can configure your data source,
choose your chart type, add and remove dimensions, and so on.
The following diagram displays the chart Builder panel features available in Optimized Design Experience.
Diagram Legend
Key Feature
In Optimized Design Experience, hovering over or selecting the data source name will show the Data Source
ID and Last Updated information in a tooltip. The tooltip also displays the description and file path under Data
Source Details.
Note
The header section of the Builder panel containing the data source information stays fixed at the top of the
Builder panel. Even as you scroll down the panel, you can still view this section at the top of it.
For more information on data sources, see About Adding Data to a Story [page 1054].
The Available Objects panel allows story designers to see all accounts/measures, calculations, input controls,
and dimensions.
To open the Available Objects panel, you can do one of the following:
• You can select the Available Objects button in the Builder panel.
• You can select Open Available Objects in the dialog that appears in the Builder panel when you are
adding objects such as accounts, measures, or dimensions.
For more information on the Available Objects panel, see Available Objects Panel (Optimized Design
Experience) [page 1183].
In Optimized Design Experience, there is a dropdown menu that shows all the available chart types. This
dropdown menu is found under Currently Selected Chart in the Builder panel.
You can see the chart category (comparison, trend, distribution, correlation, indicator, more) in the tooltip
that appears when you hover over a chart option in the dropdown. For more information on chart types, see
Choosing the Best Chart Type for the Data [page 1204].
Additionally, ghost charts, which are grayed out versions of the charts, display a visual representation of the
selected chart type in the widget. The chart orientation is also reflected in the ghost chart.
Example
You can see a ghost chart when you have a chart with no added measures.
Chart Orientation
For specific chart types, such as Bar/Column, Box Plot, Waterfall, and so on, you can change whether you want
your chart to be displayed in a Vertical or Horizontal orientation.
Tip
Depending on your chart type, you may see other options such as Y Axes Overlap or Show Chart as 100%.
For more information, see Choosing the Best Chart Type for the Data [page 1204].
Required fields are highlighted with a color and dashed border in the Builder panel.
When adding objects such as accounts, measures, or dimensions to your chart using the Builder panel, a
dialog appears. (In this topic, accounts and measures are referred to as accounts.) The dialog will open and
expand to take up the height of the Builder panel for better viewing.
• – Versions dimension
• – Date dimension
• – Generic dimension
• – Hierarchical dimension
In the dialog and in the Available Objects panel, there are sections for account input controls, accounts,
dimension input controls, dimensions, and calculations.
With the support of planning-enabled models in the optimized design experience, the chart Builder panel terminology has been updated to be more accurate and consistent with the Modeler.
• All account models (Classic Account Model, Model with Multiple Accounts) now show Accounts as the
primary option for numeric values.
• All models with measures (Model with Multiple Measures, Acquired Analytic or Live Model, Classic Measure Model) will continue to show Measure as the primary option for numeric values.
Hover tooltips in view mode will also reflect this terminology change.
In the Color section of the Builder panel, you can assign colors for individual values, synchronize colors
across charts, or change the color palette for your charts.
For more information, see Changing Color Palettes and Synchronizing Colors Across Charts [page 1273].
You can also use the color section to change patterns or colors for version dimensions in your charts.
Filters
In the Filters section, you can create filters on your chart data. By choosing + Add Filters, you can select
dimensions and create a filter.
Chart Add-ons
The Chart Add-ons section lists configurations that you can add to your chart.
Note
Depending on your selected chart type, the features that are available in the Chart Add-ons section may
vary.
• Linked Analysis: You can create a linked analysis to drill through hierarchical data or create filters that
simultaneously update multiple charts in your story. For more information, see Creating a Linked Analysis
[page 1101].
• Variance: Adding a variance in your chart can show you the difference between versions of an account
or the difference between time periods. For more information, see Adding Variances to Charts [page 1308].
• Threshold: You can add thresholds to your stories or individual accounts in a chart. For more
information, see Using Thresholds in Charts [page 1294].
• Trellis: Adding a trellis chart provides a grid of small charts for comparison. For more information, see
Use Trellis Charts for Comparison [page 1233].
• Tooltip: You can include additional information in your tooltips for accounts or dimensions that are
not in your chart. For more information, see Include Extra Accounts, Measures, and Dimensions in Tooltips
[page 1234].
• Hyperlink: You can add a hyperlink to another story or page, or to an external URL. For more
information, see Linking to Another Page, Story, or External URL (Classic Story Experience) [page 1108].
• Cross Calculations: This feature is currently not supported in Optimized Design Experience.
Chart Properties
Zero Suppression
In the Chart Properties section, you choose whether you want to enable Zero Suppression. If you enable Zero
Suppression, you can hide zero or null values in your selected chart.
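Conceptually, zero suppression amounts to dropping data points whose value is zero or null before rendering. A minimal illustration (the sample data and function are invented for this sketch, not part of the product):

```python
# Sample data points: (member, value); None stands in for a null value.
data = [("Jan", 120.0), ("Feb", 0.0), ("Mar", None), ("Apr", 87.5)]

def suppress_zeros(points):
    """Drop points whose value is zero or null, as zero suppression does."""
    return [(member, value) for member, value in points
            if value not in (0, None)]

print(suppress_zeros(data))  # [('Jan', 120.0), ('Apr', 87.5)]
```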
Arrange Totals
By default in a new story, the totals data point is last in the chart.
• Under Initial Chart View, you can choose one of the following options:
• Open in last saved view
• Open in latest data
• Under Time Axis Settings, you can select whether you want to Collapse dates with no data in your chart.
• You can enable the Variance Waterfall option if you want to show comparison values between dimension
members.
• Under Waterfall expansion, you can select or deselect the following options:
• Start first measure from the baseline
• Place subtotal parent nodes first
Related Information
The Available Objects panel in SAP Analytics Cloud displays objects that you can add to your chart.
The Available Objects panel allows you to see all measures, calculations, input controls, and dimensions of your
data source. This panel provides an overview of all available objects to help you explore and decide on the
objects you want to add to your chart.
Note
Currently, the Available Objects panel can only be used for charts.
You can access the Available Objects panel in one of the following ways:
The following diagram displays the Available Objects panel features in Optimized Design Experience.
2 You can choose whether you want the Description, ID, or ID and Description to be displayed
for all objects in your Available Objects panel. To configure this, you can select
Note
In the Builder panel, your added objects will show the description of the object in
the chart by default.
3 To search for a particular object, you can use the search bar at the top of the panel.
4
This section contains available accounts ( ) or measures ( ). It also contains subsections for Calculations and Account Input Control (Measure Input Control).
Tip
You can create calculations or input controls within their respective sections of this
panel.
You can easily add objects from the Available Objects panel to the Builder panel
using drag-and-drop or the (quick action) menu. You can rename your object
through its quick action menu.
5 You can resize the width of the panel by dragging the left edge of the Available Objects panel.
6 This section contains available dimensions. This section contains a subsection for dimension input controls and a subsection for dimensions. This section also shows a Calculated Dimensions subsection if any calculated dimensions are added.
In the diagram above, the dimensions section contains the following types of dimensions:
• – Versions dimension
• – Date dimension
• – Generic dimension
• – Hierarchical dimension
Tip
You can easily add objects from the Available Objects panel to the Builder panel
using drag-and-drop or the (quick action) menu. You can rename your object
through its quick action menu.
In Optimized Story Experience, you can improve your story's performance by loading restricted contents.
You can choose how the story is loaded in both edit time and view time. From Edit section in the toolbar, select
(Refresh) Loading Optimization Settings . Select one of the following loading options:
Viewport Loading (Default): Improves performance by only loading the contents that are in the visible area of the screen. Contents aren't loaded until you stop scrolling. Works in view time only.

None: All the contents are loaded when the story is opened. This option isn't available in view time.
For more information about the loading options, see the following sections.
If you choose viewport or background loading, you can still load widgets out of the viewport or hidden in view
time by selecting Always initialize on startup in the Styling panel, which overwrites the loading settings. For
more information, see Initialize Widgets on Startup (Optimized Story Experience) [page 1187].
You can speed up your story's view time performance by choosing to only load the contents in the visible area.
Instead of waiting for all of the contents in your story to load, you can start interacting with the data right away.
Other contents are loaded when you scroll to a new section, but not until you stop scrolling.
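Viewport loading is essentially a form of lazy loading: a widget is initialized only once it overlaps the visible area. A rough, product-independent sketch of the idea (the widget names, positions, and loading logic here are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    top: int      # vertical position on the page, in pixels
    height: int
    loaded: bool = False

def load_visible(widgets, scroll_top, viewport_height):
    """Load only widgets overlapping the visible area
    [scroll_top, scroll_top + viewport_height)."""
    bottom = scroll_top + viewport_height
    for w in widgets:
        if not w.loaded and w.top < bottom and w.top + w.height > scroll_top:
            w.loaded = True  # in the real product, this would trigger the data query
    return [w.name for w in widgets if w.loaded]

story = [Widget("Chart_1", 0, 400), Widget("Table_1", 500, 400),
         Widget("Chart_2", 2000, 400)]
print(load_visible(story, scroll_top=0, viewport_height=900))     # ['Chart_1', 'Table_1']
print(load_visible(story, scroll_top=1800, viewport_height=900))  # all three loaded
```

Scrolling down (the second call) brings Chart_2 into the viewport, so only then is it loaded; widgets already loaded stay loaded.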
Note
Unlike with viewport loading, visible contents are loaded first and displayed to story viewers; in a second step, invisible contents are initialized in the background so that they are available.
Note
All the contents are loaded on initialization, which applies to both edit time and view time.
In edit time, if you choose to load all the contents, later viewers can't change the loading settings in view time.
You can add additional parameters, for example, IsOVM to indicate whether you’ve enabled optimized view
mode for your story or analytic application and improve its loading performance.
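As a sketch of what appending such a parameter to a story URL could look like (the tenant host and story path below are placeholders; only the IsOVM parameter name comes from this guide):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical base story URL -- host and story ID are placeholders, not real values.
base_url = "https://mytenant.example.com/sap/fpa/ui/bo/story/STORY_ID"

# IsOVM indicates whether optimized view mode is enabled (per this guide).
params = {"IsOVM": "true"}
url = base_url + "?" + urlencode(params)
print(url)

# Parsing the parameter back out of the URL:
query = parse_qs(urlparse(url).query)
print(query["IsOVM"])  # ['true']
```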
Parameter Options
In Optimized Story Experience, you can choose to initialize specific widgets on startup to influence the
performance of your story if it uses viewport or background loading.
In the Styling panel of a widget, you can select Always initialize on startup. Then, for example, it's initialized on
startup with visible widgets and ready to be accessed in the onInitialization event, even if viewport or
background loading is enabled.
Note
This can also improve the story's performance if any scripts that run on startup need to access widgets whose loading hasn't yet finished in the background.
Note
• Always initialize on startup doesn't work on a widget if you don't set this option for its container or if the container is invisible. Therefore, if you use this option for widgets that are embedded in a container, set the option for the container as well.
Related Information
You can start creating a story for your data by experimenting with filters and charts in the Explorer.
In the Explorer, you see a faceted view of your data, which you can manipulate to generate charts for your
story pages. When you select measures and dimensions in the upper pane, the visualization in the lower pane
updates in real time. You can filter dimensions by selecting individual members, and the visualization changes
immediately to show you the filtered result.
Initially, the visualization type is chosen automatically based on the selected data, but you can change it to any
of the types supported for your data. When using the Table visualization, the behavior is the same as when
using tables in stories, with the following exceptions. For more information about tables, see Use Tables to
Visualize Data [page 1351].
• Some actions specific to tables in planning models are not supported in the Explorer; for example,
allocation, version management, and forecasting.
In addition to this Data Exploration mode, you can also use the Data Manipulation mode, which lets you
perform simple data preparation, such as specifying which column is a measure and which is a dimension.
Note
The Data Manipulation mode is available only when you acquire data by dragging an Excel file onto the
Home screen, or by selecting Add New Data from the menu under the Story/Data switch, and then
selecting Data uploaded from a file.
Some dimensions may be required for displaying measure values. To ensure that displayed numbers are
correct, select all the Required Dimensions in the facet panel in the Explorer. The (Show/Hide Data) list
displays required dimensions based on which measures are selected in the Explorer.
Also, when using the Show Measure feature in the Explorer, a measure is disabled if the measure has more than
one required dimension, or if the selected dimension is not the measure’s required dimension.
Tasks
Related Information
You can access the Explorer to begin creating a story for your data.
Context
The Explorer displays your data in a faceted view, with a visualization area below it. The initial view contains
the dimensions that are selected in the Show/Hide Data list, plus any other dimensions that are included in
visualizations in the story. If you're creating a new story, only measures are shown initially.
Procedure
Open an existing story:
1. From the main menu, select Files and then select a story.

Create a new story:
1. From the main menu, select Stories and then add a Canvas or Responsive page.
2. Add a data source.
3. Use the Story/Data control to switch to the Data view.

Access the explorer from a story page:
For details, see Launching the Explorer from a Story Page (Classic Story Experience) [page 1193].
Note
If you want to add all the dimensions to the facet panel (Show All) and there are more than 20
dimensions, it may take a little time to load all the dimensions.
• In the facet panel, select a dimension facet and drag it to a new location.
Note
Using Sort Facets applies the custom order to the current exploration session only.
4. Select measures and dimensions in the facet panel to add them to the visualization, or select individual
members to filter a dimension.
Note
You can also display additional information about the members in the facet panel. Hover over the
dimension name in the facet panel, select (Access Other Interactions), and then select Show Measure.
Tip
• Analytic view data sets: to display the number of times that a member occurs in the dimension,
select Occurrences.
To display the sum of a measure for each dimension member, select that measure from the list.
• Planning models: to display the sum of a selected measure (account) for each dimension member,
select Accounts.
If you display the occurrences or sums, you can also sort the dimension by that information. Hover over the
dimension name in the facet panel, select (Access Other Interactions) and then choose a sort option.
Note
• For hierarchical dimensions, sorting is applied at each level (siblings in the tree are sorted relative
to each other), in contrast to flat dimensions, where sorting is applied across the entire list.
• Sort options are not available for SAP BW hierarchical dimensions.
When you filter dimensions by selecting individual members, you can select (Show Filters) in the dimension header to toggle between seeing only the selected members or seeing all members.
You can choose whether selecting members includes them in your visualization or excludes them. Selected
members are included in your visualization by default, but to exclude them, hover over the dimension name
in the facet panel, select (Access Other Interactions), and then select Exclude.
6. If you want to place dimensions on particular axes (rows or columns), select Designer to open the Builder
panel.
In the Builder panel you can drag dimensions and measures between axes, change the order, assign colors,
and more.
Remember
When accessing the Builder panel from the Explorer, the available functionality is not the same as when
accessing the Builder panel from a story page. For example, you can't add or edit filters.
7. To change the formatting of your visualization, select Designer and switch to the Styling panel .
In the Styling panel you can change chart properties, number formatting, fonts and font colors, and more.
Story pages and Explorer do not show the same options in the styling panel. For example, when using
the styling panel in Explorer the Break Scale chart property isn't available, and you can't apply Styling
Rules to tables.
8. To see more information about a particular data point in your visualization, select a data point and choose
(Smart Insights).
Note
The Smart Insights icon isn't available when the Styling panel is open.
9. To clear your visualization settings and start again, in the visualization area select Clear Chart .
All measures, dimensions, and members are deselected, the sort order is reset to Sort Ascending, and filter
selection mode is reset to Include. However, all hidden dimensions remain hidden.
When you're happy with your visualization, you can copy it to a page in your story.
10. To export the data from your current explorer view to a CSV file, select Export .
If your current explorer view is a table, you can also export the data to an XLSX file.
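Once exported, the CSV file can be inspected with standard tooling. A minimal sketch in Python (the contents and column names below are made up for illustration, not the actual export format):

```python
import csv
import io

# Simulated contents of an exported CSV -- the columns here are hypothetical.
exported = io.StringIO("Location,Discount %\nCalifornia,12.5\nNevada,8.0\n")

rows = list(csv.DictReader(exported))
print(len(rows), "rows exported")
print(rows[0]["Location"], rows[0]["Discount %"])
```

In practice you would open the downloaded file with `open(path, newline="")` instead of the in-memory buffer used here.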
Remember
• The Grid View mode is available only for datasets acquired by dragging a data file onto the Home
screen, or when choosing Data uploaded from a file in the Add New Data dialog.
• When you make changes in the Grid View mode, previous selections made in the Data Exploration
Related Information
Prerequisites
Before you can use the explorer, you must enable explorer mode for the charts or tables.
Restriction
• Measure and dimension configuration is not available for blended charts and tables.
2. Enable explorer mode for each chart or table that you want to include.
3. When you've finished, switch to View mode.
Context
When you're viewing a chart or table in the standard View mode or in Fullscreen ( ) mode, you can
launch the Explorer to select different measures and dimensions, experiment with filters, and more. For more
information about the Explorer, see Explore Your Data (Classic Story Experience) [page 1188].
Procedure
Note
If you want to add all the dimensions to the facet panel (Show All) and there are more than 20
dimensions, it may take a little time to load all the dimensions.
In the Builder panel, you can drag measures and dimensions to change the chart structure, and edit your
filters.
For example, you can adjust story and page filter values by interacting with the filter token, the same way
you do on the story canvas. All changes made to story and page filters in the Builder are applied to the
story after you exit Explorer mode.
Any existing story and page filters are applied by default. To disable a story or page filter, in the Filters
section of the Builder panel, hover over the filter token and select (More) Apply Filter .
If you want to delete a chart or table filter, select the icon in the filter token.
Tip
Hover over a filter token's icon to see what kind of filter it is.
Note
• When you exit the Explorer, changes to the chart or table are not saved.
• Changing story or page filter values may impact the values you see in the facet panel. For example,
in a currency dimension, you might see USD and Euro initially. Then if you change the country filter
to filter out USA, the currency dimension in the facet panel no longer shows USD.
6. When you are finished exploring the data and are ready to return to view mode, in the toolbar select
(Story View Mode).
Remember
Only the original view of the facets will be kept if you don't save a new view before exiting Explorer
mode.
The chart or table header displays the number of explorer views that you have created.
Results
Remember
You'll need to save your story or save a bookmark if you want the explorer views to be available the next
time you open the story.
Changing the Initial Chart for Explorer Mode (Classic Story Experience) [page 1116]
Accessing the Explorer (Classic Story Experience) [page 1189]
You can analyze every dimension or KPI in the story that is not restricted by the story owner, view the data in
other chart types or tables, see default views set by the story owner, or add and save additional views.
Tip
If some explorer views have been added to a chart, the chart's subtitle shows the number of available views. Click the link to access the views.
The Explorer can contain multiple views that you can navigate as you do with story pages.
1. Change to a different page. In Edit mode, you select the pages from a drop list. In View mode, you can use
the left or right arrow keys to switch pages.
2. The tabs represent each chart or table widget that can be viewed in the explorer.
3. Create new views by clicking + New Explorer View.
4. After you create a new view, that view label is displayed. Select the drop list icon to change to a different
view.
5. The original view is first in the list (Original View), and it can't be renamed.
Note
If you want to add all the dimensions to the facet panel (Show All) and there are more than 20
dimensions, it may take a little time to load all the dimensions.
8. Sort Facets: The Sort Facets dialog provides another way to show extra dimensions.
1. In the dialog, either use Sort Options to sort all facets into ascending or descending order, or manually
drag the facets to create a new custom order.
2. When you are satisfied with the order of the facets, select OK.
When you select measures and dimensions in the upper pane, the visualization in the lower pane updates
in real time. You can filter dimensions by selecting individual members, and the visualization changes
immediately to show you the filtered result.
For example, you can click on the Discount % measure to enable it in the chart, then click on California from
the Location dimension to filter the results and only show the state.
• Click Access Other Interactions to open a context menu and do the following:
• Select a hierarchy (if available)
• Change the sort order
• Select display options
The visualization type is chosen automatically based on the selected data, but you can change it to any of the
types supported for your data.
To change the type of visualization, click the label on the top right corner of the lower pane and select a
different type of chart or table.
For more information, see Choosing the Best Chart Type for the Data [page 1204].
In addition to changing the type of visualization, you can also automatically or manually refresh the data in the
visualization and open the chart Builder to build and customize your chart.
The Builder can help you switch dimensions and add or remove filters easily. Drag and drop dimensions,
measures, or filters and the visualization will be updated.
Explore Tables
• Show/Hide - click the menu and then Show/Hide to show or hide the elements in a table. This way you can control which information is displayed. For example, you can choose to hide Dimension Headers.
• Set Drillstate - set the drill level or drill state of the whole table content using the context menu and choosing the Dimension and Drill Level.
• Edit Drill Limitation - When there are too many rows in your table, only 500 are displayed and you get a warning. You can click the icon and then Edit Drill Limitation to increase the table size. Keep in mind that when you increase the number of rows, the number of columns will decrease to preserve the recommended limit.
• Hide all the dimensions and only add the ones you require for your visualization. This improves
performance in navigation.
• Use the Builder to refine the Visualization. You can more easily build charts with multiple dimensions and
tables using the builder. For example, in a table you can move dimensions to rows, columns, or filters.
Create a new view by clicking + New View and Edit to give your view a new name. Then, click (Story
View Mode) to leave the Explorer and save your story in a bookmark .
The Explorer view will be available directly after loading your bookmark.
• You can export the data to Excel or CSV by clicking Export.
Related Information
You can create different Explorer views and add them to your Story bookmarks.
You may want to create different Explorer views and come back to them later. You can do that by creating the
views and then creating story bookmarks.
Note
To create Explorer views, your chart or table must have been created with Enable Explorer selected. For
more information, see Changing the Initial Chart for Explorer Mode (Classic Story Experience) [page 1116].
Open your story in View mode and then select Explorer View Mode.
Now you can choose a different chart or table and create more views, or just create a bookmark.
You can return to the story mode at any time by selecting (Story View Mode).
To make sure your new views will still be there when you come back to the story, you need to create a
bookmark. However, when you open the story, selecting an explorer bookmark won't open the explorer view.
Whether you want to see a visual representation of your data or want to show it to someone else, charts can
help you. There are many ways to visualize your data, from bar charts and waterfall charts to tree maps or pie
charts.
Once you've picked a chart type, selected your model, and added measures and dimensions, you can move on to other tasks such as adding a variance or an axis break, or changing how the data is displayed.
Use the Builder to select the measures and dimensions to include in your chart.
Use the Examine panel to create a table based on the data in your chart.
You can add multiple measures and multiple dimensions to your chart. When measures or dimensions are part
of a hierarchy or when a dimension has attributes, you can expand them and select their children or expand a
dimension and select its attributes. You can also apply filters to your measures and dimensions. The chart is
updated as you make your choices in the Builder.
Setting Up Charts
Waterfall Charts
Add a chart to your SAP Analytics Cloud story or to an analytic applications page, or apply other properties to a
chart.
You can add many charts to your story or analytic applications page, and you can have more than one chart on
a page. You can also run smart discoveries on your dataset to find unknown relationships in your data.
For information about all the parameters for the various chart types, see Choosing the Best Chart Type for the
Data [page 1204].
In addition to adding charts to new and existing stories, you can set certain properties, apply smart grouping,
and so on.
The default chart type is a Bar/Column chart. You can change the chart type in the Chart Structure section of
the Builder.
for your new chart. To change it, in Builder Data Source , select (change primary model) and
choose a new data source.
If prompted, provide the proper login credentials for your data source connection.
You can select a chart type from the Chart Structure section of the Builder.
Properties
View Mode
Enable Explorer – select whether to enable the Explorer to be launched directly from the tile when in View
mode.
Note
Explorer is not available when you use a data source that has multiple hierarchies in an account dimension.
If you want to restrict the number of measures and dimensions that are visible in the Explorer, select
Configure Measures & Dimensions. Note that all measures and dimensions that are currently in the chart are
automatically included and can’t be removed. Also, if you don’t specify any additional dimensions or measures,
then only the ones used in the chart are available in the Explorer.
Boardroom
Select sorting options for when your story is used in the Digital Boardroom.
Smart Grouping is available for both the Bubble and Scatterplot chart types when using a Correlation analysis.
Note
Before you can create a forecast on a time series chart using a remote model from an SAP HANA, SAP BW,
SAP Universe, or SAP S/4HANA system, your administrator must enable the following option: Enable
Time Series Forecast and Smart Grouping on Live Data Models. The data will be processed on SAP servers.
Use Smart Grouping to automatically analyze the data points in your chart and group them based on similar
properties.
Note
When you use Smart Grouping, any previously defined color settings for the chart are overridden.
Note
To successfully use a bubble chart, the resulting measure values cannot be negative.
Note
Use (More Actions) Add Tooltip Measure to add a tooltip measure to the chart.
Related Information
When adding a chart to your SAP Analytics Cloud story or analytic applications page, choose the best chart
type for your analysis.
You can add many charts to your story or analytic applications page, and you can have more than one chart on
a page. Choose different chart types to display different aspects of your data.
The SAP Analytics Cloud Builder panel groups similar chart types together.
• Comparison charts compare differences between values or show a simple comparison of categorical
divisions of measures.
Bar Chart / Column Chart

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1+

Dimensions
• Color*: 0+
• Trellis: 0+
Stacked Bar Chart / Stacked Column Chart

A Stacked Bar/Column chart includes at least three series of data, each series represented by a color stacked in
a single bar (for example, sales for 2014, 2015, and 2016).

Chart Orientation: select either Horizontal or Vertical orientation.

Show Chart as 100%: select when you want to show the Stacked Bar/Column chart as 100% stacked.

Tip
You only need one dimension for the stacked bar chart (as the Color parameter) and as many measures as you
want.

Note
If the stacked bar or column chart has only negative values, they will be shown in a 100% stacked chart.
However, if the stacked chart has both positive and negative values, when shown as a 100% stacked chart the
negative values won't be shown.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1+

Dimensions
• Dimensions: 0+
• Color*: 1+
• Trellis: 0+

* When you have a measure in the Color parameter, you can have only one parameter. If you already had
dimensions in the Color parameter or a different measure, the new measure will replace those parameters.
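The 100% stacking rule in the note above can be illustrated with a short sketch (a hypothetical helper, not part of SAP Analytics Cloud): a stack that mixes positive and negative values keeps only the non-negative ones, while an all-negative stack is normalized by its absolute total.

```python
def stack_100_percent(series):
    """Normalize one category's stacked values to percentages.

    Illustrative sketch of the documented rule: mixed-sign stacks drop
    their negative values; all-negative stacks use absolute totals.
    """
    has_pos = any(v > 0 for v in series)
    has_neg = any(v < 0 for v in series)
    if has_pos and has_neg:
        series = [v for v in series if v >= 0]  # negatives are not shown
    total = sum(abs(v) for v in series)
    if total == 0:
        return [0.0 for _ in series]
    return [100.0 * abs(v) / total for v in series]
```

For example, a stack of 75, -25, and 25 is rendered as 75% and 25%, with the negative value dropped.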
Column and Line Chart / Stacked Column and Line Chart

Y Axes Overlap: select when you want the bar and line axes to overlap.

Parameters (Combination Column & Line / Combination Stacked Column & Line):

Measures (Accounts)
• Column Axis: 1+ / 1+
• Line Axis: 0+ / 0+

Dimensions
• Dimensions: 1+ / 1+
• Color: 0+ / 0+
• Trellis: 0+ / 0+
Waterfall Chart

The initial and the final values are represented by columns attached to the baseline, while the intermediate
values are denoted by floating columns. The columns are color-coded to show Increase, Decrease, and Total
values.

You can have up to two dimensions in your chart:
• The first (primary) dimension provides subtotals or interim totals (for example, sales per month).
• The second (breakdown) dimension provides further details (for example, sales by product per month).

For more information on waterfall charts, see Waterfall Charts [page 1234].

Parameters:

Measures (Accounts)
• Measures: 1+

Dimensions
• Dimensions: 2
• Color: Version*

Restriction
* Waterfall charts support only the Version dimension in the Color parameter. If you don't have a version
dimension (for example, Actual or Forecast versions), the Color list will be empty.
Stacked Area Chart

Parameters:

Measures (Accounts)
• Measures: 1+

Dimensions
• Dimensions: 0+
• Color: 0+
• Trellis: 0+
Line Chart

A line chart is best suited for showing data for a large number of groups (for example, total sales over the past
several years).

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Left Y-Axis: 1+
• Right Y-Axis: 0+

Dimensions
• Dimensions: 1+
• Color: 0+
• Trellis: 0+
Time Series Chart

Each point on a time series chart corresponds to both a time and a quantity. This chart type can be used to
view sales revenue trends of a product throughout a range of years.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1+

Dimensions
• Color: 0+
Box Plot Chart

Use a Box Plot chart to summarize annual sales amounts according to geographical regions. For each country,
plot a box that shows the range and distribution of the regional sales amounts within that country.

A box plot chart usually includes two parts: a box and a set of whiskers.
• Box: the box is drawn from Q1 to Q3 with a horizontal line drawn in the middle to denote the median (Q2).
• Whiskers (W): lines drawn outside the box, terminating at the maximum (Q4) and minimum (Q0) data points.

Tip
SAP Analytics Cloud follows Excel's Quartile.INC() function behavior for computing quartiles.

Parameters:

Measures (Accounts)
• Measures: 1+

Dimensions
• Aggregation Dimension: 1
• Dimensions: 0+
• Color: 0+
• Trellis: 0+
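The Quartile.INC() behavior mentioned in the tip places the q-th quartile at position h = q/4 × (n − 1) in the sorted data and interpolates linearly between the neighboring values. A minimal sketch (the function name is illustrative, not an SAP API):

```python
def quartile_inc(data, q):
    """Inclusive quartile (Excel QUARTILE.INC-style linear interpolation).

    q is 0..4; the quartile sits at fractional index (n - 1) * q / 4
    of the sorted data, interpolated between the two nearest values.
    """
    xs = sorted(data)
    h = (len(xs) - 1) * q / 4
    lo = int(h)
    frac = h - lo
    if frac == 0:
        return float(xs[lo])
    return xs[lo] + frac * (xs[lo + 1] - xs[lo])
```

For the data 1, 2, 3, 4 this gives Q1 = 1.75, Q2 = 2.5, and Q3 = 3.25, matching Excel's QUARTILE.INC.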
Heat Map Chart

A heat map (or heatmap) is a data visualization technique that shows magnitude as color in two dimensions.
The variation in color in a heat map may be by hue or intensity, giving visual cues to the reader about how the
phenomenon is clustered or varies over space.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Color: 1

Dimensions
• X-Axis: 1
• Y-Axis: 0+
• Trellis: 0+
Histogram Chart

Parameters:

Measures (Accounts)
• Measures: 1

Dimensions
• Aggregation Dimension: 1+
• Color: 0+

Bin Properties
• Number of Bins: 1+
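As a rough illustration of what the Number of Bins setting controls, equal-width binning splits the value range into a fixed number of intervals and counts how many values fall into each. This is a hypothetical sketch; the exact binning algorithm used by SAP Analytics Cloud is not specified here.

```python
def histogram_bins(values, num_bins):
    """Equal-width binning sketch for a histogram's Number of Bins setting.

    Splits [min, max] into num_bins intervals of equal width and counts
    the values per interval; the maximum value is clamped into the last bin.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for v in values:
        idx = min(int((v - lo) / width), num_bins - 1)  # clamp the max value
        counts[idx] += 1
    return counts
```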
Radar Chart

The Radar chart places numeric values, increasing in value, from the center of the radar to the perimeter.
Radar charts are particularly useful for determining how

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1

Dimensions
• Dimensions: 1
• Color: 1+
Tree Map Chart

A Tree Map chart displays data as a series of hierarchical rectangles, where the surface area of each rectangle
is proportional to the size of a value from the data object; the larger the rectangle, the greater the value that it
represents.

In a tree map chart, the Label dimensions are shown as nested groups, with the first dimension as the
outermost group. Use the Size measure for size and color, or add a separate Color measure.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Size: 1
• Color: 0+

Dimensions
• Label: 1+
• Trellis: 0+
Bubble Chart

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• X-Axis: 1
• Y-Axis: 1
• Size: 1
• Color: 0+

Dimensions
• Dimensions: 1
• Color: 0+
• Trellis: 0+
Cluster Bubble Chart

In cluster bubble charts (also called packed bubble charts), the size and the color of the bubbles are used to
visualize the data. The positioning of the bubbles is not significant; however, the chart is optimized for
compactness.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1

Dimensions
• Color: 1
• Trellis: 0+
Scatterplot Chart

The scatterplot chart allows the user to consider a larger scope of data for the purpose of determining trends.
For example, if you input customer information, including sales, products, countries, months, and years, you
would have a collection of plotted points that represents the pool of customer information. Viewing all of this
data on a scatterplot chart would allow you to speculate as to why certain products were selling better than
others or why certain regions were purchasing more than others.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• X-Axis: 1
• Y-Axis: 1

Dimensions
• Dimensions: 1+
• Color: 0+
• Trellis: 0+
Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Measures: 1+
• Color*: 1

Dimensions
• Dimensions: 1+
Numeric Point Chart

Numeric point charts require at least one primary value, but they can have multiple primary and secondary
values. In both cases, only a single data point is displayed.
• Primary Values display the numeric data point in a large font with the label displayed in a smaller font below
the numeric value.
• Secondary Values display the label and the numeric data point on the same line.

Tip
To add more emphasis to your chart, add an existing threshold to the Color element.

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Primary Values: 1+
• Secondary Values: 1+
Donut Chart / Pie Chart

Both donut and pie charts display data as sections of a circle or donut, and both are usually used for one group
of data, but can be used for multiple groups of data.

Donut charts display the total numerical value in the center and the dimensions as different sized colored
sections of the donut, with the size of the section representing a percentage of the total. For example, for a
donut chart showing sales by region, the total number of sales (the numerical value) is in the center and the
regions are the colored sections.

A pie chart displays data as a pie, each section or pie wedge filled with color or patterns. Each pie wedge
represents a subset of a total value, for example, the percentage of total sales divided by regions.

Parameters (Donut / Pie):

Measures (Accounts)
• Measures: 1 / 1

Dimensions
• Trellis: 0+ / 0+
Marimekko Chart

Parameters (1+ = minimum of one; 1 = maximum of one; 0+ = optional):

Measures (Accounts)
• Height: 1+
• Width: 0+

Dimensions
• Dimensions: 1
• Color: 0+
• Trellis: 0+
Related Information
The chart Builder panel in SAP Analytics Cloud provides features for working with data in your chart or
customizing your chart’s appearance.
The chart Builder panel is organized into different sections where you can configure your data source,
choose your chart type, add and remove dimensions, and so on.
The following diagram displays the chart Builder panel features available.
Hovering over or selecting the data source name will show a tooltip with the data source description and file
path.
To change the data source, choose the (Change Primary Model) button in the Builder panel.
You can create links between dimensions across models. To open the Link Dimensions dialog, you can select
(Link Dimensions) or + Add Linked Models in the Builder panel. For more information on linking
dimensions, see Link Dimensions Across Multiple Models [page 1061].
Tip
You can also open the Link Dimensions dialog through selecting (Link Dimensions) in the Data section
of the toolbar.
For more information on data sources, see About Adding Data to a Story [page 1054].
If you select (Add/Remove Chart Components) next to the Chart Structure label, a dialog will open that lists
the components you can add to your chart.
Note
Depending on your selected chart type, the features that are available in the Add/Remove Chart
Components dialog may vary.
• Add Reference Line: You can add reference lines to show important values in your chart. For more
information, see Add a Reference Line [page 1258].
• Add Variance: Adding a variance in your chart can show you the difference between versions of an
account or the difference between time periods. For more information, see Adding Variances to Charts
[page 1308].
• Add Trellis: Adding a trellis chart provides a grid of small charts for comparison. For more information,
see Use Trellis Charts for Comparison [page 1233].
• Add Tooltip: You can include additional information in your tooltips for measures or dimensions that
are not in your chart. (In this topic, accounts and measures are referred to as measures for most model
types.) For more information, see Include Extra Accounts, Measures, and Dimensions in Tooltips [page
1234].
• Hyperlink: You can add a hyperlink to another story or page, or to an external URL. For more
information, see Linking to Another Page, Story, or External URL (Classic Story Experience) [page 1108].
• Add Cross Calculations: You can add cross calculations to your chart. For more information, see Add
Cross Calculations to Charts [page 1349].
In the Chart Structure section, you can choose a chart type from the following groups:
• Comparison
• Trend
• Distribution
• Correlation
• Indicator
• More
For more information on the chart type groups and the chart types that are included in each group, see
Choosing the Best Chart Type for the Data [page 1204].
For specific chart types, such as Bar/Column, Box Plot, Waterfall, and so on, you can change whether you want
your chart to be displayed in a Vertical or Horizontal orientation.
Tip
Depending on your chart type, you may see other options such as Y Axes Overlap or Show Chart as 100%.
For more information, see Choosing the Best Chart Type for the Data [page 1204].
When adding objects such as accounts, measures, or dimensions to your chart using the Builder panel, a
dropdown menu appears. (In this topic, accounts and measures are referred to as measures for most model
types.)
The diagram above shows the dialog that appears after you select + Add Dimension:
• 1) A dimension that has next to its name is a hierarchical dimension. For more information about
hierarchical data, see Work with Hierarchies in a Chart (Classic Story Experience) [page 1299].
• 2) To view objects in a panel instead of a dropdown menu, select Expand List in the dropdown menu.
In the dropdown menu under the Measures section, you can do the following:
In the dropdown menu under the Dimensions section, you can do the following:
For more information on measures and dimensions, see Selecting Accounts, Measures, and Dimensions [page
1230].
Color
In the Color section of the Builder panel, you can assign colors for individual values, synchronize colors
across charts, or change the color palette for your charts.
For more information, see Changing Color Palettes and Synchronizing Colors Across Charts [page 1273].
Filters
In the Filters section, you can create filters on your chart data. By choosing + Add Filters, you can select
dimensions and create a filter.
Properties
In the Properties section, you can choose whether you want to Enable Explorer. If you enable Explorer, you can
Configure Measures & Dimensions.
For more information on Explorer, see Explore Your Data (Classic Story Experience) [page 1188].
Related Information
After you select a chart type, select the accounts, measures and dimensions to display in each area of your
chart.
Depending on your data source or model, you may see either accounts or measures or both.
(For more information, see Learn About Dimensions and Measures [page 601].)
When you first create your chart, you are asked to choose a data source or model. The accounts, measures,
and dimensions that you can use are based on the data in that model.
You can add multiple measures and multiple dimensions to your chart. When measures or dimensions are part
of a hierarchy, or when a dimension has attributes, you can expand them and select their children, or expand a
dimension and select its attributes.
Restriction
When you have an account dimension that has multiple hierarchies, you can use different hierarchies for
different charts. However, if you change an account hierarchy in a chart, your data may no longer be
displayed.
For more information on accounts with multiple hierarchies, see Learn About Hierarchies [page 613].
You can also apply filters to your accounts, measures, and dimensions. The chart is updated as you make your choices.
Note
When using dimension attributes in your chart, drilling and Display As are not supported. For more
information about dimension attributes, see Combine Data with Your Acquired Data [page 722].
Tip
Added a date or time dimension to your chart and want to show some comparisons? Follow the guidelines
in Adding Time-Based Variances to Charts [page 1318].
You can personalize your charts in many ways including adding color or adding values to tooltips for measures
that are not in the chart.
There are times when you would rather have a customized description for an account, measure, or dimension
in your chart rather than using the default value.
To change back to the default information, pick the measure or dimension, select and then select
(Reset).
For some chart types, you can use the same dimension for both Dimensions and Color sections. This can be
useful if you want to hide the legend for a chart, but still want to see the information that the legend would
provide. For example, you can display total sales (measure) by employee (dimension), and add employee to the
color section as well.
Version (or category) dimensions can also be added to the Color section, allowing you to modify colors and
patterns. Version dimensions show different versions of data such as Actual or Forecast data.
Restriction
For most chart types, you can only have one of the following object types in the Color section: dimensions,
measures, or version dimensions.
• Dimensions: you can have several dimensions in the Color section (when the chart type allows it), but
you can't add version dimensions.
If there was a measure in the Color section, dragging a dimension to the section will replace the
measure with the dimension. When you remove the dimension from the section, the measure is
displayed again.
• Measures: when you have a measure in the Color section, you can have only one parameter. If you
already had dimensions in the color section or a different measure, the new measure will replace those
objects.
You can apply the same dimension to both dimension and color sections in the following chart types:
• Bar/Column
• Box Plot
• Bubble, Scatterplot, and Cluster Bubble
• Marimekko
When you switch to a chart type that does not support using the same dimension twice, one of the dimension
options will be dropped.
Tip
You can either use the Add Dimension/Measure option to add a dimension to the color section, or you can
move (drag) dimensions between the Dimensions and Color sections.
Color Options for Versions and Accounts/Measures in Charts (Optimized Story Experience)
When you add the version dimension to the color section, the versions inherit the color choices that are made
for the accounts (or measures). You can modify the patterns for the versions, but you can override those
pattern choices when you change the patterns for the accounts.
If you want to have the version dimension control both the pattern and color options, then clear the following
checkbox: Color by accounts (Color by measures).
A trellis chart is a set of small charts shown in a grid for comparison. Each small chart represents one item in a
section. For example, if you create a bar chart that compares revenue by region, and then add the <Country>
dimension to the trellis, multiple small charts appear. Each small chart displays the revenue by region for one
country.
1. Find Chart Structure and select (Add Chart Components) Add Trellis .
2. Under Trellis, select Add Dimension and choose your dimension.
You can add information to the tooltip for accounts, measures, and dimensions that are not in your chart.
You may want to limit the number of accounts, measures, and dimensions used to create your chart, but still be
able to see the details of a datapoint from all of them. Select additional accounts, measures, and dimensions
from the appropriate section:
• Tooltip Accounts
• Tooltip Measures
• Tooltip Dimensions
Remember
You will see either a Tooltip Accounts or a Tooltip Measures option, but not both.
1. Do one of the following:
• From the action menu ( ), select Add Tooltip, and then select either Account, Measure, or Dimension.
• Find Chart Structure and select (Add Chart Components) Add Tooltip, and then select either Account,
Measure, or Dimension.
2. Under Tooltip Accounts, select Add Account and choose your accounts.
3. Under Tooltip Measures, select Add Measure and choose your measures.
4. Under Tooltip Dimensions, select Add Dimension and choose your dimensions.
When you move the cursor over a datapoint, the tooltip will show information for all the selected measures and
dimensions.
Related Information
Waterfall charts show how an initial value is affected by a series of intermediate positive or negative values.
Generally, waterfall charts are used for time or duration-related data, although horizontal waterfall charts can
be used for other types of data.
Waterfall charts usually have one dimension, but you can add a second dimension:
• The first (primary) dimension provides subtotals or interim totals (for example, sales per month).
• The second (breakdown) dimension provides further details (for example, sales by product per month).
To use the Color parameter in a waterfall chart you need to have a Version (for example, Actual or Forecast)
dimension. For your versions, you can modify the patterns, but not the colors. To change the colors, refer to
Adding Columns and Customizing the Column Color-Coding [page 1235].
Note
Defining a waterfall chart based on a dimension that is already used in a restricted measure (referred
to as a key figure in SAP BW) may result in unexpected data, because SAP BW treats restrictions as OR
operations. In most cases, restricted measures are intended to produce the result of an AND operation.
For example, COUNTRY is restricted to Great Britain and Germany. In a sales scenario, the intended result
would be that you see products that are sold in both countries (Great Britain AND Germany), not
products sold in either or both of them (Great Britain OR Germany OR (Great Britain AND
Germany)).
If you accept an OR operation, a waterfall chart could be defined on a dimension which is already used on a
restricted key figure designed in SAP BW. However, the described result occurs only if the key figure is part
of the drill down.
The initial and the final values are represented by columns attached to the baseline, while the intermediate
values are denoted by floating columns. The columns are color-coded to show Increase, Decrease, and Total
values.
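The column layout described above can be sketched as follows: each floating column spans the running total before and after its value, while the initial and final columns sit on the baseline. This is an illustrative helper only, not SAP Analytics Cloud code.

```python
def waterfall_segments(start, deltas):
    """Compute (bottom, top) extents for each column in a waterfall chart.

    The initial and final (total) columns are attached to the baseline (0);
    each intermediate column floats between the running totals before and
    after its delta.
    """
    segments = [(0, start)]          # initial column on the baseline
    running = start
    for d in deltas:
        lo, hi = sorted((running, running + d))
        segments.append((lo, hi))    # floating column spans the change
        running += d
    segments.append((0, running))    # final total column on the baseline
    return segments
```

For example, starting at 100 with changes of +30 and -20, the floating columns span 100 to 130 and 110 to 130, and the total column runs from the baseline to 110.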
The chart labels show the primary (or only) dimension label at the bottom or outer edge, and all other labels in
line with the bars in the chart.
When there are too many data points in the waterfall chart, it won't render properly.
For example, your waterfall chart looks fine, but you change the dimension hierarchy to use Flat presentation
instead of the hierarchy. Your chart can't process all the data, and the chart view is replaced with a ghost chart
and a warning message.
To restore your chart, either reset the dimension back to a hierarchy or add filters to the data.
For your waterfall chart, you can use the standard 3-column display, or you can use a 4-column display. The
4-column display shows Increase, Decrease, Subtotal, and Total values.
While you can use filters in charts to exclude the root node, there are times when filters may not work. For
example, in BW hierarchies the root node may change, and the filter will no longer be valid.
When you have multiple measures in your chart, you may see either actual data values or comparison values.
Using the Variance Waterfall switch in the Builder panel, you can choose which values to display (you can only
choose the values when the switch is enabled):
• Enabled: the Variance Waterfall switch is enabled when you have multiple measures and specific
conditions are met; you can choose to turn it on or off.
• Disabled: the Variance Waterfall switch is disabled when you have a single measure, but it will be
automatically turned on under the following conditions:
• The root node of the primary dimension is hidden and multiple nodes are displayed.
• Any node of the primary dimension except the root node is expanded, or a secondary (breakdown)
dimension is added.
Waterfall Expansion
Waterfall charts can have additional properties, which can be chosen from the Builder panel in the Waterfall
Expansion section.
When using cumulative measures, the chart may not display the values correctly. Select Start first measure
from the baseline to anchor the first measure to the axis.
Generally, when you expand a hierarchy the parent node (subtotal node) is after the children nodes.
To change the order of the nodes, select Place subtotal parent nodes first.
Related Information
Add a time-based variance to your chart to show the difference between time periods.
Context
The variance chart is based on all the measures in the Waterfall chart. Set up your chart with one measure and
a Date filter. The Date filter must have only one value.
Procedure
1. Select your Waterfall chart, open the Designer, and then select (Builder).
2. In the Variance area, select Add Variance.
Tip
If you cannot see a variance area, find Chart Structure and select (Add Chart Components) Add
Variance .
• Name: Use the default name or provide a different name for the variance.
• Invert Colors: Reverse the color scheme from the following default values: a positive variance is shown in
green, and a negative variance is shown in red.
• Set No Data as Zero: To include a NULL data point in your variance, set it to the value zero.
• Scale with Base Chart: The default setting keeps the variance chart scale the same as the chart scale. If you
want the variance to have its own scale, clear the checkbox.
• Show Difference as: Choose whether you want to display the variance numerically or as a percentage, or
both.
• View Variance as:
  • Bar – displays a classic variance chart beside the regular chart.
  • Data Label – adds variance information to the data labels. Use data labels if there is no room to add a
variance chart, or if you want variance information for charts that cannot use a variance chart.
  • Integrated – displays the variance data as an overlay on the bar chart.
Variance information is added to your visualization. The variance bars are color-coded based on the account
type and whether an increase is desirable or not. For example, an increase in income would be welcome
(green), but an increase in expenses would not (red).
Related Information
There are several features available on the action menu (context menu) for working with the data in the chart or
for changing the chart's appearance.
The chart action menu (context menu) provides options that let you make additional changes to the chart. If
you want to make changes to specific areas in a chart (for example, drilling down on a data point), you can use
the right-click context menus.
To display the menu, in the chart tile either right-click or select (More Actions).
Restriction
Some options are only available for specific chart or data types. For example, you must have a hierarchy
dimension in your chart to see the Drill option.
• Applied to Chart: Contains chart details such as filters, drill level, and so on.
• Drill: Allows you to set the drill level to be displayed in the chart. You can also choose to include parent levels
or to change the hierarchy format. For more information, see Work with Hierarchies in a Chart (Optimized
Story Experience) [page 1295].
• Sort: Apply an ascending or descending sort to your chart, or set a custom sort order. For more information,
see Sort Measures and Dimensions [page 1291]. You can also sort on a measure or account that isn't included
in the chart. When you have multiple account hierarchies, the account to be used for sorting must belong to
the same hierarchy as the one that is used in the chart.
• Rank:
Note
Rank performs both a rank and sort action when producing chart results. However, ranking and sorting
features can be used independently with certain data sources.
Rank options:
  • Top 5
  • Bottom 5
  • Top N, with the following options:
    • Mode - Top or Bottom.
    • Value - the number of values you want to include in the filter.
    • Dimension - select All Dimensions or a specific dimension.
    • Account - select any of the accounts that are used in the chart. (Only accounts that are currently in the
chart can be selected.)
    • Measure
    • Cross Calculation
    • Version
• More Options: To reduce the size of the main context menu, some options have been grouped together into a
sub-menu.
• Show as percentage / Show as value (Time Series chart): The Show as percentage option displays the
relative percentage difference over time in a Time Series chart. When showing percentage in a Time Series
chart, the left-hand data point is displayed as 0%, and all values to the right show the relative difference from
that point. When the time frame is adjusted (moved, shrunk, or stretched), the relative differences are
updated. The Show as value option provides the actual value for the data points in a Time Series chart.
• Group Color Axis / Ungroup Color Axis: On multidimensional charts, when color dimensions are grouped with
other dimensions, the legend sort order will reflect the data point sort order and the color dimension ranking
will be applied per outer dimension grouping. Ungrouping the color dimensions from other dimensions
improves sorting of color legends, and allows ranking on color to ignore the outer dimension grouping.
• Linked Analysis: Use a linked analysis to drill through hierarchical data or to create filters that simultaneously
update multiple charts in your story. For more linked analysis details, see Creating a Linked Analysis [page
1101].
• Add: For a list of options you can add, see Add More Functionality (Optimized) [page 1244].
• Show / Hide: Allows you to show or hide chart elements. By default, most elements are shown. The following
elements can be hidden:
  • Chart Title
  • Subtitle
  • Chart Details
  • Legend
  • Threshold Legend
  • Data Labels
  • X-Axis Labels
  • X-Axis Title (Bubble, Scatterplot, and Histogram charts)
  • Y-Axis Labels
  • Y-Axis Title (Bubble and Scatterplot charts)
  • Totals (Stacked Bar/Column, Combination Stacked Column & Line, and Marimekko charts in Optimized
Story Experience): select or deselect this option to show or hide the total values of your measures. For more
information, see Show Total Values (Optimized Story Experience) [page 1290].
  • Footer
  • Time Intervals
  • Date Picker
  • Navigator
  • Vertical Grid Lines
  • Horizontal Grid Lines
  • Confidence Interval
  • Past Period Forecast
• Edit Axis: You can manually change the axis values. For more information, see Modify Axes on Single or Dual
Axis Charts [page 1284].
• Collapse Title / Expand Title: Collapse to show only one line of text in the Chart Title or expand to show the
full text.
• Export: Allows you to export data from your chart as a CSV file, with or without formatting. For more
information, see Export Chart Data as a CSV File [page 245].
• Edit Styling: Opens the Styling panel. For more information on styling options, see Formatting and Styling
Items on a Story Canvas [page 1589].
• Disable Mouse Actions / Enable Mouse Actions: When enabled, you can use the mouse to resize or reposition
the chart widget.
When you select Add in the main context menu, you can select the following options:
Action Description
Add CGR: Add a Compound Annual Growth Rate line to your Bar/Column or Stacked Bar/Column chart. For more information, see Add a Compound Annual Growth Rate Line to Bar or Stacked Charts [page 1259].

Threshold: For more information about thresholds, see Creating Story-Defined Thresholds (Classic Story Experience) [page 1514].

Hyperlink: Allows you to add a hyperlink to an external URL, page, or story. For more information, see Linking to Another Page, Story, or External URL (Classic Story Experience) [page 1108].

Reference Line: Allows you to add reference lines to your charts. For more information, see Add a Reference Line [page 1258].

Tooltip: Add information for measures and dimensions that aren't in your chart. For more information, see Add extra tooltips for measures and dimensions [page 1234].

Trellis: Provides a grid of small charts for comparison. For more information, see Use Trellis charts for comparison [page 1233].

Cross Calculation: For more information about cross calculations, see Add Cross Calculations to Charts [page 1349].
The following menu appears when you right-click a data point in a chart. Different options are available for
different chart types.
Note
When you change the account hierarchy (for an account that has multiple
hierarchies), the drill state will be reset to its default level.
Expand / Collapse Lets you show more than one level of data.
When you have only one dimension and it has been fully expanded, the menu will
show a Collapse option.
When multiple dimensions are fully expanded, selecting Expand for a dimension
will collapse that dimension.
More information: Applying a Chart Filter (Classic Story Experience) [page 1270]

More Options: To reduce the size of the main context menu, some options have been grouped together into a sub-menu.
Action Description
Edit Styling: Opens the Styling panel. For more information on styling options, see Formatting and Styling Items on a Story Canvas [page 1589].

Disable Mouse Actions / Enable Mouse Actions: When enabled, you can use the mouse to resize or reposition the chart widget.
To display the menu, in the chart tile either right-click or select (More Actions).
Restriction
Different chart types have different options available, and some options are not available when in View
mode.
Action Description
Show as percentage / Show as value (Time Series chart): The Show as percentage option displays the relative percentage difference over time in a Time Series chart. When showing percentage in a Time Series chart, the left-hand data point is displayed as 0%, and all values to the right show the relative difference from that point. When the time frame is adjusted (moved, shrunk, or stretched), the relative differences are updated.

The Show as value option provides the actual value for the data points in a Time Series chart.
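The percentage display described above comes down to simple arithmetic: the leftmost visible data point is the baseline (0%), and each other point shows its relative difference from that baseline. The following is only an illustrative sketch of that idea, not the product's implementation:

```python
def show_as_percentage(values):
    """Re-express a time series relative to its first visible point.

    The leftmost point becomes 0%; every later point shows its relative
    percentage difference from that baseline. Illustrative sketch only.
    """
    baseline = values[0]
    return [round((v - baseline) / baseline * 100, 2) for v in values]


# Example: a series starting at 200 that rises to 220, then falls to 180.
print(show_as_percentage([200, 220, 180]))  # [0.0, 10.0, -10.0]
```

When the time frame is moved, the baseline changes to the new leftmost point, so all percentages are recomputed relative to it.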
Sort: Apply an ascending or descending sort to your chart, or set a custom sort order. For more information, see Sort Measures and Dimensions [page 1291].
You can also sort on a measure or account that isn't included in the chart. When you have multiple account hierarchies, the account to be used for sorting must belong to the same hierarchy as the one that is used in the chart.

Note

Rank performs both a rank and sort action when producing chart results. However, ranking and sorting features can be used independently with some data sources.

Rank: Apply a ranking filter to your chart:

• Top 5
• Bottom 5
• Top N Options:
  • Mode - Top or Bottom
  • Value - used as the number of values you want to include in the filter.
  • Dimension - Select All Dimensions or a specific dimension.
  • Account - Select any of the accounts that are used in the chart. (Only accounts that are currently in the chart can be selected.)
  • Measure
  • Cross Calculation
  • Version
When a Top N filter is applied, text is displayed at the top of the chart indicating the filter direction and
value. To change the filter, select the filter text and input new values.
Linked Analysis: Use a linked analysis to drill through hierarchical data or to create filters that simultaneously update multiple charts in your story. For more linked analysis details, see Creating a Linked Analysis [page 1101].

Compare To: Shows the difference between versions of a measure or the difference between time periods. For more information, see Adding Time-Based Variances to Charts [page 1318] or How to Add a Variance to a Chart (Classic Story Experience) [page 1315].

Add Smart Insights: Lets you see more information about a particular data point in your visualization. For more information, see Smart Insights [page 2003].

Add: For a list of options you can add, see Add More Functionality (classic) [page 1249].

Show / Hide: Allows you to show or hide chart elements. By default, most elements are shown. The following elements can be hidden:
• Chart Title
• Subtitle
• Chart Details
• Legend
• Threshold Legend
• Data Labels
• X-Axis Labels
• X-Axis Title (Bubble, Scatterplot, and Histogram charts)
• Y-Axis Labels
• Y-Axis Title (Bubble and Scatterplot charts)
• Totals (Stacked Bar/Column, Combination Stacked Column & Line, and Marimekko charts in Optimized Story Experience): select or deselect this option to show or hide the total values of your measures. For more information, see Show Total Values (Optimized Story Experience) [page 1290].
• Footer
• Time Intervals
• Date Picker
• Navigator
• Vertical Grid Lines
• Horizontal Grid Lines
• Confidence Interval
• Past Period Forecast
Edit Axis: You can manually change the axis values. For more information, see Modify Axes on Single or Dual Axis Charts [page 1284].

Collapse Title, Expand Title: Collapse to show only one line of text in the Chart Title or expand to show the full text.

Export: Allows you to export data from your chart as a CSV file, with or without formatting. For more information, see Export Chart Data as a CSV File [page 245].

Edit Styling: Opens the Styling panel. For more information on styling options, see Formatting and Styling Items on a Story Canvas [page 1589].

View Controls: Opens the Controls panel. For more information about controls, see Measure-Based Filters [page 1533].
When you select Add in the Action menu, you can select the following options:
Action Description
Threshold: For more information about thresholds, see Creating Story-Defined Thresholds (Classic Story Experience) [page 1514].

Reference Line: Allows you to add reference lines to your charts. For more information, see Add a Reference Line [page 1258].

Forecast:

• Automatic Forecast: performs a predictive forecast on the available data. With this option, you can specify how many forecast periods to display in the chart.
• Advanced Options: enables you to select one of the following simulate forecasting scenarios:
  • Linear Regression: use the Linear Regression algorithm on historical time series data to predict future values.
  • Triple Exponential Smoothing: use the Triple Exponential Smoothing algorithm to account for seasonal changes as well as trends.
  • Add Additional Inputs: use calculated measures, measure input controls, or additional measures from the current model as inputs when creating the forecast. This option is available only for Time Series charts.

More information: Running a Forecast in a Time Series or Line Chart [page 1281]

CGR: Add a Compound Annual Growth Rate line to your Bar/Column or Stacked Bar/Column chart. For more information, see Add a Compound Annual Growth Rate Line to Bar or Stacked Charts [page 1259].

Error Bar: Indicates the error or uncertainty in a reported measurement. For more information, see Add an Error Bar to a Chart [page 1256].

Tooltip: Add information for measures and dimensions that aren't in your chart. For more information, see Add extra tooltips for measures and dimensions [page 1234].

Trellis: Provides a grid of small charts for comparison. For more information, see Use Trellis charts for comparison [page 1233].

Cross Calculation: For more information about cross calculations, see Add Cross Calculations to Charts [page 1349].

Hyperlink: Allows you to add a hyperlink to an external URL, page, or story. For more information, see Linking to Another Page, Story, or External URL (Classic Story Experience) [page 1108].
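To give a feel for the simplest of the forecasting options above, Linear Regression fits a straight trend line through the historical points and extends it forward. This is only a minimal sketch of the idea; the product's predictive engine (and its Triple Exponential Smoothing option) is considerably more sophisticated:

```python
def linear_forecast(series, periods):
    """Fit a least-squares linear trend to a series and extend it
    `periods` steps ahead. Illustrative sketch of the Linear Regression
    forecasting idea, not the SAP Analytics Cloud implementation.
    """
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    # Ordinary least-squares slope and intercept.
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + k) for k in range(periods)]


# A series growing by 10 per period continues that trend.
print(linear_forecast([10, 20, 30, 40], 2))  # [50.0, 60.0]
```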
The following menu appears when you right-click a data point in a chart. Different options are available for
different chart types.
Action Description
Smart Insights Lets you see more information about a particular data point in your visualization.
More information: Applying a Chart Filter (Classic Story Experience) [page 1270]
Note
When you change the account hierarchy (for an account that has multiple
hierarchies), the drill state will be reset to its default level.
Expand / Collapse Lets you show more than one level of data.
When you have only one dimension and it has been fully expanded, the menu will
show a Collapse option.
When multiple dimensions are fully expanded, selecting Expand for a dimension
will collapse that dimension.
Compare To Shows the difference between versions of a measure or the difference between
time periods. For more information, see Adding Time-Based Variances to Charts
[page 1318] or How to Add a Variance to a Chart (Classic Story Experience) [page
1315].
Pin Data Point Anchors a numeric point chart to the pinned item; you can add comments to that
chart.
More information: Annotate Data Points in Bar, Line, or Bubble Charts [page 1261]
Break Axis: More information: Add Axis Breaks to Bar or Waterfall Charts [page 1255]
The following menu appears when you right-click a table cell in the Examine view. The menu options are similar
to those available for tables.
Action Description
Smart Insights Lets you see more information about a particular data point in your visualization.
Lock Cell: Lets you prevent individual cells of the table being updated. This feature is available for users with a planning license.
Select the cells you want to lock and choose the icon on the toolbar. Locked cells
are shaded gray as a visual indicator that they are locked, and the toolbar icon
changes to an open lock. Select the lock icon again to unlock the cell.
The read-only feature on the main toolbar prevents a cell value from being overwritten, but cell locking prevents updates to a cell value that would be caused by aggregation. You can use this feature to redistribute values among sibling nodes of a hierarchy.
For example, if the parent node in the hierarchy Employee Expense has three
child nodes: Salaries, Bonus, and Training, and you lock the Employee Expense
node, and then increase the value of the Salaries cell, the values of the Bonus and
Training cells are automatically reduced to balance out the increase in salaries
and maintain the same total Employee Expense value.
The cell locks that you apply are saved with the story. When other users work on
the page, they can remove your cell locks and add their own, if necessary.
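The Employee Expense example above can be sketched as simple arithmetic: with the parent locked, an increase in one child must be offset by its siblings so the total stays constant. This sketch assumes a proportional offset across the unlocked siblings purely for illustration; it is not a description of how SAP Analytics Cloud actually distributes the difference:

```python
def adjust_with_locked_parent(children, changed, new_value):
    """Change one child value while keeping the (locked) parent total
    constant, reducing the other siblings proportionally.

    Illustrative sketch only; the proportional rule is an assumption,
    not the product's actual disaggregation logic.
    """
    delta = new_value - children[changed]
    others = {k: v for k, v in children.items() if k != changed}
    other_sum = sum(others.values())
    result = {changed: new_value}
    for name, value in others.items():
        # Each sibling absorbs a share of the change proportional to its size.
        result[name] = value - delta * value / other_sum
    return result


expense = {"Salaries": 600, "Bonus": 200, "Training": 200}  # parent total: 1000
adjusted = adjust_with_locked_parent(expense, "Salaries", 700)
print(adjusted)  # Bonus and Training shrink; the total stays 1000
```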
Sort Options: You can apply an ascending or descending sort to a table. To apply a sort, select a suitable column or row of data and choose the Sort Options icon.

Note

The Sort Options icon is active only when a suitable column or row of data is selected.
Choose the direction of the sort. For example, you can choose Ascending or
Descending or A-Z or Z-A. If you want to arrange the members of a dimension
yourself, select Add Custom Order and drag the members into the correct order in
the Edit Member Order panel.
More information about custom sorting: Changing Member Order [page 1435].
Create Top N A Top N filter shows a specified number of the lowest or highest ranked members.
• Type
• Direction
• Apply to each dimension – shows the top values for each dimension instead
of for the group of dimensions.
When you have a hierarchical measure in your table, this option is enabled
and cannot be disabled.
• Value – the number of values you want to include in the filter.
• Related Dimensions – if required, select dimension members.
When a Top N filter is applied, text is displayed at the top of the table indicating
which column is filtered.
The visible content of the table depends on the underlying structure of the data.
All rows of data that have not been selected are hidden. If the data in the table is
hierarchical or aggregated, other dependent rows will also be visible.
Note

The Create Top N icon is active only when a suitable column of data is selected.
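At its core, the Top N options above (Mode and Value) rank members by a measure and keep only the highest or lowest entries. A minimal sketch of that ranking, not the product's filter engine:

```python
def top_n(data, value, mode="Top"):
    """Keep the `value` highest (mode="Top") or lowest (mode="Bottom")
    entries of a measure. `data` maps member -> measure value.
    Illustrative sketch of a Top N filter.
    """
    ranked = sorted(data.items(), key=lambda kv: kv[1], reverse=(mode == "Top"))
    return dict(ranked[:value])


sales = {"Town A": 5, "Town B": 9, "Town C": 1}
print(top_n(sales, 2))                  # the two highest members
print(top_n(sales, 2, mode="Bottom"))   # the two lowest members
```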
Related Information
After creating SAP Analytics Cloud charts, you can enhance those charts in many ways, including adding
features such as dynamic text or reference lines, and changing the color palette.
To modify your charts, you use the chart builder for some changes and the chart menus for other changes. For
information on the chart menus, see Chart Context Menu Options [page 1239].
Display Information
Related Information
You can add many features to your SAP Analytics Cloud charts, including axis breaks, error bars, or reference
lines.
Context
When a chart has one or two bars that are significantly longer than the rest, the chart can be difficult to read.
An axis break lets you shrink the longer bars and improve readability of the shorter ones.
• Bar/Column
• Waterfall
Note
• Axis break may not be available when the axis range is manually set.
If you set an axis break and then edit the axis range, the axis break may be removed.
• Selection is retained for drill-states.
• You can have only one axis break in a chart.
Procedure
Tip
To see the option, you may need to right-click and then select More Options > Break Axis.
Results
An axis break is added to the chart. To remove it, select a measure and then click Disable axis break.
Context
When a chart has widely spread bubbles or clusters of data points, it can be difficult to read. Axis line breaks help reduce the visual gap in the data, improving readability. You can apply and edit multiple axis line breaks on both axes.
Procedure
1. Select an axis.
4. (Optional) To change the range covered by the axis break, select an axis line break and select (Edit axis
line break).
Context
Error bars can be added to bar/column charts. They can provide a general idea of how precise a measurement is, or how far the reported value is from the error-free value. In your Bubble or Scatterplot chart, select the X or Y axis.
Procedure
Context
You can add a line or block of text appearing at the foot of a chart widget in your story.
Procedure
A footer section with the phrase Click to enter text is displayed in the chart.
Tip
If you don't see the footer option, in the action menu select More Options > Show/Hide > Footer.
Tip
You can add Smart Insight tokens that display dynamic text to chart footers. From your chart, choose (More Actions) > Add Smart Insights. The generated dynamic text in the footer corresponds to the first insight available for your data.
Open the Designer panel and select (Styling). Apply any of the following styling options to the text:
• Text Properties:
• Select the font style, size, and color.
• Select the paragraph justification.
• Add bulleted or numbered lists to text.
• Hyperlink: Link to another story or page, or to an external URL.
Context
You can define reference lines to show important values on your chart; for example, the average and maximum
prices of your company's products.
• Fixed reference lines are created with a specific reference value, and don't change when you change the
data in your chart (for example, if you filter your data).
• Dynamic reference lines are updated when filters, ranking, and sorting are applied to the chart. Dynamic
reference lines can be used with any measure, including story calculation measures.
When you add a reference line, you can choose to fill the background area above and below the line with color.
Reference lines are maintained when you change the chart type.
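The difference between fixed and dynamic reference lines can be sketched as follows: a fixed line keeps the value you entered, while a dynamic line (here, an average or maximum) is recomputed whenever the chart's data changes, for example after a filter. An illustrative sketch only, not the product's implementation:

```python
def reference_value(values, kind="average", fixed=None):
    """Return the value a reference line sits at.

    kind="fixed" keeps the entered value; "average" and "maximum" are
    dynamic and are recomputed from the chart's current data.
    Illustrative sketch only.
    """
    if kind == "fixed":
        return fixed
    if kind == "average":
        return sum(values) / len(values)
    if kind == "maximum":
        return max(values)
    raise ValueError(f"unknown reference line kind: {kind}")


prices = [10, 20, 30]
print(reference_value(prices))                      # dynamic average: 20.0
print(reference_value(prices[:2]))                  # after filtering: 15.0
print(reference_value(prices, "fixed", fixed=25))   # fixed line stays at 25
```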
Procedure
• On a canvas or responsive page, select a chart, right-click or select and then select Add
Reference Line .
Tip

If you don't see the reference line option, in the menu select More Options > Add Reference Line.
•
1. On a canvas or responsive page, select a chart, open the Designer and select Builder.
2. Find Chart Structure and select (Add/Remove Chart Components) > Add Reference Line.
• On a grid page, open the Examine panel, select (more), and then select Reference Lines.
• For a fixed reference line, enter a value and a label for the line.
• For a dynamic reference line, select the parameters you want to base the reference line on, and enter a
label for the line.
3. If you want to color the background areas above and below (or to the left and right of) the reference line,
choose colors in the Above and Below color-pickers.
If two reference lines try to set the same background area to different colors, the line created first takes
precedence.
4. If you have applied a Top N filter to the chart, and are adding a dynamic reference line, you can choose
whether you want to include the Top N options as a filter for the reference line.
Results
Select OK to add the reference line to the chart. You can hover over the reference line to see details about the
line.
Show the Compound Annual Growth Rate for SAP Analytics Cloud Bar/Column or Stacked charts.
Prerequisites
Note
Context
The Compound Annual Growth Rate (CAGR) is the ratio that provides a constant rate of return over the time
period. You can add a CAGR line to your Bar/Column or Stacked Bar/Column chart.
Procedure
Tip
If you don't see the CGR option, in the menu select More Options > Add > Add CGR.
A CAGR line is added to your chart. If you have changed the drill level for other dimensions, you may see
separate CAGR lines instead of a continuous one.
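The CAGR itself is the constant per-period growth rate that takes the first value to the last over the number of periods. A worked sketch of that formula (not the product's code):

```python
def cagr(begin, end, periods):
    """Compound Annual Growth Rate: the constant per-period rate that
    grows `begin` into `end` over `periods` periods.
    CAGR = (end / begin) ** (1 / periods) - 1
    """
    return (end / begin) ** (1 / periods) - 1


# 100 growing to 121 over 2 years is 10% per year, since 100 * 1.1 * 1.1 = 121.
print(f"{cagr(100, 121, 2):.1%}")
```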
You can add one or more time calculations to your chart without using the calculation editor.
Prerequisites
Your chart must contain a date dimension with the appropriate level of granularity. For example, if you want to
see quarter over quarter results, the date dimension must include quarterly or even monthly results.
Context
You can add a time calculation to a chart to show year over year changes in quantity sold.
Procedure
3. Select More > Add Time Calculation, and then select a time period.
Results
A new measure is created. The chart now includes a bar for the time calculation.
Note

When using acquired models created with SAP Analytics Cloud version 2021.02 or later, dynamic time calculations and variances should display data when the page and story filters are applied on the time dimensions. If you are experiencing issues with the displayed data, please refer to SAP Note 2994816.
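A Year over Year time calculation like the one described above compares each period with the same period one year earlier. This sketch assumes annual granularity purely for illustration:

```python
def year_over_year(series):
    """Percentage change of each year versus the previous year.

    `series` maps year -> value. Illustrative sketch of a Year over Year
    time calculation; the product also supports other granularities.
    """
    years = sorted(series)
    return {y: (series[y] - series[y - 1]) / series[y - 1] * 100
            for y in years[1:]}


qty_sold = {2022: 100, 2023: 110, 2024: 99}
print(year_over_year(qty_sold))  # growth in 2023, decline in 2024
```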
You can annotate data points in Bar, Line, and Bubble charts.
Context
You can pin a data point and add additional measures or comments to the pinned object. The pinned
information is displayed as a Numeric point chart that is linked to the data point.
Note
If you filter the data, the annotated chart will not be affected. However, if you add another measure to your
chart, you will break the link.
Procedure
4. With the numeric point chart selected, open (Builder) and choose measures to add to the annotation.
You cannot edit existing measures, but you can add more measures.
In SAP Analytics Cloud, you can add dynamic text to a chart title or subtitle.
Context
You can change the title of a chart to include dynamic text from elements that are in use in the chart, such as
filters, dimensions, scales, and so on.
You can scroll through all the options, or select an option from the left side to display specific objects on
the right.
Option Objects
Dimensions: For dimensions, you can also specify which hierarchy and level to display, and whether to display the ID or description.
Input Controls
Cross Calculation Input Controls (Optimized Story Experience): Cross calculation input controls should be based on classic account models.
Story Filters
Tile Filters & Variables: Tile (or widget) filters and variables include the filters and variables that are used in charts and tables, and (Optimized Story Experience) geo map filters.

Note

The tile filters and variables will not appear on mobile devices.
Model Variables
Model Properties (Optimized Story Experience): Last Refreshed Date and Time: You can add the last refreshed date and time of any SAP BW live data model used in your story.

Chart Properties: You can add various chart properties to the title or subtitle:
• Measures
• Dimensions
• Scales
• Units
Note

When the story is modified by someone who isn't a member of the team, a warning message is displayed instead of the team name. This dynamic text serves as a trust marker in the story.
Note

At view time, the global variables will change according to the customized scripts, and their values will be automatically updated in the text widget.
4. Select Create.
Results
Related Information
SAP Analytics Cloud story pages are more visually appealing when the chart axes are all aligned vertically and
horizontally.
Context
You can manually control the position of the axis – including the axis on variance charts – to create a visually
appealing story with nicely aligned visualizations. You can select either the vertical or horizontal axis and drag it
to a new position.
When you are moving the axis to a position that is almost aligned with an adjacent chart, snap lines appear.
When you release the cursor, the axis snaps to that aligned axis.
Note
Each time you drill to a different level in a chart, you will need to manually re-align the chart's axes.
Tip
The axis alignment behavior aligns all charts in a row or column with the vertical or horizontal axis that is
closest to the middle of the chart.
If the chart axes aren't aligning as expected, review the following items and adjust your charts:
The axis alignment functionality is available for the following chart types:
• Bar
• Column
• Line
• Combination bar or column and line
• Combination stacked bar or column and line
Procedure
As the axis comes closer to alignment with an adjacent chart axis, a dotted line appears on the chart. When
you release the cursor, the axis will snap to align with that adjacent chart axis.
Results
The two adjacent charts will now be aligned on one axis. You can repeat the process for other adjacent charts.
If you change your mind, select the axis that you just moved and then select (Reset axis line).
You can exclude non-relevant data points or filter data points to focus a chart on a specific set of data.
You can filter a chart by selecting data points or members directly on the chart, or you can use the builder
panel to choose members from a list or define a range (for date dimensions). The chart filters only apply to the
data displayed in that chart.
You can also apply filters to all charts on a page, or all charts in a story. For more information, see Story and
Page Filters [page 1518].
Note
If you add a dimension that contains a large number of members, a filter to restrict the number of members
added to the chart may be automatically applied. If you apply your own filter to the chart, all automatically
generated filters will be removed. You can manually remove an automatically generated filter using the
(Cancel) icon beside the filter. You can also edit the filter and save modifications.
When applying a filter to a custom property of the account dimension, the filter does not work as expected.
For example, calculated accounts and any accounts with child accounts that match the filter would be
included. See SAP Note 2931452 for information about the usage restriction.
Restriction
The following restrictions are known for tuple filtering with live connections for SAP BW, SAP BW/4HANA,
SAP BPC embedded and SAP S/4HANA:
Tuple filters on multiple dimensions are supported, with the following considerations:
Prerequisite: See SAP Note 2715030 for information about supported versions.
• Filters are created by selecting multiple data points in a widget (the created filters are read-only).
• The “Exclude selected members” option isn't available because the BW back-end doesn't support the
NOT operator.
• BPC live connection models with planning enabled are not supported.
• The Currency/Unit dimension cannot be filtered.
• Blending is not supported.
• Queries with two structures do not support tuple filters.
You can filter or exclude members by selecting data points directly on the chart.
When your chart axis has more than one dimension, the filter/exclude behavior is specific to the selected
dimension member combination.
Tip
You can drag a box around a group of data points to select the group.
The chart is updated to either only show the specific data points (filter) or to hide them (exclude).
Instead of filtering on or excluding specific data points within your chart, you can create filters on specific
dimension members.
When your chart axis has more than one dimension, the filter/exclude behavior is specific to the selected
member (or members) from a single dimension.
Example
For this example, we are using a bar chart with two dimensions. For simplicity, the chart dimensions are displayed as table columns, and neither the measures nor the data values are included.
Product and Location:

• Alcohol: Town A, Town B, Town C, Town D
• Carbonated Drinks: Town A, Town B, Town C, Town D
• Juices: Town A, Town B, Town C, Town D
• Others: Town A, Town B, Town C, Town D
The following table shows the difference between the Filter/Filter Member and Exclude/Exclude Member options.

• The action column lists the action and, where necessary, which member is selected for the action.
• The result column shows what you would see in a bar chart with two dimensions. For simplicity, the chart dimensions are displayed as table columns, and neither the measures nor the data values are included.
Filter: In the chart bar (table row) for the product "Alcohol", right-click "Town D" and select Filter. Only the selected product/location combination from the first bar (Alcohol/Town D) is displayed:

• Alcohol: Town D

Filter Member: In the chart bar (table row) for the product "Alcohol", right-click "Town D" and select Filter Member. All products are displayed with only one location (Town D) for each product:

• Alcohol: Town D
• Carbonated Drinks: Town D
• Juices: Town D
• Others: Town D

Exclude: In the chart bar (table row) for the product "Alcohol", right-click "Town B" and select Exclude. Only the selected value (Town B) from the first bar (Alcohol) is excluded/hidden:

• Alcohol: Town A, Town C, Town D
• Carbonated Drinks: Town A, Town B, Town C, Town D
• Juices: Town A, Town B, Town C, Town D
• Others: Town A, Town B, Town C, Town D

Exclude Member: In the chart bar (table row) for the product "Alcohol", right-click "Town B" and select Exclude Member. The selected value (Town B) is excluded/hidden from all products:

• Alcohol: Town A, Town C, Town D
• Carbonated Drinks: Town A, Town C, Town D
• Juices: Town A, Town C, Town D
• Others: Town A, Town C, Town D
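The four behaviors in the example above can be expressed compactly over the product/location pairs: Filter and Exclude act on the exact combination you clicked, while Filter Member and Exclude Member act on the clicked member across all products. An illustrative sketch, not the product's filter engine:

```python
# The example data set: every product carries all four locations.
DATA = [(p, t)
        for p in ("Alcohol", "Carbonated Drinks", "Juices", "Others")
        for t in ("Town A", "Town B", "Town C", "Town D")]

def filter_point(data, point):
    """Filter: keep only the selected product/location combination."""
    return [d for d in data if d == point]

def filter_member(data, member):
    """Filter Member: keep the selected location for every product."""
    return [(p, t) for p, t in data if t == member]

def exclude_point(data, point):
    """Exclude: hide only the selected product/location combination."""
    return [d for d in data if d != point]

def exclude_member(data, member):
    """Exclude Member: hide the selected location for every product."""
    return [(p, t) for p, t in data if t != member]


print(len(filter_point(DATA, ("Alcohol", "Town D"))))    # 1 bar remains
print(len(filter_member(DATA, "Town D")))                # 4 bars, one per product
print(len(exclude_point(DATA, ("Alcohol", "Town B"))))   # 15 of 16 remain
print(len(exclude_member(DATA, "Town B")))               # 12 remain
```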
You can use the Builder panel to filter data by choosing members from a list or by defining a range.
Tip

Viewers can reset any changes that they made to filters and input controls to get the original view of the story.

Choosing members: Select members from the Available Members list. The members you choose appear in the Selected Members list on the right. You can use the Search function to find the members you want.

Defining a range: Select a Dynamic or Fixed range type. Date ranges can be fixed or dynamic; for example, you could choose the fixed range January 2019 to December 2019. If this story is opened in 2020, the story will still show 2019 data. Dynamic date ranges shift based on the current date. They also offer a few more granularities, such as current year, current quarter, and current month, as well as ranges that are offset from the current date.

If the time dimension is added to the chart, you can also select the (Filter) icon next to it in the Builder panel to choose from preset dynamic filters such as Current Month or Current & Next Quarter To Date.

For more information, see Story and Page Filters [page 1518].

Allowing modifications: Select Allow viewers to modify selections. If you allow viewers to modify filter selections, they can either toggle on and off each filter value (with the Multiple Selection option), or select a single filter value (with the Single Selection option).

Changing drill levels for date range filters: Select Unrestricted Drilling. Unrestricted drilling lets you drill to any level in the hierarchy, no matter what the filter or date granularity is set to.

Note

The Unrestricted Drilling option is only available for date range filters in charts and tables.
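The difference between fixed and dynamic ranges can be made concrete: a fixed range such as January 2019 to December 2019 never changes, while a dynamic range like "current quarter" is resolved against today's date each time the story opens. A sketch of that resolution, purely for illustration:

```python
import datetime

def current_quarter(today):
    """Resolve the dynamic range 'Current Quarter' to concrete start and
    end dates for the given day. Illustrative sketch only.
    """
    quarter = (today.month - 1) // 3          # 0..3
    start = datetime.date(today.year, quarter * 3 + 1, 1)
    end_month = quarter * 3 + 3
    if end_month == 12:
        end = datetime.date(today.year, 12, 31)
    else:
        # Last day of the quarter = day before the next month starts.
        end = datetime.date(today.year, end_month + 1, 1) - datetime.timedelta(days=1)
    return start, end


# Opened on 14 May 2020, 'Current Quarter' means Q2 2020.
print(current_quarter(datetime.date(2020, 5, 14)))
```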
You can exclude non-relevant data points or filter data points to focus a chart on a specific set of data.
Context
You can filter a chart by selecting members directly on the chart, by choosing members from a list, or for
certain types of dimensions (for example, date dimensions), by defining a range. Chart filters apply only to the
data displayed in that chart.
You can also apply filters to all charts on a page, or all charts in a story. For more information, see Story and
Page Filters [page 1518].
Note
If you add a dimension that contains a large number of members, a filter to restrict the number of members
added to the chart may be automatically applied. If you apply your own filter to the chart, all automatically
generated filters will be removed. You can manually remove an automatically generated filter using the
(Cancel) icon beside the filter. You can also edit the filter and save modifications.
Restriction
When applying a filter to a custom property of the account dimension, the filter does not work as expected.
For example, calculated accounts and any accounts with child accounts that match the filter would be
included. See SAP Note 2931452 for information about the usage restriction.
Restriction
The following restrictions are known for tuple filtering with live connections for SAP BW, SAP BW/4HANA,
SAP BPC embedded and SAP S/4HANA:
Tuple filters on multiple dimensions are supported, with the following considerations:
Prerequisite: See SAP Note 2715030 for information about supported versions.
• Filters are created by selecting multiple data points in a widget (the created filters are read-only).
• The “Exclude selected members” option isn't available because the BW back-end doesn't support the
NOT operator.
• BPC live connection models with planning enabled are not supported.
• The Currency/Unit dimension cannot be filtered.
• Blending is not supported.
• Queries with two structures do not support tuple filters.
Tip
You can drag a box around a group of data points to select the group.
Some types of dimensions, for example date dimensions, can be filtered by choosing members or by
defining a range. Those dimensions appear twice in the list, with (Member) and (Range) suffixes.
Tip
Viewers can reset any changes that they made to filters and input controls to get the original view
Choosing members: select mem- The members you choose appear in the Selected Members list on the right.
bers from the Available Members
You can use the Search function to find the members you want.
list.
Defining a range: select a Dynamic Date ranges can be fixed or dynamic; for example, you could choose the
or Fixed range type. fixed range January 2019 to December 2019. If this story is opened in 2020,
the story will still show 2019 data. Dynamic date ranges shift based on the
current date. They also offer a few more granularities, such as current year,
current quarter, and current month, as well as ranges that are offset from
the current date.
If the time dimension is added to the chart, you can also select the
(Filter) icon next to it in the Builder panel to choose from preset dynamic
filters such as Current Month or Current & Next Quarter To Date.
For more information, see Story and Page Filters [page 1518].
Allowing modifications: select Allow viewers to modify selections. If you allow viewers to modify filter selections, they can either toggle each filter value on and off (with the Multiple Selection option), or select a single filter value (with the Single Selection option).
Changing drill levels for date range filters: select Unrestricted Drilling. Unrestricted drilling lets you drill to any level in the hierarchy, no matter what the filter or date granularity is set to.
Note
The Unrestricted Drilling option is only available for date range filters in
charts and tables.
The filter appears at the top of the chart, and in the Filters area in the Builder tab.
Related Information
Prerequisites
To assign colors to specific members of a dimension, you must be on a story page and have the conditional formatting menu open.
You also need at least one chart that uses the dimension in its Color section.
Context
When a color is assigned to a specific dimension member, that color will override any changes to the color
palette for a chart. Members that do not have an assigned color will change color when the color palette is
updated.
The Assign Colors panel appears, displaying a model, a dimension, and the dimension members.
4. Choose your model.
5. Choose a dimension.
6. In the Dimension Members section, select a member's color option list and change the color.
If you do not want to assign a color to a member, clear the check box. That member will now use the color
palette from the chart builder.
7. When you have finished assigning colors, click Apply.
8. To close the Conditional Formatting panel, click Done.
Related Information
Changing Color Palettes and Synchronizing Colors Across Charts [page 1273]
You can change the color palette for one chart in your story, or for multiple charts.
You can use the provided color palettes for your chart, or you can follow these steps to create your own palette.
1. In Builder, select the down-arrow in the color-picker, and then select Create New Palette.
2. In the Create New Palette dialog, choose a color selector and then define a custom color for it.
You can maintain a maximum of nine custom-color selectors. You can reuse custom-color palettes across
multiple charts in a story.
When the color palette for one chart in a story is changed, all the other charts that have the same color
dimension or member will also be updated. This applies to charts that have a single dimension or measure for
Color.
You can use one or more Dimensions or a single Measure for the color parameter in Bubble charts and Bar
charts. For other charts, you can only use Dimensions.
Note
When you add a Measure to the Color parameter, any existing Dimensions or Measure will be replaced with
the new Measure. You can have only one member in Color when that member is a Measure.
Note
When you have a linked dimension as your color dimension, you cannot assign colors for individual
members or create a new color palette.
After you change the color palette for one chart, you can update the color for the rest of the charts in the story.
3. Find your Color dimension and then choose (More) Color Sync Sync Colors .
Unsynchronize Colors
When you synchronize charts with the same color dimension, all the charts in that story will use the same color
palette.
If you want to use different color palettes for some charts, you can either change the color dimension or do the
following:
3. Find your Color dimension and then choose (More) Color Sync Unsync Colors .
You can choose a new color palette, or you can change the color for one or more members in the color
dimension.
3. Find your Color dimension and then choose (More) Color Sync Assign Colors .
4. Find the desired dimension member and choose a new color for it.
5. When you have finished making changes, choose Apply.
The dimension member color will be changed for all the charts that use that dimension in the Color parameter.
You can select two data points in a chart to quickly see the difference between their values. For example, you
can compare a year-over-year percentage increase or decrease, or see the delta between two versions.
Context
An arrow indicates which of the two columns was selected second, and the value is displayed both numerically
and as a percentage. For vertical charts, the delta graphic and the value are displayed to the right of the
selected columns. For horizontal charts, the delta graphic is displayed below the selected bars, and the value is
displayed on the second selected bar.
If the second column is taller than the first, the delta is positive; if it is shorter, the delta is negative.
Procedure
Results
A line is drawn between the two columns and the difference is displayed.
Tip
If the action menu is hiding the result, move the cursor to a blank spot on the axis and click. The action
menu will be hidden.
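The arithmetic behind the displayed delta is a simple difference plus a relative change; a minimal sketch with hypothetical values:

```python
# Two selected data points (hypothetical values, e.g. this year vs. last year)
first, second = 120.0, 150.0

delta = second - first                  # absolute difference shown on the chart
pct_change = delta / abs(first) * 100   # percentage shown alongside it
# delta == 30.0, pct_change == 25.0
```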
You can show the dimension Description, ID, or ID and Description in a chart, or you can show dimension values
in a bubble or scatterplot chart.
• You can show the dimension information (description or ID) in your chart.
• If you have a bubble or scatterplot chart, you can show the dimension values on the data points.
1. Select a chart.
3. Move your cursor to the desired dimension label and then click Display As .
4. Select one of the options:
• Description
• ID
• ID and Description
Note
When you change how the dimension information is displayed, the corresponding axis label, legend, or
tooltip is updated. The updated display will persist even if you change the chart type.
Note
If you remove the dimension from the chart, the information display will revert to its default setting.
2. From the chart action menu ( ), select More Options Show/Hide Dimension Labels .
Note
If you adjust the scale of the chart, that will impact the space that is available for the labels.
Note
If you show both the dimension labels and the data labels, the displayed information will show <dimension
value>(<size, measure value>).
Related Information
As a story editor, you may want to remove dimension points where all measure values are zero, or a measure
series that is all zero.
Note
Not all data points with zero value will necessarily be suppressed. All measures on a given dimension point
must be zero, or the entire measures series must be zero, to be suppressed. (In a table, this would be the
same as having zero values for all the measures in a row or column.)
Note
Unbooked Data and Zero Suppression are mutually exclusive, meaning that both options can't be applied at
the same time. If you select Zero Suppression while Unbooked Data is already selected, a message appears
informing you of this, and asking you if you want to turn off Unbooked Data. Once you select OK, the Zero
Suppression option is applied to the chart.
One disadvantage of using zero suppression is that it may alter results when you are using ranking. In those
situations, you could use a cross calculation instead.
You could use the following formula in a cross calculation to remove zero value measures:
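A hypothetical sketch of such a formula (the measure name [Revenue] and the exact operator and NULL syntax are assumptions; verify them in your tenant's formula editor):

```
IF([Revenue] != 0, [Revenue], NULL)
```

The intent is to map zero values to NULL so that they drop out of the chart.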
Zero suppression can also be replaced with a page or story filter that filters out zeros.
Note
If you use a page or story filter, you may get a warning about performance.
When a chart is reloading, the last successful refresh of the chart (if available) is shown until all the data has
been refreshed and displayed.
When the rendering option is enabled, a version of the chart is cached (for an hour) and then displayed while
the data is being retrieved and refreshed. If you try to interact with the cached chart, you will see the loading
icon.
Usage Restrictions
• Caching is only available for Story Embed Mode and Digital Boardroom. Opening a story in Edit mode (for
example, when creating or updating the story) will not cause the data to be cached.
• Data is cached and encrypted for up to an hour. After this period, you will need to refresh to take advantage
of the cached data.
• The cache only includes data that is currently visible in the story when it is loaded on screen.
For example, if a user opens their story within Story Embed Mode and then decides to modify a chart to
display different or additional data, the resulting data will not be the same as the cached data.
2. Select (Edit).
3. Set the Enable Progressive Chart Rendering toggle ON.
You can reorder how measures or versions are displayed in a chart, including displaying them as overlapping
bars.
You can use the chart Builder panel to reorder measures or versions in a chart.
Note
To reorder measures or versions, you should have a bar chart that has at least two measures or at least two
versions.
Note
You can have versions if you are using a planning model. For more information on creating and managing
versions of your planning data, see Create, Publish, and Manage Versions of Planning Data [page 2170].
Tip
You can change the colors and patterns of the bars in your chart in the Color section of the Builder
panel.
You can display your measures or versions as overlapping bars. You can also change which measure or version
is displayed at the front of your chart. To move a bar to the front, select Show As Layered Bars Front in the Color section.
Note
If you have measures showing as overlapping bars in the Color section and you add a dimension to this
section, the chart changes to show the measures as separate bars. When you remove the dimension from
Color, the measures are again displayed as overlapping bars.
You can also change the bars to triangles by selecting Show As Triangle in the Color section.
Note
When you change all the measures or versions to triangles, at least one of the measures or versions stays as
a bar.
Context
You can use a time series or line chart to automatically predict future data values based on the data included in
your chart definition. The creation of a forecast requires a measure and a date dimension, along with additional
options that can be integrated into the visualization:
• Multiple measures can be used to compare predicted values across these measures.
• Coloring by dimensions can be used to compare predicted values across dimension values.
Once data is loaded into the chart, you can perform any of the following forecasting options:
• Automatic Forecast: performs a predictive forecast on the available data. With this option, you can specify
how many forecast periods to display in the chart.
• Advanced Options: enables you to select one of the following forecasting scenarios:
• Linear Regression: use the Linear Regression algorithm on historical time series data to predict future
values.
• Triple Exponential Smoothing: use the Triple Exponential Smoothing algorithm to account for seasonal
changes as well as trends.
• Add Additional Inputs: use calculated measures, measure input controls, or additional measures from
the current model as inputs when creating the forecast. This option is available only for Time Series
charts.
Note
To avoid errors when using Additional Inputs, ensure that data is available for both your original
input data periods as well as the future period(s) you would like to forecast.
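For background on the second option, additive triple exponential smoothing (Holt-Winters) can be sketched in plain Python. This is a generic illustration of the algorithm family, not SAP's implementation; the smoothing parameters and sample data below are assumptions:

```python
def triple_exponential_smoothing(series, season_len, alpha, beta, gamma, n_forecast):
    """Additive Holt-Winters: returns one-step fitted values plus n_forecast predictions."""
    # Initial trend: average per-period change between the first two seasons
    initial_trend = sum(
        (series[i + season_len] - series[i]) / season_len for i in range(season_len)
    ) / season_len
    # Initial seasonal components: deviation from the first season's average
    season_avg = sum(series[:season_len]) / season_len
    seasonals = [series[i] - season_avg for i in range(season_len)]

    result = []
    level, trend = series[0], initial_trend
    for i in range(len(series) + n_forecast):
        s = i % season_len
        if i < len(series):
            value, last_level = series[i], level
            level = alpha * (value - seasonals[s]) + (1 - alpha) * (level + trend)
            trend = beta * (level - last_level) + (1 - beta) * trend
            seasonals[s] = gamma * (value - level) + (1 - gamma) * seasonals[s]
            result.append(level + trend + seasonals[s])       # one-step-ahead fit
        else:
            m = i - len(series) + 1
            result.append(level + m * trend + seasonals[s])   # out-of-sample forecast
    return result

# Hypothetical values with a 4-period seasonal pattern and an upward trend
history = [10, 20, 30, 40, 12, 22, 32, 42, 14, 24, 34, 44]
output = triple_exponential_smoothing(history, season_len=4, alpha=0.5,
                                      beta=0.3, gamma=0.2, n_forecast=4)
forecast = output[len(history):]  # the 4 predicted periods
```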
Note
Before you can create a forecast on a time series chart using a remote model from an SAP HANA, SAP BW,
SAP Universe, or SAP S/4HANA system, your administrator must configure the following option: Enable
Time Series Forecast and Smart Grouping on Live Data Models. The data will be processed on SAP servers.
You cannot run forecasts on Line charts using remote models.
The chart will display the following information, either in the chart or in the tooltip:
• Predicted values
• Upper and lower limits for the confidence interval
5. From your chart action menu ( ) choose Add Forecast , and then select one of the following:
• Automatic Forecast
Once the forecast is performed, a color-coded Forecast link is displayed above the chart. The link
is blue if the forecast was performed successfully with no data issues. The link is gray if the forecast
quality is low, and red if the forecast was unsuccessful. Select the link to display a Forecast Periods
slider for the number of displayed periods and the Forecast Quality for the configured data.
• Advanced Options
Choose one of the following advanced options for running your forecast:
• Triple Exponential Smoothing
Select this option to run the forecast using the Triple Exponential Smoothing algorithm to account
for seasonal changes as well as trends. Once the forecast is performed, a color-coded Forecast
link is displayed above the chart. The link is blue if the forecast was performed successfully with
no data issues. The link is orange if the forecast was partially successful, and red if the forecast
was unsuccessful. Select the link to display a Forecast Periods slider for the number of displayed
periods and the Forecast Quality for the configured data.
• Add Additional Inputs...
Results
The resulting chart displays the forecast values and the confidence interval for the forecast. To show or hide the
Related Information
In SAP Analytics Cloud, you can scale your charts so that measures have the same scale across multiple
charts.
Context
If multiple charts in a story contain the same measure, the measure values may be scaled differently in
different charts, which can make comparisons difficult.
Note
• Chart scaling doesn't force charts to have the same axis range.
For example, if one chart on a page has negative values but other charts don't, only the one chart will
include negative values when scaled, and the 0 coordinates won't be in the same location across all
charts.
To add a negative value range to other charts (for visual alignment), you need to manually edit the axis
in those charts.
For information on editing the chart axis, see Modify Axes on Single or Dual Axis Charts [page 1284].
• Chart scaling is applied to all of the pages in a story, but the scaling may be different for the same
measure on different pages, because the scaling factor is calculated separately for each page.
• You can exclude charts from the scaling.
For example, you may want to exclude a chart that contains data that is much larger than the data in
other charts, because it makes the other charts look small.
• When you have multiple hierarchies in an account dimension, you can only apply scaling to charts that
use the default account hierarchy.
1. Open the story that contains the charts you want to scale, and on the toolbar select (Chart Scaling).
2. Select the measures that you want to scale.
The affected charts in the story are rescaled automatically, and an axis scaling rule is created.
Note
Once a measure is included in an axis scaling rule, that measure can't be included in another axis
scaling rule.
3. To set the bar widths in the scaled charts to a consistent size, hover over the axis scaling rule you just created.
Type the number of pixels you'd like to set the bar width to. To revert back to the default widths, delete the
number you typed.
When conflicting bar widths are defined, the width rule of the first measure on the axis is applied to the
overall chart.
When an axis scaling rule is deleted, any associated bar width setting is also deleted, and the bar widths
revert to the default sizes.
4. To exclude a chart from scaling, do the following:
a. Select the chart, and then select More Actions Edit Styling .
b. In the Styling panel, expand Chart Properties, and then set Break Scale to On.
5. To change which measures are scaled, hover over a measure in the Scaled Entities list, and select (Edit
Scaling).
6. To remove scaling for a measure, hover over the measure in the Scaled Entities list, and select (Remove
Scaling).
In SAP Analytics Cloud, you can change the axis range of a chart, or you can synchronize axes on a dual-axis
chart.
By default, charts include the zero axis even if there are no values close to zero. This can leave a lot of white
space in your chart.
To minimize whitespace, you can manually set ranges for your values, or you can let the system dynamically set
the minimum and maximum values.
The bubble chart Y and X axis intersect at 0, but your data starts at 50 million on one axis and 0.05 million
on the other. Even with scaling, there is still some unused space near the intersection. By changing the axis
minimum value, you can eliminate more of that empty space.
You can set the axis range for multiple chart types.
Stacked Area
Stacked Bar/Column
Waterfall
Box Plot
Line
Bubble
Scatterplot
Tip
To reset the chart back to the default axis values, in the Edit Axis dialog, for each axis select (Reset to
auto).
When you have charts that have measures on two different axes (for example, a Line chart or a Combination
Column & Line chart) and those measures are of different scales, the data can be confusing. You can
synchronize the chart scaling so that the measures are on the same scale.
Example
Your gross margin or left axis scale goes to 100 million, but the sales price or right axis scale goes to 600
million.
When you synchronize the axes, you can clearly see the values for each measure.
Context
You can choose to include unbooked data in your chart. Booked data shows a label for the data point, but
unbooked data does not have a data label. For line charts, displaying the unbooked data breaks the line at the
unbooked values.
• Bar/Column
• Stacked Bar/Column
• Combination Column & Line
• Combination Stacked Column & Line
• Area
• Line
• Numeric Point
The Numeric Point chart shows a dash for the unbooked data. The other charts depict the unbooked data in
the title area as a quantity of Null values.
Procedure
2. In the Builder, select a dimension and then select (More) Unbooked Data .
The chart is updated to include unbooked data for that dimension, and any dimensions listed below
that dimension in the Builder. A message appears stating that unbooked data has been switched on for
dimensions below the selected dimension.
3. To clear unbooked data from your chart, in the Builder select a dimension that has unbooked data
The chart is updated to remove unbooked data from that dimension and any dimensions listed above
that dimension in the Builder. A message appears stating that unbooked data has been switched off for
dimensions above the selected dimension.
Results
When unbooked data is switched on, the unbooked values are listed in the chart title area as a number of Nulls.
You can hide this information by selecting More Actions Show/Hide Null Token .
In SAP Analytics Cloud, you can show dimension totals for bar/column, stacked bar/column or marimekko
charts.
You can enable totals for dimensions in several areas in the Builder panel:
• Dimensions
• Color
• Trellis
The totals can appear as either the first or last data point in your chart.
Note
If your SAP BW query has totals enabled, then totals will be enabled by default when using the query in SAP
Analytics Cloud charts.
By default in a new story, the totals data point is last in the chart. Use the following process to make it the first
data point.
Related Information
In SAP Analytics Cloud, you can show or hide total values, or display the net total values in your stacked or
Marimekko charts.
You can show or hide total values for Stacked Bar/Column, Combination Stacked Column & Line, and
Marimekko charts by doing the following:
1. In Edit mode, open the chart context menu by doing one of the following:
• Right-click your chart.
Tip
If you don't see the Show/Hide option in the first menu level, then select More Options Show/Hide
Totals .
If there are positive and negative values for a bar in your chart, the totals of the positive values and negative
values are displayed separately. For Stacked Bar/Column and Combination Stacked Column & Line charts, you
can choose to display the net total values instead.
For more information, see How to Show Net Total Values [page 1290].
1. Select your Stacked Bar/Column or Combination Stacked Column & Line chart.
Related Information
You can sort measures and dimensions in charts, or you can use Break Grouping to change how measures and
dimensions are sorted in a chart with multiple dimensions on the categories axis.
You can specify the sort order for data in your chart for measures and dimensions: alphabetical ascending or
descending, highest to lowest, and so on. You can even sort on multiple dimensions within the chart.
Note
While you can sort your data on multiple dimensions, you can't sort on dimensions and measures together,
and you can only sort on one measure at a time.
Sometimes neither an ascending nor a descending (nor a highest-to-lowest) sort will give you the results that you want.
In those situations, you can create a custom sort order.
Example
You have a chart that you would like to sort in the following way:
• Very High
• High
• Medium
• Low
2. From the action menu ( ) select Sort, select the dimension that you want to sort, and then select Add
Custom Order.
3. In the Edit Member Order dialog box, provide a name for the custom sort.
4. Drag the members to rearrange them.
5. (Optional) Select Preview to see how the members would be rearranged.
6. When you are finished, select OK.
The following options help you find members to use in your custom sort.
Restriction
When you use Rank to show the top N values, the chart will appear to have fewer visible members.
However, the custom sort order visibility is not affected by the ranking order, which means that there
may be more than N members in the Edit Member Order list.
2. Select the dimension and then select the edit icon ( ) for the custom order.
3. When you've finished making changes, select OK.
Use advanced sorting when you want to sort on a measure that isn't included in the chart or you want to select
a different cross calculation.
When you have multiple dimensions on the categories axis, use Break Grouping to ignore the grouping order
and change how the dimensions and measures are sorted.
The Break Grouping option is available to use on a multi-dimension chart if either a measure or the innermost
dimension is sorted.
To enable or disable Break Grouping, from the chart action menu ( ) select Sort Break Grouping .
Example
You have a Profit and Loss model with the following measure and dimensions:
Type Name
Dimension Product
Dimension Region
Related Information
Prerequisites
Create a threshold value. For more information, see Creating Story-Defined Thresholds (Optimized Story
Experience) [page 1511].
Context
Note
If a chart has a cross calculation, the mobile app won't be able to apply a threshold to that chart.
1. Select a chart.
2. Add a threshold in one of the following ways:
Option Description
To the Color section 1. In the Builder panel, under Color, select Add Dimension / Threshold.
2. In the Thresholds section, select the desired threshold.
Note
Results
The threshold will be applied to the chart. When you move the cursor over a value, the tooltip will display the
range and threshold name. You can also show the threshold names in the chart legend area.
Related Information
In a chart with hierarchical data, you can change the hierarchical level or expand a hierarchy.
If you have hierarchical data in your chart, you can explore the data at different levels. Depending on the type of
chart you have, you can do the following:
• Chart with at least one hierarchical dimension: you can expand a hierarchy in your chart to display
multiple levels of members at the same time, or you can drill up and down through the hierarchical
dimensions.
• Waterfall chart with hierarchical values: you can expand or collapse the chart to explore the data at
different levels.
• Time Series chart: you can change the hierarchy level to see the data aggregation at different granularity
settings.
Note
You can expand dimension hierarchies in a chart to view finer granularity of the data. You can also display more
than one level of the hierarchy at the same time, similar to how expanded levels are displayed in a table.
The Drill dialog gives you control over which level of data is displayed as well as whether to include the
parent levels.
The Expand/Collapse option within the chart expands or collapses data by one level each time you select it.
Tip
When you have a linked analysis set in your story and you expand or drill into a chart, all the charts with the
same hierarchy level will be refreshed to show the new level.
Note
When you add a filter to the chart, the hierarchy level is reset to level 1. The drill level is now at level 1 relative
to the filter.
For example, your date dimension shows months as “Jan”, “Feb”, “Mar”, and so on. You can choose to now
include the year (if applicable) in parentheses: “Jan (2022)”, “Feb (2022)”, “Mar (2022)”.
Note
The date clarification feature is not available for the following conditions:
If hierarchical dimensions are included in a chart, you can drill up or down through dimensions to explore the
data at different levels.
If the chart contains more than one hierarchical dimension, you can select which dimension to drill into.
Note
1. In the Builder panel, navigate to the Chart Add-Ons section, and then select Trellis.
2. In the Trellis section, select Add Dimension and then select your hierarchical dimension.
2. Right-click and select (Drill Down) or (Drill Up), and then select the dimension to drill on.
You can also show all levels of the hierarchy by choosing Expand/Collapse.
3. To reset the drill option, right-click in the chart, select Drill, select a dimension, and then select Reset Drill.
Note
Selecting Reset Drill resets the drill level to 1, not to any other level that you may have set.
If hierarchical values are included in a Waterfall chart, you can expand or collapse the chart to explore the data
at different levels.
• Chart has only one Account measure: select (Expand Account or Collapse Account).
• Chart has a measure and one or more dimensions: select (Expand or Collapse) and from the list,
select the measure or dimension.
• Set one or more data points to be the intermediate or total sum: select ( ).
3. To reset the drill option, right-click in the chart, select Drill, select an account, and then select Reset Drill, or
select All Levels.
You can change the hierarchy level in a Time Series chart to see the data aggregation at different granularity
settings.
In your story, you can change the hierarchy level of a Time Series chart to show Year, Quarter, Month, and so on.
When you change from a more granular to a less granular interval such as from Month to Quarter, the new view
displays an aggregated value for the interval.
3. If the level of granularity that you want is not available, in the Builder, find the date dimension and then
Related Information
In a chart with hierarchical data, you can change the hierarchical level or expand a hierarchy.
If you have hierarchical data in your chart, you can explore the data at different levels. Depending on the type of
chart you have, you can do the following:
• Chart with at least one hierarchical dimension: you can expand a hierarchy in your chart to display
multiple levels of members at the same time, or you can drill up and down through the hierarchical
dimensions.
• Waterfall chart with hierarchical values: you can expand or collapse the chart to explore the data at
different levels.
• Time Series chart: you can change the hierarchy level to see the data aggregation at different granularity
settings.
Note
You can expand dimension hierarchies in a chart to view finer granularity of the data. You can also display more
than one level of the hierarchy at the same time, similar to how expanded levels are displayed in a table.
The Drill dialog gives you control over which level of data is displayed as well as whether to include the
parent levels.
The Expand/Collapse option within the chart expands or collapses data by one level each time you select it.
Tip
When you have a linked analysis set in your story and you expand or drill into a chart, all the charts with the
same hierarchy level will be refreshed to show the new level.
Note
When you add a filter to the chart, the hierarchy level is reset to level 1. The drill level is now at level 1 relative
to the filter.
If hierarchical dimensions are included in a chart, you can drill up or down through dimensions to explore the
data at different levels.
If the chart contains more than one hierarchical dimension, you can select which dimension to drill into.
Note
2. Right-click and select Drill Down or Drill Up, and then select the dimension to drill on.
You can also show all levels of the hierarchy by choosing Expand/Collapse.
3. To reset the drill option, in the title area for the chart select (Set Drill) Reset .
Note
Selecting Reset will reset the drill level to 1, not to any other level that you may have set.
If hierarchical values are included in a Waterfall chart, you can expand or collapse the chart to explore the data
at different levels.
• Chart has only one Account measure: select (Expand Account or Collapse Account).
• Chart has a measure and one or more dimensions: select (Expand or Collapse) and from the list,
select the measure or dimension.
• Set one or more data points to be the intermediate or total sum: select ( ).
3. To remove the drill option, in the title area for the chart select (Set Drill) and then choose (Reset).
You can change the hierarchy level in a Time Series chart to see the data aggregation at different granularity
settings.
In your story, you can change the hierarchy level of a Time Series chart to show Year, Quarter, Month, and so on.
When you change from a more granular to a less granular interval such as from Month to Quarter, the new view
displays an aggregated value for the interval.
2. In the Chart Details area, select (Hierarchy) and select a different level of granularity.
3. If the level of granularity that you want is not available, in the Builder, find the date dimension and then
Related Information
Context
When you have a chart with a lot of data points, you may want to limit the focus to show only a few values at a
time.
Note
Zoom is not available for all chart types. For example, while you can select a few data points in a waterfall
chart, you won't be able to zoom in on that data.
Procedure
Examples of data points are the bars in a bar chart or the bubbles in a bubble chart.
The selected data points are highlighted in the center of the chart. Also, the zoom out icon is added to the
chart's subtitle.
4. To view other data points, use the scroll bars to move through the rest of the data.
Next Steps
Solutions and suggestions for common errors in SAP Analytics Cloud charts.
Use the following steps to simplify your chart and identify the cause of the error:
• Create a table alongside the chart using the exact same model, linked dimensions, and structure as the
chart; verify if the values are refreshed.
• Explore the measures directly using the Data View; this reduces interference from filters and other specific
chart features.
• Copy the chart to a new canvas page and try to remove as many of the following as you can: measures,
dimensions, and active filters.
Error Message: Top N has been automatically applied to the chart because
your data selection has too many values.
Applying the Top N filter automatically is meant to help you avoid having too many overlapping data points in
your charts while giving you enough data to allow you to adjust the filters or the structure of your charts.
The automatic Top N value is triggered at a different number of records for each chart type, but you can
override the value if you manually set a Top N value that is larger than the default value.
For stacked chart types there is a stack or grouping limit that may be different from the automatic limit.
The following table shows the Top N trigger values (number of records) that are valid as of version 2020.02
Donut Chart 10 10 No
Line 500 50 No
Marimekko 25 500 20 No
Pie Chart 10 10 No
Instead of the Top N sort, an “Auto Limit” sort may be applied to your chart if the following conditions exist:
• No rank.
• A time dimension exists and is bound to the outer-most dimension.
• The chart supports Sort.
• The chart has no existing sort or a sort exists on the time dimension.
If the auto limit is triggered, the chart automatically queries the last N values, sorted in ascending order by the Time dimension, provided that the Time dimension exists and is the outer-most dimension.
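As a conceptual sketch (not the actual SAP Analytics Cloud implementation), the auto-limit behavior described above amounts to sorting the records ascending by the time dimension and keeping only the last N values; the sample data below is hypothetical:

```python
def auto_limit(records, n):
    """Sketch of the auto-limit behavior: sort ascending by the time
    dimension and keep only the last n records (the most recent ones)."""
    ordered = sorted(records, key=lambda r: r["time"])  # ascending by time
    return ordered[-n:]

# Hypothetical chart data, one record per month.
months = [
    {"time": "2020-03", "value": 30},
    {"time": "2020-01", "value": 10},
    {"time": "2020-04", "value": 40},
    {"time": "2020-02", "value": 20},
]
print(auto_limit(months, 2))  # keeps the two most recent months, ascending
```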
This message is likely due to a problem with the underlying models used in the story:
• If you have multiple models, test one by one using the Data View option.
• If you are using Live models, verify that the connection to the data source works as expected.
This message is displayed when the chart encounters an unexpected exception during rendering.
To speed up troubleshooting, capture the following information using Chrome Developer tools (F12 or Ctrl+Shift+I):
1. With Chrome Developer tools open, attempt to refresh the specific chart that fails.
2. Expand the error being thrown and capture the entire stack trace.
3. Attach these traces to the Product Support incident along with any data on the structure and formatting
settings of the chart.
This behavior could be due to many different reasons, but it's possible that a component at a lower level has
changed and now returns different data.
The story was likely saved when the chart was in an error state.
Problem: Chart does not display all of the expected data labels
When designing a chart, some of the labels may not appear when the chart is displayed.
• Labels in column charts are too long and span outside of the category, colliding with adjacent labels.
• For overlapping data-points in a bar/column chart, only the largest absolute value of the overlapping labels
will be shown.
This behavior can be modified by changing the Data Label styling options as explained in the help guide.
For more information, see Formatting and Styling Items on a Story Canvas [page 1589].
• Labels may be hitting or surpassing the bounds of the chart's rendering area. In charts such as line and combined bar/column+line charts, the chart attempts to render the labels in new positions close to the data point. If that is not possible, the label is hidden.
Note
If labels are overlapping, then the Avoid Data Label Overlap option has probably been unchecked in the Data
Label styling options area for that chart. Re-enable that option to prevent labels from overlapping.
When you export a story to PDF, some of the charts in the PDF are blank.
When processing the export to PDF, each widget (chart, table, and so on) has a default timeout setting of three
minutes. If it takes longer than three minutes to render the widget, the widget will be blank in the PDF.
You can ask your support person to enable a toggle for a longer rendering period
(WIDGET_RENDER_COMPLETE_LONG_TIMEOUT), but be aware that longer timeout periods may affect
system performance.
Note
For BW to work with SAP Analytics Cloud, you should have the following patches applied: 2715030
Also, one of the following notes might help you to resolve your BW issues.
• SAP Note 3013551: BW BEx TOP N Condition conflicts with sorting on an SAC Chart
• SAP Knowledge Base Article 3048595: Rank Option is greyed out when using a BW connection in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 3068353: Stacked Chart won't sort on total in SAP Analytics Cloud (SAC)
• SAP Note 3041816: INA: Incorrect parent indexes for flat presentation (value instead of -1)
One of the following notes might help you to resolve your issues.
• SAP Knowledge Base Article 2947417: +0 and -0 shows on charts when the plus zero / minus zero option is not selected in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 2872808: Sort does not work in Heat Map chart in SAP Analytics Cloud (SAC) story
• SAP Knowledge Base Article 2950499: Drill level lost when adding a filter to a chart in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 2939546: "Sorry, we couldn't access the datasource..." error in Stories using live data models in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 2984649: Chart Legends not displaying items in sorted order in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 2623967: Combination Column & Line Chart does not sync scale for column and line when auto in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 3034866: Charts and tables do not display correctly when viewed at a different zoom percentage in SAP Analytics Cloud (SAC)
• SAP Knowledge Base Article 3044899: Error 'An Error is preventing the chart from being displayed' appears in charts and some Hierarchies are no longer available in Hierarchy Selector after wave update in SAP Analytics Cloud (SAC)
Variances in your charts let you show the difference between versions of a measure or the difference between
time periods.
You can create a variance for any two measures, even if the measures do not appear in your base chart. You
can create variances that don't include the versions for the measures, as well as variances that do include time
periods.
You can also create a dynamic variance, a variance that is based on the measures in the chart context. If you
change the measures in the chart, the variance automatically updates.
Note
To include time in your variance, you must have the Date dimension in your chart or in an Input Control.
For some chart types (Numeric Point, for example), when the variance uses a Time Range filter, the range
must be set to only one interval (year, quarter, or period).
When using models from live data connections, you must include version information with your measures,
and your measures must be in your base chart. Using measures without versions applies to local models
only.
Tip
To create variances based on versions, you first need to make the versions visible. To make them visible,
follow the steps in Mapping Versions for Usage in the Variance Chart [page 657].
The classic variance chart is a separate bar chart within the chart tile.
For example, in a net revenue chart, you would see the following data for successive months:
• If the next month has less revenue than the previous month: a red bar appears above the revenue bar, and
the negative value is displayed above the red bar.
• If the next month has more revenue than the previous month: a green bar overlays the revenue bar, and the
positive value is displayed below the green bar.
• Bar charts can display classic variance charts, integrated charts, or variance values on data labels.
• Line charts can use classic variance charts and data labels.
• Heat maps can use data labels only.
• Pie charts do not support variance.
If you change the chart to a type that doesn’t support the current variance type, your variance will be
changed to a different type.
As you change chart types, your variances may be changed or removed depending on whether the new
chart type supports them.
Related Information
How to Add and Modify Variances in Charts (Optimized Story Experience) [page 1311]
Adding Time-Based Variances to Charts [page 1318]
Adding a Time-Based Variance to a Waterfall Chart [page 1238]
Add a variance to your chart to show the difference between versions of an account or to show the difference
between time periods.
You can create variances that include either versions or time intervals, or both. You can create a variance for
any two accounts, even if they don't appear in your base chart. However, date dimensions must be in your chart
if you want to use them in a variance.
Note
• Bar charts can display classic variance charts, variance values on data labels, integrated charts, or
variance waterfall charts.
• Waterfall charts can display classic variance charts or variance values on data labels.
Horizontal waterfall charts can also display variance waterfall charts.
• Line charts can use classic variance charts and data labels.
• Heat maps can use data labels only.
• Pie charts do not support variance.
If you change the chart to a type that doesn’t support the current variance type, your variance will be
changed to a different type.
Restriction
If your model with measures and accounts or your planning model have multiple hierarchies for an account
dimension, you can compare accounts from the same hierarchy; however, the hierarchy has to already exist
in the chart.
1. Select your chart, open the Right Side Panel, and then select Builder.
2. In the Variance area, select Add Variance.
Tip
If you can't see a variance area, expand the Chart Add-Ons section, and then select Variance.
3. For both sections (COMPARE and TO), choose specific accounts or choose All Accounts in Use (Dynamic).
Note
For Waterfall charts, you must choose the same account for both sections.
Note
The versions (or categories) must be added to the Color feed before you can use them.
1. In the COMPARE section, select Add Version, and then select a version.
2. In the TO section, select a corresponding version.
Tip
You won't be able to select a version in the TO section if you haven't first selected one in the COMPARE
section.
Tip
You won't be able to select a version or time period in the TO section if you haven't first selected one in
the COMPARE section.
Note
To use a time period, the time (date) dimension has to be included in your chart.
You can sort or rank either absolute or percentage variances in your charts.
The variance bars are color-coded based on the account type and whether an increase is desirable or not. For
example, an increase in income would be welcome (green), but an increase in expenses would not (red).
Tip
You can customize the variance colors. For more information, see How to Add a Variance to a Chart [page
1312].
To change the default variance colors for your story, in the toolbar, go to Theme Preferences (Preferences) > Charts > Default Variance Color.
After you've added your variances, you can change how they are formatted and displayed in the chart.
2. In the Variance area, find the variance calculation and select Edit.
3. In the Edit Variance panel, make your changes.
4. When finished, select Done.
Add a variance to your chart to show the difference between versions of a measure or to show the difference
between time periods.
Context
You can create variances that include either versions or time periods, or both. You can create a variance for any
two accounts, even if they don't appear in your base chart. However, date dimensions must be in your chart if
you want to use them in a variance.
Tip
Classic analytic models use the term measure, but planning models and models with measures and
account dimensions use the term account.
Note
• Bar charts can display classic variance (bar) charts, variance values on data labels, or integrated
charts.
• Line charts can use classic variance charts and data labels.
• Heat maps can use data labels only.
• Pie charts do not support variance.
If you change the chart to a type that doesn’t support the current variance type, your variance will be
changed to a different type.
As you change chart types, your variances may be changed or removed depending on whether the new
chart type supports them.
Restriction
If your model with measures and accounts or your planning model have multiple hierarchies for an account
dimension, you can compare accounts from the same hierarchy; however, the hierarchy has to already exist
in the chart.
1. Select your chart, open the Designer, and then select (Builder).
2. In the Variance area, select Add Variance.
Tip
If you cannot see a variance area, find Chart Structure and select (Add Chart Components) > Add Variance.
2. Open the Examine panel, select (more), and then select Variance > Add Variance.
2. For both sections (COMPARE and TO), choose specific measures or choose All Measures in Use (Dynamic).
3. (Optional) Choose the versions or time periods.
a. In the COMPARE section, select Add Version/Time, and then select a version or time period.
b. In the TO section, select a corresponding version or time period.
Tip
You won't be able to select a version or time period in the TO section if you haven't first selected one in
the COMPARE section.
Note
To use a time period, the time (date) dimension has to be included in your chart.
Option Description
Name: Use the default name or provide a different name for the variance.
Invert Colors: Reverse the color scheme from the following default values: a positive variance is shown in green, and a negative variance is shown in red.
Set No Data as Zero: To include a NULL data point in your variance, set it to the value zero.
Scale with Base Chart: The default setting keeps the variance chart scale the same as the chart scale. If you want the variance to have its own scale, clear the checkbox.
Show Difference as: Choose whether you want to display the variance as Numeric, as a Percentage, or both. When you choose Percentage, you can also choose to use an Absolute Base Value. Absolute values are useful if you are comparing negative values that have a positive change. (For example, (-20) - (-40) = +20. The percentage change (+20/-40) would be -50% if you did not use absolute values.)
View Variance as:
• Bar: displays a classic variance chart beside the regular chart.
• Data Label: adds variance information to the data labels. Use data labels if there is no room to add a variance chart, or if you want variance information for charts that cannot use a variance chart.
• Integrated: displays the variance data as an overlay on the bar chart.
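The Absolute Base Value arithmetic from the Show Difference as option can be checked with a short Python sketch (an illustration only, not SAP Analytics Cloud code; the function name is hypothetical):

```python
def variance_pct(compare, to, absolute_base=False):
    """Percentage variance of `compare` against `to`.
    With absolute_base=True the denominator uses abs(to), which keeps
    the sign of the change meaningful when the base value is negative."""
    base = abs(to) if absolute_base else to
    return (compare - to) / base * 100

# (-20) - (-40) = +20: a positive change from a negative base.
print(variance_pct(-20, -40))                      # -50.0 (misleading sign)
print(variance_pct(-20, -40, absolute_base=True))  # 50.0 (reflects the increase)
```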
Results
A variance chart or variance information is added to your visualization. The variance bars are color-coded
based on the account type and whether an increase is desirable or not. For example, an increase in income
would be welcome (green), but an increase in expenses would not (red).
Tip
To hide the variance label, select (More Actions) > (Show / Hide) > Variance Label.
Next Steps
To change your variance calculation, open the Edit Variance dialog in one of the following ways:
1. Open the Examine panel, select (more), and then select Variance > Edit Variance.
Once you have added a variance to your chart, you can trigger Smart Insights to see more information about
the data in the variance. For more information, see Smart Insights [page 2003].
Related Information
Add a time dimension to your SAP Analytics Cloud chart to see some recommendations for comparisons
(variances) that you can also add.
When you create a chart and add a time or date dimension to it, some recommended comparisons are listed
in the Builder panel above the Measures list. Select one or more of the comparisons to add them to your chart.
The first comparison is applied as a data label and the rest of the comparisons are listed as Variances at the top
of the chart.
Only one variance can be added as a data label, but you can select a data point to see all the variances in the tooltip.
You can choose to add a variance or comparison from the chart or from the builder panel.
Tip
The available comparison options depend on your time hierarchy: the more granular the hierarchy, the more options are available, including one or more of the following:
• Previous Period
• Previous Year
• Previous Half Year
• Previous Quarter
• Previous Month
• Previous Week
• From the chart: select (More Actions), then select Compare To and choose a comparison option.
• From the Builder panel: select a chart and then in the Builder panel choose a comparison from the
Recommended Comparisons section.
Note
Recommended Comparisons won't be available if your time dimension uses week-level granularity or
the dimension is a user-managed time dimension.
Related Information
How to Add and Modify Variances in Charts (Optimized Story Experience) [page 1311]
How to Add a Variance to a Chart (Classic Story Experience) [page 1315]
Time-Based Variances (Specifics for SAP BW) [page 1585]
You can create custom calculations for your charts such as aggregations, calculated or restricted measures,
value-based dimensions, and so on.
When you create calculations for your charts you can include them as either measures or dimensions; the
dimension calculations are added as cross calculations. (For information on cross calculations, see Add Cross
Calculations to Charts [page 1349]).
The Calculation Editor is used to create the calculations that can be added to your charts.
Calculation Editor
For each type of calculation, a new calculated or restricted member is created for the dimension that you
used to create it. You can also use dimension attributes as part of a calculation. For more information about
dimension attributes, see Combine Data with Your Acquired Data [page 722].
Note
The number format chosen in Profile Settings > User Preferences influences the expected input format for Story Calculated Measures and Calculated Dimensions.
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
In a chart, calculations based on the Account dimension can be added as new measures.
• Calculated measures: Perform a calculation on one or more members of either the Account dimension or
the Cross Calculations dimension. A new calculated member of the selected dimension is created as a
result. See Create Calculated Measures for Charts [page 1324]
Restriction
If the calculation is based on the Cross Calculations dimension then dimension members or attributes
from the Account dimension cannot be used.
If the calculation is based on the Account dimension then dimension members from the Cross
Calculations dimension cannot be used.
• Restricted measures: Restrict the data from a member of either the Account dimension or the Cross
Calculations dimension so that it excludes certain members of one or more dimensions. For the Date
dimension, you can pick dynamic values such as year-to-date or previous quarter. A new restricted
member of the selected dimension is created as a result. (See Create Restricted Measures for Charts
[page 1333].)
When using data from SAP BW, creating a restricted measure on another restricted measure (referred
to as a key figure in SAP BW) that uses the same dimension results in incorrect data in SAP Analytics
Cloud. A further restriction is not possible because SAP BW treats restrictions as OR operations. In
most cases restricted measures are intended to get the result as an AND operation.
For example, COUNTRY is restricted to Great Britain and Germany. In a sales scenario, the intended result would be to see only products that are sold in both countries (Great Britain AND Germany), but the OR operation instead returns products sold in either country (Great Britain OR Germany).
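The OR-versus-AND behavior described above can be illustrated with Python sets (a conceptual sketch, not how SAP BW evaluates restrictions; the product data is hypothetical):

```python
# Hypothetical sample data: which countries each product is sold in.
sold_in = {
    "Product A": {"Great Britain", "Germany"},
    "Product B": {"Great Britain"},
    "Product C": {"Germany"},
    "Product D": {"France"},
}
countries = {"Great Britain", "Germany"}

# OR semantics (what SAP BW applies): sold in either country.
or_result = {p for p, c in sold_in.items() if c & countries}

# AND semantics (what is usually intended): sold in both countries.
and_result = {p for p, c in sold_in.items() if countries <= c}

print(sorted(or_result))   # ['Product A', 'Product B', 'Product C']
print(sorted(and_result))  # ['Product A']
```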
• Difference From: Find the difference in an account’s value between two dates. A new calculated account
member is created as a result. (See Calculate the Difference for Charts [page 1336])
• Aggregation: Create calculations from aggregations such as sum, count, average, and so on. Choose what
conditions are required for the aggregation to be applied, and when the conditions are required. (See
Create Aggregations for Charts [page 1339])
• Date Difference: Calculate the time interval between two dates. (See Calculate the Time Interval Between
Two Dates [page 1351])
• Perform string and number conversions: convert strings to numbers and numbers to strings, and use the
resulting values in calculated measures or dimensions. See Convert Strings to Numbers and Numbers to
Strings [page 1322].
• Dimension to Measure: The string and number conversion functions can be combined with measure-based dimensions to create dimension to measure conversions. (See Create Calculated Dimensions for Charts [page 1346], How to Create a Measure-Based Dimension [page 1348], and Dimension to Measure [page 1323].)
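The Date Difference calculation listed above computes a plain interval between two dates; conceptually, in Python rather than SAC formula syntax (the function name and units are illustrative assumptions):

```python
from datetime import date

def date_difference(start, end, unit="days"):
    """Sketch of a date-difference calculation: the interval between
    two dates, expressed in days or in calendar months."""
    if unit == "days":
        return (end - start).days
    if unit == "months":  # calendar-month difference, ignoring day-of-month
        return (end.year - start.year) * 12 + (end.month - start.month)
    raise ValueError(unit)

print(date_difference(date(2020, 1, 15), date(2020, 4, 15)))            # 91
print(date_difference(date(2020, 1, 15), date(2020, 4, 15), "months"))  # 3
```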
Calculations can use input controls. Input controls provide variable input for a calculation, allowing viewers to
influence the result of a calculation without modifying the underlying data or formula. For example, viewers
can choose to see the impact of a 1%, 2%, or 3% tax-rate increase. You choose the list of values for an input
control, and specify how the user can select values. Input controls can be formatted after they are added to the
canvas.
Tip
The Calculation Editor dialog resizes to fit the information fields for the calculation type and re-centers itself on the page.
Note
You can add dynamic time calculations directly to a chart. For more information, see Dynamically Add a
Time Calculation [page 1260].
Note
(Optimized Story Experience) When creating a calculation input control, use only letters A–Z, a–z, numerals 0–9, or underscores (_) in its name, because the name is used as the ID in scripting.
There are times when you may want multiple calculations that are almost the same. For example, you may want restricted measures with different time filters. Creating a calculation takes time, and you have to create each calculation separately from start to finish, even if you only need to change one small parameter.
A faster way to create multiple calculations is to duplicate an existing calculation and then edit it.
Note
You can duplicate the following calculation types:
• Calculated measures
• Restricted measures
• Calculated dimensions
• Cross calculations
Edit a Calculation
1. Select a chart.
2. In the Builder, select a calculation.
3. Make changes to the calculation. (You can't change the calculation type.)
4. Select OK.
Format a Calculation
1. Select a chart.
2. In the Builder, select a calculation.
3. Select Format... .
4. Modify the formatting options.
• Use unit of underlying measures - uses the unit (Dollar, Euro, Yen, and so on) of the underlying
measures.
5. Select OK.
Convert a string value to a numeric value or a numeric value to a string value and use the new values in
calculated dimensions or measures.
ToNumber Function
ToNumber converts string values to numbers.
1. In Builder, under Dimensions, select Add Dimension > Calculated Dimensions > Create Calculated Dimension.
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Create your formula.
Example
ToNumber ([d/ACT_EMPLOYEE_NUMERIC:Age_String].[p/ID])
ToText Function
ToText converts numeric values to strings, and it uses an existing calculated dimension in its calculations.
1. In Builder, under Dimensions, select Add Dimension > Calculated Dimensions > Create Calculated Dimension.
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Create your formula.
Example
Dimension to Measure
The string and number conversion functions can be combined with Measure-Based dimensions (in a
calculated dimension) to create dimension to measure conversions. For more information about Measure-
Based dimensions, see How to Create a Measure-Based Dimension [page 1348].
There is also a calculation type that lets you convert a dimension to a measure.
Related Information
In SAP Analytics Cloud, the Calculated Measures calculation creates a new measure (or account) that you can
use in your chart.
Context
Measures (which are referred to as Accounts in a planning model) are numerical values on which you can
use mathematical functions. When setting up your calculation, you’ll apply the typical formula functions,
conditions, and operators to the data contained in your model.
Remember
Depending on your model or data source, you may see either accounts or measures or both.
Calculated measures allow you to perform mathematical and Boolean operations on your data. For example,
you can use a calculated measure to chart the effect a sales increase of 20% would have on profits.
Note
The number format chosen in Profile Settings > User Preferences influences the expected input format for Story Calculated Measures and Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that commas (,) must be
used to separate function parameters (for example, IF(Condition, ValueIfConditionIsTrue,
ValueIfConditionIsFalse)).
• Choosing a number format that uses commas (,) as decimal separators means that semi-
colons (;) must be used to separate function parameters (for example, IF(Condition;
ValueIfConditionIsTrue; ValueIfConditionIsFalse)).
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
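The separator rule in the note above can be sketched in Python (an illustration of the rule, not the actual SAC formula parser; the function name is hypothetical):

```python
def format_if_formula(condition, if_true, if_false, decimal_separator="."):
    """Join IF() parameters with the separator implied by the number format:
    ',' when '.' is the decimal separator, ';' when ',' is."""
    sep = ", " if decimal_separator == "." else "; "
    return "IF(" + sep.join([condition, if_true, if_false]) + ")"

print(format_if_formula("Condition", "ValueIfTrue", "ValueIfFalse"))
# IF(Condition, ValueIfTrue, ValueIfFalse)
print(format_if_formula("Condition", "ValueIfTrue", "ValueIfFalse", ","))
# IF(Condition; ValueIfTrue; ValueIfFalse)
```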
The following functions, conditions, and operators are available for creating calculated measures. For descriptions of each option, see All Formulas and Calculations [page 873].
Functions
FLOAT() ACCOUNTLOOKUP()
Note
The ISNULL function identifies NULL values, but won't replace a NULL value with a value.
For example, your calculated dimension has the following formula: CD1=IF(ISNULL(D1), "No Value",
D1)
The value in cell D1 will not be changed to show the words “No Value”: it will stay as a NULL value.
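A Python sketch of the behavior the note describes, with None standing in for NULL (illustration only; the function names are hypothetical):

```python
def isnull(value):
    """Mimics ISNULL: tests whether a value is NULL (None here)."""
    return value is None

def calculated_member(d1):
    # CD1 = IF(ISNULL(D1), "No Value", D1)
    # As the note states, ISNULL only identifies the NULL value;
    # the NULL is not replaced, so the cell stays NULL.
    if isnull(d1):
        return None  # remains NULL; "No Value" is not substituted
    return d1

print(calculated_member(None))       # None -- still NULL, not "No Value"
print(calculated_member("Germany"))  # Germany -- non-NULL values pass through
```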
Conditions:
AND OR
Operators:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
Tip
In the Calculation Editor, you only see the functions that are valid for your data source.
Restriction
Charts can use only one Account Hierarchy at a time. You can use different hierarchies for different charts,
but if you change an account hierarchy in a chart, your data may no longer be displayed.
Procedure
Note
The option to create a new calculation may not appear if calculations are not possible for the chart type
or model.
Tip
As you are adding formula details, you will see a message appear and disappear: it appears when your
formula is not valid and disappears when the formula is valid.
Existing input controls appear in the Available Objects list, and can be added to a formula.
Note
You can add preset functions, conditions, and operators, by selecting options in the Formula Functions list.
You can use IF conditional functions, and you can display a list of possible formulas for the function by
pressing Ctrl + Space bar.
Note
The Cross Calculations dimension cannot be used in the Account dimension calculations.
• Adding a calculation to the Cross Calculations dimension: shows Cross Calculations dimension
members.
Note
• # – Returns all calculations (that is, measures created using the Calculation Editor).
• @ – Returns input controls. (Only single value numeric input controls are returned.)
Tip
In the list of measures, you see both the measure ID and the measure description. After you select a
measure, the formula editor area shows only the measure ID. To view the measure description, click
outside the formula editor area.
in the input control. You can use (Search) to find specific values. When you expand the list
beside the search icon, you can choose to view the member Description, ID and Description, or
ID.
3. Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy.
4. Select OK.
• Static List
1. Select Click to Add Values and choose either Select by Range or Select by Member.
Note
Choose Select by Range to specify a range for a numeric slider. Choose Select by Member
for an input control based on a defined set of members.
The Select by Range option appears only if dimension values are numerical or date based.
If the dimension is date based, you can also select quarter, month, or year from the slider
that appears. Ranges can be fixed or dynamic; for example, you could choose the fixed
range January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities, these
granularities are also available: current year, current quarter, and current month. For more
information, see Story and Page Filters [page 1518].
2. To create a numeric slider, enter the Min and Max values for your range in the Set Values for
Custom Range dialog. You can optionally set an Increment value for the slider.
3. Select OK.
4. To create a member based input control, add numeric values to the Custom Members area in
the Select Values from Custom LOV dialog, and then select Update Selected Members.
5. Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, or Multiple Selection.
6. Select OK.
d. Select OK.
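Conceptually, a numeric-slider input control defined by the Min, Max, and optional Increment values in the steps above exposes the following set of selectable values (a Python sketch under that assumption, not SAC code):

```python
def slider_values(minimum, maximum, increment=1):
    """Selectable values of a numeric-slider input control (sketch):
    every step from minimum up to and including maximum."""
    values = []
    v = minimum
    while v <= maximum:
        values.append(v)
        v += increment
    return values

print(slider_values(0, 10, 2.5))  # [0, 2.5, 5.0, 7.5, 10.0]
```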
Note
When you create a calculation input control, if you select “All Members”, a dynamic filter is created.
This means that the latest dimension member descriptions are always fetched from the model. But
if you select individual dimension members, a static filter is created. This means that the dimension
member descriptions are remembered from the time when the input control was created. For details,
see Story and Page Filters [page 1518].
Results
A measure is created based on the formula you entered, and it is displayed as a new measure or a new cross
calculation dimension.
On the canvas, input controls are indicated by the (Formula) icon. If you hover over the icon, all calculations
associated with the input control are displayed. By default, the input control is displayed in token mode where
input values can be selected from a drop-down list. The input control can be expanded into widget mode, where
radio buttons appear beside each value.
Related Information
In SAP Analytics Cloud, the Calculated Account calculation creates a new account that you can use in your
chart.
Context
Accounts are numerical values on which you can use mathematical functions. When setting up your
calculation, you’ll apply the typical formula functions, conditions, and operators to the data contained in your
model.
Depending on your model or data source, you may see either accounts or measures or both.
Calculated accounts allow you to perform mathematical and Boolean operations on your data. For example,
you can use a calculated account to chart the effect a sales increase of 20% would have on profits.
Note
The number format chosen in Profile Settings > User Preferences influences the expected input format for Story Calculated Measures, Calculated Accounts, and Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that commas (,) must be
used to separate function parameters (for example, IF(Condition, ValueIfConditionIsTrue,
ValueIfConditionIsFalse)).
• Choosing a number format that uses commas (,) as decimal separators means that semi-
colons (;) must be used to separate function parameters (for example, IF(Condition;
ValueIfConditionIsTrue; ValueIfConditionIsFalse)).
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
The following functions, conditions, and operators are available for creating calculated accounts. For
descriptions of each option, see All Formulas and Calculations [page 873].
Functions
FLOAT() MEASURELOOKUP()
Note
The ISNULL function identifies NULL values, but won't replace a NULL value with a value.
For example, your calculated dimension has the following formula: CD1=IF(ISNULL(D1), "No Value",
D1)
The value in cell D1 will not be changed to show the words “No Value”: it will stay as a NULL value.
Conditions:
AND OR
Operators:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
Tip
In the Calculation Editor, you only see the functions that are valid for your data source.
Restriction
Charts can use only one Account Hierarchy at a time. You can use different hierarchies for different charts,
but if you change an account hierarchy in a chart, your data may no longer be displayed.
Procedure
1. Select a chart.
2. Open the Calculation Editor: in Builder, choose Add Account > Calculations > Create Calculation.
Note
The option to create a new calculation may not appear if calculations are not possible for the chart type
or model.
Tip
As you add formula details, a message appears when your formula is not valid and disappears when the formula becomes valid.
Existing input controls appear in the Available Objects list, and can be added to a formula.
Note
You can add preset functions, conditions, and operators by selecting options in the Formula Functions list.
You can use IF conditional functions, and you can display a list of possible formulas for the function by
pressing Ctrl + Space bar.
Tip
In the list of accounts, you see both the account ID and the account description. After you select an
account, the formula editor area shows only the account ID. To view the account description, click
outside the formula editor area.
in the input control. You can use (Search) to find specific values. When you expand the list
beside the search icon, you can choose to view the member Description, ID and Description, or
ID.
3. Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy.
4. Select OK.
• Static List
1. Select Click to Add Values and choose either Select by Range or Select by Member.
Note
Choose Select by Range to specify a range for a numeric slider. Choose Select by Member
for an input control based on a defined set of members.
The Select by Range option appears only if dimension values are numerical or date based.
If the dimension is date based, you can also select quarter, month, or year from the slider
that appears. Ranges can be fixed or dynamic; for example, you could choose the fixed
range January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities, these
granularities are also available: current year, current quarter, and current month. For more
information, see Story and Page Filters [page 1518].
2. To create a numeric slider, enter the Min and Max values for your range in the Set Values for
Custom Range dialog. You can optionally set an Increment value for the slider.
3. Select OK.
4. To create a member based input control, add numeric values to the Custom Members area in
the Select Values from Custom LOV dialog, and then select Update Selected Members.
5. Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, or Multiple Selection.
Note
When you create a calculation input control, if you select “All Members”, a dynamic filter is created.
This means that the latest dimension member descriptions are always fetched from the model. But
if you select individual dimension members, a static filter is created. This means that the dimension
member descriptions are remembered from the time when the input control was created. For details,
see Story and Page Filters [page 1518].
7. (Optional) If you want to verify that your formula is formatted correctly, select Format: it may reformat your
formula before displaying a valid formula message.
8. Select OK.
Results
An account is created based on the formula you entered, and it is displayed as a new account dimension.
On the canvas, input controls are indicated by the (Formula) icon. If you hover over the icon, all calculations
associated with the input control are displayed. By default, the input control is displayed in token mode where
input values can be selected from a drop-down list. The input control can be expanded into widget mode, where
radio buttons appear beside each value.
Related Information
In your SAP Analytics Cloud chart, you can create a measure (or account) that restricts the data from a
member of the Account dimension or the Cross Calculations dimension by excluding certain members of one
or more dimensions.
Context
Measures (which are referred to as Accounts in a planning model) are numerical values on which you can use
mathematical functions.
Restricted measures can be useful for comparing one value to a set of other values in the same chart. For
example, you can create a measure that contains all expenses for the country of Australia, and compare
expenses from Australia side by side with expenses for all other countries.
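As a rough conceptual sketch of what a restricted measure does (illustrative Python over invented expense records, not an SAC API; SAC itself does this through the Calculation Editor, not code):

```python
# Hypothetical expense records; the column names are invented for illustration.
rows = [
    {"Country": "Australia", "Expenses": 120.0},
    {"Country": "Germany",   "Expenses": 200.0},
    {"Country": "Australia", "Expenses": 80.0},
]

def restricted_sum(rows, dimension, member, measure="Expenses"):
    # Restricting the measure to one member keeps only that member's data;
    # all other members of the dimension are excluded from the new measure.
    return sum(r[measure] for r in rows if r[dimension] == member)

australia = restricted_sum(rows, "Country", "Australia")   # 200.0
others = sum(r["Expenses"] for r in rows) - australia      # 200.0
```

The restricted value (Australia) can then sit side by side in a chart with the unrestricted measure covering all other countries.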
Note
Procedure
(Classic design experience) To open the Calculation Editor, do one of the following:
• In Builder, choose Add Measure Calculations Create Calculation .
• Create a restricted cross calculation.
1. In Builder, beside Chart Structure, select (Add Chart Components), and then select Add Cross
Calculations.
A Cross Calculations section is added with the default dimension Measure Values.
2. Select Select Cross Calculation Create Cross Calculation .
(Optimized design experience) To open the Calculation Editor, do one of the following:
• Create a restricted account.
In Builder, choose Add Account Calculations Add Calculation .
• Create a restricted cross calculation.
1. In Builder, in the Chart Add-Ons section, select Cross Calculations.
A Cross Calculations section is added with the default dimension Measure Values.
2. Select Add Cross Calculation Add Cross Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the chart type
or model.
Tip
Instead of Restricted Measure, you may see Restricted Account or Restricted Cross Calculation. All
three types use the same process to create the restricted value.
When Constant Selection is disabled, the restricted measure value is influenced by chart, page, and story
filters, as well as categorical axis values. This is the default setting.
When Constant Selection is enabled, the restricted measure value is determined by the values you specify
in the Calculation Editor and will remain constant. Enabling constant selection is useful for comparing a
single value with several different values. For example, you could create a restricted measure for sales in
2012, and then compare sales in 2012 with sales for all other years in the same chart.
Note
Remember
When searching for a specific measure in a long list of measures, you must use the measure's
description, not its ID, as your search term.
6. In the Dimensions section, select one or more dimensions along which you want to restrict the measure.
If you want to restrict the measure along more than one dimension, use Add a Dimension.
7. Beside each dimension, under Values or Input Controls, select Click to Select Values, and then choose an
option from the list:
except the ones selected are applied to the restricted measure. You can use (Search) to find
specific values. When you expand the list beside the search icon, you may have the following options:
Show Description and Show Hierarchy.
Show Description lets you choose to view the member Description, ID and Description, or ID. Show
Hierarchy lets you choose a Flat presentation or an available hierarchy.
The members you choose appear in the Selected Members list.
• SELECTION Select by Range:
Enter a start value and end value for the range. Select Add a New Range to add additional ranges to the
restricted measure.
Note
This option appears only if dimension values are numerical or date based. If the dimension is date
based, you can also select week, quarter, month, or year from the slider that appears. Ranges can
be fixed or dynamic. For more information, see Story and Page Filters [page 1518].
For date dimensions, you can quickly specify a dynamic restriction based on the current date using these
options:
Note
• To see restricted measures with dynamic date range selections, you'll need to add the Date
dimension to the chart, or filter it to a single member.
• You can also add dynamic time calculations directly to a measure in a chart. See Dynamically
Add a Time Calculation [page 1260] for details.
• Create a New Calculation Input Control (This option is available when adding a calculation to a chart.):
1. Enter a name for the input control, and then select Click to Add Values.
2. Select values from the list of available members. If you select Exclude selected members, all
members except the ones selected will be included in the input control. You can use (Search)
to find specific values. When you expand the list beside the search icon, you can choose to view the
member Description, ID and Description, or ID.
3. Expand the Settings for Users section, and then choose whether users can do the following in the
input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy. Select OK.
4. Select OK.
8. Select OK.
Results
A measure is created that does not include data for the members that you excluded. Any input controls you
created appear on the canvas and are listed in the Calculation Editor under Input Controls. Input controls can
also be used in calculated measures.
Related Information
In your SAP Analytics Cloud chart, you can determine the difference between measure values from two time
periods.
Prerequisites
A chart must be selected. The data source must include a date dimension. However, the chart only needs the
date dimension when comparing Current Period.
Note
Context
The Difference From aggregation shows the difference in the value of an account between a starting date and
a target date that is calculated from a specified number of time periods before or after the starting date. For
example, you can compare the sales on February 2015, with the sales 6 months previously.
You can also use the current date as a starting date for your comparison.
1. Select a chart.
Note
The option to create a new calculation may not appear if calculations are not possible for the chart type
or model.
• Current Period
• Current Date:
Choose the Granularity level:
• Year
• Half Year
• Quarter
• Month
• Week
• Select by Member:
Select a value from the list of available members. You can use (Search) to find specific values.
When you expand the list beside the search icon, you can choose to view the member Description, ID
and Description, or ID, and specify whether to display quarters and halves in the time hierarchy.
• Create a New Calculation Input Control (This option is available when adding a calculation to a chart.):
1. Enter a name for the input control, and then select Click to Add Values.
2. Choose how to select values for the input control:
• Select by Member:
Select values from the list of available members. If you select Exclude selected members,
all members except the ones selected will be included in the input control. You can use
(Search) to find specific values. When you expand the list beside the search icon, you can
choose to view the member Description, ID and Description, or ID, and specify whether to
display quarters and halves in the time hierarchy.
The members you choose appear in the Selected Members list.
Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy. Select
OK.
• Select by Range:
Choose a start value and end value for the range. Select Add a New Range to add additional
ranges to the input control.
The Select by Range option appears only if dimension values are numerical or date based.
If the dimension is date based, you can also select quarter, month, or year from the slider
that appears. Ranges can be fixed or dynamic; for example, you could choose the fixed
range January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities, these
granularities are also available: current year, current quarter, and current month. For more
information, see Story and Page Filters [page 1518].
Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Single Value Slider. Select OK.
3. Select OK.
This value is used as the starting date to calculate the difference from.
8. In the To (B) section, choose PREVIOUS or NEXT.
• PREVIOUS – calculates the difference between the starting date and a specific previous time period:
• 'N' Periods
• Period
• Year
• Quarter
• Month
• Week
• NEXT 'N' Periods – calculates the difference between the starting date and a future time period.
9. In the Nth Period section, enter the number of time periods over which to calculate the difference.
The time period must be a discrete value, or an input control with a static list of discrete values. This is the
number of time periods before or after the starting date that you want to compare the value to. The period
is the smallest time granularity of the current date, included in your data source. For example, the period
may be a year, a quarter, or a month.
Note
10. Optional:
• Select Set No Data as Zero: To include a NULL data point in your variance, set it to the value zero.
• Select Calculate as Percentage.
Select Absolute Base Value to display the absolute percentage change. Absolute values are useful if
you are comparing negative values, but have a positive change (For example, (-20) - (-40) = +20. The
percentage change (+20/-40) would be -50% if you did not use absolute values.)
1. In the Divide By section, choose Compare (A) or To Value (B).
The Compare (A) is the value of the measure at the starting date. The To Value (B) is the value of the
measure at your target date.
11. Select OK.
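The Absolute Base Value arithmetic from step 10 can be sketched in a few lines of Python (a sketch of the percentage math only, not SAC's implementation):

```python
def difference_pct(compare_a, to_b, absolute_base=False):
    """Percentage change of A relative to B. With absolute_base, the
    divisor is |B|, which keeps the sign of the change meaningful
    when B is negative."""
    base = abs(to_b) if absolute_base else to_b
    return (compare_a - to_b) / base * 100

difference_pct(-20, -40)        # -50.0: a positive change reported as negative
difference_pct(-20, -40, True)  # 50.0: the absolute base preserves the sign
```

This reproduces the example above: the change (-20) - (-40) = +20 is positive, but dividing by the negative base -40 would report it as -50%.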
In your SAP Analytics Cloud chart, you can use the Calculation Editor to create aggregations.
Prerequisites
A chart must be selected. The data source must contain key figures.
Note
Context
Calculations can be created from aggregations such as sum, count, average, and so on. When you create an
aggregation, you can also choose what conditions are required for the aggregation to be applied, and when the
conditions are required. For example, you can create an aggregation to count the number of sales per store,
when the store carries a certain product.
Procedure
Note
The option to create a new calculation may not appear if calculations are not possible for the chart type
or model.
Type Description
SUM The sum of the measure’s values across all leaf members of the selected dimensions.
COUNT The number of leaf members of the selected dimensions with values for a specific measure,
including null values. Empty values for this measure are not counted.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode, for example, or to create a cross-join of the rows and columns axis.
The COUNT function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
COUNT DIMENSIONS The number of leaf members of the selected dimensions that have at least one measure value. This aggregation doesn't count members that have an empty value for all measures. (COUNT DIMENSIONS is not valid for the Cross Calculations dimension.)
COUNT excl 0, NULL The number of values, excluding zero and null values.
AVERAGE The average of the measure’s values across the selected dimensions, including null values.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode, for example, or to create a cross-join of the rows and columns axis.
The AVERAGE function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
AVERAGE excl NULL The average of the measure’s values, excluding null values.
AVERAGE excl 0, NULL The average of the measure’s values, excluding zero and null values.
FIRST Shows the first (oldest) value in the selected time period: for example, show the number of
employees on the first day of a month.
LAST Shows the last (most recent) value in the selected time period: for example, show the number of
employees on the last day of a month.
STANDARD DEVIATION Calculates how much the value varies from the average value in the series.
MEDIAN The median (middle) value (half of the data lies below the median value, and half lies above).
MEDIAN excl. NULL The median (middle) value (half of the data lies below the median value, and half lies above), ignoring null values.
MEDIAN excl. 0, NULL The median (middle) value (half of the data lies below the median value, and half lies above), ignoring null and zero values.
FIRST QUARTILE Calculates the first quartile value (25% of the data is less than this value).
FIRST QUARTILE excl. NULL Calculates the first quartile value (25% of the data is less than this value), ignoring null values.
FIRST QUARTILE excl. 0, NULL Calculates the first quartile value (25% of the data is less than this value), ignoring null and zero values.
THIRD QUARTILE Calculates the third quartile value (75% of the data is less than this value).
THIRD QUARTILE excl. NULL Calculates the third quartile value (75% of the data is less than this value), ignoring null values.
THIRD QUARTILE excl. 0, NULL Calculates the third quartile value (75% of the data is less than this value), ignoring null and zero values.
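A few of the aggregation variants above can be sketched in Python to show how null and zero exclusion changes the result (None stands in for NULL; this is illustrative only, and SAC's exact semantics, including its handling of NO_DATA cells, may differ):

```python
from statistics import mean, median

def aggregate(values, kind):
    # Toy versions of a few of the aggregation types described above.
    if kind == "COUNT":  # NULL values are counted
        return len(values)
    if kind == "COUNT excl 0, NULL":
        return len([v for v in values if v not in (0, None)])
    if kind == "AVERAGE excl NULL":
        return mean(v for v in values if v is not None)
    if kind == "MEDIAN excl. 0, NULL":
        return median(v for v in values if v not in (0, None))
    raise ValueError(kind)

aggregate([4, 0, None, 8], "COUNT")                # 4
aggregate([4, 0, None, 8], "COUNT excl 0, NULL")   # 2
aggregate([4, 0, None, 8], "AVERAGE excl NULL")    # 4
aggregate([4, 0, None, 8], "MEDIAN excl. 0, NULL") # 6.0
```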
Note
6. In the Aggregation Dimensions section, select one or more dimensions to apply the aggregation to. For
example, if you select the Date and Product dimensions, values for each product at the smallest time
granularity in the model will be aggregated.
Conditional aggregation allows you to specify when the aggregation is applied and what conditions are
required for the aggregation to be applied.
Note
a. In the Aggregate when aggregation dimensions section, choose Have Measure values for Conditions, or
Do not have Measure values for Conditions.
b. In the Conditions section, under Dimension, select one or more dimensions to apply conditions to.
Option Description
Select by Member Select values for the condition from the list of available members. If you select
Exclude selected members, all members except the ones selected will be applied to
the aggregation condition. You can use (Search) to find specific values. When
you expand the list beside the search icon, you can choose to view the member
Description, ID and Description, or ID.
Select by Range Enter a start value and end value for the range of values for the condition. Select Add
a New Range to add additional ranges to the aggregation condition.
Note
This option appears only if dimension values are numerical or date based. If the
dimension is date based, you can also select quarter, month, or year from the
slider that appears.
Ranges can be fixed or dynamic; for example, you could choose the fixed range
January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities,
these granularities are also available: current year, current quarter, and current
month. For more information, see Story and Page Filters [page 1518].
Create a New Calculation Input Control This option is available when adding an aggregation to a chart. It allows chart viewers to select their own values for the aggregation condition from a list of values that you specify.
1. Enter a name for the input control, and then select Click to Add Values.
2. Choose how to add values to the input control:
• Select by Member:
Select values from the list of available members. If you select Exclude
selected members, all members except the ones selected will be used in
the condition for the aggregation. You can use (Search) to find specific
values. When you expand the list beside the search icon, you can choose to
view the member Description, ID and Description, or ID.
Expand the Settings for Users section, and then choose whether users can
do the following in the input control: Single Selection, Multiple Selection, or
Multiple Selection Hierarchy. Select OK.
• Select by Range:
Choose a start value and end value for the range. Select Add a New Range
to add additional ranges to the input control.
Expand the Settings for Users section, and then choose whether users can
do the following in the input control: Single Selection, Multiple Selection, or
Single Value Slider. Select OK.
3. Select OK.
8. Select OK.
Related Information
In your SAP Analytics Cloud chart, you can create running total calculations (such as sum or average) using the
Calculation Editor.
This feature lets you create running total calculations that can be used for other tables or charts in your story.
Restriction
Remember
Dimensions that you use in your running calculations become required dimensions: if they aren't already in
your chart, you must add them.
Tip
If you rearrange or add dimensions, you'll change the results of the running calculations.
For example, your running sum shows the values for different sales managers. When you add a location
dimension as the outer dimension, the running sum now restarts calculating when the location changes.
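The restart behavior in the example above can be sketched as follows (illustrative Python, not how SAC computes running totals internally):

```python
from itertools import groupby

def running_sum(rows):
    """rows: (outer_member, value) pairs in display order.
    The running total restarts whenever the outer member changes."""
    result = []
    for _, group in groupby(rows, key=lambda r: r[0]):
        total = 0  # reset at each new outer-dimension member
        for member, value in group:
            total += value
            result.append((member, total))
    return result

running_sum([("East", 10), ("East", 5), ("West", 3), ("West", 4)])
# [("East", 10), ("East", 15), ("West", 3), ("West", 7)]
```

Here the sum accumulates within "East", then starts over at "West", mirroring how adding a location as the outer dimension restarts the running sum per location.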
1. Select a chart.
2. In Builder, choose Add Account Calculations Add Calculation .
3. In the Calculation Editor under Type, select Running Total.
4. Enter a name for the running total.
5. Select a running total operation.
The following running total operations are supported:
Operation Description
SUM The sum of the account's values across all leaf members of the selected dimensions.
COUNT The number of leaf members of the selected dimensions with values for a specific account,
including null values. Empty values for this account are not counted.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode or to create a
cross-join of the rows and columns axis.
The COUNT function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
AVERAGE The average of the account's values across the selected dimensions, including null values.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode or to create a
cross-join of the rows and columns axis.
The AVERAGE function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
Related Information
In your SAP Analytics Cloud chart, create a calculated dimension when you want to concatenate dimensions
together, group dimensions, create a measure-based dimension, and so on.
You can choose to combine existing dimensions to create your own dimensions.
Note
The number format chosen in the user preferences ( Profile Settings User Preferences ) influences the
expected input format for Story Calculated Measures and Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that commas (,) must be
used to separate function parameters (for example, IF(Condition, ValueIfConditionIsTrue,
ValueIfConditionIsFalse)).
• Choosing a number format that uses commas (,) as decimal separators means that semi-
colons (;) must be used to separate function parameters (for example, IF(Condition;
ValueIfConditionIsTrue; ValueIfConditionIsFalse)).
If you type the formula from scratch, auto-completion applies the correct separators based on your user preferences. However, if you copy and paste a full formula string, auto-complete can't adapt if the separators used don't match your user preferences.
• Concatenated dimension
• Grouped dimension
• Measure-based dimension
The following functions, conditions, and operators are available for creating calculated dimensions. For
descriptions of each option, see All Formulas and Calculations [page 873].
Functions:
Note
The ISNULL function identifies NULL values, but won't replace a NULL value with a value.
For example, your calculated dimension has the following formula: CD1=IF(ISNULL(D1), "No Value", D1)
The value in cell D1 will not be changed to show the words “No Value”: it will stay as a NULL value.
Conditions:
AND, OR
Operators:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
1. In Builder, under Dimensions, select Add Dimension Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Provide a name for your calculated dimension.
Remember
You can press Ctrl + Space in the formula area to display a list of suggestions, or type [ for a list of
valid measures and dimensions.
Example
1. In the Edit Formula area, type an IF statement with the specific conditions.
You can create a calculated dimension based on measure or account values or thresholds. For example, you
could create a high/medium/low chart dimension based on your own criteria.
You can also create measure-based or value-based charts that include the following options:
• Date dimension
• Static filter that ignores all other filters
• Dynamic filter or Input Control
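Conceptually, a measure-based dimension is threshold bucketing; here is a minimal Python sketch with invented 1000/500 cut-offs (the thresholds and member names are hypothetical examples, not SAC defaults):

```python
def classify(value, thresholds=((1000, "high"), (500, "medium")), default="low"):
    # Map a measure value to a dimension member by threshold;
    # thresholds are checked from highest to lowest.
    for limit, label in thresholds:
        if value >= limit:
            return label
    return default

[classify(v) for v in (1200, 700, 10)]  # ["high", "medium", "low"]
```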
1. In Builder, under Dimensions, select Add Dimension Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, depending on whether you're working with a classic account model or a new model type, select Measure-Based Dimension or Value-Based Dimension.
3. Provide a Name for the dimension.
4. In the Properties section, select an account, a measure, or both using the dedicated fields:
• If you’re working with a new model type, select the measure and the account using the dedicated field.
• If you're working with a classic account model, select a measure.
5. Select Use measure values as dimension members.
This will replace the member name field. You can set the scale and the number of decimal places for the
measure values that are converted to dimension members.
6. Provide a member name: for example, high, medium, or low.
7. Set the Measure Values.
8. From the Dimension Context, select one or more versions.
Note
For hierarchical dimensions, you also need to specify the level: for example, Date.Year.
Related Information
Note
After you add a cross calculation to a chart, you won't be able to apply thresholds to that chart.
In the cross calculations dimension you can create calculations such as calculated measures, restricted
measures, or currency conversions.
2. Beside Chart Structure, select (Add Chart Components) Add Cross Calculations .
A Cross Calculations section is added with a default dimension.
3. To use more than one cross calculation in a chart, in the Dimensions section select Add Dimension
Cross Calculations .
4. In the Cross Calculations section, select Add Cross Calculation and add a calculation.
Note
Once you add the Cross Calculations section to your chart, you can't remove it. You must have Measure
Values or some other dimension selected.
In the cross calculations dimension you can add existing calculations or create calculations such as calculated
cross calculations, restricted cross calculations, aggregations, and so on.
Tip
To remove the cross calculations section, select Chart Add-Ons Remove Cross Calculations.
3. In the Cross Calculations section, select Add Cross Calculation and add an existing calculation.
Note
To use more than one cross calculation in a chart, in the Dimensions section select Add Dimension
Cross Calculations .
Note
To use more than one cross calculation in a chart, in the Dimensions section select Add Dimension
Cross Calculations .
3. In the Cross Calculations section, select Add Cross Calculation Add Cross Calculation .
4. In the Calculation Editor, create one of the following calculations:
• Calculated Cross Calculation
• Restricted Cross Calculation
• Aggregation
Tip
The cross calculations use the same creation process as the measure calculations.
Related Information
Create a calculation that shows the time interval between two dates.
Prerequisites
Your model must have Date columns or date (time) hierarchies with Day level granularity.
Procedure
The Result Granularity values are determined by the data source: if the data source does not contain
quarters, you won't see them in the list of values.
6. Select OK.
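The calculation can be sketched in Python (month and year counts here are simple calendar differences, which may not match the exact result-granularity rules SAC applies):

```python
from datetime import date

def interval(start, end, granularity="day"):
    # Time interval between two dates at the requested granularity.
    if granularity == "day":
        return (end - start).days
    if granularity == "month":
        return (end.year - start.year) * 12 + (end.month - start.month)
    if granularity == "year":
        return end.year - start.year
    raise ValueError(granularity)

interval(date(2015, 2, 1), date(2015, 8, 1), "month")  # 6
```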
Tables can be used in SAP Analytics Cloud stories to view and analyze data.
For tables that are based on models, the available features and options depend on the model type (for example,
planning, analytic, or other model types). Only the features and options that are supported by the model type
are visible. For example, tables based on planning models allow users to make changes to the model data using
version management, data entry, and allocations.
Some features may not be handled in the same way on all page types.
Note
If you would like to include Grid pages in your story, you will have to use the classic design experience, not
the optimized experience.
Using Models
• Canvas pages: Tables are integrated with an underlying model.
• Responsive pages: Tables are integrated with an underlying model.
• Grid pages: Tables may be related to a model, but it is also possible to create blank tables and type or paste the data in manually.
Positioning
• Canvas pages: Table tiles can be moved around just like other tiles.
• Responsive pages: Table tiles can be moved around just like other tiles.
• Grid pages: You can change the position of the table by adding rows and columns above or to the left of the table.
Creating Input Tasks
• Canvas pages: Select the table and, in the Tools menu, select Create Input Task.
• Responsive pages: Not applicable.
• Grid pages: Not applicable.
Designing Tables
When you add a table to a Story, a data grid is created with the basic dimensions and categories of the model
aligned along the axes of the grid. You can change this basic layout using the Designer tools.
Use the Builder to select the measures and dimensions to include in the rows and columns of your table.
You can add multiple measures and multiple dimensions to your table. When measures or dimensions are part
of a hierarchy or when a dimension has attributes, you can expand them and select their children or expand a
dimension and select its attributes. You can also apply filters to your measures and dimensions. The table is
updated as you make your choices in the Builder.
If a table requests more data than the current drill limits allow, a warning message appears, for example: “784 rows and 73 columns of data are requested, but the drill limit is set to 500 rows and 60 columns.” You can choose to either change the filters to return fewer results or edit the drill limits and retrieve more data.
Remember
Editing the drill limits changes the table results while you are viewing the story, but the changes aren't saved automatically. To see the new table results the next time you open the story, change the drill limits and then save your story.
General Tasks
Related Information
You can add a table to any SAP Analytics Cloud story or analytic applications page.
Models based on remote live data depend on connections that an administrator creates using the Connections
feature.
• Cross-tab: displays the data that is grouped or totaled in two directions. This is the default table.
• Forecast Layout: displays the data divided into several time periods, allowing you to look back or look
ahead.
For more information, see Creating a Forecast or Rolling Forecast Layout [page 1428].
Remember
If your story already has a model or data source on any page, that data source is used automatically for new tables added to Canvas or Responsive pages. If there is more than one data source, the most recently used one is applied to new tables.
Note
If you have a lot of data to retrieve when your table is initially created, it may take some time for the table to
finish loading it all. To help speed up load time, your new table will open using the following criteria:
The default table type is an optimized presentation table. You can change the table type in the Builder panel
by clearing the check box for Optimized Presentation. For more information on the default table, see Optimized
Presentation Table: Features and Restrictions [page 1355].
5. A data source is automatically selected for your new table. To change it, in Builder > Data Source, select (change primary model) and choose a new data source.
If prompted, provide the proper login credentials for your data source connection.
Use the Builder tab to select the dimensions to include in the rows and columns.
Related Information
The optimized presentation table provides pixel-level column or row resizing, smooth scrolling, and other
features.
While the optimized presentation table does have new features, it also has some restrictions. If the restrictions
prevent you from getting the information you need, you can switch to the standard table. (To switch to the
standard table, open the Builder panel and clear the selection for Optimized Presentation.)
Feature Description
Fast-loading in-cell charts: In-cell charts load much more quickly than they do in a standard table.
Pixel-based column/row resizing: When you resize columns or rows in the table, you now see the pixel value, which allows for more precise styling. You can also resize multiple columns or rows at once to a specific size, the same as you would in Microsoft Excel.
Smooth scrolling: Scrolling is now pixel-based instead of column or row based, which makes for smoother scrolling.
View mode scrolling: In View mode, you don't need to click inside the table to scroll: just hover the cursor over the table and then scroll.
Consistent row heights: You no longer have to worry about inconsistent row heights when new data is loaded into the table. You can set the row height in the Styling panel for the whole table, either as a custom height or using the predefined heights (Default, Cozy, Condensed, and so on). The default height accounts for the font size of the text within the cells and adjusts the height to avoid cutting off text.
New responsive logic: Previously, you were required to scroll horizontally even when there should have been enough space for the content. Now the Adaptive Column Width behavior works better when scrolling horizontally: if there is leftover white space, it is distributed evenly.
Table title visibility and height: The table title is always visible now. It is frozen at the top of the page and no longer scrolls with the table content. In addition, the title height is automatically adjusted when the table is resized.
Resizing tables automatically: In Canvas pages, you can now resize a table so that its size is no longer fixed and automatically adjusts vertically to display all rows. The table size changes depending on different events, such as a data refresh or new filters.
Repeat table title and headers: In paginated stories, the title and headers of a table are repeated across multiple pages, in both fixed and dynamic canvases and in both Edit and View modes. The option is enabled by default.
Here are the possibilities when working with a fixed-size table:
• None
• Freeze dimension headers
• Freeze up to row
• Freeze up to column
Here are the possibilities when working with a dynamic-sized table (growing table):
• None
• Repeat dimension headers
• Repeat up to row
• Freeze up to column
If you're working with resizable tables, dimension headers are frozen by default.
While there are useful new features in this table, there are also some restrictions to be aware of. Use the
information in the following table to help you decide which table option to use.
Restriction Description
Performance issues when there are a lot of columns: Tables that have a lot of columns will not refresh as quickly as tables with fewer columns.
The optimized presentation table is not available for all story pages and modes: Use the standard table if you need to use any of the following options:
• Data View Explorer
• Examine Panel
• Grid page
• Predictive Forecast preview
• Input task
• R widget
• Data locks
Keeping member names visible (sticky scrolling) for repeating dimension members: When multiple dimensions are shown for rows and repetitive member names are disabled, the standard table keeps the dimension member names visible when you scroll down the table. This does not work in the optimized presentation table. Workaround: to see dimension member names at all times, show repetitive member names for the table.
You may notice that some things don't work the way you expect them to in this table. Here are answers to some
of the most common questions that come up.
Why am I seeing additional columns or rows after converting the table, even though they only contain empty cells that have styling applied? The Optimized Presentation table shows all existing custom cells, including cells that have only styling but no value. In some cases, this may lead to additional rows or columns being visible after converting the table.
Why is there no resize indicator after I switch to the new table? The new table redefines the “responsive” logic. If the Adaptive Column Width checkbox is checked when you use the new table, resizing of columns is no longer allowed. A column gets a width that fits the longest text of its cells, and all the leftover white space is distributed evenly to all columns.
If you want to resize a column, clear the Adaptive Column Width checkbox. This stops columns from adjusting to the widget size, and you can then define your own column widths.
Why do I always see a scroll bar now? When the content of the table exceeds the widget height or width, a scroll bar is
automatically added. This scroll bar is not shown when all the content fits inside
the widget; however, space is reserved at the side for the vertical scroll bar so that
the scroll bar doesn't overlap content.
In addition, in view mode, you can now scroll without having to select the table
first.
What are the colored boxes that I see when I have the Styling panel open and want to select cells inside the table? The colored boxes (default color: orange) are shown when you hover the cursor over certain areas of the table. They indicate which cells will be selected with the next click. The table has multiple areas (regions) that can be styled separately.
Why do I have to click multiple times to select an individual cell when the Styling panel is open? The table uses region selection to make it easier for you to style a collection of cells at once. For example, if you want all data cells (the numbers) to be red, you don't have to select all the cells manually; you can use the “Data region” selection.
What happens when the checkbox Adaptive Column Width is selected? To make the table contents responsive to outside changes (such as resizing the widget), the table can automatically adjust the column widths. The changes respect the following guidelines:
• Manual resizing of columns is no longer allowed; instead, the width of the columns changes if the width of the widget changes.
• The text is not cut off horizontally.
• For each column, the longest text is used as a base width:
• If the sum of the base widths of all columns is smaller than the widget width, this “empty space” is distributed evenly to the columns.
• If the sum of the base widths of all columns is larger than the widget width, the columns keep their base widths.
Restriction
When a cell in a column has Wrap styling applied, that column will be exempt
from the white space distribution. A set width is applied to the column and
it won't change when extra white space is distributed to the columns, even if
the wrapped text content is narrower than the applied width.
If you want to use the Wrap style, it is recommended that the Adaptive
Column Width option is turned off. This gives you more control to manually
resize the column to the desired width for displaying the wrapped text.
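The width-distribution guidelines and the Wrap exemption described above can be sketched as follows. This is a simplified model to illustrate the behavior, not SAP's actual implementation; the pixel values and column indices are made up:

```python
def adaptive_column_widths(base_widths, widget_width, wrapped=None):
    """Distribute leftover widget width evenly across columns.

    base_widths: width (px) of the longest text in each column.
    wrapped: indices of columns with Wrap styling; these keep their set
             width and are exempt from the white-space distribution.
    """
    wrapped = set(wrapped or [])
    total = sum(base_widths)
    if total >= widget_width:
        # Content is wider than the widget: columns keep their base widths
        # (a scroll bar appears instead).
        return list(base_widths)
    # Distribute the leftover space evenly among non-wrapped columns.
    adjustable = [i for i in range(len(base_widths)) if i not in wrapped]
    extra = (widget_width - total) / len(adjustable) if adjustable else 0
    return [w + extra if i in adjustable else w
            for i, w in enumerate(base_widths)]

# Content wider than the widget: base widths are kept.
print(adaptive_column_widths([100, 150, 50], 250))  # [100, 150, 50]
# 100 px leftover, column 1 wrapped: the other two columns share the space.
print(adaptive_column_widths([100, 150, 50], 400, wrapped=[1]))  # [150.0, 150, 100.0]
```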
How can I set a fixed height for all of the rows in my table? The Styling panel has a Table Properties section that lets you either select a predefined height or set a custom height. The predefined height options are Default, Cozy, and Condensed:
• Default: sets the row height according to the content in the row by calculating the needed height with respect to the font size as well as icons and padding.
• Cozy: results in more padding above and below the text within the cell.
• Condensed: results in less padding above and below the text within the cell.
I have a cell selected, but cannot change the height of the cell when selecting the line at the top of the cell. Resizing is done using the bottom or right side of the cell, and always affects all cells in the same column or row.
• To increase or reduce the height of a row, use the line at the bottom of a cell.
• To increase or reduce the width of a column, use the line on the right side of a cell.
Why do different colors show up when I switch the color palette for in-cell charts?
• The colors in the dropdown for the in-cell charts are only a preview of the palette.
• When switching the order of the colors, you might see completely different colors from the same palette; the additional colors are the ones that were not shown in the preview.
Related Information
In SAP Analytics Cloud, your tables can consume data in a grid layout.
If you use Microsoft Excel in your daily work, you're probably familiar with grid layouts of data. The following sections explain how you can use a grid layout on a canvas or responsive page.
The following table shows the grid layout and column/row headers:
Tip
By using a table with a grid layout on a canvas or responsive page, you can make use of all available widgets
in the story mode. For example, you can add other widgets, such as charts, images, and page filters on the
same story page.
In Classic Story Experience, you can use a grid page, but you can't include other widgets such as charts on
the page.
Tip
If you created your story in Classic Story Experience, check that the Optimized Presentation option is enabled.
4. Open the contextual menu for your table by selecting (More Actions).
5. Select Show/Hide.
6. Select Grid and Column/Row Headers.
The following are some ways you can leverage the grid layout of your table:
• To add or remove columns, select (Add a new column) or (Delete a column) near the right side
of the table widget.
• To add or remove rows, select (Add a new row) or (Delete a row) near the bottom border of the
table widget.
• Automatically resize your columns by selecting the Adaptive Column Width option in the Builder panel
for your table.
• Easily copy and paste values from your table to the grid.
• Use the formula bar to add calculations to the grid.
The following table uses the formula bar to calculate the total quantity sold across all locations in 2016:
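To picture the kind of aggregation the formula bar performs, the sketch below computes a total across locations in plain Python. The location names and quantities are made up for the example; in the story itself you would enter a spreadsheet-style formula in the formula bar instead:

```python
# Hypothetical quantity-sold figures per location for 2016.
quantity_sold_2016 = {
    "Los Angeles": 1200,
    "San Francisco": 950,
    "Seattle": 780,
}

# A formula-bar calculation would aggregate these cells;
# here we do the same sum directly.
total_2016 = sum(quantity_sold_2016.values())
print(total_2016)  # 2930
```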
You can change the default design of a table in the SAP Analytics Cloud Builder panel as well as in the table
itself.
The Builder panel allows you to modify different features related to your table, including changing the data
source, adding data elements to rows and columns, creating filters, and so on.
You can also do some changes in the table itself, such as hiding data or rows and columns.
The following diagram displays the available table Builder panel features.
Hovering over or selecting the data source name will show a tooltip with the data source description and file
path.
You can refresh the table data. From the main menu, choose Data, and then choose (Refresh).
Table Properties
Cross-tab: For your planning model, you have two choices: a regular table (Cross-tab) or a forecast table
(Forecast Layout). For more information, see Creating a Forecast or Rolling Forecast Layout [page 1428].
Adaptive Column Width: Automatically resizes the columns when you resize the table. On a Responsive page,
the columns can be aligned with columns of other tables in the same lane.
Arrange Totals / Parent Nodes Below: Moves totals to the bottom of the table. If you use a hierarchical model,
when this option is selected, all child nodes will appear above the totals.
Forecast: Shows the cut-over date and how your planning model's data is summarized (Cut-over Year, Look ahead, and so on).
Optimized Presentation: Provides fast-loading In-Cell charts, pixel-level resizing of columns or rows, and
smooth scrolling. For more information on Optimized Presentation for tables, see Optimized Presentation
Table: Features and Restrictions [page 1355].
If a dimension has dimension attributes (also referred to as navigational attributes), you can expand the
dimension in the builder panel and select the attributes. They will be added to your table just like other
dimensions.
When you hover over a dimension, you can select (Filter) to add or modify filters.
• Hierarchy: Choose a different hierarchy to display for the dimension. For example, you can choose to
display different periods for the Date dimension. You can also select to flatten the hierarchy of a dimension
(Flat presentation). (This option is not valid for an Account dimension.)
Show only leaves in widget: Use this feature to show only the leaf members of the hierarchy. For example, you can use this option while planning if you want to enter data only at the leaf level of the dimension.
• Display Options: Show the member description, ID, or both the ID and description.
• Properties: Select multiple properties to display in the table, including ID and Description in separate
columns or rows.
• Unbooked Data: Show or hide unbooked data. When unbooked data is hidden, only cells in the grid that
contain data are visible.
Restriction
Account formulas will still appear if the operands of the formula contain data even though the
Unbooked Data option is disabled.
When using SAP BW, unknown read modes received from the server can't be mapped to booked or
unbooked data. To prevent unintended changes, the Unbooked Data setting is disabled in the Builder,
and the read mode is used for the dimension.
When using an SAP BW data source, Unbooked Data is not available for unsupported read modes.
When you show unbooked data, the table is updated to include unbooked data for that dimension and any
inner dimensions. A message appears stating that unbooked data has been switched on for dimensions
below the selected dimension.
When you hide unbooked data, the table is updated to remove unbooked data from that dimension and any
outer dimensions. A message appears stating that unbooked data has been switched off for dimensions
above the selected dimension.
Above and below in the messages refers to how the dimensions are listed in the Builder panel.
• Zero Suppression and Null Suppression: Show or hide zero and null values in a table.
For more details on zero or null suppression, see Hide zero or null values [page 1370].
Note
For hierarchical dimensions, the total is added to the top-level node only. The total does not change
when you drill into the hierarchy.
Create Top N will not change the total either, as it applies to all the data, not just the top values.
You won't be able to use the Show Totals option when you have calculated measures that contain if/then statements. For more information, see KBA 3015223.
• Rename: Provide a customized description for a dimension in your table rather than using the default value.
The name you use for a dimension appears in the story, but does not change the name in the model.
Note
If you use R scripts, they will not be automatically updated with the new name. You will need to
manually change the dimension name in your script.
All filters that have been applied to the table are listed in the Filters section of the tab, and in the table subtitle.
If the underlying model is using categories and periods in the Date dimension, these are visible in the filters list
and cannot be removed. Filters that have been manually applied can be removed by choosing the (Cancel)
icon beside the filter and more filters can be added here by selecting the Add Filters text at the bottom of the
list.
Filters are normally applied as restrictive filters (so that only the selected members are visible) but they can
also be applied exclusively by selecting the Exclude selected members checkbox when you select the members;
in this case, all members are included in the table except the selected items.
You can also specify the visibility of each selected member as well as their child members by selecting (Set
to Invisible) next to the member. In the Selected filters dialog, selected members are shown in two separate
groups: Selected members and Invisible members. Invisible members do not appear in the table, but unlike
members that are filtered out of the table, their values are still aggregated to parent members and can be
affected when a parent member is adjusted.
Note
If you add a dimension that contains a large number of members, a filter to restrict the number of members
added to the table may be automatically applied. You can manually remove an automatically generated
filter using the (Cancel) icon beside the filter. You can also edit the filter and save modifications.
• Filter by member: this option is essentially the same as the standard filter with checkboxes to select or
exclude individual members of the hierarchy.
• Filter by range: using this option, you can define time periods based on years, half-years, quarters, months,
weeks, or days (depending on the time granularity defined in the underlying model) and apply the date
range as a filter, so that only details in the selected time period are visible. It is also possible to define multiple time-range filters and apply them together. You could use this, for example, to compare the first two months of the year over a three-year period by defining three separate ranges for the months Jan–Feb in each of the three years. When these ranges are applied as a single filter, everything except the selected periods is filtered out.
Ranges can be fixed or dynamic; for example, you could choose the fixed range January 2017 to December
2017. If this story is opened in 2018, the story will still show 2017 data. For dynamic date ranges, in addition
There is a Reporting section on your Builder panel if your table is on a canvas page. In this section, there
is an option to Auto-size And Page Table Vertically. If you select this option, your table will expand or shrink
automatically. If necessary, the application will split the table over multiple pages to fit the content.
View Mode
Enable Explorer (Classic Design Experience): Select whether to enable the Explorer to be launched directly
from the tile when in view mode.
Note
Explorer is not available when you use a data source that has multiple hierarchies in an account dimension.
If you want to restrict the number of measures and dimensions that are visible in the Explorer, select
Configure Measures & Dimensions. Note that all measures and dimensions that are currently in the chart are
automatically included and can’t be removed. Also, if you don’t specify any additional dimensions or measures,
then only the ones used in the chart are available in the Explorer.
Enable Data Analyzer (Optimized Design Experience): Select to allow Data Analyzer to be opened from the table's action menu.
Disable Interaction (Optimized Design Experience): Disables any interaction with the widget in view mode, such as filtering and showing tooltips. This option is available when you've enabled advanced mode.
For more information, refer to Use Pause Refresh Options and APIs [page 1742].
Comments
Allow Data Point Comments: select whether to allow data point comments to be added on a table.
Intersecting Calculations Priority
Use one of the following options to decide what happens to the calculations in the intersecting cells:
• Unresolved: By default, no row or column priority is set. However, even when you choose a different priority
you can change back to Unresolved at any time.
• Columns Override Rows: Uses the calculated column value.
• Rows Override Columns: Uses the calculated row value.
Restriction
The Intersecting Calculations Priority feature only affects the custom calculations that you add to the table.
For more information on those calculations, see Create Custom Calculations Within Your Table [page 1469].
Boardroom
Select sorting options for when your story is used in the SAP Digital Boardroom.
You can hide rows or columns of table data without filtering the table. However, hiding too many rows or
columns can affect performance.
Note
An information message appears when you choose to hide another row or column when you already have a
combination of 80 hidden rows or columns.
To hide a row or column, right-click the column or row headers that you want to hide and choose Hide
row/Hide column.
You can select multiple members from within each dimension (for example, 2018 and 2019), and across
different dimensions on the rows or columns (for example, Net Revenue for Laptops and Desktops).
The selected rows or columns are removed from the table view, but the data is not filtered out. The values of
the remaining table cells are not affected, and any visible children of the selected members will remain in the
table.
To restore individual rows or columns, select Hidden and choose next to a member or combination of
members.
You can use the table to show unbooked data and totals (or visible properties), even when you don't have edit
privileges.
In the table, right-click the column or row header, select Show/Hide, and then select one of the following:
• Unbooked
• Totals
• Properties: In the dialog, select the properties to show.
Note
• If you choose to hide (suppress) zero or null values while Unbooked Data is selected, the following
message (or a similar one) is displayed.
“To apply Zero suppression, we're disabling unbooked data.”
• Booked mode is disabled in View mode on the table context menu when you've selected to hide zero or
null values. This applies to analytic models only.
You can choose whether to hide zero or booked null values in a table.
Restriction
Not all models use the same process for hiding zero or null values.
SAP Analytics Cloud acquired or planning models allow you to suppress zero or null values in both view
and edit mode, but models from SAP BW, SAP HANA live, and other data sources may only suppress these
values when using the Builder panel to edit the table's structure.
Acquired Models: Hide Zero or Null Values using the Builder Panel
You can suppress zero or null values for all dimensions and measures in rows or columns.
3. In the Builder panel, for either the Rows or Columns heading, select (More), and then select one of the
following options:
• Zero Suppression
Note
If you choose to hide (suppress) zero or null values while Unbooked Data is selected, the following message
(or a similar one) is displayed.
From your table, select (More) Show/Hide , and then select one or more of the following:
Note
If you select Unbooked Data while zero or null suppression is selected, the following message (or a similar
one) is displayed.
However, there are some differences in behavior depending on whether you're working with SAP BW or SAP HANA.
SAP BW: Hide zero and null values on either rows or columns, or both simultaneously. SAP HANA: Hide zero or null values on all rows and columns.
SAP BW: Hide zero and null values on all cells, or on totals. SAP HANA: Hide zero or null values on cells; rows or columns are removed if all cells along the row or column are zero or null.
3. In the Builder panel, select a dimension, select (More), and then select one of the following options:
• Zero Suppression
• Null Suppression
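The removal behavior described above, where a row disappears only when every cell along it is zero or null, can be sketched as follows. This is a simplified illustration with made-up data, not the product's internal logic:

```python
def suppress_rows(grid, suppress_zero=True, suppress_null=True):
    """Remove rows whose cells are all suppressed values.

    A row survives if at least one of its cells holds data that is
    neither a suppressed zero nor a suppressed null (None).
    """
    def suppressed(value):
        return (suppress_null and value is None) or \
               (suppress_zero and value == 0)
    return [row for row in grid if not all(suppressed(v) for v in row)]

grid = [
    [10, 0, None],       # kept: contains booked data
    [0, 0, 0],           # removed when zeros are suppressed
    [None, None, None],  # removed when nulls are suppressed
]
print(suppress_rows(grid))  # [[10, 0, None]]
```

The same rule applied to the transposed grid gives column suppression.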
SAP BW Models
SAP BW models don't have separate options for zero and null values.
3. In the Builder panel, select a dimension, select (More) Zero Suppression , and then select one of
the following options:
• All Cells
• Totals
Note
Related Information
Your data is usually loaded into a table with the hierarchies collapsed.
Note
There are times when the system is retrieving data and can't determine the actual number of levels in a hierarchy. In those situations, the drill menu shows five levels, even if there are fewer than five levels in your hierarchy.
If you have more than five levels, you can use the drill menu to expand your hierarchy to Level 5 and then
use the expand arrows in the table to expand the remaining levels.
• Expand from a member cell: select the expand arrow to show another level of data.
Tip
The expand/collapse arrow is always left-aligned for columns. This means that if you have very wide
columns with short texts, there may be a large gap between the arrow and the heading.
1. Select Drill .
2. In the dialog that appears, select a Dimension and a Drill Level.
3. Select Set.
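A simple way to picture the drill level is as a cutoff on hierarchy depth: level 0 collapses everything to the root, and level n shows members down to depth n. The sketch below is illustrative only; the member names are made up:

```python
# Each member is (depth, name); depth 0 is the root level.
hierarchy = [
    (0, "World"),
    (1, "Europe"), (2, "Germany"), (2, "France"),
    (1, "Americas"), (2, "USA"),
]

def set_drill_level(members, level):
    """Show only members at or above the given drill level.

    Level 0 collapses all levels of the hierarchy (root only).
    """
    return [name for depth, name in members if depth <= level]

print(set_drill_level(hierarchy, 0))  # ['World']
print(set_drill_level(hierarchy, 1))  # ['World', 'Europe', 'Americas']
```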
After you import data to your story, you might want to run some simulations by temporarily changing a few
values.
You can use a set of basic simulation features in a table based on embedded data. The full set of features is available when working with data from a planning model.
Feature Details
Data entry on booked and unbooked cells: Entering Values in a Table [page 2192]
Copying and cutting cells (records or aggregated values): Copying and Pasting Cell Values [page 2199]
Undoing or redoing changes from the table cell menu, or from the History panel: Table Menu Options on Story Pages [page 1374]; Undo, Redo, and Revert Changes to Versions [page 2179]
You won’t be able to publish these changes to your embedded data. When you’re finished with your simulation,
you can right-click a table cell and select Version Revert to go back to your original data.
Also, if you switch back to the Data view and make changes to your embedded data, you may lose the changes
that you’ve made for a simulation. You’ll get a warning before any data is discarded.
Note
Simulating values isn’t supported if your embedded data creates members with NULL IDs. Change the null
IDs in the Data view before starting a simulation.
Related Information
In SAP Analytics Cloud, there are several options available in table-specific action and context menus.
Note
When your table is in Edit mode, you'll see all available options. When your table is in View mode, some
options will be hidden.
• Table Main Context Menu (optimized story) [page 1374]: use to select Top N items, sort, do mass data
entry, and so on. Appears when the table tile is selected.
• Table Cell (Context) Menus (optimized story) [page 1379]: use from a table cell to add calculations directly
in the chart, to create new members, set thresholds, and so on. Appears when you right-click a table cell.
• Table Body (dimension values) Cell Menu (optimized story) [page 1382]: menu for working with dimension
body cells.
• Table Main Context Menu [page 1385]: use to select Top N items, sort, do mass data entry, and so on.
Appears when the table tile is selected.
• Table Cell (Context) Menus [page 1390]: use from a table cell to add calculations directly in the chart, to
create new members, set thresholds, and so on. Appears when you right-click a table cell.
• Table Body (dimension values) Cell Menu [page 1392]: menu for working with dimension body cells.
The table-specific menu options let you work with the data and change the appearance of the whole table, as
well as letting you work with individual table cells.
The table action menu is dynamic and is only visible when (More Actions) is selected.
Note
Some options are only available on Grid pages, and other options are only available on Canvas or
Responsive pages. The options will be identified in the following tables as (Grid pages only) or (Canvas
or Responsive).
Action Description
Applied to Table (Optimized View Mode): Contains table details such as filters, drill level, and so on.
Restriction
To use the linked analysis functionality to change filter values, you must make the changes within the table body.
Set Layout: Use this option to set up a table that shows the results of an allocation step. It's available when you have access to an allocation process based on the table's data source.
• Rows: Source dimensions of the allocation step, and the account dimension
• Columns: Target dimensions of the allocation step, and the version dimension
Any other dimensions are removed from the table as soon as you select the Allocation Processes option. You won't be able to automatically restore your table's previous setup, although you can change it back by adding and removing dimensions.
To use this option, select it and choose Allocation Processes in the Layout panel. To change the allocation step that it's based on, select the (Select allocation step) icon and choose an allocation process and step.
Drill: (Drill) opens a Set Drillstate dialog that lets you manage the number of levels that are currently visible in a hierarchy. Data is typically organized and displayed in a collapsible hierarchy, and this option lets you avoid drilling through branches of the hierarchy to see the level you prefer.
When the table includes more than one dimension, you can select the dimension you wish to open from the drop-down Dimension list. Under Drill Level, enter the drill level you require. Entering 0 as the drill level value collapses all levels of the hierarchy.
Freeze (Canvas or Responsive): If you select a cell in a table, you can freeze all rows up to the selected row, and all columns up to the selected column.
Ignore Data Locks or Enforce Data Locks: For a planning model with data locking enabled, toggle this setting to switch between ignoring data locks and enforcing them.
When ignoring data locks, you can enter data in locked and restricted cells. However, you won't be able to publish changes to a locked or restricted cell unless you have additional permissions. (For example, you have data locking ownership of a restricted cell that you changed, you're the model creator, or you have the Admin role.)
Resize columns to fit content (Canvas or Responsive): If you select one or more columns in a table, you can automatically resize the columns to fit the content. Alternatively, you can select one or more columns and manually drag them to resize them.
Linked Analysis: Use a linked analysis to drill through hierarchical data or to create filters that simultaneously update multiple charts in your story.
• Hyperlink:
Lets you add a hyperlink to an external URL, page or story: Linking to Another Page, Story, or
External URL (Classic Story Experience) [page 1108].
• Comment:
Canvas or Responsive pages: lets you add a comment to the tile.
Grid pages: lets you add and save comments to a data cell: Adding Comments to a Data Cell [page
1407]
Show/Hide: Lets you show or hide table elements. By default, most elements are shown.
The following elements can be hidden:
• Grid
• Column / Row Headers
• Freeze Lines
Note
When the table cells have text wrapping set, showing or hiding the formulas can affect the
row height and the width of the column and table.
• References
• Data Locks - when data locking is enabled, you can show icons on cells that are locked, cells that
have a mix of locked and unlocked data, and cells that have an unknown lock state.
• Validation Warning - validation warnings help you to quickly identify which cells are invalid for
data entry due to dimension combination rules that have been defined for the underlying planning
model. More information: Check Validation Rule Results and Warnings for Planning [page 2237]
• Dimension Headers
• Member Names: you can apply one of the following. Selecting one option deselects whichever one had been selected.
• Repetitive Member Names - when you've expanded a hierarchy, this repeats the dimension name on every row.
• Keep Member Names Visible - when you've expanded a hierarchy and are scrolling through the data, this displays the member name for outer dimensions.
Note
If you choose to hide (suppress) zero or null values while Unbooked Data is selected, the following
message (or a similar one) is displayed.
Edit Scripts (Advanced Mode): Use the script editor to add scripts to your table widget.
For more information, see Scripting (Optimized Story Experience and Analytics Designer) [page 1652].
Restriction
When you copy or duplicate a table that has custom calculations (row or column calculations), the
new table uses the same calculations as the original table, not copies of those calculations.
If you want to use different custom calculations in your new table, you need to remove the current
calculations from that table and then create new ones.
For more information on custom calculations in tables, see Create Custom Calculations Within
Your Table [page 1469].
Export: Lets you export the table as a CSV or XLSX file, with or without formatting.
More information: Export Table Data as a CSV or XLSX File [page 248]
Open Data Analyzer: Lets you create ad-hoc analyses from the data used in your story.
For more information, see Data Analyzer [page 922].
Fullscreen (Canvas or Responsive): Expands the tile to fill the canvas; also displays the formula bar. (When you exit fullscreen mode, the formula bar is hidden again.)
Disable Mouse Actions / Enable Mouse Actions: When enabled, you can use the mouse to resize or reposition the table widget.
In addition to the actions available for the whole table, you can also modify individual table cells. Right-click a
table cell to bring up a context menu.
Action Description
Add Dynamic Text: Add dynamic text from a variety of elements. (See Add Dynamic Text to a Table Title [page 1400].)
Show/Hide: Lets you show or hide table elements. By default, most elements are shown.
Action Description
Drill Lets you change the level of the hierarchy that is currently visible for each dimension.
More information: How to Show More Hierarchy Levels in Your Table [page 1372]
Select Hierarchy Lets you choose the type of hierarchy that you want to display.
Sort Options Decide whether to sort ascending or descending, or to set a custom order.
Restriction
• The sort option won't be available if you aren't allowed to change the sort order for
a particular dimension.
For example, you aren't allowed to change the order of members in the Version
dimension for a Forecast Layout.
• Sorting on the Version dimension is not currently supported for SAP Integrated
Business Planning.
• The grouping options (Keep Left Columns Grouped, Keep Top Rows Grouped) are
only available for SAP acquired data models and SAP HANA live data connections.
Show / Hide Lets you decide whether to show Unbooked cells or Totals values, and select properties.
More information: Create New Dimension Members Without Leaving Your Table [page 2229]
Action Description
Version management actions Valid for tables created from planning models.
For more version management information, see About the Version Management Panel
[page 2174]
Action Description
Redo Data Entry Redo (reapply) the change that was undone.
More information: Create New Dimension Members Without Leaving Your Table [page 2229]
In-Cell Chart Add a bar chart to a measure. The bar chart can be changed to a variance bar or variance
pin chart.
Restriction
The sort option won't be available if you aren't allowed to change the sort order for a
particular dimension.
For more information about adding calculated rows or columns, see Create Custom Calcula-
tions Within Your Table [page 1469].
Add client calculation Select one or more measures and use them to create calculation rows or columns.
For more information about creating calculations from table rows or columns, see Create
Custom Calculations Within Your Table [page 1469].
Hide row / Hide column Removes the rows or columns, but doesn't filter the data.
The table body (dimension value) cells have context menus for rows and columns.
Action Description
Lock Cell: Lets you prevent individual cells of the table from being updated. This feature is available for users with a planning license.
Select the cells you want to lock and choose the icon on the toolbar. Locked cells
are shaded gray as a visual indicator that they are locked, and the toolbar icon
changes to an open lock. Select the lock icon again to unlock the cell.
The read-only feature on the main toolbar prevents a cell value from being overwritten, but cell locking prevents updates to a cell value that would be caused by
aggregation. You can use this feature to redistribute values among sibling nodes
of a hierarchy.
For example, if the parent node in the hierarchy Employee Expense has three
child nodes: Salaries, Bonus, and Training, and you lock the Employee Expense
node, and then increase the value of the Salaries cell, the values of the Bonus and
Training cells are automatically reduced to balance out the increase in salaries
and maintain the same total Employee Expense value.
The cell locks that you apply are saved with the story. When other users work on
the page, they can remove your cell locks and add their own, if necessary.
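The Employee Expense behavior described above can be sketched as a small calculation. This is an illustrative model only; the proportional spreading of the offset across siblings is an assumption made for the sketch, not SAP Analytics Cloud's actual disaggregation logic.

```python
# Illustrative sketch: when a parent total is locked, an increase in one child
# is offset by reducing the remaining unlocked siblings so that the parent
# total stays constant. (Proportional redistribution is an assumption here.)

def redistribute(children, changed, new_value):
    """Return updated child values, keeping the locked parent total constant.

    children: dict of member name -> value
    changed: the member the user edited
    new_value: its new value
    """
    total = sum(children.values())
    delta = new_value - children[changed]
    others = {k: v for k, v in children.items() if k != changed}
    other_total = sum(others.values())
    updated = {changed: new_value}
    for name, value in others.items():
        # Spread the offset across siblings in proportion to their weight.
        share = value / other_total if other_total else 1 / len(others)
        updated[name] = value - delta * share
    assert abs(sum(updated.values()) - total) < 1e-9
    return updated

# Employee Expense (locked) = Salaries + Bonus + Training
print(redistribute({"Salaries": 600, "Bonus": 200, "Training": 200},
                   "Salaries", 700))
```

Raising Salaries by 100 reduces Bonus and Training by 50 each, so the locked Employee Expense total of 1000 is preserved.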
Manage Data Locks...: For models with data locking enabled, this option opens the Data Locking page so that you can lock, restrict, or unlock dimension members.
Before you open the Data Locking page, adjust your filters to include the relevant
dimension members in the table. You can also select one or more cells before
opening the Data Locking page to filter the data locking grid to those dimension
members.
For more information about data locking, see Configuring Data Locking [page
2525].
Note
If you want to change settings such as driving dimensions or dimension
ownership, open the Data Locking page from the Modeler.
Filter Member, Filter: Create a simple filter (Filter Member) based on a single member, or create a complex filter (Filter) that includes a combination of specific cells.
For more information, see Applying a Compound or a Simple Table Filter [page
1404].
Exclude Member, Exclude: Excludes a single member (Exclude Member) or excludes members (Exclude) that are in a combination of specific cells.
For more information, see Applying a Compound or a Simple Table Filter [page
1404].
Mobile (Responsive pages only) When enabled, shows the table on a mobile device.
Sort Options: You can apply an ascending or descending sort to a table. To apply a sort, select a column or row heading, and then select (Sort Options).
Note
The Sort Options icon is active only when a suitable column or row of data is
selected.
Restriction
The sort option won't be available if you aren't allowed to change the sort
order for a particular dimension.
Choose the direction of the sort. For example, you can choose Ascending or
Descending or A-Z or Z-A. If you want to arrange the members of a dimension
yourself, select Add Custom Order and drag the members into the correct order in
the Edit Member Order panel.
For more information about custom sorting, see Changing Member Order [page
1435].
Value Sorting: For more information about sorting non-string dimension values,
see How to Sort Dimension Values [page 1436].
Create Top N: A Top N filter shows a specified number of the lowest or highest ranked members. Fill in the following details:
• Type
• Direction
• Apply to each dimension – shows the top values for each dimension instead
of for the group of dimensions.
When you have a hierarchical measure in your table, this option is enabled
and cannot be disabled.
• Value – the number of values you want to include in the filter.
• Related Dimensions – if required, select dimension members.
When a Top N filter is applied, text is displayed at the top of the table indicating
which column is filtered.
The visible content of the table depends on the underlying structure of the data.
All rows of data that have not been selected are hidden. If the data in the table is
hierarchical or aggregated, other dependent rows will also be visible.
Note
The Create Top N icon is active only when a suitable column of data is selected.
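The difference between ranking the whole group of dimensions and applying the filter to each dimension can be illustrated with a short sketch. The data and the function below are invented for the example; this is not how SAP Analytics Cloud evaluates the filter internally.

```python
# Conceptual sketch of a Top N filter: keep the N highest-ranked rows,
# either over the whole group or within each outer dimension member
# (the "Apply to each dimension" option). Sample data is hypothetical.

from itertools import groupby
from operator import itemgetter

rows = [
    ("City1", "Apparel", 120), ("City1", "Footwear", 90),
    ("City1", "Gear", 40),     ("City2", "Apparel", 100),
    ("City2", "Footwear", 30), ("City2", "Gear", 80),
]

def top_n(rows, n, per_dimension=False):
    if not per_dimension:
        # Top N over the whole group of rows.
        return sorted(rows, key=itemgetter(2), reverse=True)[:n]
    # "Apply to each dimension": top N within each outer dimension member.
    result = []
    for _, group in groupby(sorted(rows), key=itemgetter(0)):
        result += sorted(group, key=itemgetter(2), reverse=True)[:n]
    return result

print(top_n(rows, 2))                      # two highest values overall
print(top_n(rows, 2, per_dimension=True))  # two highest values per city
```

With n = 2, the group-wide filter keeps two rows in total, while the per-dimension filter keeps two rows for each city.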
The table-specific menu options let you work with the data and change the appearance of the whole table, as
well as letting you work with individual table cells.
The table action menu is dynamic and is only visible when (More Actions) is selected.
Note
Some options are only available on Grid pages, and other options are only available on Canvas or
Responsive pages. The options will be identified in the following tables as (Grid pages only) or (Canvas
or Responsive).
Action Description
Applied to Table (Optimized View Mode): Contains table details such as filters, drill level, and so on.
Restriction
To use the linked analysis functionality to change filter values, you must make the changes within the table body.
Set Layout: Use this option to set up a table that shows the results of an allocation step. It’s available when you have access to an allocation process based on the table’s data source.
• Rows: Source dimensions of the allocation step, and the account dimension
• Columns: Target dimensions of the allocation step, and the version dimension
Any other dimensions are removed from the table as soon as you select the Allocation Processes
option. You won’t be able to automatically restore your table’s previous setup, although you can
change it back by adding and removing dimensions.
To use this option, select it and choose Allocation Processes in the Layout panel. To change the
allocation step that it’s based on, select the (Select allocation step) icon and choose an allocation
process and step.
Drill (Drill) opens a Set Drillstate dialog that lets you manage the number of levels that are currently
visible in a hierarchy. Data is typically organized and displayed in a collapsible hierarchy. This option
lets you avoid drilling through branches of the hierarchy to see the level you prefer.
When the table currently includes more than one dimension, you can select the dimension you wish to
open from the drop-down Dimension list. Under Drill Level, enter the drill level you require. Entering 0 as the drill level value collapses all levels of the hierarchy.
Freeze (Canvas or Responsive): If you select a cell in a table, you can freeze all rows up to the selected row, and all columns up to the selected column.
Ignore Data Locks or Enforce Data Locks: For a planning model with data locking enabled, toggle this setting to switch between ignoring data locks or enforcing them.
When ignoring data locks, you can enter data in locked and restricted cells. However, you won’t be able to publish changes to a locked or restricted cell unless you have additional permissions. (For example, you have data locking ownership of a restricted cell that you changed, you're the model creator, or you have the Admin role.)
Resize columns to fit content (Canvas or Responsive): If you select one or more columns in a table, you can automatically resize the columns to fit the content. Alternatively, you can select one or more columns in a table and then manually drag the columns to resize them.
Linked Analysis: Use a linked analysis to drill through hierarchical data or to create filters that simultaneously update multiple charts in your story.
• Hyperlink:
Lets you add a hyperlink to an external URL, page or story: Linking to Another Page, Story, or
External URL (Classic Story Experience) [page 1108].
• Comment:
Canvas or Responsive pages: lets you add a comment to the tile.
Grid pages: lets you add and save comments to a data cell: Adding Comments to a Data Cell [page
1407]
Show / Hide: Lets you show or hide table elements. By default, most elements are shown.
The following elements can be hidden:
• Grid
• Column / Row Headers
• Freeze Lines
Note
When the table cells have text wrapping set, showing or hiding the formulas can affect the
row height and the width of the column and table.
• References
• Data Locks - when data locking is enabled, you can show icons on cells that are locked, cells that
have a mix of locked and unlocked data, and cells that have an unknown lock state.
• Validation Warning - validation warnings help you to quickly identify which cells are invalid for
data entry due to dimension combination rules that have been defined for the underlying planning
model. More information: Check Validation Rule Results and Warnings for Planning [page 2237]
• Dimension Headers
• Member Names: you can apply one of the following. Selecting the other option deselects whichever one had been selected.
• Repetitive Member Names - when you've expanded a hierarchy, this repeats the dimension name on every row.
• Keep Member Names Visible - when you've expanded a hierarchy and are scrolling through the data, this displays the member name for outer dimensions.
Note
If you choose to hide (suppress) zero or null values while Unbooked Data is selected, the following
message (or a similar one) is displayed.
Restriction
When you copy or duplicate a table that has custom calculations (row or column calculations), the
new table uses the same calculations as the original table, not copies of those calculations.
If you want to use different custom calculations in your new table, you need to remove the current
calculations from that table and then create new ones.
For more information on custom calculations in tables, see Create Custom Calculations Within
Your Table [page 1469].
Export: Lets you export the table as a CSV or XLSX file, with or without formatting.
More information: Export Table Data as a CSV or XLSX File [page 248]
Fullscreen (Canvas or Responsive): Expands the tile to fill the canvas; also displays the formula bar. (When you exit fullscreen mode, the formula bar is hidden again.)
View Controls: Opens the Controls panel. For more information about controls, see Measure-Based Filters [page 1533].
In addition to the actions available for the whole table, you can also modify individual table cells. Right-click a
table cell to bring up a context menu.
Action Description
Add Dynamic Text: Add dynamic text from a variety of elements. (See Add Dynamic Text to a Table Title [page 1400].)
Show/Hide: Lets you show or hide table elements. By default, most elements are shown.
The table row and column headers (Dimension headers) have their own context (right-click) menu.
Action Description
Drill: Lets you change the level of the hierarchy that is currently visible for each dimension.
More information: How to Show More Hierarchy Levels in Your Table [page 1372]
Select Hierarchy Lets you choose the type of hierarchy that you want to display.
Sort Options Decide whether to sort ascending or descending, or to set a custom order.
Restriction
• The sort option won't be available if you aren't allowed to change the
sort order for a particular dimension. For example, you aren't allowed to
change the order of members in the version (or category) dimension.
• The grouping options (Keep Left Columns Grouped, Keep Top Rows
Grouped) are only available for SAP acquired data models and SAP
HANA live data connections.
Show / Hide Lets you decide whether to show Unbooked cells or Totals values, and select
properties.
More information: Create New Dimension Members Without Leaving Your Table
[page 2229]
Action Description
Version management actions Valid for tables created from planning models.
For more version management information, see About the Version Management Panel
[page 2174]
Action Description
Redo Data Entry Redo (reapply) the change that was undone.
More information: Create New Dimension Members Without Leaving Your Table [page 2229]
In-Cell Chart Add a bar chart to a measure. The bar chart can be changed to a variance bar or variance
pin chart.
Restriction
The sort option won't be available if you aren't allowed to change the sort order for a
particular dimension. For example, you aren't allowed to change the order of members
in the version (or category) dimension.
For more information about adding calculated rows or columns, see Create Custom Calcula-
tions Within Your Table [page 1469].
Add client calculation Select one or more measures and use them to create calculation rows or columns.
For more information about creating calculations from table rows or columns, see Create
Custom Calculations Within Your Table [page 1469].
Hide row / Hide column Removes the rows or columns, but doesn't filter the data.
The table body (dimension value) cells have context menus for rows and columns.
Action Description
Lock Cell: Lets you prevent individual cells of the table from being updated. This feature is available for users with a planning license.
Select the cells you want to lock and choose the icon on the toolbar. Locked cells
are shaded gray as a visual indicator that they are locked, and the toolbar icon
changes to an open lock. Select the lock icon again to unlock the cell.
The read-only feature on the main toolbar prevents a cell value from being overwritten, but cell locking prevents updates to a cell value that would be caused by
aggregation. You can use this feature to redistribute values among sibling nodes
of a hierarchy.
For example, if the parent node in the hierarchy Employee Expense has three
child nodes: Salaries, Bonus, and Training, and you lock the Employee Expense
node, and then increase the value of the Salaries cell, the values of the Bonus and
Training cells are automatically reduced to balance out the increase in salaries
and maintain the same total Employee Expense value.
The cell locks that you apply are saved with the story. When other users work on
the page, they can remove your cell locks and add their own, if necessary.
Manage Data Locks...: For models with data locking enabled, this option opens the Data Locking page so that you can lock, restrict, or unlock dimension members.
Before you open the Data Locking page, adjust your filters to include the relevant
dimension members in the table. You can also select one or more cells before
opening the Data Locking page to filter the data locking grid to those dimension
members.
For more information about data locking, see Configuring Data Locking [page
2525].
Note
If you want to change settings such as driving dimensions or dimension
ownership, open the Data Locking page from the Modeler.
Mobile (Responsive pages only) When enabled, shows the table on a mobile device.
Sort Options: You can apply an ascending or descending sort to a table. To apply a sort, select a column or row heading, and then select (Sort Options).
Note
The Sort Options icon is active only when a suitable column or row of data is
selected.
Restriction
The sort option won't be available if you aren't allowed to change the sort
order for a particular dimension. For example, you aren't allowed to change
the order of members in the version (or category) dimension.
Choose the direction of the sort. For example, you can choose Ascending or
Descending or A-Z or Z-A. If you want to arrange the members of a dimension
yourself, select Add Custom Order and drag the members into the correct order in
the Edit Member Order panel.
For more information about custom sorting, see Changing Member Order [page
1435].
Value Sorting: For more information about sorting non-string dimension values,
see How to Sort Dimension Values [page 1436].
Create Top N: A Top N filter shows a specified number of the lowest or highest ranked members. Fill in the following details:
• Type
• Direction
• Apply to each dimension – shows the top values for each dimension instead
of for the group of dimensions.
When you have a hierarchical measure in your table, this option is enabled
and cannot be disabled.
• Value – the number of values you want to include in the filter.
• Related Dimensions – if required, select dimension members.
When a Top N filter is applied, text is displayed at the top of the table indicating
which column is filtered.
The visible content of the table depends on the underlying structure of the data.
All rows of data that have not been selected are hidden. If the data in the table is
hierarchical or aggregated, other dependent rows will also be visible.
Note
The Create Top N icon is active only when a suitable column of data is selected.
Related Information
A list of key combinations that can be used to move around in tables or the planning panel in SAP Analytics
Cloud.
Many key combinations (shortcuts) are similar to the ones used in other programs, but not all of them are.
In addition to the shortcuts, you can also select ranges (using the Shift key) or select specific values (using
the Ctrl or Command key).
On Windows, use the Ctrl key and on Mac use the Command key.
Note
When you are finished working inside a table widget, use the following keyboard actions to leave that
widget.
1. Press F6 .
The focus moves out of the table and away from the story page.
2. To move the focus back to the same story page, press Shift + F6 .
The focus moves to one of the widgets on the story page.
3. Use the arrow keys to move the focus to the next widget that you want to work on.
The following shortcuts can be used to move around inside a table, including moving between cells as well as
expanding and collapsing rows.
Move one cell: up, down, right, and left. Use the arrow keys or the following options.
Command + Home
Move right (page right) one screen at a time. Alt + Page Down
Opt + fn + up arrow
Command + up arrow
Expand a hierarchy. Use one of the following options to expand the hierarchy:
• Plus ( + )
• Alt ( Option ) + down arrow key
• F4
Collapse a hierarchy. Use one of the following options to collapse the hierarchy:
• Minus ( - )
• Alt ( Option ) + up arrow key
• F4
The following shortcuts can be used to change or update data in table cells (copy, paste, underline, and so on).
Select all: Ctrl + A or Command + A
Copy: Ctrl + C or Command + C
Paste: Ctrl + V or Command + V
Cut: Ctrl + X or Command + X
Bold: Ctrl + B or Command + B
Italic: Ctrl + I or Command + I
Underline: Ctrl + U or Command + U
Complete cell entry and move down in the selection (cell below): Enter or Return
Complete cell entry and move up in the selection (cell above): Shift + Enter or Shift + Return
Complete cell entry and move to the right in the selection: Tab
Complete cell entry and move to the left in the selection: Shift + Tab
You can use the Planning Panel keyboard shortcuts along with the table shortcuts to quickly enter and move
values.
Control + Option + D
Control + Return
Related Information
Use the formula bar to calculate values in empty table rows and columns, or cells outside a table.
To show the formula bar, select from the toolbar in the Story page.
Choose a cell that has a formula applied to it, such as the header of a calculated row or column in a table, or a cell in a grid outside a table. The formula is displayed by default; choose the icon in the formula bar to show or hide it.
You can add a formula to an empty cell in a grid, or to an empty row or column in a table. For more information
about adding formulas in table rows and columns, see Create Custom Calculations Within Your Table [page
1469].
To add a formula, select an empty column or row header, or an empty cell outside a table, and start typing the
formula text, beginning with an equals sign (=). (If the formula is a long one, you may prefer to type it in the
formula bar instead.)
You can use references to other cells, including cells that contain model data. To add a reference while typing a
formula, type the coordinate of the cell (for example, E6), or select the cell. You can also click and drag to select
cell ranges, and click an existing reference in the formula and select a new cell to update the reference. Each
cell reference is identified by color in the cell and the formula.
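The idea of combining cell references in a formula can be shown with a rough sketch. The grid values and the simple coordinate substitution below are invented for the example; this is not SAP Analytics Cloud's formula engine.

```python
# Minimal illustration of cell references in a formula: a formula such as
# "=E6+E7" reads the referenced cells and combines their values.
# (Hypothetical grid values; the parsing approach is an assumption.)

import re

grid = {"E6": 250.0, "E7": 150.0}  # assumed sample values

def evaluate(formula, grid):
    assert formula.startswith("=")
    # Replace each cell coordinate (e.g. E6) with its value, then evaluate.
    expr = re.sub(r"[A-Z]+[0-9]+", lambda m: str(grid[m.group(0)]), formula[1:])
    return eval(expr)  # fine for this toy example; not for untrusted input

print(evaluate("=E6+E7", grid))  # 400.0
```

Each coordinate in the formula resolves to the value of the referenced cell, which is why updating a reference to point at a new cell changes the calculated result.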
You can also create calculations in a table or chart using the Calculation Editor. For more information, see
Create Custom Calculations for Your Tables [page 1439] or Create Custom Calculations for Your Charts [page
1319].
After creating SAP Analytics Cloud tables, you can enhance those tables in many ways, including adding
features such as dynamic text, calculated rows, or table filters.
To modify your tables, you use the table builder for some changes and the table menus for other changes. For
information on the table menus, see Table Menu Options on Story Pages [page 1374].
Related Information
In SAP Analytics Cloud, you can add dynamic text to a table title.
Context
You can change the title of a table to include dynamic text from a variety of elements including story filters and
filters that are in use in the table.
Procedure
You can scroll through all the options, or select an option from the left side to display specific objects on
the right.
Option Objects
Dimensions: For dimensions, you can also specify which hierarchy and level to display, and whether to display the ID or description.
Input Controls
Cross Calculation Input Controls (Optimized Story Experience): Cross calculation input controls should be based on classic account models.
Story Filters
Tile Filters & Variables: Tile (or widget) filters and variables include the filters and variables that are used in charts and tables, and (Optimized Story Experience) geo map filters.
Note
The tile filters and variables will not appear on mobile
devices.
Model Variables
Model Properties (Optimized Story Experience): Last Refreshed Date and Time: You can add the last refreshed date and time of any SAP BW live data model used in your story.
Note
When the story is modified by someone who isn't a member of the team, the dynamic text displays a warning message instead of the team name, marking the story as a trust indicator.
Note
At view time, the global variables change according to the customized scripts, and their values are automatically updated in the text widget.
4. Select Create.
Results
You can filter cells to focus on a specific set of data in a table or you can exclude non-relevant cells.
Context
You can filter a table by selecting cells in the table, by choosing members from a list, or for certain types of
dimensions (for example, date dimensions), by defining a range. Table filters apply only to the data in the table.
Restriction
When applying a filter to a custom property of the account dimension, the filter does not work as expected.
For example, calculated accounts and any accounts with child accounts that match the filter would be
included. See SAP Note 2931452 for information about the usage restriction.
Procedure
Some types of dimensions, for example date dimensions, can be filtered by choosing members or by
defining a range. Those dimensions appear twice in the list, with (Member) and (Range) suffixes.
Tip
Viewers can reset any changes that they made to filters and input controls to get the original view of the story.
Choosing members: select members from the Available Members list. The members you choose appear in the Selected Members list on the right. You can use the Search function to find the members you want.
Restriction
All Members creates a static list of the current dimension members. If
you add new members to the dimension, they won't be added to the
table filter. Also, for hierarchical dimensions, All Members lists only one
level of hierarchy members in the table filter.
Defining a range: select a Dynamic or Fixed range type. Date ranges can be fixed or dynamic; for example, you could choose the fixed range January 2019 to December 2019. If this story is opened in 2020, the story will still show 2019 data. Dynamic date ranges shift based on the current date. They also offer a few more granularities, such as current year, current quarter, and current month, as well as ranges that are offset from the current date.
If the time dimension is added to the table, you can also select the
(Filter) icon next to it in the Builder panel to choose from preset dynamic
filters such as Current Month or Current & Next Quarter To Date.
For more information, see Story and Page Filters [page 1518].
Allowing modifications: select Allow viewers to modify selections. If you allow viewers to modify filter selections, they can either toggle on and off each filter value (with the Multiple Selection option), or select a single filter value (with the Single Selection option).
Changing drill levels for date range filters: select Unrestricted Drilling. Unrestricted drilling lets you drill to any level in the hierarchy, no matter what the filter or date granularity is set to.
Note
The Unrestricted Drilling option is only available for date range filters in
charts and tables.
The filter appears at the top of the table, and in the Filters area in the Builder tab.
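The fixed-versus-dynamic range behavior described above can be sketched as a small resolver. The range kinds and function below are invented for illustration; they are not SAP Analytics Cloud's date filter implementation.

```python
# Illustrative sketch: a fixed date range always resolves to the same months,
# while a dynamic range shifts relative to the current date.
# (Range names and granularities here are assumptions for the example.)

from datetime import date

def resolve_range(kind, today=None):
    today = today or date.today()
    if kind == "fixed-2019":
        # Fixed: January 2019 to December 2019, regardless of when opened.
        return (date(2019, 1, 1), date(2019, 12, 31))
    if kind == "current-year":
        # Dynamic: shifts with the current date.
        return (date(today.year, 1, 1), date(today.year, 12, 31))
    if kind == "current-month":
        return (date(today.year, today.month, 1),
                date(today.year, today.month, 1))  # month granularity
    raise ValueError(kind)

# Opened in 2020, the fixed range still shows 2019; the dynamic one shifts.
print(resolve_range("fixed-2019", date(2020, 6, 1)))
print(resolve_range("current-year", date(2020, 6, 1)))
```

This is why a story saved with a fixed range keeps showing 2019 data in 2020, while a dynamic range follows the viewer's current date.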
Related Information
You can use different types of filters to filter or exclude a specific set of data in a table or a subset of the data.
You can filter a table by selecting cells in the table, by choosing members from a list, or for certain types of
dimensions (for example, date dimensions), by defining a range. Table filters apply only to the data in the table.
Restriction
When applying a filter to a custom property of the account dimension, the filter does not work as expected.
For example, calculated accounts and any accounts with child accounts that match the filter would be
included. See SAP Note 2931452 for information about the usage restriction.
Use the following procedure to filter by selecting cells in the table. You can quickly create compound or simple
filters or exclusions.
• Compound filters (Filter, Exclude): a compound filter includes or excludes the member from all dimensions
on the table row or column.
For example, when you have three dimensions and create a compound filter on the innermost dimension,
the filter is applied to that combination of filters: “A + B + C (3)”.
• Simple filters (Filter Member, Exclude Member): a simple filter includes or excludes specific members.
For example, when you have three dimensions and create a simple filter on the innermost dimension, the
filter is applied to that member for all the dimensions.
Example
There is a table that shows three products and two cities. It also shows the date as either a quarter or periods within the expanded quarter. A filter is created on the following cell combination:
• Product = Apparel
• Region = City2
• Date = P05 (2014)
(Other values in the sample table include P04 (2014) = 10, P06 (2014) = 10, and City2 Q2 (2014) = 10 and 80.)
• The compound filter (Filter) is filtered on the combination of “Product + Region + Date”, leaving only the row Apparel, City2, P05 (2014) = 100.
• The simple filter (Filter Member) is filtered on the specific date member, keeping P05 (2014) for all cities: Apparel, City2, P05 (2014) = 100 and Apparel, City1, P05 (2014) = 100.
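The compound-versus-simple distinction in the example can be expressed as a short sketch. The sample rows and both functions are invented for illustration; this is not how SAP Analytics Cloud implements table filters.

```python
# Conceptual sketch: a compound filter matches the full combination of
# members on a row, while a simple filter matches a single member of one
# dimension for all other dimensions. (Hypothetical sample data.)

rows = [
    {"Product": "Apparel", "Region": "City1", "Date": "P05 (2014)", "Value": 100},
    {"Product": "Apparel", "Region": "City2", "Date": "P05 (2014)", "Value": 100},
    {"Product": "Apparel", "Region": "City2", "Date": "P04 (2014)", "Value": 10},
]

def compound_filter(rows, **members):
    # Keep rows matching every selected dimension member ("A + B + C").
    return [r for r in rows if all(r[d] == m for d, m in members.items())]

def simple_filter(rows, dimension, member):
    # Keep rows matching the single member, for all other dimensions.
    return [r for r in rows if r[dimension] == member]

print(compound_filter(rows, Product="Apparel", Region="City2",
                      Date="P05 (2014)"))   # one row: City2 only
print(simple_filter(rows, "Date", "P05 (2014)"))  # City1 and City2 rows
```

The compound filter keeps only the exact City2 combination, while the simple filter keeps P05 (2014) for both cities, matching the example above.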
As a story designer, you can use the following procedure (from the Builder panel) to filter by choosing members
from a list, or by defining a range.
Viewers can reset any changes that they made to filters and input controls to get the original view of the story.
Choosing members: select members from the Available Members list. The members you choose appear in the Selected Members list on the right. You can use the Search function to find the members you want.
Restriction
All Members creates a static list of the current dimension members. If you
add new members to the dimension, they won't be added to the table
filter. Also, for hierarchical dimensions, All Members lists only one level of
hierarchy members in the table filter.
Defining a range: select a Dynamic or Fixed range type. Date ranges can be fixed or dynamic; for example, you could choose the fixed range January 2019 to December 2019. If this story is opened in 2020, the story will still show 2019 data. Dynamic date ranges shift based on the current date. They also offer a few more granularities, such as current year, current quarter, and current month, as well as ranges that are offset from the current date.
If the time dimension is added to the table, you can also select the (Filter)
icon next to it in the Builder panel to choose from preset dynamic filters such
as Current Month or Current & Next Quarter To Date.
For more information, see Story and Page Filters [page 1518].
Allowing modifications: select Allow viewers to modify selections. If you allow viewers to modify filter selections, they can either toggle on and off each filter value (with the Multiple Selection option), or select a single filter value (with the Single Selection option).
Changing drill levels for date range filters: select Unrestricted Drilling. Unrestricted drilling lets you drill to
any level in the hierarchy, no matter what the filter or date granularity is set to.
Note
The Unrestricted Drilling option is only available for date range filters in
charts and tables.
Prerequisites
To set or change the comment limit for your model, go to System Administration System
Configuration , and update the value for Limit of comment threads per Model.
To add comments to data cells, you need to enable the Allow Data Point Comments option in the Properties
section of the builder panel.
Note
The Allow Data Point Comments option is kept on for existing tables but is disabled by default for newly
created ones.
Context
You can leave comments on cells from any type of acquired model. When a story is opened, comment mode is
on by default.
Note
You can't add comments to table cells from blended models or live data models (except SAP BW and SAP
BPC live data models).
To view existing comments, add new comments, or delete comments, you need the following permissions:
• You should have Read, Create, and Delete permissions for the object-type Comment in the tenant.
• You should have Add Comment, View Comment, and Delete Comment permissions on the model to
comment on data points.
• You should have Add Comment, View Comment, and Delete Comment permissions on the story.
Note
If any one permission is missing in the combination, then you won't be able to perform the relevant action.
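The all-or-nothing rule above can be sketched as a small check (illustrative only; the permission names mirror the list above, not an actual SAP Analytics Cloud API):

```python
# Each action needs its full combination of tenant, model, and story permissions.
REQUIRED = {
    "view":   {("tenant", "Read"),   ("model", "View Comment"),   ("story", "View Comment")},
    "add":    {("tenant", "Create"), ("model", "Add Comment"),    ("story", "Add Comment")},
    "delete": {("tenant", "Delete"), ("model", "Delete Comment"), ("story", "Delete Comment")},
}

def allowed(action, granted):
    # If any one permission in the combination is missing, the action is blocked.
    return REQUIRED[action] <= granted

granted = {("tenant", "Read"), ("model", "View Comment"), ("story", "View Comment")}
allowed("view", granted)    # True
allowed("delete", granted)  # False
```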
When commenting on a data point, the data context is taken into account. Only users who share the data
context can view the comment.
When commenting on data points in models that have role-based data security applied through model privacy
and multiple read and write permissions on more than one dimension, consider the following:
• When the Model Data Privacy is enabled, only the owner of the model and user roles that have specifically
been granted access can see the data. For more information, see Learn About Data Security in Your Model.
For information on user roles, see Standard Application Roles.
Restriction
If the model calculation includes the operator "IS CURRENT USER", commenting is not supported on
such data points.
• For a role with multiple read and write permissions, the data aggregation respects the operators used in
the model calculation. If AND is selected, the conditions for all attributes must be met for the mapping to
be applied. If OR is selected, the conditions for only one of the attributes must be met for the mapping to
be applied. Based on the permissions assigned and the resulting data aggregation, the user can comment
on the respective data points. Similarly, based on the read permissions and the combined data aggregation
assigned to the user's role, a user can read the comments on a certain data point if the data aggregation
conditions are met.
If a dimension property has been added to the table axis or used in a filter, which can be a table, page, or story
filter, the data context will take into account the dimension property (and its value) when you add a comment.
Example
If you add a comment to a data cell that aggregates the data of multiple leaf members (which share the
same property value), the same comment will be shown for any data cell that aggregates the data of the
same set of leaf members.
If you then proceed to update the values of the property for dimension members, the set of leaf members
that share the property value may be changed accordingly. The same comment will be shown for any data
cell that aggregates the data of the changed set of leaf members. Note that if a table has a filter based on
the dimension property in question, the same property value has to be selected in the filter to allow the
same comment to be shown.
To sum up, the dimension property value being used when you add a comment will serve as the basis to
determine the set of leaf members associated with a specific comment.
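The rule summed up above can be sketched as a simplified model, reusing the fruit example from this section (illustrative only, not product code):

```python
# Dimension members and their "Quality" property value.
members = {"Apple": "High", "Peach": "High", "Orange": "High", "Pear": "Medium"}

def leaf_set(quality):
    # The property value used at comment time determines the set of leaf members.
    return {m for m, q in members.items() if q == quality}

def shows_comment(cell_leaves, comment_quality):
    # The comment appears on any cell that aggregates exactly this set.
    return cell_leaves == leaf_set(comment_quality)

shows_comment({"Apple", "Peach", "Orange"}, "High")  # True

members["Apple"] = "Low"  # a member's property value is updated later
shows_comment({"Peach", "Orange"}, "High")           # True: the set has changed
shows_comment({"Apple", "Peach", "Orange"}, "High")  # False
```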
In the second table, the dimension property "Quality" has been added like a dimension to the table. The
same comment "High Fruit" is also shown because the data cell for "High" actually aggregates the data of
"Apple", "Peach" and "Orange".
In the third table, a dimension filter has been added, and "Apple", "Peach" and "Orange" have been
selected. The same comment "High Fruit" is also shown because the data cell again aggregates the data of
"Apple", "Peach" and "Orange".
Then, the value of dimension property "Quality" for "Apple" is changed to "Low". It means the dimension
property value of "High", which is used when the comment was added before, now corresponds to only two
leaf members, namely "Peach" and "Orange".
In the first table, the comment of "High Fruit" is still shown, because the data cell now aggregates the data
of the changed set of leaf members ("Peach" and "Orange") and the same property value of "High" is used
in the filter.
In the second table, the same comment of "High Fruit" remains because the data cell for "High" now
aggregates the data of the changed set of leaf members ("Peach" and "Orange").
In the third table, the same comment of "High Fruit" is not shown because the data cell still aggregates
the data of "Apple", "Peach" and "Orange", which are not consistent with the changed set of leaf members
("Peach" and "Orange"). The comment of "High Fruit" will be shown if you remove "Apple" from the filter
selection.
If you add a comment to a data cell for a leaf member, and then proceed to update the property value of
this member, the same comment will remain. In this case, the comment is bound with the leaf member
once it's added.
In the second table, the same comment "Pear" is also shown because the member with the value of
"Quality" being "Medium" is "Pear". And the comment is bound with this leaf member.
In the third table, the same comment "Pear" is also shown for the leaf member "Pear".
Then, the value of the dimension property "Quality" for "Pear" is changed to "Medium_P", and "Medium_P"
is added as a selection for property filter for the first and second table.
In the first table, a comment of "Pear" is still shown for "Pear", which are bound. Changing the property
value of this leaf member won't affect it.
In the second table, the same comment "Pear" is also shown because the member with the value of
"Quality" being "Medium_P" is "Pear". And the comment is bound with this leaf member.
In the third table, the same comment "Pear" is also shown for the leaf member "Pear".
Note
You may have existing comments that were added when the dimension property was by design not taken
into account when it comes to the data context, which was the case before SAP Analytics Cloud 2023.25.
Note
The data context takes into account only the dimension property in use when you add a comment.
For example, the dimension property "Quality" is added to the row of Table 1 while the dimension property
"Color" is added to the row of Table 2. The dimension property values of "High" and "Red" both correspond
to members "Orange" and "Peach". A comment is then added to the data cell for "High" in Table 1, and the
same comment will also be shown in Table 2 for "Red". If the value of dimension property "Quality" for
"Apple" is then changed from "Low" to "High", the same comment will still be shown in Table 1 where the
dimension property in use is "Quality" but it won't be shown in Table 2 where the dimension property in use
is "Color".
Note
If the text length of a dimension property's value exceeds 4996 characters, comments are not supported
on members with the dimension property set to this value.
Note
If you try to add comments on a dimension property of type Decimal and the property value has over 17
total digits, comments will fail to be saved.
Note
When trying to add comments, you might see the following message:
Comments were not saved due to data access control or role-level security,
conflicting dimension members, invalid combination of dimension members and
attribute values, or dimension members missing from the model.
This message can appear in cases such as the following:
• You try to add a comment to an unbooked cell, but you don't have read access to one or more of the
dimension members that make up its data context.
• You try to add a comment to an unbooked cell, but there are conflicting dimension members.
For example:
RMStoryTennisShoes is a restricted measure defined in the story that restricts the Product dimension to
Tennis Shoes. One comment is successfully added for Tennis Shoes but a comment can't be added to the
unbooked cell in the row below, as there is a conflict of dimension members.
Note
You can’t add comments to data cells sourced from calculations defined in the table, except restricted
measures (or accounts). However, these measures (or accounts) cannot be based on any underlying
restricted or calculated measures (or accounts) that are created in the table, but they can be based
on underlying calculated measures (or calculated accounts, i.e., accounts with formulas) defined in the
Modeler.
Note
When commenting on a data cell sourced from a restricted measure (or account) defined in the table, the
context includes the specific filters on dimensions used to create the restricted measure (or account). The
comment will display in other tables and stories that apply the same filter combinations to the dimension.
For example:
• You put a comment on a restricted measure based on account values that apply a filter on the region
dimension to limit the member to 'North America'.
• The comment will display in other tables or stories that apply the same filters to limit the region
dimension to 'North America'.
Note
You can add comments to data cells sourced from calculations defined in the Modeler, which include
calculated accounts (accounts with formulas), calculated measures, and conversion measures. The
context of such comments does not include the specific filters on dimensions or variables used to create
such a calculation, but it does include its name (or ID). Even with other tables and stories that apply the
same filter combinations to these dimensions, as long as the calculation with the same name (or ID) is not
available, the same comment won’t display.
The calculated measure in the column of this table is calculated by the following formula:
A data point comment of “Comment1” is added to the data cell (but managed and shown in the comment
column) sourced from this calculated measure, with the version dimension filtered to Actual via the
formula.
The table above is built on the same model and has a filter that restricts measure to Amount, which
is the base measure of the calculated measure mentioned above. However, even though the data cell
also has its version dimension filtered to Actual, the comment of the first table is not shown in this one,
as the calculation with the same name is not available. Rather, you can add a different comment, like
“Comment2”.
You can also add comments to data cells sourced from calculations mentioned above that include
variables, which can be of number, dimension, and currency types. The same rule regarding the data
context also applies: the name (or ID) of the calculation is included in the data context. For example:
The conversion measure of “CNY_Currency” has its target currency fixed to CNY while the conversion
measure of “dynamic_Currency” has its target currency taken from a currency variable. Even when
the currency variable is set to CNY, the same comment for “CNY_Currency” won’t appear for
“dynamic_Currency”, as they’re two different conversion measures (with different names). And if the
currency variable is changed to another currency, like EUR, the same comment of “comment for
dynamic_Currency when the variable = CNY” will remain there, because the name of the conversion
measure hasn’t changed.
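This naming rule can be sketched as a comparison of data contexts (illustrative only; the measure names come from the example above, and the context shape is a simplification):

```python
# The context of a comment on a modeler-defined calculation includes the
# calculation's name (or ID), not the filters or variables behind it.
comment_ctx = {"measure": "CNY_Currency", "version": "Actual"}

def comment_visible(cell_ctx):
    # Identical filters on a differently named calculation do not match.
    return cell_ctx == comment_ctx

comment_visible({"measure": "CNY_Currency", "version": "Actual"})      # True
comment_visible({"measure": "dynamic_Currency", "version": "Actual"})  # False
```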
Comments are not supported when you set a filter using member navigation. For example, in an optimized
design experience story or in view mode of an optimized view mode story, a dynamic range filter is set on
a user-managed time dimension. If a story in the non-optimized (classic) experience that includes such a
filter is converted to one in the optimized story experience, comments will also become unavailable.
Note
When flat presentation and one hierarchy are used in two or more filters of a dimension of parent-child
hierarchy, comments are not supported on members in flat presentation of this dimension if these
members are parent nodes in the other hierarchy.
When you add or remove dimensions, the existing comments will still be visible if the data context of the cell
has not changed. In the example below, the comment on the left was created with a Region filter on Canada. On
the right, Region has been added to the visible dimension without a filter. The context of the cell remains the
same for both scenarios, therefore the same comment is visible.
Note
When your table has multiple hierarchical dimensions and the comment is added to a lower level member
of an inner dimension, you won't see the comment wedge icon when you collapse the outer dimension.
Example:
• You have two dimensions, Account and Region. In Account you expand Income Statement, and in
Region you expand All Companies.
• You add a comment to North America.
A colored wedge is displayed to indicate that there is a comment.
Procedure
b. Select the displayed comment icon to display the thread to which you want to reply.
c. Enter your text in the reply box and then select Add Comment.
Note
A data point comment also displays the original value of the cell if the data value has changed.
Note
When you copy and paste formatted text to your comments, the source format may be lost.
Note
Comments are also supported on tables built on BPC or BW live data models. Make sure to enable the
Allow Data Point Comments option, and do the following to add and interact with comments on such
tables.
• To add a new comment, right-click the data cell to which you want to add a comment, select Add
Data Point Comment, enter your comment in the comment editor, and select Add Comment to
submit the comment.
• To show the history of a comment, choose the wedge in the data cell and select (Show/Hide
History).
The comment history will be shown below as a list of previous comments in chronological order,
with the most recent comment placed at the top. To hide the history, select (Show/Hide
History) again.
• To delete a comment, choose the wedge in the data cell and select (Delete Comment).
Here are the supported SAP BW versions for adding comments on such tables:
• SAP BW/4HANA 2.0 SP13 or higher. From SP07 to SP12, SAP Note 3244228 is required.
• SAP BW/4HANA 2021 SP04 or higher. From SP00 to SP03, SAP Note 3244228 is required.
• SAP BW 7.50 SP23 or higher, with SAP Notes 3242613, 3133846, and 3169011 required.
Note
For SAP Analytics Cloud version 2023.15 or later, which supports rich text formatting, SAP Note
3348600 is required.
Note
To add comments on tables built on BPC or BW live data models, make sure that your data source
has correct CORS settings that cover the Service Path of /sap/bw/ina/Documents. For more
information, see Live Data Connections to SAP BW and SAP BW/4HANA [page 376] or Live Data
Connections to SAP BPC Embedded [page 415]. You can also check the blog post Comments on
BW/BPC live Connections in SAP Analytics Cloud.
Note
The icon for editing or deleting comments won’t be available if you don’t have the corresponding SAP
BW authorization.
If you share a story that contains such a table and set a comment-related access, it won't be applied to
comments on such a table as they are controlled by SAP BW authorizations. Such an access set when
you share an SAP BPC or BW live data model will also not be applied.
Note
What you need to know about comments on tables built on BW or BPC live data models:
• They are not collaborative. For a data cell, only one comment is maintained, with its history displayed if
desired.
• The transparent comment wedge icon indicating there is a comment on a child member is not
available.
• You can’t add comments to data cells sourced from calculations defined in the table, including
restricted measures.
• Not all comments on duplicate nodes or link nodes and their original nodes in a hierarchy defined in
SAP BW are shown if they are managed in the commenting column, but they are shown normally when
you remove the column and click their corresponding wedges in data cells.
Results
Another way to manage data point comments is by using a dedicated commenting column in your table. This
column can be used to view and add comments. For more information on how to create this column, see Create
Custom Calculations Within Your Table [page 1469].
Note
If you add a dedicated commenting column, all existing data point comment wedge icons will become
invisible.
Note
If you disable the Allow Data Point Comments option, existing data point comments, including those added
via data cell context menu or displayed and managed in the dedicated commenting columns, will disappear
and you won't be able to edit these comments as a result. They will be shown again once you enable the
option.
Next Steps
To create a new comment thread or reply to an existing comment thread, select the target cell in the column
and enter your comment in the displayed dialog. Select Add Comment to submit the comment.
Note
Up to four distinct comment threads can be added to a specific widget in a story. Each comment thread
can have a maximum of 100 comments. As users add multiple threads to a widget, comment icons will be
superimposed on one another. The top icon is associated with the most recent thread.
Note
The dedicated commenting column is created as a new member of the same dimension and hierarchy
level as the member it references. You can change this context by referencing a different column in
the commenting formula. Changing the drill state or adding a dimension can impact the context of the
referenced cell.
Only the most recent comment in a thread will display in a comment column cell. To view an entire
comment thread, double-click in the corresponding cell.
Note
Let's dive into this example. The table on the left is built based on Model_1 while the table on the right is
based on Model_2. A model link has been established between Model_1 and Model_2, and Location_1 and
Location_2 are linked dimensions. A page filter based on the dimension Location_1 is applied to the table
Model_2 apart from the table Model_1 (the filter also affects the dimension Location_2 from Model_2),
which makes it a typical case of filtering across models.
When filtering across models, if a story or page filter and a table are based on different models, you are
advised not to add comments in such a table (table Model_2 in this example). If you do add comments,
follow these two suggestions:
• Put all linked dimensions in the table axis (the linked dimension Location_2 is put in the axis of table
Model_2).
• Use a direct filter (like the page filter on the left), which means a filter based on linked dimensions,
instead of an indirect filter (like the page filter on the right, which is based on the dimension Color_1,
which is not a linked dimension).
For more information about filtering across models, see Filter Across Models [page 1066].
If you export a story as a PDF, PPTX, or Google Slides file, comment columns in the visible parts of the story
tables will also be exported.
You can also export a table with its comment column as a CSV or XLSX file. Note that you need to set Scope to
Point of view in the case of a CSV file.
Note
Whether data point comment indicators appear in an exported story is not deterministic; you may not see
them in the exported file.
You can view and create dimension comments that record textual descriptions of dimension member
combinations in a table.
Prerequisites
To set or change the comment thread limit for your model, go to System Administration System
Configuration , and update the value for Limit of comment threads per Model.
Context
You can leave dimension comments on dimension member combinations from any type of acquired model.
Note
You can't add dimension comments to table cells from blended models or live data models.
Note
Dimension comments are only supported on tables with the option of Optimized Presentation enabled.
• You should have Read, Create, and Delete permissions for the object-type Comment in the tenant.
• You should have Add Comment, View Comment, and Delete Comment permissions on the model to
comment on dimension member combinations.
• You should have Add Comment, View Comment, and Delete Comment permissions on the story.
Note
If any one permission is missing in the combination, then you won't be able to perform the relevant action.
When you work on dimension comments, the dimension comment context is taken into account. It refers to
all the dimensions defined in the dimension comment formula, which are specified either in the row axis or in
a filter. When you add or remove dimensions, the existing dimension comments will still be visible if their own
contexts have not changed.
Example
First scenario: the dimension comment in the first screenshot is created with a Product filter on Athletic
Shirts.
Second scenario: Product has been added to the visible dimension without a filter.
Note
Role-based data security and data access control are not taken into account when it comes to dimension
comments as these comments are irrelevant to fact data.
Note
If the account dimension is defined in the dimension comment formula, you can add dimension comments
on accounts and calculated accounts (accounts with formulas) defined in the Modeler. These calculated
accounts can also include variables, which can be of number and dimension types. Note that the context
of such comments does not include the specific filters on dimensions or variables used to create such a
calculated account, but includes its ID.
Note
Dimension comments will be shown for each member of dimensions which are in the row axis but not
defined in the dimension comment formula.
Example
Since dimension Product is in the row axis but not defined in the dimension comment formula,
dimension comments are shown for each member of this dimension, with comment cells sharing the
same context showing the same dimension comment.
The data cells highlighted in the previous example are pasted to dimension comment cells, and
“51,282.05” is shown as the dimension comment for all cells.
Note
Hierarchy is not taken into account for dimension comments. For example, a dimension comment on
a parent member will always be shown regardless of which child members are selected, because child
members are not stored as dimension comment context for the parent member. Besides, dimension
comments won’t change after hierarchy is switched.
Note
Different from data cells, there is no background color indicating value change for dimension comment
cells.
Note
As opposed to comments made on data points, you can delete dimension comments by using a
keyboard shortcut, or by entering or pasting an empty value.
Note
You can copy and paste dimension comments within the same table, across tables within the same story,
from tables in another story, and from external sources like a Microsoft Excel spreadsheet.
Next Steps
You can export a story as a PDF, PPTX or Google Slides file, and existing dimension comments in the visible
parts of the story tables will also be exported.
You can also export a table with all its dimension comments as a CSV or XLSX file. Note that you need to set
Scope to Point of view.
Note
• Measures and cross calculations can’t be defined in the dimension comment formula.
• Dimension attributes can’t be defined in the dimension comment formula and hence are not included
in the context for dimension comments. If there are dimension attributes in the table axis or attribute-
based filters, and the corresponding dimension is used in the dimension comment formula, the
dimension has to be put in the row to make dimension comments work.
• Dimension comments are not supported when linked dimensions are used in the dimension comment
formula and these dimensions are not in the row.
• Dimension comments are not supported when dimensions defined in the dimension comment formula
are used in complex advanced filters.
• Dimension comments are not supported on parent nodes of non-time dimensions with level-based
hierarchies.
• If multiple parent-child hierarchies are used in multiple filters of a dimension, dimension comments are
not supported on its members.
• When a dimension of parent-child hierarchy in the row is defined in the dimension comment formula,
and flat presentation and one hierarchy are used in two or more filters of this dimension, dimension
comments are not supported on members in flat presentation of this dimension if these members are
parent nodes in the other hierarchy.
• If you have a property of a user-managed time dimension with semantic type of Others and use it as a
level in a time hierarchy, dimension comments are not supported when the hierarchy is adopted.
• Dimension comments made on members in a hierarchy based on fiscal year are not shown for their
corresponding members when you switch to a hierarchy based on calendar year, and vice versa.
• Dimension comments are not supported when you set a filter using member navigation and the
filtered dimension is defined in the dimension comment formula. For example, in an optimized design
experience story or in view mode of an optimized view mode story, a dynamic range filter is set
on a user-managed time dimension defined in the dimension comment formula. If a story in the
non-optimized (classic) experience that includes such a filter is converted to one in the optimized story
experience, dimension comments will also become unavailable.
In addition to a numeric value, you can add a small bar chart to each cell of a measure in a table column or row.
Prerequisites
Create a table with at least one measure (an account measure, for example) and a dimension.
Context
You can display a bar chart or a variance chart in your table cells so that you have both a visual and a numeric
view of your data.
Procedure
1. To choose the members to display, in the Builder select your measure and then select (Manage Filters):
select your members and then click OK.
2. In the table, right-click a measure column or row header and then select (In-Cell Chart).
Each cell in the table for the selected measure displays a bar and a value.
3. To hide the bar chart, right-click the measure header again and then select (In-Cell Chart).
4. To change the bar chart to a variance chart, do the following:
a. In the Builder under Chart Structure, select Comparison and then select either Variance Bar or
Variance Pin.
If you see Table Structure instead of Chart Structure, you may need to select the bar chart in the table,
or hide the in-cell chart and then show it again.
When working with large data sets, you might want to see the entirety of the data you’re working with.
Context
With large data sets, there’s usually more data than a table can display, and scroll bars help you view the
columns and rows that aren’t displayed initially.
In Canvas pages, you can resize a table so that its size is no longer fixed and automatically adjusts vertically to
display all rows. The table size changes depending on different events, such as data refresh or new filters for
instance. In Edit mode, the size of the table also adapts to the canvas size or orientation, with both fixed and
dynamic page options. In View mode, the size of the table also adapts to the canvas size or orientation defined in
Edit mode.
The option applies only to the table you have selected, but you can have multiple automatically resizable tables
on the same canvas.
Note
• There’s no relative positioning when using automatically resizable tables. They might overlap other
existing widgets on the canvas. If that’s the case, try repositioning either the tables, or the overlapped
widgets.
Restriction
Don't use this option if you are planning to use a script (Advanced Mode feature) on the same story tab to
switch between visible widgets.
For more information about the script, see Best Practice: Switch Between Chart and Table [page 1778]
Procedure
Create a forecast or rolling forecast layout to look back at Actuals data for time periods before a cut-over date,
and look ahead to forecast data for subsequent time periods.
Prerequisites
Your story needs to have a planning model. For information on different types of models, see Learn About
Models [page 599].
The table in your story should have a version with data that you want to use for the forecast periods. If it does
not have one, you can use Version Management to create one. For more information, see Creating Versions
[page 2182].
Context
• Layout
Granularity: Quarter, Month, or Week
Range: Year, Quarter, or Month
Range refers to the timeframe covered in a forecast layout. The granularity value of Look back additional
or Look ahead additional is set to be of an equal or lower level than the value of Range. For example, if you
set Range to Month, the only options available for Look back additional or Look ahead additional are Month
and Week.
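The constraint between Range and the look back/ahead granularity can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
LEVELS = ["Year", "Quarter", "Month", "Week"]  # coarse → fine
GRANULARITIES = ["Quarter", "Month", "Week"]   # options for Look back/ahead additional

def allowed_granularities(range_level):
    # Only granularities at the same level as Range or finer are offered.
    idx = LEVELS.index(range_level)
    return [g for g in GRANULARITIES if LEVELS.index(g) >= idx]

allowed_granularities("Month")  # ['Month', 'Week']
allowed_granularities("Year")   # ['Quarter', 'Month', 'Week']
```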
Type: Rolling Forecast
• Calculation: has one field – Sum for – which has the following options:
• Cut-over Year
• All
• Look ahead
• None
• Additional Versions: if you have more versions, you can use them in your layout.
Procedure
Results
The table now shows actual and forecast values. The Builder panel displays the Forecast information for
Cut-over date and Sum for.
Add thresholds to tables and choose how to display the threshold values.
Restriction
(Classic design experience) Using thresholds in tables is not supported for the new model type that
includes an account dimension.
You can add thresholds to measures in tables, including calculated and restricted measures.
After the threshold is applied to the table, when you move the cursor over a value, the tooltip will display the
range and threshold name.
Restriction
You can also add thresholds to calculated rows and columns; however, those thresholds use number ranges
only and cannot be compared to measure values.
For information on how to create a threshold, see Creating Story-Defined Thresholds (Optimized Story
Experience) [page 1511].
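Conceptually, a threshold maps each measure value to a named range, which is what the tooltip then displays (a hedged sketch; the range names and boundaries here are made up for illustration):

```python
# Each range is (name, lower bound inclusive, upper bound exclusive).
ranges = [
    ("Critical", float("-inf"), 0),
    ("Warning", 0, 1000),
    ("OK", 1000, float("inf")),
]

def classify(value):
    # The tooltip on a table cell shows the matching range and threshold name.
    for name, low, high in ranges:
        if low <= value < high:
            return name

classify(500)  # 'Warning'
classify(-3)   # 'Critical'
```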
You can add thresholds to the table directly or by using the Builder panel.
Directly in the table:
1. Select one or more table cells.
2. Right-click and then select New Threshold.
The Thresholds panel appears.
Using the Builder panel:
1. With your table selected, open the Builder panel.
2. Choose a measure or account.
3. Select (More) Threshold.
4. In the Model section, select a model and a measure.
Note
(Classic design experience) The Thresholds option is
not supported for the new model type that has an
account dimension. The option will be removed in the
next QRC release: SAP Analytics Cloud 2024.QRC2.
To make it easier to find the threshold values in the table, you can change the font color or the cell
background.
Once you've created some thresholds, you may want to edit the ranges, or see a list of available thresholds.
With your table selected, open Builder, choose a measure, select (More) Thresholds , and then
select one of the following options.
Option Description
Show Thresholds Shows all the valid assigned threshold values in the table
cells.
Edit Ranges Allows you to edit the threshold ranges for the selected
threshold.
View All Thresholds Opens the Conditional Formatting panel to display all
thresholds.
Use the following process to change how the threshold values are displayed:
The table cells automatically change to reflect the new style choice.
You can change how dimension, account, or measure members are sorted, including keeping members
grouped when there are multiple dimensions on a row or column.
Restriction
The sort option won't be available if you aren't allowed to change the sort order for a particular dimension.
Restriction
Sorting on the Version dimension is not currently supported for SAP Integrated Business Planning.
Sorting Results
After the sort order is applied, information about the sort is added to the table subtitle. The information
includes the member that is sorted and one of the following icons to indicate the type of sort that was applied.
Note
The Forecast Layout option applies a custom sort on the Version dimension, and displays the icon in the
table's subtitle. To change the version sort order, you need to change your forecast layout options.
When you have multiple dimensions on a row or column and you sort on the innermost dimension, you can
decide whether to keep the row or column sort results grouped together.
Restriction
The member grouping options are only available for SAP acquired data models and SAP HANA live data
connections.
The grouping status (Keep Left Columns Grouped, Keep Top Rows Grouped) can only be changed in the
following situations:
In the following examples, the phrase regarding member grouping refers to both options: Keep Left Columns
Grouped and Keep Top Rows Grouped.
Example
You have a Profit and Loss model with the following measure and dimensions:
Type Name
Dimension Product
Dimension Region
Sort on Product (A-Z) with the Keep Members Grouped Option Disabled
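The grouped versus ungrouped sort behavior described above can be sketched in plain Python. The region and product values below are hypothetical placeholders (the example's actual figures are not shown): with member grouping kept, Product sorts A-Z only inside each Region block; with grouping disabled, the whole axis sorts by Product alone.

```python
from itertools import groupby

# Each row of the table axis carries (Region, Product) -- hypothetical data.
rows = [
    ("West", "Tablet"), ("West", "Laptop"),
    ("East", "Phone"), ("East", "Laptop"),
]

# Keep Members Grouped enabled: regions stay in their original order,
# and Product is sorted A-Z only within each region's block.
grouped = []
for region, block in groupby(rows, key=lambda r: r[0]):
    grouped.extend(sorted(block, key=lambda r: r[1]))

# Keep Members Grouped disabled: the whole axis is sorted by Product alone,
# so rows from different regions can interleave.
ungrouped = sorted(rows, key=lambda r: r[1])
```

Because Python's sort is stable, ungrouped rows with the same product keep their original relative order, which mirrors how ties are typically left undisturbed in the table.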
Prerequisites
Make sure you are in Edit mode. You can't change the order when you are viewing the story.
Context
You can rearrange the order of the members in your table. This applies to all members or measures on the
same hierarchical level, including calculated measures.
Restriction
The sort option won't be available if you aren't allowed to change the sort order for a particular dimension.
For example, you aren't allowed to change the order of members in the version dimension.
Procedure
1. Right-click your dimension header and then select (Sort Options) Add Custom Order .
3. Select a member and then drag it to the new location, or select either Move to top or Move to
bottom of currently shown members.
After moving members around, you can Preview the table layout.
4. When you've finished rearranging members, select OK to save your changes.
Next Steps
To change or delete the custom order, right-click the dimension header, select Sort Options, find the custom
order, and then select (edit). In the Edit Member Order panel, make your changes or select Delete.
Procedure
Note
The Sort Options feature is only enabled for dimensions that can be sorted.
Note
This option is only enabled if more than one sort direction is possible in the table.
If you choose Vertical, the column the dimension is included in will be sorted. If you choose Horizontal, the
row the dimension is included in will be sorted.
5. (Optional): Select Break Grouping.
Break Grouping is useful when there are multiple columns or rows in your table. If Break Grouping is
enabled, the sort will be applied to the selected column or row first. Dimensions in all other columns or
rows will be sorted next, and may not remain in their original groupings.
If Break Grouping is not enabled, the sort is performed on the outer column or row first.
Note
Break Grouping is only available if there are groups in the column or row.
Break Grouping is unavailable if the axis (that is, the row or column) that you want to sort meets one or
more of the following conditions:
• the axis has totals
• the axis has a hierarchy
• the axis contains a cross calculation
6. Under Related Dimensions, select the icon beside a dimension member to change the default value.
Results
The column or row is sorted in ascending or descending order. The sort will appear in the table subtitle, and can
be removed by clicking the icon in the subtitle.
Related Information
When you open an existing story that has tables with Key Performance Indicators (KPIs), the Table KPIs will
automatically be migrated to Story Thresholds.
Note
Consider the following before opening a story that contains KPIs:
• KPI ranges are defined for a table, but Threshold ranges are defined for the story.
• Migration could cause conflicts at the story level.
• KPI notifications won't be migrated.
• KPIs in the mobile app or pinned to the home page won't be migrated because they don't have a story
container.
If you have multiple tables with different KPIs defined on the same model and measure, then conflicts can
occur. Some of the conflicting KPIs are dropped and a warning message is shown in the affected tables.
This message goes away after you refresh or after you save and load the story.
Related Information
9.14.4.14 Aggregation
Normally, values in a table are aggregated along the hierarchy, but sometimes exceptions are necessary.
Aggregation of values in a hierarchy is determined by the Aggregation type setting, which is applied to each
account member in Modeler. Typically, values are summed up into higher-level totals.
No Aggregation
In some situations, it is not possible to simply aggregate values; this is true, for example, when working
with different currencies, or when values are expressed in units that are not compatible with each other. In
these cases, a parent cell may be displayed as empty, or a common value may appear in the cell, but it isn't a
simple aggregation.
In order to make these cells recognizable, the cell is shown with a diagonal line drawn through it, and a help
message is available to explain the data. Select the notification symbol next to the cell to see the explanation.
For analytic models, values can be booked to parent nodes in a hierarchy. In this case, the node shows the sum
of the booked value plus the sum of the child values. (For planning models, this situation only occurs when
working with different hierarchies for the same dimension. See Entering Values with Multiple Hierarchies [page
2211] and Disaggregation of Values During Data Entry [page 2207] for more information.)
Example
A value of 30 has been booked to the parent node Sales Expenses. The value shown in the table cell is 50; this
figure includes the aggregation of two child accounts.
Sales Expenses 50
– Travel Expenses 10
– Other Expenses 10
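The example's arithmetic can be checked directly: a node's displayed value is the value booked directly to the node plus the sum of its children (here 30 + 10 + 10 = 50). A minimal sketch:

```python
# A parent node's displayed value = directly booked value + sum of children.
def node_value(booked, children):
    return booked + sum(children)

# 30 booked to Sales Expenses; children are Travel Expenses and Other Expenses.
sales_expenses = node_value(30, [10, 10])  # 50
```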
When you copy a cell in a table and paste it to a cell in a grid that does not belong to a table, a cell reference is
created.
Cell references link the two cells so that they always have the same value; any changes to one cell will also
update the other.
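The linked-cell behavior can be sketched as two cell objects that share one underlying value. The classes here are purely illustrative (SAP Analytics Cloud's internals are not exposed); the point is that reading or writing either cell goes through the same shared value, so the two can never diverge.

```python
class SharedValue:
    """One value object that several linked cells point at."""
    def __init__(self, value):
        self.value = value

class Cell:
    def __init__(self, shared):
        self._shared = shared

    @property
    def value(self):
        return self._shared.value

    @value.setter
    def value(self, v):
        self._shared.value = v

# Pasting a table cell into the grid links both cells to one shared value.
shared = SharedValue(100)
table_cell, grid_cell = Cell(shared), Cell(shared)
table_cell.value = 250  # changing one cell also updates the other
```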
1. From the Tools menu, select Cell References and Formulas Show References . (If you don't see
the option in the Tools menu, select (more actions) and then select the option from the list.)
The cells in the table that have references are highlighted.
2. To find a particular referenced cell in the grid, select the source cell in the table. It will be highlighted with a
color that is different from other cell reference highlighting.
Scroll through the grid until you find other cells that have the same highlighting as the source cell.
To break the link between cells, select the reference cell in the grid and select Cell References and
Formulas Remove Reference .
When a grid cell references a cell that is no longer visible in the table, it displays a # character.
You can create custom calculations for your tables such as aggregations, calculated or restricted measures,
value-based dimensions, and so on.
The Calculation Editor is used to create the calculations that can be added to your tables. You apply the
calculations either to the Account or to the Cross Calculations dimension.
You can also create some calculations directly within your table. For more information, see Create Custom
Calculations Within Your Table [page 1469].
Remember
The types of calculations that you can create are limited by where you are trying to create them, whether
that is by using the Calculation Editor or by creating a calculation directly within the table. Within the
table, you may see different calculation types available for rows or columns, or outer or inner dimension
members.
For each type of calculation, a new calculated or restricted member is created for the dimension that you
used to create it. You can also use dimension attributes as part of a calculation. For more information about
dimension attributes, see Combine Data with Your Acquired Data [page 722].
Note
The number format chosen in Profile Settings User Preferences influences the expected input format
for Story Calculated Measures and Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that commas (,) must be
used to separate function parameters (for example, IF(Condition, ValueIfConditionIsTrue,
ValueIfConditionIsFalse)).
• Choosing a number format that uses commas (,) as decimal separators means that
semicolons (;) must be used to separate function parameters (for example, IF(Condition;
ValueIfConditionIsTrue; ValueIfConditionIsFalse)).
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
You can display the calculation in a table by adding the Account or Cross Calculations dimension to the table,
or by selecting it in the filter applied to the dimension.
• Calculated measures: Perform a calculation on one or more members of either the Account dimension or
the Cross Calculations dimension. A new calculated member of the selected dimension is created as a
result. For more information, see Create Calculated Measures for Tables [page 1444].
Restriction
If the calculation is based on the Cross Calculations dimension then dimension members or attributes
from the Account dimension cannot be used.
• Restricted measures: Restrict the data from a member of either the Account dimension or the Cross
Calculations dimension so that it excludes certain members of one or more dimensions. For the Date
dimension, you can pick dynamic values such as year-to-date or previous quarter. A new restricted
member of the selected dimension is created as a result. For more information, see Create Restricted
Measures for Tables [page 1448].
Note
When using data from SAP BW, creating a restricted measure on another restricted measure (referred
to as a key figure in SAP BW) that uses the same dimension results in incorrect data in SAP Analytics
Cloud. A further restriction is not possible because SAP BW treats restrictions as OR operations. In
most cases restricted measures are intended to get the result as an AND operation.
For example, COUNTRY is restricted to Great Britain and Germany. In a sales scenario, the intended
result would be that you see products that are sold in both countries (Great Britain OR Germany) and
not a combination of either country and both of them (Great Britain OR Germany OR (Great Britain
AND Germany)).
• Difference From: Find the difference in an account’s value between two dates. A new calculated account
member is created as a result. For more information, see Calculate the Difference for Tables [page 1451].
• Currency conversions: For planning models with currency conversion enabled, add a new currency
conversion to the Cross Calculations dimension. For more information, see Displaying Currencies in Tables
[page 1559].
• Aggregation: Create calculations from aggregations such as sum, count, average, and so on. Choose what
conditions are required for the aggregation to be applied, and when the conditions are required. For more
information, see Create Aggregations for Tables [page 1453].
• Perform string and number conversions: Convert strings to numbers and numbers to strings, and use the
resulting values in calculated measures or dimensions. See Convert Strings to Numbers and Numbers to
Strings [page 1443].
• Dimension to Measure: The string and number conversion functions can be combined with Measure-
Based dimensions (in a calculated dimension) to create dimension to measure conversions. (See Create
Calculated Dimensions for Tables [page 1461], How to Create a Measure-Based Dimension [page 1463],
and Dimension to Measure [page 1444].)
• Forecast
• Rolling Forecast
For more information, see Creating a Forecast or Rolling Forecast Layout [page 1428]
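The SAP BW note above (about stacking restricted measures on the same dimension) can be illustrated with plain Python sets over hypothetical sales records: stacking two restrictions on one dimension yields the union of members (OR), whereas the usually intended result is the intersection (AND).

```python
# Hypothetical sales records: (product, country) pairs.
sales = {
    ("P1", "Great Britain"), ("P1", "Germany"),
    ("P2", "Great Britain"),
    ("P3", "Germany"),
}
products_gb = {p for p, c in sales if c == "Great Britain"}
products_de = {p for p, c in sales if c == "Germany"}

# SAP BW treats stacked restrictions on the same dimension as OR (union):
or_result = products_gb | products_de   # products sold in either country

# The usually intended result is AND (intersection):
and_result = products_gb & products_de  # products sold in both countries
```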
Calculations can use input controls. Input controls provide variable input for a calculation, allowing viewers to
influence the result of a calculation without modifying the underlying data or formula. For example, viewers
can choose to see the impact of a 1%, 2%, or 3% tax-rate increase. You choose the list of values for an input
control, and specify how the user can select values. Input controls can be formatted after they are added to the
canvas.
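Conceptually, a calculation input control is a parameter that the viewer picks from a designer-defined list and that is fed into the calculation at view time, without touching the underlying data or formula. A hedged sketch of the tax-rate example (all figures and names are hypothetical):

```python
# Values offered by the input control (the story designer chooses this list).
tax_rate_options = [0.01, 0.02, 0.03]

def net_revenue(gross, tax_increase):
    """Calculation that reads the viewer's input-control selection."""
    return gross * (1 - tax_increase)

# The viewer picks 2%; data and formula stay unchanged, only the input varies.
selected = tax_rate_options[1]
result = net_revenue(100_000.0, selected)  # about 98000.0
```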
The Calculation Editor dialog will change size to fit the calculation type information fields and re-center
itself on the page.
Note
You can add formulas to tables by inserting rows or columns based on any dimension. For more
information, see Create Custom Calculations Within Your Table [page 1469].
Note
(Optimized Story Experience) When creating a calculation input control, use letters A–Z, a–z, numerals
0–9 or underscore (_) in its name, which is used as ID in scripting.
There are times when you may want multiple calculations that are almost the same; for example, restricted
measures with different time filters. Creating a calculation takes time, and each calculation must be created
separately from start to finish, even if only one small parameter changes.
A faster way to create multiple similar calculations is to duplicate an existing calculation and then edit it.
Note
You can duplicate the following calculation types:
• Calculated measures
• Restricted measures
• Calculated dimensions
• Cross calculations
1. Select a table. In the Builder, expand the Story Calculations list. (Story Calculations are listed with
measures or with cross calculations.) Select the calculation and then
2. Make changes to the calculation. (You can't change the calculation type.)
3. Select OK.
1. Select a table.
2. In the Builder, select Story Calculations and expand the calculation list.
Convert a string value to a numeric value or a numeric value to a string value and use the new values in
calculated dimensions or measures.
There are two functions that you can use to convert values from one type to another in calculated measures or
calculated dimensions: ToNumber and ToText.
ToNumber Function
ToNumber converts string values to numbers.
1. In Builder, under Rows or Columns, select Add Dimensions Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Create your formula.
Example
ToNumber ([d/ACT_EMPLOYEE_NUMERIC:Age_String].[p/ID])
ToText Function
ToText converts numeric values to strings, and it uses an existing calculated dimension in its calculations.
1. In Builder, under Rows or Columns, select Add Dimensions Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Create your formula.
Example
The string and number conversion functions can be combined with Measure-Based dimensions (in a
calculated dimension) to create dimension to measure conversions. For more information about Measure-
Based dimensions, see How to Create a Measure-Based Dimension [page 1463].
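In ordinary code, the two conversions correspond to plain string-to-number and number-to-string casts. A sketch mirroring the Age_String example above (the values are hypothetical):

```python
# ToNumber analogue: a string attribute becomes a numeric value
# that can be used in calculations.
age_strings = ["42", "35", "58"]
ages = [int(s) for s in age_strings]
average_age = sum(ages) / len(ages)   # 45.0

# ToText analogue: a numeric value becomes a string,
# e.g. to build a display label in a calculated dimension.
labels = [f"Age {a}" for a in ages]
```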
There is also a calculation type that lets you convert a dimension to a measure.
3. For the column that you added in the Builder tab, select Add Calculation .
4. In the Calculation Editor, select Dimension to Measure from the list.
5. From Dimension Attribute to Convert, select a dimension.
6. Select context dimensions.
7. Set the aggregation operation type.
8. Select OK and review the results in the table.
Related Information
In SAP Analytics Cloud, the Calculated Measures calculation creates a new measure (or account) that you can
use in your table.
Context
Measures (which are referred to as Accounts in a planning model) are numerical values on which you can
use mathematical functions. When setting up your calculation, you’ll apply the typical formula functions,
conditions, and operators to the data contained in your model.
Calculated measures allow you to perform mathematical and Boolean operations on your data. For example,
you can use a calculated measure to show the effect a sales increase of 20% would have on profits.
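The profit example reduces to simple arithmetic that such a calculated measure would evaluate for each table cell. A sketch with hypothetical figures (the function name and numbers are illustrative, not part of the product):

```python
# What a "what if sales rise 20%?" calculated measure computes per cell.
def projected_profit(sales, costs, uplift=0.20):
    return sales * (1 + uplift) - costs

baseline = projected_profit(1000.0, 800.0, uplift=0.0)  # 200.0
scenario = projected_profit(1000.0, 800.0)              # uplifted profit
```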
Note
The number format chosen in Profile Settings User Preferences influences the expected input format
for Story Calculated Measures and Calculated Dimensions.
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
The following functions, conditions, and operators are available for creating calculated measures. For
descriptions of each option, see All Formulas and Calculations [page 873].
Functions
FLOAT() ACCOUNTLOOKUP()
Note
The ISNULL function identifies NULL values, but won't replace a NULL value with a value.
For example, your calculated dimension has the following formula: CD1=IF(ISNULL(D1), "No Value",
D1)
The value in cell D1 will not be changed to show the words “No Value”: it will stay as a NULL value.
Conditions:
AND OR
Operators:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
In the Calculation Editor, you only see the functions that are valid for your data source.
Procedure
Tip
b. For the row or column that you added in the Builder tab, select Add Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the table or
model.
Tip
As you are adding formula details, you will see a message appear and disappear: it appears when your
formula is not valid and disappears when the formula is valid.
Existing input controls appear in the Available Objects list, and can be added to a formula.
Note
You can add preset functions, conditions, and operators by selecting options in the Formula Functions list.
You can use IF conditional functions, and you can display a list of possible formulas for the function by
pressing Ctrl + Spacebar.
Note
The Cross Calculations dimension cannot be used in the Account dimension calculations.
Note
• # – Returns all calculations (that is, measures created using the Calculation Editor).
• @ – Returns input controls. (Only single value numeric input controls are returned.)
Tip
In the list of measures, you see both the measure ID and the measure description. After you select a
measure, the formula editor area shows only the measure ID. To view the measure description, click
outside the formula editor area.
5. (Optional) If you want to verify that your formula is formatted correctly, select Format: it may reformat your
formula before displaying a valid formula message.
6. Select OK.
Results
A measure is created based on the formula you entered, and it is added as a new member of the Account or
Cross Calculations dimension.
On the canvas, input controls are indicated by the (Formula) icon. If you hover over the icon, all calculations
associated with the input control are displayed. By default, the input control is displayed in token mode where
input values can be selected from a drop-down list. The input control can be expanded into widget mode, where
radio buttons appear beside each value.
Related Information
Context
Measures (which are referred to as Accounts in a planning model) are numerical values on which you can use
mathematical functions.
Restricted measures can be useful for comparing one value to a set of other values in the same table. For
example, you can create a measure that contains all expenses for the country of Australia, and compare
expenses from Australia side by side with expenses for all other countries.
Note
Procedure
Tip
2. For the row or column that you added in the Builder tab, select Add Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the table or
model.
When Constant Selection is disabled, the restricted measure value is influenced by chart, page, and story
filters, as well as categorical axis values. This is the default setting.
When Constant Selection is enabled, the restricted measure value is determined by the values you specify
in the Calculation Editor and will remain constant. Enabling constant selection is useful for comparing a
Note
Remember
When searching for a specific measure in a long list of measures, you must use the measure's
description, not its ID, as your search term.
6. In the Dimensions section, select one or more dimensions along which you want to restrict the measure.
If you want to restrict the measure along more than one dimension, use Add a Dimension.
7. Beside each dimension, under Values or Input Controls, select Click to Select Values, and then choose an
option from the list:
• Select by Member: Select values from the list of available members. If you select Exclude selected
members, all members except the ones selected are applied to the restricted measure. You can use
(Search) to find specific values. When you expand the list beside the search icon, you may have the
following options: Show Description and Show Hierarchy.
Show Description lets you choose to view the member Description, ID and Description, or ID. Show
Hierarchy lets you choose a Flat presentation or an available hierarchy.
The members you choose appear in the Selected Members list.
• Select by Range:
Enter a start value and end value for the range. Select Add a New Range to add additional ranges to the
restricted measure.
Note
This option appears only if dimension values are numerical or date based. If the dimension is date
based, you can also select week, quarter, month, or year from the slider that appears. Ranges can
be fixed or dynamic. For more information, see Story and Page Filters [page 1518].
In some workflows, for example planning, you might want to set the current date of a dynamic date
range filter to be different from today's actual date. To learn how to do this, see Customizing the
Current Date [page 1546].
For date dimensions, you can quickly specify a dynamic restriction based on the current date using these
options:
Note
To see restricted measures with dynamic date range selections, you'll need to add the Date
dimension to the table, or filter it to a single member.
• Create a New Calculation Input Control (This option is available when adding a calculation to a table.):
1. Enter a name for the input control, and then select Click to Add Values.
2. Select values from the list of available members. If you select Exclude selected members, all
members except the ones selected will be included in the input control. You can use (Search)
to find specific values. When you expand the list beside the search icon, you can choose to view the
member Description, ID and Description, or ID.
3. Expand the Settings for Users section, and then choose whether users can do the following in the
input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy. Select OK.
4. Select OK.
8. Select OK.
Results
A measure is created that does not include data for the members that you excluded. Any input controls you
created appear on the canvas and are listed in the Calculation Editor under Input Controls. Input controls can
also be used in calculated measures.
On the canvas, input controls are indicated by the (Formula) icon. If you hover over the icon, all calculations
associated with the input control are displayed. By default, the input control is displayed in token mode where
input values can be selected from a drop-down list. The input control can be expanded into widget mode, where
radio buttons appear beside each value.
Related Information
In your SAP Analytics Cloud table, you can determine the difference between measure values from two time
periods.
Prerequisites
A table must be selected. The data source must include a date dimension. However, the table only needs the
date dimension when comparing Current Period.
Note
Context
The Difference From aggregation shows the difference in the value of an account between a starting date and
a target date that is calculated from a specified number of time periods before or after the starting date. For
example, you can compare the sales on February 2015, with the sales 6 months previously.
You can also use the current date as a starting date for your comparison.
Procedure
1. Select a table.
• 1. Under Rows or Columns in the Builder tab, select Add Dimensions and add the Account dimension.
2. Next to the Account dimension in the Builder tab, select Add Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the model.
Select a value from the list of available members. You can use (Search) to find specific values.
When you expand the list beside the search icon, you can choose to view the member Description, ID
and Description, or ID, and specify whether to display quarters and halves in the time hierarchy.
• Create a New Calculation Input Control (This option is available when adding a calculation to a table.):
1. Enter a name for the input control, and then select Click to Add Values.
2. Choose how to select values for the input control:
• Select by Member:
Select values from the list of available members. If you select Exclude selected members,
all members except the ones selected will be included in the input control. You can use
(Search) to find specific values. When you expand the list beside the search icon, you can
choose to view the member Description, ID and Description, or ID, and specify whether to
display quarters and halves in the time hierarchy.
The members you choose appear in the Selected Members list.
Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy. Select
OK.
• Select by Range:
Choose a start value and end value for the range. Select Add a New Range to add additional
ranges to the input control.
Note
The Select by Range option appears only if dimension values are numerical or date based.
If the dimension is date based, you can also select quarter, month, or year from the slider
that appears. Ranges can be fixed or dynamic; for example, you could choose the fixed
range January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities, these
granularities are also available: current year, current quarter, and current month. For more
information, see Story and Page Filters [page 1518].
Expand the Settings for Users section, and then choose whether users can do the following in
the input control: Single Selection, Multiple Selection, or Single Value Slider. Select OK.
3. Select OK.
This value is used as the starting date to calculate the difference from.
7. In the To (B) section, choose PREVIOUS or NEXT.
• PREVIOUS – calculates the difference between the starting date and a specific previous time period:
• 'N' Periods
The time period must be a discrete value, or an input control with a static list of discrete values. This is the
number of time periods before or after the starting date that you want to compare the value to. The period
is the smallest time granularity of the current date, included in your data source. For example, the period
may be a year, a quarter, or a month.
Note
9. Optional:
• Select Set No Data as Zero: To include a NULL data point in your variance, set it to the value zero.
• Select Calculate as Percentage.
Select Absolute Base Value to display the absolute percentage change. Absolute values are useful if
you are comparing negative values, but have a positive change (For example, (-20) - (-40) = +20. The
percentage change (+20/-40) would be -50% if you did not use absolute values.)
1. In the Divide By section, choose Compare (A) or To Value (B).
The Compare (A) is the value of the measure at the starting date. The To Value (B) is the value of the
measure at your target date.
10. Select OK.
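The Calculate as Percentage options reduce to the arithmetic below (the function name is illustrative). With Divide By set to To Value (B), the change is (A − B) / B; Absolute Base Value divides by |B| instead, which keeps the sign of the change meaningful when the base is negative, as in the documentation's example.

```python
def difference_from(a, b, as_percentage=False, absolute_base=False):
    """Difference between Compare (A) and To Value (B), dividing by B."""
    diff = a - b
    if not as_percentage:
        return diff
    base = abs(b) if absolute_base else b
    return diff / base

# The documentation's example: A = -20, B = -40, so the change is +20.
plain = difference_from(-20, -40, as_percentage=True)       # -0.5  (-50%)
absolute = difference_from(-20, -40, as_percentage=True,
                           absolute_base=True)              # +0.5  (+50%)
```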
Related Information
In your SAP Analytics Cloud table, you can use the Calculation Editor to create aggregations.
Prerequisites
A table must be selected. The data source must contain key figures.
Context
Calculations can be created from aggregations such as sum, count, average, and so on. When you create an
aggregation, you can also choose what conditions are required for the aggregation to be applied, and when the
conditions are required. For example, you can create an aggregation to count the number of sales per store,
when the store carries a certain product.
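The store example above amounts to a filter (the condition) followed by an aggregation. A sketch over hypothetical sales records: stores qualify only if they carry a given product, and the aggregation then runs over the qualifying stores' records.

```python
# Each record: (store, product, sales_count) -- hypothetical data.
records = [
    ("S1", "Laptop", 5), ("S1", "Phone", 3),
    ("S2", "Phone", 7),
    ("S3", "Laptop", 2), ("S3", "Phone", 1),
]

# Condition: the store carries "Laptop".
qualifying = {store for store, product, _ in records if product == "Laptop"}

# Aggregation: total sales per qualifying store.
sales_per_store = {}
for store, _, n in records:
    if store in qualifying:
        sales_per_store[store] = sales_per_store.get(store, 0) + n
```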
Procedure
• 1. Under Rows or Columns, select Add Dimensions and add the Account dimension or the Cross
Calculations dimension.
2. Select either the Account or Cross Calculations dimension and then select Add
Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the model.
Type Description
SUM The sum of the measure’s values across all leaf members of the selected dimensions.
COUNT The number of leaf members of the selected dimensions with values for a specific measure,
including null values. Empty values for this measure are not counted.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are use jointly with the unbooked mode for example, or
to create a cross-join of the rows and columns axis.
The COUNT function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
COUNT The number of leaf members of the selected dimensions that have at least one measure value.
DIMENSIONS This aggregation doesn't count members that have an empty value for all measures. (COUNT
DIMENSIONS is not valid for the Cross Calculations dimension.)
COUNT excl 0, NULL The number of values, excluding zero and null values.
AVERAGE The average of the measure’s values across the selected dimensions, including null values.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are use jointly with the unbooked mode for example, or
to create a cross-join of the rows and columns axis.
The AVERAGE function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
AVERAGE excl NULL The average of the measure’s values, excluding null values.
AVERAGE excl 0, The average of the measure’s values, excluding zero and null values.
NULL
FIRST Shows the first (oldest) value in the selected time period: for example, show the number of
employees on the first day of a month.
LAST Shows the last (most recent) value in the selected time period: for example, show the number of
employees on the last day of a month.
STANDARD Calculates how much the value varies from the average value in the series.
DEVIATION
MEDIAN The median (middle) value (half of the data lies below the median value, and half lies above).
MEDIAN excl. NULL The median (middle) value (half of the data lies below the median value, and half lies above),
ignoring null values.
MEDIAN excl. 0, NULL The median (middle) value (half of the data lies below the median value, and half lies above), ignoring null and zero values.
FIRST QUARTILE Calculates the first quartile value (25% of the data is less than this value).
FIRST QUARTILE excl. NULL Calculates the first quartile value (25% of the data is less than this value), ignoring null values.
FIRST QUARTILE excl. 0, NULL Calculates the first quartile value (25% of the data is less than this value), ignoring null and zero values.
THIRD QUARTILE Calculates the third quartile value (75% of the data is less than this value).
THIRD QUARTILE excl. NULL Calculates the third quartile value (75% of the data is less than this value), ignoring null values.
THIRD QUARTILE excl. 0, NULL Calculates the third quartile value (75% of the data is less than this value), ignoring null and zero values.
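The exclusion variants above all follow the same pattern: drop null values (and optionally zeros) before aggregating. The following Python sketch illustrates that pattern under one stated assumption, namely that an included null contributes 0 to the result; the helper names are illustrative and not part of SAP Analytics Cloud:

```python
import statistics

def _keep(values, exclude_null=False, exclude_zero=False):
    # Assumption: an included null contributes 0, matching
    # "including null values" in the AVERAGE description above.
    kept = []
    for v in values:
        if v is None:
            if exclude_null:
                continue
            v = 0
        if exclude_zero and v == 0:
            continue
        kept.append(v)
    return kept

def average(values, exclude_null=False, exclude_zero=False):
    kept = _keep(values, exclude_null, exclude_zero)
    return sum(kept) / len(kept) if kept else None

def median(values, exclude_null=False, exclude_zero=False):
    return statistics.median(_keep(values, exclude_null, exclude_zero))

def quartiles(values, exclude_null=False, exclude_zero=False):
    # Returns (first quartile, third quartile).
    q = statistics.quantiles(_keep(values, exclude_null, exclude_zero),
                             n=4, method="inclusive")
    return q[0], q[2]
```

For example, average([10, None, 0, 20, 30]) treats the null as 0 and returns 12, while passing exclude_null=True and exclude_zero=True returns 20.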
6. In the Aggregation Dimensions section, select one or more dimensions to apply the aggregation to. For
example, if you select the Date and Product dimensions, values for each product at the smallest time
granularity in the model will be aggregated.
Conditional aggregation allows you to specify when the aggregation is applied and what conditions are
required for the aggregation to be applied.
a. In the Aggregate when aggregation dimensions section, choose Have Measure values for Conditions, or
Do not have Measure values for Conditions.
Option Description
Select by Member Select values for the condition from the list of available members. If you select
Exclude selected members, all members except the ones selected will be applied to
the aggregation condition. You can use (Search) to find specific values. When
you expand the list beside the search icon, you can choose to view the member
Description, ID and Description, or ID.
Select by Range Enter a start value and end value for the range of values for the condition. Select Add
a New Range to add additional ranges to the aggregation condition.
Note
This option appears only if dimension values are numerical or date based. If the
dimension is date based, you can also select quarter, month, or year from the
slider that appears.
Ranges can be fixed or dynamic; for example, you could choose the fixed range
January 2017 to December 2017. If this story is opened in 2018, the story will still
show 2017 data. For dynamic date ranges, in addition to the above granularities,
these granularities are also available: current year, current quarter, and current
month. For more information, see Story and Page Filters [page 1518].
Create a New Calculation Input Control This option is available when adding an aggregation to a table. It allows table viewers to select their own values for the aggregation condition from a list of values that you specify.
1. Enter a name for the input control, and then select Click to Add Values.
2. Choose how to add values to the input control:
• Select by Member:
Select values from the list of available members. If you select Exclude
selected members, all members except the ones selected will be used in
the condition for the aggregation. You can use (Search) to find specific
values. When you expand the list beside the search icon, you can choose to
view the member Description, ID and Description, or ID.
Expand the Settings for Users section, and then choose whether users can
do the following in the input control: Single Selection, Multiple Selection, or
Multiple Selection Hierarchy. Select OK.
• Select by Range:
Choose a start value and end value for the range. Select Add a New Range
to add additional ranges to the input control.
Expand the Settings for Users section, and then choose whether users can
do the following in the input control: Single Selection, Multiple Selection, or
Single Value Slider. Select OK.
3. Select OK.
8. Select OK.
Related Information
In your SAP Analytics Cloud table, you can create running total calculations (such as sum or average) using the
Calculation Editor.
SAP Analytics Cloud already allows you to create running (accumulative) sums, counts, averages, and so on in
tables, but those calculations are confined to the table they are created in. (For more information on custom
running functions, see Create Custom Calculations Within Your Table [page 1469].)
This feature lets you create running total calculations that can be used for other tables or charts in your story.
Remember
Dimensions that you use in your running calculations become required dimensions: if they aren't already in
your table, you must add them.
Tip
If you rearrange or add dimensions, you'll change the results of the running calculations.
For example, your running sum shows the values for different sales managers. When you add a location
dimension as the outer dimension, the running sum now restarts calculating when the location changes.
1. Select a table.
2. In Builder, under Rows or Columns, select Add Dimensions and add the Account dimension or the Cross
Calculations dimension.
3. Select either the Account or Cross Calculations dimension and then select Add Calculation .
Note
The option to create a new calculation may not appear if calculations are not possible for the model.
Operation Description
SUM The sum of the account's values across all leaf members of the selected dimensions.
COUNT The number of leaf members of the selected dimensions with values for a specific account,
including null values. Empty values for this account are not counted.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode or to create a
cross-join of the rows and columns axis.
The COUNT function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
AVERAGE The average of the account's values across the selected dimensions, including null values.
Note
Although NULL and NO_DATA cells are visually identical, the application still distinguishes
them in the backend.
NULL cells are retrieved directly from the database, and can also be created using formulas.
For example, IF( cost > 1000; NULL; 1).
NO_DATA cells are empty cells that are used jointly with the unbooked mode or to create a
cross-join of the rows and columns axis.
The AVERAGE function only accounts for NULL cells in the aggregation, and discards the
NO_DATA cells.
Related Information
You can choose to combine existing dimensions to create your own dimensions.
Note
The number format chosen in the user preferences ( Profile Settings User Preferences ) influences the
expected input format for Story Calculated Measures and Calculated Dimensions:
• Choosing a number format that uses periods (.) as decimal separators means that commas (,) must be
used to separate function parameters (for example, IF(Condition, ValueIfConditionIsTrue,
ValueIfConditionIsFalse)).
• Choosing a number format that uses commas (,) as decimal separators means that semicolons (;) must be used to separate function parameters (for example, IF(Condition; ValueIfConditionIsTrue; ValueIfConditionIsFalse)).
If the formula is typed in from scratch, the correct function auto-completion happens based on the user
preferences. However, if you copy and paste a full formula string, auto-complete won't be able to adapt if
there is a mismatch between separators used and the user preferences.
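The separator rule can be sketched in a few lines of Python; the `format_if` helper is hypothetical and only illustrates how the decimal-separator preference determines the parameter separator:

```python
def format_if(condition, if_true, if_false, decimal_separator="."):
    # Period decimals -> comma parameter separators;
    # comma decimals -> semicolon parameter separators.
    sep = "," if decimal_separator == "." else ";"
    return f"IF({condition}{sep} {if_true}{sep} {if_false})"
```

With a comma decimal separator, format_if("Condition", "ValueIfConditionIsTrue", "ValueIfConditionIsFalse", decimal_separator=",") yields IF(Condition; ValueIfConditionIsTrue; ValueIfConditionIsFalse).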
• Concatenated dimension
• Grouped dimension
• Measure-based dimension
The following functions, conditions, and operators are available for creating calculated dimensions. For
descriptions of each option, see All Formulas and Calculations [page 873].
Functions:
Note
The ISNULL function identifies NULL values, but won't replace a NULL value with a value.
For example, your calculated dimension has the following formula: CD1=IF(ISNULL(D1), "No Value",
D1)
Conditions:
AND OR
Operators:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
1. In Builder, under Rows or Columns, select Add Dimensions Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, select Calculated Dimension.
3. Provide a name for your calculated dimension.
Remember
You can press Ctrl + Space in the formula area to display a list of suggestions, or type [ for a list of
valid measures and dimensions.
Example
You can create a calculated dimension based on measure or account values or thresholds. For example, you
could create a high/medium/low table row or column based on your own criteria.
1. In Builder, under Rows or Columns, select Add Dimensions Calculated Dimensions Create Calculated
Dimension .
2. In the Calculation Editor under Type, depending on whether you're working with a classic account model
or a new model type, select Measure-Based Dimension or Value-Based Dimension.
3. Provide a Name for the dimension.
4. In the Properties section, select an account, a measure, or both using the dedicated fields:
• If you’re working with a new model type, select the measure and the account using the dedicated field.
• If you're working with a classic account model, select a measure.
5. Select Use measure values as dimension members.
This will replace the member name field. You can set the scale and the number of decimal places for the
measure values that are converted to dimension members.
6. Provide a member name: for example, high, medium, or low.
7. Set the Measure Values.
8. From the Dimension Context, select one or more versions.
Note
For hierarchical dimensions, you also need to specify the level. For example, Date.Year
Related Information
You can add custom calculations to the Forecast or Rolling Forecast layout, which allows you to combine
different values in the same table. You can also create Calculation Input Controls for your forecast.
Using the Calculation Editor, you can create a Forecast member that is added to your table. When you add
more than one calculated Forecast or Rolling Forecast member, you can then create a custom calculation to
show the delta between those members.
You can add input controls to your tables for forecast layouts.
1. Under Rows or Columns in the Builder tab, select Add Measures/Dimensions and add the Cross
Calculations dimension.
Related Information
Use Calculation Input Controls to create a forecast layout that can be updated dynamically from a central
setting.
To update all your similar forecast layouts at the same time, you can create dynamic input controls by adding
attributes to the Version dimension of your model. For example, you could add a current version flag and a
cut-over date.
Each month (or designated time interval), someone must manually update attributes in the Version dimension
of your model, and edit the attribute values that are being used in the calculation input control formulas.
After updating the model, the following changes are visible in the dynamic layout:
The Version dimension of your model can include attributes that allow you to control and update all forecast
layouts. The following types of columns can be added to your model's Version dimension for working with
cut-over dates:
Note
For fiscal years, use calendar values in the CutOverDate column. For example, if your fiscal year
begins in July, the value should be 201807 (YYYYMM).
The FIND() function looks for a member whose attribute (named in the second parameter) matches the
value in the first parameter, and returns that member's attribute named in the third parameter.
• Value to match – the flag for your current version, or another Input Control.
• Property to match – the specific column from your Version dimension, for example, [d/Version].[p/CURRENT].
• Property to map – the ID associated with the property that you matched, for example, [d/Version].[p/ID].
Note
Formulas using the FIND function can only be used in forecast layouts or forecast calculations.
1. Under Rows or Columns in the Builder tab, select Add Measures/Dimensions and add the Cross
Calculations dimension.
Remember
You can press Ctrl + Space in the formula area to display a list of suggestions, or type [ for a list of
valid measures and dimensions.
Example
To create the second Calculation Input Control (Cut-Over Date), do the following:
1. In the Builder, in the Cross Calculations dimension, select the current calculation and then select (Edit
Calculation).
2. From the Properties cut over date section, select Specific Date and then select Click to Select
Values Create a New Calculation Input Control .
3. Provide a name for the input control, for example, Cut-Over Date.
4. In the Formula section, select Click to Add Formula.
5. Provide a name for the formula.
6. In the Edit Formula area, type the formula.
Remember
You can press Ctrl + Space in the formula area to display a list of suggestions, or type [ for a list of
valid measures and dimensions.
The results of the calculations are displayed in your table and are updated when you change your cut-over date.
At story load time, the “Cut-Over Date” Calculation Input Control formula will be processed as follows:
1. The system first processes the Calculation Input Control called “Current”.
2. The system then searches in the “ID” column of the Version dimension for the value returned from the
“Current” calculation.
3. When the value is found, the system then uses the value from that row in the column “CutOverDate”.
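The two lookups above can be sketched as follows. The sample Version rows and the `find` helper are hypothetical stand-ins for the model's Version dimension and the FIND() function:

```python
# Hypothetical Version dimension: ID, a current-version flag, and a cut-over date.
VERSION_DIMENSION = [
    {"ID": "Actual", "CURRENT": "",  "CutOverDate": ""},
    {"ID": "FC2020", "CURRENT": "X", "CutOverDate": "202007"},
    {"ID": "Budget", "CURRENT": "",  "CutOverDate": ""},
]

def find(value_to_match, property_to_match, property_to_map):
    # Return property_to_map of the first member whose
    # property_to_match attribute equals value_to_match.
    for row in VERSION_DIMENSION:
        if row[property_to_match] == value_to_match:
            return row[property_to_map]
    return None

# Steps 1-3: resolve the current version, then read its cut-over date.
current = find("X", "CURRENT", "ID")
cut_over = find(current, "ID", "CutOverDate")
```

Here the "Current" lookup resolves to the FC2020 version, and the second lookup returns that row's CutOverDate value.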
This calculation returns a string that is then translated into a time period. The system understands and
interprets time in the following formats:
These are the values that you can use in attributes in the Version dimension when you want the system to
interpret these strings as time values.
Note
A returned string that is not in YYYYMMDD format will first be converted to the first date based on its
own date granularity. For example, 2023 will be converted to 20230101 while 202302 will be converted to
20230201.
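This conversion amounts to padding the string out to YYYYMMDD with the first month and day at its granularity; a minimal sketch (the helper name is hypothetical):

```python
def to_first_date(value):
    # Pad a YYYY or YYYYMM string to the first date at its granularity.
    s = str(value)
    if len(s) == 4:   # YYYY -> YYYY0101
        return s + "0101"
    if len(s) == 6:   # YYYYMM -> YYYYMM01
        return s + "01"
    return s          # already YYYYMMDD
```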
Since the date dimension has multiple hierarchies, the system also sets the corresponding date hierarchy as it
is being used in the Calculation Input Control.
You can use any value for the “Current” attribute, but the “CutOverDate” attribute should be a valid time
period.
Actual Actuals
Estimated Forecast
FC2020 Forecast
Forecast Forecast
NewFC Forecast
The following example shows the formulas that use the Version dimension attributes.
Example
Current Formula Name Current Formula Cut-Over Date Formula
The following table shows the results of applying the Current and Cut-Over Date Calculation Input Controls.
Notice how the cut-over date (black bar) changes from LE1-20172 to LE2-20173.
You can add calculated rows and columns to your table, or even create calculations and formulas within the
table.
There may be situations where you prefer to add custom calculations (client calculations) directly to the table
rows or columns and you can do that for less complex calculations.
However, if you want to create calculated or restricted measure calculations or other complex calculations, you
will need to access the Builder panel and create those calculations using the Calculation Editor.
For more information, see Create Custom Calculations for Your Tables [page 1439].
Remember
The types of calculations that you can create are limited by where you are trying to create them, whether
that is by using the Calculation Editor or by creating a calculation directly within the table. Within the
table, you may see different calculation types available for rows or columns, or outer or inner dimension
members.
Note
When you copy or duplicate a table that has custom calculations (row or column calculations), the new
table uses the same calculations as the original table, not copies of those calculations.
If you want to use different custom calculations in your new table, you need to remove the current
calculations from that table and then create new ones.
While you can still create calculated members based on the Account or Cross Calculations dimensions using
the Calculation Editor, creating calculated rows and columns can provide more flexibility. For example, you
don't really need to create a restricted measure to calculate the variance between an Actual and a Budget
version.
• Repeating: uses one dimension member and the calculation appears wherever that dimension member
appears.
If you have more than one dimension defined in your column or row, make sure the cell references refer to
the innermost dimension.
1. To add a blank row or column, right-click the header of a dimension member and select Add column
(or Add row) and then select either Repeating or Single.
The calculated row or column is created as a new member of the same dimension and hierarchy level as
the member that you selected.
2. (Optional) Type a name for the calculation in the header cell.
3. To enter the formula, type the equals sign (=) and then begin typing the formula.
• Add cell references by typing the coordinates of the cell (for example, B2), or by selecting the cell to
reference.
The cell must belong to the same table as the calculation.
• References to cells in a calculated row or column are relative.
For example, if you type =C2-B2 in a column header, the fifth row of this column will calculate C5-B5,
the sixth row will calculate C6-B6, and so on.
• Create absolute or fixed references to cells.
For example, if you want to multiply each row by the value in C2, you type =$C$2*B2.
• The following are some of the most common formulas that you can create.
Sum =B2+C2
Subtract =B2-C2
Multiply =B2*C2
Divide =B2/C2
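The relative and absolute reference behavior described above can be sketched as a helper that shifts unanchored row numbers while leaving $-anchored rows fixed (a simplification for illustration; a full implementation would also adjust column letters):

```python
import re

def shift_formula(formula, row_offset):
    # B2 becomes B5 when row_offset is 3; $C$2 stays fixed.
    def shift(match):
        dollar_col, col, dollar_row, row = match.groups()
        if dollar_row:  # absolute row reference: keep as-is
            return match.group(0)
        return f"{dollar_col}{col}{int(row) + row_offset}"
    return re.sub(r"(\$?)([A-Z]+)(\$?)(\d+)", shift, formula)
```

For example, shift_formula("C2-B2", 3) gives "C5-B5", matching the fifth row of a column whose header holds =C2-B2.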
4. To edit, format, or delete a calculated row or column, open the Builder panel and expand the list of
calculations for the dimension that the row or column was added to.
The calculated rows or columns are referred to as Story Calculations in the builder panel. When you select
a story calculation and choose (Edit calculation), the calculated column or row is highlighted in the
table and the formula is displayed in the formula bar.
Restriction
The story calculations won't be as complex as calculations created using the calculation editor or backend
calculations (calculations from the data source that are displayed as Totals in the table when you set the
Show Totals option for a dimension in the builder panel).
For example, the following simple table includes numeric and percentage (ratio) data. In this instance, the
backend calculation isn't too complex (Profit/Revenue), but the sum calculation created in the table shows
a sum of the percentage values, not the expected result.
(Example table omitted: a calculated row containing =B2+B3+B4 sums the percentage values in column B, producing an incorrect total for the ratio.)
To calculate the difference between a Budget and Actuals version, you can add a calculated column based on
the Version dimension:
1. Right-click the column header of the Budget version and select Add column.
2. To name the new column, select the header, type Variance, and press Enter .
3. Select the Variance cell again and type the formula. Use the cell coordinates of the Budget and Actuals
column headers, or select each column to create references. For example, =C2-B2. Press Enter .
The new column shows the difference between the Budget and Actual version for each account. You can also
perform this calculation using restricted measures based on the Cross Calculations dimension.
Instead of adding a row or column to the table first and then creating a running calculation formula for the
measure, you can select a member and pick a formula from the list. These calculations can be added even if
you don't have permission to edit the story.
You can also select multiple members and create a calculation from them.
Note
The numbers that you see in the table are the numbers that are used in the calculations. You can't create
complex calculations with these formulas.
Restriction
Running functions (for example, Accumulative Sum, Moving Minimum Value, Moving Average, and
so on) shouldn't be used for hierarchical data.
Restriction
Running functions shouldn't be used with SAP BW linked nodes (hierarchy nodes) because the
results may not be as expected.
Option Function
Accumulative Sum of all Detailed Values that are not Zero, Null, or Error runningsum (<value>, true)
Accumulative Count of all Detailed Values that are not Zero, Null, or Error runningcount (<value>, true)
Moving Average that is not Zero, Null, or Error runningaverage (<value>, true)
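Assuming that a skipped (zero or null) cell simply carries the previous result forward, the sum and average options behave roughly like this sketch (the function names echo, but do not reproduce, the actual runningsum and runningaverage implementations, and error cells are not modeled):

```python
def running_sum(values):
    # Accumulate left to right, ignoring zero and null cells.
    out, total = [], 0
    for v in values:
        if v not in (None, 0):
            total += v
        out.append(total)
    return out

def running_average(values):
    # Average of the non-zero, non-null cells seen so far.
    out, total, count = [], 0, 0
    for v in values:
        if v not in (None, 0):
            total += v
            count += 1
        out.append(total / count if count else None)
    return out
```

For example, running_sum([10, 0, None, 5]) carries the total past the zero and null cells and yields [10, 10, 10, 15].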
2. To hide the reference (original) column or row, select the member and then select Hide column (or Hide
row).
The calculation or comment column or row is still visible. To show the reference column, remove the
Hidden filter from the table.
A new row or column containing calculations is added to the table, and the formula is added to the builder
panel as Story Calculations for the dimension.
When you select the header for the calculation row or column, the calculation formula is displayed in the
formula bar. The calculation formula will also be displayed when you select (Edit calculation) for the story
calculation in the builder panel.
Null values in most of the calculations in table rows and columns will be handled in the same way that Microsoft
Excel handles null values.
When you are creating calculations in your table rows or columns, null values will be ignored, unless they are
explicitly included in a calculation. For example, =if(A10 = null,1,2)
Most of the calculations will be handled in the same way that Microsoft Excel handles these calculations.
However, finding the average of two null values will be handled differently. Microsoft Excel shows a divide by
zero error, but SAP Analytics Cloud shows a null value instead.
The following table shows the results of basic mathematical operations as well as the result of calculating an
average.
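A sketch of that averaging rule, assuming nulls are simply dropped before dividing (the helper name is hypothetical):

```python
def table_average(values):
    # Nulls are ignored; averaging only nulls yields null
    # instead of a divide-by-zero error.
    kept = [v for v in values if v is not None]
    return sum(kept) / len(kept) if kept else None
```

So table_average([2, None, 4]) ignores the null and returns 3, and table_average([None, None]) returns null rather than raising a divide-by-zero error.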
Using SAP Analytics Cloud and SAP HANA Spatial technology, you can overlay business data on geo maps with
detailed geographic information such as topography, satellite imagery, and streets and highways.
Note
Before you perform geospatial analysis in stories, you must first import coordinate data or area data, and
enrich it in the Modeler.
Geo maps can have multiple layers of different types of data that allow you to visualize measures and
dimensions from a model, or mark important locations, areas, and routes on the map.
Using a combination of these layers, as well as tools for filtering spatial data, you can perform a variety of
geographical analyses. For example:
• Giving a quick visual overview of the performance of sales regions against KPIs that you set.
• Finding retail store locations that have a type of public attraction or service nearby, such as pharmacies
that are close to hospitals.
• Overlaying a map with custom defined regions, such as sales territories or electoral districts.
• Display data distributions as either choropleth or heat maps.
Note
You can only consume geo-enriched dimensions on the new model type when using the Optimized Story
Experience.
Geographical hierarchies
You can customize the navigation path by skipping levels and setting your own label for each layer in a geo map.
Geographical hierarchies defined by general administration boundaries are not universal. The table below lists
the default labels for hierarchies and provides approximate equivalents.
Country Country
Note
Models with multiple account hierarchies are not supported; only the default account hierarchy will be
consumed.
When working with data from a live HANA system, you can load custom shapes and define custom
geographical hierarchies. For example, you can define custom shapes to represent particular sales regions,
or sections of a specific facility. You can then use the choropleth/drill layer to navigate through your custom
hierarchy. For more information on creating custom shapes and hierarchies, see https://help.sap.com/doc/
ae871ef7a4cc469494ef6ef09d4d9df1/release/en-US/HANA%20Live%20Custom%20Regions.pdf.
Points of Interest
You can use this data in a geo map by adding it as a point of interest layer in the map, or by creating a Map Filter
to filter locations within a certain distance of one of the points of interest.
When specifying measures in the Bubble, Choropleth, Heat, and Flow map layers in the Classic Design
Experience, the following options are available:
When working in the Optimized Design Experience, in addition to all the options described above, you can set
up account/measure pairings for your location data. Currently this feature is only available for Bubble Size and
Bubble Color in the Bubble layer, and Heatmap Color in the Heatmap layer.
As a prerequisite, for both private and public generic and organization dimensions, you must first
enrich the dimensions using coordinate data in the Modeler, following the steps described in Creating a
Geo-Enriched Dimension Property Structure for a Model With Coordinate Data [page 1482].
• Thresholds defined in a model for a given measure. These thresholds are specified in the Modeler.
• Thresholds defined for a story.
Note
In a geo map, thresholds defined in a story will override those defined for a model.
Related Information
Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479]
Creating Geo Spatial Models from HANA Calculation Views [page 1483]
Creating Points of Interest [page 1485]
Creating a Geo Map [page 1488]
Adding the Bubble Layer [page 1491]
Adding the Point of Interest Layer [page 1495]
Adding the Choropleth Layer [page 1497]
Styling Options for Geo Maps [page 1503]
Custom Hierarchies on Choropleth Layer (Specifics for SAP BW) [page 1570]
Optimized Story Experience [page 1133]
Before you perform geospatial analysis in stories, you must first import coordinate data or area data, and
enrich it in the Modeler. This process creates a new column in the data view with an enriched format of latitude
and longitude coordinates or by area using country, region, and subregion data.
Prerequisites
You must use acquired data from supported sources such as SAP BW or a file (.xlsx or CSV). The data must
contain a location ID column with unique data, as well as either latitude and longitude columns, or country
(required if your regions and subregions are in different countries), region, and subregion columns.
Context
These steps describe how to enrich coordinate or area data while creating a model from a file. You can also
enrich coordinate or area data while uploading data to an existing model. For more information, see Combine
Data with Your Acquired Data [page 722].
Procedure
3. In the Import Model From File dialog, choose whether you want to import data from a file on your local
system, or from a file server.
Tip
If you import a file from a file server, you can also schedule imports from that file. For more information,
see Update and Schedule Models [page 772].
4. If you're importing from a file server, choose a file server connection, or select Create New Connection.
If you create a new file server connection, specify the path to the folder where the data files are located. For
example: C:\folder1\folder2 or \\servername\volume\path or /mnt/abc.
5. Choose the file you want to import.
6. If you are importing from a local Excel workbook containing multiple sheets, select the Sheet you want to
import.
If you are importing from an Excel file on a file server, the first sheet is automatically imported.
7. If you're importing a .csv file, select which delimiter is used in the file.
8. Select Import.
For small data files, the data is imported and displayed in the Data Integration view, which shows the
imported data columns that you'll define as dimensions, measures, and attributes.
9. Larger data files may take some time to upload. You can work on other tasks while the dataset is being
uploaded in the background. When the data is finished uploading, select it from the Draft Data list to open it
in the Data Integration view.
10. Ensure that at least one numeric column has Measure as its dimension type.
11. Select (Geo Enrichment) in the toolbar, and then choose either of the following options:
• By Coordinates if you want to use latitude and longitude data to create the location dimension.
• By Area Name if you want to create the location dimension based on country, region, and subregion
data. The country data can be imported as ISO3 and ISO2 codes, or the country names in English.
Note
To download a list of supported area names, use the link provided in the Geo Enrichment By Area
Name dialog.
Note
You can create a location for the Dimension ID column of an organization or generic dimension.
You cannot create a location dimension associated with Account, Version or Date dimensions.
Note
When specifying coordinates, SAP Analytics Cloud only supports the decimal degrees format
(with the degree symbol omitted). For example: A latitude of 38.8897 and a longitude of
-77.0089.
• By Area Name
Choose how you want to specify the country data:
• Select Column: if you want to select the column containing the country data.
• Select Country: if you want to select a specific country from a dropdown list.
Note
If you select a country from the list, after the data is geo-enriched, it will be limited to areas
within the selected country.
Results
When working with stories, the location dimension will be available to add to geo maps.
Note
In your geo maps, use the choropleth/drill layer to navigate through location dimensions enriched By Area
Name.
Continue with the data preparation task before completing your model: Combine Data with Your Acquired Data
[page 722].
Related Information
You can enrich a private or public dimension of type Generic or Organization with geo data in the Modeler. To
enrich a dimension with numeric coordinates, you must create dimension properties for geo longitude and geo
latitude with the corresponding types.
Prerequisites
You have defined a model that you would like to enrich with geographical data.
Context
These steps describe how to prepare the property structure of an existing or new model for the Geo
enrichment before the data assignment.
Note
• You can enrich a dimension only once with a coordinate pair consisting of longitude and latitude.
• When specifying coordinates, SAP Analytics Cloud only supports the decimal degrees format (with the
degree symbol omitted). For example: A latitude of 38.8897 and a longitude of -77.0089. The supported
range for latitude is from -85.05113 to 85.05113 and for longitude from -180 to 180.
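A client-side check against these documented ranges could look like this sketch (the helper name is hypothetical):

```python
def is_valid_coordinate(latitude, longitude):
    # Decimal degrees only, degree symbol omitted.
    return -85.05113 <= latitude <= 85.05113 and -180 <= longitude <= 180
```

For example, the latitude/longitude pair 38.8897, -77.0089 from the example above passes the check.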
1. Either open an existing dimension of type Generic or Organization in your model, or create a new dimension
of one of these types.
2. In the Dimension Details panel, select Create Property.
3. In the Create Property dialog enter an ID, an optional description and open the dropdown for property Type.
4. Select Latitude for Geo Coordinates and choose Create.
5. Repeat steps 3 to 4 for a Longitude property.
6. Save the model. Upon saving the model, the structure of the dimension that has one pair of coordinates
is enriched. The geo enriched dimensions are displayed with the symbol in the modeler. You can now
assign the data to the dimension you have changed or created.
7. Assign geographical data via the Data Management option Import Data from File. For more details, see
Import Data Source Reference [page 811].
Results
You can now visualize the geo-enriched dimensions using geo maps in stories.
Related Information
You can import a location-enabled model from a live HANA calculation view and use this data source to add
multiple location dimensions to a new model.
Prerequisites
To create a Geo model in SAP Analytics Cloud from calculation views on a live HANA system:
Procedure
2. Select Get data from a data source Live Data connection to create a new model.
Note
From the acquire data panel, select the filter icon to narrow down the number of data sources in the
list. You can filter by data source type or by category.
Location Identifier: Select the key column that contains the identifier for the locations.
Location Dimension Name: This name is automatically generated from the Location Data name.
Identifier for Mapping: Select the name of the key column of the location data view.
Note
The column names in the calculation view model and the location data view have to be unique.
Related Information
Points of interest are sets of geographical data that can be added to a geo map and analyzed with reference to
business data from a model.
Point of interest data can be added from an Esri shapefile, a CSV or Excel file, or from an SAP HANA model with
a geographical dimension.
Note
If you have a CSV or Excel file with clearly distinguishable names for latitude and longitude columns, you
can simply drag and drop the file directly into your geo map.
Note
SAP Analytics Cloud supports a finite number of spatial reference IDs. To avoid incorrectly displayed data
or error messages, you should know the SRID used in your shapefile. Publicly available tools can help you
recognize the shape identifier and thus determine the shapefile format.
You can view and maintain point of interest data in the Files list. You can filter on the Points of Interest file type
to view the files. You can add or search points of interest, or select points of interest to delete.
You can set access permissions for point of interest data using the (Share) icon in the toolbar.
Tasks
Related Information
Follow these steps to create point of interest data by uploading an Esri shapefile.
Procedure
You must select the SHP and DBF files from the shapefile. Both files must have the same name, for
example, sales_regions.shp and sales_regions.dbf.
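The naming requirement can be verified before upload with a small check. This is a hypothetical helper, not part of SAP Analytics Cloud; the file names are only examples:

```python
from pathlib import Path

def shapefile_pair_ok(shp, dbf):
    """Check that the two files carry the expected extensions and an identical
    base name, as required when uploading an Esri shapefile."""
    shp_path, dbf_path = Path(shp), Path(dbf)
    return (shp_path.suffix.lower() == ".shp"
            and dbf_path.suffix.lower() == ".dbf"
            and shp_path.stem == dbf_path.stem)
```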
4. In the Point of Interest Name field, type a name to identify the point of interest data.
5. Select an option from Spatial Reference ID corresponding to the encoding of the imported shapefile.
For more information on spatial reference IDs, see the SAP HANA Spatial Reference available on the SAP
Help Portal at help.sap.com.
The following options are available:
• 3857:WGS 84 / Pseudo-Mercator
• 4326:WGS 84
• 1000004326:WGS 84 (planar)
6. From the Tooltip Text list, select a field to provide the tooltip text for each point of interest.
7. Select Create.
Follow these steps to create point of interest data from an SAP HANA analytic or planning model.
Prerequisites
You must have access to an SAP HANA analytic or planning model with a dimension that has been
geographically enriched.
Follow these steps to create point of interest data by uploading either a CSV or an Excel file.
Prerequisites
To create point of interest data from an imported CSV or Excel file, the Execute permission for Personal Data
Acquisition must be selected in the role that is assigned to you. For more information see Permissions [page
2844] and Standard Application Roles [page 2838].
Context
Note
If your file contains clearly distinguishable names for latitude and longitude columns, you can simply drag
and drop the file directly into the geo map. For more information see Creating a Geo Map [page 1488].
Use the procedure below if you want to manually determine which columns serve as longitude and latitude
values for your point of interest data.
Procedure
Note
6. From the list under Longitude, select the column to associate with the longitude for the points of interest.
7. From the Tooltip Text list, select a field to provide the tooltip text for each point of interest.
8. Select Create.
Results
The point of interest data is created and saved in the modeler. The 3857:WGS 84 / Pseudo-Mercator spatial
reference ID is used to encode the point of interest data. For more information on spatial reference IDs, see the
SAP HANA Spatial Reference available on the SAP Help Portal at help.sap.com.
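For background, the relationship between the two spatial reference systems mentioned here, WGS 84 (SRID 4326) and Pseudo-Mercator (SRID 3857), follows the standard Web Mercator projection formulas. The sketch below is general GIS math for illustration, not an SAP Analytics Cloud API:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS 84 semi-major axis in meters

def wgs84_to_pseudo_mercator(lon_deg, lat_deg):
    """Project decimal-degree WGS 84 (SRID 4326) coordinates to
    Pseudo-Mercator (SRID 3857) meters."""
    x = math.radians(lon_deg) * EARTH_RADIUS
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * EARTH_RADIUS
    return x, y
```

Note that the latitude limit of about 85.05113 degrees cited earlier is exactly where this projection's y value reaches the edge of the square Web Mercator extent (roughly 20,037,508 m).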
Like other chart types, you can add a geo map to a story canvas.
Prerequisites
You have geo-enriched data in the Modeler. For more information see Creating a Model with Coordinate or Area Data
for Geospatial Analysis [page 1479].
Procedure
1. From the Insert menu above the story canvas select Geo Map.
A geo map widget is added to the canvas and the (Builder) opens under Designer on the right.
2. From the Basemap list, select a visual style for the map hosted by Esri ArcGIS.
For example, the Streets basemap can help you clearly visualize city neighborhoods, while the Light Gray
basemap provides a simple, uncluttered look.
Note
Once you add a model to your geo map, it serves as the default model for any subsequent layers you
create. You can always change the data source for any given Bubble, Choropleth, Heat Map, or Flow
layer by selecting the corresponding (edit) icon under Data Source. If you choose the No Model
option as your data source for a given layer, no data points will display for that layer. You cannot add a
model based on blended data to a geo map.
4. Select the (Hide Layer) icon next to a layer to toggle the layer visibility in the geo map.
5. The geo map legend provides information for each layer, excluding the Feature layer. To specify which
dimensions and measures are available for a layer in the legend, select Maintain legend. You can
set each entry to either Show or Hide. To remove a layer from both the legend and map, select
Remove Layer.
In the map legend you can use the icon next to each layer to toggle the layer visibility. This is
particularly useful for toggling between the displayed data during presentations.
Note
There are two style themes for map legends, map controls, and tooltip charts. To change the style
themes, under Designer select (Styling). Under Map Controls you can select either the Light or Dark
theme. You can also adjust the Opacity control slider as required.
• Select (polygon filter tool) on the left menu of the geo map, and choose either the circle or square
filter tool. Then draw a shape on the map around the data points that interest you.
Select and drag points to change the filter shape. A toolbar of filter options will display near the filter
shape. Choose to include all the data points within the filter shape, or choose to exclude all data
points within the filter shape. Select (cancel) to remove the filter. Use the Opacity slider in (Builder)
to reduce the opacity of the shape.
• Map filter: Find data points within or beyond a certain distance of a point of interest or another location
dimension. Select the + icon for Add Filter in the geo map builder. In the Create Map Filter dialog, select
Distance from Filter type. Select a geographical hierarchy for Show and either Within or Not Within from the Operator list. Then specify a point of interest or a location dimension as the Reference Location.
Note
To create a slider control filter based on distance from your Reference Location, select Add as an
input control slider to page, and then select OK. An input control slider appears next to the geo map.
Select the control to move the slider between the minimum and maximum distance from your reference
location.
• Map filter: Display data points that do or do not intersect with a point of interest or another location
dimension. Select the + icon for Add Filter in the geo map builder. In the Create Map Filter dialog,
select Intersection from Filter type. Select location dimensions for Show and either Intersecting or Not
Intersecting from the Operator list. Finally, specify a point of interest or a location dimension as the
Reference Location.
Note
You can use location references from live HANA systems to create map filters; however, both
references need to be on the same system.
• Story filters: Any filters that you apply to the page or story will also filter data in the geo map.
• Dimension filter: You may want to apply non-geographical filters to the layers of a geo map, for
example, to view data for a specific product or time period.
In the geo map builder, select next to the layer that you want to filter, and then select Add Filters to
choose the dimension that you want to filter. For more information on working with filters, see Applying
a Chart Filter (Classic Story Experience) [page 1270].
If the layer is based on a model that contains a version dimension, this dimension is filtered to the
actuals data by default.
7. Use the (Zoom In) or (Zoom Out) controls at the left side of the map to adjust the pan and zoom.
You can also use the (Zoom to Data) or (Return to Last Saved) controls to reset the display state.
8. To manage thresholds in the geo map, select the icon to the right of the measure in the Builder.
Select Show Threshold > None to remove all thresholds. Go to Show Threshold > Model or
Show Threshold > Story to specify which type of threshold you want to display. Select Edit Ranges
or View all Thresholds to modify the thresholds.
9. Under Properties specify how you want the geo map to display when opened. Pick one of the following
options:
• Open in last saved view : displays the zoom level and data points shown when you saved the geo map.
This is the default setting.
• Optimized to fit data points: displays a zoom level and display optimization that fits all the available
data points.
Related Information
Prerequisites
To add data from a model to a bubble layer, the model must contain at least one location dimension. For
more information see Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479].
For both private and public generic as well as organization dimensions, you can enrich a dimension using
coordinate data within the Modeler, see Creating a Geo-Enriched Dimension Property Structure for a Model
With Coordinate Data [page 1482].
Procedure
1. To pick the model that the layer will be based on, under Data Source, select (edit).
2. Select Bubble Layer from the Layer Type list.
3. From the Location Dimension list, select the dimension that you want to visualize.
a. To change the marker icon, select the arrow to the right of Style and choose a new icon.
Note
You can also use a custom marker; choose Add Custom, then select Enter SVG Path to directly enter the
icon's SVG path, or select Upload SVG File to load the file containing the icon. If you are using a
custom SVG, only the <path> element's path data is read. All other attributes, such as transform, will be
ignored; if your icon relies on those attributes, it may not display as expected.
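The behavior described above, where only the path data of a custom SVG is read and other attributes such as transform are ignored, can be approximated with a small parser sketch. This is an illustration only, not SAP Analytics Cloud's actual import code:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def extract_path_data(svg_text):
    """Collect the d attribute of every <path> element, dropping transform and
    all other attributes, mimicking an importer that reads path data only."""
    root = ET.fromstring(svg_text)
    paths = []
    for elem in root.iter():
        # Match <path> whether or not the document declares the SVG namespace.
        if elem.tag in (SVG_NS + "path", "path"):
            d = elem.get("d")
            if d:
                paths.append(d)
    return paths
```

An icon whose shape depends on a transform attribute would lose that transform under such an importer, which matches the caveat above.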
b. You have the option to apply more changes to the location dimension marker:
Note
The settings for Palette, Opacity, and Size only correspond to the location dimension. Once the
marker size or color is associated with a measure, the location dimension settings are ignored.
4. Use the Add Measure/Dimension list under Bubble Color to associate a bubble color with a measure or a
dimension.
Note
If you are using a New Model with Measures as your data source, select + Add Dimension/Account to
specify an Account and Measure pairing for the Bubble Color. To define more Bubble Color settings,
expand the associated Account heading.
You can choose the number of Ranges for the palette. This sets the number of KPI values and allows you
to specify values to set the KPI ranges for each color in the palette. If necessary, adjust the Opacity of the
bubbles to ensure that the basemap or other layers are visible.
Note
Use the switch beside the Ranges list to show KPI values as percentages or as absolute values.
If you choose to associate Bubble Color with a dimension, the dimension's top five members are displayed
within pie charts in the resulting bubbles. If no measure is specified for Bubble Size, the pie sections will be
evenly proportional. If a measure is specified, the proportion will be aggregated according to the measure
values. To highlight a bubble, select it once. To highlight a section of the pie within the bubble, select the
bubble and then select the section. If you associate Bubble Color with a hierarchical dimension, you can
drill up or down the hierarchy after selecting a bubble or section.
Note
You can only sync colors by dimension between a geo map and a chart.
Note
You can associate Bubble Color with only one dimension or measure. When working with New Models
with Measures, you can associate Bubble Color with only one account and measure pairing.
5. Use the Add Measure list under Bubble Size to associate the bubble size with a measure.
Note
If you are using a New Model with Measures as your data source, select + Add Account and + Add
Measure to specify an account and measure pairing for the Bubble Size. To define more Bubble Size
settings, expand the associated Account heading.
Use the Size slider to adjust the size of bubbles. Toggle the Adjust Range Scale switch between a
scale range based on percentages and a range based on numeric values. You can customize the range by
modifying the min and max values or using the provided range slider.
6. To control the display of data point labels in geo maps, select the geo map and go to More Actions > Show/Hide Labels.
By default, labels will display in a Bubble layer, and labels will not overlap. However, to remove the
automatic control on label overlap, choose Designer and select (Styling). Turn Off the Avoid Data Label
Overlap under Labels.
To visualize multiple data points associated with the same location, hover over the location to display all
the values. The bubble size and color used for overlapping points is based on the average value of all the
points at that location.
Note
• Clustering is strictly enforced when a map contains more than 20,000 data points.
• In the optimized design experience mode, clustering is automatically disabled when the displayed
points are below the value set for Maximum Display Points.
• The clustering of points on a map into groups is based on the k-means algorithm.
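The k-means algorithm named in the note above can be sketched minimally on (x, y) point tuples. SAP Analytics Cloud's actual clustering implementation is not public; this only illustrates the algorithm the documentation names:

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Minimal k-means sketch: pick k initial centers, then alternately assign
    each point to its nearest center and recompute centers as group means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign p to the center with the smallest squared distance.
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Recompute each center as the mean of its group (keep old center if empty).
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```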
Tip
8. To manage thresholds in the geo map, in (Builder) select the icon to the right of the measure.
Go to Show Threshold None to remove all thresholds. Go to Show Threshold Model or Show
Threshold Story to specify which type of threshold you want to display. Select Edit Ranges or View all
Thresholds to modify the thresholds. If you choose not to display thresholds in geo map, the Bubble Color
palette specified for the layer will override the one specified in the threshold. Any palette changes will be
reflected in the map legend.
9. To customize the data that displays for a given data point in the bubble layer, select the icon to the right of
Layout. You can also use the (More Actions) icon to the right of the geo map to access Add Tooltip for a specific bubble layer.
• Select Tooltip Information to specify information displayed as you hover over a given data point in
the geo map. Use the provided Tooltip Information field to choose from a selection of measures,
dimensions, and dimension attributes, or to create a calculated measure or measure input controls.
Note
If you are using a New Model with Measures as your data source, you can choose from a selection
of accounts, dimensions, and dimension attributes. Account information will only display in tooltips
if a single measure is specified for Bubble Color and Bubble Size.
• Select Tooltip Chart to populate a corresponding table of information for a point in the geo map. Make
your selections from the provided list of Measures and Dimensions, and click OK.
Related Information
Prerequisites
To add point of interest data to SAP Analytics Cloud, see Creating Points of Interest [page 1485].
Context
Using your point of interest data, mark locations or areas on the map, such as sales regions for your
organization, or the location of special events that are important to your analysis. A model is not required
to add a point of interest or feature layer to a geo map.
Note
If your point of interest data is available in a CSV or an Excel file, you can simply drag and drop the file onto
the map to populate the data points. Your data must contain columns for latitude (the column heading string
should contain latitude or lat) and longitude (the column heading string should contain longitude or long).
Tooltip columns are supported but optional (the column heading string should contain tooltip, name, or id).
The column heading strings are not case sensitive.
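The heading heuristic just described can be sketched as a small classifier. This mirrors the documented rule only; it is not SAP Analytics Cloud's actual parser:

```python
def detect_geo_columns(headings):
    """Classify column headings: the first heading containing 'lat' is treated
    as latitude, the first containing 'long' as longitude, and any containing
    'tooltip', 'name', or 'id' as tooltip columns. Matching is case-insensitive."""
    result = {"latitude": None, "longitude": None, "tooltips": []}
    for h in headings:
        low = h.lower()
        if result["latitude"] is None and "lat" in low:
            result["latitude"] = h
        elif result["longitude"] is None and "long" in low:
            result["longitude"] = h
        elif any(key in low for key in ("tooltip", "name", "id")):
            result["tooltips"].append(h)
    return result
```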
Procedure
These options aren't available for point of interest data that displays shapes or lines.
Note
To change the current marker icon, select the arrow next to Type. Select More to view all the available
options. You can also create a custom marker; choose Add (the plus sign) under Custom Markers,
select Enter SVG Path to directly enter the icon's SVG Path, or select Upload SVG File to load the file
containing the icon.
Related Information
Prerequisites
To add data from a model to a heat map layer, the model must contain at least one location dimension. For
more information see Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479].
For both private and public generic as well as organization dimensions, you can enrich a dimension using
coordinate data within the Modeler, see Creating a Geo-Enriched Dimension Property Structure for a Model
With Coordinate Data [page 1482].
Procedure
1. To pick the model that the layer will be based on, under Data Source select (edit).
If you have associated a model with a previously added layer, this data source will appear under Data Source by default.
If you are using a New Model with Measures as your data source, select + Add Dimension/Account to
specify an Account and Measure pairing for the Heatmap Color. To define more settings, expand the
associated Account heading.
a. Select a color from the Palette list.
b. If necessary, adjust the Blur Radius value.
This setting determines the size of each point and how it overlaps with other points. A higher
percentage value results in a larger blur radius.
c. If necessary, adjust the Opacity of the layer.
d. Choose the number of Ranges for the palette.
This sets the number of KPI values. Specify values to set the KPI ranges for each color in the palette.
Use the switch beside the Ranges list to display KPI values as percentages or as absolute values.
5. Select OK to save all layer settings.
Related Information
Visualize your dimensions and measures by displaying shaded geographical shapes or bubbles in a choropleth
layer.
Prerequisites
To add data from a model to a choropleth layer, the model must contain at least one location dimension. For
more information, see Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479].
Note
If your SAP Analytics Cloud system is hosted on a data center located within China, or you are accessing a
system assigned to an SAP India customer, this feature is not available.
Note
Unlike SAP HANA models, SAP BW models do not explicitly contain a location dimension. The dimension needs to
be configured in SAP BW as geo-relevant. To use the choropleth layer in SAP Analytics Cloud, the attributes 0LONGITUDE
and 0LATITUDE need to be maintained. For more information, see Editing the Settings for BI Clients.
For SAP BW/4HANA 2.0, the SAP HANA geo table is filled automatically. Fill the navigational attribute
0GEOFEATID via transformation.
Note
You need to have the SAP Spatial DU available for Live SAP HANA models.
For more detailed information on prerequisites to display the choropleth layer in an SAP BW live
connection, see Using Geo-Relevant Characteristics in SAP Analytics Cloud.
Procedure
1. To pick the model that the layer will be based on, under Data Source, select (edit).
If you have associated a model with a previously added layer, this data source will appear under Data Source by default.
Note
If you experience slow rendering performance due to a large number of shapes, you can switch to
bubble style for minimal rendering.
• Bubble – using the choropleth layer hierarchy, bubbles are used to display your data.
Note
The point where the bubble is rendered is the centroid of the underlying shape and not a precise
location based on coordinates.
4. From the + Add Location Dimension list under Location Dimension, select the dimension that you want to
visualize.
5. Under Choropleth Color, select a measure from the + Add Measure list to determine the values to color on
your map.
a. Select a color from the Palette list.
Use the swap icon to reverse the palette order.
b. If necessary, adjust the Opacity of the layer.
c. Choose the number of Ranges for the palette.
d. Specify values to set the KPI ranges for each color in the palette.
When displaying bubbles on a choropleth layer, if you choose to associate Choropleth Color with a
dimension, the dimension's top five members are displayed in pie charts within the resulting bubbles.
If no measure is specified for Bubble Size, the pie sections will be evenly proportional. If a measure is
specified, the proportion will be aggregated according to the measure values.
Note
You can only sync colors by dimension between a geo map and a chart.
Note
You can associate Choropleth Color with only one dimension or measure when displaying bubbles in a
layer.
Choose a shape in the map and select the (Filter) icon to remove data points outside the selected area.
Choose to exclude all data points within the filter. Select the (cancel) icon to remove the filter. You can
also remove parts of the drill path by removing them from the Location Filter token. You can use the Drill
Filter control at the top of the geo widget to navigate back on the drill path or to remove the filter.
Note
To drill a layer above or below the current geographic shape, select either the (Drill Up) or the (Drill
Down) icon. Drilling on a region will filter out any data from other regions. You can also remove parts of
the drill path by removing them from the Location Filter token. You can also drill down to the individual
data points if the model was originally enriched by coordinates.
8. To control the display of data point labels in the geo maps, select the geo map and go to More Actions > Show/Hide Labels.
By default, labels will display inside or underneath the bubble, and labels will not overlap. However, to remove
the automatic control on label overlap, choose Designer and select (Styling). Turn Off the Avoid Data
Label Overlap under Labels.
a. To display values for measures, you can go to More Actions > Show/Hide Labels, select
the corresponding layer, and under Measures select the measure(s) for which you want to display
values.
9. To filter from the designer panel, select a value from the +Add filter list and choose the members you want
to filter. Click OK to activate the filter. You can then see your filtered members as selected in the filter box
and the map displays the regions the filtered members are contained in. Select the (cancel) icon to
remove the filter.
10. Draw a polygon filter on the map for customized filtering. Choose (polygon filter tool) from the panel on
the left-hand side of the geo map.
a. Select the (Filter) icon to display the regions within the polygon shape only.
c. Click the frame of the polygon and choose (cancel) to remove the filter.
11. To customize the data that displays for a given data point in the layer, select the icon to the right of Layout.
You can also use the (More Actions) icon to the right of the geo map to access Add Tooltip for a
specific choropleth layer.
• Select Tooltip Information to specify information that will be displayed when you hover over a given
data point in the geo map. Use the provided Tooltip Information field to choose from a selection of
measures, dimensions, and dimension attributes, or to create a calculated measure or measure input
control.
Related Information
Mark locations or areas on the map using a public Esri ArcGIS web service.
Context
Procedure
Create a flow map by connecting two sets of location dimensions within your dataset.
Prerequisites
To add data from a model to a flow layer, the model must contain at least two location dimensions. For more
information see Creating a Model with Coordinate or Area Data for Geospatial Analysis [page 1479].
Procedure
1. To pick the model that the layer will be based on, under Data Source select (edit).
If you have associated a model with a previously added layer, this data source will appear under Data Source by default.
Note
A flow map layer is limited to 5000 flow arcs. The flow layer is included in the map legend. Any tooltips
for the flow layer will indicate both the origin and destination dimension for a given data point.
Prerequisites
Note
Dynamic time range filters are not supported for geo maps.
Procedure
1. To set the dynamic filter variables, select the (Edit Prompts) icon and click the corresponding data
source.
If you do not see the (Edit Prompts) icon, select the icon from the More tab in the menu and
select Edit Prompts.
Note
In View mode, the (Edit Prompts) icon appears under the Tools toolbar.
Results
Procedure
Styling Options
Widget: Select a background color for the tile. Specify border details for the tile.
Actions: Controls how to display the various widgets, including geo maps, on a canvas page.
Control Styling: Select a theme and specify an Opacity for the tile.
Font: Specify textual attributes to style, color, and size, and select fonts for all text, titles, subtitles, or a
specific layer in tooltips and map labels.
Note
If you select a specific layer under Text Selection, use the provided Label Selection to style the
dimension or data label.
Number Format: Specify number formatting for all layers or a specific layer. You can determine the Scale, Scale
Format, the number of decimal places, and what signs (+, -, and ( )) to use with numeric values in
both tooltips and labels.
Note
If you select a specific layer for Layer Selection, use the Measure/Dimension Selection to
format a specific measure or dimension.
Decimal places, Automatic: values are displayed with at least three digits. Values that are less than one
will have three decimal places; larger values will have their decimal places truncated.
Example
Labels: To enable or disable the automatic control on label overlap, use the Avoid Data Label Overlap box
accordingly.
You can use SAP Analytics Cloud to create and edit visualizations based on R scripts.
R is an integrated suite of software that includes packages for advanced visualizations and statistics.
Caution
If you're building analytic applications or optimized stories, we strongly recommend using
custom widgets to create your custom visualizations. For more information about custom widgets, see Use
Custom Widgets in Analytic Applications [page 1927].
Note
Both new model types and classic account models are supported. For more information on the new
model types and classic account models, see Get Started with the New Model Type [page 627].
Note
You can’t use a model with either currency or currency conversion as input data when creating an R
visualization.
Note
If third-party cookies are not permitted in Safari on Mac, R visualizations based on
HTML content are not supported. If third-party R packages with certain HTML features are used, not all
the contents can be exported to PDF.
Caution
In the R environment managed by SAP, the Shiny R package isn’t supported due to public network
access security considerations.
Note
You can embed R visualizations of HTML contents in the host HTML page by adding it as a trusted
Note
To use R capabilities in SAP Analytics Cloud, your system must be configured to connect to an R
runtime environment. For more information on connecting to an external R environment, see Connect
to an R Environment [page 2772]. You can also run scripts on an R server runtime environment
provided by SAP, which is available to tenants operating through the following data centers: AP1, AP10,
AP11, AP12, BR1, BR10, CA1, CA10, EU1, EU2, EU10, EU11, JP1, JP10, US1, US2, US3, and US10. For a
list of packages, see https://blogs.sap.com/2020/03/18/r-packages-for-sap-analytics-cloud/. When
editing an R script, you can find the latest supported packages by selecting Snippets > List of Installed
Packages in the Editor.
Note
Your SAP Analytics Cloud URL indicates which data center you are connected to.
Related Information
You can insert an R visualization into a story by running R scripts on an externally deployed R environment, or
on an R server runtime environment deployed by SAP Analytics Cloud, if it is available in your region.
Prerequisites
• You need to create a local HTML file in the connected R environment and call browseURL with a reference
to the local file path.
• Alternatively, you can use HTML rendering packages that leverage browseURL with the local HTML file
reference.
• Only the first level of HTML dependencies are retrieved and available to render the R visualization.
Context
• Input data based on a model, specified dimensions, dimension attributes, and filters.
Note
Both new model types and classic account models are supported. For more information on the new
model types and classic account models, see Get Started with the New Model Type [page 627].
Note
1. From the Insert menu on the story canvas, select (Add) R Visualization .
An empty R visualization is added to the canvas page.
2. To configure the input data for the R visualization:
a. In the Builder panel, from the Input Data section, select +Add Input Data.
Note
If a data source has already been added to the story, it will be used by default.
Note
Select the (Expand) icon under Designer to view a Preview for the input data table structure.
• ROWS: select +Add Dimensions to add dimensions from the list of available options into your input
data. Hover over a dimension to display the (More) icon if you want to specify display options
or attributes. Use the Manage Filters icon to specify filters for the dimension.
Note
To create a dynamic input control for a given dimension, select the Manage Filters icon and
enable the Allow viewers to modify selections option.
• COLUMNS: is used to manage the available measures. Select All Members to display the currently
available measures. Hover over a column entry to display the (More) icon if you want to specify
display options or attributes. Use the Manage Filters icon to specify filters for the measures.
• FILTERS: is used to manage the filters for the input data. Select +Add Filters to specify filters.
Use Advanced Filtering to create filters based on multiple dimensions by defining a set of logical
conditions. For more information, see Advanced Filtering [page 1537].
Note
In R visualization scripts, the dimension names and dimension member descriptions won’t be
translated into the data access language.
d. When you have finished setting up the table structure, select OK.
Note
Your input data can be directly referenced as a data frame in the R script editor. Remember the
name used for the input data or, alternatively, click the displayed entry to specify a new name.
Note
Your input parameters can be directly referenced in the R script editor. Remember the name used
for the parameter or, alternatively, click the current name to specify a new name.
c. Select Existing Dimension to allow users to pick from members of a dimension, or Static List to add
custom values as options for the input control.
the input control. You can use (Search) to find specific values. When you expand the list beside
the search icon, you can choose to view the member Description, ID and Description, or ID.
3. Expand the Settings for Users section, and then choose whether users can do the following in the
input control: Single Selection, Multiple Selection, or Multiple Selection Hierarchy.
4. Select OK.
1. If you have selected Select by Range, enter the Min and Max values for your range in the Set Values
for Custom Range dialog. You can optionally set an Increment value.
2. Select OK.
3. To create a member based input control, add numeric values to the Custom Members area in the
Select Values from Custom LOV dialog, and then select Update Selected Members.
4. Expand the Settings for Users section, and then choose whether users can do the following in the
input control: Single Selection, or Multiple Selection.
5. Select OK.
4. Under Script, select + Add Script to create a new script, or Edit Script if a script has previously been
specified.
The Script panel is displayed over the Designer.
Note
• A list of suggested code is displayed if you press Ctrl + Space , or if there are multiple
suggestions based on the characters you have typed in the Editor. Based on your coding context,
the available R functions, data frames, R packages, vectors, arguments, input parameters, and
function lists appear on the left with a corresponding description (if available) on the right.
To reference a data frame, enter the first three letters of the data frame name. All data frame and input data
names are preceded by a data frame icon; all input parameter names are preceded by an input parameter icon.
• You can use the Up , Down , PgUp , PgDn , Home , and End keys to navigate the displayed list of
suggestions. Select a suggestion to insert it into your script.
• If you combine a data frame name with $ , a list of the column names for the data frame is
displayed.
Note
Sample scripts to render R based visualizations are provided for your convenience. Go to
(Snippets) Examples to access the samples. The code and visualizations rendered by these
samples are for instructional purposes. If you have the requisite R programming background, you can
use these samples to generate visualizations based on your data.
7. Select Apply to insert the R visualization into the current canvas page.
You cannot export interactive HTML content to PDF. You cannot use remote or key figure models as
input data. All hierarchies in models are flattened when working with R.
Related Information
SAP Analytics Cloud hosts an R runtime environment with R packages that can be called using R scripts. R
packages are NOT provided under SAP’s license terms and conditions: each is governed by an open source
license assigned by the package’s copyright owner.
The R logo is © 2016 The R Foundation. The R logo is licensed under the Creative Commons
Attribution-ShareAlike 4.0 International License: https://creativecommons.org/licenses/by-sa/4.0/ The R
logo has been modified by SAP SE and can be downloaded from: https://help.sap.com/http.svc/download?
deliverable_id=20303653
Use custom formatting options to highlight information such as low sales in an area.
Conditional formatting covers several options, including thresholds in models and stories, and assigned colors
in stories.
Tasks
Changing Color Palettes and Synchronizing Colors Across Charts [page 1273]
Define Thresholds in Models [page 770]
Use thresholds to provide visual cues for your information, so that you can quickly see what areas are doing
well, and what areas may need improvements.
Note
A threshold is defined on a specific hierarchy in an account dimension. When you have multiple hierarchies
in an account dimension, you will need to create different thresholds for each hierarchy.
When you are comparing negative values, your results may not be what you assumed they should be.
To provide more accurate results, the system uses a formula to compare measures:
If you put that formula into Excel, you would write it as follows: =IF(B5=0,A5/ABS(A5),
IF(OR(A5<0,B5<0),((A5-B5)/ABS(B5)), A5/B5))
The following table shows the ratio and formula results from comparing the positive and negative values
(profits/losses) for two years.
Column headers (spreadsheet columns A to E): This year (A), Last year (B), Ratio result (A/B), Formula result, Description
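The comparison formula above can be sketched in Python. This is an illustrative translation of the Excel formula, not product code; the sample values are hypothetical.

```python
def compare(a, b):
    """Ratio comparison that handles negative values, mirroring the Excel
    formula =IF(B5=0, A5/ABS(A5), IF(OR(A5<0, B5<0), (A5-B5)/ABS(B5), A5/B5))."""
    if b == 0:
        return a / abs(a)        # last year is zero: return the sign of A
    if a < 0 or b < 0:
        return (a - b) / abs(b)  # a negative value involved: change relative to |B|
    return a / b                 # both values positive: plain ratio

# Both values positive: plain ratio
print(compare(150, 100))   # 1.5
# Loss last year, profit this year: improvement shows as a positive result
print(compare(50, -100))   # (50 - (-100)) / 100 = 1.5
```

Note how the two cases produce the same result (1.5) even though a plain ratio for the second case (50 / -100 = -0.5) would have misleadingly signaled a decline.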
Threshold Options
• OK
• Warning
• Critical
Tip
When the Thresholds panel opens, you may only see one range: OK.
To add the remaining ranges, click Add Range once for each additional range.
The new ranges will cycle through the list forwards and then backwards.
When you enter the value for your range, the value appears on the line at the bottom of the panel. The value
includes a letter to designate whether it is thousands, millions, and so on.
You do not need to set both an upper and a lower bound if you have only one range. When you add more ranges,
you can leave either the upper or lower bound empty.
Tip
When you add ranges, the displayed icons and labels will keep cycling through the default choices: OK,
Warning, Critical, Neutral, Information, Neutral, Critical, Warning, OK, and so on.
A warning will appear if you create a threshold range that already exists. You must either change the
range or apply additional filters to the threshold.
The threshold you created will appear in the Conditional Formatting panel. It can be added to charts (see Using
Thresholds in Charts [page 1294]) or tables (see Using Thresholds in Tables [page 1430]).
Customize Symbols
You can use as many or as few ranges as you like, and you can change the label names and colors. You can also
change the symbols.
To change the color or symbol, select the current symbol and then select Customize Symbol.
You have five more symbols that you can choose from.
Note
When you customize the threshold symbols within a story, they will no longer be connected to the story's
default theme preferences. To make your new symbol part of the default theme, you can configure it in the
theme preferences.
For more information on configuring and setting theme preferences, see Story Theming (Optimized Story
Experience) [page 1608].
Prerequisites
Note
A threshold is defined on a specific hierarchy in an account dimension. When you have multiple hierarchies
in an account dimension, you will need to create different thresholds for each hierarchy.
Context
Use thresholds to provide visual cues for your information, so that you can quickly see what areas are doing
well, and what areas may need improvements.
Restriction
When you are comparing negative values, your results may not be what you assumed they should be.
To provide more accurate results, the system uses a formula to compare measures:
If you put that formula into Excel, you would write it as follows: =IF(B5=0,A5/ABS(A5),
IF(OR(A5<0,B5<0),((A5-B5)/ABS(B5)), A5/B5))
The following table shows the ratio and formula results from comparing the positive and negative values
(profits/losses) for two years.
Column headers: This year (A), Last year (B), Ratio result (A/B), Formula result, Description
There are three default ranges with the following labels and colors:
• Green square: OK
• Orange triangle: Warning
• Red circle: Critical
Tip
When the Thresholds panel opens, you may only see one range: a green square followed by the word OK.
Click Add Range twice to add the Warning and Critical ranges.
You can use as many or as few ranges as you like, and you can change the label names and colors. (However,
when you change the icon colors, you may see a circle with your new color, not a triangle or square.)
When you enter the value for your range, the value appears on the line at the bottom of the panel. The value
includes a letter to designate whether it is thousands, millions, and so on.
You do not need to set both an upper and a lower bound if you have only one range. When you add more ranges,
you can leave either the upper or lower bound empty.
• Number Range
• Measure: In Comparison Measure, select the measure to compare to.
Optional: To include a NULL data point in your results, select Set no data as zero to set the value to
zero.
7. Under Ranges, set a lower bound and an upper bound for your range.
As you type your values for the upper and lower bounds, you will see a warning appear if the value does not
fall within the range. For example, a lower bound cannot be larger than the upper bound.
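The range logic described in the steps above can be sketched as follows. This is a hypothetical illustration of how a value falls into one of the threshold ranges; the labels and bounds are sample values, and open bounds are modeled as None.

```python
# Hypothetical threshold ranges: a missing lower or upper bound (None)
# means the range is open on that side.
RANGES = [
    {"label": "Critical", "low": None, "high": 1000},
    {"label": "Warning",  "low": 1000, "high": 5000},
    {"label": "OK",       "low": 5000, "high": None},
]

def classify(value, ranges=RANGES):
    """Return the label of the first range that contains the value."""
    for r in ranges:
        low_ok = r["low"] is None or value >= r["low"]
        high_ok = r["high"] is None or value < r["high"]
        if low_ok and high_ok:
            return r["label"]
    return None  # value falls outside all defined ranges

print(classify(800))    # Critical
print(classify(7500))   # OK
```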
Tip
When you add ranges, the displayed icons and labels will keep cycling through the default choices: OK,
Warning, Critical, Warning, OK, Warning, Critical, and so on.
By default, users are allowed to modify filter selections and multiple selections are allowed.
Single Selection: Users can apply only one filter to the threshold at a time. For example, if there is a
filter on time, the user can choose to display 2014 sales in the chart.
Multiple Selection: Users can apply multiple filters to the threshold. For example, if there is a filter on
time, the user can choose to display 2013 and 2015 sales in the chart.
c. To prevent users from modifying the threshold filter, disable Allow viewers to modify selections.
d. Select OK.
A warning will appear if you create a threshold range that already exists. You must either change the
range or apply additional filters to the threshold.
Results
The threshold you created will appear in the Conditional Formatting panel. It can be added to charts (see Using
Thresholds in Charts [page 1294]) or tables (see Using Thresholds in Tables [page 1430]).
Related Information
Use SAP Analytics Cloud story and page filters to narrow the scope of your analysis.
The Story Filter allows you to apply filters to all charts and tables in a story that are based on the same model.
The Page Filter is the same as a Story Filter, but applies to just one page in a story. (An Input Control is one type
of page filter.)
Note
Page and story filters are enabled only after you've added at least one chart or table to your story.
For more details on how to apply filters, please see the following information:
Tip
In some circumstances, when a filter is applied to a chart and there is only one resulting data point or item,
the chart's legend will be hidden.
To override this default behavior, the following guidelines will make sure the legend is always visible,
whether you have one data point showing or multiple data points:
Restriction
Converting between story and page filters can only be done in the classic design experience, not in the
optimized design experience.
If you have a story filter and you want the filter to apply only to the charts on one story page, you can convert
the story filter to a page filter.
1. With a canvas or responsive page open, select a story filter from the filter bar.
2. Select Convert to Page Filter.
You can resize the filter object by selecting it and dragging its sizing handles. If you enlarge the filter object, it
becomes an input control that you can use to select filter values.
Example
If the filter is set to allow viewers to change the filter values, and to allow multiple filter values, you can
enlarge the filter object on the page so that the filter values appear in a list, with checkboxes. Then you can
change filter values by selecting and deselecting the checkboxes.
1. With a canvas or responsive page open, select an input control (page filter).
Measure-Based Filters
Measure-based filters are filters that are based on a range of values in a measure. For example, if you want to
include in your story only your company locations that have more than 200 employees, you could create a filter
based on an Employees measure.
Changes you make to a story or page filter affect related filters in the same story or page. (Story filter changes
affect both story filters and page filters in the story.)
Tip
Story filter changes cascade down to page filters and page filter changes cascade down to group filters, but
no filter changes cascade up.
However, when multiple group filters are configured to affect the same set of widgets through linked
analysis, the cascading effect is applied to all the group filters.
For more information on group filters, see Group Filters [page 1536].
For example, when you change a page filter value, any related page filters on the same page are updated
automatically. If you have both Country and Region filters on a page, and you change the Country filter value
from All to Sweden, the Region filter updates to show only regions within Sweden. All other region names are
hidden. You can select Show Inactive Values to display those hidden values.
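The Country/Region cascade above can be sketched like this. The data, member names, and function are hypothetical; the point is that narrowing one filter re-derives the member list of a related filter.

```python
# Hypothetical booked data for a model with Country and Region dimensions.
DATA = [
    {"Country": "Sweden",  "Region": "Stockholm"},
    {"Country": "Sweden",  "Region": "Gothenburg"},
    {"Country": "Germany", "Region": "Bavaria"},
]

def cascade_regions(country_filter):
    """Return the Region members still active after the Country filter."""
    rows = DATA if country_filter == "All" else [
        r for r in DATA if r["Country"] == country_filter]
    return sorted({r["Region"] for r in rows})

print(cascade_regions("All"))     # ['Bavaria', 'Gothenburg', 'Stockholm']
print(cascade_regions("Sweden"))  # ['Gothenburg', 'Stockholm']
```

With the Country filter set to Sweden, Bavaria is hidden from the Region filter; in the product, Show Inactive Values would display it again.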
Restriction
When an input control with the Show unbooked members option turned on has filters cascading their
changes into it, the unbooked members won't be included in the input control's list of members.
Note
• Range filters
• Hyperlink filters
• Multi-dimension filters (filters that apply to a combination of dimensions)
You can turn off the cascading effect in the settings for the filter or input control. When the cascading effect is
turned off, any filter value selection changes made in this filter do not affect other filters, and any filter value
selection changes made in other filters do not affect this filter.
You can define time periods based on years, half-years, quarters, months, weeks, or days (depending on the
time granularity defined in the underlying model), and apply the date range as a filter, so that only details in the
selected time period are visible.
If your data is recorded very frequently (for example, sensor data), you can use the Timestamp dimension
type and define time range filters based on hours, minutes, seconds, and milliseconds.
The ranges can be fixed or dynamic; for example, you could choose the fixed range January 2019 to December
2019. If this story is opened in 2020, the story will still show 2019 data. Dynamic date ranges shift based on the
current date. They offer a few more granularities such as current year, current quarter, and current month, and
you can offset the range from the current date.
• If you want to display data from three years ago, to two years into the future, you would choose Year
granularity, enter 3 under Look Back, and enter 2 under Look Ahead. If the current year is 2019, then the
date range is 2016 to 2021.
• If you want to display data for the current quarter, you would choose Current Quarter granularity. If the
current date is June 1, 2019, then the date range is April 1, 2019 to June 1, 2019.
• If you want to shift the entire range forwards or backwards instead of basing it around the current date,
choose Offset as the range type. For example, to set the range as the year before the previous year, you
choose Year granularity, select Look Back as the offset direction, Year as the offset granularity, and 2 as
the offset amount. If the current date is June 1, 2019, then the date range is 2017. Note that the Offset
Granularity can’t use a shorter period than the overall Granularity.
• If you switch on Include Range up to Current Period, the date range ends at the current period. If you
choose Year granularity, and enter 3 under Look Back, and the current date is June 1, 2019, then the date
range is January 1, 2016 to June 1, 2019. For offset ranges, the Include Range up to Offset Period setting
works the same way using the offset period instead of the current period.
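The Look Back/Look Ahead arithmetic in the first example above can be sketched as follows. This is an illustrative calculation for Year granularity only, not the product's implementation.

```python
from datetime import date

def year_range(current, look_back, look_ahead):
    """Dynamic Year-granularity date range: look_back years before to
    look_ahead years after the current date (illustrative sketch)."""
    return (current.year - look_back, current.year + look_ahead)

# With current date June 1, 2019, Look Back 3 and Look Ahead 2:
print(year_range(date(2019, 6, 1), 3, 2))  # (2016, 2021)
```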
Note
It is also possible to define multiple range time filters and apply these together. You could use this, for example,
to compare the first two months of the year over a three year period by defining three separate ranges for
months Jan–Feb for each of the three years. When these ranges are applied as a single filter, everything else
except the selected periods is filtered out.
In some workflows, for example, planning, you might want to set the current date of a dynamic date range filter
to be different from today's actual date. To learn how to do this, see Customizing the Current Date [page 1546].
Note
• If you don't want viewers to be able to select future dates, disable the Look Ahead field by selecting
(Show/Hide) , and clear the check mark beside Look Ahead.
• For fixed date range filters, if you define a range using the Day granularity, all other ranges in the
filter can have only the Day granularity. The same restriction applies to Week. However, the Year,
Half-Year, Quarter, and Month granularities can be mixed. This restriction doesn't apply to dynamic
date range filters; for example, you can define one range with day granularity and another range with
year granularity.
• If you choose Day granularity, you can manually type dates into the date fields. If you do type in dates
manually, you'll need to use the date format defined by the Date Formatting setting in your user profile.
Dates entered in any other formats won't be recognized as dates.
• Time Range filters work for booked data only.
When you create a filter (a story filter, page filter, or calculation input control) for a flat dimension, if you select
“All Members”, a dynamic filter is created. This means that the latest dimension member descriptions are
displayed whenever the story is opened.
Example: Say you create a filter with a list of countries, and initially the list of countries is: Canada, USA,
and Germany. If you create a static filter, those three full country names will be remembered. Later, someone
changes the dimension member descriptions in the model or the back-end server to CA, US, and DE. The
static filter will still display Canada, USA, and Germany, instead of CA, US, and DE. A dynamic filter though, will
display the updated member descriptions whenever the story is opened.
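The static-versus-dynamic distinction in the example can be sketched like this. The model data and member IDs are hypothetical; a static filter copies the descriptions once, while a dynamic filter re-reads them when the story opens.

```python
# Hypothetical model: member IDs mapped to descriptions.
model = {"C1": "Canada", "C2": "USA", "C3": "Germany"}

static_filter = list(model.values())           # snapshot taken at creation time
dynamic_filter = lambda: list(model.values())  # re-read each time the story opens

# Someone later renames the member descriptions in the model:
model.update({"C1": "CA", "C2": "US", "C3": "DE"})

print(static_filter)     # ['Canada', 'USA', 'Germany'] - stale snapshot
print(dynamic_filter())  # ['CA', 'US', 'DE'] - current descriptions
```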
To access the story filters that are not currently displayed in the story filter bar, select See More Filters, and
then select a filter from the dropdown list.
Restriction
Some story filter options (such as converting between story and page filters) are not yet supported.
As a story designer, you can decide how your users apply changes to multi-selection filters:
• Immediately update the chart or table contents as members are selected or deselected in the filter.
• Make your changes in the filter dialog and then use Apply Selections to apply all those changes at the same
time.
This is the default setting for new member filters that allow multiple selections.
You can choose which method to allow for each filter by showing or hiding the option Apply Selection Button.
• Story filters: select the filter and then select Show/Hide Apply Selection Button .
• Page filters (input controls): right-click the filter and then select Show/Hide Apply Selection Button .
Linking to an External URL or to Another Page or Story (Optimized Story Experience) [page 1104]
Linking to Another Page, Story, or External URL (Classic Story Experience) [page 1108]
Measure-Based Filters [page 1533]
Group Filters [page 1536]
Customizing the Current Date [page 1546]
Input Controls: Dimension, Account, Measure, or Cross Calculation [page 1550]
Work with Story and Page Filters [page 162]
In SAP Analytics Cloud's Optimized Story Experience, there are various options available in the action and
context menus of story and page filters.
In the filter bar, select your story filter to open its dropdown menu.
1 You can search and select the dimensions or measures you want to filter.
3 Sort
• Default Order
• Ascending
• Descending
• Add Custom Order
If Cascading Effect is enabled, the changes that you make to your page filter
affect related filters on the same page. For more information, see Story and Page
Filters [page 1518]. If there is a check mark ( ) next to this option, it is enabled.
• Header Title
• Header Icon
• Subtitle
• Warning/Error Icon
• Select All Option
6 Display As
• Description
• ID and Description
• ID
Linked Analysis (Edit mode): You can create a filter that simultaneously updates multiple charts, tables, or
geo maps in your story. For more information, see Creating a Linked Analysis [page 1101].
Submenu options:
• Default Order
• Ascending
• Descending
• Add Custom Order (Edit mode only)
Cascading Effect (Edit mode): If Cascading Effect is enabled, the changes that you make to your page filter
affect related filters on the same page. For more information, see Story and Page Filters [page 1518]. If there
is a check mark ( ) next to this option, it is enabled.
Wrap Text (Edit mode): If there is a check mark ( ) next to this option, it is enabled.
Note
If the Apply Selection Button option is disabled, any new
dimensions or measures that you select or deselect in
your filter will directly apply to your page. If this option is
enabled, you will need to select Apply Selections for your
new filter choices to apply to your page.
• ID and Description
• ID
Remove (Edit mode): Select this option to remove your page filter from your story.
Related Information
In SAP Analytics Cloud, there are various options available in the action and context menus of story and page
filters.
In the filter bar, select your story filter to open its dropdown menu.
1 You can search and select the dimensions or measures you want to filter.
For more information on converting story filters to page filters, see Story and
Page Filters [page 1518].
4 Settings
• Show/Hide
• Display As
• Cascading Effect
5 View Controls
Linked Analysis (Edit mode): You can create a filter that simultaneously updates multiple charts, tables, or
geo maps in your story. For more information, see Creating a Linked Analysis [page 1101].
Convert to Story Filter (Edit mode): You can convert a page filter to a story filter. For more information, see
Story and Page Filters [page 1518].
Cascading Effect (Edit mode): If Cascading Effect is enabled, the changes that you make to your page filter
affect related filters on the same page. For more information, see Story and Page Filters [page 1518]. If there
is a check mark ( ) next to this option, it is enabled.
Wrap Text (Edit mode): If there is a check mark ( ) next to this option, it is enabled.
• ID and Description
• ID
View Controls (Edit mode, View mode): Select this option to open the Controls panel. This panel shows all
the filters that are applied in your story.
Remove (Edit mode): Select this option to remove your page filter from your story.
Prerequisites
Page and story filters are enabled only after you've added at least one chart to your story.
Procedure
• For a story filter, select (Story Filter/Prompt) on the top navigation panel. In the filter bar, select
(Add Story Filter/Prompt).
• For a page filter, select (Input Control) on the top navigation panel.
2. If you need to change the data source, select the name of the current data source and then select the data
source that you want to filter.
3. From the Add Time Filters list, you can quickly create a dynamic time filter that shifts based on the current
date, such as Month to Date or Previous Quarter.
Note
• Certain dimensions, for example date dimensions, can be filtered either by choosing members
from a list or by selecting a range. To filter by choosing members, choose the option Filter by
Member. To filter dimensions by selecting a range, choose the option Filter by Range, and skip to
Step 9.
• Measures are automatically filtered by range. If you're applying a measure-based filter, skip to Step
9.
For models with an Account dimension, this will appear as the Measures dimension and can only
be selected in the page filter as a measure-based range filter.
• You can create filters and input controls based on dimension attributes as well. For more
information about dimension attributes, see Combine Data with Your Acquired Data [page 722].
5. To change the display information for the filter, select to expand the Available Members menu, and then
select Show Description or Show Hierarchy options:
When you change how the dimension information is displayed, the corresponding tooltip is updated.
6. To display dimension members that don't contain any data, switch the Show unbooked members option
On.
If the Show unbooked members option is Off, only those members that contain data will be displayed. For
example, if you are filtering a currency dimension and only the Euro and US dollar members contain data,
the rest of the currency members will not be displayed.
7. In the Available Members area, select the checkbox beside the members you want to filter.
You can use the Search function to find the members you want. Select All Members to automatically select
all members in the dimension, or select the Exclude selected members checkbox to exclude the members
you select.
Note
When you select a member node in a hierarchy, all of its children are automatically selected.
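The note above can be sketched as a simple tree walk. The hierarchy below is hypothetical; selecting a node recursively selects all of its descendants.

```python
# Hypothetical dimension hierarchy: parent member -> child members.
HIERARCHY = {
    "World": ["Europe", "Americas"],
    "Europe": ["Sweden", "Germany"],
    "Americas": ["USA", "Canada"],
}

def select_with_children(member):
    """Selecting a hierarchy node also selects all of its descendants."""
    selected = [member]
    for child in HIERARCHY.get(member, []):
        selected.extend(select_with_children(child))
    return selected

print(select_with_children("Europe"))  # ['Europe', 'Sweden', 'Germany']
```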
8. Ensure that the members in the Selected Members area are correct.
9. To filter certain dimensions by selecting a range, choose the option Filter by Range instead of Filter by
Member.
Configure the filter range as follows (some of these configurations are available only for measure-based
filters, and some are available only for dimension-based filters):
• For measure-based filters, select the Dimension Context.
The dimension context is one or more dimensions that the measure is aggregated against. For
example, if you're analyzing salaries of company employees based on country and gender, you could
choose the two dimensions Country and EmployeeGender for context.
• For dimension-based filters, if you want viewers of your story to specify a single value within a range
instead of a range of values, expand the Multiple Selection list, and select Single Value Slider.
• Select end points for your range:
• For dimension-based filters, you can either drag the sliders on the range bar, or select end points
from the drop-down lists.
• For measure-based filters, type end point values into the Min and Max boxes.
• Select Add a New Range if you want to define additional ranges. Note that the single value slider is not
available if you define more than one range.
• For a single value slider, the end points you select represent the maximum and minimum values
that viewers of your story will be able to select, using the page filter or input control.
• For measure-based filters, viewers can set the Min and Max values outside of the initial range you
define. For example, if you set an initial range of 100 to 200, they could change the range to 50 to
500.
• Otherwise, the end points you select represent the initial ranges that viewers of your story will
see in the page filter or input control. Users can redefine the ranges using the page filter or input
control.
Note
By default, only the range between the end points you initially define is selectable in the page filter
or input control. If you select Display entire range slider, viewers will be able to select areas outside
of the initial ranges you define.
• If you want your range to extend to the start or end of the data set, even if the start or end values
change when the data is refreshed, choose Start or End instead of discrete values.
• For date dimensions, also define the granularity (for example, year or month). You can also set
dynamic ranges that shift based on the current date. To learn about this, see Story and Page Filters
[page 1518].
10. Choose whether you want to Allow viewers to modify selections.
If you allow viewers to specify filter values, they can either toggle on and off each filter value (if you chose
the Multiple Selection option or the Multiple Selection Hierarchy option), or select a single filter value (if you
chose the Single Selection option). The Multiple Selection Hierarchy option allows users to select children
nodes of specific dimension members.
Note
Viewers can reset any changes that they made to filters and input controls to get the original view of the data.
Using large hierarchies in story filters has an impact on performance. It is recommended to use booked
values instead of accessing master data.
By default, viewers can't delete a designer's story filter, but you can allow them to do so.
12. Select OK to create the filter.
13. If you created a page filter, you can resize it to convert it to an input control.
An input control displays the filter values with checkboxes or radio buttons, to let users experiment with
different filter settings.
14. The filter or input control is assigned a name according to the dimension being filtered. If you want to
change the name, double-click it.
15. By default, the cascading effect is applied. If you don't want the cascading effect to apply to this filter, turn
off Cascading Effect in the filter settings.
For more information, see Story and Page Filters [page 1518].
Related Information
You can create story and page filters based on a range of measure values.
For example, if you want to include in your story only those company employees who earn between $50,000
and $100,000, you could create a filter based on the Salary measure.
When you use a measure-based filter, you need to specify one or more dimensions for aggregation context. In
the above example, you could choose the dimension EmployeeName for context.
Or, if you're analyzing salaries of company employees based on country and gender, you could choose the two
dimensions Country and EmployeeGender for context.
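The dimension context determines the level at which the measure is aggregated before the range test runs. A minimal sketch with hypothetical salary data and bounds:

```python
from collections import defaultdict

# Hypothetical rows: (Country, EmployeeGender, Salary).
rows = [
    ("DE", "F", 60000), ("DE", "F", 45000),
    ("DE", "M", 160000), ("SE", "F", 80000),
]

# Aggregate the Salary measure against the (Country, EmployeeGender) context.
totals = defaultdict(int)
for country, gender, salary in rows:
    totals[(country, gender)] += salary

# Keep only context groups whose aggregated salary is within 50,000-150,000.
kept = {k: v for k, v in totals.items() if 50000 <= v <= 150000}
print(kept)  # {('DE', 'F'): 105000, ('SE', 'F'): 80000}
```

Here ("DE", "M") is excluded because its aggregated salary (160,000) falls outside the range, even though no single row would have been judged on its own.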
Note
If a story is not hosted inside a Digital Boardroom presentation, then filter evaluation starts at the story filter
level, then proceeds to the page filter level, and then to the widget filter level. If a story is hosted inside a
boardroom presentation, then filter evaluation starts with the topic filters first, and then proceeds to the story,
page, and widget filter levels.
Filters are evaluated per level. For example, if the story-level filter is defined as “only countries with sales
greater than $10 M”, and the result of that filter is USA, Canada, and Germany, then the page-level filters would
filter data from only these three countries.
Within a given level, there are two steps in evaluating dimension-based filters and measure-based filters:
• All dimension-based filters are applied first, by combining them together using AND logic.
• After that, the remaining measure-based filters are evaluated based on the result of the dimension-based
filters. All the measure-based filters are combined together using AND logic.
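The two-step evaluation within a level can be sketched as follows. The data and filter predicates are hypothetical; dimension filters are combined with AND first, then measure filters are applied to the survivors.

```python
# Hypothetical data rows.
rows = [
    {"Country": "USA", "Year": 2019, "Sales": 12},
    {"Country": "USA", "Year": 2018, "Sales": 8},
    {"Country": "Germany", "Year": 2019, "Sales": 15},
]

# Step 1: all dimension-based filters, combined with AND logic.
dim_filters = [lambda r: r["Country"] in ("USA", "Germany"),
               lambda r: r["Year"] == 2019]
# Step 2: measure-based filters, evaluated on the step-1 result, also ANDed.
measure_filters = [lambda r: r["Sales"] > 10]

step1 = [r for r in rows if all(f(r) for f in dim_filters)]
step2 = [r for r in step1 if all(f(r) for f in measure_filters)]
print([r["Country"] for r in step2])  # ['USA', 'Germany']
```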
Controls Panel
When several types of filters are applied to a story, it can be difficult to understand which order the filters are
applied in. To show the filter evaluation order visually, the Controls panel displays all of the filters that apply to
the story.
To open the panel, select Controls, or from a widget's action menu, select View Controls , or
select a filter and then select View Controls.
Note that the Controls panel only displays the filters; you can't edit the filters directly from the Controls panel.
Once the Controls panel is opened, you can select other widgets to see which filters are applied to them. The
Controls panel updates automatically as each widget is selected.
The Controls panel is available in both Edit mode and View mode, and with all page types (grid, canvas, and
responsive).
Related Information
You can configure page filters to affect only some widgets (a group) on a page.
Tip
Page filters cascade down to group filters, but group filters don't cascade up to page filters.
When multiple group filters are configured to affect the exact same set of widgets through linked analysis,
the cascading effect is applied to each group (except for the group filters that have turned off the cascading
effect). However, there is no cascading effect between group filters that affect different sets of widgets.
Say you create three chart widgets on a page (Chart 1, Chart 2, and Chart 3). You can create a standard page
filter, Filter A, that affects all three charts. But you can also define a page filter, Filter B, that affects only Charts
1 and 2, and another filter, Filter C, that affects only Charts 2 and 3. Filters B and C are group filters.
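The Filter A/B/C example above amounts to a mapping from filters to widget groups. A minimal sketch (structure hypothetical):

```python
# Page Filter A applies to all widgets; group filters B and C each
# apply to a subset, matching the example above.
page_filters = {"A": ["Chart 1", "Chart 2", "Chart 3"]}
group_filters = {"B": ["Chart 1", "Chart 2"], "C": ["Chart 2", "Chart 3"]}

def filters_for(widget):
    """Return the names of all filters that apply to a widget."""
    all_filters = {**page_filters, **group_filters}
    return sorted(f for f, widgets in all_filters.items() if widget in widgets)

print(filters_for("Chart 1"))  # ['A', 'B']
print(filters_for("Chart 2"))  # ['A', 'B', 'C']
```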
(Diagram: Group Filter B covers Charts 1 and 2; Group Filter C covers Charts 2 and 3.)
Example
You may want to place two charts beside each other on a page, one displaying statistics for female
employees only, and the other for male employees only. Using a standard page filter, you would define a
Gender filter, but setting it to Female would make both charts display statistics for female employees. Using
group filters, you can define two Gender filters; one set to Female and the other set to Male.
You can also create a measure-based group filter on the page, Filter D, and configure it to affect only Charts
1 and 2. The groups would then look like this:
(Diagram: Group Filters B and D cover Charts 1 and 2; Group Filter C covers Charts 2 and 3.)
When multiple filters apply to a page, it can be difficult to understand which filters apply to a widget. The
Controls panel helps you by displaying the filters that apply to the selected widget, and which order the filters
are applied in. (See Measure-Based Filters [page 1533] for more information.)
For example, if you select Chart 1, the Controls panel shows you that Page Filter A, and Group Filters B and D
are being applied to Chart 1. If you select Chart 2, the Controls panel shows you that Page Filter A, and Group
Filters B, C, and D are being applied to Chart 2.
If you select the measure-based Group Filter D, the Controls panel shows you that dimension-based Page Filter
A is being applied before Group Filter D, but it can't tell you whether Group Filter B, or both B and C are being
applied before D, because different group filters apply to different widgets. In these cases, point to the filter in
the Controls panel, and then select Show filter information to see which filters may be applied.
2. Select the page filter, and select Linked Analysis. (For an input control, select Linked Analysis.)
The Linked Analysis pane appears.
3. Select Only Selected Widgets to see the list of widgets that are on the page.
Note
If the page contains any charts that are linked to another chart, using the Pin Selected Data Point
feature, those pinned charts won't appear in the list.
Advanced Filtering lets you create story and page filters based on multiple dimensions by defining a set of
logical conditions.
The dimensions used in Advanced Filtering can be filtered by using AND or OR conditions. These conditions
can be set to Include or Exclude the data that satisfies the filter conditions.
You can apply Advanced Filters by creating dimension tuples. A dimension tuple consists of a condition that
contains a number of dimension filters. If the dimension members differ between tuples, the result is an
asymmetrical filter.
Currently, advanced filtering can be used with Story and Page filters.
If you're filtering a planning model, some empty cells that don't support data entry may not be grayed out.
A message will appear when you try to enter data in these cells. See About Planning Data Entry Errors
[page 2216] for details.
Also, data locking may not be supported. See Configuring Data Locking [page 2525] for details.
Procedure
Page and story filters are enabled only after you've added at least one chart to your story.
1. Open Advanced Filtering:
• For a story filter, select (Story Filter) on the top navigation panel, and in the resulting panel select
Advanced Filtering.
• For a page filter, select (Input Control) Advanced Filtering on the top navigation panel.
2. Start with an OR operator.
3. Add additional AND or OR operators directly under the top operator.
The number of operators you add determines the number of dimension tuples in the filter.
4. Add some dimensions beneath each AND or OR condition, and define different members for each
dimension.
5. Decide which operators viewers should be able to turn off, and make sure Allow viewers to disable the
condition is selected for those operators.
Each operator condition has a setting to allow users to disable the condition. When the story is opened
in View Mode and the advanced filter token is selected, each operator has a checkbox and the viewer can
choose to enable or disable the conditions within View Mode.
6. Select OK to save the filter.
Operator types
Operator Description
AND When all conditions are satisfied, the results will be filtered.
Exclude AND When all conditions are satisfied, the results will be
excluded.
Suppose you want to create a filter which will always exclude the historical plans or forecasts, but will also
include the plans and forecasts of the selected or current period.
Example
Suppose you have a chart showing multi-dimensional data (for example, all products in the major regional
areas) and you want to filter different sets of products for each regional area.
1. Set the outer operator to Exclude OR – this will exclude data for each successful condition.
2. Create 2 or more inner AND operators.
3. Add your Region & Product dimension to each AND operator.
4. In each adjacent operator define a different region + product tuple.
In this example, the first tuple will exclude Apparel & Accessories from North and South America, the second
tuple is going to exclude Apparel & Footwear from Asia Pacific, and the third tuple is going to exclude Apparel
from EMEA. Our chart will now show the specific product members of each region.
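The exclusion logic in this example can be sketched in a few lines of Python. The region and product names mirror the example above; the code is an illustration of the Exclude OR / inner AND logic, not product functionality:

```python
# Hedged sketch: each inner AND operator defines a (regions, products) tuple;
# a data row is excluded when any tuple matches both its region and product.
exclude_tuples = [
    ({"North America", "South America"}, {"Apparel", "Accessories"}),
    ({"Asia Pacific"}, {"Apparel", "Footwear"}),
    ({"EMEA"}, {"Apparel"}),
]

def keep(row):
    """Return True if the row survives the Exclude OR filter."""
    return not any(row["region"] in regions and row["product"] in products
                   for regions, products in exclude_tuples)

rows = [
    {"region": "EMEA", "product": "Apparel"},        # excluded by tuple 3
    {"region": "EMEA", "product": "Footwear"},       # kept
    {"region": "Asia Pacific", "product": "Footwear"},  # excluded by tuple 2
]
print([keep(r) for r in rows])  # [False, True, False]
```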
In SAP Analytics Cloud, you can create filters on data from multiple dimensions.
Context
You can use Advanced Filtering to create tuple filters (filters that include different dimensions), or you can
select the dimensions from the chart or table and create a filter.
After the filter is created, it is added to the Filters section of the Builder panel.
• In a chart, select (Set Drill) and choose a hierarchy level for each dimension.
4. Select two values from two different levels, right-click one of them and then choose (Filter).
Results
Next Steps
To view the specific details of the filters, select the filter and then select Applies filter conditions to. You can then
use the Advanced Filtering dialog to view or change the filter.
Related Information
You have two tables you want to align with each other. Filtering in one of the widgets causes filtering in the other
widget as well.
Prerequisites
Linked analysis enables you to align two widgets based on the same data model. The key must be the same for
synchronization to be possible.
1. Select one of the table widgets. Your selection will be the driver for the linked analysis.
For more information about linked analysis settings, see Creating a Linked Analysis [page 1101].
3. In the designer panel, select Only selected widgets and Filter on data point selection. Then select the
widgets you want to connect to.
4. Apply the changes.
5. Select the data you want to filter within the receiver table. Right-click one of the selected cells and choose
the filter option. Only your selected rows are visible now.
6. Click the Filter token to visualize the filter and select Applies filter conditions to in order to display the
detailed filter conditions.
7. Click OK to close the pop-up.
8. Now select the driver widget and choose the rows you want to filter. You see them filtered in the receiver
widget immediately.
9. Click the filter token in the receiver widget now. In the drop-down list, you see the Local filter as well as the
Linked Analysis filter.
Note
A linked analysis filter is an additional filter; the widget's local filter remains in place.
10. Select the Linked Analysis filter. Click the Filter token to visualize the filter and select Applies filter
conditions to in order to display the detailed filter conditions.
11. Click Cancel to close the pop-up.
Related Information
For dynamic time range filters, you can change the current date setting so that it's different from the actual
current date.
By default, the definition of a date range is based on your computer clock's current date. For example, if you
have a date range that is defined to look back 2 days and look ahead 3 days from today, and today is February
10, 2019, then the effective range is from Feb. 8, 2019 to Feb. 13, 2019.
Another example: if you've created a range to show you sales figures from the beginning of the year up to the
current month, you can change the current date in the input control to look at sales from the beginning of the
year up to whichever month you've chosen to be the current month, without changing the range definition.
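As a rough illustration, the way a dynamic range resolves against a chosen current date can be sketched like this. This is a simplified model using days only; the actual feature also supports other granularities such as months:

```python
# Sketch: a "look back N, look ahead M" dynamic range is re-evaluated against
# whichever "current date" applies (the system date by default, or a date
# chosen in the custom current date input control).
from datetime import date, timedelta

def effective_range(current, back_days, ahead_days):
    """Resolve a dynamic range definition against a given current date."""
    return (current - timedelta(days=back_days),
            current + timedelta(days=ahead_days))

# The example from the text: back 2 / ahead 3 days from February 10, 2019.
start, end = effective_range(date(2019, 2, 10), back_days=2, ahead_days=3)
print(start, end)  # 2019-02-08 2019-02-13
```

The range definition itself never changes; only the date it is evaluated against does.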
The story designer can link the input control to one or more of the dynamic date range filters defined in the
story and pages. The input control can also be set to System Current Date / Period, to prevent users from
customizing the current date.
To edit the input control's settings, or delete it completely, access the input control in the dynamic time range
filter dialog:
• If the input control is created based on a fiscal date hierarchy, it can only be linked to other dynamic date
range filters based on the same fiscal hierarchy, or another fiscal hierarchy with the same fiscal shift.
• Similarly, if the input control is based on a non-fiscal, calendar date hierarchy, it can only be linked to other
dynamic date range filters based on a non-fiscal calendar date hierarchy.
• The level at which the input control is defined also matters. For example, if the input control is defined at
the year level, then it can't be linked to dynamic date range filters that are defined at the month level.
• You can create only one custom current date input control.
• If the input control isn't linked to any dynamic date range filter, a red warning icon is displayed.
• The input control can be removed from the story filter bar. If this is done, all linked dynamic time range
filters will revert back to getting the current date based on the system date. The input control is still
present and can still be linked to any dynamic time range filter, and once this is done, it will again
appear in the story filter bar.
When editing a story, you can change the default options for the story filter bar.
People who view an SAP Analytics Cloud story have the option to change the story filter bar to a filter panel. For
more information, see Change the Filter Bar Orientation (Optimized Story Experience) [page 175].
To show or hide the filter bar, from the navigation toolbar select View Filter Panel.
You can change the default story visibility for anyone viewing the story.
You can decide whether to allow viewers to add their own story filters to a story that they are viewing.
You may want to rearrange the story filters to group similar filters together or to put the most important filters
first.
Remember
Changing the order of the filters does not affect how they are applied. All the dimension filters are still taken
into account when evaluating the condition on the measure or account.
Input controls give you the ability to change which dimensions or measures to display for your charts or tables.
Depending on which input control you create, you can either show a single selection or multiple selections. For
charts, dimension, account, and measure input controls allow only one selection, while Cross Calculation Input
controls can be set up to display multiple cross calculation members. For tables, the measure input control can
also display multiple cross calculation members.
Note
If your chart type doesn't allow negative values and you choose a measure that has negative values, you will
see a warning instead of an updated chart.
The steps to create a Dimension, Account, or Measure input control are quite similar.
Note
Account input controls will only work with the account hierarchy they were created with. If you change the
account hierarchy in your chart you will see an error message.
2. From the Builder, choose one of the following:
• For a dimension input control: Add Dimension Create a Dimension Input Control
• For an account input control: Add Account Create Account Input Control
• For a measure input control: Add Measure Create Measure Input Control
2. In Builder, beside Chart Structure, select (Add Chart Components) Add Cross Calculations
A Cross Calculations section is added with the default dimension Measure Values.
3. In the Cross Calculations section, choose Select Cross Calculation Create Cross Calculation Input
Control .
4. In the New Cross Calculation Input Control dialog, select from the available cross calculations.
The steps to create a Dimension, Account, or Measure input control are quite similar. One difference is that
measure input controls can be used with multiple cross calculations.
For a dimension input control:
2. From the Builder, choose Add Dimension Create a Dimension Input Control.
3. In the dialog, select the dimensions to add to the input control. Choosing All Dimensions will add all the
dimensions to the input control.
4. When finished, select OK.
For an account input control:
2. From the Builder, do one of the following: choose Add Account Add Account Input Control, or create the
input control from the Cross Calculations section.
3. In the dialog, select the measures to add to the input control. Choosing All Account Members will add all
the members to the input control. Choosing a top-level node in any member hierarchy will add all the
members for that node and below to the input control.
4. When finished, select OK.
For a measure input control:
2. From the Builder, do one of the following: choose Add Measure Add Measure Input Control, or create the
input control from the Cross Calculations section.
3. In the dialog, select the measures to add to the input control. Choosing All Members will add all the
members to the input control. Choosing a top-level node in any member hierarchy will add all the members
for that node and below to the input control.
4. When finished, select OK.
Restriction
When you use the dimension input control, you can show totals for the currently selected dimension in
your table (right-click the dimension, then select Show/Hide Totals ). You can use the dimension
input control to select other dimensions and then show totals for those dimensions as well. However, when
you save your changes and close the story, only the totals for the last selected dimension will be saved.
The next time you open the story you will need to use the Show/Hide option to show the totals for any
dimension that you switch to.
There are two ways to change how members are displayed in a dimension input control: from the Builder panel
you can set all dimensions to use the same display settings, or from the New Dimension Input Control you can
set each dimension to use a different display setting.
Tip
If you customize the display values in the input control (or by using Edit Display Settings in the builder
panel), you won't be able to apply the same display values to all dimensions until you Reset Display
Settings.
Related Information
As a story developer, you can create a filter line, which can be composed of multiple dimension member filters,
either for an individual widget or a group of widgets in your story.
Context
Filter lines let you filter on data-bound widgets. There are two modes for filter lines:
• Individual Widget Filter, which can be applied to an individual widget in the story. The widget can be a chart,
table, or R visualization. The widget's local filters are also displayed, but you cannot select their dimension
members at edit time.
• Group Filter, which can be applied to multiple widgets on the same page in the story. Supported widgets
are charts and tables. You can preselect the members at edit time.
Note
When a filter line's applied to the widgets, it displays the selected members. Otherwise, it appears empty.
If you define a filter line for a group of widgets, the filter line's filters and each widget's local filters can exist
at the same time. The final filtering result for a widget is the intersection of the data ranges of the filter line and
the local filter. The filter line isn't affected by changes to the local filter.
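The joint result described above behaves like a set intersection. A minimal illustrative sketch (the member names are made up for the example):

```python
# Sketch: a widget's final filter is the joint data range (intersection) of
# its filter-line selection and its own local filter.
filter_line_members = {"Germany", "France", "UK"}
local_filter_members = {"France", "UK", "Spain"}

effective = filter_line_members & local_filter_members
print(sorted(effective))  # ['France', 'UK']
```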
Procedure
1. From the Insert area of the toolbar, select Filters/Controls Filter Line to add a filter line
to the page or popup.
2. Open its Builder panel.
3. Select a filter line mode.
Note
Next Steps
Save and view the story. When you select (Set Filter), all the dimensions for filtering are displayed in the
dropdown. After you've selected the dimension members, the filters are applied to the corresponding widgets,
and the selections are displayed next to . You can remove or modify the filters afterwards.
Remove filter: Hover over the filter and select (Remove).
Modify filter: Select the filter. A dialog pops up where you can make member selection changes.
At edit time, after creating linked analyses and cascading effects in your story, you can open the linked analysis
diagram, which offers a graphical overview of the relationships among widgets. You can also add or remove
widgets directly on the diagram to modify the existing relationships.
The linked widgets diagram is a graphical representation of the interaction among widgets in the story, which
you can drag, zoom in on, and zoom out on to see how widgets affect each other. It helps you manage
relationships, especially when there are many widgets in your story.
• Dimension member and measure-based input controls linked to charts, tables, R visualizations, geo maps,
value driver trees and custom widgets
• Linked analyses among charts, tables, geo maps and custom widgets
• Linked analyses from charts, tables, geo maps and custom widgets to R visualizations, Dimensions
dynamic text and images bound to data sources
• Filter lines linked to charts, tables and custom widgets
• Cascading effects among dimension member and measure-based input controls
Note
Make sure cascading effect is turned on in the input controls’ context menu if you want to modify the
relationships on the diagram.
Note
The links created via scripting and widgets without data source aren't shown on linked widgets diagram.
Invisible widgets can be shown on the diagram but are grayed out.
In addition, this interactive diagram also lets you set linked widgets to a specific widget, which include:
To open the linked widgets diagram, from Tools in the toolbar, choose Linked Widgets Diagram. The first time
you open it, you see an overview page with all the linked widgets in your story. When you reopen it during the
same session, the diagram keeps its previous state.
For a chart or table, you can also directly access the diagram via Linked Analysis Diagram View.
Then, you can see all its linked widgets.
Each widget on the diagram is represented by a node with widget icon, ID and, if applicable, full title and
selected values.
With linked widgets diagram, you can both view and modify the existing widget relationships.
You can view widgets linked to a specific widget, all the widget relationships on a specific page, or all in the
whole story. To do this, choose Filter. You can see a dialog where you can select a widget, a page or All as
the scope. At the same time, you can choose whether to show unlinked widgets as well.
When you're viewing widgets linked to a specific one, you can do the following actions in a widget’s context
menu:
Option Description
Manage Linked Widgets Manage widgets linked to the selected central widget. A panel opens where you can add or delete its source or target widgets (also available for all widgets in page or story overview).
View Linked Widgets Switch to the selected source or target widget to view its linked widgets (also available for all widgets in page or story overview).
Remove Link Remove the selected source or target widget from the existing relationship.
Switch to Page Overview Switch to overview to view relationships among all widgets on the same page as the selected widget (also available for all widgets in story overview).
Switch to Story Overview Switch to overview to view relationships among all widgets in your story (also available for all widgets in page overview).
Tip
When configuring relationships on the diagram, take account of the following for specific widget types:
In addition, when you hover over a central widget, you can select on either side to directly add its source or
target widgets.
When you're viewing all the widget relationships on a specific page, you can see an indicator on a widget if it's
also linked to widgets on other pages. By choosing View All Linked Widgets in the tooltip, you can switch from
page overview to focus on the widget.
Note
While linked widgets diagram can visualize widget relationships across pages and on story level, only
widgets on the same page and from the same data source are available for adding and managing linked
widgets on the diagram.
For linked analysis applied to all widgets in the story, all the related links are displayed but not manageable
on the diagram. For example, the Remove Link action from context menu isn't available.
When you have data from multiple currencies, you can change the currencies that are displayed by working
with either the measures or cross calculations.
If your model has currency conversion enabled, financial data can be displayed in the source currencies stored
in the model, as well as converted into target currencies.
You can add currency conversions and choose which ones to display by working with either cross calculations
for a classic account model, or measures in the new model type.
You can display a single cross calculation or measure when the structure isn’t added to the table, or add it to
the table to display multiple cross calculations or measures.
You can change the currency by changing the filter on measures or cross calculations.
For the new model type, the measures may include base measures in different currencies, as well as currency
conversions set up in the model. You can also add your own conversions in a story or analytic application using
the calculation editor.
For a classic account model, the available types of currencies include the following:
• Default Currency: The default currency set up for the model, converted using the applicable rates from the
model's currency table.
• Source currencies: The unconverted data from different members of the model’s currency dimension. The
name of this calculation reflects the header of the Currency column in that dimension.
• Currency conversions created in the Calculation Editor: These conversions allow control over which
currency is displayed and the rates from the currency table that are applied.
If you choose to display the default currency or a currency conversion, values for all public versions will be
displayed in the selected currency.
When the Cross Calculations dimension is added to the rows or columns of a table, you can display multiple
currency conversions by selecting them in the calculations filter. If you want to add a conversion that is not
in the Available Members list for the calculations filter, use the Calculation Editor to add the conversion. See
Adding a Currency Conversion Row or Column [page 1561] for details.
For each target currency that you select, all versions in the grid display a new Cross Calculations member
for the selected currency. The number of target currencies that can be selected at one time is determined by
the Maximum Currency Conversion Limit setting in the Model Preferences. For models created in the current
version of the software, the default limit is four.
If the currency table does not contain rates for all of the conversions in your table, a message appears noting
that rates are missing. If you have the appropriate permissions to edit or view the currency table, you can select
Add Missing Rates to open the currency table. The Add Missing Rates panel will show the information for the
rates that need to be added for a given currency.
Source Currencies
You can also add the source currency to the selected members for the Cross Calculations filter to show the
unconverted source currency data. Multiple currencies can be displayed for this member, but data for two or
more members is only aggregated when it belongs to the same currency.
Some parent members of the currency dimension may have child members booked with values in several
different currencies. In this case, the cell for their parent member shows a diagonal line instead of a value.
Because its children have different units, this type of cell does not show the aggregation of its children.
For example, if the organization dimension is set as the model’s currency dimension and you choose to display
sales for the EMEA region in local currencies, values for the United Kingdom would display in Pounds, values
for Germany would display in Euros, and no values would be shown for the EMEA member when both child
members have booked values. However, if the children of the EMEA member only show booked values in Euros,
for example, Germany and France, the EMEA member shows their aggregated value. If a currency conversion to
Euros is available, you could also add it to the table to simultaneously display values in Euros for the Germany,
UK, and EMEA members.
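The aggregation rule in this example can be sketched as follows. This is an illustration only; the country names and amounts are made up:

```python
# Sketch: child values roll up to a parent member only when all booked values
# share one currency; mixed currencies yield no aggregated value (shown as a
# diagonal line in the table).
children = {
    "Germany": ("EUR", 100.0),
    "France": ("EUR", 50.0),
    "UK": ("GBP", 80.0),
}

def parent_total(values):
    """Aggregate (currency, amount) pairs, or return None for mixed currencies."""
    booked = [(cur, amt) for cur, amt in values if amt is not None]
    currencies = {cur for cur, _ in booked}
    if len(currencies) != 1:
        return None  # children booked in different currencies: no total
    return currencies.pop(), sum(amt for _, amt in booked)

print(parent_total(children.values()))                # None: EUR and GBP mixed
print(parent_total([("EUR", 100.0), ("EUR", 50.0)]))  # ('EUR', 150.0)
```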
For models that have source data from multiple currencies but that do not have currency conversion enabled,
data can be displayed in the separate source currencies, but it cannot be converted into a target currency. Data
will only be aggregated if it belongs to the same source currency.
This functionality is similar to displaying source currencies for a model with currency conversion enabled, and
the same restrictions for planning operations apply. For more information, see Plan with Currency Conversion
[page 2239].
Related Information
You can use the Calculation Editor to add a currency conversion row or column to your table.
Prerequisites
Select a table that's based on a model with currency conversion. To learn more about currency conversion, see
Work with Currencies in a Classic Account Model [page 778] or Work with Currencies in a Model with Measures
[page 779].
Context
A currency conversion row or column allows you to see your values in different currencies and conversion rates.
There can be multiple conversion rates defined for each currency in the model’s currency table, and each
currency can apply to different dates and different categories or versions.
If the currency conversion table for your model contains different conversion rates for the target currency, you
can use the Date, Category, and Rate Version settings to control which rates are used in each conversion that
you apply to a table.
Procedure
Option Description
Using a classic account model In the Cross Calculations dimension, select (More) Add Calculation .
Using a model with measures In a Measures dimension, select (More) Add Calculation .
Option Description
Booking Date Converts each value using the rate that applies to its own date.
Booking Date + 1 Applies a rate from a year, quarter, or month after the date of each value.
Booking Date - 1 Applies a rate from a year, quarter, or month before the date of each value.
Fixed Date Applies the rates defined for that date to all values.
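As a hedged illustration of how these Date options select a rate period, here is a sketch assuming monthly rate periods; the rates table, period format, and mode names are made up for the example and are not product behavior:

```python
# Illustrative sketch only: how the Date setting could pick a rate period for
# each value, assuming monthly periods. The rates table is invented.
rates = {"2023-01": 1.05, "2023-02": 1.07, "2023-03": 1.10}

def shift_month(period, offset):
    """Shift a 'YYYY-MM' period by a number of months."""
    year, month = map(int, period.split("-"))
    month += offset
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    return f"{year}-{month:02d}"

def rate_for(booking_period, mode, fixed=None):
    """Pick the conversion rate for a value booked in booking_period."""
    if mode == "booking":       # Booking Date
        return rates[booking_period]
    if mode == "booking+1":     # Booking Date + 1
        return rates[shift_month(booking_period, 1)]
    if mode == "booking-1":     # Booking Date - 1
        return rates[shift_month(booking_period, -1)]
    if mode == "fixed":         # Fixed Date: same rate for all values
        return rates[fixed]

print(rate_for("2023-02", "booking"))    # 1.07
print(rate_for("2023-02", "booking-1"))  # 1.05
```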
7. Choose a Category.
Option Description
Dynamic Uses the conversion rate defined for the category or version that corresponds to each value in the model.
Actuals, Forecast, and so on Applies the conversion rate defined for the selected category to all values.
Specific Specify a Rate Version to use a conversion rate that's applied to specific versions.
8. Select OK.
A row or column is added. It shows the data values converted to the selected currency using the rates that
you specified.
Note
If the maximum number of currency conversions for a table based on a classic account model has been
exceeded, the Select Conversions menu appears. You can change the currency conversions that you
want to display and then select OK.
If the currency conversion fails due to missing rates, a message is shown. Select Show more to show
the rates that are missing by currency, category, rate type, and date.
Related Information
Prerequisites
Currency calculations must exist. For more information, see Adding a Currency Conversion Row or Column
[page 1561].
Context
These steps apply to a classic account model. For the new model type, currency conversions are added as
measures. You can change the filter on the measures structure to set which conversions are displayed.
Procedure
The dialog shows the maximum number of conversions that you can select, as determined by the model
preferences.
Some currencies may be used in calculations. To view any dependencies, select the (Information) icon
beside a currency.
3. Select OK.
Results
Rows or columns with the selected currencies will appear in the table.
There are some usage restrictions and some features to be aware of when creating stories using a live data
connection to SAP BW or SAP BW/4HANA.
For information on unsupported features with SAP BW, SAP S/4HANA and SAP BPC Live Connections in SAP
Analytics Cloud, see 2788384 .
Universal Display Hierarchy is a BW Query setting that allows two or more dimensions defined in the rows or
columns to be displayed hierarchically. This setting can be activated in the rows or columns of a table.
Restriction
The Universal Display Hierarchy setting is not supported for chart visualizations. It is only supported for
table visualizations.
The following topics provide additional information to consider when working with SAP BW and stories. Even
though the title only includes “(Specifics for SAP BW)”, the information is also valid for SAP BW/4HANA
connections.
Feature Description
Calculation Editor (Specifics for SAP BW) [page 1565]: Learn about specifics on SAP BW live data connections when using the calculation editor.
Create Custom Group Hierarchies (Specifics for SAP BW) [page 1567]: If the SAP BW live data that you're working with doesn't have a hierarchy that is organized in the way that you'd like, you can create your own custom group hierarchies in your story.
Currency Conversion (Specifics for SAP BW) [page 1569]: Convert the same currency or mixed currency values to another currency.
Custom Hierarchies on Choropleth Layer (Specifics for SAP BW) [page 1570]: Apply user-defined hierarchies with custom shape files in the geo choropleth layer.
Linked Dimension on Shared Hierarchies (Specifics for SAP BW) [page 1573]: You can link on two dimensions with active hierarchies in two different SAP BW data sources.
Performance Optimization (Specifics for SAP BW) [page 1573]: Learn how you can improve performance when working with the live data connection to SAP BW.
Story and Page Filter (Specifics for SAP BW) [page 1577]: Use the story or page filter with cascading dimensions.
Searching in Filters and Input Controls (Specifics for SAP BW) [page 1578]: Learn about specifics on SAP BW live data connections when searching in filters or input controls.
Input Controls: Single Selection Choices for Filters (Specific to SAP BW) [page 1579]: In your input control and filter settings, you can choose to allow the selection of both parents and their children, rather than one or the other.
Pasting Values into Page Filters (Specifics for SAP BW) [page 1580]: When working with SAP BW live data sources, you can paste values into your story or page filter from a list.
Show Un-Compounded Key (Specifics for SAP BW) [page 1581]: Display the un-compounded key in a story when you want to see the part of a compounded key that makes sense in the current query context.
Tables (Specifics for SAP BW) [page 1582]: Learn about specifics on SAP BW live data connections when working with tables.
Time Dimensions in SAP Analytics Cloud (Specifics for SAP BW) [page 1584]: Guidelines for using SAP BW time dimensions in SAP Analytics Cloud.
Time-Based Variances (Specifics for SAP BW) [page 1585]: You can create a time-based variance in a chart that uses data from SAP BW live data sources, but there are some usage restrictions.
Time Series Charts (Specifics for SAP BW) [page 1586]: You can use time series charts with level-based hierarchies.
There are specific issues you need to be aware of when you use the calculation editor in SAP Analytics Cloud
with SAP BW live data connections.
You can use the calculation editor to create custom calculations such as aggregations, calculated or restricted
measures, and so on. However, before creating calculations for SAP BW live data connections, you may need to
make modifications or apply some SAP notes. The following sections list the special requirements for different
calculation types.
If you create a calculated measure based on measures with their sign reversed, the following behavior
applies:
• If the calculated measure is based on one single measure, sign reverse is inherited.
• If the calculated measure is based on two or more measures and one of them is not reversed, sign reverse
will be ignored.
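This rule can be sketched as a simple predicate. It is an illustration of the stated behavior, not an SAP API, and the measure names are made up:

```python
# Sketch: a calculated measure inherits "reverse sign" only when every
# underlying measure is sign-reversed; if any one is not, the reversal
# is ignored.
def inherits_sign_reverse(base_measures):
    """base_measures: list of (name, is_sign_reversed) pairs."""
    return all(is_reversed for _, is_reversed in base_measures)

print(inherits_sign_reverse([("Costs", True)]))                    # True
print(inherits_sign_reverse([("Costs", True), ("Sales", False)]))  # False
```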
The creation of a restricted measure on top of another restricted measure with the same dimension is not
supported.
To use calculated measures or other calculations in your restricted measures in SAP BW, you need to apply the
fix described in the following SAP Note.
SAP Note 2984958: InA: Local selection (measure) not updated if dependent object changed
(Charts in the Optimized Story Experience) For date dimensions, you can specify a dynamic restriction based
on the current date.
Before creating a calculation to calculate the difference between time periods in SAP BW, you need to apply the
fix described in the following SAP Note.
Aggregation types COUNTNULL and AVERAGENULL are not supported for SAP BW.
NULL values resulting from restricted measures are ignored in exception aggregation. The following example
shows the exception aggregation restricted to group 1 and group 2 based on the product group:
[Table not fully recoverable from this extract. Columns: Product Group, Sales, Sales (Restricted), Count on Sales (Restricted), COUNT excl. NULL on Sales (Restricted), Average on Sales (Restricted), AVERAGE excl. 0, NULL on Sales (Restricted). The rows compare the results on SAP BW and SAP HANA; for example, Group 3 has a Sales value of 300.]
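The excl.-NULL aggregation variants described above can be illustrated with a small sketch. The values are made up, and Python's None stands in for NULL:

```python
# Sketch: COUNT/AVERAGE exception aggregation over a restricted measure that
# yields NULL (None) for rows outside the restriction; the "excl. NULL"
# variants skip those rows instead of counting them.
values = [300, None, None]  # restricted measure over one product group

count_all = len(values)                                # plain COUNT
count_excl_null = sum(1 for v in values if v is not None)
booked = [v for v in values if v is not None]
avg_excl_null = sum(booked) / len(booked) if booked else None

print(count_all, count_excl_null, avg_excl_null)  # 3 1 300.0
```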
Related Information
There are some usage restrictions and some features to be aware of when working with SAP Analytics Cloud
charts that are based on SAP BW live data connections.
In the optimized story experience, when you have date dimensions in your chart, you can add time calculations
to the chart from the builder panel.
For more information on dynamic time calculations, see Dynamically Add a Time Calculation [page 1260].
When working with SAP BW live data sources, you can create custom group hierarchies in your SAP Analytics
Cloud story.
If the SAP BW live data that you're working with doesn't have a hierarchy that is organized in the way that you'd
like, you can create your own custom group hierarchies in your story.
Restriction
Custom group hierarchies (custom groupings) are not supported in the following widgets or situations.
1. If you haven't already done so, select a table or chart in your story and open the Builder panel.
2. Select a dimension and then select the hierarchy in one of the following ways:
Your custom group is created and the table or chart is updated to show that hierarchy.
In an optimized story, you can create custom group hierarchies, find those custom groups more easily, and
undo and redo changes to your custom groups.
Restriction
1. If you haven't already done so, select a table or chart in your story and open the Builder panel.
2. Select a dimension and then select the hierarchy in one of the following ways:
• Table:
1. Select (Hierarchy) Change Hierarchy , and from the drop-down list select a different hierarchy
or custom group.
2. Select Apply.
Prerequisites
To use currency conversion at query runtime, both your SAP BW system and the query you want to
use need to support the conversion. To be able to use Select Conversions, you need measures with
currency values.
Note
You created a story with a table, chart or both containing measures with currency values.
Procedure
2. Select the table or chart you want to convert the currency of, click the (More Actions) icon, and choose
Select Conversions. You can do this in every mode, including the Edit mode.
3. In the Select Conversions pop-up, you can change the Target Currency from the list.
4. Choose the corresponding Currency Conversion Type from the drop-down list.
5. Click OK to confirm your selection.
Results
When you save the story, the currencies you converted are persisted.
Tip
If you want quick access to your currency conversion table, you can pin the widget to your home
screen. In your story, select (More Actions) and then select Pin to Home. For the table, you can view and
change the currency from the home screen.
Related Information
Apply user-defined hierarchies with custom shape files in the geo choropleth layer.
Note that the drill filter token above the map displays the location-based drill when combining level-based
and location-based drills.
You can expand, drill, or change hierarchy levels within charts (in the optimized story experience).
For charts in stories that are based on an SAP BW Live data connection, you can do the following drill and
expand actions. These actions also apply to charts that have custom groups.
• Single data point: right-click a data point and then choose Drill Down, Drill Up, or Expand/Collapse.
• Chart:
1. Right-click in the chart or select the action menu ( (More Actions)) and then select Drill.
2. Select a dimension, and then set the hierarchy level.
Note
When you set the hierarchy level from the context or action menu, it will be considered the highest
hierarchy level, and drilling up will be disabled.
Restriction
• SAP BW data models can't support Expand on multiple dimensions at the same time; only the outer
dimension will be expanded.
• A variable-level active hierarchy will override a chart-level active hierarchy.
• When you have more than one level expanded, you may not be able to undo your actions.
Related Information
SAP BW hierarchies with linked nodes are supported in SAP Analytics Cloud, with some restrictions.
Restriction
You can use SAP BW hierarchies that have linked nodes, but only in the optimized view mode and the
optimized story experience.
For more information on the optimized story experience, see Optimized Story Experience [page 1133].
You may need to apply a support package for this feature to work. For information on which support
package to apply, refer to the following SAP Note:
The following restrictions need to be considered when using the linked nodes in SAP Analytics Cloud stories.
• A multi-tenant scenario (Save/Open story, Bookmarks): For SAP BW linked nodes that exist in the query,
the SIDs change when the tenant changes. If the SID was saved with the story and the story is then
opened on another tenant, the SID can no longer be found. Instead of using the SID, the KEY is sent to
the backend. However, the node that is selected from the backend may not be the one that you expect,
but a different linked (or original) node.
• Classic to optimized story conversion: If you have a classic design experience story that is using
optimized view mode, when you open the View mode, the SAP BW linked nodes may not work for
persisted InA. Workaround: Convert the story to optimized design mode and then save it before
opening view mode.
• Input control / Story filter: To be able to see the linked nodes inside the input control, you need to resave
the optimized story. De-selecting a linked node will not trigger a query to fetch the whole hierarchy.
After expanding or drilling, there might be some mismatch in the UI: one linked node appears selected,
but the other one is not selected. This mismatch will automatically be fixed in the UI when you change
the selection in the input control.
• UDH: SIDs from the backend are also used for the Universal Display Hierarchy (UDH). However, for both
linked nodes and duplicate nodes, the generated SIDs are not stable or unique. There is the added
technical issue that these SIDs are assigned later than the drill operation.
• Frontend calculation (FEC): Frontend calculation in a table with linked nodes in hierarchical data is not
supported.
• CSV Export: CSV export with a hierarchy containing linked nodes and scope Point of view is not
supported. Workaround: You can either select the Flatten the Hierarchy option or set the scope to All
to get an export without the parent nodes.
• Waterfall charts: Currently, SAP BW linked nodes are not supported in waterfall charts.
• Expand: Currently, linked nodes don't work with the expand feature.
You can link on two dimensions with active hierarchies in two different SAP BW data sources.
If both hierarchies are the same (technical ID, version, and validto are identical), a story filter or a page filter on
one of the hierarchies will be dispatched directly to all the widgets of the other data source.
If the hierarchies are not the same, an additional query looks for matching hierarchy nodes or leaves. A story or
page filter on one of the hierarchies will be converted to a filter containing these matching hierarchy nodes or
leaves of the other hierarchy.
If the hierarchies are not the same and there are no matching hierarchy nodes or leaves, the widget will show a
message saying "No matching data is available to display".
Learn how you can improve performance when working with the live data connection to SAP BW.
Note
• Merging SAP BW Queries (Specifics for SAP BW) [page 1574]: Improve performance by saving time,
round-trips, and memory by merging backend calls for multiple charts within a story.
• Memory Limit for SAP BW Queries (Specifics for SAP BW) [page 1576]: A high data volume in SAP BW
queries can result in memory issues.
• Optimize Query Performance (Specifics for SAP BW) [page 1576]: Reduce the amount of metadata that
is loading in your widget to increase performance when using queries with two structures.
Improve performance by saving time, round-trips, and memory by merging backend calls for multiple charts
within a story.
Activate the query merge option for your story by clicking (Edit Story) Query Settings and selecting
Enable Query Merge.
Select Visualize Query Merge to display colored charts in Edit mode. The same color in different charts
indicates that queries have been merged.
Chart Types
Instead of sending single queries for each chart widget, combine requests with each other. The following chart
types support the query merge option:
• Bar/Column
• Combination Column & Line
• Combination Stacked Column & Line
• Stacked Bar/Column
• Stacked Area
• Line
• Bullet
• Numeric Point
In general, charts within a story need to have the same query and the same variables in order to combine the
queries.
Note
The query merge process is triggered when opening the story (also triggered with the Set variables dialog
when Automatically open prompt when story opens is set) or when manually refreshing the story by clicking
Refresh.
• Queries contain the same dimensions with the same hierarchies and drill.
• Queries use the same sorting and ranking, but don't have any filters.
• Presentation types are the same.
• Queries are part of the same receiver group if they use linked analysis.
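As a rough mental model, the merge conditions above amount to an equality check over a handful of query properties. The following sketch is illustrative only; the field names are invented, and this is not how the product implements the check.

```python
def can_merge(q1, q2):
    # Two chart queries are merge candidates only if all of the
    # documented conditions hold.
    return (
        q1["query"] == q2["query"]                    # same query
        and q1["variables"] == q2["variables"]        # same variables
        and q1["dimensions"] == q2["dimensions"]      # same dimensions, hierarchies, drill
        and q1["sorting"] == q2["sorting"]            # same sorting and ranking
        and not q1["filters"] and not q2["filters"]   # no filters on either query
        and q1["presentation"] == q2["presentation"]  # same presentation type
        and q1["receiver_group"] == q2["receiver_group"]  # same linked-analysis group
    )

q_a = {"query": "Q1", "variables": {"0FISCYEAR": "2024"}, "dimensions": ("Region",),
       "sorting": None, "filters": [], "presentation": "key", "receiver_group": 1}
q_b = dict(q_a)                       # identical settings: merge is possible
q_c = dict(q_a, filters=["Country"])  # a filter blocks the merge

print(can_merge(q_a, q_b))  # True
print(can_merge(q_a, q_c))  # False
```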
Related Information
• Backwards Compatibility Guidance for Using Optimized Story Experience with SAP BW [page 1135]
• Enable InA on your ABAP Application Server [page 391]
If you need to reduce the data volume of an SAP BW query for performance reasons, you can set a maximum
value for the OLAP effort. For more information, see SAP Note 2572550 .
To increase performance in your SAP Analytics Cloud story when using SAP BW queries with two structures,
reduce the amount of metadata that is being loaded.
When you are working with a large number of structure members and you notice decreased performance, you
can use the option Optimize Performance for Two-Structure Queries.
The option is in the Data and Performance section of the (Model Preferences) dialog.
Note
When switching on this performance optimization option, you need to be aware of the following impacts:
• User-defined formatting that is set up in the model is not applied. Instead, scale or decimal place
settings from the SAP BW backend are adopted.
• Variance charts show reversed values in measures with sign reversal.
• Sign reversal for restricted measures is not supported.
• Error bars cannot build a percentage-specific formula for SAP BW percentage measures.
• The styling panel does not show correct values for scale and decimal places.
• In the geo tooltips, the scaling factor is not displayed according to the settings.
• Date and time measures cannot be used for charts or geo maps even though they are displayed in the
drop-down menu of the Builder panel.
Another way to optimize performance is to use the option Reduce BW Query Metadata, which will reduce the
number of attributes in the model.
This option is also in the Data and Performance section of the (Model Preferences) dialog.
Related Information
SAP BW features and usage restrictions that you need to be aware of when you use SAP Analytics Cloud story
or page filters.
Note
Dynamic filters that are defined in BW Query designer are not supported in SAP Analytics Cloud story or
page filters.
When you create story or page filters within your story, you can use Filter by Range on non-numeric
dimensions.
Note
Hierarchy variables shouldn't be used if your story contains story or page filters. The page filters don't change
to match your hierarchy when you use the variable prompt to change your hierarchy values.
Drilling on an unassigned node doesn't return consistent results. If you drill on an unassigned node in your
chart and save and close the story, when you reopen the story the saved data is not returned. To resolve this
issue, deploy the following SAP Note to your SAP BW server:
• SAP Note 3029060 desc: InA: Rest node initially not expanded if dimension is initially on free axis
Learn about specifics on SAP BW live data connections when searching in filters or input controls.
You can search in page filters and input controls using either the ID or the description. (See Search in Filters
and Input Controls [page 164]). However, SAP BW connections have some usage restrictions when searching.
Prerequisite
The search feature may not work properly if you don't deploy the following SAP Notes to your SAP BW server
first.
• SAP Note 2795600 desc: ABAP BICS: API does not allow to define excluding selections in the flat
member value help
• SAP Note 2794525 desc: Source code changes as a prerequisite for other SAP Notes
• SAP Note 2815926 desc: Allow maxrowcount / paging for CDS queries
• SAP Note 2793958 desc: InA: MaxTupleCount in the value help response has a wrong value
• SAP Note 2819533 desc: InA: additional fields in the value help response: Level, TupleCountTotal and
ChildCount
• SAP Note 2822334 desc: InA: additional fields in the value help response: Level, TupleCountTotal and
ChildCount
• SAP Note 2883357 desc: BICS: Read mode Q not available in selector read modes
• SAP Note 2910522 desc: BW InA: F4 help with pattern filter and message BRAIN626
• SAP Note 2912673 desc: ValueHelp returns incorrect results for search on “* *”
• SAP Note 2919007 desc: InA F4 help: Wrong result using a pattern search that contains also “!”
• SAP Note 2958734 desc: F4 help and hierarchy node filter on dimension <> F4 help dimension
• SAP Note 3364468 desc: BW InA: Virtual Attribute <dimension>.path
• You won't have the option to show only selected members in the input control. This is due to the restriction
on processing requests when you have the same dimension used in more than one filter.
• If searching when the description is displayed doesn't return the expected results, set the input control to
display the ID instead and try searching again.
Example
If you have a compound dimension member with ID 10034/F32K, the member will be found when
searching for F32K, but not when searching for 10034.
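The compound-key behavior in the example can be modeled like this. The member IDs other than 10034/F32K are invented, and the function is only an illustrative model of the documented search behavior.

```python
def search_members(term, member_ids):
    # Illustrative model: for a compound ID such as "10034/F32K", the search
    # matches only the local (last) part of the key, not the compound prefix.
    return [m for m in member_ids if term.lower() in m.split("/")[-1].lower()]

members = ["10034/F32K", "10034/G81A"]
print(search_members("F32K", members))   # ['10034/F32K']
print(search_members("10034", members))  # []
```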
In your input control and filter settings, you can choose to allow the selection of both parents and their children,
rather than one or the other.
The default Single Selection option in an input control allows you to include members from different levels, but
you can't include both a parent and its child.
To be able to include parents and their children as selectable members, do the following:
1. In the input control dialog, turn on the toggle for Select individual members.
2. Select the specific members that you want to include.
Note
When you turn the toggle off, the selection is cleared and you will have to re-select members to use in the
input control.
Tip
In the filter (or input control), you can hide the All option. Select the filter, right-click, and then select
Show/Hide Select All Option.
When working with SAP BW live data sources, you can paste values into your story or page filter from a list.
When you have a page filter that has a large number of values, it can take a long time to find the specific values
to display in your filter. If you have the list of values saved somewhere else, you can paste them in and update
your filter.
Note
If you can't use the search dialog because of SAP BW restrictions, you won't be able to append or overwrite
values. For a list of SAP BW search restrictions, see Searching in Filters and Input Controls (Specifics for
SAP BW) [page 1578].
• Pasting values only works for flat data, not hierarchical data.
• The dialog to paste to is only available for multi-select filters, not single-select filters.
• Only exact matches will be found, that is, exact spelling (not case-sensitive) or complete IDs. For example,
if you are trying to find “AB3D7” and you paste in “AB3”, you will see a message stating that “AB3” could
not be found.
• The values you paste must match the displayed format: if you display ID and Description, you can paste in
either ID or Description (but not both values for any one member). If you display the member IDs, you can
only paste in ID values, and when displaying member descriptions, you can only paste in description values.
• Pasting values for date dimensions (time, day, or date) is not available when your display is set to ID and
Description.
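The matching rule can be illustrated with a short sketch. It models only the documented behavior (exact, case-insensitive matches, with partial IDs reported as not found) and is not product code.

```python
def match_pasted_values(pasted, member_ids):
    # Exact, case-insensitive comparison: partial values are not matched.
    lookup = {m.lower(): m for m in member_ids}
    matched, not_found = [], []
    for value in pasted:
        member = lookup.get(value.strip().lower())
        (matched if member else not_found).append(member or value)
    return matched, not_found

members = ["AB3D7", "CD512"]
print(match_pasted_values(["ab3d7", "AB3"], members))
# (['AB3D7'], ['AB3'])  -- "AB3" is only a partial ID, so it is reported as not found
```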
Prerequisite
Set up a text file with the values that you want to paste, one value per line. You can include both IDs and
descriptions, but not on the same line.
Procedure
The filter is updated as is the chart or table that uses that filter.
Restriction
A message will display the first few values that couldn't be found, but if there are a lot of values, it won't
display all of them.
When you have compound dimensions with leaf nodes that can't be matched, you won't be shown the list
of values that didn't match.
Related Information
Display an SAP BW un-compounded key in an SAP Analytics Cloud story when you want to see the part of a
compounded key that makes sense in the current query context.
Restriction
The option to show un-compounded keys is available for SAP BW and SAP BW/4HANA, but only for
optimized view mode and the optimized story experience.
For more information on the optimized story experience, see Optimized Story Experience [page 1133].
You may need to apply a support package for this feature to work. For information on which support
package to apply, refer to the following SAP Note:
You can enable a setting that displays un-compounded keys in your stories.
Example
Your data includes regions and countries, but a region is not uniquely identifiable without the country. If
there is no filter set on the country, you always see the compounded key.
However, if you restrict a widget to a single country, you will want to see the un-compounded key. (This
restriction can happen via story filter, page filter, or widget filter.)
Only the key display of the member values is changed, not the description fields. This means that there is a
difference only if the widget shows the key for a dimension.
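The region/country example can be modeled in a few lines. The key format and names here are invented, and the snippet only illustrates the documented display behavior.

```python
def displayed_key(compounded_key, country_filter):
    # Illustrative model: when a widget is restricted to a single country,
    # the country prefix can be dropped from the compounded region key.
    country, region = compounded_key.split("/")
    return region if country_filter == country else compounded_key

print(displayed_key("DE/Bavaria", "DE"))  # Bavaria  (un-compounded key)
print(displayed_key("DE/Bavaria", None))  # DE/Bavaria  (compounded key)
```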
After you enable un-compounded key display, there are some circumstances that won't allow the un-
compounded key to be displayed. In those situations, the compounded key is displayed.
Widget-Specific Filters
Filters that are added to charts and tables (and appear in the Applied to Chart or Applied to Table context
menu) will always show the compounded key.
Tuple filters may be able to display an un-compounded key, however, when those filters interact with other
filters, only compounded keys will be displayed.
Hierarchy node filters that have one leaf, but also have the instruction to convert to flat selection won't display
an un-compounded key.
DISPLAY_KEY Property
If you add the DISPLAY_KEY property to a table dimension, the compounded key is displayed instead of the
un-compounded key.
Learn about specifics on SAP BW live data connections in SAP Analytics Cloud when working with tables.
When using a BW query with two structures, in-cell charts and thresholds can be defined in the table for
members of both the first and second structure dimensions.
In SAP Analytics Cloud, you can right-click a dimension header and select a particular drill level.
In order to benefit from a dynamic number of drill levels, SAP Note 3247690 must be applied to SAP BW.
You can display images from BW in table cells, if the image attribute type is a URL.
Note
When you export the table to PDF, the image's URL is exported, not the image.
1. In the Builder panel, find your dimension and then select (More) Properties .
2. In the Set visible properties dialog, select the appropriate property (for example, External URL).
Measures and key figures can only be sorted when there is no hierarchy defined. Even if the hierarchy is
changed to a flat presentation, the order cannot be changed.
When using SAP BW, unknown read modes received from the server can't be mapped to booked or unbooked
data. To prevent unintended changes, the Unbooked Data setting is disabled in the Builder, and the read mode
is used for the dimension.
When applying formatting (styling) rules to tables, the following levels are not supported for SAP BW live
connections:
• Displaying short, medium, or long text: If a visible property is the same as the description format and
Description is selected as a property, only description details will be shown.
Related Information
In SAP BW, some time dimensions are part of a virtual time hierarchy. SAP Analytics Cloud needs to include
the full set of those time dimensions in the Query definition, either as Navigation Attributes, or as
Free-Dimensions (Free Characteristic section of the Query Designer).
For example, to be able to use hierarchy 0YEA_MON_DAY, the following dimensions also need to be part of the
query definition: 0CALYEAR, 0CALMONTH and 0CALDAY. (To be treated as a time dimension in SAP Analytics
Cloud, the dimension needs to have a reference to 0DATE or the type needs to be DATS.)
If the date dimension has a correctly defined level-based hierarchy, it can be treated as a time dimension
instead of as a dimension with alphabetically ordered member descriptions (even in a Flat Representation).
To treat a dimension from a live SAP BW data source as a time dimension in SAP Analytics Cloud, use the
following steps:
1. To make the levels of the hierarchy available, you need to activate the virtual time hierarchies within SAP
BW. For more information, see Activating Virtual Time Hierarchies.
2. To allow the Information Access Layer (InA) to consume the Time dimension as a Level-Based Hierarchy,
please apply the following note.
SAP Note 2637157 desc: InA: Support time level hierarchies
Related Information
You can create a time-based variance in a chart that uses data from SAP BW live data sources, but there are
some restrictions.
Before you can create your time-based variance, you need to make sure that your SAP BW live data source has
a time hierarchy defined. You will also need to be aware of the following guidelines and usage restrictions:
• Your time hierarchy must have day level granularity. (You won't be able to use Year-Quarter-Month, or
Year-Month (YQM or YM) in your variances.)
• Tuple filters are not supported.
• A single value range filter must have the full range, not a partial range.
For example, if your selected date range is 2003-2004, but December 2003 is missing, that would be a
partial range.
• When changing hierarchy levels, you can't use Include Parent Level.
• Time variance supports having a story/page/group filter and a chart filter defined on the time dimension
at the same time.
• Time variance can include values outside of the data filter for story/page/group filters on the time
dimension.
For example, filtering on {2003,2004,2005} will give you previous year variance data for 2003 (equivalent
to 2003-2002) even though 2002 is not in the original filter of {2003,2004,2005}.
• Only one time dimension is allowed. Time variance on a time dimension is only supported if no other
different time dimensions are used in the chart (axis, filter, or other variances).
• Time variance is not supported when a time dimension filter contains a member with lower granularity
than the displayed drill, except in the following situations:
• Chart member filter – the filter is always supported.
• Chart/story/page/group range filter – the filter is supported if it can be expressed only using filter
members at the displayed drill granularity.
• Time variances are not supported on any story-created calculations or SAP BW measures that are not
selection candidates (candidates are determined by the measure definition in the BW model).
• Time variances are not supported in blending.
To learn how to add a time-based variance to your chart, see Adding Time-Based Variances to Charts [page
1318].
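The previous-year example above can be worked through numerically. The sales figures are invented, and the snippet is only an illustrative model of how the comparison value can come from outside the story filter.

```python
sales = {2002: 80, 2003: 100, 2004: 120, 2005: 150}
story_filter = {2003, 2004, 2005}

# Previous-year variance is computed per displayed year; for 2003 the
# comparison value (2002) lies outside the story filter.
variance = {year: sales[year] - sales[year - 1] for year in sorted(story_filter)}
print(variance)  # {2003: 20, 2004: 20, 2005: 30}
```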
Related Information
Time Dimensions in SAP Analytics Cloud (Specifics for SAP BW) [page 1584]
Time Series Charts (Specifics for SAP BW) [page 1586]
You can create time series charts with level-based hierarchies that use data from SAP BW live data sources, but
there are some restrictions.
In order to switch between different time granularity levels in SAP Analytics Cloud, all dimensions
(characteristics) that are members of the level-based hierarchy should be among the free characteristics.
The following screenshot shows the default hierarchy based on Calendar day. The hierarchy levels are Year,
Quarter, Month, and Day.
Quarter is also modeled in the free characteristic area so that you can switch between different hierarchies.
One of the valid hierarchies should be present as a default hierarchy for the date dimension. Otherwise, the
time series chart won't be rendered until the hierarchy is manually changed.
The following screenshot shows that the hierarchy (with levels Year, Month, and Day) is active in the query
definition when you run the query in SAP Analytics Cloud.
If you want to make the levels of the hierarchy available, you need to activate the virtual time hierarchies within
SAP BW. For more information, see Activating Virtual Time Hierarchies.
Restrictions
• Restricted measure on a level-based date dimension with any of the following scenarios:
• The value is not displayed if you select date calculations (for example, previous year).
• The chart stops rendering if you select by range.
You can display the range by selecting members using an input control and then filtering by range.
Related Information
Time Dimensions in SAP Analytics Cloud (Specifics for SAP BW) [page 1584]
Time-Based Variances (Specifics for SAP BW) [page 1585]
When using an SAP BW query with two structures in SAP Analytics Cloud, in certain situations you can remove
the second structure.
Note
For some widgets and features in a story, you can remove the second structure when the structure is filtered
to contain only one structure member. The following information describes how different features will behave
after you remove the second structure.
The second structure can be removed from the table rows or columns when there is exactly one filter member
on that structure.
You can use any method to create a filter that contains only one member: story filter, widget filter, or builder
panel.
The second structure can be removed from the chart dimensions when there is exactly one filter member on
that structure.
As long as the second structure is in a filter but not on an axis, it can be removed from the reference line.
If you select a parent node in a hierarchy, the second structure is applied because the child nodes are
automatically selected. However, if you select a child node, the second structure can be removed from the axis.
SAP BW version- and time-dependent hierarchies are supported in SAP Analytics Cloud, with some
restrictions.
The following restrictions need to be considered when using either version-dependent or time-dependent
hierarchies.
• Linked dimensions (the linking for hierarchies is done only by the name): The hierarchy link dialog hasn't
been changed, so the linking will work as it did previously. Because you can only link hierarchies by their
name, you don't need to know about the due date or version.
• Filters/Input Controls (member selector: no date or version selection): You can specify the hierarchy in
the filter dialog.
• Dynamic text (variables on the dimension): If there is a variable on the dimension (dueDate and/or
version), the variable is always taken.
• Level-Based Hierarchies: Version and due date are not supported for level-based hierarchies.
• Variable handling (changing variables that have incorrect information): If you encounter problems when
trying to submit a variable dialog, check if a variable is missing; if so, then add it. For example, you have
variables for hierarchy and due date but not for version. If you now add a version (that doesn't exist on
the selected variable hierarchy), the variable definition call locks the variables, and the hierarchy isn't
changed.
• Converting old stories (story contains a persisted due date): Old stories might have a due date persisted
in the story.json. To be able to use the default due date, you need to open the hierarchy dialog once and
submit it. Then re-save the story.
You can apply some styling and customization options to different elements on the SAP Analytics Cloud story
canvas, including charts, tables, clocks, and text boxes.
You can style the appearance of SAP Analytics Cloud story widgets by changing fonts, colors, axis scaling on
charts, and so on.
Input controls, charts, tables, and other widgets have their own styling options. You can change background
colors or fonts, add borders, set hyperlinks, apply filters, and so on.
Note
You can also set default styles for all widgets in your story preferences. For more information, see Story
Preferences and Details [page 1035].
Some options may not be available to all users, and some options are only available when a specific
area of the widget is highlighted. The heading in the Styling panel identifies the area.
For example, for a chart or table the Styling panel heading may show Title, Data Cell, Axis Labels, and so
on. Selecting a different part of the widget changes the heading and the styling options.
You can apply many individual styling options in table widgets, whether you are setting different styles for a set
of data cells, the title cell, or some other portion of the table. However, too many individual styling options could
affect the table's performance.
There is a limit of 500 individual styling options. When you have reached that limit and add additional styling
options, you will see a warning message that includes an option to remove all individual styling options. If you
don't want to remove all your individual styling options, you can select specific cells and reset them to their
default settings.
If you want to apply styling options to your table data, consider using styling rules instead. For more
information on styling rules, see Styling Rules for Tables [page 1602].
The following table contains all the style properties available for the widget types. However, some properties or
a subset of properties may not be available for a specific widget. For example, while all of the widgets have the
Actions “Order” properties, only some widgets have “Rotate” properties.
Widget (applies to: Chart, Clock, Data Action Trigger, Multi Action Trigger, Image, Input Control, Pictograms,
Pinned Visualizations, RSS Reader, Table, Text, Value Driver Trees)
• Background Color: select a background color for this widget.
• Border: after you add at least one border line to your widget, you can change the line using the following
options:
• Style
• Color
• Line Width
• Corner Radius
• Rotate:
• Rotate Right 90
• Rotate Left 90
• Rotate 180
• Properties:
• Enable Sort Option in Boardroom
Data Points (applies to: Chart): Select all data points or specific data points, lines, or other options.
Font (applies to: Chart, Text Selection): Select whether the style change applies to all the text labels or one or
more of the following labels.
Standard labels for most chart types:
• Axis Label
• Data/Dimension Label
• Legend
• Chart Title
• Subtitle
• Trellis Label
• Error Message
• Primary
• Primary Value
• Primary Label
• Primary Variance Value
• Primary Variance Title
• Secondary
• Secondary Value
• Secondary Label
• Footer Text
Scale: select how to display the numbers on an axis: whether to show all digits for
each data point or to display the values as thousands, millions, or billions.
• Unformatted
• Thousand
• Million
• Billion
• Auto-formatted: picks the best scale for the values.
1. The system finds the absolute maximum value for each measure in the
widget.
2. It then finds the auto number scale for each axis (Charts) or row/column (Table).
If there are multiple measures on the same axis, it uses the maximum absolute value to
determine the number scale.
The following numbers are used to determine which scale is automatically
chosen:
• Unformatted: values < 10,000
• Thousand: values < 10,000,000
• Million: values < 10,000,000,000
• Billion: values >= 10,000,000,000
For example, if the displayed values are less than 1 (for example, 0.11, 0.05)
when the number format is set to Million, then Auto-formatted changes the
format to Thousand.
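The documented thresholds can be expressed directly. This sketch is illustrative only and is not how the product implements the selection.

```python
def auto_scale(max_abs_value):
    # Pick the number scale from the absolute maximum value on the
    # axis (chart) or row/column (table), using the documented thresholds.
    if max_abs_value < 10_000:
        return "Unformatted"
    if max_abs_value < 10_000_000:
        return "Thousand"
    if max_abs_value < 10_000_000_000:
        return "Million"
    return "Billion"

print(auto_scale(9_500))            # Unformatted
print(auto_scale(2_000_000))        # Thousand
print(auto_scale(3_500_000_000))    # Million
print(auto_scale(12_000_000_000))   # Billion
```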
Tooltip Account Scale / Tooltip Measure Scale (available in the optimized story
experience): choose how to display the number scale for tooltips that are added
to a chart:
• Default
• Inherit Scale: match the scale that is being used in the chart.
Inherit Scale is not supported when Scale is a percentage.
Scale Format
• Default
• k, m, bn
• Thousand, Million, Billion
(Table) Display unit / Currencies: select whether to display scale units or currencies in the same cell as the numeric value or in a row or column header.
• Default
• Row
• Column
• Cells
Restriction
(Chart) The results may not be as expected if the number of model-defined decimal places is less than your choice. The correct number of decimal places is displayed, but the value has been rounded up based on the model-defined number of decimal places.
• (Chart) Automatic: truncates values to at least three digits. Values that are
less than one will have three decimal places. Larger values will have the
decimal places truncated.
Show Sign As
• Default
• +/-
• ( ): brackets will appear around negative values instead of a minus sign.
• Above Chart
• Below Chart
• Beside Chart (Right)
Labels (Chart): You can change the behavior or style of different labels in your charts.
• Automatic
• Horizontal
• Diagonal
Note
• If you have multiple measures or multiple dimensions, the labels won't display on the diagonal; instead, they will be displayed in the default horizontal direction.
• Diagonal labels aren't supported when axis labels are grouped.
Grouped labels show a divider (| - bar or pipe symbol) between the
axis labels in the axis label area. The following examples are some
situations where diagonal labels aren't supported:
• A Bar/Column chart that has more than one measure and at
least one dimension.
• A Bar/Column chart that has multiple stacked dimensions.
• A drilled Time dimension.
• Right
• Left
• Middle
• Truncate by Dimension/Measure:
• Truncate by Dimension/Measure: select the dimension or measure
• Truncation Direction: select Right, Left, or Middle
Data Label Background Color: use a background color for data labels to improve
the readability of the data labels.
To use a custom color or to change the opacity of the background color, from the
data label color picker, select More and then make your choices.
Data Label
• Avoid Data Label Overlap – select whether to display all chart labels, or to
detect when labels overlap and display only some labels.
• Prioritize Largest Absolute Value
• Show Only the Front Bar Label
• Round Data Label Values – if there is not enough room on the chart, rounding
the data label values allows all labels to be displayed. However, rounded
values may not include the specified number of decimal places.
Donut or Pie
Axis (Chart): Axis Line Color: change the color of the axis line.
Slider: Minimum, Maximum, Step Size.
Note
The keypad slider is only available if you have the Digital Boardroom add-on.
Clock: Time Format, Date Format.
Note
The clock widget is only available if you have the Digital Boardroom add-on.
• Contain: The entire image is contained in the frame, maintaining the image's
aspect ratio.
• Cover: The image is scaled to cover or fill the entire frame, maintaining the
image's aspect ratio. Some parts of the image may be cropped.
• Stretched: The entire image is stretched to fit in the entire frame.
• Pan: The image is scaled to fill the horizontal dimension of the frame. The
bottom of the image may be cropped.
Pictogram (Pictograms): Use the following options to modify shapes or pictograms in your story.
Properties:
• Fill Color
• Line Color
• Outline Width
Hyperlink (Chart, Image, Pinned Visualizations, Text): Link to another story, page, or external URL from this widget.
Select a styling template from the following options:
• Default: uses best practices styling and color scheme. Includes different
column heading lines for each type of planning version (Actual, Forecast, and
so on).
• Report Styling: follows the International Business Communication Standards
(IBCS) guidelines.
• Alternating Rows: designed for list reporting.
• Basic (previously Standard): a simple design that includes gridlines and
shading for row and column headings.
Not every option is available for every template:
• Frequency of Reading Lines: changes the number of lines displayed for
ease of reading. By default this is set to one line per row.
• Show group lines: displays a line around groups in the table.
• Color Fill for Editable Cells: provide a color for cells that can be edited.
• Color Fill for Expand Icon: lets you choose the color of the expand icon.
This makes it easier to see the expand/collapse arrows in your table after you've
changed the background color.
• Symbol (default) – use the story-defined symbols and colors for the thresh-
old ranges.
• Color Values – the threshold colors are applied to the cell data.
• Color Background – the threshold colors are applied to the cell background.
• Color Background Without Values – the threshold colors are applied to the
cell background and the values are hidden.
• Default: sets the row height according to the content in the row by calculating
the needed height based on the font-size as well as icons and padding. The
default setting will adjust the height to avoid cutting off text.
• Cozy: uses more padding than the Default setting uses. Applies the padding
above and below the text within the cell.
• Condensed: uses less padding than the Default setting uses. Applies the
padding above and below the text within the cell.
• Super condensed: uses less padding than the Condensed setting uses.
• Custom: lets you specify the height of the column by changing the value in
Height in Pixels.
Style (Table): When selecting cells to apply style changes to, you can apply the change to an
entire region or to specific cells. Regions include Title, Header region, and Data
region.
You can create a new style which can be applied to cells in the current table, or to
cells in other tables in the story.
Styling Rules (Table): Styling rules allow you to make text style changes along a hierarchy, changing styles for sibling, descendant, or child members.
For more information, see Styling Rules for Tables [page 1602].
Fill: choose from available colors or choose More to display the color selector and
opacity selector.
Add or delete a Row or Column: select a header region table cell to use these
options to add custom rows or columns outside the data region.
The column or row placement will be automatic. When you have multiple dimensions or measures, you can only use the outer header region cells to add rows or
columns.
Tip
Add some content or styling to your new custom row or column before
making any changes to the table's data. Until they have been modified, the
custom rows or columns are treated as temporary objects and may disappear
when you make data changes.
• Alignment
• Line width
• Color
• Pattern
• Style
• Left Padding or Right Padding: you can add padding to the left or right side of
a cell.
Many of the styling options on the story pages are similar to those for the widgets, including font and
background options. (The Grid pages have the same options as tables.)
There are some options that are only found on the Responsive and Canvas story pages, but not every option is
available on both page types.
Grid • Show Grid (You can also use Ctrl + Alt + G to show or hide the grid.) Canvas
• Snap to Grid Responsive
• Snap to Object
Dynamic pages grow or shrink to accommodate the widgets, while Fixed pages
can be set to a specific size.
Size: set a Custom size or select a predetermined size from the list:
• Letter
• Legal
• Tabloid
• A3
• A4
• B4
• B5
• 16:9 (HD)
Continuous Height: select continuous height if you want the height to remain
constant.
Fit Page to Grid: select to automatically update the width and height settings to
align with the grid.
Show Footer
Margins: select the margins for the canvas between None, Normal, Narrow, or
Wide.
In SAP Analytics Cloud, styling rules allow you to make text style changes along a hierarchy, changing styles for
sibling, descendant, or child members.
Styling changes that you make to the hierarchy are also applied to new members in the hierarchy.
Restriction
For SAP BW, only two styling levels are currently supported: Self and All.
Restriction
Only SAP Analytics Cloud planning-enabled models can use the read-only settings.
Styling Tips
Some styles are applied to the content of the cell, not the cell itself. In those cases, the cell retains its styling
even if the position changes (for example, a row is deleted or added).
• Styling child cells will only apply the style to the child cells and not their parent cells.
• Having repetitive member names in the same color: this can only be done when the selected table
template is the new default template.
Styles that are applied to the cells and not just their contents include the following:
Related Information
Context
When a group is selected, it can be treated as a single tile. For example, you could create a group from two
charts and your company logo, and then move the group to a different location on the canvas. Styling can be
applied to the group. You can also move the group to the front or the back of the canvas.
Note
Styling can still be applied to individual tiles within a group. The alignment, size, and position of a tile within a
group can also be modified.
Procedure
1. Ctrl + Click multiple tiles, or marquee-select multiple tiles, and then select Group.
Order Items: Move the group from the front to the back of the canvas.
Available options:
• (Send Backward)
• (Send to Back)
• (Bring Forward)
• (Bring to Front)
Actions | Description
Order Items: Change the position of a tile from the front to the back of the group.
Available options:
• (Send Backward)
• (Send to Back)
• (Bring Forward)
• (Bring to Front)
Available options:
• (Align Left)
• (Align Center)
• (Align Right)
• (Align Top)
• (Align Middle)
• (Align Bottom)
Size and Position: Add coordinates to change the position and size of a tile in the group container.
• W: Width of the tile in pixels.
• H: Height of the tile in pixels.
• X: Horizontal position of the tile in pixels.
• Y: Vertical position of the tile in pixels.
• ɑ: Angle for rotation.
For more details on specific styling options, see Formatting and Styling Items on a Story Canvas [page 1589].
Results
A group is created and styling is applied. To remove a group, select the group and then select Ungroup.
Related Information
You can align the positions of multiple widgets in edit time to improve the visual appeal of your stories.
To align widgets, first select all the widgets that you want to reposition. Then, in the context menu select
(Align Widgets) and one of the following options according to your needs:
• Align Left: Align the left edge of each selected widget to the leftmost edge of all selected widgets.
• Align Right: Align the right edge of each selected widget to the rightmost edge of all selected widgets.
• Align Top: Align the top edge of each selected widget to the topmost edge of all selected widgets.
• Align Bottom: Align the bottom edge of each selected widget to the bottommost edge of all selected
widgets.
• Align Center: Center the selected widgets horizontally within the span from the leftmost to the rightmost
edge of all selected widgets.
Note
The height or width of the widgets won't change, whether they are defined as auto, pixel, or percentage values.
When you align widgets, the settings of the widgets' left, right, top, or bottom margins may change under
different conditions. Take aligning widgets to the leftmost edge as an example:
• If the left margins of one or more widgets are defined in pixels, or the left margins of all widgets are defined
as auto, after alignment the units of all widgets' left margins are changed to pixels.
• If no widget's left margin is defined in pixels and the left margins of one or more widgets are defined in
percentages, after alignment the units of all widgets' left margins are changed to percentages.
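The margin-unit rules above can be sketched as a small helper. This is an illustrative sketch of the documented behavior, not SAP code; the function and unit names are assumptions:

```javascript
// Decide the resulting left-margin unit after Align Left, per the
// documented rules. Each widget's left-margin unit is one of
// "px", "percent", or "auto". Names are illustrative.
function alignedLeftMarginUnit(units) {
  const anyPixel = units.includes("px");
  const allAuto = units.every(u => u === "auto");
  // Any pixel margin, or all margins auto: all margins become pixels.
  if (anyPixel || allAuto) return "px";
  // No pixels and at least one percentage: all become percentages.
  if (units.includes("percent")) return "percent";
  return "px";
}

console.log(alignedLeftMarginUnit(["px", "percent"]));  // "px"
console.log(alignedLeftMarginUnit(["percent", "auto"])); // "percent"
```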
You can customize story themes by setting preferences for the styling of widgets, popups, and pages in one place,
and by writing and applying CSS to them. The themes can be used across stories.
• Configure Preferences and Use Themes (Optimized Story Experience) [page 1609]
• Settings for Theme Preferences (Optimized Story Experience) [page 1614]
• Customize Your Themes Using CSS (Optimized Story Experience) [page 1621]
Learning Tutorial
Click through the interactive tutorial for step-by-step instructions on changing and customizing a theme with
CSS in the optimized story experience (3:00 min); the tutorial is captioned in English only:
You can configure preferences for the styling of the current story, and create a theme from them or from scratch,
which gives you an efficient and reusable way to define the styles of any story. You can further use
and modify your themes, and use the related API to let story viewers change themes.
Themes and preferences are a one-stop solution for your enterprise's branding. They provide a consistent look and
feel that complies with your corporate standard and differentiates your stories from hundreds of others. You can
define a theme to store your favorite styles for canvases, popups, and different types of widgets, or choose an
existing one to instantly change the story's look and feel.
Note
If a widget, popup, or page in your story doesn't have the expected styling after you apply the theme,
the styling defined in the Styling panel may be overwriting the overlapping settings defined in the theme
preferences. In that case, go to the Styling panel and select (Restore Theme Preferences) so that the theme
can be fully applied to the story.
You can configure preferences for the styling of canvas, popups and widgets in one place, which are exclusive to
the current story.
Procedure
You can see a dialog called Theme Preferences, where you can define story-level and widget-level styling
settings.
3. In Theme Preferences you can change the background color of canvas or popups and styling of widgets as
you like.
On the right side, a preview is available for you to check the styling. However, not all stylings are reflected
in the preview; see the following section.
4. Choose Save and Apply.
Results
You've configured and saved preferences for the current story. You can see that your story immediately applies
the preferences.
See a list of stylings that aren’t supported for preview in Theme Preferences.
Use a Theme
You can apply an existing theme to your story, which saves you from configuring preferences.
Procedure
If the theme you want doesn't appear in the dropdown list, select Browse for More Themes…, and find it in
the Files repository.
Results
You can also select a widget, canvas or popup to change its styling in its Styling panel.
Note
The styling settings override the theme preferences and are only applied to the current story. Even if you
later change the theme, what you defined in the Styling panel is kept.
To restore the theme preferences, select (Restore Theme Preferences) in the upper-right corner of the
Styling panel.
Create a Theme
You can create a theme either from the existing preferences or from scratch, which can be used across stories.
Note
To create a theme, ensure that you have the permission Create for the object type Theme.
To create a theme independent of the current story, under (Theme) in the toolbar, select Create Theme....
Then, follow the same steps to configure theme preferences and save as a file.
You can modify the theme and override the existing preferences according to your needs, and delete the
themes you no longer need.
Note
To modify and delete a theme, ensure that you have the permission Create and Delete respectively for the
object type Theme.
If the theme is created by another user, make sure you have access to the theme in the Files repository.
To modify a theme, in edit time, find it under (Theme) in the Format section of the toolbar. When hovering
over it, choose to open Theme Preferences. Then, you can change and save the settings according to your
needs:
• If you want to save these changes in the current theme, select Save.
Note
Theme updates are not only applied to the current story, but also to all the stories using this theme.
• If you want to save these changes to a new theme, select Save As.
Code Syntax
Application.setTheme()
To specify the theme in the script, press Ctrl + Space to open the theme selector. After you choose a
theme, the corresponding theme ID will be displayed in the syntax. If you choose not to define a theme in the
syntax, the default light theme is applied.
Note
Currently, calling the setTheme() API from a popup doesn't affect the theme settings in it. To solve this
issue, you can add a panel to the popup and include all the widgets in it.
Here's an example that shows how to leverage the API to let viewers switch between different themes for the
story.
Example
First, add a dropdown Theme_Dropdown to the canvas. In its Builder panel, fill the ID column with the theme IDs
and the Text column with the corresponding theme names.
Sample Code
Viewers can select from the dropdown list to change to a different theme for the story.
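A hedged sketch of such an onSelect handler follows. Application.setTheme() is the API named in the Code Syntax above; the getSelectedKey() call and the mock objects are illustrative assumptions so the snippet can run outside SAP Analytics Cloud:

```javascript
// Mock stand-ins so the sketch runs outside SAP Analytics Cloud.
// In a real story, Application and Theme_Dropdown are provided by the runtime.
const Application = { setTheme: id => { Application.activeTheme = id; } };
const Theme_Dropdown = {
  getSelectedKey: () => "27D04C012D3FBC945A7C2E884AE6C1AC" // illustrative theme ID
};

// Body of the dropdown's onSelect handler: apply the theme
// whose ID the viewer picked from the dropdown.
function onSelect() {
  const themeId = Theme_Dropdown.getSelectedKey(); // illustrative API name
  Application.setTheme(themeId);
}

onSelect();
```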
Based on the URL pattern you're working with, use either an ampersand & or a semicolon ; to separate URL
parameters.
• If the URL pattern is /bo/story/, use a semicolon ; as the URL parameter separator.
https://<TENANT>/sap/fpa/ui/tenants/2a2bb/bo/
story/3228410105CE7C33800048A45E0FF376?
mode=view;themeId=27D04C012D3FBC945A7C2E884AE6C1AC
• If the URL pattern is /story2&/s2/, use an ampersand & as the URL parameter separator.
https://<Tenant>/sap/fpa/ui/tenants/2a2bb/app.html#/
story2&/s2/3228410105CE7C33800048A45E0FF376/?
url_api=true&mode=view&view_id=story2&themeId=27D04C012D3FBC945A7C2E884AE6C1AC
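The two separator rules above can be sketched as a small URL helper. This is an illustrative sketch, not part of the product; the function name is an assumption:

```javascript
// Append a themeId parameter using the separator that matches the
// URL pattern: ';' for /bo/story/ links, '&' otherwise (e.g. /story2&/s2/).
function appendThemeParam(url, themeId) {
  const sep = url.includes("/bo/story/") ? ";" : "&";
  return url + sep + "themeId=" + themeId;
}

const boUrl = "https://<TENANT>/sap/fpa/ui/tenants/2a2bb/bo/story/322?mode=view";
console.log(appendThemeParam(boUrl, "27D0"));
// ...?mode=view;themeId=27D0
```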
Set default formatting preferences for your story and widget items.
For information on opening the theme preferences dialog, see Configure Preferences and Use Themes
(Optimized Story Experience) [page 1609].
Story Settings
There are five threshold states to choose from, and you can change the default symbol and color for each of the
states.
You can also change the symbol to one of the following options.
Widget Settings
The following widget settings apply to basic mode, advanced mode, or both.
Theme Preferences
Widget | Styling Options | Description
Comment Widgets, Filter Lines (advanced mode), Flow Layout Panels (advanced mode), Header (advanced mode), Input Controls, Input Fields (advanced mode), Image, Page Books (advanced mode), Panels, R Widget, RSS Reader, Shapes, Tables, Tab Strips (advanced mode), Text, Value Driver Trees, Web Page
Default Color Palettes (Charts, Geo): Select the default color palettes for charts and geomaps in your story.
• Standard: determines the colors available in all styling options.
The standard palette affects dual measure comparisons and all dimensions
in charting.
• Continuous: can have any number of colors.
Continuous palettes are commonly used in geospatial analysis
charting.
• Diverging: has a range based on three different colors. You can set
the colors for the left, center, and right values, typically using a
neutral color for the center value.
A diverging palette affects heat maps, tree maps, and geospatial
analysis in charting.
• Sequential: has a range based on two different colors. You can only
select the end values to change the color.
A sequential palette affects heat maps, tree maps, and geospatial
analysis in charting.
Default Variance Color (Charts, Geo): Set the default colors for variances in charts and geo maps in your story.
You can set defaults for the following values:
• Positive
• Negative
• Null / 0
Default Axis Line Color (Charts, Geo): Select the default axis line color for charts in your story.
Default Tile Border (Charts, Geo, Comment Widgets, Flow Layout Panels (advanced mode), Header (advanced mode), Input Controls, Input Fields (advanced mode), Image, Page Books (advanced mode), Panels, R Widget, RSS Reader, Shapes, Tables, Tab Strips (advanced mode), Text, Value Driver Trees, Web Page): You can choose from several border options for your widget.
Default Text (Charts, Geo, Input Controls, Header (advanced mode), R Widget, RSS Reader, Tables, Text, Web Page): The text options change based on the widget.
• Select all text in the widget or specific text such as a header title or a
subtitle.
• Font
• Size
• Color: select a color or select More to create your own color option.
• Style: apply bold, italics, or other styles.
• Alignment
• Lists: add bulleted or numbered lists.
Default Font Style (Buttons, Checkbox Button Groups, Dropdowns, Filter Lines, Input Fields, List Boxes, Radio Button Groups, Range Sliders, Sliders, Tab Strips, and Text Areas, all in advanced mode; also Value Driver Trees): The text options change based on the widget.
• Select all text in the widget or specific text such as a header title or a
subtitle.
• Font
• Size
• Color: select a color or select More to create your own color option.
• Style: apply bold, italics, or other styles.
• Alignment
• Lists: add bulleted or numbered lists.
Default Button Style (Buttons, advanced mode): You can change the default settings for the following options:
• Type
• State
• Show Icon
• Upload Icon
• Icon Position
You can also change the default colors for the following options:
• Border
• Background
Default Checkbox Button Group Style (Checkbox Button Groups, advanced mode): You can change the default colors for the following options:
• Border
• Background
• Check Mark
Default Dropdown Style (Dropdowns, advanced mode): You can change the default colors for the following options:
• Border
• Background
Default Dropdown Menu Style (Dropdowns, advanced mode): You can change the default colors for the following options:
• Selected
• Mouse Hover
• Mouse Down
Default Filter Line Style (Filter Lines, advanced mode): You can change the default colors for the following options:
• Border
• Background
Default Filter Menu Style (Filter Lines, advanced mode): You can change the default colors for the following options:
• Mouse Hover
• Mouse Down
Default Input Field Style (Input Fields, advanced mode): You can change the default colors for the following options:
• Border
• Background
Default ListBox Style (List Boxes, advanced mode): You can change the default colors for the following options:
• Selected
• Mouse Hover
• Mouse Down
Default PageBook Style (Page Books, advanced mode): You can change the default color for the following options:
• Selected
• Unselected
You can either turn scrolling off or set auto scrolling for the following
option: Vertical Scroll
Default Panel Style (Panels): You can either turn scrolling off or set auto scrolling for the following options:
• Horizontal Scroll
• Vertical Scroll
Default Radio Button Group Style (Radio Button Groups, advanced mode): You can change the default colors for the following options:
• Border
• Background
• Check Mark
Default Range Slider Style (Range Sliders, advanced mode): You can change the default colors for the following options:
• Progress Bar
• Background Bar
Default Slider Style (Sliders, advanced mode): You can change the default colors for the following options:
• Progress Bar
• Background Bar
Default Switch Style (Switches, advanced mode): You can change the default colors for the following options:
• Toggle Bar
• Switched On
• Switched Off
Default Tabstrip Style (Tab Strips, advanced mode): You can change the default colors for the following options:
• Selected
• Mouse Hover
• Header Background
You can also change the default behavior for the following options.
• Horizontal Scroll
• Vertical Scroll
• Margin
Default Text Area Styles (Text Areas, advanced mode): You can change the color for the following text areas:
• Border
• Background
Default Styling Template (Tables): Select a styling template from the following options:
• Default: uses best practices styling and color scheme. Includes different
column heading lines for each type of planning version (Actual, Forecast,
and so on).
• Report Styling: follows the International Business Communication
Standards (IBCS) guidelines.
• Alternating Rows: designed for list reporting.
• Basic (previously Standard): a simple design that includes gridlines
and shading for row and column headings.
Default Threshold Style (Tables): Changes how threshold cell data is displayed.
• Symbol (default) – use the story-defined symbols and colors for the
threshold ranges.
• Color Values – the threshold colors are applied to the cell data.
• Color Background – the threshold colors are applied to the cell background.
• Color Background Without Values – the threshold colors are applied
to the cell background and the values are hidden.
Default Node Style (Value Driver Trees): Select a background color for the header.
You can define multiple CSS classes, either as a global default or for individual widgets, popups, or pages, which
brings more flexibility to the styling in your stories. The styling settings conform to CSS standards and are no
longer limited to the options provided in the styling panel or theme preferences.
Note
To use CSS, ensure that you have the permission Create for the object type Theme.
You can define CSS in the CSS editor and apply it to individual widgets, popups, pages or the whole story.
Procedure
It's usually composed of a custom class name, element selector, property and value.
Sample Code
.my-theme-1 .sap-custom-button:hover {
background-color: #00ff00;
border-color: #0000ff;
}
/**
* In this example, my-theme-1 is a custom class name, sap-custom-button is
* an element selector, and :hover is a pseudo-class.
* The !important syntax is NOT supported.
*/
Note
The element selector comes from the supported classes shown in the editor.
4. Apply the CSS class you defined in the CSS editor to individual widgets, popups, pages or the whole story:
• To set the CSS class for a specific page, popup or widget, go to its Styling panel. For example, you’ve
defined CSS class named my-button-1 for button-specific settings. Under Generic Properties, you
just need to enter my-button-1 for . Then the button applies the corresponding CSS settings.
• To set the CSS class as the story’s default CSS class, select Global Settings from Outline, and enter
the custom class name under Global Default Class Name.
Then, all widgets, popups and pages in the story apply the corresponding CSS settings.
it the default appearance first. To do this, go to (Theme) under Format in the toolbar, and select
the theme your story is using. In Theme Preferences switch on Enable Theme CSS, and enter the
custom class name under Global Default Class Name. Choose Save and Apply. Later, story designers
or developers can still enter another global CSS class in Global Settings to overwrite your
default settings.
Note
When both individual and global default CSS classes are defined, the former overwrites the latter.
Results
In edit time, you can see that your story immediately applies the styling after you've defined and assigned the
CSS class.
Later, to let viewers set CSS, you can use script APIs such as setCssClass() and getCssClass(). For more
information, see Optimized Story Experience API Reference.
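For example, a button script could swap a widget's CSS class at view time. This is a hedged sketch: setCssClass() and getCssClass() are the API names mentioned above, while the mock widget object and the class names are illustrative assumptions so the snippet can run standalone:

```javascript
// Mock widget exposing the CSS-class script APIs named in the text.
// In a real story, the widget object is provided by the runtime.
const Chart_1 = {
  cssClass: "",
  setCssClass(name) { this.cssClass = name; },
  getCssClass() { return this.cssClass; }
};

// Toggle between two custom classes defined in the CSS editor
// (class names are illustrative).
function toggleHighlight(widget) {
  const next = widget.getCssClass() === "my-chart-highlight"
    ? "my-chart-default"
    : "my-chart-highlight";
  widget.setCssClass(next);
}

toggleHighlight(Chart_1); // Chart_1 now uses "my-chart-highlight"
```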
You can load CSS into any existing theme, which can be reused across stories. The CSS settings overwrite the
existing theme and settings in the Styling panel of any widget, popup or page.
Procedure
1. From the Format section of the toolbar, select (Theme). Hover over the theme you want to modify, and
You can see that all the CSS classes you've defined in the CSS editor appear below.
4. Choose Save to save the theme with CSS or Save As to save a new theme.
Results
The CSS settings have become part of the theme that can be reused across stories. If you modify CSS and
want to load the latest CSS, select Reload Story CSS in Theme Preferences.
The priority of styling methods, from highest to lowest, is: widget styling defined in the CSS editor, widget
styling from theme CSS, widget styling defined in the styling panel, and widget styling defined in the theme
preferences.
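The precedence order can be sketched as a small resolver. This is an illustrative sketch mirroring the order stated above; the key names are assumptions:

```javascript
// Resolve a widget's effective styling from the four documented
// sources, highest priority first. Source keys are illustrative.
const PRIORITY = ["cssEditor", "themeCss", "stylingPanel", "themePreferences"];

function effectiveStyle(sources) {
  for (const source of PRIORITY) {
    if (sources[source] !== undefined) return sources[source];
  }
  return undefined;
}

console.log(effectiveStyle({ themeCss: "blue", stylingPanel: "red" }));
// "blue" (theme CSS outranks the styling panel)
```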
As a story designer, you can use responsive pages and define responsive rules to design stories that can adapt
to different screen sizes.
Prerequisites
To design a story that can adapt to different screen sizes, use responsive pages. If you're creating a responsive
story from scratch, on the Stories start page, under Create New choose Responsive. Then, select Optimized
Design Experience as the design mode.
Context
Note
For iOS and Android phones and tablets, the device preview doesn't include the view time toolbar, so the
preview and the actual view might differ. When designing your stories, take into account the vertical
space of the toolbar so that they fit on these devices.
On responsive pages, widgets can be grouped by lanes, and you can configure the responsive rule for each lane.
Widgets in the same lane stay together when the responsive page is resized. To add a lane, from (More
Actions) select Add Lane, and then select Add Lane to Left, Add Lane to Right, Add Lane Above or Add Lane
Below.
Click through the interactive tutorial for step-by-step instructions on defining responsive rules
(3:00 min); the tutorial is captioned in English only:
Procedure
1. From the device preview bar, select a device for which you'd like to configure the responsive rule.
By default the device preview bar is shown at the bottom. If not, from View in the toolbar, select Device
Preview Bar.
Auto No limit
Note
For iOS phones and tablets, some features aren't supported, including Clock, multi-line Chart Title,
Smart Insights, Footer, Background and Dynamic Images, Value Driver Tree, R Visualization, certain
kinds of Dynamic Texts and Web Fonts.
For Android phones and tablets, some features aren't supported, including Clock, Smart Insights,
Chart Footer, Background and Dynamic Images, Value Driver Tree, R Visualization, Geo Map, Data
Action Trigger Widget and Currency Conversion in Chart.
If your device isn't in the list, you can select Add Device to add it to the list. Enter the width and height in
pixels, and then you can configure the responsive rule for that screen size.
Note
Currently, when you add your custom device, specifying an iOS or Android system isn't yet supported.
Responsive Rule Configuration appears, with the selected device and corresponding screen width.
4. To start responsive rule configuration, switch on Activate.
The responsive rule automatically cascades to smaller devices. To configure another rule for a smaller
device, you need to activate it in the same way.
5. Set the widget position in either of the following ways:
• Free: You can freely move and position the widgets in the lane. Widgets stay in a specific position,
irrespective of others and device type.
You can also determine the space allowances on widgets' top or left:
1. Under Set the space on top or Set the space on left, select Add Widget.
2. From the dropdown list, select Each Widget or a specific widget.
The setting for a specific widget overwrites the one for each widget.
3. Specify the grid spaces:
• For the space on top, enter an integer between 0 and 200.
• For the space on left, enter an integer between 0 and 35.
In responsive rule configuration, widgets' size and position are specified by grid. Therefore,
showing the grid makes it more convenient for you to do the settings. Under the dropdown list
of your page, select Page Styling, and then in the Styling panel of the page, select Show Grid.
You can also select widgets to remove from auto-flow and set the space on their top and left so that they
can be freely positioned to a specific place.
A combination of free positioning and auto-flow is helpful when most widgets need to be displayed one by
one on smaller devices like mobile phones, while some always stay in a specific position regardless of
screen size, such as your branding image.
6. Set the widget size:
a. Under Set the widget width or Set the widget height, select Add Widget.
b. From the dropdown list, select Each Widget or a specific widget.
The setting for a specific widget overwrites the one for each widget.
c. Specify the widget width or height in grid.
• For the widget width, enter an integer between 1 and 36. The percentage is also shown for you to
calculate how much of the screen width the widgets take up.
• For the widget height, enter an integer between 1 and 200.
7. Set the widget visibility.
Under Hide the widgets, select the widgets to be hidden on the device. For example, you can hide a table if it
has too much data to be mobile friendly.
8. Define rules for all the other lanes on your page.
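The grid ranges from steps 5 and 6 can be summarized in a small validation sketch. This is a hypothetical helper written in plain JavaScript for illustration only, not an SAP Analytics Cloud API:

```javascript
// Hypothetical helper summarizing the grid ranges above:
// widget width 1–36, widget height 1–200,
// space on top 0–200, space on left 0–35.
function isValidGridSetting(kind, value) {
  var ranges = {
    width: [1, 36],
    height: [1, 200],
    spaceTop: [0, 200],
    spaceLeft: [0, 35]
  };
  var range = ranges[kind];
  return Number.isInteger(value) && value >= range[0] && value <= range[1];
}

isValidGridSetting("width", 36);     // true  – full screen width
isValidGridSetting("spaceLeft", 40); // false – exceeds the 0–35 range
```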
Results
You can configure responsive rules for more devices to make your story adapt to them. The device
preview bar lets you select any device for a live preview in edit time of how your story looks, so that you can
adjust your rule configuration accordingly.
Note
Once you activate responsive rules, you can't resize widgets further, either with the mouse or in their Styling
panel. To do so, you need to deactivate the rules.
In view time, the widgets are resized, repositioned, or hidden when the screen reaches or falls below a
certain width, according to the responsive rule you've configured.
The following layout APIs aren't supported for responsive lanes:
• setLeft, getLeft
• setRight, getRight
• setTop, getTop
• setBottom, getBottom
• Page.moveWidget
Related Information
Learn what composites are, their related permissions, their relationship with optimized stories, and how to
manage them.
What's a Composite
A composite is a combination of widgets with configured data and scripting elements, which can be reused
across optimized stories. It can be a unified corporate header or footer, page navigation panel, collapsible
section with visualizations, or chart with a button to switch to another chart type or table view. With
composites ready to be used in stories, story designers and developers spend less time building stories, and
they can do further actions and customizations on the composite as a whole as well, such as changing styling
and adding scripts.
You can create a composite and then import and directly use it in your optimized story as you would other SAP
Analytics Cloud built-in widgets.
For the existing restrictions in creating and using composites, refer to Restrictions in Composites [page 1642].
• To create a composite, ensure that you have the Create permission for the object type Composite.
• To import and use a composite in stories, ensure that you have the Read permission.
• To modify a composite, ensure that you have the Update permission.
• To delete a composite, ensure that you have the Delete permission.
• To share a composite, ensure that you have the Share permission.
Composites are saved as files, and you can find and manage them in Files.
For example, you can edit details, share, delete and favorite a composite:
If you proceed to delete the composite, the references between it and the stories are lost; the composite
isn't loaded in the stories and is displayed as a placeholder. The composite is no longer in the Assets panel or
Insert menu, but you still need to delete it from the stories.
Modify Composites
If a composite's modified while it's actively used in stories, the stories get updated when reloaded.
Procedure
a. Select Save .
b. Choose where to save your composite from the file repository.
c. Enter a name for your composite, and a description (optional).
Next Steps
You've created and saved a composite. Story designers or developers can then import it to stories and use it
as they would other built-in widgets. For more information, refer to Use Composites in Your Story Design
(Optimized Story Experience) [page 1638].
Note
Only edit mode is supported in composites, while view mode isn't yet available.
Later if you want to change composite variables, from Tools in the toolbar, select Edit Prompts and then the
data source to open the prompt.
You can override composite variables for an individual table or chart by selecting (Edit Table Prompts or Edit
Chart Prompts) on it. Then, select Set Table Variables or Set Chart Variables, and set the variables. Instead, if
you want the table or chart to use composite variables, select Use Composite Variables.
As a composite creator, you can define interface functions and events to add custom logic and let story
developers add scripts to your composite.
You can define an interface function, so that story developers can use it in stories.
To add a script function, in the Interface section of Outline, select (Add Script Function) to the right of
Functions. Then, configure the script function in the panel, and edit scripts by selecting to the right of the
function.
For how to create a function, you can refer to Use Script Objects [page 1666].
Note
Interface function names must differ from existing composite-related function names, for
example, fireEvent.
You can define a script event for your composite, so that in stories story developers can write scripts for the
composite.
To add a script event, in the Interface section of Outline, select (Add Script Event) to the right of Events. You
can enter a name for the event.
In stories, the event will be displayed when story developers select (Edit Scripts) to the right of the composite
in Outline.
Furthermore, in composites, you can use the following API to trigger the event:
Code Syntax
In addition, in the Scripting section of Outline, you can add script variables and script objects, which can be
reused in scripting in the composite. You can also add scripts to the widgets in the composite. With scripts
available, the composite itself has customized interactions and custom logic.
Related Information
Check out what menu options are available for creating your composite.
File Menu
Submenu Description
Save As...
Edit Composite Composite Details: You can edit name and description, and
enable translation for the composite.
Note
You also need to enable translation for the stories using
the composite so that it gets translated there.
Edit Menu
Submenu Description
Undo
Redo
Refresh
Insert Menu
• Chart
• Table
• Text
• Panel
• Planning Actions:
• Data Action Trigger
• Multi Action Trigger
• BPC Planning Sequence Trigger
• Others:
• Image
• Shape
Submenu Description
Add New Data Import data from a file, data source, or an existing dataset or
model to create visualizations in your composite.
Publish Data (planning models only) For more information, refer to Planning on Public Versions
[page 2188].
Edit Prompts Set variables for a data model used in the composite. Charts
or tables built from the model get updated based on the
values you enter.
Formula Bar For more information, refer to The Formula Bar [page 1398].
Allocate (planning models only) • Distribute Value: For more information, refer to Assign
and Distribute Values with the Planning Panel [page
2233].
• Execute Allocation Process: For more information, refer
to Run Data Actions, Multi Actions, and Allocations
[page 2260].
Version History (planning models only) For more information, refer to Undo, Redo, and Revert
Changes to Versions [page 2179].
Version Management (planning models only) For more information, refer to About the Version Management
Panel [page 2174].
Value Lock Management (planning models only) For more information, refer to About Value Lock Management
[page 2206].
Cell References and Formulas • Show References: For more information, refer to Cell
References [page 1439].
• Remove Reference
• Show Formulas: See all formulas in the composite.
Linked Widgets Diagram View and manage widget relationships in the composite.
View Menu
Submenu Description
Right Side Panel Right side panel consists of Builder and Styling panels,
where you can configure and style a widget in your composite.
• Errors, where you can view all the errors and warnings in
your composite and quickly locate them in your scripts.
For more information, refer to Check Errors in Scripting
[page 1676].
• Reference List, where you can check out where and how
a widget, script variable or script function is used in
scripting. For more information, refer to Check References
in Scripting [page 1677].
Advanced Mode You can enable advanced mode, which has the following
additions:
As a story designer in Optimized Story Experience, you can import composites to your story and use them as
other SAP Analytics Cloud built-in widgets.
Restriction
Currently, composites are only supported on canvas pages in Optimized Story Experience.
To use your composite in Optimized Story Experience, you need to import it to your story first.
Procedure
1. To import a composite, in the Assets panel, hover over Composites, and choose (Import Composite).
Once the composite's imported to the story, it's listed under Composites as other available widgets.
To delete a composite from Assets panel or Insert menu, hover over it, and choose (Delete).
Note
If the imported composites aren't added to, or actively used in your story, when the story is transported
to another system, they won't be transported with it.
3. To add a composite to your story, drag and drop it from Assets to the page, or select it from Insert.
The composite is listed in the Outline panel, with the default ID Composite_1.
Next Steps
Once a composite's added, you can perform actions on it as with other widgets in your story, such as editing
its ID, finding references, enabling or disabling mouse actions to resize or reposition it, showing or hiding it,
styling, and editing scripts.
Note
The actions apply to the composite as a whole, rather than the widgets in the composite. However, theme
preferences can apply to the individual widgets.
After adding a composite to your story, you can customize it in the Styling panel.
In the Styling panel of the composite, you can assign CSS class to it, set its size and position, add borders, set
view time visibility, allow vertical or horizontal scroll, and so on.
The composite name is displayed on the top of the Styling panel. By selecting Show More..., you can find
more information, such as description and file path.
In the Builder panel of the composite, you can choose what variables are used by the widgets inside:
• Use Widget Variables (default): The widgets inside the composite still use their own variables, which have
been set by the composite creator.
• Use Story Variables: You can override the widget variables and use the story variables, which are applied to
all the tables and charts in the story that use the same model.
Note
You need to add the model used by the composite to the story. Then, edit the prompts.
As a story developer, you can add scripts to the composites in your story to customize their interactions.
To add scripts to your composite, hover over it in Outline, and select (Edit Scripts) to the right of it. Select an
available event to write the scripts for it in the editor.
Note
Edit Scripts is only available if the composite creator has defined interface events.
Code Syntax
Composite.getLayout(): Layout
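As a sketch of how this call behaves, here is a plain-JavaScript stand-in; Composite_1 and the shape of the returned layout object are assumptions for illustration, not the actual SAP Analytics Cloud implementation:

```javascript
// Hypothetical stand-in for a composite exposing the
// Composite.getLayout(): Layout syntax shown above.
var Composite_1 = {
  getLayout: function () {
    // Assumed layout shape, in grid units, for illustration only.
    return { left: 0, top: 0, width: 36, height: 12 };
  }
};

var layout = Composite_1.getLayout();
layout.width; // 36
```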
In addition, you can use the interface functions defined by the composite creator on the composite. You can
find the available functions for the composite in the value help.
The scripts work on the composite as a whole, rather than the individual widgets in the composite.
Look up restrictions in composites as files, composite editor and Optimized Story Experience.
For toolbar actions and related features available in the composite editor, refer to Navigate Within Composites
[page 1635].
The following actions in file repository aren't yet supported for composites:
• Schedule publication
• Publish to catalog
• Catalog
• Favorites
• Shared With Me
Unsupported Features
The following features aren't yet supported in composites:
• View mode
• Multiple pages
• Page styling
• Popups
• Technical objects
• Export
• Bookmark
• Link dimensions, add linked models to charts
Feature Restrictions
Supported widgets Limited widgets are supported for creating composites. For a
list of available widgets, refer to Navigate Within Composites
[page 1635].
Linked analysis Linked analysis can only be applied to Only This Widget or
Only Selected Widgets.
Edit prompts The option Automatically open prompt when story opens
isn't available.
Dynamic text • All types of input controls, story filters, model variables
and page number can't be the source for dynamic texts
in composites.
• Other story properties take those from the stories that
are using the composite.
Dropdown, checkbox group, radio button group, input field • Only manual input is supported as data source.
• Enable the write-back in runtime isn't available.
Scripting • Widget script APIs are supported, while global ones are
partially supported.
• onInitialization event isn't supported.
Page styling In the Styling panel of the page in composites, settings such
as ID, title, canvas size, page layout aren't available.
Unsupported Features
The following features aren't yet supported for composites in Optimized Story Experience:
Feature Restrictions
Chart color The measure colors set in the chart's Builder panel aren't
carried from the composite editor to Optimized Story Experience.
Export, bookmark In edit mode, when you select what can be exported or
bookmarked in the related technical objects, you can only select
the composite as a whole rather than the individual widgets
inside. This means in view mode, either all the widgets or
nothing in the composite can be exported or bookmarked.
You can use custom widgets, which extend the functionalities of SAP Analytics Cloud and complement the
standard palette of widgets according to your needs.
Note
Custom widgets work in Google Chrome and Microsoft Edge (version 79 and higher) only.
As a developer you can build your own widgets in addition to the widgets delivered in SAP Analytics Cloud. For
more information, see SAP Analytics Cloud Custom Widget Developer Guide.
Follow the steps to upload a custom widget to SAP Analytics Cloud so that you can use it as you would other
widgets in your story.
Prerequisites
Please note that you need the appropriate permission to create and upload custom widgets. For more
information, see Standard Application Roles in SAP Analytics Cloud Help on SAP Help Portal at http://
help.sap.com.
Procedure
Results
After uploading a custom widget, you can find it listed in (Add) Custom Widgets and Assets panel.
Next Steps
However, when you add a custom widget that differs in minor version from the one present in SAP Analytics
Cloud, it replaces the present one. For example, if a custom widget of version 1.5.0 exists in SAP Analytics
Cloud, then adding either version 1.4.0 or 1.6.0 replaces version 1.5.0.
If you use a custom widget in a story that is transported between different SAP Analytics Cloud systems, the
story archive automatically includes the referenced custom widget, just as it does for models. In this way, only
one archive needs to be imported to the target system to get the story up and running.
You can see which stories are actively using a custom widget when you try to delete it from an SAP Analytics
Cloud system in the Custom Widgets tab. The Delete Custom Widgets dialog lists all related stories and warns
you that deleting the custom widget breaks them.
Example
In the following example, the first custom widget is used by a story named Ticker Test, the second one isn't
used in other stories, and the third one is used in two stories for which you have no authorization.
If you ignore the warning and delete the custom widget despite its active use, and later realize that this
was a mistake and reupload it, the references between the stories and the custom widget remain lost.
This means that the export of the custom widget with the story actively using it as well as the display of
related stories in the dialog won't work anymore. In this case, you can see a warning when opening an
affected story:
To solve this, simply save the story. This repairs the dependencies, and the warning no longer appears.
You can use custom add-ons, which extend the predefined set of widget add-ons in Optimized Story
Experience.
You can upload custom add-ons to SAP Analytics Cloud and add them to widgets in stories to customize parts
of the built-in widgets, such as adding visual elements to a chart, modifying tooltip contents and overriding the
existing styling.
As a developer, you can build your own widget add-ons. For development instructions, restrictions, hosting
considerations, and more, refer to SAP Analytics Cloud Widget Add-On Developer Guide.
Currently, the following add-on types and corresponding chart types are supported:
• Tooltip
Supported chart types exclude numeric point.
• Plot area (general)
Supported chart types are bar/column, stacked bar/column, stacked area and line.
• Plot area (numeric point)
Supported chart type is numeric point.
Permissions
Upload your widget add-on to SAP Analytics Cloud so that you can add it to the widgets in your story.
Prerequisites
To upload custom add-ons, ensure that you have the permission Create for the object type Widget Add-On.
Procedure
2. Choose (Create).
3. In the Upload File dialog, select the add-on from your device, which is a JSON file.
Results
After uploading the add-on, you can find it listed under Widget Add-Ons and manage the uploaded add-ons
there.
Next Steps
You can upload different major versions of an add-on at the same time, for example, versions 1.0.0 and
2.0.0.
However, when you upload an add-on that differs in minor version from an existing one on your tenant, the
former replaces the latter. For example, if a widget add-on of version 1.5.0 already exists on the tenant, then
adding either version 1.4.0 or 1.6.0 replaces it.
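The replacement rule described above can be sketched as follows; this is a hypothetical helper written in plain JavaScript, not part of SAP Analytics Cloud:

```javascript
// Hypothetical sketch of the version replacement rule above:
// an uploaded add-on replaces an existing one when the major
// version matches; different major versions coexist.
function replacesExisting(existingVersion, uploadedVersion) {
  var majorOf = function (version) {
    return parseInt(version.split(".")[0], 10);
  };
  return majorOf(existingVersion) === majorOf(uploadedVersion);
}

replacesExisting("1.5.0", "1.4.0"); // true  – 1.4.0 replaces 1.5.0
replacesExisting("1.5.0", "2.0.0"); // false – both versions coexist
```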
You can add a custom add-on to the widget in your story to customize its behavior.
Prerequisites
To add custom add-ons, ensure that you have the permission Read for the object type Widget Add-On.
Procedure
All the available add-ons of this type are listed under Add-On.
b. Select an add-on from the dropdown list. You can search by keywords to find it.
c. To close the dialog, choose OK.
Results
You've added the custom add-on to the widget. The widget displays your own add-on instead of the SAP
Analytics Cloud built-in one.
You can further add add-ons of other types in the same way.
To change to another add-on of a type, in the Builder panel, remove the existing one first, and add another one.
Know about what happens when you delete a custom add-on from SAP Analytics Cloud.
When you try to delete a custom add-on from an SAP Analytics Cloud system in the Widget Add-Ons tab, if
it's actively used in stories, a dialog appears, which lists all the related stories and warns you that deleting the
add-on will break them.
If you choose to ignore the warning and delete the widget add-on, you'll receive an error message about loading
the add-on when opening an affected story. The add-on is no longer attached to the widgets that used it, and
you need to remove it in the widgets' Builder panel so that you can save the story with changes.
If you reupload the add-on after the deletion, the references between it and the stories stay lost. When you
open an affected story, a warning appears. To solve this, simply save the story; this repairs the dependencies,
and the warning no longer appears.
As an application designer or story developer, you can add scripts to your analytic applications or stories to
implement your custom logic and offer viewers more interactions with them.
• Configure Bookmark Settings and Use Related APIs (Optimized Story Experience) [page 1683]
• Use Bookmark Set Technical Objects in Analytic Applications [page 1686]
• Use a Calendar Integration Technical Object and Related APIs [page 1689]
• Enable Data Change Insights and Use Related APIs [page 1697]
• Use an Export to PDF Technical Object [page 1700]
You can create interactive and highly custom-defined analytic applications and stories using APIs provided by
SAP Analytics Cloud.
To enable interactivity, as an application designer or story developer you can write scripts that are executed
when an end user performs an action in view time. For example, you can add a button widget and assign scripts
to its onClick event.
The scripts consist of one or more statements written in a JavaScript-based language, which follow a specific
syntax. You can write scripts in the script editor.
All objects, fields and functions available are listed in Analytics Designer API Reference (for analytic
applications) and Optimized Story Experience API Reference (for optimized stories).
Script editor lets you, as an application designer or story developer, write scripts for each widget and thus
create interactive and highly custom-defined analytic applications or optimized stories.
Context
To enable interactivity, you configure the behavior of widgets and write scripts that are executed when the
viewer performs an action in the analytic application or story. For example, you can add a button widget and
assign scripts to its onClick event. You can also write scripts based on other system events, like the
onInitialization event of the application or page, or scripts that are executed whenever data is changed.
Scripts consist of one or more statements written in a JavaScript-based language, which follow a specific
syntax. You write scripts in the script editor. You can find all objects, fields and functions available for scripting
in Analytics Designer API Reference (for analytic applications) or Optimized Story Experience API Reference
(for optimized stories).
Procedure
The widget icon and name are displayed in the Outline panel.
2. Hover over the widget name.
3. Select (Edit Scripts). If the widget supports multiple events, select (Edit Scripts) and then one of the
events that triggers the execution of scripts.
The script editor opens in a new tab, which displays the name of the event and the widget, page or
application to which the script will be assigned, for example, Table_1 - onSelect.
You can change the order of the multiple tabs by dragging and dropping them horizontally.
4. Type in one or more statements with this syntax:
<ComponentVariable>.<function>(<arguments>);.
Tip
You can call the value help at any place in the script by pressing CTRL + Space .
5. When you've finished your script, close the tab by selecting at its right side.
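The statement syntax from step 4 can be sketched in plain JavaScript. Chart_1 and setVisible are stand-ins for a widget and one of its functions, used here for illustration only:

```javascript
// Stand-in widget illustrating the
// <ComponentVariable>.<function>(<arguments>); syntax.
var Chart_1 = {
  visible: false,
  setVisible: function (flag) {
    this.visible = flag;
  }
};

// A statement as you would type it in the script editor:
Chart_1.setVisible(true);
```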
Related Information
As an application designer or story developer, you can use different kinds of value help in the script editor
depending on the context in which you're working.
Note
setVariableValue for tables and charts, and addMeasure for charts only support single variable values.
That's why the member selector also allows only single selection.
Tip
If pressing CTRL + SPACE doesn't display any values, make sure that the keyboard shortcut isn't used
elsewhere. For example, in some Chinese input methods, it's used to switch language between Chinese and
English. To make the shortcut take effect, you can do either of the following:
Note
The result of metadata selection looks like this in the script editor:
To get metadata for variable values, press CTRL + Space . If you are working with a dimension variable, you
can open the member selector; otherwise you can only add or show the variable values in the code completion
list.
If you work with a variable that is not a dimension, the value help is displayed in the following way:
And the selection result in the script editor looks like this:
If a function needs a selection object as an argument, you can see available arguments by pressing CTRL +
space . Value help is provided for keys, which represent the dimensions of an associated data source, or for the
values of keys, which represent dimension members of the corresponding dimensions.
Note
To start the selection object you have to type in braces {} within the parentheses ( ).
You can also get value help for the corresponding dimension members of the dimension property when you try
to assign or compare properties.
When you work in the script editor, you can use keyboard shortcuts to simplify your work.
Note
Keyboard shortcuts apply to both Windows and Mac. For Mac, replace the Windows keys with the mapping
Mac keys.
goDocEnd CTRL + End Move the cursor to the end of the document.
goLineStart ALT + Left Move the cursor to the start of the line.
goLineEnd ALT + Right Move the cursor to the end of the line.
goGroupRight CTRL + Right Move to the right of the group after the
cursor. A group is a stretch of word characters, a stretch of punctuation
characters, a newline, or a stretch of more than one whitespace character.
delGroupBefore CTRL + Backspace Delete to the left of the group before the
cursor.
delGroupAfter CTRL + Delete Delete from the start of the group after
the cursor.
Related Information
As an application designer or story developer, you use script variables, which are useful, for example, for
storing intermediate results that are used repeatedly in a script.
Script variables are reusable elements that store a value of a certain type and exist throughout the runtime.
The script variables you create in an application or story are available only within it. You can define their values
not only in the application or story, but also in its URL by adding a specific URL parameter.
For each script variable, you can define name, optional description, type and a default value, which is optional
and depends on the variable type. A script variable can be defined as an array, which represents a set of values
of the specified type. You can use the following primitive types:
• string
Represents text as a sequence of characters.
• boolean
Represents a logical value, either true or false.
• number
Represents a floating-point numerical value.
Can be entered in ordinary format or scientific notation format such as 1e+20 or 1e20. If the number is
longer than 21 digits, it will be automatically displayed in scientific notation format.
• integer
Represents a non-fractional numerical value.
Scientific notation format isn't supported for entering values. However, at runtime, if the number is longer
than 21 digits, it'll be displayed in scientific notation.
In addition to the primitive types described above, you can select a wider variety of non-primitive types for the
script variable, such as button, category, chart, clock, data source and table.
Procedure
1. In the Scripting area of the Outline panel, (for analytic applications) choose right next to Script
Variables, or (for optimized stories) choose Script Variables to add a new script variable.
You can see the variable beneath Script Variables, and a new panel, Script Variable, is opened.
2. In the panel, change the name of the script variable.
3. (optional) Enter the description for the script variable.
4. Choose the type of the script variable and whether to set it as an array.
5. Specify the default value depending on the type.
You can now use the variable in scripting within this application or story.
You can initialize the application or story based on your needs by simply making some changes to its URL. The
value defined in the URL parameter can be passed to the corresponding script variable.
Procedure
1. Choose the script variable you want to define, and go to the Script Variable panel.
2. Select Expose variable via URL parameter.
Note
This option isn't available for array type variables and non-primitive type variables (types other than
string, boolean, integer and number).
3. Save the application or story with the changes and choose Run Analytic Application or View.
4. In the URL of the analytic application or story, enter a new parameter that starts with ;p_ and is followed
by the script variable's name and value.
For example, if you want to set the value of ScriptVariable_1 to 3.14, add the following parameter to
the original URL:
;p_ScriptVariable_1=3.14
5. Reopen your application or story URL.
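The URL construction in step 4 can be sketched as follows; the base URL and the helper function are hypothetical, used only to illustrate the ;p_<name>=<value> format:

```javascript
// Hypothetical helper appending a script variable value to a story URL
// using the ;p_<name>=<value> parameter format described above.
function addScriptVariableParam(url, name, value) {
  return url + ";p_" + name + "=" + encodeURIComponent(String(value));
}

addScriptVariableParam("https://example.com/story", "ScriptVariable_1", 3.14);
// → "https://example.com/story;p_ScriptVariable_1=3.14"
```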
You can use a script variable in your script as a local variable. For example, you can change its value by
assigning a new one, or use the value by passing it as an argument when calling a function.
If you write scripts in the application or story, you can insert the script variable you've created by selecting it
from the value help in the script editor. You can activate the value help at any place in the script by pressing
CTRL + Space .
If you need to use the script variable in calculation, you can type @ in the formula in Calculation Editor to insert
it. All the available script variables will be automatically displayed in the dropdown list. Note that only global
variables of string, integer and number types can be referenced in the calculation editor, while the array type
isn't supported.
When you use the script variable as the condition of an If statement and the expression returns a
null value, the If statement also returns a null result. For example, if you write such a script and set the
script variable to undefined at runtime, the data returned and displayed in the chart isn't -100, but null.
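This behavior can be approximated in plain JavaScript as a sketch; the function, variable, and values are illustrative, not the actual SAC script:

```javascript
// Sketch: when the script variable backing the condition is undefined,
// the comparison evaluates to false, so the branch returning -100 never
// runs and the result stays null.
function evaluate(scriptVariable) {
  var result = null;
  if (scriptVariable > 0) { // undefined > 0 evaluates to false
    result = -100;
  }
  return result;
}

evaluate(undefined); // null
evaluate(5);         // -100
```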
You can also define script variables as the source of dynamic text in a text widget.
To change an existing script variable, select it in the Outline panel. The Script Variable panel opens again,
where you can make your changes.
To delete an existing scripting variable, choose beside the script variable in the Outline panel, and select
Delete.
Note
If the variable’s been used in a calculated measure, you need to modify the calculation first before deleting
the variable.
As an application designer or story developer, you can use script objects to encapsulate a set of functions that
can be reused in scripting.
A script object is a non-visible part of an analytic application or optimized story that groups a set of
functions. You can use it directly in event handlers. This keeps you from duplicating code and makes
maintenance of applications or stories easier.
Once you've added a script object, you can write scripts for it, which act as a script function. Each script
function has the following elements:
• name
• return type
• arguments
You can access the script object in scripts by entering its name like other widgets. Inside the function of a script
object you have access to all objects in the application or story such as widgets, global variables and popups.
Procedure
1. In the Scripting area of the Outline panel, (for analytic applications) choose right next to Script Objects,
The new script object is displayed under Script Objects, and a Script Function panel opens.
2. In the Properties section, configure the following settings for the script function:
Element Description
Note
All modifications take effect immediately without your having to select Done, for example, after you
press enter on an input, change the dropdown selection or switch to an array.
3. In the Arguments section select to create a new argument for the function.
4. In the Argument panel, edit name and type of the argument.
7. In the Outline panel select (Edit Scripts) right next to the script object to write scripts for the new script
function.
You've created a script object. You can use it in any widget event handler.
You can add multiple script functions to the script object by selecting Add Script Function next to
it in the Outline panel.
Example
The following example displays the elements of the script function computeAverage.
• name: computeAverage
• return type: number
• arguments: value1, value2 and value3 (all number type)
• scripts: return (value1 + value2 + value3) / 3.0;
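As a plain-JavaScript sketch, the function above could look like this; ScriptObject_1 is a stand-in for the script object created in the Outline panel:

```javascript
// Stand-in for a script object with the computeAverage function
// described above.
var ScriptObject_1 = {
  computeAverage: function (value1, value2, value3) {
    return (value1 + value2 + value3) / 3.0;
  }
};

// Called from any widget event handler:
var average = ScriptObject_1.computeAverage(3, 4, 5); // 4
```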
To change the name, description, arguments and return type of a script function, select it in the Scripting area
of the Outline panel. The panel opens where you can directly update your changes. After you've changed the
elements, select Done to close the panel.
To edit an argument, in the Arguments section of the Script Function panel, select (Edit) when hovering over
To delete a script object or script function, select next to it and then Delete.
As an application designer or story developer, you can use pattern-based functions in analytic applications or
optimized stories to facilitate your work by only providing input and output examples instead of writing scripts.
Context
With pattern-based functions you can define how a certain input shall become a certain output. For example,
[email protected] shall become John Doe.
After creating a script object, you can add a pattern-based function to it by providing such input and output
training examples.
1. In the Outline panel, select the More button next to a script object and then Add Pattern-Based Function.
You can see that the Pattern-Based Function panel opens and a new function is displayed under the script
object in Outline.
2. You can change the name of the pattern-based function and add a description.
Note
Sometimes one training example might not be enough due to potential ambiguity. In this case you can
use up to three. See the example below.
5. Select Create so that a machine learning algorithm starts looking for a pattern that matches every
example.
Note
If you want to undo your changes on a training example, select Reset, which resets the pattern-based
function to the last working pattern with the corresponding training examples.
6. Verify the pattern by entering test examples. Next to Verify Pattern, enter an input. In the
output field, you should see the output that follows the defined pattern.
7. After the pattern-based function's been successfully created, select Done.
You can use it in every script of the application or story just like any other script object functions:
Assume you have a list of dates in the form of month.day.year, for example, 10.11.2011, and you want to switch
day and month and only take the last two digits of the year. You type 10.11.2011 as input and 11.10.11 as
output in the Training Example section.
However, the example is a bit ambiguous, as it is not clear whether the 11 at the end means the month or the year. You need
to provide a second training example, such as 09.05.2020 as input and 05.09.20 as output.
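For comparison, the transformation that the pattern-based function learns from these two training examples could be written by hand as a small script (plain JavaScript sketch; the function name is illustrative):

```javascript
// Hand-written equivalent of the learned pattern: swap day and month and
// keep only the last two digits of the year ("10.11.2011" -> "11.10.11").
function swapAndShorten(input) {
  var parts = input.split(".");   // [month, day, year]
  var month = parts[0];
  var day = parts[1];
  var year = parts[2].slice(-2);  // last two digits of the year
  return day + "." + month + "." + year;
}
```

The pattern-based function spares you exactly this kind of manual string handling.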
If you have a list of appointments, for example, John Doe has an appointment on 06.07.20 at 3:00pm. and you'd
like to transform this into Name: John Doe, Date: 06.07.20, Time: 3:00pm, you can directly type them as input
and output respectively.
Related Information
Since SAP Analytics Cloud, analytics designer version 2020.2, only types available in the standard library
are suggested when you create a new analytic application and call the value help in the script editor. For
applications created prior to this version, you can manually start the optimization.
Context
Once you've added a widget or a technical object to your application, the associated type library is displayed. If
you remove the last widget or technical object associated with a type library, the library is also removed and no
longer displayed in the value help. Prior to version 2020.2, all type libraries were displayed, which could affect
your scripting experience and runtime performance.
Procedure
2. From the toolbar, choose (Edit Analytic Application) > Optimize Type Libraries.
The Optimize Script Libraries dialog informs you of the optimization effects.
3. Choose Optimize.
Note
If you've already used some type libraries that aren't directly related to objects used in the application
(for example, feed enum values in an application that contains no charts), this could
lead to scripting errors. Please check the error list after the optimization and update your scripts
accordingly.
Note
For further information about various type libraries, see Analytics Designer API Reference.
As an application designer or story developer, if you rename an item in your analytic application or
story, automatic refactoring adjusts its references for you, reducing the effort of manual changes.
In analytic applications or stories, you can enter names for widgets and other items to ease the handling efforts
at design time. For example, you can use the names repeatedly in scripting.
If you later want to rename some items, various references to it have to be adjusted, mainly scripts, dynamic
texts and calculated measures. Previously, you had to do this work manually, which is a tedious job, especially
for large and complex applications or stories.
Automatic refactoring makes this process as simple as possible. Ideally, you don't need to do anything.
Renaming just works, keeping the application or story intact.
However, there are cases where renaming could result in scripting errors. In such cases you'll be informed of
any issues, and you can decide whether to cancel renaming or continue.
When you rename an item in either Outline or the Styling panel, these things will happen:
• All the related scripts will be adjusted automatically if there's no error or overlapping name.
Note
If the script declares a local variable with the same name as that specified during renaming, the local
variable will be automatically renamed in the pattern <oldName>_$.
For example, the script declares var OkButton = Button_1;, and now you rename Button_1 to
OkButton. Then the script will be changed to var OkButton_$ = OkButton;
Special Cases
In the following special cases, you have to interact with the system:
• If the renamed item is a script variable that is exposed via URL parameter, a confirmation dialog will pop up
saying that this might break the application or story link with the variable parameter.
• If the renamed item is found in scripts with errors, a confirmation dialog will pop up saying that these
scripts will be skipped and that you have to manually adjust them.
• If the renamed item is found in a script that has a parameter in conflict with the new name, a confirmation
dialog will pop up saying that the script will be skipped and that you have to manually adjust it.
Example
The onPostMessageReceived event has two parameters, message and origin. If the item used in
your script is now renamed to message, this conflict can't be resolved. This also applies when the new
name conflicts with a script function argument name.
• If scripts use built-in objects, such as console, ArrayUtils and MemberDisplayMode, that conflict
with the new name, a confirmation dialog will pop up saying that the scripts will be skipped and that you
have to manually adjust them.
Note
In this case the scripts do not need to contain references to the renamed item.
In such cases, you can either continue, or cancel the refactoring and, for example, reconsider the new name.
As an application designer or story developer, to find errors in your scripts, you can use the web browser’s
developer tools to set breakpoints for debugging or use the debug mode in your analytic application or story.
Before you start debugging your scripts, learn how to find and view them in the developer tools of the web
browser, Google Chrome.
After you run the analytic application or story, open the web browser's developer tools. You need to run the
scripts at least once during the current session so that you can find them there.
Before your scripts get executed at runtime, SAP Analytics Cloud transforms them. This is why the
JavaScript in the web browser's developer tools doesn't look exactly the same as the scripts you wrote
in the script editor during design time.
Then to search for scripts by name, press Ctrl + P to open the search field, and enter the name.
The names of the scripts follow a specific pattern: <WIDGET_NAME>.<FUNCTION_NAME>.js. The scripts are
grouped in a folder named <APPLICATION_NAME>.
• APPLICATION_NAME: the name that you set for your application or story when you saved it
• WIDGET_NAME: the name of the widget whose event handler holds the script
• FUNCTION_NAME: the name of the function or event handler that holds the script you wrote
Note
Special characters in your application or story name will be replaced by _, except - and ., which will be kept
as they are.
Example
Let's assume you have an application called My Demo Application, which contains a button Button_1 with
an onClick event handler.
The name of the script would be Button_1.onClick.js and it would be located in a folder called
My_Demo_Application.
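The naming rule can be sketched as a small helper (plain JavaScript; the exact set of characters kept unchanged is an assumption based on the note above, which states that special characters become _ while - and . are kept):

```javascript
// Builds the debugger file name for a script: <WIDGET_NAME>.<FUNCTION_NAME>.js
function scriptFileName(widgetName, functionName) {
  return widgetName + "." + functionName + ".js";
}

// Sanitizes the folder name: special characters are replaced by "_",
// while "-" and "." are kept as they are.
function folderName(applicationName) {
  return applicationName.replace(/[^A-Za-z0-9\-.]/g, "_");
}
```

For the example above, folderName("My Demo Application") yields My_Demo_Application, and scriptFileName("Button_1", "onClick") yields Button_1.onClick.js.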
You can set breakpoints at certain points in your scripts to pause and check any errors.
Open the script you want to pause during its execution, and to the left, select the number of the line where you
want to pause.
A blue marker indicates that the scripts will be paused here the next time they're executed.
To reset a breakpoint, just select the marker you want to remove, and the scripts won't be stopped at this point.
You can launch the SAP Analytics Cloud debug mode and use debugger; statements in your scripts, so that
your scripts can automatically pause at multiple points during runtime for you to detect any errors.
Instead of setting breakpoints at runtime, at design time you can add the debugger; statements to your
scripts.
Then, to launch the debug mode, at runtime add parameter debug=true to the URL.
https://<TENANT>/sap/fpa/ui/tenants/<TENANT_ID>/bo/application/<APPLICATION_ID>/?
mode=present&debug=true
Note
If the debug parameter receives a value other than true, or no value at all, it's ignored and the debug
mode isn't enabled.
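The rule that only debug=true enables the mode can be checked like this (plain JavaScript sketch using the standard URL API; the function name is illustrative):

```javascript
// Returns true only if the URL carries the query parameter debug=true;
// any other value, or a missing parameter, leaves debug mode disabled.
function isDebugMode(url) {
  return new URL(url).searchParams.get("debug") === "true";
}
```

For example, a URL ending in ?mode=present&debug=true enables debug mode, while ?mode=present&debug=1 does not.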
When you run your analytic application or story in debug mode, the script will automatically stop its execution
at the debugger; statements.
In debug mode, your script comments are generated into the transformed JavaScript. This helps you recognize
your scripts in the web browser's developer tools.
Note
The names of the scripts are generated differently in debug mode. All script names receive the suffix -dbg,
for example, Button_1.onClick-dbg.js.
As an application designer or story developer, you can open the errors panel to view all the errors and warnings
in your analytic application or optimized story and quickly locate them in your scripts.
Procedure
The number displayed by the button indicates only the number of errors, not warnings. All the
errors in the application or story are counted, even if you set filters on them in the panel later.
• To display only errors or warnings, choose the icon in the upper right corner of the panel, and select the
corresponding options.
• To expand or collapse all the displayed items, choose the icon in the upper right corner of the panel, and
select Expand All or Collapse All.
3. Select any error or warning for details and further actions.
You can see the corresponding script with the error highlighted.
4. Once an error or warning is fixed, it will no longer be displayed in the panel. After fixing all of them, choose Show/
Hide Info Panel again from the toolbar or in the panel to close the Errors panel.
As an application designer or story developer, you can open the reference list to quickly check where and
how an element, such as a widget, script variable, function, or technical object, is used in scripting.
Context
In the reference list of an element, you can find all the related elements referencing it. This is especially helpful
when you want to better build or understand the business logic of the analytic application or optimized story or
want to change or delete a specific element in a complex application or story.
1. In Outline, hover over the element that you want to check, for example, Button_1.
Tip
You can select any of the referencing elements in the panel to locate it in both application or story
and Outline. You can also select any script to directly open the script editor and locate the referenced
element in scripting.
To expand or collapse all the referencing elements, choose in the upper right corner of the panel
and select Expand All or Collapse All.
4. After you finish checking the reference list, choose Show/Hide Info Panel under Views in the toolbar to
close the panel.
You can add input fields or text areas to your analytic applications or optimized stories so that viewers can
enter values and trigger actions based on them.
Context
Viewers can enter text in both input fields and text areas. The difference between the two widgets is that an input
field displays content on a single line, while a text area automatically wraps text according to its size. A text
area can hold a maximum of 1,000 characters across multiple lines.
Procedure
1. From the Insert area of the toolbar, select Input Field or Text Area to add the
widget to the canvas, popup or page.
• Manual Input
Under Display Hint, you can enter hint text to be initially displayed in the input field. If you disable this
option, no hint is displayed either at edit or view time. When this option is enabled and viewers
start to enter a value in the input field at runtime, the hint text is replaced by their input.
• Script Variables
• Model Variables
• Tile Filters & Variables
• Analytic Application/Story Properties
4. (optional) You can turn on Enable the write-back in runtime and select a script variable in your application
or story if you want the input field value to be passed to the variable in view time.
5. You can further leverage the following APIs to allow for more functionalities in the input field or text area:
• To get and set the value in the input field or text area, use getValue() and setValue() respectively.
• To tell whether the input field or text area is enabled and enable it, use isEnabled() and
setEnabled() respectively.
To tell whether the input field or text area is editable and make it editable, use isEditable() and
setEditable() respectively.
Note
The input field is grayed out in view time when disabled, but not when uneditable.
• To trigger the actions after the input, for example, when users enter the text and press Enter , use
onChange().
For more information about APIs, you can refer to API Reference [page 1653].
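As a sketch of how these APIs fit together, the following mock reproduces the input-field surface in plain JavaScript. In SAP Analytics Cloud the widget object and its methods are provided by the runtime; the mock here exists only so the call pattern can be shown standalone:

```javascript
// Minimal mock of an input field exposing the APIs listed above.
// In a real application, InputField_1 already exists as a widget.
function makeInputField() {
  var value = "";
  var enabled = true;
  var editable = true;
  return {
    getValue: function () { return value; },
    setValue: function (v) { value = v; },
    isEnabled: function () { return enabled; },
    setEnabled: function (e) { enabled = e; },
    isEditable: function () { return editable; },
    setEditable: function (e) { editable = e; }
  };
}

var InputField_1 = makeInputField();
InputField_1.setValue("Q1 Forecast"); // same call shape as in an onChange handler
InputField_1.setEnabled(false);       // the widget is grayed out in view time
```

In an actual script you would call these methods directly on the widget, for example inside the widget's onChange event handler.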
As an application designer or story developer, you can configure sliders or range sliders so that viewers can
input and change values dynamically and trigger what-if scenarios based on the value.
Context
The difference between a slider and range slider is that a slider defines a single value while a range slider
defines a specific range of data.
1. To add a slider or range slider, from the toolbar select and then Slider or Range Slider.
You can see that the widget is added to your application or story with the default name Slider_1 or
RangeSlider_1 displayed in Outline. You can change the name there or in the Styling panel.
2. In the Builder panel, set minimum and maximum for the slider or range slider. Then:
• For a slider, enter the default value under Current Value.
• For a range slider, enter the default start and end values.
Note
The differences among the current value, maximum, minimum, end value, and start value must all be
divisible by the step value.
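The divisibility rule from the note can be expressed as a quick check (plain JavaScript sketch; the helper name is illustrative):

```javascript
// Checks that the slider's configured values line up with the step:
// both the overall range and the current value's offset from the minimum
// must be divisible by the step value.
function isValidSliderConfig(min, max, current, step) {
  return (max - min) % step === 0 && (current - min) % step === 0;
}
```

For example, a slider from 0 to 100 with step 10 accepts a current value of 50 but not 55.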
4. (Optional) In the Styling panel, you can define the styling properties such as color of the progress bar and
background bar (the remaining part other than the progress bar), and scale, scale format and decimal
places of all the numbers on the widget.
5. You can use the following APIs to let viewers interact with the slider or range slider:
Code Syntax
// Event - It's triggered when an end user finishes the interaction and
releases the handler of the slider bar
onChange()
//Allow end users to trigger the creation of a range slider with a
predefined start and end value.
Range.create(start: number, end: number): Range
As an application designer or story developer, you can bind the values of some simple widgets to variables, such as script
variables, model variables, tile variables (widget-level variables), and analytic application or story properties, so
that the values are updated automatically. Conversely, binding lets you write the value set for a simple
widget in view time back to a variable and apply that variable wherever needed.
The simple widgets and corresponding variable types available for binding are:
Tip
The analytic application or story properties include current user, time or date, last modified date or date/
time, last modifier and creator.
Tip
In general, if not specified, the variables available for binding are primitive type variables (string, boolean,
integer or number) and can be array or single value.
Note
Array operations in scripting, such as push, pop, or index assignment, won't be reflected in the bound widget.
Note
Align the type of the widget's value with that of the write-back script variable.
Say you want to synchronize Slider_1’s current value with a script variable used in Chart_1, Variable_1.
In view time, select (Edit Chart Prompts) in Chart_1 and adjust the value of Variable_1.
Say you want to use Slider_1’s current value that end users select in runtime as the threshold of Chart_1’s
conditional formatting.
6. Select (Threshold Options) beside the measure you want to apply conditional formatting to and then
Edit Ranges….
7. Configure the threshold. Under Comparison Measure, select Variable_2.
Current value of the slider is automatically written back to the variable applied to the chart’s conditional
formatting. The threshold of Chart_1 will always take the current value of Slider_1.
Caution
If an end user changes such a widget's value directly or via API at runtime while you've defined a variable
bound to this widget's value at design time, the variable binding becomes invalid.
Related Information
As a story developer, you can configure settings and use related APIs for viewers to bookmark the state of the
story.
Procedure
1. From View section in the toolbar, select Left Side Panel to open Outline.
2. Select Bookmarks.
The Bookmarks side panel opens, where you can configure settings for viewers to bookmark the state of
the story. Bookmarks is also the name of the bookmark technical object, which can be reused in scripting.
3. Enter a version number, which must be an integer.
Note
If you change the version, all the bookmarks created based on the previous version become invalid. To
reopen the invalid bookmarks, you need to change the version number back to the previous one.
Code Syntax
In addition to the API, you can define whether story viewers’ last changes to the dynamic variable
values can be saved with bookmarks via Story Details. For more information, see Story Preferences and
Details [page 1035].
If viewers save a bookmark when you haven't enabled the option, it might be opened
with the last changed variable values of the last opened bookmark after you later set
isKeepLastDynamicVariableValue? to true or select Override default settings and Keep last
saved values for dynamic variables.
You can also leverage the following APIs to let viewers apply, list, delete bookmarks or open the bookmark
sharing dialog in the story:
Code Syntax
Example
Here are some script examples that show how to let story viewers save, list and delete bookmarks in view time.
Example
In this example, you want to let viewers capture the state of the story and save it as a bookmark with
property in a quicker way.
First, add an input field InputField_1 and a button Button_1 to the story.
Sample Code
Example
In this example, you want to let story viewers remove any bookmarks of the story from the dropdown.
First, add a dropdown Dropdown_1 and another button Button_2 to the story.
For Dropdown_1:
Sample Code
For Button_2:
Sample Code
Bookmarks.deleteBookmark(Dropdown_1.getSelectedKey());
When story viewers open the dropdown list, they can see and select any available bookmarks. Then,
when clicking Remove Bookmark, they can delete the bookmarks selected from the dropdown.
Related Information
As an application designer, you can use a bookmark set technical object to let application users save the state
of your analytic application as a bookmark at runtime.
Procedure
The side panel Bookmark Set opens, with the default name of the bookmark BookmarkSet_1 displayed. You
can change the name if you want to.
2. Enter a version number. The number must be an integer.
Note
To generate a new version of the bookmark set at any time, change the version number. After that,
all the bookmarks created based on the previous version will become invalid. To reopen the invalid
bookmarks, you need to change the version number back to the previous one.
3. Select Included Components to choose the canvas, popups, widgets and technical objects to be included in
bookmarks.
4. Select Done to finish configuring the bookmark set technical object.
5. To let application users save a bookmark including its properties when viewing the analytic application, you
need to add a widget such as a button and write script for it, using the syntax below:
Code Syntax
Note
You can define whether application users' last changes to the dynamic variable values can be saved
with bookmarks via API or Analytic Application Details. If an application user saves a bookmark when you
haven't enabled the option, it might be opened with the last changed variable values of the last opened
bookmark after you later set isKeepLastDynamicVariableValue? to true or select Override
default settings and Keep last saved values for dynamic variables.
Code Syntax
Results
After creating a bookmark set technical object and writing the scripts for it, you can let application users
change the states of the widgets at will, save different states as bookmarks and view them at any time.
Example
Here are some script examples that show how to let application users save, list and delete bookmarks in
runtime by leveraging bookmark related APIs.
Example
In this example, you want to design an application that lets application users capture the application state
and save it as a bookmark with property.
First, in addition to the technical object BookmarkSet_1, add an input field InputField_1 and a button
Button_1 to the canvas.
Sample Code
Example
In this example, you want to let application users remove any bookmarks in the application that are all listed
in the dropdown.
First, in addition to the technical object BookmarkSet_1, add a dropdown Dropdown_1 and another button
Button_2 to the canvas.
For Dropdown_1:
Sample Code
For Button_2:
Sample Code
BookmarkSet_1.deleteBookmark(Dropdown_1.getSelectedKey());
When application users open the dropdown list, they can see all the available bookmarks. Then, when
selecting Remove Bookmark, they can delete the bookmarks selected from the dropdown.
Related Information
As an application designer or story developer, you can add a calendar integration technical object and
use calendar task APIs on it to, for example, submit, decline, approve, reject, and activate a calendar task.
Furthermore, you can query the properties of a calendar task and use convenience APIs to verify whether you can
perform a task status change. The calendar task must be of type general task, review task, or composite
task.
To add a calendar integration technical object, in the Scripting section of the Outline panel, (for analytic
applications) choose the add button right next to Calendar Integration, or (for optimized stories) choose Calendar
Integration.
A side panel opens where you can set the ID of the technical object and choose whether the calendar toolbar is
visible in view time.
Then, you can use the following APIs for the calendar integration technical object:
• CalendarIntegration.getCurrentTask()
• CalendarIntegration.getCurrentTask().getStatus()
• CalendarIntegration.getCurrentTask().getType()
• CalendarIntegration.getCurrentTask().hasUserRole(CalendarTaskUserRoleType)
Depending on the calendar role type, additional APIs such as submit, decline, approve and reject, can be
used for the following task types:
• general task
• composite task
• review task
To use these APIs, you first have to convert a CalendarIntegration to the target calendar task using the
cast API.
If you want to retrieve information about a specific task, you can use the getCalendarTaskById API. Simply
pass the eventId from the query string in the URL. The analytic application or story doesn't need to be
opened from the calendar task.
Sample Code
The following code logs the result of the getType API to the browser console for any task:
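A call of that shape might look like the following sketch. It is mocked here so it can run standalone; in SAP Analytics Cloud, CalendarIntegration and getCalendarTaskById are provided by the runtime, and the task ID would come from the eventId in the URL query string:

```javascript
// Mocked stand-in for the CalendarIntegration technical object, used only
// to illustrate the call pattern of getCalendarTaskById and getType.
var CalendarIntegration = {
  getCalendarTaskById: function (taskId) {
    // In the real runtime, this returns the calendar task for the given ID;
    // "GeneralTask" below is a mock value for illustration.
    return { getType: function () { return "GeneralTask"; } };
  }
};

// "myEventId" stands in for the eventId read from the URL query string.
var task = CalendarIntegration.getCalendarTaskById("myEventId");
var taskType = task.getType();
```

In a real script you would log the result with console.log(taskType).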
You can use the getRelatedTaskIds API to retrieve all the calendar task IDs for which the opened application
or story is a working file. The API returns either an array of task IDs, or if no tasks are found, an empty array.
Note
Recurring composite tasks are currently not delivered as part of the result.
Sample Code
With this sample code snippet, you can retrieve all the tasks for which the opened analytic application or
story is a working file.
You can use the getCurrentTask API to query the calendar task that is loaded with your analytic application
or story.
This is necessary because a user can't perform operations on a calendar, like submit and decline or approve
and reject, if an analytic application or story isn't associated with a calendar through a working file.
Sample Code
You can get the overall status of a calendar task by using the getStatus API.
Sample Code
The code snippet returns the task status from the calendar task:
You can get the task type of a calendar task by using the getType API. The API returns the type of task that is
created in the calendar (general, composite or review task).
Sample Code
You can query the calendar user role of the currently loaded calendar task by using the hasUserRole API.
The API accepts the following roles:
• CalendarTaskUserRoleType.Reviewer
• CalendarTaskUserRoleType.Owner
• CalendarTaskUserRoleType.Assignee
The API returns true if the currently logged-in user has the user role given as a parameter, and false
otherwise.
You can use the createCompositeTask API to create a composite task with properties and options.
When creating a composite task, you can choose a name, startDate and endDate, which are the mandatory
properties. You can choose between an analytic application or a story when specifying the work file.
Sample Code
With this sample code snippet, you can create a composite task that starts now and ends on January 16th,
2022. There's one work file belonging to this composite task: the currently opened application or story. One
assignee is defined: the currently logged-in user. Additionally, a reviewer is specified in one review round. The
option autoActivate replaces the Activate the task at start date automatically option in the software user
interface.
You can use the activate() API to activate a task and also specify the notify option. This API replaces the
Activate or Activate and Notify option from the calendar view.
As an assignee, you can use the decline API to decline a composite task or general task for all assignees. If
there are multiple assignees, you'll decline the task for your role only. This replaces the option Decline Your Role
Only in the Decline Task dialog. The API returns true if successful, false otherwise:
• CalendarCompositeTask.decline()
• CalendarGeneralTask.decline()
Sample Code
To decline a composite task, you can use the following code snippet:
As an assignee, you can use the submit API to submit a calendar task of type General Task or Composite Task. The
API returns true if successful, false otherwise.
• CalendarGeneralTask.submit()
• CalendarCompositeTask.submit()
To submit a calendar task for a composite task, you can use the following code snippet:
Sample Code
As a reviewer, you can use the approve API to accept a composite task or a review task. The assignee must
have submitted the task first. The API returns true if successful and false otherwise:
• CalendarCompositeTask.approve()
• CalendarReviewTask.approve()
Sample Code
To approve a calendar task for a review task, you can use the following code snippet:
As a reviewer, you can use the reject API to reject a composite task or a review task, so that the assignee can have
another look at it. The assignee must have submitted the task first. The API returns true if successful
and false otherwise.
• CalendarCompositeTask.reject()
• CalendarReviewTask.reject()
Sample Code
To reject a composite task, you can use the following code snippet:
You can use the submit, decline, approve or reject APIs for any composite task given a task ID.
With this sample code snippet, you can submit any composite task given a task ID.
You can use the technical object CalendarIntegration together with the convenience APIs
canUserSubmit, canUserDecline, canUserApprove, and canUserReject to check if you can perform a task
status change.
Depending on the calendar role type, the APIs canUserSubmit, canUserDecline, canUserApprove, and
canUserReject can be used for the following task types:
• general task
• composite task
• review task
Note
To use these APIs, you first have to convert a technical object CalendarIntegration to the target event
using the cast API.
You can use the canUserSubmit API to verify if a calendar task is ready to be submitted for the assignee user
role. If you then submit the task when its status is Open, the task status changes to In Progress.
Sample Code
The following code snippet submits a task with the id "myTaskId" that is open:
You can use the canUserApprove API for the user role reviewer on a composite task, or for a user role
assignee on a review task to verify if the task can be approved.
Sample Code
The following code snippet approves all the related tasks that can be approved for the currently logged-in
user, either for a review or a composite task:
You can use the canUserReject API for the user role reviewer on a composite task, or for the user role
assignee on a review task, to check if the task can be rejected.
Sample Code
The following code snippet rejects all composite tasks for the currently logged-in user:
You can use the canUserDecline API for the user role assignee on a composite or a general task to verify if
the task can be declined.
Sample Code
The following code snippet verifies if the logged-in user can decline the currently opened task:
As an application or story designer, you can enable Data Change Insights for a story and its specific charts.
As a developer, you can use the related technical object and APIs to allow for more activities, such as saving a
snapshot to capture any state of the analytic application or optimized story for data comparison.
If you're a viewer, for more information about Data Change Insights, see Subscribe to and View Data Change
Insights [page 177].
To enable Data Change Insights for an analytic application, from File in the edit time toolbar, select (Edit
Analytic Application) > Analytic Application Details.
Then, to enable data change insights for an individual chart in the story, in its Styling panel select Data Change
Insights under Quick Menus. In view time Data Change Insights will then be available in the chart's context
menu for viewers to select the subscription mode.
Note
To ensure that viewers receive notifications after subscription, you need to add an Export to PDF technical
object.
You can add a Data Change Insights technical object and use related APIs to allow for more functionalities.
Procedure
1. To add a Data Change Insights technical object, in the Scripting section of the Outline panel, (for analytic
applications) choose the add button right next to Data Change Insights, or (for optimized stories) choose Data Change Insights.
The right side panel opens with a default name DataChangeInsights_1 displayed. You can change the name
there.
2. Select a version type of the Data Change Insights technical object:
If you change the version number or version type, the previous snapshots of applications or stories for data
comparison become invalid.
3. Select Done.
4. You can use APIs related to Data Change Insights to allow for more functionalities in view time.
Code Syntax
// Save a snapshot. One application or story can have only one snapshot
// per day; a later one overrides the previous one within the same day.
saveSnapshot(): Boolean
Code Syntax
For more information about the APIs, see Analytics Designer API Reference or Optimized Story Experience
API Reference.
By default, data change insights snapshots are saved on the current SAP Analytics Cloud tenant. You can
also save them in a remote data repository by configuring the storage in the Data Change Insights Snapshots
section under System > Administration > Data Source Configuration.
Note
To configure the storage of data change insights snapshots, you need to have the Create permission for the
object type Remote Repository Snapshot.
Restriction
Currently, saving data change insights snapshots isn't supported in remote data repositories based on XS
Node.js.
As an application designer or story developer, you can use an Export to PDF technical object and related APIs
to let viewers export an analytic application or story to a PDF file.
Context
Only visible contents can be exported to PDF. Therefore, the following invisible elements won't be exported:
• Invisible parts of scrollable charts, tables, or containers, including tab strips, panels, flow layout panels,
and page books
• Popups that haven't been opened
Viewers can export popups that are opened via script API.
• Collapsed table cells
• Lazy rendered widgets
• Lazy loaded data source
• Comments on invisible data cells
Note
For R visualizations, static plots are exported. However, R visualizations written in RHTML (iFrame) aren't
fully supported, for example, the ones with pictures from external sources.
Widgets might not keep all CSS styling settings when exported to PDF, for example, text decoration and
opacity.
Tip
Custom widgets now support PDF export. However, to make sure that all the defined elements can be
exported to PDF, custom widget developers need to be aware of the export restrictions, test first, and only
then set supportsExport to true in the custom widget JSON file.
Tip
If your image's hyperlink is an external URL, you can add a parameter to it to ensure that certified users can
export this image to PDF.
Parameter Options
Procedure
1. In the Scripting section of the Outline, choose (Add) next to Export to PDF.
The side panel Export to PDF opens with the default name ExportToPDF_1.
2. In the side panel, select Included Widgets to choose the widgets you want to include in PDF export.
3. In addition, you can set the following properties according to your needs:
• Name of the technical object
• (Optimized Story Experience) Page range
• Paper size: Auto, A2, A3, A4, A5, Letter or Legal
Note
If you set the paper size to Auto for the Export to PDF technical object, exporting full contents on a
table via API uses A4 as the default.
Note
If you haven't enabled Commenting in embedded mode via System Administration System
Configuration , the commenting-related settings and APIs won't take effect.
Note
The API that triggers the export is supported only in Microsoft Edge and Google Chrome, but not in
Internet Explorer.
Restriction
• When a table is exported to PDF via exportReport() or exportView() API, the in-cell texts
won't be exported if they exceed certain lengths, depending on the paper size of the export.
• For exportReport() API:
• The PDF exports don’t support Unicode (UCS-2 BE/UTF16 BE).
• (Optimized Story Experience) The export won’t be generated if there’s no table in the whole
story or selected in the included widgets.
Tip
In Optimized Story Experience, write scripts with exportView() at the end, so that multiple pages are
exported in view time without the interference of other running scripts.
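Following this tip, a button's onClick script might keep the export call as its last statement. ExportToPDF_1 is the default technical object name; any preparatory logic before the call is illustrative:

```
// ... preparatory scripting, e.g. applying filters, goes here ...
// Keep the export as the last statement so all pages are exported
// without interference from other running scripts.
ExportToPDF_1.exportView();
```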
6. The properties you defined in the Export to PDF panel are the default export settings. To offer users the
flexibility to customize the export settings, you can design a new settings panel or dialog by leveraging
related APIs. For detailed information, refer to Analytics Designer API Reference (for analytic applications)
or Optimized Story Experience API Reference (for optimized stories).
After creating an Export to PDF technical object, you can choose to manually generate the PDF export via API
in a scheduled publication. Compared to automatically generating the PDF export, this option controls the
timing of the export more accurately and doesn't miss any relevant scripting event that happens after the
completion of the onInitialization event.
First, (for analytic applications) in the Styling panel of Canvas or (for optimized stories) from Global
Settings in Outline, under Schedule Publication Settings, select Manually generate PDF export via API.
Then, for the onInitialization event of Canvas or story page, write the following script as the last line:
if (Application.isRunBySchedulePublication()) {
    Scheduling.publish();
}
Related Information
As a story developer, you can use the Export to PowerPoint technical object with script APIs to customize the
view time export experience.
Context
Only visible contents can be exported to PowerPoint. For more information, refer to Export a Story as a PDF or
a PPTX (Classic Story Experience) [page 253].
Procedure
You've added the Export to PowerPoint technical object, whose default name is ExportToPPTX_1. You can
change to another name.
2. Enter a name for the exported file. By default it takes the story name.
3. Specify the pages to include in the export, by default All. You can enter a range or page numbers.
Note
The story's URL is included in the appendix. If you don't want to include it, you need to enable Remove
Story URL from Appendix in the system configuration.
For more information about the supported APIs, refer to Optimized Story Experience API Reference.
Search for ExportPptx there.
Restriction
When a table is exported to PowerPoint via the API, the in-cell texts won't be exported if they exceed
certain lengths, depending on the paper size of the export.
Tip
Write scripts with exportView() at the end, so that multiple pages are exported in view time without
the interference of other running scripts.
As a story developer, you can use a text pool technical object with related APIs to translate the texts in the
scripts.
Prerequisites
To translate the texts used in the scripts, make sure to enable translation for the story. From the edit time
toolbar, select (Edit Story) Story Details . Then, switch on Enable translation.
Context
In the following script example to apply Hello World to a text widget, the text is static and won't be translated:
Sample Code
Text_1.applyText("Hello World");
Procedure
You've added the text pool technical object, whose default name is TextPool_1. You can change to another
name if you want.
2. In the Text Pool configuration panel, enter the texts to be used in the scripts and translated:
a. To add a text pool value, choose (Add).
You've added a text pool value, whose default ID is Value_1. You can change the ID as you want, which is
used in scripting and corresponds to the text.
b. In the Text column, enter the corresponding source text for translation, for example, Hello World.
c. You can add other text pool values in ID and text pair in the same way.
Note
You can only add one text pool technical object per story. Therefore, add all the texts you want to
translate when configuring it.
For example, you can write the following script to apply the text configured in the text pool technical object
TextPool_1, whose ID is Value_1, to the text widget Text_1:
Sample Code
Text_1.applyText(TextPool_1.getText("Value_1"));
For more information about editing translations, refer to Edit Translations Manually [page 2975].
Results
You've used the text pool technical object with related APIs to translate the texts in scripts to different
languages. In view time, these texts can be dynamically changed with the data access language, based on the
translated texts in the Translation dashboard.
As an application designer or story developer, you can use the timer technical object with script APIs to trigger
timing events in view time.
Context
Typical use cases of a timer include:
• Creating simple animations that make image or text widgets change their contents every couple of
seconds
• Sending notifications to end users at a certain interval
• Refreshing analytic applications or optimized stories at a certain interval
Multiple timers can exist simultaneously, but each timer works independently from others.
Procedure
1. In the Scripting section of the Outline panel, choose (Add) next to Timer.
The side panel Timer opens with the default name Timer_1 displayed. You can change the name.
2. Choose Done to finish adding the timer.
3. To start the timer in view time, you can use the following API for the canvas, page, popup or widget, for
example, in the onClick event of a button if you want to start the timer when end users click it:
Code Syntax
start(delayInSeconds: number)
Note
• The delay time set in the timer can be either an integer or a decimal, measured in seconds. For
example, if you write start(0.1), then the delay time is 100 milliseconds.
• Calling the start() API on a timer that is already running overrides the first start() API. This
means the timer stops and immediately restarts to realize the delay time set by the second
start() API.
4. To repeatedly restart a timer once it times out, you can go to its onTimeout event and use the start() API.
There are some other script APIs available for the onTimeout event of a timer, such as stop()
and isRunning(). For detailed information, refer to Analytics Designer API Reference (for analytic
applications) and Optimized Story Experience API Reference (for optimized stories).
Here's an example of using the timer to automatically scroll images horizontally from right to left at a certain
interval.
First, insert three different images, Image_1, Image_2 and Image_3, and place them side by side on the page.
Take note of their left margins: 112px, 240px and 368px.
Then, create an image type script variable Animation_Widgets and an integer type script variable LeftMargins.
Enable Set As Array for the two script variables.
After that, add a script object Util_Animation with a function doAnimation and then add a timer Timer_1 .
Sample Code
console.log('onInitialization - begin');
LeftMargins = [112, 240, 368];
Animation_Widgets = [Image_1,Image_2,Image_3];
//Start the timer after 3 seconds after initializing the page.
Timer_1.start(3);
console.log('onInitialization - end');
For doAnimation:
Sample Code
var num = 3;
//Redefine the sequence of the images.
var first = Animation_Widgets[0];
Animation_Widgets[0] = Animation_Widgets[1];
Animation_Widgets[1] = Animation_Widgets[2];
Animation_Widgets[2] = first;
// Update the left margin of each widget.
for(var i=0;i<num;i++) {
Animation_Widgets[i].getLayout().setLeft(LeftMargins[i]);
}
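Stripped of the widget APIs, the reordering inside doAnimation is a plain left rotation; it can be checked in ordinary JavaScript with string placeholders standing in for the image widgets:

```javascript
// Rotate the first element to the end, mirroring doAnimation's reordering.
function rotateLeft(widgets) {
  var first = widgets[0];
  widgets[0] = widgets[1];
  widgets[1] = widgets[2];
  widgets[2] = first;
  return widgets;
}

var animationWidgets = ["Image_1", "Image_2", "Image_3"];
rotateLeft(animationWidgets);
// animationWidgets is now ["Image_2", "Image_3", "Image_1"]:
// each image shifts one slot left, and the first wraps to the right.
```

Assigning the rotated widgets to the left margins [112, 240, 368] in order then produces the scrolling effect.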
For Timer_1:
Sample Code
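Consistent with step 4 above, a minimal sketch of Timer_1's onTimeout event would rotate the images and restart the timer so that the animation repeats (the 3-second delay mirrors the onInitialization script; adjust it as needed):

```
// Rotate the images one position, then restart the timer to repeat.
Util_Animation.doAnimation();
Timer_1.start(3);
```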
When end users view the application or story, they can soon see the images automatically scroll from right to
left in turn.
As an application designer or story developer, you can use the Data Actions technical object and related script
APIs to let viewers perform data actions, set and read parameter values.
Prerequisites
Before you can use the Data Actions technical object, the data actions themselves must already exist.
• To create or edit a Data Actions technical object, you need the permission to edit the analytic application or
story.
• To select a data action in the Data Action Configuration side panel, you need the read permission for it.
• To execute a data action, viewers need the execute permission for it.
Context
A data action is a flexible planning tool for making structured changes to a version of model data, such as
copying data from one model to another.
1. To add a Data Actions technical object, in the Scripting section of the Outline panel, choose (Add) next
to Data Actions.
Note
• To execute a data action successfully, you need to enter the values of all parameters, either in the
Data Action Configuration panel or via the setParameterValue API.
• The design of the data action itself can't be changed in Data Action Configuration, but you can
adjust it by setting parameter values there.
4. Select Done.
5. After adding the technical object, you can leverage the following APIs to work with data actions in your
application or story:
• Executing data actions
• As a blocking operation: other scripts won't run until the data action execution is
complete.
Code Syntax
execute(): DataActionExecutionResponse;
• As a non-blocking operation: other scripts can run at the same time without waiting for the
data action execution to complete.
Code Syntax
executeInBackground(executionName: string):
DataActionBackgroundExecutionResponse;
The API returns the execution status, either accepted or error, and a unique execution ID.
In this case, the onExecutionStatusUpdate event of data actions technical object can be called
when the execution status changes:
Code Syntax
Sample Code
// Collect member IDs from the filters currently applied to the table and
// pass them to a data action parameter. The opening lines reconstruct the
// truncated original; the dimension name "Location" and the variables
// dataAction and parameterId are illustrative.
var dataAction = DataAction_1;
var parameterId = "TargetMembers";
var filteredMemberIds = [];
var filters = Table_1.getDataSource().getDimensionFilters("Location");
for (var i = 0; i < filters.length; i++) {
    if (filters[i].type === FilterValueType.Single) {
        var singleFilter = cast(Type.SingleFilterValue, filters[i]);
        if (!singleFilter.exclude) {
            filteredMemberIds.push(singleFilter.value);
        }
    } else if (filters[i].type === FilterValueType.Multiple) {
        var multiFilter = cast(Type.MultipleFilterValue, filters[i]);
        if (!multiFilter.exclude) {
            filteredMemberIds = filteredMemberIds.concat(multiFilter.values);
        }
    }
}
dataAction.setParameterValue(parameterId, {
    type: DataActionParameterValueType.Member,
    members: filteredMemberIds
});
Sample Code
if (InputControl_1.getInputControlDataSource().isAllMembersSelected()) {
Copy_Accounts.setAllMembersSelected("LBHParam", "LBH2");
}
As an application designer or story developer, you can use the Multi Actions technical object and related script
APIs to let viewers run multi actions, set and read parameter values.
Prerequisites
Before you can use the Multi Actions technical object, the multi actions themselves must already exist.
• To create or edit a multi actions technical object, you need the permission to edit the analytic application or
story.
• To select a multi action in the Multi Actions Configuration side panel, you need the read permission for it.
• To execute a multi action, viewers need the execute permission for it.
Context
A multi action can help users save time when they need to run multiple data actions in sequence, publish
versions, run predictive scenarios, or combine these operations. There are two ways for you to use this feature:
Procedure
1. To add a Multi Actions technical object, in the Scripting section of the Outline panel, choose (Add) next
to Multi Actions.
• To let viewers execute a multi action successfully, you need to enter the values of all parameters,
either in the Multi Actions Configuration panel or via the setParameterValue API.
• The design of the multi action itself can't be changed in Multi Actions Configuration, but you can
adjust it by setting parameter values there.
4. Select Done.
5. After adding the technical object, you can leverage the following APIs to work with multi actions in your
application or story:
• Executing multi actions
• As a blocking operation: other scripts won't run until the multi action execution is
complete.
Code Syntax
execute(): MultiActionExecutionResponse
The API returns the execution status, either success, error or running.
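As a sketch, a blocking execution triggered from a button's onClick event could check the returned status before refreshing the data. MultiAction_1 is the default technical object name; the status enum member shown here is an assumption:

```
// Execute the multi action and block until it finishes.
var response = MultiAction_1.execute();
// The returned status is success, error or running (enum name assumed).
if (response.status === MultiActionExecutionResponseStatus.Success) {
    Application.refreshData();
}
```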
• As a non-blocking operation: other scripts can run at the same time without waiting for the
multi action execution to complete.
Code Syntax
executeInBackground(executionName: string):
MultiActionBackgroundExecutionResponse
The API returns the execution status, either accepted or error, and a unique execution ID.
In this case, the onExecutionStatusUpdate event of multi actions technical object can be called
when the execution status changes:
Code Syntax
In SAP Analytics Cloud, analytics designer or Optimized Story Experience, you can define OData services
based on an existing on-premise SAP S/4HANA live connection in your system that was created via CORS
(cross-origin resource sharing) connectivity. Additionally, you can define OData services based on SAP BW
systems, SAP HANA systems, and SAP Business Planning and Consolidation (BPC) systems that were also
created via CORS.
You can create an OData service, define the endpoint URL and use script APIs to let viewers trigger
transactional actions at the backend, such as filling out an order form or others.
In analytics designer and Optimized Story Experience, OData actions can be called and executed in the
backend system via scripting. Furthermore, it's also possible to read and use the data of entity sets exposed via
OData services via APIs.
• OData actions are operations exposed by an OData service, which may have side effects when invoked.
• OData action imports, also called unbound actions, are not associated with any OData entity set or entity.
They generally expose simple parameters. All parameters are set via POST body.
• OData bound actions are actions that may be invoked on a specific entity. They always contain a first
parameter that is set via URL to resolve the binding to the entity within the relevant entity set, and all other
parameters are set via POST body.
In general, actions can be bound on every type, but in analytics designer and Optimized Story Experience,
only binding on single entities is supported.
• For OData, you should configure a direct CORS connection at the backend. For instructions on the
connection configuration, refer to Live Data Connections to SAP S/4HANA [page 355]. In the connection
configuration, the CORS connection is referred to as a direct connection.
Note
• The service path you configure should correspond to the end-point URL of the OData service.
• In addition to the service path listed in the documentation, you need to add another service
path /sap/opu/odata4/
• The connection should also support if-match as an allowed header.
• Only parameters of simple types are supported. Actions with mandatory parameters of unsupported types
are not available. For actions with optional parameters of unsupported types, the action itself is supported,
but those parameters are not.
• In case of bound actions, actions bound to an entity collection are not supported. Only actions bound to
a single entity are supported. It's only possible to set this binding parameter by specifying the key of the
entity on which the action should be carried out.
To execute an OData action, you need to first add an OData service with the corresponding actions to your
analytic application or optimized story.
Prerequisites
The end-point URL of the OData service is available. You need to enter it manually when creating the service.
Procedure
1. In the Scripting section of Outline, choose (Add) next to OData Services.
The OData Services technical object is created, with the default name ODataService_1 displayed in Outline.
A side panel opens on the right.
2. In the side panel, do the following:
a. (Optional) Change the default name of the new service to a meaningful name.
b. Under Connections, select the system from the list of available SAP S/4HANA systems whose
connections were already created in SAP Analytics Cloud.
c. Manually enter a valid end-point URL of the OData service, in the format /some/end_point/url/.
You need to know the URL, because browsing a catalog isn't available.
3. Select (Refresh) next to Metadata to review the metadata of the new OData service.
4. Select Done to close the panel.
Now you can use the OData action. For more examples, see Scripting Example 1: Use OData Actions in
Analytics Designer or Optimized Story Experience [page 1773] and Scripting Example 2: Use OData Actions in
Analytics Designer or Optimized Story Experience [page 1777].
With OData services you can not only access OData actions, but also read and use the data of entity sets
exposed via OData services.
This lets you use the data of the entity sets for any purpose except data visualization in a table or a chart.
For example, you can read and display one member in a text widget. To access entity sets exposed via OData
Services, you can use two APIs in your script:
• getEntity: Retrieves a single OData entity from an entity set by specifying the key to the entity. This is
similar to selecting a single row from a SQL table via SELECT * FROM T1 WHERE ID = <id>.
• getEntitiesFromEntitySet: Retrieves all entities from an OData entity set. This is similar to SELECT
TOP <N> * FROM T1. With this API you can use the OData system query options filter, orderby,
select, count, top and skip.
Note
The number of entities is limited to 1000 if no system query options have been used. If system query
options are set, there's no limit to the number of retrieved entities, unless you set a value for the
system query option top.
• getEntitiesCountFromEntitySet: Retrieves the number of entities from an OData entity set. This API
works like the getEntitiesFromEntitySet API and you can also use the OData system query options
filter, orderby, select, count, top and skip.
Example
The parameters filter, orderby and select have string values, which are used 1:1 in the OData
request. This allows you to use all features supported by the query options. The parameters skip and top
have integer values, which are also used in the OData request. The second parameter of both API methods is
passed as an object, in which the system query options can be combined in an arbitrary way.
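As a sketch of reading an entity set with query options, the following script assumes an illustrative Products entity set with a Name field, exposed by the service behind ODataService_1 (the entity set and field names are not from this guide):

```
// Read the top 10 products ordered by name and show the first one.
var products = ODataService_1.getEntitiesFromEntitySet("Products", {
    orderby: "Name",
    top: 10
});
if (products.length > 0) {
    Text_1.applyText(products[0].Name);
}
```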
When you start writing scripts in SAP Analytics Cloud, analytics designer, it might be helpful for you to go
through typical use cases and examples for scripting in analytic applications. This helps you adjust the scripts
to your business needs.
Here you can find typical scripting patterns and examples to help you script with SAP Analytics Cloud, analytics
designer:
The following overview lists each scripting feature, what it lets you do, and where to find the details:
• Enable planning option and API: enable planning for tables to let planners input data. See Use Planning Enabled Property and setEnabled API [page 1719].
• startEditMode API: start edit mode on a specific planning area in a public version. See Use startEditMode API on Public Versions [page 1721].
• Version management APIs on SAP Analytics Cloud data models, publish version API: publish public or private versions of planning data. See Use publish API on Versions [page 1723].
• Version management APIs on SAP Analytics Cloud data models, publish as public version API: publish a private version as a public one. See Use publishAs Public Version API [page 1725].
• Version management APIs on SAP Analytics Cloud data models, revert version API: revert versions of planning data. See Use revert API on Versions [page 1726].
• Version management APIs on SAP Analytics Cloud data models, delete version API: delete a public or private version. See Use deleteVersion API [page 1727].
• Version management APIs on SAP Analytics Cloud data models, copy version API: create a private version from an existing version. See Use copy API on Versions [page 1727].
• Version management APIs on SAP Analytics Cloud data models, getOwnerId API: check if a private version is owned by the current user. See Use getOwnerId API [page 1729].
• Version management on SAP BPC models: version management APIs for SAP Business Planning and Consolidation models. See Use Version Management APIs on SAP BPC Models [page 1730].
• Calendar task APIs: change status and properties of a calendar task. See Use a Calendar Integration Technical Object and Related APIs [page 1689].
• Possible error messages: how to prevent error messages when using APIs. See Possible Error Messages in the Context of Version Management APIs [page 1731].
• setUserInput and submitData APIs: change the value of a cell in a table at runtime. See Use setUserInput and submitData Planning APIs [page 1733].
• Data locking APIs: check if a model is data locking enabled, and set or get data locks. See Use Data Locking APIs [page 1735].
• APIs related to BPC planning sequence: run a BPC planning sequence, open the prompt dialog to modify the variable, and set, get, remove or copy the variable values. See Use APIs Related to BPC Planning Sequence [page 1738].
• Formatting APIs: format certain values in an application. See Use Formatting APIs to Format Numbers and Dates [page 1739].
• Refresh data API: trigger data refresh. See Use Refresh Data API [page 1740].
• Pause refresh options and APIs: pause data refresh. See Use Pause Refresh Options and APIs [page 1742].
• Navigation APIs: navigate through analytic applications. See Use Navigation APIs [page 1749].
• Layout APIs: set widget size and position dynamically in the application. See Use Layout APIs [page 1750].
• APIs for story filters and variables: show or hide the filter panel; set, get or remove story filters and variables. See Use APIs on Story Filter and Variables [page 1752].
• Selection APIs for tables and charts: retrieve the currently selected members and their dimensions. See Use Selection API for Tables and Charts [page 1754].
• Result set APIs: get a result set based on an input data selection in a table or chart. See Use Result Set APIs [page 1755].
• Range and exclude filters: set range and exclude filters. See Use setDimensionFilter API on Range and Exclude Filters [page 1758].
• getVariableValues API: return an array of the current values of a specified variable. See Use getVariableValues API [page 1761].
• getDimensionFilters API: return an array of the current values of a specified dimension. See Use getDimensionFilters API [page 1762].
• openSelectModelDialog and setModel APIs: change the table's model at runtime, either by calling the selection dialog for models or directly with a valid model ID. See Use openSelectModelDialog and setModel APIs [page 1763].
• NULL members and total members: NULL members and total members are supported in script APIs. See NULL Members and Totals Members in Scripting APIs [page 1765].
• sendNotification API: send notifications to application users. See Use sendNotification API [page 1766].
• Comment-related APIs: handle data cell comments, widget comments and comments on comment widgets. See Use Comment-Related APIs [page 1767].
• Busy indicator APIs: define busy indicators on different levels for long-running actions. See Define Busy Indicators and Use Related APIs [page 1770].
• APIs to get user information: get information, teams and roles of a user. See Use APIs to Get User Information [page 1772].
• Example 1, OData actions in an application: execute OData actions exposed by a connected on-premise system. See Scripting Example 1: Use OData Actions in Analytics Designer or Optimized Story Experience [page 1773].
• Example 2, OData actions in an application: display the backend system's response in an application. See Scripting Example 2: Use OData Actions in Analytics Designer or Optimized Story Experience [page 1777].
Best Practices
Here you can find typical use cases for creating analytic applications and setting up the interaction among
widgets by scripting:
• Switch between a chart and a table by using a toggle. See Best Practice: Switch Between Chart and Table [page 1778].
• Filter a table or a chart by using a single measure selected from a dropdown. See Best Practice: Filter Table and Chart Through Dropdown (Single Selection) [page 1781].
• Filter a table or a chart by using multiple measures selected from a checkbox group. See Best Practice: Filter Table and Chart Through Checkboxes (Multi-Selections) [page 1785].
• Filter a table, a chart or an R visualization by using a filter line. See Best Practice: Use Filter Line for Filtering Switchable Table and Chart [page 1789].
• Filter dimensions and display data according to hierarchies. See Best Practice: Filter Table and Chart Through Cascaded Filtering [page 1793].
• Control which measures and dimensions are displayed in tables. See Best Practice: Add and Remove Dimensions on a Table [page 1797].
• Filter a table or a chart by using a popup window. See Best Practice: Filter Table and Chart Through Settings Panel (Popup Window) [page 1805].
• Create and open a popup window that contains extra information about the selected element. See Best Practice: Handle Selections in Table and Chart and Open a Details Popup [page 1812].
As an application designer or story developer, when you use a planning model for the table in your analytic
application or optimized story, you can enable planning in the Builder panel or leverage setEnabled() API to
let planners input data.
When you add a table with planning model, Planning Enabled is by default active in its Builder panel. This
property determines whether the table is input-enabled or not.
Note
If you use BPC live data connections, planners can switch between read-only and input mode for tables in
view time by selecting Switch to Read-Only Mode or Switch to Input Mode from the context menu.
In addition, you can use setEnabled() API to let planners turn on read-only mode or input mode.
Example
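A minimal sketch of the setEnabled() usage described above, assuming a table widget named Table_1:

```
// Set the table to read-only mode so planners can't input data ...
Table_1.getPlanning().setEnabled(false);
// ... or switch it back to input mode.
Table_1.getPlanning().setEnabled(true);
```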
Note
If you set the setEnabled() API to false on a table with a BPC live data connection with an input-ready
query, two cases can occur:
• If no data cells have been changed, the table is set to read-only and the corresponding locks in BPC are
released.
• If data cells have been changed, no publish dialog is shown. The table is set to read-only mode, and the
locks and changed data are kept in the BPC system. If the data is published or reverted with the help of
APIs, the kept data is removed and the BPC locks are deleted.
As an application designer or story developer, you can use getPlanningAreaInfo with filter-related APIs to
define the planning area. You can further use the startEditMode and copy APIs to edit or copy the customized
planning area.
You can use the getPlanningAreaInfo API, which returns the PlanningAreaInfo object, initialized
with the filters applied to the table. If the data source associated with the table doesn't support planning,
undefined is returned.
Code Syntax
You can use the following APIs with the PlanningAreaInfo object to define the planning area that isn't in the
table context:
Code Syntax
• changeFilter: Sets a filter on the dimension. Any existing filter on the dimension is overwritten.
Code Syntax
The PlanningAreaMemberInfo object can be passed as a JSON object to method arguments. It has
members and hierarchy as properties.
• getFilters: Returns filters that define the planning area. Filters on Version and Measure dimensions are
ignored.
Code Syntax
getFilters(): PlanningAreaFilter[]
Here's a script example that defines the planning area with filters applied to Table_1:
Sample Code
Related Information
As an application designer or story developer, you can use version management APIs to publish, delete and
revert versions, even if the table isn't planning-enabled, for example.
Note
If you don't want version management scripts to be executed when planning is disabled for the table, use
Table.getPlanning().isEnabled() to check for this first.
As an application designer or story developer, you can use startEditMode API to start edit mode on specific
planning areas in public versions.
Code Syntax
startEditMode(planningPublicEditOption: PlanningPublicEditOption,
planningAreaFilters?: PlanningAreaFilters): boolean;
For the parameter PlanningPublicEditOption, you can specify the planning area:
Note
The startEditMode API won't be affected by the setting Auto-generate based on the table context in
tables' Builder panel.
Here's a script example to start edit mode on customized planning area with specified filters:
Sample Code
Related Information
As an application designer or story developer, you can use the publish API for publishing public or private
versions.
Before using the publish API, make sure that you've checked the following:
• You've created an application or optimized story that contains a table and SAP Analytics Cloud planning
model.
• You've assigned the planning model to the table.
• You've decided how to visualize or expose the publish API to the viewers. For example, you want
them to click a button to publish a version.
• If users try to publish a version that exceeds the resource limits and there're not enough resources
available in the system for the publishing operation, they'll see a corresponding message.
The following example shows how you can use the publish API for publishing the public version Actual:
Sample Code
The following example shows how you can use the publish API for publishing all public versions:
Sample Code
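Minimal sketches of the two cases above, consistent with the private-version example that follows (the getPublicVersions() call returning all public versions of the table's model is an assumption here):

```
// Publish the single public version "Actual".
var actualVersion = Table_1.getPlanning().getPublicVersion("Actual");
if (actualVersion) {
    actualVersion.publish();
}

// Publish all public versions (getPublicVersions() assumed).
var publicVersions = Table_1.getPlanning().getPublicVersions();
for (var i = 0; i < publicVersions.length; i++) {
    publicVersions[i].publish();
}
```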
You can use the publish API for private versions in two different ways:
Sample Code
2. The target version is specified. You can further specify the default behavior for publishing a private version
to an unrelated public version by the parameter updateChangedDataOnly. If it's false, the entire private
version overwrites the public target version. If it's true, only the changed data is published to the public
target version.
Sample Code
// override the forecast version with myActual and update only changed data
var myActualVersion = Table_1.getPlanning().getPrivateVersion("myActual");
var forecastVersion = Table_1.getPlanning().getPublicVersion("Forecast");
if (myActualVersion && forecastVersion) {
myActualVersion.publish(forecastVersion, true);
};
You can specify a parameter in the publish API to let viewers resolve conflicts with another viewer's changes
when publishing a version.
Note
The API works whether Display options to resolve version with publish conflicts is enabled or not in the
system configuration.
For publishing public versions, you can choose to show the warning dialog for viewers to resolve conflicting
changes, directly publish the version with conflicting changes overwritten by another viewer without showing
the dialog, or revert the version without showing the dialog:
//option 1: Shows the warning dialog for viewers to resolve conflicting changes.
{ publicPublishConflict: PublicPublishConflict.ShowWarning }
//option 2: Publishes the version with conflicting changes overwritten by
another viewer without showing the conflict warning dialog.
{ publicPublishConflict: PublicPublishConflict.PublishWithoutWarning }
//option 3: Reverts the version without showing the conflict warning dialog.
{ publicPublishConflict: PublicPublishConflict.RevertWithoutWarning }
For publishing private versions, you can choose to either show the warning dialog for viewers to resolve
conflicting changes, or directly publish the version with conflicting changes overwritten by another user
without showing the dialog:
//option 1: Shows the warning dialog for viewers to resolve conflicting changes.
//option 1: Shows the warning dialog for viewers to resolve conflicting changes.
{ privatePublishConflict: PrivatePublishConflict.ShowWarning }
//option 2: Publishes the version with conflicting changes overwritten by
another user without showing the conflict warning dialog.
{ privatePublishConflict: PrivatePublishConflict.PublishWithoutWarning }
As an application designer or story developer, you can use the publishAs API for creating a public version
from a private one.
Before using the publishAs API, make sure that you've checked the following:
• You've created an application or optimized story that contains a table and SAP Analytics Cloud planning model.
• You've assigned the planning model to the table.
• You've made up your mind on how to visualize or expose the publishAs API to the end users.
• If users try to publish a version that exceeds the resource limits and there're not enough resources
available in the system for the publishing operation, they'll see a corresponding message.
Via the publishAs API, you can specify the name for the new public version. Optionally, you can specify its
category. If no category is specified, the category of the private version is used.
Example
In the following example, the private version myActual is published as a public budget version called newBudget
when the script gets executed:
Sample Code
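A minimal sketch of this example, assuming the table is called Table_1:

```
// Publish the private version "myActual" as a new public
// budget version named "newBudget".
var myActualVersion = Table_1.getPlanning().getPrivateVersion("myActual");
if (myActualVersion) {
    myActualVersion.publishAs("newBudget", PlanningCategory.Budget);
}
```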
As an application designer or story developer, you can use the revert API for reverting versions of planning
data.
Before using the revert API make sure that you've checked the following:
• You've created an application or optimized story that contains a table and SAP Analytics Cloud planning
model.
• You've assigned the planning model to the table.
• You've made up your mind on how to visualize or expose the revert API to the end users. For example, you
may want them to click on a button to revert a version.
Example
The following example shows how you can use the revert API for reverting the modified public plan version:
Sample Code
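A minimal sketch, assuming the table is called Table_1 and the public plan version is named "Plan" (the version name is illustrative):

```
// Revert the pending changes of the public version "Plan".
var planVersion = Table_1.getPlanning().getPublicVersion("Plan");
if (planVersion) {
    planVersion.revert();
}
```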
In the following example you can use the revert API for reverting all the modified public versions:
Sample Code
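A sketch of reverting every public version in a loop, assuming the table is called Table_1:

```
// Revert all modified public versions of the table's planning model.
var versions = Table_1.getPlanning().getPublicVersions();
for (var i = 0; i < versions.length; i++) {
    versions[i].revert();
}
```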
As an application designer or story developer, you can use the deleteVersion API for deleting a public or
private version.
Before using the deleteVersion API, make sure that you've checked the following:
• You've created an application or optimized story that contains a table and SAP Analytics Cloud planning
model.
• You've assigned the planning model to the table.
• You've made up your mind on how to visualize or expose the deleteVersion API to the end users.
Example
The following script shows how you can use the deleteVersion API for deleting a specific version:
Sample Code
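A minimal sketch, assuming the table is called Table_1 and a private version named "myForecast" exists (the version name is illustrative):

```
// Delete the private version "myForecast" if it exists.
var versionToDelete = Table_1.getPlanning().getPrivateVersion("myForecast");
if (versionToDelete) {
    versionToDelete.deleteVersion();
}
```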
As an application designer or story developer, you can use the PlanningVersion.copy API for creating a
private version from an existing version. This function only works for SAP Analytics Cloud models.
Before using the copy API make sure that you've checked the following:
• You've created an application or optimized story that contains a table and SAP Analytics Cloud planning
model.
• You've assigned the planning model to the table.
• You've made up your mind on how to visualize or expose the copy API to the end users. For example, you
may want them to click on a button to copy a version.
Note
All versions are copied with the default currency conversion, which can't be changed.
You can use the following API to copy and create versions:
Code Syntax
You can specify the area to be copied via the parameter planningCopyOption:
Sample Code
var originalVersion =
Table_1.getPlanning().getPrivateVersion("privateVersionToCopy");
if (originalVersion) {
originalVersion.copy("targetVersionName", PlanningCopyOption.AllData,
PlanningCategory.Budget)
};
Sample Code
var originalVersion =
Table_1.getPlanning().getPublicVersion("publicVersionToCopy");
if (originalVersion) {
originalVersion.copy("targetVersionName", PlanningCopyOption.NoData)
};
• PlanningArea: Only data in the planning area is included when the version is copied.
The following script copies the data in planning area from a public version to create a private one with a
unique name:
Sample Code
var originalVersion =
Table_1.getPlanning().getPublicVersion("publicVersionToCopy");
if (originalVersion) {
originalVersion.copy("Version" + Date.now().toString(),
PlanningCopyOption.PlanningArea);
}
The following script copies the data in planning area from a private version to create one with a unique
name in forecast category:
Sample Code
var originalVersion =
Table_1.getPlanning().getPrivateVersion("privateVersionToCopy");
if (originalVersion) {
originalVersion.copy("Version" + Date.now().toString(),
PlanningCopyOption.PlanningArea, PlanningCategory.Forecast);
}
• VisibleData: Only data in the viewport is included when the version is copied.
• CustomizedPlanningArea: Only data in the customized planning area is included when the version
is copied. You need to specify the dimensions and members that construct the planning area via
PlanningAreaFilters, which is mandatory in this case.
The following script applies filters and then copies the data in the customized planning area from an actual
version to create another one:
Sample Code
As an application designer or story developer, you can use the getOwnerId API for getting the ID of the user
creating the private version.
Getting the ID of the user who created the private version lets you check whether the version is owned by the
current user. The check is necessary because a user can't perform certain operations on a private version, like
deleting, copying, or publishing it, if the user doesn't own it.
Note
This API can only be called on a private version and is therefore not relevant for BPC models.
Sample Code
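A minimal sketch of using getOwnerId for such an ownership check, assuming the table is called Table_1 and the private version is named "myActual" (Application.getUserInfo() is assumed to return the current user's info, including an id field):

```
// Publish the private version only if the current user owns it.
var privateVersion = Table_1.getPlanning().getPrivateVersion("myActual");
if (privateVersion &&
    privateVersion.getOwnerId() === Application.getUserInfo().id) {
    privateVersion.publish();
}
```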
You can use the Version Management APIs for SAP BPC models in analytic applications and optimized stories.
In SAP Business Planning and Consolidation (SAP BPC), there's always a version (of type public version with ID
"All Versions"), which you can't delete. Therefore, when working with SAP BPC models, you don't have to think
about private version APIs or the deleteVersion API.
To retrieve the SAP BPC version, you can use either the getPublicVersions()[0] or
getPublicVersion("All Versions") API. Both API calls return the same version.
Example
This example shows how you can use the publish API to publish modified data on SAP BPC:
Sample Code
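A minimal sketch, assuming the table Table_1 is bound to an SAP BPC model with the single "All Versions" public version:

```
// Publish the modified data of the single SAP BPC version.
var bpcVersion = Table_1.getPlanning().getPublicVersion("All Versions");
if (bpcVersion) {
    bpcVersion.publish();
}
```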
Example
This example shows how you can use the revert API to revert a version.
Sample Code
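A minimal sketch, again assuming the table Table_1 is bound to an SAP BPC model:

```
// Revert the pending changes of the single SAP BPC version.
var bpcVersion = Table_1.getPlanning().getPublicVersion("All Versions");
if (bpcVersion) {
    bpcVersion.revert();
}
```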
Learn how to prevent possible error messages when you use version management APIs.
A previously deleted version, whether it was deleted by the script API or end user interaction, behaves
differently from a non-deleted one.
Reproduction
The easiest way to reproduce this is to delete any version and simply continue with script execution:
When calling any action function (revert, publishAs, publish or delete), the following error message is
displayed: Version 'newBudget' does not exist anymore.
There are several errors that can occur when you use the publishAs API.
The following code leads to an error, as the version name is too long:
The following error message is displayed: Enter a version name that is shorter than 255
characters.
The following code leads to an error, as the version name contains characters that aren't allowed:
The following error message is displayed: Enter a version name that doesn't contain the
characters \\ : ; ' = [ ].
The following code leads to an error, as the version name already exists:
The following error message is displayed: You can't save this version with name 'Actual',
because this name already exists. Please use a different name.
The following code leads to an error, as the actual version shouldn't be deleted.
The following error message is displayed: You can't delete the public version of category
'Actual'. This version always has to be available.
As an application designer or story developer, you can use setUserInput and submitData APIs to enable
end users to change the value of a cell in a table in view time.
• You've assigned a planning model to the table - either an SAP BPC model or an SAP Analytics Cloud
planning model.
• You've set the table to planning-enabled. Just check whether you've selected the table property Planning
enabled.
When you change a booked value, the leaf members that aggregate up to that value are adjusted proportionally
to reflect your change. When you enter a new value in an unbooked cell, which displays a dash character (-)
instead of a value, values are also booked to one or more leaf members for each dimension that aggregates up
to the cell.
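The proportional adjustment described above can be illustrated with a small piece of plain JavaScript (this is not a SAC API, just a sketch of the disaggregation arithmetic):

```javascript
// Illustrative sketch: how a changed aggregate value is spread
// proportionally across its booked leaf members.
function disaggregate(leafValues, newTotal) {
  // Each leaf keeps its share of the old total.
  const oldTotal = leafValues.reduce((a, b) => a + b, 0);
  return leafValues.map(v => (v * newTotal) / oldTotal);
}

// Two leaves booked 10 and 30 (total 40); the user enters 80 on the parent.
console.log(disaggregate([10, 30], 80)); // [ 20, 60 ]
```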
Note
Be aware that you can’t use the setUserInput API for planning on a null cell displaying a # character for
SAP BPC models as it's currently not supported.
This API updates the cell of a table with a specified value and returns true if the operation was successful and
false if the operation failed.
Sample Code
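A minimal sketch of a setUserInput call, assuming the table is called Table_1 (the value is illustrative and must match the user's formatting settings):

```
// Update the currently selected cell with a new value.
Table_1.getPlanning().setUserInput(
    Table_1.getSelections()[0], // a Selection from the table
    "42"                        // value as a formatted string
);
```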
The selection must be a selection from the table, and the value must be supplied according to the user
formatting settings. It can be one of the following:
Note
When setting a value for a key figure of a BPC model, in addition to using a string that represents a
number, you can use digits, letters, and special characters in the value.
This API submits the updated cell data with all the modified data and triggers the refresh of data. It returns true
if the operation was successful and false if the operation failed.
Sample Code
Table_1.getPlanning().submitData();
The submitData API should be called after all the desired cells have been planned. Just call submitData()
once to observe the results on the planned table.
Example
The example is based on a table called Table_1 that has an underlying planning model.
Table_1.getPlanning().setUserInput(
Table_1.getSelections()[0],
"500000000.99");
Table_1.getPlanning().submitData();
Useful Details
When you work with the APIs, note the following information:
• If you add a new dimension using an API, you must first expand the dimension to be able to plan the cell.
• If you plan a cell with the same value, the API returns true and overwrites the existing value.
• If you plan setUserInput() with an invalid value, an error message is displayed and the API returns false.
• If there's more than one table with the same underlying model, all the tables are updated once you plan a
cell.
Note
After calling setUserInput(), it's best to immediately submit your input with
Planning.submitData(). Otherwise, unexpected results may occur. For example, if other operations
are performed in scripts or by user interaction that lead to the refresh of the planning table before your
input submission, your input could be discarded.
Related Information
You can use data locking APIs in analytic applications and optimized stories to check if a model is data-locking
enabled and to set or get data locks, even if the table is not planning enabled.
• Table.getPlanning().getDataLocking
• Table.getPlanning().getDataLocking.getState
• Table.getPlanning().getDataLocking.setState
Note
• Data locking and the data locking APIs are not supported for SAP BPC models.
• If data locking is disabled for a model, all locks will be deleted. If it is turned on again later, all members
are reset to the default lock state. The same happens if the default lock state or driving dimensions are
changed.
You can use the getDataLocking API to check if a planning model is enabled for data locking.
The check is necessary because an end user cannot perform certain operations on a table, like setState and
getState, if the model is not data-locking enabled.
Example
Sample Code
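A minimal sketch of such a check, assuming the table is called Table_1 and that the data locking object exposes an isEnabled() check (hedged; the exact call may differ in your version):

```
// Only offer lock operations if the model supports data locking.
var dataLocking = Table_1.getPlanning().getDataLocking();
if (dataLocking.isEnabled()) {
    // safe to call getState / setState here
}
```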
You can read the locking state of a cell belonging to a driving dimension by using the
Table.getPlanning().getDataLocking.getState API. This API function only works for SAP Analytics
Cloud planning models that are enabled for data locking.
Example
Please have a look at the example below for how to get the data lock state for a selected cell, instead of
using the Manage Data Locks quick actions menu option of the table.
Sample Code
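A minimal sketch, assuming the table is called Table_1 and a single cell is selected:

```
// Read the data lock state of the currently selected cell.
var selection = Table_1.getSelections()[0];
if (selection) {
    var state = Table_1.getPlanning().getDataLocking().getState(selection);
    // state is one of: open, restricted, locked, or mixed
}
```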
To make this example work, you have to select a cell on the table for which you want to retrieve the lock
state. Alternatively, you can define the selection yourself in the script editor.
If you have activated the option to show data locks for the table (select Show/Hide Data Locks in the
table's quick action menu), the lock icons will be updated after the API has been run.
The API returns true if the setting of the lock was successful, and false for the following cases:
• You can't set the data lock state on a private version. In this case, the error message You can only set data
locks on public versions. Please use a public version and try again. is displayed.
• If you try to set the state with the value Mixed, you will be notified in the script editor with the error
message You can't set the state with the value 'mixed'. Please specify either 'open', 'restricted' or 'locked' as
value.. The same error message is shown at runtime, if you attempt to execute the script, and the script
fails.
• If you select multiple cells and attempt to set the state, only the selection of the first cell will be considered
when setting the state.
Example
Sample Code
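A minimal sketch of setting the lock state on a selected cell, assuming the table is called Table_1 and a DataLockingState enumeration with a Locked member (hedged; check the API reference for the exact enumeration name):

```
// Lock the currently selected cell of a public version.
var selection = Table_1.getSelections()[0];
if (selection) {
    Table_1.getPlanning().getDataLocking().setState(
        selection, DataLockingState.Locked);
}
```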
Note
This API replaces the table's quick action menu Manage Data Locks.
As an application designer or story developer, after adding a BPC Planning Sequence Trigger widget, you can
use related APIs to let viewers run the planning sequence without manually setting any variable values and do
various operations on the variable values of planning functions in view time.
You can add a BPC Planning Sequence Trigger widget to let viewers run a planning sequence with one or more
planning functions with BPC live data connection.
Then, you can use a combination of related APIs to offer viewers more options to interact with it. For example,
they can run a planning sequence with the pre-defined variable values and modify the values at runtime as well.
Note
As BPC live data connection isn't supported on mobile devices, the related APIs aren't supported there either.
You can use the following variable related APIs on data source level for setting, getting, removing and copying
the variable values of a planning sequence:
Code Syntax
Code Syntax
BpcPlanningSequence.execute(): BpcPlanningSequenceExecutionResponse;
The following API is for opening the prompt dialog where viewers can modify the variables:
Code Syntax
BpcPlanningSequence.openPromptDialog();
You can write scripts in analytic applications and optimized stories to let viewers format the existing number
values and date values.
• number values
• Set maximum decimal places
Example: 1.23456 will be turned into 1.234 if the maximum decimal place is set as 3.
• Set minimum decimal places
Example: 1.23 will be turned into 1.230 if the minimum decimal place is set as 3.
• Set the decimal separator
Example: 1.23 will be turned into 1 23 if the decimal separator is set as empty space.
• Enable digit grouping
Example: If digit grouping is enabled, 123456 will be turned into 123,456.
• Set separator of digit grouping
Example: 123,456 will be turned into 123.456, if dot is set as the separator of the digit grouping.
• Display sign and set sign format
Example: In a normal case, “+” and “-” can be set as the sign format. In some special cases, “()” can be
set to indicate negative value too.
• Display scale and set scale multiplier and affix
Example: E can be used as the scale multiplier, such as 1E-3; M can be set as a scale affix so that
123,456,789 can be set as 123M.
Note
Number format APIs only apply to measures on axes, Feed.ValueAxis and Feed.ValueAxis2.
Tooltip measures on axes, Feed.TooltipValueAxis, aren’t supported, for example.
• date values
Set date format: For example, 2016/01/10 will be turned into Jan 10, 2016 if you set a date pattern in MM
dd, yyyy format.
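The number-formatting rules above can be illustrated with plain JavaScript's built-in Intl.NumberFormat (this is not the SAC NumberFormat API, only a sketch of the same concepts):

```javascript
// Minimum decimal places: 1.23 -> "1.230"
const min3 = new Intl.NumberFormat("en-US", { minimumFractionDigits: 3 });
console.log(min3.format(1.23)); // "1.230"

// Digit grouping: 123456 -> "123,456"
const grouped = new Intl.NumberFormat("en-US");
console.log(grouped.format(123456)); // "123,456"

// "." as grouping separator and "," as decimal separator (German convention)
const de = new Intl.NumberFormat("de-DE", { minimumFractionDigits: 3 });
console.log(de.format(123456.789)); // "123.456,789"
```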
For detailed information about all related APIs, refer to Analytics Designer API Reference (for analytic
applications) and Optimized Story Experience API Reference (for optimized stories).
Example
You'd like to let end users view the formatted current system date in a text widget Text_1 via clicking a
button. Write the following script for the button:
Sample Code
You'd like to let end users format the digit grouping of a number input in the widget InputField_1 according
to their user preference settings via clicking a button. Write the following script for the button:
Sample Code
var s = InputField_1.getValue();
var n = Number.parseFloat(s);
Text_1.applyText(NumberFormat.format(n));
Example
You'd like to let end users format the digit grouping of a number input in the widget InputField_1 according
to your customized setting. Here's the script for the button:
Sample Code
var s = InputField_1.getValue();
var n = Number.parseFloat(s);
var nf = NumberFormat.create();
nf.setMinimumDecimalPlaces(3);
nf.setGroupingSeparator(".");
nf.setDecimalSeparator(",");
Text_1.applyText(nf.format(n));
In this case, the new format will apply the digit grouping separator “.” and the decimal separator “,” and the
formatted number will be displayed in the widget Text_1.
By leveraging the refresh data API in analytic applications or optimized stories, you can let viewers trigger data
refresh either at widget, data source, application or story level.
You can use the following API to refresh all data sources of an application or story, which updates all the
widgets associated with each data source:
Code Syntax
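A hedged sketch of the signature (the optional data source list parameter is inferred from the description below; see the API reference for the exact form):

```
Application.refreshData(dataSources?: DataSource[]): void
```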
You can also specify the data sources, and only the widgets associated with these data sources will be updated.
The supported widgets are chart and table.
The script is fully executed without waiting for all the associated widgets to be updated.
You can use the following API, which triggers the data refresh of a specific data source and updates the widgets
associated with it at the same time. The supported widgets are chart, table and geo map.
Code Syntax
DataSource.refreshData(): void
You can use the following API to refresh the data of a specific widget:
Code Syntax
Widget.getDataSource().refreshData();
Example
You can write the following script for the onClick event of a button, so that all the data sources of your analytic
application or story will be refreshed when end users click this button:
Sample Code
Application.refreshData();
In this example, you want to let end users click a button to refresh the data sources of Chart_1 and Table_1.
Write the following script for the onClick event of the button:
Sample Code
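A minimal sketch, passing the two data sources explicitly to Application.refreshData:

```
// Refresh only the data sources of Chart_1 and Table_1.
Application.refreshData([Chart_1.getDataSource(), Table_1.getDataSource()]);
```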
Sample Code
Chart_1.getDataSource().refreshData();
Chart_2.getDataSource().refreshData();
Timer_1.start(60);
For more information about how to set up and use a timer, see Set up Timers [page 1706].
Related Information
Note
Under Data Refresh in the Builder panel of a widget or the whole canvas, you can find the following three modes
for data refresh:
Note
Widgets in inactive popup dialogs, tabs in tab strips and pages in page books are treated as invisible
ones.
Script APIs
You can also leverage DataSource.setRefreshPaused API to let end users set the data refresh mode in
view time. The script is fully executed without waiting for all the widgets associated with the data source to be
updated when their pause mode isn't on. The supported widgets are charts and tables.
Code Syntax
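A hedged sketch of the signature, inferred from the PauseMode usage in the example below (the exact parameter type may differ in your version):

```
DataSource.setRefreshPaused(paused: PauseMode | boolean): void
```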
The modes defined in script APIs correspond to the options in the Builder panel:
You can also use ApplicationPage.setRefreshPaused (in Optimized Story Experience) and
Application.setRefreshPaused to enable or disable data refresh on page, story or application levels in
view time. When the pause of data refresh is disabled, the widgets associated with the data sources are
updated at the same time with data refresh.
Sample Code
Note
If you set data refresh mode for the same widget in both Builder panel and script editor, scripting will
overwrite the settings in the panel.
When planning is enabled, viewers can't pause a widget via setRefreshPaused API. They need to disable
planning first via setPlanningEnabled API.
Note
However, some local property and styling setting APIs such as DataSource.setMemberDisplay
can cause the widget to re-render.
• Open Explorer API and Smart Insights use the current state of the widget.
Note
• Prompt APIs can be called but the widget won’t refresh after prompt values are set.
There’re exceptional cases where some queries are still triggered even when widgets don't always refresh:
Note
Therefore, we recommend you set MemberInfo if possible, like the example below:
Chart_1.getDataSource().setDimensionFilter("Location_4nm2e04531", {
dimensionId: "Location_4nm2e04531",
id: "[Location_4nm2e04531].[State_47acc246_4m5x6u3k6s].&[SA1]",
description: "California",
displayId: "California"
});
Note
This is applicable when a variable is set via scripting in view time or an initial value is set in edit time.
• ProcessHierarchyCatalogManagerCreation
• getHierarchiesFromCatalog
• submitVariable
• fetchMembersWithDescription
• submitVariable
• processSynchronization
Table.openSelectModelDialog:
• turnOffBatchAndShutdownQueryManager
• buildquerymanager
• activateVariableVariant
• submitVariables
• submitVariables
• turnOffBatchAndShutdownQueryManager
• processMemberHelp (to fetch members of Account)
Initial loading of application or story with SAP BW model and variables - two queries are triggered:
• fetchStoryVariableDetail
• submitBWVariableforActivePage
• submitVariable
• processSynchronization
• processSynchronization
• processSynchronization
Example
Let’s say you want to let end users turn on or off data refresh and interaction of Chart_1 and Table_1 via two
switches, Switch_Refresh and Switch_Interaction.
Sample Code
if (Switch_Refresh.isOn()) {
Chart_1.getDataSource().setRefreshPaused(PauseMode.Off);
Table_1.getDataSource().setRefreshPaused(PauseMode.Off);
} else {
Chart_1.getDataSource().setRefreshPaused(PauseMode.On);
Table_1.getDataSource().setRefreshPaused(PauseMode.On);
}
Sample Code
Chart_1.setEnabled(!Chart_1.isEnabled());
Table_1.setEnabled(!Table_1.isEnabled());
When viewers turn off the two switches, the chart and table won’t refresh when they perform various activities,
such as adding filters to and setting hierarchy level of the chart and table, setting the variable value used by the
reference line in the chart and collapsing the parent node in the table.
Once they turn on the two switches again, the chart and table automatically refresh and display the latest
results.
As an application designer or story developer, you can use navigation APIs to let end users navigate from one
analytic application or story to another application or a specific page of a story in view time.
You can define the target story or analytic application to navigate to, parameters to append to its URL, and
whether to open it in a new tab by using the following APIs:
Code Syntax
NavigationUtils {
openStory(storyId: string, pageId: string, parameters ? : UrlParameter |
UrlParameter[], newTab ? : boolean): void
openApplication(appId: string, parameters ? : UrlParameter |
UrlParameter[], newTab ? : boolean): void
createStoryUrl(storyId: string, pageId: string, parameters ? :
UrlParameter | UrlParameter[]): string
createApplicationUrl(appId: string, parameters ? : UrlParameter |
UrlParameter[]): string
openUrl(url: string, newTab ? : boolean): void
}
Note
We highly recommend you specify the desired mode for the target application or story via the URL
parameter in the API.
Sample Code
NavigationUtils.openStory("story_id", "page_id",
UrlParameter.create('mode','embed'),false);
NavigationUtils.openApplication("application_id",
UrlParameter.create('mode','embed'),false);
NavigationUtils.createStoryUrl("story_id", "page_id",
UrlParameter.create('mode','embed'));
NavigationUtils.createApplicationUrl("application_id",
UrlParameter.create('mode','embed'));
For a list of display parameters, see Use Application URL [page 1965] and Open Story URL.
Example
If you want to let users go to a specific page in a story when they click a button, and the story will be opened in
view mode with 123 assigned to script_var_1, write the following script for the button:
NavigationUtils.openStory("story_id", "page_id",
[UrlParameter.create('mode','view'), UrlParameter.create("p_script_var_1",
"123")],false);
In the script above, you don't need to enter the story ID and page ID manually. By pressing Ctrl + Space , you
can open the value help. Choose Open story selector, and you can go to My Files and select the corresponding
story.
In some special cases, API calls will be ignored as they set invalid values:
• Changing the height of widgets, such as dropdown, filter line, input slider, and range slider, won't take
effect because they are read-only.
• Different widgets have minimal width and height size differences. Input values smaller than the minimal
size won't take effect.
• You can't set the size and position of a canvas. You can only get a canvas's current size and position (always
{top: 0px, left: 0px}) and set them on a widget.
Code Syntax
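A hedged sketch of the layout-related signatures, inferred from the usage in the example below (method names and types may differ slightly in your version):

```
WidgetLayout {
    setWidth(width: number): void
    setHeight(height: number): void
    setTop(top: LayoutValue): void
    setLeft(left: LayoutValue): void
}
LayoutValue.create(value: number, unit: LayoutUnit): LayoutValue
```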
Example
Add a table Table_1, a button Button_1 and two shapes Shape_1 and Shape_2 to the canvas or page. Write
the script for Button_1 so that it can set table size to 600*400, and meanwhile set the top margin of
Shape_1 to 60 px and the top margin of Shape_2 to 10 percent of its parent container:
Sample Code
Table_1.getLayout().setWidth(600);
Table_1.getLayout().setHeight(400);
Shape_1.getLayout().setTop(LayoutValue.create(60,LayoutUnit.Pixel));
Shape_2.getLayout().setTop(LayoutValue.create(10,LayoutUnit.Percent));
If you set the canvas or page's size to dynamic, after you click the button and resize your window, the top
margin of Shape_1 will stay at 60 px while the top margin of Shape_2 will automatically adjust to the new size
of the window.
Example
When the application or story's canvas size is dynamic, write the script for the onResize event to make the
size of Table_1 automatically adjust to the size of the window whenever you resize the window.
Sample Code
Related Information
Understand Styling Options of Canvas and Widgets in Analytic Applications [page 1930]
In Optimized Story Experience, as a story developer, you can use script APIs related to story filters and
variables to let story viewers change the visibility of the filter panel, set story filters and set story variables, for
example.
You can use the following APIs to let viewers show or hide the filter panel and check its visibility respectively:
Sample Code
You can use script APIs related to story filters, which are applied to all widgets with the same data source
across pages. The APIs also work for linked analysis applied to All Widgets in the Story.
You can use the following script APIs to set, remove and get the story filters respectively:
Sample Code
FileDataSource.setDimensionFilter(dimension: string|+DimensionInfo,
filter: string|+FilterValue|[string]|[+FilterValue])
FileDataSource.removeDimensionFilter(dimension: string|+DimensionInfo)
FileDataSource.getDimensionFilters(dimension: string|+DimensionInfo):
[+FilterValue]
Note
Dimension and fixed date range filters are supported, while widget-level measure and dynamic date filter
aren't supported by the APIs.
In addition, via the APIs you can let viewers create custom current date (CCD) input controls based on either
calendar or fiscal time:
Sample Code
class CurrentDateTime {
enum CalendarTimeGranularity {
Year,
HalfYear,
Quarter,
Month,
Week,
Day
}
class CalendarTime {
granularity: CalendarTimeGranularity,
year: integer, // >= 0, for example, 2013
halfYear?: integer, // for example, 1..2
quarter?: integer, // for example, 1..4
month?: integer, // for example, 1..12
week?: integer, // for example, 1..53
day?: integer, // for example, 1..31 (day need to work
together with month and year)
}
enum FiscalTimeGranularity {
FiscalYear,
FiscalQuarter,
Period,
FiscalDay
}
class FiscalTime {
granularity: FiscalTimeGranularity,
fiscalYear:integer, // >= 0, for example, 2013
fiscalQuarter?: integer, // for example, 1..4
period?: integer, // for example, 1..12
fiscalDay?: integer, // for example, 1..31 (fiscalDay need to work
together with period and fiscalYear)
}
}
You can use the following script APIs to set, remove and get the story variables respectively:
Sample Code
FileDataSource.setVariableValue(variable: string|+VariableInfo,
variableValue: string|number|+VariableValue|[+VariableValue], options?:
+SetVariableValueOptions)
FileDataSource.removeVariableValue(variable: string|+VariableInfo)
FileDataSource.getVariableValues(variable: string|+VariableInfo):
[+VariableValue]
You can use the selection API for tables and charts in analytic applications and optimized stories to retrieve the
currently selected members and their dimensions.
Get a Selection
The getSelections() API returns an array of Selection objects, each representing a single selection.
The properties of such a Selection object are the dimension IDs. The values are the member IDs.
Sample Code
{
"dim1": "member1",
"dim2": "member2"
}
You can access the selected member of a dimension using var memberId =
selection["dimensionId"];
You can iterate over all dimensions of a selection with a for-in loop like this:
Sample Code
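A minimal sketch of such a loop, assuming the table is called Table_1:

```
// Iterate over all dimensions of the first selection.
var selection = Table_1.getSelections()[0];
for (var dimensionId in selection) {
    var memberId = selection[dimensionId];
    console.log(dimensionId + " -> " + memberId);
}
```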
With a selection you can retrieve specific data from data sources.
Sample Code
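A minimal sketch of passing a selection to getData(), assuming the table is called Table_1 and that getData() returns a data cell with a formattedValue property (hedged; check the API reference for the exact return type):

```
// Retrieve the data cell for the currently selected members.
var selection = Table_1.getSelections()[0];
if (selection) {
    var cell = Table_1.getDataSource().getData(selection);
    console.log(cell.formattedValue);
}
```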
When you specify the selection object to be passed onto a getData() call, a value help for dimensions and
members helps you enter the required dimension and member IDs.
Related Information
You can use the result set API in analytic applications and optimized stories to get a result set based on an input
data selection in a table or chart, so that you can traverse and get each data cell in the result set.
Before the result set API was introduced, you could retrieve individual data cells using DataSource.getData().
However, getData() can't retrieve all members of a dimension in a specific result set.
In a result set API, the input data selection specifies all or an arbitrary subset of the initial data set, for example:
Sample Code
If the input data selection is left empty, all values of the initial data set will be retrieved.
For a chart with multiple measures per data point, for example, a bubble or scatterplot chart, the result set API
will return all available measures of each data point as different records. And if the source data has a hierarchy,
parent member information will also be returned.
Some changes made by end users at runtime will affect data returned by the result set API as well, including:
Note
Note
Script APIs
Code Syntax
For more information about the APIs, refer to Analytics Designer API Reference (for analytic applications) or
Optimized Story Experience API Reference (for optimized stories).
Example
The first example specifies that the input parameter location equals CT1.
Sample Code
Chart_1.getDataSource().getResultSet({
    "@MeasureDimension": "[Account_BestRunJ_sold].[parentId].&[Gross_Margin]",
    "Location_1": "[Location_1].[State_1].&[CT1]"
});
Output Code
[{
"@MeasureDimension": {
"id": "[Account_BestRunJ_sold].[parentId].&[Gross_Margin]",
"description": "Gross Margin",
"formattedValue": "489000000",
"rawValue": "489000000"
},
"Location_1": {
"id": "[Location_1].[State_1].&[CT1]",
"description": "California",
"parentId": "[Location_1].[State_1].&[SouthAmerica]"
}
}]
Example
The second example leverages the getDataSelections API to get the product members of the selected
data in the table.
Sample Code
As an application designer or story developer, you can use the script API DataSource.setDimensionFilter()
to set range filters and exclude filters.
Exclude Filters
You can filter out members from the drill-down with exclude filters. The following examples show the use of
exclude filters:
• In the following example, a single filter value is set for dimension "EMPLOYEE_ID". This keeps only the
member with employee ID 230 in the drill-down:
Sample Code
• In the following example, a single but excluding filter value is set for dimension "EMPLOYEE_ID". This
removes the member with employee ID 230 from the drill-down:
Sample Code
• In the following example, multiple filter values are set for dimension "EMPLOYEE_ID". This keeps the
members with employee IDs 230 and 240 in the drill-down:
Sample Code
• In the following example, multiple but excluding filter values are set for dimension "EMPLOYEE_ID". This
removes the members with employee IDs 230 and 240 from the drill-down:
Sample Code
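The four cases above can be sketched as follows. This is a sketch: Table_1 is a placeholder widget, and the object forms with an exclude flag (matching the SingleFilterValue and MultipleFilterValue types used elsewhere in this documentation) are assumptions to verify against the API reference:

```
// keep only the member with employee ID 230 in the drill-down
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", "230");

// remove the member with employee ID 230 from the drill-down
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {value: "230", exclude: true});

// keep the members with employee IDs 230 and 240 in the drill-down
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", ["230", "240"]);

// remove the members with employee IDs 230 and 240 from the drill-down
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {values: ["230", "240"], exclude: true});
```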
Range Filters
You can filter ranges of members in the drill-down with range filters.
Note
The following examples show the use of range filters on numeric dimensions:
• In the following example, the range filter applied to dimension "EMPLOYEE_ID" keeps the members with
employee IDs between 230 and 240 in the drill-down:
Sample Code
• In the following example, the range filter applied to dimension "EMPLOYEE_ID" keeps the members with
employee IDs less than 230 in the drill-down:
Sample Code
• In the following example, the range filter applied to dimension "EMPLOYEE_ID" keeps the members with
employee IDs less than or equal to 230 in the drill-down:
Sample Code
• In the following example, the range filter applied to dimension "EMPLOYEE_ID" keeps the members with
employee IDs greater than or equal to 230 in the drill-down:
Sample Code
• In the following example, the range filter applied to dimension "EMPLOYEE_ID" keeps the members with
employee IDs greater than 230 in the drill-down:
Sample Code
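The five range cases above can be sketched as follows. This is a sketch only: Table_1 is a placeholder, and the range object property names (from, to, less, lessOrEqual, greaterOrEqual, greater) are assumptions that should be checked against the API reference:

```
// keep employee IDs between 230 and 240
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {from: "230", to: "240"});

// keep employee IDs less than 230
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {less: "230"});

// keep employee IDs less than or equal to 230
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {lessOrEqual: "230"});

// keep employee IDs greater than or equal to 230
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {greaterOrEqual: "230"});

// keep employee IDs greater than 230
Table_1.getDataSource().setDimensionFilter("EMPLOYEE_ID", {greater: "230"});
```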
Here's a scripting example, where time range granularity, start and end are specified:
Sample Code
Note
• For tables, the setDimensionFilter() API still works based on the default dimension hierarchy.
• For charts, the setDimensionFilter() API won't take effect unless the time dimension displays a
hierarchy. In other words, to set a time range filter on charts via the API, make sure that the time
dimensions are in a hierarchy. For example, you can call setHierarchy() before setDimensionFilter():
Sample Code
Chart_2.getDataSource().setHierarchy("Date_703i1904sd", "YHM");
Chart_2.getDataSource().setDimensionFilter("Date_703i1904sd", [CurrentYearFilter, PreviousYearFilter]);
You can use the getVariableValues() API in analytic applications and optimized stories; it returns an array
of the current values of the specified variable.
Sample Code
Note
Currently for variables using SAP BW models, getVariableValues() and other variable-related APIs can
only access input-ready ones. Customer Exit variables can’t be accessed via these APIs.
Sample Code
What you set with setVariableValue is not necessarily the same as what is returned from
getVariableValues. For example, you have several options to set multiple values, and the following
two are equivalent:
Sample Code
The getVariableValues API will always return as few VariableValue objects as possible, grouping all
single values with the same sign (including or excluding) into one. In the previous example, an array with
only one object is returned by getVariableValues:
You can use the API getDimensionFilters() in analytic applications and optimized stories, which returns
an array of the current filter values of the specified dimension.
API Definition
The following example shows how the API is used (sample call).
Sample Code
To work with such an instance, that is, to access its type-specific properties, first check its type property
and then cast it to the corresponding type. See the following section.
Sample Code
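A sketch of this cast pattern, mirroring the comment-widget example later in this documentation (Table_1 and the dimension ID are placeholders):

```
var filters = Table_1.getDataSource().getDimensionFilters("Location_4nm2e04531");
if (filters.length > 0 && filters[0].type === FilterValueType.Single) {
    // cast the generic FilterValue to its concrete type before reading value
    var singleFilter = cast(Type.SingleFilterValue, filters[0]);
    console.log(singleFilter.value);
}
```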
Known Restrictions
You can change the table's model at runtime by using the openSelectModelDialog API in analytic
applications and optimized stories to call the Select Model dialog, or directly via the setModel API and a valid
model ID.
API Definition
Table_1.openSelectModelDialog();
Example
Sample Code
This API enables you to set a new table model directly via modelID:
Sample Code
setModel(modelID:string);
Note
You can open any desired model in the Modeler to find the
correct modelId of the related model in the corresponding URL:
Example
Sample Code
You can use NULL members and Totals members in scripting APIs in analytic applications and optimized
stories.
NULL members often occur in SAP HANA models, where the joining of database tables can produce NULL
values. Many existing APIs didn't handle the NULL member correctly.
Note
If "Show Totals" is enabled for a dimension in the Builder panel of a table, extra data cells showing aggregates
are displayed in the table. Users want to access the numbers in these cells, or react on clicking them. Until now
these cells were only accessible through the ResultSet API, but not via getData.
The Alias enumeration has been enhanced with two new literals: Alias.NullMember and
Alias.TotalsMember. These literals should be used in scripts whenever a NULL or Totals member needs
to be referenced.
The following DataSource APIs have been enhanced to handle or return NULL or Totals members correctly:
• getMembers():
returns Alias.NullMember for the NULL member's ID
• setDimensionFilter():
lets you filter on Alias.NullMember
• setVariableValue():
lets you set Alias.NullMember as the value of a dimension member variable
• getData():
lets you specify Alias.NullMember in the passed selection and, in the table case, also
Alias.TotalsMember to retrieve the value of a totals data cell
• getResultSet(), getDataSelections():
let you specify Alias.NullMember and Alias.TotalsMember in the passed selection
• getResultMember():
lets you specify Alias.NullMember in the passed selection
• getSelections():
returns Alias.NullMember and Alias.TotalsMember if the corresponding cells have been selected
• setUserInput():
lets you specify Alias.TotalsMember in the passed selection, in order to plan a totals cell
Note
Totals cells currently cannot be locked; this means setState() will not work.
You can use the sendNotification API in analytic applications and optimized stories to send notifications to
viewers. Notifications can be accessed in the Notifications panel and, if configured in the script, via email.
With the sendNotification API, you can trigger notifications when the analytic application or optimized story is
running in the backend, for example, when it's scheduled for publication. You can also trigger notifications at
view time, for example, upon clicking a button or on initialization of the application or story.
Note
If you want to send notifications without scheduling, ensure that your own mail server’s been configured.
For more information, refer to Configure Email Server [page 2787].
Code Syntax
Application.sendNotification({
title: string,
content?: string,
receivers?: string[], // default: current user
mode?: ApplicationMode, // default: Present
parameters?: UrlParameter[], // array, no optional single URL parameter
isSendEmail?: boolean, // default: false
isSendMobileNotification?: boolean // default: false
});
Tip
Non-SAP Analytics Cloud users don't have access to SAP Analytics Cloud notifications, but they can
receive email notifications. Therefore, to send them notifications via the API, set isSendEmail to true.
Each notification includes a title and a button to open the analytic application or story. In the API call, you can also specify:
• Contents (in HTML format with limited support, including tags <i>, <b>, <u>, <a> and <br>)
• Recipients
• Application or story mode (Present, Embed or View)
• URL parameters as key-value pairs
• Whether to send an email notification or not
• Whether to send the notification to SAP Analytics Cloud iOS mobile app or not
Notifications can be sent to non-SAP Analytics Cloud users at most three times per scheduling task. To
SAP Analytics Cloud users, there's no limit.
Example
In this example, when the application or story is opened, it will send notifications to Jack if
<quantitySold> is below 50 million. He can receive notifications in both Notifications panel and email,
and open the application or story in presentation mode. The notification title is Quantity sold below 50M,
and content is The quantity sold is below estimation. For more details, please check it out..
Sample Code
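The script described above might look like this. This is a sketch: quantitySold is assumed to be a script variable holding the measure value, and "JACK" is a placeholder user ID:

```
if (quantitySold < 50000000) {
    Application.sendNotification({
        title: "Quantity sold below 50M",
        content: "The quantity sold is below estimation. For more details, please check it out.",
        receivers: ["JACK"], // placeholder user ID
        mode: ApplicationMode.Present,
        isSendEmail: true // also deliver the notification by email
    });
}
```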
Related Information
As an application designer or story developer, you can use APIs to let viewers show or hide all comments,
handle table cell comments and manage filters on comment widgets.
All the existing comment APIs support comment widgets. Data cell comments edited by APIs are synchronized
with comment widgets as well.
The comment-related APIs support SAP BW live data models. To know about comments built on the models,
see Adding Comments to a Data Cell [page 1407] or Adding Comment Widgets to Your Story [page 1120].
You can use the following API to enable or disable comment mode, which means showing or hiding all the
comments:
Code Syntax
You can use the APIs to let viewers get, remove, add, update and like table cell comments.
Code Syntax
Example
For example, you’ve created an input field InputField_1 and would like to let viewers:
1. Select Button_1 to see the comment on a table cell in the input field.
2. Select Button_2 and post the comment in the input field to any selected table cell.
Sample Code
var comments = Table_1.getDataSource().getComments().getAllComments(Table_1.getSelections()[0]);
console.log(comments);
var str = "";
for (var i = 0; i < comments.length; i++) {
    // collect each comment's text (reconstructed loop body)
    str = str + comments[i].text + " ";
}
InputField_1.setValue(str);
Sample Code
You can also use the following APIs to set, get or remove filters on comment widgets:
Code Syntax
Example
Here’s a script example that shows how to make a comment shown in a specific comment widget and apply
the same filter as table to it.
Write the following script, so that when 2021 is selected in the filter Date, the comments in
CommentWidget_1 are displayed:
Sample Code
CommentWidget_1.getCommentingDataSource().setDimensionFilter("Date_703i1904sd", "[Date_703i1904sd].[YHM].[Date_703i1904sd.YEAR].[2021]");
Furthermore, write the following script to copy the filter Location from Table_1 to CommentWidget_1:
Sample Code
var filter = Table_1.getDataSource().getDimensionFilters("Location_4nm2e04531")[0];
if (filter.type === FilterValueType.Single) { // Table_1 has dimension filter "Location=SA1"
    var singleFilter = cast(Type.SingleFilterValue, filter);
    CommentWidget_1.getCommentingDataSource().setDimensionFilter("Location_4nm2e04531", singleFilter.value);
} else if (filter.type === FilterValueType.Multiple) { // Table_1 has dimension filters "Location=SA1" or "Location=SA2"
    var multipleFilter = cast(Type.MultipleFilterValue, filter);
    CommentWidget_1.getCommentingDataSource().setDimensionFilter("Location_4nm2e04531", multipleFilter.values);
}
Related Information
As an application designer or story developer, you can define busy indicators and use related APIs to
temporarily block viewers from doing other actions, for example, when the analytic application or optimized
story is loading or when the script is running.
• Busy indicator that displays automatically when loading is longer than the predefined delay. Once it’s
triggered, the actions in the entire analytic application or optimized story are blocked.
• Busy indicator that’s owned by an analytic application or optimized story, popup or container, such as a
tab strip or panel. It’s defined by APIs and when it’s triggered, only the actions on that specific level are
blocked.
You can define an automatic busy indicator, which can be triggered by loading activities on either application
or story level or widget level. Whenever there’s a loading that exceeds the predefined delay time, the entire
application or story is blocked.
Procedure
1. For analytic applications, go to the Styling panel of Canvas. For optimized stories, select Global Settings
from the Outline panel.
2. Under Loading Indicator Settings, select Enable loading indicator when necessary.
3. Optional: Enter information text that you want to display with the loading indicator icon.
Results
In view time, the loading indicator automatically appears when the loading activity exceeds the time you've
defined, and disappears after it's completed.
You can also use an API to enable an automatic busy indicator.
Code Syntax
The information text and loading indicator delay time follow the settings in the Styling panel of canvas (for
analytic applications) or Global Settings (for optimized stories).
You can use APIs to show or hide busy indicators on different levels so that only the actions on the specific
levels are blocked when such busy indicators are triggered in view time.
• Applications or stories
• Popups
• Containers (including tab strips and panels)
Code Syntax
// Show busy indicator. And if the text parameter is specified, show the text
along with the busy indicator icon.
Application.showBusyIndicator(text?: string) // cover the whole page
Popup_1.showBusyIndicator(text?: string) // cover the pop up only
TabStrip_1.showBusyIndicator(text?: string) // cover the tab strip only
Panel_1.showBusyIndicator(text?: string) // cover the panel only
Example
In this example, you can write scripts to show the busy indicator when a request is sent via a postMessage
event and hide it when a response is received from the outside page:
Sample Code
button.onClick() {
    Application.showBusyIndicator(); // Show application manual indicator
    Application.postMessage(PostMessageReceiver.Parent, message, "http://localhost:8080");
}
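The matching hide call goes into the handler that processes the response from the outside page. The hideBusyIndicator API is the counterpart of showBusyIndicator; the exact event wiring for receiving the response is an assumption here:

```
// in the handler that receives the response message from the outside page
Application.hideBusyIndicator();
```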
As an application designer or story developer, you can use getUserInfo(), getTeamsInfo() and
getRolesInfo() APIs, which return information about the current user in view time.
You can use getUserInfo() API to get information about the user currently using your analytic application or
optimized story:
Sample Code
Application.getUserInfo()
Output Code
{
"id": "666666",
"displayName": "John"
}
In addition, you can use the following APIs respectively if you only want to get the team or role of the current
user.
Application.getTeamsInfo()
The getTeamsInfo() API returns all the teams the current user belongs to, for example:
Output Code
[
{ "name": "DataActionRole_Admin",
"description": "" },
{ "name": "127", "description": "127
Test" },
{ "name": "MultiActionRole_Admin",
"description": "" },
{ "name": "Team_Supervisor", "description":
"" }
]
Sample Code
Application.getRolesInfo()
The getRolesInfo() API returns all the roles of the current user, for example:
Output Code
[
"Admin",
"BI Admin"
]
Note
To use getTeamsInfo() or getRolesInfo() API, viewers need to have the Read permission for the
object type Team or Role.
As an application designer or story developer, you can let viewers execute OData (V4) actions exposed by a
connected on-premise SAP S/4HANA system within an analytic application or optimized story.
Prerequisites
You've created an analytic application or optimized story and added an OData service to it.
3. Select (Edit Scripts) in the quick actions menu of the widget to open the script editor.
You'd like to write a script to trigger execution of the OData action in the source system.
4. In the script editor, do the following:
a. Type the name of the OData service you've added and specified, ODataService_1, followed by a dot (.).
b. Press CTRL + Space , and the script editor assists you with code completion and value help wherever
possible. Select the function executeAction().
c. Place the mouse cursor between the brackets ( () ), press CTRL + Space , and then select action
CancelMyFlights from the value help of executeAction(), followed by , and {}.
d. Place the mouse cursor between the brackets ({}), press CTRL + Space , and select the parameter
DateFrom from value help, followed by : .
e. Type the date 2019-01-01, followed by ,.
f. Press CTRL + Space , and select DateTo from value help, followed by :.
g. Type in the date 2019-12-31, and finish with ;.
You've created a script to execute an OData action. This action has a simple syntax with two parameters
and looks like this:
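Assembled from steps a through g above, the resulting script should look like this:

```
ODataService_1.executeAction("CancelMyFlights", {DateFrom: "2019-01-01", DateTo: "2019-12-31"});
```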
5. Insert another button, rename it to Book Flight in the Styling panel, and open the script editor.
The BookFlight Action is a bound action, which is more complex than the first one.
6. Type the following script in the script editor by using code completion and value help:
ODataService_1.executeAction("Flight/Book", {Flight: {Airline: "UA", Flightconnection: "0941", Flightdate: "2019-01-05"}, NumberOfSeats: 1});
Results
You've configured a simple OData action. Now you can run your application or story, and book and cancel
flights for the selected values.
You can enhance your application or story and start using other script methods to fill the parameter values
dynamically with local or global variables.
Finally, you can make the response from the back end system visible in the application or story via a text field.
To find out more details, see Scripting Example 2: Use OData Actions in Analytics Designer or Optimized Story
Experience [page 1777].
As an application designer or story developer, you can display the back end system's response to an OData
action run in your analytic application or optimized story via text widgets and scripts.
Prerequisites
You've created an analytic application or optimized story with OData actions and two buttons to book and
cancel flights for selected values.
Procedure
3. For the Book Flight button, rewrite with the following script:
5. Run the application or story, and book and cancel a flight to see the error messages.
As an application designer or story developer, learn how to let viewers switch between a chart and a table by a
toggle.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• For the table, if the pagination option (Auto-Size and Page Table Vertically) was selected, you've deselected
it.
Note
The visibility setting that is used for the script is also used for table pagination. Using table pagination
and the visibility script in the same story tab may not provide the results that you expect to see.
• To follow this sample use case, use the model BestRun_Advanced as data source.
To switch between the chart and table, you add one image that represents a chart and another that
represents a table. Then you write scripts for each of the images so that when viewers click on the chart image,
the chart appears and the table is hidden, and vice versa.
The default view time settings are to have the table visible and the chart invisible.
Procedure
When viewers click on the chart image, the table image appears in the same place.
4. To enable the switch between table and chart, you need to edit the names and then the scripts of both
images. Change the name of the table image to Switch_to_Table_display, and the name of the
chart image to Switch_to_Chart_display.
5. To edit the chart image's script, from the chart image's context menu, select Edit Scripts...
onClick .
The script editor of this image's onClick event opens. Here, write a script that makes it possible to switch
from the chart to the table.
6. Enter this script:
Sample Code
Chart.setVisible(true);
Table.setVisible(false);
Switch_to_Table_display.setVisible(true);
Switch_to_Chart_display.setVisible(false);
This script makes the chart visible and the table invisible, and it makes the table image visible and the chart
image invisible. You've configured the application or story in a way that when the chart is visible, the table
image is visible indicating that viewers can switch back to the table.
7. Similarly, from the table image's context menu, select Edit Scripts... onClick .
8. Enter this script in the script editor:
Sample Code
Chart.setVisible(false);
Table.setVisible(true);
Switch_to_Table_display.setVisible(false);
Switch_to_Chart_display.setVisible(true);
This script makes the table visible and the chart invisible, and it makes the chart image visible and the
table image invisible. You've configured the application or story in a way that, when the table is visible, the
chart image is visible, indicating that viewers can switch back to the chart.
9. Save the application or story, and open it in view time.
Results
When you click on the table image, the table appears instead of the chart, and the image is changed to a
chart icon, back to the default view.
As an application designer or story developer, learn how to let viewers filter on a table or a chart by selecting a
single measure from a dropdown widget.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• To follow all functions of this sample use case, you've completed the exercise Best Practice: Switch
Between Chart and Table [page 1778] and can now enhance your analytic application or optimized story.
Context
The dropdown widget lists all the measures of your data source, which acts as measure filter. When viewers
select a measure from it, the filter is applied to both table and chart.
1. Insert a dropdown.
2. Change the name of the dropdown to Dropdown_Measures.
3. Add a label to the dropdown to indicate to viewers that they can select measures through the dropdown.
a. Insert a text widget
b. Place the text widget to the left of the dropdown.
c. Enter a text, Selected Measure, for example.
d. Optional: Resize the text widget.
4. To let viewers select a value from the dropdown, define a script variable that acts as a global variable, which
can be accessed from anywhere in your application or story.
a. From Scripting in Outline, add a script variable.
b. In the Script Variable panel, enter CurrentMeasureFilterSelection as name, leave type as string,
and enter [Account_BestRunJ_sold].[parentId].&[Gross_MarginActual] as default value.
c. To close the Script Variable panel, select Done.
5. To define what happens when users select a value from the dropdown, create a script object. In this object,
write a function that sets the measure filter according to what the user has chosen from the dropdown.
a. From Scripting in Outline, add a script object.
b. To rename the folder, hover over ScriptObject_1, and select (More) Rename . Enter Utils.
c. To rename the function, hover over function1, and select (More) Rename . Enter
setMeasureFilter.
d. Select the function setMeasureFilter, and when the Script Function panel opens, select (Add
Argument).
e. Enter selectedId as name of the argument, leave type as string, and select Done.
f. To write the script for the function, hover over the function setMeasureFilter, select (Edit Scripts).
Enter the following script in the script editor:
Sample Code
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
if (CurrentMeasureFilterSelection !== "") {
    Chart.removeMeasure(CurrentMeasureFilterSelection, Feed.ValueAxis);
}
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", selectedId);
Chart.addMeasure(selectedId, Feed.ValueAxis);
// remember the captured value for the next call (reconstructed line)
CurrentMeasureFilterSelection = selectedId;
With this script you define what happens to the table and the chart when users select a measure
from the dropdown. The existing filters applied to the table and chart are removed and replaced with
captured values.
6. To define how to pass the captured value to the setMeasureFilter function, write script for the onSelect
event of the dropdown widget. Hover over the dropdown widget in Outline, select (Edit Scripts), and
enter the following script:
Utils.setMeasureFilter(Dropdown_Measures.getSelectedKey());
This script gets the selected value of the dropdown and passes it to the setMeasureFilter function as
parameter.
7. To define what happens when the analytic application or optimized story is first run, write script for the
onInitialization event of the application or story page. In Outline hover over Canvas (for analytic
applications) or the relevant page (for optimized stories), select (Edit Scripts) onInitialization ,
and enter the following script:
Sample Code
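A possible sketch of this onInitialization script. The account dimension ID is taken from the filter scripts above; the member objects are assumed to expose id and description, as shown elsewhere in this documentation:

```
// load all available measures of the table's data source
var measures = Table.getDataSource().getMembers("Account_BestRunJ_sold");
for (var i = 0; i < measures.length; i++) {
    // fill the dropdown with all available measures
    Dropdown_Measures.addItem(measures[i].id, measures[i].description);
}
// select the first measure and apply it as the initial filter
Dropdown_Measures.setSelectedKey(measures[0].id);
Utils.setMeasureFilter(measures[0].id);
```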
With this script you can make sure that on initialization, all the available measures of the table are loaded
into the dropdown. The first measure in the list is the selected value and filter.
8. Save the application or story, and open it in view time.
Results
Your application or story looks like this when opened in view time:
The measure selected from the dropdown filters on the chart as well. When you select Discount, the chart looks
like this:
As an application designer or story developer, learn how to let viewers filter on a table or a chart by selecting
multiple measures from a checkbox group.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• To follow all functions of this sample use case, you've completed the exercise Best Practice: Switch
Between Chart and Table [page 1778] and can now enhance your analytic application or optimized story.
Context
Compared to the dropdown widget, the checkbox group widget allows for multiple selections. In this use case,
you add a checkbox group widget, which lists all the measures of your data source and acts as measure filters.
On top of that, you add three buttons:
• Button set selected, which filters on the table and chart according to the selected measures from the
checkbox group
• Button remove all, which removes all the selected measures and saves from deselecting them one by one in
the checkbox group
• Button set all, which applies all the available measures to the table and chart and saves from selecting
them one by one in the checkbox group
Procedure
1. Add a checkbox group, and place it on the left side of your table.
2. Change the name of the checkbox group to CheckboxGroup_Measures.
3. In the Builder panel of the checkbox group, select Value 1 and then (Delete) to remove it. Remove Value
2 in the same way.
4. Add a label to the checkbox group to indicate to viewers that they can select measures through the
checkboxes.
a. Insert a text widget.
b. Place the text widget on top of the checkbox group.
c. Enter a text, Measures, for example.
d. Optional: Resize the text widget.
5. For easy access to the checkbox group's measures, add the three buttons mentioned earlier:
a. Insert three buttons and place them beneath the label.
• The first script variable AllMeasures is set as an array and holds all the measures that viewers can
select in the checkbox group.
• The second script variable CurrentMeasureFilterSelection is set as a string and holds the
measures that viewers have selected in the checkbox group.
b. To rename the folder, hover over ScriptObject_1, and select More Rename . Enter Utils.
c. To rename the function, hover over function1, and select More Rename . Enter
setMeasureFilter.
d. Select the function setMeasureFilter, and when the Script Function panel opens, choose (Add
Argument) in the Arguments section.
e. Enter selectedIds as name of the argument, leave type as string, switch on Set As Array. Choose
Done twice to close the panels.
f. To write the script for the function setMeasureFilter, hover over it in Outline, and choose (Edit
Scripts). Enter the following script in the script editor:
Sample Code
// remove Measures
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
if (CurrentMeasureFilterSelection[0] !== "") {
    for (var i = 0; i < CurrentMeasureFilterSelection.length; i++) {
        Chart.removeMeasure(CurrentMeasureFilterSelection[i], Feed.ValueAxis);
    }
}
// add Measures
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", selectedIds);
for (i = 0; i < selectedIds.length; i++) {
    Chart.addMeasure(selectedIds[i], Feed.ValueAxis);
}
// update the global variable (reconstructed line)
CurrentMeasureFilterSelection = selectedIds;
With this script you define what happens to the table and the chart when viewers select filter values in
the checkbox group:
a. Hover over the set selected button in Outline, select (Edit Scripts), and enter the following script in the
script editor:
Sample Code
Utils.setMeasureFilter(CheckboxGroup_Measures.getSelectedKeys());
The script calls the Utils.setMeasureFilter function and passes to it the selected measures of
the checkbox group.
b. Hover over the remove all button in Outline, select (Edit Scripts), and enter the following script in the
script editor:
Sample Code
CheckboxGroup_Measures.setSelectedKeys([""]);
Utils.setMeasureFilter([""]);
The script removes all the selected measures from the checkbox group and passes an empty array
to Utils.setMeasureFilter, which updates your table and chart as well as your global variable
CurrentMeasureFilterSelection.
c. Hover over the set all button in Outline, select (Edit Scripts), and enter the following script in the script editor:
Sample Code
CheckboxGroup_Measures.setSelectedKeys(AllMeasures);
Utils.setMeasureFilter(AllMeasures);
The script sets the selected keys of the checkbox group to the AllMeasures script variable you
defined before and passes the same variable to the Utils.setMeasureFilter function.
9. Define what happens when the application or story is first run.
a. In Outline hover over Canvas (for analytic applications) or the relevant page (for optimized stories),
Sample Code
var measures = Table.getDataSource().getMembers("Account_BestRunJ_sold");
var selectedKeys = [];
for (var i = 0; i < measures.length; i++) {
    CheckboxGroup_Measures.addItem(measures[i].id, measures[i].description);
    //add the measure to the selectedKeys
    selectedKeys.push(measures[i].id);
}
CheckboxGroup_Measures.setSelectedKeys(selectedKeys);
With this script, you make sure that on initialization, all the available measures of the table's data
source are loaded. You define a selected keys array of type string and, using a loop, you add the
measures to your checkbox group and the selected keys array. You also call on the setSelectedKeys
function of the checkbox group and set its selected keys to your array. Finally, you set the script
variable AllMeasures and the measure filter to the selected keys.
10. Save the application or story, and open it in view time.
Results
If you click on the remove all button, all measures are deselected, and there's no data on the table.
Let's select a few measures, Gross Margin Plan, Quantity Sold, Original Sales Price abs Dev, and Discount, in the
checkbox group and click on the set selected button. The table is updated accordingly.
As an application designer or story developer, learn how to make a filter line apply to both your chart and table,
which viewers can switch between.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
Procedure
a. Hover over the table in Outline, and select (Edit Scripts) onResultChanged .
b. Enter this script in the script editor:
Sample Code
console.log('OnResultChanged');
Chart.getDataSource().copyDimensionFilterFrom(Table.getDataSource(), "Location_4nm2e04531");
Chart.getDataSource().copyDimensionFilterFrom(Table.getDataSource(), "Product_3e315003an");
Chart.getDataSource().copyDimensionFilterFrom(Table.getDataSource(), "Sales_Manager__5w3m5d06b5");
Chart.getDataSource().copyDimensionFilterFrom(Table.getDataSource(), "Store_3z2g5g06m4.Store_GEOID");
The script copies each of the dimension filters added in the filter line.
5. Save the application or story, and open it in view time.
When you select the filter line, the four dimensions you've configured appear.
When you select one of the dimensions, in this example, Location, a window pops up, which lets you choose the
members to be included in the table or chart. Let's select San Francisco, Las Vegas and Portland.
As an application designer or story developer, learn how to let viewers filter on dimensions and then on
hierarchies (such as flat presentation and category) to display the data.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• To follow all functions of this sample use case, you've completed the exercise Best Practice: Switch
Between Chart and Table [page 1778] and can now enhance your analytic application or optimized story.
Context
You'd like to add two dropdowns, one for filtering on dimension and the other for filtering on hierarchy. The
dropdown for the hierarchy filter changes according to the chosen dimension. There's always one consistent
option for hierarchy, which is Flat Presentation, and there might be only this or more options depending on the
chosen dimension. Viewers can add different filters by selecting from the dropdown lists.
Procedure
8. In the Outline panel, hover over Dropdown_Hierarchies, and select . In the script editor, write the
following script for the onSelect event of the dropdown:
Sample Code
This script gets the selected value of the dropdown list and accordingly sets the hierarchy of the table
and the chart when the script variable CurrentDimension is referenced, so that the hierarchy dropdown
displays the filtered options.
9. In the Outline panel, hover over Dropdown_Dimensions, and select . In the script editor, write the
following script for the onSelect event of the dropdown:
Sample Code
This script gets the selected option from the dimension dropdown and saves it in a variable sel.
• All the dimensions are removed from the table and chart and replaced with the selected dimension.
• All the hierarchies are removed from the table and chart. The hierarchies available for the selected
dimension are retrieved from the data source and loaded into the hierarchy dropdown list.
• Flat Presentation is set as the default hierarchy.
10. In the Outline panel, hover over Canvas (for analytic applications) or the relevant page (for optimized
Sample Code
With this script, on initialization all the available hierarchies of the dimensions are loaded, and Flat
Presentation is set as the default option of the hierarchy dropdown. The script is the same as a part of
what happens when a dimension is chosen.
11. Save the application or story, and open it in view time.
If you keep the dimension Location and change the hierarchy to States, the table displays location in state
hierarchy instead of flat presentation.
If you change the dimension to Product, the hierarchy dropdown list changes accordingly. You can select
Category, and products are displayed in categories.
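The initialization behavior for the hierarchy dropdown can be illustrated with a mock. The hierarchy list and the dropdown object here are assumptions for this sketch; only the '__FLAT__' hierarchy id and the addItem/setSelectedKey call shapes follow the SAC scripting API.

```javascript
// Hypothetical hierarchies of the current dimension, as getHierarchies()
// would return them in SAC.
var hierarchies = [
  { id: "__FLAT__", description: "Flat Presentation" },
  { id: "States", description: "States" }
];

// Mocked dropdown widget.
var Dropdown_Hierarchies = {
  items: [],
  selectedKey: "",
  addItem: function (key, text) { this.items.push({ key: key, text: text }); },
  removeAllItems: function () { this.items = []; },
  setSelectedKey: function (key) { this.selectedKey = key; }
};

// On initialization: load all hierarchies of the current dimension and
// default to flat presentation.
Dropdown_Hierarchies.removeAllItems();
for (var i = 0; i < hierarchies.length; i++) {
  Dropdown_Hierarchies.addItem(hierarchies[i].id, hierarchies[i].description);
}
Dropdown_Hierarchies.setSelectedKey("__FLAT__");
```

The same loop runs again whenever a new dimension is chosen, so the dropdown always reflects that dimension's hierarchies.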
As an application designer or story developer, learn how to let viewers control which measures and dimensions
are displayed on the table.
Prerequisites
Context
To let viewers add and remove dimensions and measures on a table, you'd like to add the following five
checkbox groups:
• The first one displays all the available measures for them to select for the table.
• The second one displays the available dimensions for them to select for the columns.
• The third one displays the available dimensions for them to select for the rows.
• The fourth one displays the dimensions for them to add to the second or third checkbox group.
• The fifth one is invisible at view time and only needed to sort dimensions.
On top of that, you add seven buttons for easy access to the checkbox groups' measures.
1. Create the first checkbox group, which displays all the available measures.
a. Add a checkbox group widget, and place it to the left of your table. Leave some space above so that you
can add labels and buttons later on.
b. Change the ID of the widget to CheckboxGroup_Measures.
c. In the Builder panel of the widget, remove the initial values Value 1 and Value 2 from the checkbox
group value list.
2. Add a label to the checkbox group.
a. Add a text widget, and place it above the checkbox group.
b. Enter Measures as the text.
c. Change the ID of the widget to CheckboxGroup_Measures_Label.
3. Similarly, create a second checkbox group with the ID CheckboxGroup_Columns, which displays the
available dimensions for columns. Add a label with CheckboxGroup_Columns_Label as the ID and
Columns as the text.
4. Create a third checkbox group with the ID CheckboxGroup_Rows, which displays the available dimensions
for rows. Add a label with CheckboxGroup_Rows_Label as the ID and Rows as the text.
5. Create a fourth checkbox group with the ID CheckboxGroup_Free, which displays the dimensions that
viewers can add to the columns or rows checkbox groups. Add a label with CheckboxGroup_Free_Label as
the ID and Free as the text.
6. Create the last checkbox group:
a. Add a checkbox group, and enter CheckboxGroup_AllDimensions as its ID.
b. Since the last checkbox group is invisible, it doesn't need a label and can be placed anywhere, for
example, on the right.
c. To make the checkbox group invisible, under Actions in its Styling panel, deselect Show this item at
view time (for analytic applications), or select Hidden under View Time Visibility (for optimized stories).
7. For easy access to the checkbox groups' measures, add the seven buttons:
• For the first button, enter Button_setMeasureFilter as the ID and set selected as the text.
Place it between Measures and the first checkbox group.
This button sets the selected measures from the measures checkbox group as the measures on the
table.
• For the second button, enter Button_removeAllMeasures as the ID and remove all as the text.
Place it to the right of the first button.
This button deselects all the measures from the measures checkbox group and removes all from the
table.
• For the third button, enter Button_setAllMeasures as the ID and set all as the text. Place it to
the right of the second button.
This button sets all the available measures as the measures on the table.
• For the fourth button, enter Button_ColRemove as the ID and Remove as the text. Place it next to
Columns.
This button removes the selected dimensions from the columns checkbox group and the dimension
columns from the table.
• For the fifth button, enter Button_RowRemove as the ID and Remove as the text. Place it next to Rows.
This button removes the selected dimensions from the rows checkbox group and the dimension rows
from the table.
• The first script variable, called AllDimensions, is of type string, set as an array, and holds all the
dimensions in the dataset.
• The second script variable, called AllMeasures, is of type string, set as an array, and holds all the
measures that viewers can select from the checkbox group.
• The third script variable, called CurrentDimensionColumn, is of type string, set as an array, and holds
the selected dimensions to add to the columns.
• The fourth script variable, called CurrentDimensionRows, is of type string, set as an array, and holds the
selected dimensions to add to the rows.
• The fifth script variable, called CurrentMeasureFilterSelection, is of type string, set as an array, and
holds the selected measures from the measures checkbox group.
9. To define what happens when viewers make selections in one of the checkbox groups, create a script
object, and write functions in it.
a. In the Scripting section in Outline, add a script object.
Sample Code
CheckboxGroup_Columns.removeAllItems();
CheckboxGroup_Rows.removeAllItems();
CheckboxGroup_Free.removeAllItems();
CurrentDimensionColumn = ArrayUtils.create(Type.string);
CurrentDimensionRows = ArrayUtils.create(Type.string);
console.log(["CurrentDimensionColumn should be empty", CurrentDimensionColumn.slice()]);
console.log(["CurrentDimensionRows should be empty", CurrentDimensionRows.slice()]);
// Dimension in Columns
var dimCol = Table.getDimensionsOnColumns();
if (dimCol.length > 0) {
for (var i=0;i<dimCol.length; i++){
CurrentDimensionColumn.push(dimCol[i]);
console.log(["CurrentDimensionColumn ", dimCol[i]]);
}
}
// Dimension in Rows
var dimRows = Table.getDimensionsOnRows();
if (dimRows.length > 0) {
for (i=0;i<dimRows.length; i++){
CurrentDimensionRows.push(dimRows[i]);
CheckboxGroup_Free.setSelectedKeys([CurrentDimensionRows[i]]);
var dimdesc = CheckboxGroup_Free.getSelectedTexts();
CheckboxGroup_Rows.addItem(CurrentDimensionRows[i],dimdesc[0]);
CheckboxGroup_Free.removeItem(CurrentDimensionRows[i]);
}
}
}
if (CurrentDimensionColumn.length > 0) {
for (i=0;i<CurrentDimensionColumn.length;i++){
if (CurrentDimensionColumn[i] !== "") {
CheckboxGroup_Free.setSelectedKeys([CurrentDimensionColumn[i]]);
dimdesc = CheckboxGroup_Free.getSelectedTexts();
CheckboxGroup_Columns.addItem(CurrentDimensionColumn[i],dimdesc[0]);
CheckboxGroup_Free.removeItem(CurrentDimensionColumn[i]);
}
}
}
Sample Code
// remove Measures
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
// add Measures
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", selectedIds);
// save the current selection into global variable
CurrentMeasureFilterSelection = selectedIds;
10. Write scripts to define what happens when viewers click on the buttons you just created.
a. For setMeasureFilter, write the following script:
Sample Code
Utils.setMeasureFilter(CheckboxGroup_Measures.getSelectedKeys());
Sample Code
CheckboxGroup_Measures.setSelectedKeys([""]);
Utils.setMeasureFilter([""]);
This onClick function script removes all the selected measures from the checkbox group and passes
an empty array to the Utils.setMeasureFilter, which updates your table.
c. For setAllMeasures , write the following script:
Sample Code
CheckboxGroup_Measures.setSelectedKeys(AllMeasures);
Utils.setMeasureFilter(AllMeasures);
This onClick function script sets the selected keys of the checkbox group to the AllMeasures script
variable you defined before and passes the same variable to the Utils.setMeasureFilter function.
d. For ColRemove, write the following script:
Sample Code
This onClick function script gets the selected keys of the columns checkbox group and removes
these dimensions from the table. It then calls the setDimensionCheckboxes function to set the
checkboxes according to the new selections.
e. For RowRemove, write the following script:
Sample Code
This onClick function script gets the selected keys of the rows checkbox group and then removes
these dimensions from the table. It calls the setDimensionCheckboxes function to reset the
checkboxes again according to the new selections.
f. For AddtoCol, write the following script:
Sample Code
This onClick function script gets the selected keys of the free checkbox and adds the dimensions
to the columns of the table. The script then calls the setDimensionCheckboxes function to set the
checkboxes according to the new selection.
g. For AddtoRow, write the following script:
Sample Code
This onClick function script gets the selected keys of the free checkbox and adds the dimensions
to the rows of the table. It also calls the setDimensionCheckboxes function to set the checkboxes
according to the new selection.
11. Define what happens when the application or story is first run by creating the onInitialization
function.
a. In the Outline panel, hover over Canvas (for analytic applications) or the relevant page (for optimized
Sample Code
// Measures
// get all measures from the table data source
var measures = Table.getDataSource().getMeasures();
// define array for the selected Keys
var selectedKeys = ArrayUtils.create(Type.string);
if (measures.length > 0) {
for (var i=0;i<measures.length; i++){
// add the Measure to checkbox group
CheckboxGroup_Measures.addItem(measures[i].id,measures[i].description);
//add the measure to the selected Keys
selectedKeys.push(measures[i].id);
}
}
CheckboxGroup_Measures.setSelectedKeys(selectedKeys);
console.log(["selectedKey ", selectedKeys]);
AllMeasures = selectedKeys;
// define array for the selected Keys
var selectedDims = ArrayUtils.create(Type.string);
var dims = Table.getDataSource().getDimensions();
if (dims.length > 0) {
for (i=0;i<dims.length; i++){
CheckboxGroup_AllDimensions.addItem(dims[i].id,dims[i].description);
selectedDims.push(dims[i].id);
}
With this script, you make sure that on initialization, all the available measures of the table's data
source are loaded.
You define a selected keys array of type string and, using a loop, you add the measures to your
measures checkbox group and the selected keys array. You set the selected keys of the checkbox
group to the selectedKeys variable and set your script variable AllMeasures to selectedKeys
since it still holds all the measures of your dataset.
Afterwards, you define another string array and put all the dimensions of the data source
in it, and add these dimensions as items of the checkbox group of all dimensions
CheckboxGroup_AllDimensions.
Next, you set the script variable AllDimensions to the string array selectedDims that you created
to store the dimensions in.
Finally, you call the functions of setMeasureFilter to set the selected keys to the array
selectedKeys and to call the setDimensionCheckboxes function to set the dimension checkboxes
to its initial state.
12. Save the application or story, and open it in view time.
Results
If you select Time in the Free checkbox and select add to Column, this dimension gets added to the table
columns so that you can get a more detailed view.
You can get back to the starting condition by selecting Remove for the columns and rows checkbox groups.
You can't remove all the dimensions from the columns because at least one dimension is needed.
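The onInitialization loop described in this exercise can be sketched in plain JavaScript with mocked widgets. The measure list below is hypothetical sample data; in the real script it comes from Table.getDataSource().getMeasures(), and CheckboxGroup_Measures is the SAC widget rather than this stand-in object.

```javascript
// Hypothetical measures of the table's data source.
var measures = [
  { id: "Gross_Margin", description: "Gross Margin" },
  { id: "Quantity_Sold", description: "Quantity Sold" },
  { id: "Discount", description: "Discount" }
];

// Mocked checkbox group widget.
var CheckboxGroup_Measures = {
  items: [],
  selectedKeys: [],
  addItem: function (key, text) { this.items.push({ key: key, text: text }); },
  setSelectedKeys: function (keys) { this.selectedKeys = keys; }
};

// Build the selected-keys array while filling the checkbox group.
var selectedKeys = [];
for (var i = 0; i < measures.length; i++) {
  CheckboxGroup_Measures.addItem(measures[i].id, measures[i].description);
  selectedKeys.push(measures[i].id);
}
CheckboxGroup_Measures.setSelectedKeys(selectedKeys);

// The script variable keeps the full list for the "set all" button.
var AllMeasures = selectedKeys;
```

Because AllMeasures still holds every measure id, the "set all" button can later restore the initial state by passing it back to the filter function.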
As an application designer or story developer, learn how to let viewers filter a table or a chart using a popup
window.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• To follow all functions of this sample use case, you've completed the exercise Best Practice: Switch
Between Chart and Table [page 1778] and can now enhance your analytic application or optimized story.
Context
Through a dropdown list, viewers can filter the table and chart according to certain measures of the dataset,
including Gross Margin, Discount, Quantity Sold and Original Sales Price.
In the popup widget, they can then switch between chart and table through a radio button and control the
measures Actual, Plan, Absolute and % of Deviation through a checkbox group.
Procedure
a. In the Outline panel, (for analytic application) choose next to Popups, or (for optimized stories)
d. In the Builder panel, under Checkbox Group Value, choose twice to create Value 3 and Value 4.
e. For Value 1, enter Actual as ID and text respectively. For Value 2, enter Plan as ID and text
respectively. For Value 3, enter _Abs as ID and Absolute as dedicated text. For Value 4, enter
_Percent as ID and % Deviation as dedicated text.
f. Set all four values as Default.
Sample Code
Popup_Settings.open();
This script variable holds the concatenated filter of the dropdown list on the page and the checkbox
group in the popup window.
d. Similarly, create the second script variable. Enter CurrentMeasureGroup as name, and instead of
switching on Set As Array, enter Gross_Margin under Default Value.
This script variable holds the current measure filter from the dropdown list.
e. For the third script variable, name it as CurrentMeasureSelection.
This script variable holds the selected measures from the checkbox group in the popup window.
9. To define what happens when a filter is selected, create a script object. In this object, write a function that
sets the measure filter according to what viewers have chosen from the checkbox group.
a. In Scripting of Outline, add a script object.
b. Rename ScriptObject_1 to Utils.
c. Rename function1 to setMeasureFilter.
d. Select the function setMeasureFilter, and when the Script Function panel opens, choose (Add
Argument).
e. Enter selectedId as name of the argument, leave type as string, and choose Done twice.
f. To write the script for the function, hover over setMeasureFilter in the Outline panel, and choose .
Sample Code
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
if (CurrentMeasureGroup !== "") {
Chart.removeMeasure(CurrentMeasureGroup, Feed.ValueAxis);
}
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", selectedId);
Chart.addMeasure(selectedId, Feed.ValueAxis);
This script removes the set measure filters from table and chart and instead inserts the selected
measure sent to the function.
10. In the Outline panel, hover over Dropdown_MeasureGroup, and select . Enter the following script in the
script editor:
Sample Code
// help variables
var Filter_Pattern_1 = "[Account_BestRunJ_sold].[parentId].&[";
var Filter_Pattern_2 = "]";
var Filter_Area = ArrayUtils.create(Type.string);
// remove the "old" filter and set the new filter selection
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", Filter_Area);
First, this script shows which value was selected and removes the measures of the measure groups. It then
saves the current selection in the script variable CurrentMeasureGroup. The script filters on all the inputs
given by showing the selected measures in the checkbox in the popup.
After getting these values, all old filters get removed so the new ones can be applied. To get a valid filter, the
selected measures get concatenated to a filter statement.
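The concatenation step can be shown on its own, since it is pure string handling. The pattern strings are taken from the sample code above; the selected keys are hypothetical examples of what the checkbox group might return.

```javascript
// Pattern pieces from the sample code: each selected key is wrapped into a
// full member id before being passed to setDimensionFilter.
var Filter_Pattern_1 = "[Account_BestRunJ_sold].[parentId].&[";
var Filter_Pattern_2 = "]";

// Hypothetical selection from the checkbox group.
var selectedKeys = ["Gross_Margin", "Discount"];

var Filter_Area = [];
for (var i = 0; i < selectedKeys.length; i++) {
  Filter_Area.push(Filter_Pattern_1 + selectedKeys[i] + Filter_Pattern_2);
}

// → ["[Account_BestRunJ_sold].[parentId].&[Gross_Margin]",
//    "[Account_BestRunJ_sold].[parentId].&[Discount]"]
console.log(Filter_Area);
```

The resulting Filter_Area array is what the script hands to setDimensionFilter as the new filter selection.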
11. To write the script for the buttons OK and Cancel, in Outline hover over Popup_Settings, and choose .
Enter the following script:
Sample Code
// help variables
var Filter_Pattern_1 = "[Account_BestRunJ_sold].[parentId].&[";
var Filter_Pattern_2 = "]";
var Filter_Area = ArrayUtils.create(Type.string);
// remove the "old" filter and set the new filter selection
Table.getDataSource().removeDimensionFilter("Account_BestRunJ_sold");
Table.getDataSource().setDimensionFilter("Account_BestRunJ_sold", Filter_Area);
// save the current measure filter selection into a global variable
// Note --> this global variable needs to be set with the default
// values on the onInitialization event from the Main Canvas
CurrentMeasureFilterSelectionPopup = Filter_Area;
CurrentMeasureSelection = Selected_Measures;
// write the current measure filter selection to the browser console
console.log(["Measure Selection: ", CurrentMeasureSelection]);
This script starts off with an if statement that differentiates the buttons according to their IDs.
It then gets the selections from the checkbox group in the popup window and removes the measures
currently being used as filters for the chart.
To get a valid filter, the selected measures are concatenated to a filter statement that is saved in the script
variable CurrentMeasureFilterSelectionPopup and the selected keys of the checkbox group in the script
variable CurrentMeasureSelection.
Afterwards, the script gets the selected key of the radio button group in the popup window. If Show Table is
selected, the table is set to visible and the chart to invisible, and vice versa if Show Chart is selected.
Lastly, the popup window closes if you click on either of the buttons.
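The button handling just described can be sketched as follows. The widgets and the button and radio-button ids here are mocked assumptions; the sketch only shows the branching: OK applies the visibility switch from the radio selection, and either button closes the popup.

```javascript
// Mocked table and chart widgets stacked on top of each other.
var Table = { visible: true, setVisible: function (v) { this.visible = v; } };
var Chart = { visible: false, setVisible: function (v) { this.visible = v; } };
var popupOpen = true;

// Hypothetical onClick handler of the popup's buttons.
function onPopupClick(buttonId, selectedRadioKey) {
  if (buttonId === "OK") {
    // Apply: show either the table or the chart, hide the other.
    var showTable = (selectedRadioKey === "Show_Table");
    Table.setVisible(showTable);
    Chart.setVisible(!showTable);
  }
  // Both OK and Cancel close the popup.
  popupOpen = false;
}

// Viewer picks "Show Chart" and confirms.
onPopupClick("OK", "Show_Chart");
```

A Cancel click would take the same path but skip the if branch, leaving the visibility untouched.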
12. Save the application or story, and open it in view time.
Results
When you run the analytic application or optimized story, it looks like this:
When you click on the settings icon, the popup window appears.
If you switch back to the table via popup window, the previous filters and settings remain.
As an application designer or story developer, learn how to create a popup window for viewers that contains
extra information about the selected elements.
Prerequisites
• You've already added a table and a chart widget and placed them on top of each other.
• To follow all functions of this sample use case, you've completed the exercise Best Practice: Switch
Between Chart and Table [page 1778] and can now enhance your analytic application or optimized story.
Viewers can open a popup window, which displays information about their selections.
In the table, they can select a measure cell, a dimension cell or a data cell. In the chart, they can select a
dimension cell and a measure or dimension chart bar, for example, Gross Margin Plan for Lemonade.
• The first lets them choose which dimension (Location, Product, Store, or Sales Manager) to filter the
table or chart on.
• The second displays the available hierarchies that can be used to change how the data is displayed.
Note
In this example, only single selection is supported for the table and chart.
Procedure
a. In the Outline panel, (for analytic application) choose next to Popups, or (for optimized stories)
• The first script variable, called CurrentDimension, holds the current selection from the dimensions
dropdown list.
• The second script variable, called CurrentMeasures, is set as array and holds the selected measures.
• The third script variable, called CurrentDetailsMeasures, is set as an array and holds the data
about the selections to display the data in the popup window.
a. In the Scripting section of Outline panel, add a script variable.
b. In the Script Variable panel, enter CurrentDimension as name and leave type as string.
c. To close the Script Variable panel, choose Done.
d. Add a second script variable, with CurrentMeasures as name, string as type and Set As Array
switched on.
e. Add a third script variable, with CurrentDetailsMeasures as name, string as type and Set As Array
switched on.
f. To close the Script Variable panel, choose Done.
7. Hover over Dropdown_Dimensions in the Outline panel, choose (Edit Scripts), and enter the following
script in the script editor:
Sample Code
Table.addDimensionToRows(sel);
//Chart
Chart.removeDimension(CurrentDimension,Feed.CategoryAxis);
Chart.addDimension(sel, Feed.CategoryAxis);
//Details_Chart remove dimension filter
Details_Chart.getDataSource().removeDimensionFilter(CurrentDimension);
// write filter information into the browser console
console.log(['CurrentDimension: ',CurrentDimension]);
console.log(['Selection: ', sel]);
// save the current selection (dimension) into a global variable
CurrentDimension = sel;
// get hierarchies from the current dimension
var hierarchies = Table.getDataSource().getHierarchies(CurrentDimension);
var flag = true;
// remove all current items from the Dropdown_Hierarchies
Dropdown_Hierarchies.removeAllItems();
// loop
for (var i = 0; i < hierarchies.length; i++) {
Dropdown_Hierarchies.addItem(hierarchies[i].id,hierarchies[i].description);
if (flag === true) {
var hierarchy = hierarchies[i].id;
flag = false;
}
}
}
// write hierarchy information to browser console
console.log(['Hierarchy: ', hierarchy]);
console.log(['Current Dimension: ',CurrentDimension]);
// set Flat Hierarchy as Default
Dropdown_Hierarchies.setSelectedKey('__FLAT__');
// Table
Table.getDataSource().setHierarchy(CurrentDimension,'__FLAT__');
// Chart
Chart.getDataSource().setHierarchy(CurrentDimension,'__FLAT__');
// Details_Chart
Details_Chart.getDataSource().setHierarchy(CurrentDimension, '__FLAT__');
This onSelect function first gets the selected element of the list. It then replaces any already set
dimensions in the table and chart with the newly selected dimension. This dimension is also added to the details
chart in your popup window. It writes the filter information in the browser console and saves the selection
in the script variable CurrentDimension.
To set the available hierarchies for the selected dimension, the script loops through the available
hierarchies of the data source in relation to the current dimension to push all available hierarchies in
the dropdown list.
Finally, the default hierarchy for table, chart and details chart is set to flat presentation.
8. Hover over Dropdown_Hierarchies in the Outline panel, click (Edit Scripts), and enter the following script
in the script editor:
Sample Code
This onSelect function simply sets the hierarchy of table, chart and details chart to the selected element
in the dropdown list.
9. In the Outline panel, hover over Table, choose (Edit Scripts), and enter the following script in the script
editor:
Sample Code
Details_Chart.removeMeasure(CurrentMeasures[i],Feed.ValueAxis);
Details_Chart.addMeasure(memberId,Feed.ValueAxis);
}
// Details_Chart.addMeasure(memberId,Feed.ValueAxis);
CurrentDetailsMeasures.push(memberId);
Popup_show = true;
}
// Dimension
else {
console.log(['Selection Dimension: ',dimensionId]);
console.log(['Selection Member: ', memberId]);
Details_Chart.getDataSource().setDimensionFilter(dimensionId,
memberId);
Popup_show = true;
}
}
}
if (Popup_show === true) {
Popup_Details.open();
}
This onSelect function captures the selection made in the table and writes it into the console.
Until the selected element is determined, the popup window is set to invisible. The script loops over the table
selection, determines whether it was a measure, dimension, or data cell, and then pushes this information onto
the details chart. Then the selected measures are saved in the variable CurrentDetailsMeasures.
Finally, the popup is set to visible and the popup window opens.
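The measure-versus-dimension branching can be sketched with a plain function. Treating a selection from the account dimension as a measure follows the filter dimension used throughout this example; the collections and the function itself are mocked assumptions.

```javascript
// Mocked state of the details chart in the popup.
var detailsFilters = {};
var detailsMeasures = [];

// Hypothetical core of the onSelect handler: decide whether the selected
// cell is a measure or a dimension member, then update the details chart.
function handleSelection(dimensionId, memberId) {
  if (dimensionId === "Account_BestRunJ_sold") {
    // Measure cell: show this measure in the details chart.
    detailsMeasures.push(memberId);
  } else {
    // Dimension cell: filter the details chart on the selected member.
    detailsFilters[dimensionId] = memberId;
  }
  // In either branch the popup should be shown afterwards.
  return true;
}

// Viewer selects the dimension cell Los Angeles.
var show = handleSelection("Location_4nm2e04531", "Los Angeles");
```

Selecting a measure cell such as Gross Margin Actual would take the first branch instead and add that measure to the details chart.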
10. In the Outline panel, hover over Chart, choose (Edit Scripts), and enter the following script in the script
editor:
Sample Code
Details_Chart.addMeasure(memberId,Feed.ValueAxis);
CurrentDetailsMeasures.push(memberId);
Popup_show = true;
}
// Dimension
else {
console.log(['Selection Dimension: ',dimensionId]);
console.log(['Selection Member: ',memberId]);
Details_Chart.getDataSource().setDimensionFilter(dimensionId, memberId);
Popup_show = true;
}
}
}
}
if (Popup_show === true) {
Popup_Details.open();
}
This onSelect function gets the selected element of the chart and saves it in the variable sel.
The popup window is set to invisible. The script removes the current measures from Details_Chart
and replaces them with the selected measure or dimension filter. Then, the selected measures are saved in the
variable CurrentDetailsMeasures.
Finally, the popup is set to visible and the popup window opens.
11. To write the script for the button Cancel, hover over Popup_Details in the Outline panel, select (Edit
Scripts), and enter the following script in the script editor:
Sample Code
// remove the current measure selection and set all default measures for
the details chart
for (var i = 0; i < CurrentDetailsMeasures.length; i++) {
Details_Chart.removeMeasure(CurrentDetailsMeasures[i], Feed.ValueAxis);
}
CurrentDetailsMeasures = ArrayUtils.create(Type.string);
for (i = 0; i < CurrentMeasures.length; i++) {
Details_Chart.addMeasure(CurrentMeasures[i], Feed.ValueAxis);
}
// close the popup
Popup_Details.close();
This script removes the content of the variable CurrentDetailsMeasures from Details_Chart and
sets the default measures from the script variable CurrentMeasures as the measure of the details chart.
Lastly, the popup window closes when viewers click on the button.
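The reset performed by the Cancel button can be sketched with a mocked details chart. The measure names are hypothetical; the logic mirrors the two loops in the sample code above: strip the measures added for the last selection, then restore the defaults.

```javascript
// Mocked details chart with a measure added by the last selection.
var Details_Chart = {
  measures: ["Gross_Margin_Plan"],
  removeMeasure: function (m) {
    this.measures = this.measures.filter(function (x) { return x !== m; });
  },
  addMeasure: function (m) { this.measures.push(m); }
};

// Script variables: last selection and the default measures.
var CurrentDetailsMeasures = ["Gross_Margin_Plan"];
var CurrentMeasures = ["Gross_Margin_Actual", "Gross_Margin_Plan"];

// Remove the selection-specific measures...
for (var i = 0; i < CurrentDetailsMeasures.length; i++) {
  Details_Chart.removeMeasure(CurrentDetailsMeasures[i]);
}
CurrentDetailsMeasures = [];

// ...and restore the defaults.
for (i = 0; i < CurrentMeasures.length; i++) {
  Details_Chart.addMeasure(CurrentMeasures[i]);
}
```

After the reset, the details chart shows exactly the default measures again, ready for the next selection.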
12. Define what happens when the application or story is first run by creating the onInitialization
function.
a. In the Outline panel, hover over Canvas (for analytic applications) or the relevant page (for optimized
With this script you make sure that on initialization, the hierarchies are loaded into
Dropdown_Hierarchies, and the hierarchy is in flat presentation by default.
After doing that, you fill the script variable CurrentMeasures with the available measures of gross
margin, actual, plan, absolute and percent.
13. Save the application or story, and open it in view time.
If you select dimension data cell Los Angeles, the popup window appears. It gives an overview of the measures
Gross Margin, Plan, Absolute and Percent for the location Los Angeles over time.
In the browser console, you can always see the current selection printed here.
If you select the measure cell Gross Margin Actual, this measure is shown in the popup window in relation to
time.
If you change the dimension to Location, the hierarchy to States and choose Nevada, the popup window
appears.
Select the measure Gross Margin Abs Dev in regard to the dimension Orange with pulp.
Use a MemberInfo object in addition to the member ID in the setDimensionFilter() API. This improves your story
or analytic application's performance, as there's no roundtrip to the backend to fetch the member's description.
If you use the method setDimensionFilter() on a data source and pass only a member ID, then there's a
roundtrip to the backend to fetch the member’s description:
When you pass a MemberInfo object instead, which contains a description, then there's no roundtrip to the
backend:
Sample Code
Sample Code
Whenever the member description is available to you, use MemberInfo to improve your story or application's
performance.
Note
If the descriptions in the filter aren't visible to the end users, simply use a dummy description.
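A minimal sketch of the difference, with the backend roundtrip simulated by a counter. The MemberInfo shape { id, description } follows the SAC scripting API; the data source, the dimension name, and the lookup itself are mocks for illustration.

```javascript
// Counts simulated description lookups.
var backendCalls = 0;

// Mocked data source: a bare id forces a description lookup, while a
// MemberInfo object already carries the description.
var dataSource = {
  filters: {},
  setDimensionFilter: function (dim, member) {
    if (typeof member === "string") {
      backendCalls++; // roundtrip to fetch the member's description
      member = { id: member, description: "(fetched)" };
    }
    this.filters[dim] = member;
  }
};

// Id only: triggers the simulated roundtrip.
dataSource.setDimensionFilter("Location", "CT1");

// MemberInfo with a description: no roundtrip needed.
dataSource.setDimensionFilter("Location", { id: "CT1", description: "California" });
```

Whenever the description is already at hand in your script, passing the MemberInfo variant avoids the extra lookup entirely.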
You can open your analytic application or optimized story in view mode to view and analyze the performance of
the running scripts in the script performance popup, which appears as a swim lane diagram.
Tip
Enter the parameter in front of the # character in the URL and precede the parameter with a question
mark ?.
• From Tools in the edit time toolbar, select (Performance Optimization) Analyze Script Performance
in View Time .
The application or story opens in a new tab, with the parameter added to the URL.
When the application or story is fully loaded, press Ctrl + Shift + A or Ctrl + Shift + Z .
In the script performance popup, the performance results are displayed in a swim lane diagram where each
script event is represented as a blue lane. The events are numbered according to the running sequence. As the
scale is the same for all events, you can immediately identify the long running events.
From the tooltip, you can see the exact running duration of an event.
If there are performance-relevant events during the script execution that you as an application designer or
story developer can optimize, they are displayed below the triggering event as separate red
swim lanes with the same scale.
• Waiting for [widget name] to be loaded in background means that a script tries to access a widget that is
not loaded at the particular point in time. The scripts are in waiting position until the widget is loaded in the
To have a clearer view, you can hide some measurements by selecting the lanes. To refresh and show all
measurements again, select Show All.
Related Information
Best Practices for Performance Optimization in Your Story Design [page 1127]
Best Practices for Performance Optimization in Your Analytic Application Design [page 1951]
The SAP Analytics Cloud Digital Boardroom is a place to design a real-time, interactive boardroom
presentation.
Note
The SAP Digital Boardroom is an add-on to user licenses and requires at least one user license of the BI
User, Predictive, Planning Standard, or Planning Professional type.
Before using the Digital Boardroom, you must be assigned a Boardroom Viewer or Boardroom Creator role
by your administrator. For administrators, see Standard Application Roles [page 2838] for details.
Creating Presentations
Running a Presentation
Use the Digital Boardroom to transform your executive meetings. Replace static presentations and stale
information with interactive discussions based on real data – allowing you to make fact-based decisions to
drive your business.
From the side navigation, select Files. On the Files page, select the (Filter) tool and choose Digital
Boardrooms. You can further refine the list by selecting a filter (All files and choose Owned by me or Shared with
me).
On the Files page, you can use any of the toolbar operations to manage your Digital Boardrooms. For more
information, see Manage Files and Folders [page 50].
Present a Digital Boardroom presentation: Click the title of any presentation to present it.
Or, while editing an agenda or dashboard on the canvas, choose (Start Presentation)
from the View section of the toolbar.
tion, select (Digital Boardroom). From the Create New area, choose Agenda ( ) or
Dashboard ( ).
Note
For any file dependencies (for example, stories used in the Digital Boardroom presenta-
tion), the person you share the Digital Boardroom presentation with also needs permis-
sion to view the file dependencies. You can share an individual file directly from the app or
you can share multiple files from the Files page.
When you select the Favorites ( ) view, you can select the All files filter and choose Owned by me or Shared with me to further refine the list.
• Agenda ( ): Your traditional boardroom meeting structure. Create agenda items, then add and combine
pages from any story into your topics.
Use an agenda for meetings with a timed schedule, where items are presented in a linear order (possibly by
different people).
• Dashboard ( ): A modern, exploratory, corporate steering presentation. Create freeform topics to match
your business organization, then add and combine pages from any story.
Use a dashboard to allow the presenter to decide where to go next, without a predefined path to follow.
You can display on the Home screen a Recent Presentations tile listing the five most recent SAP Digital
Boardroom dashboards and agendas you have viewed.
The right display setup helps you get the most out of your Digital Boardroom presentations. Follow these instructions to pick a setup and configure your screens and browser.
Topic Sections
For recommended hardware, see System Requirements and Technical Prerequisites [page 2742].
With multiple large 4K touch screens connected to a single PC, you can present several topic pages at once.
You can connect up to three screens to provide an immersive experience. A standard PC with HDMI or DVI ports can support three monitors.
You can also use a single large screen, like a Microsoft Surface Hub, a Cisco Webex Board, or a single touch-
screen monitor connected to a PC.
Responsive layouts in Digital Boardroom presentations can adjust to the number of touch screens you are
using. With a single screen, the visible page can show a preview of pages in the same topic. You can select or
swipe from one to the next, or jump from one to another using bread crumbs or other navigation features. For
more information, see Presentation Settings and Theming [page 1849].
Another option is a video wall, where several individual monitors are combined into a single screen.
1. Connect the video cables and the USB cables for touch control to the PC.
If you need to use Google Chrome or Microsoft Edge as your browser, you can use a third-party application to
span the browser across multiple screens.
While not tested by SAP, applications such as UltraMon and DisplayFusion are known to work for spanning SAP
Digital Boardroom across multiple monitors.
Related Information
Before adding stories to your SAP Analytics Cloud boardroom presentation, go to the Stories area of the
application and optimize the display and enabled features for the Digital Boardroom.
It is recommended to design Digital Boardroom presentations using Responsive story pages. Responsive
pages allow you to create layouts that automatically resize and reflow when viewed on different screen sizes,
and are required for viewing on mobile devices. Canvas pages can also be used; however, grid pages are not
supported. If a story contains a mixture of responsive, canvas, and grid page types, only the responsive and
canvas pages are imported into the Digital Boardroom designer.
Each page title in your story is visible in the Digital Boardroom navigation when presenting. Choose meaningful
titles for your story pages to clearly communicate the purpose of the visualizations on that page.
Data on a story tile can be sorted from the (Sort) context menu or filtered by ranked values from the
(Rank) context menu. You can also show suggested variances by selecting (Variance) from the context
menu. You can enable these options so presenters can use them from within a Digital Boardroom agenda or
dashboard.
1. With a tile selected, open the Designer panel and select Styling.
2. In the Boardroom Properties section, select any of the options:
• Enable Sort Option in Boardroom
• Enable Top N Option in Boardroom
• Enable Variance Option in Boardroom
While presenting a story tile in the Digital Boardroom, you can launch the Explorer from the (Explorer)
context menu. The Explorer allows you to change measures, dimensions, and chart types, and edit filters, to
further analyze your data. For more information, see Explore Your Data (Classic Story Experience) [page 1188]
and Launching the Explorer from a Story Page (Classic Story Experience) [page 1193].
You can also access Smart Insights from the Explorer to see more information about a particular data point
in your visualization (for details, see Smart Insights [page 2003]).
1. With a tile selected, open the Designer panel and select Builder.
2. In the Properties View Mode section, select Enable Explorer.
If you want to restrict the number of measures and dimensions that are visible in the Explorer, select
Configure Measures & Dimensions. Note that all measures and dimensions that are currently in the chart
are automatically included and can’t be removed. Also, if you don’t specify any additional dimensions or
measures, then only the ones used in the chart or table are available in the Explorer.
You can clear your configuration of measures and dimensions by selecting Clear Selection.
Restriction
• Measure and dimension configuration is not available for blended charts and tables.
Planning model data in tables can be modified in a presentation. To accommodate cell modification on touch devices, such as a three-screen display with touch capabilities, first enable the touch keypad in the Stories area.
After saving your story, the touch keypad can now be used to edit numbers in a table cell during the boardroom
presentation.
Topic Sections
Use the Digital Boardroom to build a sequence of agenda items, then add and combine pages from any story
into your topics.
1. From the side navigation, select (Digital Boardroom), then choose Agenda ( ) in the Create New
section.
2. In the Create New Presentation dialog, select a location (private, public, or workspace folder).
3. Enter a unique name and an optional description and choose OK.
4. Enter a title, presenter name, and time for the first agenda item.
You can select the icon to add an image to the agenda item from a local file such as a profile picture for
the presenter. Choose Remove Image to delete the current image.
5. Choose Designer and open the Build tab. Enter a title, description, location, and date for the
presentation properties.
(Optional.) Set the Mobile enabled option if you want this presentation to be available on the mobile app.
Now that you have basic details filled out, let's start adding story content.
Import a Story
The content for your agenda comes from story pages containing the visualizations and data you want to
present. Let's import a story into this presentation.
Note
Imported stories are arranged as page bundles in the Stories panel and can be expanded to see the pages
inside.
Once you have an imported story, it's time to build the agenda structure.
Each agenda item in a Digital Boardroom agenda can have one or more topics associated with it. Topics are
containers for story pages. By adding topics to agenda items in a sequence and story pages to those topics,
you create the order and flow that your boardroom meeting will follow. To start, let's add a topic to the first
agenda item and fill it with a single story page.
Tip
Group similar topics under each agenda item with the aim of clearly separating content between different
business organizations and presenters.
1. Click (Add Topic) under your agenda item to add a connected, empty topic.
2. Enter a topic title.
3. From the Stories panel, drag-and-drop a story page onto the topic square.
Topics are not limited to single story pages. You can drag an entire story or multi-select a subset of story
pages and drag them all into a single topic to be displayed together. Limit the number of story pages to 20 or fewer to avoid overloading the topic.
After adding pages to a topic, you can change the order they are displayed within the topic using a drag-and-
drop approach, or even drag them to other existing topics on the canvas. To help you keep track of everything,
the Stories panel indicates what story pages are already used on the Digital Boardroom canvas with the
icon. Select to see a list of topics the page is used in. Choosing a topic centers that topic on the canvas:
At the top of the Stories panel, select and choose Used stories or Unused stories to filter your view of
stories displayed in the panel.
Note
When presenting topics with multiple pages in the Digital Boardroom, the display of the story pages is
determined by the lane settings of the Preferences option in the context menu.
You can accommodate multiple presenters in a board meeting by creating additional agenda items. To add a
new agenda item:
1. Choose New Agenda Item from the Insert section of the toolbar.
If your agenda consists of many topics, you may want to mark a few important topics as featured to provide
quick access to them during the presentation.
To set a featured topic, select Add as featured . When presenting the agenda, you can open featured
topics by opening the presentation structure panel or by selecting Agenda from the context menu and then
opening the Featured Topics panel.
Your agenda can get very busy after you add multiple agenda items, topics, and story pages to the canvas.
There are several ways you can simplify your view of the canvas to get a better picture of the overall agenda:
• Collapse entire topic trees under an agenda item by selecting (Expand/collapse topic tree) underneath
the parent agenda item.
In addition to cleaning up the canvas view, be sure to delete unused stories from the Stories panel by selecting
At any time when building your agenda, save and then choose (Start Presentation) from the View section
of the toolbar to launch the Digital Boardroom presentation.
For details on presenting your agenda, see Running Your Digital Boardroom Presentation [page 1857].
You can also customize the look of your Digital Boardroom presentation by setting display options and creating
a custom theme. For details, see Presentation Settings and Theming [page 1849].
Related Information
Topic Sections
Use Digital Boardroom dashboards to create freeform topics to match your business organization, then add
and combine pages from any story.
1. From the side navigation, select (Digital Boardroom), then choose Dashboard ( ) in the Create New
section.
2. In the Create New Presentation dialog, select a location (public, private, or workspace folder).
3. Enter a unique name and an optional description and choose OK.
5. Choose Design and open the Build panel. Enter a title and description for the presentation properties.
(Optional.) Set the Mobile enabled option if you want this presentation to be available on the mobile app.
Import a Story
The content for your dashboard comes from story pages containing the visualizations and data you want to
present. Let's import a story into this presentation.
Note
Imported stories are arranged as page bundles in the Stories panel and can be expanded to see the pages
inside.
Individual story pages can be shown or hidden using the Expand and Collapse button.
Once you have an imported story, it's time to build the structure.
Each topic in a Digital Boardroom dashboard can have one or more story pages associated with it. To start, let's
fill the root topic with a single story page.
Tip
An overview page makes a good starting point for a dashboard-style presentation. An overview allows you
to show views of high-level KPIs, and you can add jump points to other pages for a deeper dive.
1. From the Stories panel, drag-and-drop a story page onto the root topic square.
Topics are not limited to single story pages. You can drag an entire story or multi-select a subset of story pages
and drag them all into a single topic to be displayed together. Limit the number of story pages to 20 or fewer to avoid overloading the topic.
After adding pages to a topic, you can change the order they are displayed within the topic using a drag-and-
drop approach, or even drag them to other existing topics on the canvas. To help you keep track of everything,
the Stories panel indicates what story pages are already used on the Digital Boardroom canvas with the
icon. Select to see a list of topics the page is used in. Choosing a topic centers that topic on the canvas:
At the top of the Stories panel, select and choose Used stories or Unused stories to filter your view of
stories displayed in the panel.
Note
When presenting topics with multiple pages in the Digital Boardroom, the display of the story pages is
determined by the lane settings of the Preferences option in the context menu.
With the root topic established, you can now start to build out your dashboard. By adding topics to the canvas,
either connected in a parent-child relationship, or free-floating and accessible by navigation links, you create a
navigation flow for possible exploration.
Wherever possible, create topics as connected children to other topics for easier navigation. To view a list of
all child topics, select a parent topic on the canvas and then expand the Topic Structure section of the
Build panel. You can change the order by clicking and dragging a child to a new position.
Use free-floating topics and navigation links for content that you may cover optionally.
1. Click (Add Topic) under your root topic to add a connected, empty topic.
When presenting, a topic connected as a child to a parent follows as the next topic when you use the standard navigation options.
Note
In addition to creating new topics, you can also import topics from other dashboards. Choose
Select existing and select one or more topics from an existing dashboard. Select Live link for the topics to
be updated automatically when the source dashboard changes.
Under the Tile heading, choose (New jump point). In the dialog, select a tile, and choose Add. Use the
canvas to select the story page to link to. If you selected a chart or table as the jump point and a different
topic as the target, you can also select Apply selected dimension as a filter. With this setting, a presenter
can jump to the target and simultaneously apply the selected value as a story filter.
Note
Jumping to a target from a value in a table or chart is not supported on mobile devices.
You can change the labels for jump links afterward from the Navigation panel. These labels can help the
presenter determine the target of the link.
When presenting that particular page, the link is available by right-clicking the page or individual tile with
the link to open the context menu, and choosing Jump to.... For image, shape, text, and header tiles with jump links, you can also left-click the tile to navigate directly to the target, or to pick the target if there is more than one.
The label you used for the link is displayed. Select the link to jump to that new page.
For links with a filter, select the values you want to filter and choose from the menu that appears.
You can also mark a few important topics as featured to provide quick access to them during the presentation.
Your dashboard can get very busy after you add multiple topics and story pages to the canvas. There are
several ways you can simplify your view of the canvas to get a better picture of the overall dashboard:
• Collapse entire topic trees under a parent topic by selecting (Expand/collapse topic tree) underneath
the parent topic.
• Use the zoom controls to zoom out ( ), zoom in ( ), or center the root topic ( ) on the canvas. You
can also click and drag on the canvas to manually reposition the entire dashboard.
Some dashboard interactions are limited when zoomed out. Zoom in to the maximum size to edit all
aspects of the dashboard.
In addition to cleaning up the canvas view, be sure to delete unused stories from the Stories panel by selecting
Tip
Open the Table of Contents ( ) from the View section of the toolbar to see a list of all topics, stories, and
story pages used in the dashboard.
At any time when building your dashboard, save and then choose (Start Presentation) from the View
section of the toolbar to launch the Digital Boardroom presentation.
For details on presenting your dashboard, see Running Your Digital Boardroom Presentation [page 1857].
You can also customize the look of your Digital Boardroom presentation by setting display options and creating
a custom theme. For details, see Presentation Settings and Theming [page 1849].
Related Information
When creating a Digital Boardroom dashboard or agenda, you can add topic filters that apply to all the data
from one of the models included in a topic.
Story filters can also be applied to the content while creating a story, but topic filters can allow more flexibility.
Your topic may contain pages from several different stories, and you may want to apply a single filter to data
from a model across all of these stories. Your presentation might also include the same story page in two
different topics, where each topic has a different focus, for example, an analysis of sales data for two regions. In
these cases, you can use topic filters to show the appropriate data.
If any of the models in the topic have prompts, you can set prompt values to restrict the dimension members
that are available for your filter. Select (Edit Prompts) to set the values.
To add a filter, select (Add Topic Filter) and choose the source model. Next, select the dimension to filter, or
choose an option from the Add Time Filters list to quickly create a dynamic time filter.
To learn more about creating filters, including filtering by range, see Applying a Story or Page Filter [page
1530].
You can also create advanced filters here. For more information about creating this type of filter, see Advanced
Filtering [page 1537].
Note
By default, the filters that you create here will also be applied to topics nested within this topic in the tree
structure. You can change the filter from any of these topics. To disable this setting, choose on the filter
and deselect Apply filter to sub-topics.
Topic filters can also be narrowed down further during a presentation by selecting (Filter) from the context
menu or the action bar. For example, if the topic is filtered to a specific region, you can refine the topic filter to
analyze data for individual cities within the region.
If you want to refine filters during a presentation, ensure that the Filter option is added to the context menu
or action bar in the Presentation Settings. For more information, see Presentation Settings and Theming [page
1849].
Customize the look of your Digital Boardroom presentation by setting display options and creating a custom
theme.
Topic Sections
You can customize the layout, display, and other options from the Presentation Settings dialog. Choose
(Presentation Settings) from the File section of the toolbar.
Move the slider to choose between different multi-page layout options. For different screen setups,
here are some best practices:
• Single screen: Set the slider to 1 and turn Enable Preview off.
• Multi-screen: Set the slider to match the number of screens. Turn on Enable double-space
between pages.
Turn this setting on to show a preview of the next story page when using a single lane setup.
Agenda orientation
(Agenda only) Change between Horizontal and Vertical orientations of the agenda overview page
information.
(Agenda only) Change the font size of the text in the agenda overview page.
Choose Tabs to display all page titles together as tabs for easier navigation, or Page Header to
align each page title with its content.
Determine the position of the page navigation buttons. Choose Split to place the previous and next
page buttons on opposite sides of the screen.
Interaction
Context Menu: Drag and drop to customize the position and availability of actions on the Context menu. See Customize the Context Menu [page 1853] below for details on each menu action.
Determine the position of the action bar. Select multiple values to be able to access the bar from
different edges of the screen.
Enable this setting to always show the action bar during your presentation. Otherwise, you'll
access it by placing the cursor near the edge of the screen.
If you choose Top or Bottom and enable Fix action bar to selected position, other options are
available:
• Stacked Header/Footer: The action bar is stacked above or below the existing header or
footer.
• Combined Header/Footer: The action bar is combined with the existing header or footer to
save screen space. The action bar is right-aligned.
• Footer (available for the Bottom position only): The action bar replaces the navigation breadcrumb and logo in the footer.
Set to On to only show the icons in the action bar, without text titles.
Action Selection
Drag and drop actions to customize the Action bar. Actions can be individually positioned on the
left, center, or right side of the bar. You can also select Left Aligned, Centered, and Right Aligned to
align all actions together.
Add RSS feeds to your presentation. While presenting, choose RSS feed from the Action bar to
display your feeds.
• None: The story data in your Digital Boardroom presentation is not automatically refreshed.
Use this option if you do not want the data to change while presenting.
• Custom Interval: The story data in your Digital Boardroom presentation is automatically
refreshed according to the set time interval. Choose the number of minutes or hours between
data refreshes.
Note
For live data connections, a refresh pulls the latest data from the connected data source for
your stories. For imported data, a refresh only pulls the latest from the existing model data
already imported into SAP Analytics Cloud.
You can tailor the Context menu so it has the navigation and chart actions suitable for your specific
presentation.
Option Description
Results are divided into tabs, showing results for the current presentation topic and global results across all public, private, and shared Digital Boardroom presentations you have access to.
Filter ( ): Opens the Filter dialog, where you can view the topic filters and story filters that are currently applied, and further refine the selected members for each filter.
Note
You must first enable the Sort options for each story in
the Stories area. See Enable Sort and Top N Options in
Preparing Stories for the Digital Boardroom [page 1832]
for details.
Note
You must first enable the Rank options for each story in
the Stories area. See Enable Sort and Top N Options in
Preparing Stories for the Digital Boardroom [page 1832]
for details.
Note
You must first enable the Explorer for each story in the Stories area. See Enable Data Exploration in Preparing Stories for the Digital Boardroom [page 1832] for details.
Your presentation may include many different stories, each with its own unique styling. You can override story
styling when presenting in the Digital Boardroom, allowing you to create a harmonized visual look for your
presentations. To override the story styling:
The styling override can be changed and deleted from the Story Styling section.
Read this topic to learn how to run a presentation that someone shared with you, or that you created.
Navigating Topics
Your presentation is structured into different topics, and each topic can include one or more pages of content.
Depending on the type of presentation and how it's set up, there are different ways to navigate through the topics.
To pick a specific topic that you want to open, select the menu button ( ) on the top left corner, and then
choose a topic or subtopic.
Breadcrumbs
There are usually breadcrumbs at the bottom of the screen that show if you’ve navigated to a subtopic. Pick a
parent topic here to return to it, or choose the name of the presentation to return to the first screen.
In some dashboard presentations you can jump to another topic from a specific page or widget. Right-click or
long-tap a page or widget with a jump link set up to open the context menu, and then select Jump to... for a list
of targets.
There might also be text boxes or images in your presentation that you can left-click or tap to jump directly to
another topic.
See Add More Topics with Navigation [page 1843] for details.
Switching Pages
Each topic in your presentation can have one or more story pages. If you can't display all the pages at once, use
these options to navigate between them.
Page list
The top of the presentation shows a list of pages in the current topic. Select one to jump to that page.
Navigation buttons
If navigation buttons appear at the side of the screen, you can use them to move sequentially through the
content in your presentation. Your presentation could also be set up to let you swipe through the pages on a
touch device.
To see what other actions are available, check the context menu and action bar. The presentation designer
can customize them, so the options can vary. For example, you might use them to navigate, interact with your
charts and data, and change display settings. For a full list of options, see Options on the Context Menu and
Action Bar [page 1862].
Action bar
Move your mouse near the edge of the screen to open the action bar.
Context menu
Right-click, or long-tap on touch devices, to open the context-sensitive menu. You can open it from the
background of your presentation, or from a widget (but not from data points).
By default, your presentation uses the prompt values that were set in stories.
To change values across one or more pages, you'll need access to the Edit Prompts button on the action bar
or context menu. You can apply your changes to every page in the presentation, or just to the page that you're
on. The new prompt values apply to each widget based on that model, as well as other models linked to the
variable.
Note
The Set for All Pages option isn't available for BW models that use hierarchical variables.
Running Simulations
Some planning-enabled widgets can let you enter data directly in your presentation to explore a simulation. To
enter data, select a table cell or value driver tree and use the slider and keypad to adjust values.
For details about these widgets, see Plan and Analyze Using Value Driver Trees [page 2289] and Entering
Values in a Table [page 2192].
Right-click a data point to show a context menu. The options depend on the type of widget and the model that
it’s based on. See Chart Context Menu Options [page 1239] for more details.
The options in these menus depend on the presentation settings, as well as where you right-clicked to open
the context menu. For details about customizing these menus when creating a presentation, see Choose Your
Presentation Settings [page 1850]. These options might appear:
Context menu
Preferences ( ): Groups the Maximize ( ) and Lane settings.
Note
For live data connections, a refresh pulls the latest data from the connected data source for your stories. For imported data, a refresh only pulls the latest from the existing model data already imported into SAP Analytics Cloud.
Agenda ( )
When you're finished presenting, you can close the browser window if you're done working in SAP Analytics
Cloud.
Related Information
Discover how to use SAP Analytics Cloud, analytics designer to design analytic applications and create highly customized analytic and planning applications.
Getting Started
• Understand Styling Options of Canvas and Widgets in Analytic Applications [page 1930]
• Align Widgets in Analytic Applications [page 1932]
• Use Flow Layout Panels and Panels to Design Responsive Layout [page 1934]
• Define Themes for Your Analytic Applications [page 1937]
• Customize Your Analytic Applications' Themes Using CSS [page 1941]
• Best Practices for Performance Optimization in Your Analytic Application Design [page 1951]
• Work with Time Series Forecast in Charts in Analytic Applications [page 1973]
• Work with Explorer and Smart Insights in Analytic Applications [page 1974]
• Work with Smart Discovery in Analytic Applications [page 1976]
• Work with Search to Insight in Analytic Applications [page 1977]
• Subscribe to and View Data Change Insights [page 177]
• Apply Smart Grouping to Charts in Analytic Applications [page 1979]
SAP Analytics Cloud, analytics designer is the capability of SAP Analytics Cloud for professional design of
centrally governable analytic content, ranging from dashboards to sophisticated planning and smart analytic
applications.
An analytic application is a document that can contain visualizations such as tables, charts, and filters to
allow navigation and analysis of data. You can personalize the visualizations and interaction behaviors of the
UI elements according to user requirements. Analytic applications are created and consumed in SAP Analytics
Cloud, analytics designer. Analytics designer gives you high flexibility to create powerful applications for data
analysis and data planning tailored to your user's business needs.
Required Licenses
All SAP Analytics Cloud licenses include the creation and consumption of analytic applications. For planning
applications, please note the following:
• If you only need read access to existing planning models and create private versions, you can use the SAP Analytics Cloud for business intelligence license.
• If you need to create public versions and use all planning features, the SAP Analytics Cloud for planning,
standard edition is required.
• If you need to create or update planning models for your planning applications, the SAP Analytics Cloud for
planning, professional edition license is required.
As an application designer, make sure that you have permission to work with analytic applications. Administrators or users who can edit roles need to add the Analytic Applications permission with create, read, update, and delete rights to your role, or assign the standard role Application Creator to you.
Make sure that the users who consume your applications have read rights for analytic applications in their roles.
See Standard Application Roles [page 2838] and Permissions [page 2844].
An analytic application in SAP Analytics Cloud is an analytical application that visualizes data in various forms
and lets you navigate the data. It can also allow planning.
Analytic applications can range from simple static dashboards showing static numbers to highly customized
applications with many options to browse and navigate data, change the visualization, navigate across multiple
pages or areas, up to an extensively customized look and feel according to customer branding.
The analytics designer for SAP Analytics Cloud is the capability to create analytic applications.
There is a dedicated design environment in SAP Analytics Cloud to create such applications. The term
design in analytics designer doesn't refer specifically to the UX or UI design aspect of an analytic application.
It is the entire process of creating an analytic application: defining the data model, laying out the screen,
configuring widgets, and wiring it all up with the help of custom scripts.
The analytics designer offers you another way to create analytical content in SAP Analytics Cloud.
What can you do with analytic applications that you can't do with stories?
An analytic application typically contains some custom logic, expressed with the help of scripts. A story is
created in a self-service workflow and can comprise many laid-out widgets and a lot of configured functionality.
However, the amount of customization is limited to the possibilities offered in the story design-time environment. With analytic applications there is much more flexibility to implement custom behavior, although creating them requires a higher skill level.
From a consumption point of view, there shouldn't be any difference between stories and analytic applications.
The consumer shouldn't be aware of whether the analytical content is a story or an analytic application. The
exposed widgets, the available functionality, and the look, feel, and behavior should be the same.
However, analytic applications can use widgets that don't exist for stories and can contain custom logic
that can't be implemented in stories, since stories have no scripting. In general, stories and applications
share widgets and functionality to a large extent, but some widgets can only be used in applications because
they need to be scripted (dropdown boxes or buttons, for example).
Stories and analytic applications share functionality and widgets and may even have very similar design
environments, so why are two different artifact types necessary? The answer is that story designers and
application designers have completely different expectations. This is related to the differences between stories
and applications.
In the story design environment, it's practically impossible for you to create a story that doesn't work. The
expectation of self-service design time for stories is that business users are guided (to some extent also
limited) in what they do and can do. The story design time is supposed to consist of multiple configuration
steps that prevent business users from creating something which breaks. With story design time, we ensure
some level of consistency.
It's completely different with applications, especially with the added scripts. As soon as application designers
add custom logic with scripting, they have complete freedom to change the default behavior of the entire
analytic application. The design environment will provide everything to create correct applications, but it
doesn't guarantee that the application is correct or won't break.
In addition, an analytic application has a dedicated lifecycle: when you start it, certain steps are performed,
such as the startup event. A story doesn't have that; you can switch a story between edit and view mode as
often as you like.
These are major differences. That is why we offer two artifacts and two corresponding design-time
environments to create stories and analytic applications.
An analytic application is always data-driven. The foundation of an analytic application is one or more
underlying SAP Analytics Cloud models.
As a first step you decide whether you want to visualize your data in a table or a chart, and add a table or a
chart to your analytic application. This triggers another step for picking a model. In addition to widgets showing
data, you add to the layout other widgets that control data, such as filters, arrange and configure them, and
wire them up.
Almost all widgets expose events. In order to add custom logic to the analytic application, you can implement
event handlers with the help of the scripting language.
The variety of analytic applications is broad. They can range from very static visualizations of a few data
points to very advanced, highly customized, and interactive applications that offer rich navigation and
generic built-in exploration capabilities. However, there are some common patterns of analytic
applications:
Almost all widgets, whether smart, data-related widgets or simple widgets such as buttons and dropdown
boxes, expose events. Even the analytic application itself exposes events such as a startup event or similar. In
order to add custom logic to the application, you can implement event handlers with the help of the scripting
language.
Example
Let's say a dropdown box is populated with the available years in the data model - 2015 to 2019. The
dropdown box exposes the event OnSelect, which is triggered whenever a value is selected from the
dropdown box. The implementation of that event handler could read the selected value and set a filter for
the selected year in the model. The numbers shown will reflect the selected year.
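As a hedged sketch of this pattern, the OnSelect handler might look like the following. Dropdown_1 and Table_1 are assumed widget names, and the mock objects only stand in for the SAC runtime so the logic can be shown end to end:

```javascript
// Minimal sketch of the OnSelect pattern described above (not SAP's actual code).
// Dropdown_1 and Table_1 mock the SAC runtime widgets for illustration only.
var Dropdown_1 = {
  selected: "2019",
  getSelectedKey: function () { return this.selected; }
};
var Table_1 = {
  filters: {},
  getDataSource: function () {
    var self = this;
    return {
      setDimensionFilter: function (dimension, member) {
        self.filters[dimension] = member; // the table now shows only this member
      }
    };
  }
};

// Body of the dropdown's OnSelect event handler in the analytics designer:
function onSelect() {
  var year = Dropdown_1.getSelectedKey();
  Table_1.getDataSource().setDimensionFilter("YEAR", year);
}

onSelect();
console.log(Table_1.filters.YEAR); // "2019"
```

In the real application, the numbers shown in the table would now reflect the selected year.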
Because you can add many event handlers using the scripting APIs of all widgets and other service APIs
offered, the application can be geared towards the specific needs of a customer.
The scripting language is JavaScript. Scripts are executed by the Web browser without any transformation. We
use the Web browser's script execution engine, which is available out of the box. In order to offer good tool
support, however, the scripting APIs are defined with dedicated types, as the following example shows.
Example
Let's say that there is an API method available for filtering members: setFilter("YEAR", "2014").
The plain JavaScript method expects two strings and this is also what is executed at runtime by the
Web browser. However, our definition of the API method uses dedicated predefined types for our problem
domain, that is setFilter(Dimension, Member). With that definition, the system checks and validates
that the passed strings are a valid dimension and a valid member.
The script editor also uses the type information. It doesn't just statically check the types, but uses the
knowledge about the underlying model and provides value help to offer dimensions and members existing in
the model.
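The design-time type check described above can be illustrated with a small standalone sketch. The model object and the setFilter function below are made-up stand-ins, not SAP's actual implementation:

```javascript
// Illustrative sketch: validating setFilter arguments against model metadata.
// The model object is a hypothetical stand-in for a real model's dimensions
// and members.
var model = {
  YEAR: ["2014", "2015", "2016"]
};

function setFilter(dimension, member) {
  // At runtime, plain strings are passed; the design-time check validates
  // that they name an existing dimension and member of the model.
  if (!Object.prototype.hasOwnProperty.call(model, dimension)) {
    throw new Error("Unknown dimension: " + dimension);
  }
  if (model[dimension].indexOf(member) === -1) {
    throw new Error("Unknown member: " + member);
  }
  return dimension + " = " + member;
}

console.log(setFilter("YEAR", "2014")); // "YEAR = 2014"
```

A call such as setFilter("YEAR", "1999") would be rejected, which mirrors how the script editor flags dimensions and members that don't exist in the model.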
Procedure
1. From the side navigation on the home screen, choose (Analytic Applications). Go to the Applications
tab and select Create New Application.
When you open the home screen, the side navigation is initially displayed in a collapsed view, and you can
expand it.
• Select Insert (Table) from the toolbar to add a table based on an existing model.
A dialog pops up that lets you choose a model. You can choose Select an Existing Model or SAP
Datasphere Analytical Dataset, depending on where the data you want to use is located.
Note
For tables, you can choose No Model and add a data source later. The table will be displayed empty.
Once you have chosen an existing model, it is used as the primary default model whenever you
add a table or chart to the analytic application. You can change this data source by selecting
(Change primary model) in the Builder panel.
• Select Insert (Chart) from the toolbar to add a chart based on an existing model.
4. You can add more widgets, such as a container, dropdown, filter line, image, text, and button, via the
Insert menu.
The widgets are added to the canvas and displayed in the Outline panel with their default IDs.
Note
You can move a widget up and down by dragging and dropping it in the Outline panel. A widget listed
higher takes precedence over those below it on the canvas or in the same popup.
5. (optional) You can change the default widget name and adjust it to your needs by double-clicking the
widget in the Outline panel or via its Styling panel.
6. (optional) You can change the styling of a widget in the Styling panel. To open the panel, select (Edit
Styling) from its (More Actions) menu or select the widget and then Designer in the upper-right corner
of the analytic application.
7. Add scripts to your widgets to implement custom-specific behavior in addition to basic user interaction at
runtime:
1. In the Outline panel hover over a widget.
You can also trigger scripting by selecting (Edit Scripts) in the (More Actions) menu of the
widget.
Note
Your analytic application name can include valid alphanumeric characters and underscores.
Spaces can be included but not at the start or end of a name.
The analytic application will be saved in the location you selected. If you saved the analytic application to
a public folder, it is visible to all users. If you saved the analytic application to a private folder, only you can
see and edit it until you share it with other users. For more information, see Share Files or Folders [page
193].
9. Select Run Analytic Application in the upper right corner.
Note
You have to save any changes to your application first before seeing the latest version at runtime.
Related Information
Explore the menu options and objects in your analytic application in SAP Analytics Cloud and gain deep
insight into your data.
You have opened an analytic application and want to view and analyze your data.
An analytic application in SAP Analytics Cloud visualizes data in various forms and lets you navigate the
data. It can also support planning.
Analytic applications can range from simple static dashboards showing static numbers to highly customized
applications with many options to browse and navigate data, change the visualization, navigate across multiple
pages or areas, up to an extensively customized look and feel according to customer branding.
Here's an overview of the main menu options that you can find within an analytic application:
Note
The available menu options vary depending on the way the analytic application was created and the data it
contains.
Menu/Functionality Description/Submenu
File
Edit Analytic Application:
Save:
• Save
• Save As...
• Convert to Optimized Design Experience (saved analytic
applications only)
There are two conversion options:
• Convert: Converts the analytic application to Optimized Design Experience in Stories, which overwrites the original application. Lets you continue to use the same ID for the converted story.
• Convert + Save As: Clones your analytic application and then converts the cloned version to Optimized Design Experience in Stories. Your original application keeps its ID, and the cloned version gets a new story ID.
Note
The conversion to Optimized Design Experience is one-way, which means that reverting back to Classic Design Experience in analytics designer isn't supported.
Note
If your analytic application has a recurring publication and is later converted to an optimized story, when you view or edit this publication in Calendar, the file settings remain unchanged. Therefore, we recommend deleting the scheduled publication and rescheduling one after the conversion.
• Copy
• Duplicate
• Paste
• Paste Overall Values Only
Share:
• Share...
• Publish to Catalog
Undo
Redo
Views Outline
Info Panel
Insert
Chart
Table
Panel, Tab Strip, Flow Layout Panel, Page Book, Text, Input
Field, Text Area, Dropdown, Checkbox Group, Radio Button
Group, Button, Slider, Range Slider, Input Control, Filter Line,
Image, Shape
Custom Widgets
Conditional Formatting
Performance Optimization:
Data
Refresh:
• Refresh
• Configure Auto Refresh
Edit Prompts:
Link Dimensions
Display
Theme:
Edit CSS
When you run your application, it is displayed in fullscreen mode by default. However, when you hover the
mouse cursor over the top of the page, you can access a condensed version of the toolbar. To switch to view
mode, click the Exit Fullscreen button in the toolbar. In view mode, the toolbar is fixed and always displayed
in full, with contents that depend on the application's context.
Note
Your choice of menu options in the toolbar changes depending on the type of application and the data it
contains (analytic or planning application). The most important functions can be reached by quick access
buttons.
Functions related to planning applications such as Publish Data and Version Management are only shown if
there is a planning-enabled table available in the application. If not, the functions are hidden. An exception
for this is a planning application that is based on a BPC model: In this case only Publish Data is displayed.
• Edit
This section contains the following functions:
Refresh
• Refresh: Reload the analytic application to get the
most recent data.
• Auto Refresh: Automatically refresh the analytic appli-
cation at an interval set by the application designer.
• Tools
This section contains the following functions:
Edit Prompts: You can filter and display data by setting variables for a data model used in the analytic
application. Charts or tables built from the model are updated based on the values you enter.
Subscribe to Data Change Insights: At view time, you can subscribe to data change insights of your analytic
application or optimized story to stay informed and keep track of data changes on a regular basis.
Predictive Forecast: You can select a single cell that can be edited to start a predictive time series
forecast. For more information, refer to Run Predictive Forecasts on Table Cells [page 2247].
Version History: Use the version history function to quickly undo or redo private or public version changes.
Version Management: With this option you can copy, publish, delete, and share public and private versions.
For more information, refer to Create, Publish, and Manage Versions of Planning Data [page 2170].
Note
The Value Lock Management function can be found in the context menu of the table. Use this option
to create value locks so that data in the selected cell can't be changed or edited. This option is only
displayed if the table is based on a planning model that supports frontend value locks. For more
information, refer to About Value Lock Management [page 2206].
Note
The options Version History and Version Management display a check symbol if one of them is selected.
• Display
Comment Mode: You can show or hide all the comments in the analytic application by toggling Comment
Mode on or off from the Display menu.
Note
You can only view and create comments on a widget
or a cell of a table that is based on a planning model.
• For planning applications you'll find the Publish Data button quite prominently displayed in the toolbar. This
button lets you quickly publish or revert your edits to public versions.
• By clicking Edit Analytic Application you can switch to the edit mode of the analytic application where you
can make changes to your application, for example by adding more widgets or changing the model. When
you edit the application you can use the action menu options described in the following paragraph.
Click anywhere within the area of a widget (for example, a chart or table) to open the action menu. Expand
(More Actions) to see all the options available in the menu.
Note
Note that these options vary depending on the widget type and the data it contains. Make a choice and
the selected widget is updated right away.
The following paragraphs show the action menu options for charts as an example:
Sort: Allows you to apply an ascending or descending sort to your chart. Select the measure or dimension
you want to sort, and the sort direction. To sort on a measure that isn't included in the chart, select
Advanced Sorting, choose a measure, and then set the sort criteria.
• Top 5
• Bottom 5
• Top N Options:
• Mode - Top or Bottom
• Value - used as the number of values you want to include in the filter.
• Measure
• Cross Calculation
• Version
When a Top N filter is applied, text is displayed at the top of the chart indicating
the filter direction and value. To change the filter, select the filter text and input new
values.
Compare To: Shows the difference between versions of a measure or the difference between time periods.
For more information, refer to How to Add a Variance to a Chart (Classic Story Experience) [page 1315].
Add:
a) CGR: add a Compound Annual Growth Rate line to a chart that has date values on its axis.
b) Comment: add comments to your widget and show or hide them by switching the toggle on or off.
For a list of options you can add, refer to Chart Context Menu Options [page 1239].
Show / Hide: Allows you to show or hide chart elements. By default, most elements are shown.
The following elements can be hidden:
• Chart Title
• Subtitle
• Chart Details
• Legend
• Threshold Legend
• Data Labels
• X-Axis Labels
• X-Axis Title (Bubble, Scatterplot, and Histogram charts)
• Y-Axis Labels
• Y-Axis Title (Bubble and Scatterplot charts)
• Total
• Footer
• Time Intervals
• Date Picker
• Navigator
• Vertical Grid Lines
• Horizontal Grid Lines
• Confidence Interval
• Past Period Forecast
Edit Scripts: Opens the script editor where you can edit the chart scripts.
Edit Axis: You can manually change the axis values. For more information, refer to Modify Axes on Single or
Dual Axis Charts [page 1284].
Export: Allows you to export data from your chart as a CSV file, with or without formatting. For more
information, refer to Export Table Data as a CSV or XLSX File [page 248].
Edit Styling: Opens the Styling panel. For more information on styling options, refer to Formatting and
Styling Items on a Story Canvas [page 1589].
Note
Note that the menu options vary depending on how the chart or table was created and the data it contains.
Make a choice and the selected chart or table is updated right away.
2. Exclude: Hide data that aren't immediately useful for your current analysis.
Drill: Drill down on data that has multiple levels so you can display your data at the level you want.
a) Choose a level to Drill Down. For example, drill down to Countries from Location.
b) Choose a level to Drill Up. For example, drill up to Countries from Cities.
c) Expand/Collapse: see all levels of the hierarchy. For example, you can see Cities, Countries, and Location data altogether.
d) Set Drill: change the hierarchy back to the default level or choose another hierarchy and level to display from a list. For example, if the current level is Countries, you can reset the hierarchy back to Cities, the default level that the application designer has set, or you can drill down on Department data instead.
3. Compare To: Compare the selected data point of a chart with date or time data to itself from previous
time periods. If you are working with a bar chart, simply click on a bar and hover over another bar to see
their difference in value.
4. Break Axis: Get more information about the data of a selected chart or table.
At application runtime you can view and create comments on a widget or table cell based on both analytic
and planning models. At design time you can leverage script APIs to enable more comment-related actions.
Commenting is available in both stories and analytic applications. Only the specifics in the analytics designer
are described here.
Commenting is supported for the following widgets:
• chart
• table
• geo map
• image
• shape
• text
• RSS reader
Note
First, to enable commenting for a widget, at design time go to the Quick Menus section of its Styling panel
and select Comment.
Note
If you create a story from an analytic application, comments won’t be carried to the story.
Add Comments
For how to add comments to a table cell, refer to Adding Comments to a Data Cell [page 1407].
To add comments to a widget, from the context menu select Add Comment.
In the view time mode (after selecting Exit Fullscreen), you can directly show or hide all comments by
selecting Comment Mode from the Display menu.
If your application is in embedded mode, you need to first switch on Commenting in Embedded Mode via
System Administration System Configuration . Then you can leverage the following API to show or
hide comments:
Code Syntax
Application.setCommentModeEnabled(isEnabled:bool)
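As an assumed usage sketch, this API could be called from a widget event, for example a switch's onChange handler. Application and Switch_1 below are mocks standing in for the SAC runtime objects so the effect can be demonstrated:

```javascript
// Hypothetical sketch: toggling comment mode from a script (mocked runtime).
var state = { commentModeEnabled: false };
var Application = {
  setCommentModeEnabled: function (isEnabled) {
    state.commentModeEnabled = isEnabled; // comments are shown when true
  }
};
var Switch_1 = {
  getState: function () { return true; } // user switched commenting on
};

// Body of Switch_1.onChange in the analytics designer:
Application.setCommentModeEnabled(Switch_1.getState());

console.log(state.commentModeEnabled); // true
```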
In addition to directly creating or removing comments in an analytic application as in a story, you can leverage
scripts APIs to enable getting, posting or removing comments on a table cell either based on comment IDs or
the data context.
Example
For example, you’ve created an input field InputField_1 and would like to allow application users to:
Sample Code
var comments = Table_1.getDataSource().getComments().getAllComments(Table_1.getSelections()[0]);
console.log(comments);
var str = "";
for (var i = 0; i < comments.length; i++) {
    var text = comments[i].text;
    str = str + text;
}
InputField_1.setValue(str);
As an application designer, you can add a comment widget to the canvas to gather and manage table data cell
comments in your analytic applications. When bound to the same data source and restricted to the same data
cell, table cell comments are synchronized with comment widgets.
Context
Filter line isn't supported. Filters in an analytic application opened from calendar tasks aren't supported
either.
Procedure
1. From the toolbar select More Widgets Comment Widget to add a comment widget to the
canvas.
2. In the Builder panel of the comment widget, configure a data source.
Note
You can also use the following APIs to set, get or remove filters in comment widgets:
Code Syntax
Example
Here's a script example that shows how to display comments in a specific comment widget and apply the
same filter as a table to it.
Write the following script to display the comments in CommentWidget_1 when 2021 is selected in the
filter Date:
Sample Code
CommentWidget_1.getCommentingDataSource().setDimensionFilter("Date_703i1904sd", "[Date_703i1904sd].[YHM].[Date_703i1904sd.YEAR].[2021]");
Furthermore, write the following script to copy the filter Location from Table_1 to CommentWidget_1:
Sample Code
var filter = Table_1.getDataSource().getDimensionFilters("Location_4nm2e04531")[0];
if (filter.type === FilterValueType.Single) { // Table_1 has dimension filter "Location=SA1"
    var singleFilter = cast(Type.SingleFilterValue, filter);
    CommentWidget_1.getCommentingDataSource().setDimensionFilter("Location_4nm2e04531", singleFilter.value);
} else if (filter.type === FilterValueType.Multiple) { // Table_1 has dimension filters "Location=SA1" or "Location=SA2"
    var multipleFilter = cast(Type.MultipleFilterValue, filter);
    CommentWidget_1.getCommentingDataSource().setDimensionFilter("Location_4nm2e04531", multipleFilter.values);
}
Results
All relevant table cell comments of the same data range will be displayed in the comment widget.
The comment widget supports all the existing comment APIs. Table cell comments edited by APIs will be
synchronized with the comment widget as well.
You can copy and paste widgets together with corresponding scripts, as well as scripting objects together with
functions within an application or from one application to another.
Prerequisites
You've created an analytic application with widgets that contain scripts, or you've created scripting objects
such as script variables, script objects and OData services. Now you'd like to use one or more of these widgets
or scripting objects with the corresponding scripts in another application or even within the same application.
Context
The procedures of copying and pasting widgets and scripting objects are the same. In the procedure below,
we'll take widgets as an example.
Procedure
1. On the canvas select the widget you want to reuse and then Copy in its (More Actions) menu.
2. Decide whether you want to use the widget and script within the same application or another.
• If you want to reuse the widget and script in another application, then:
1. Select Copy.
2. Close the application.
3. In the same browser tab, open the application where you want to reuse the copied widget.
Note
Currently, you can't paste copied widgets or scripting objects to an application opened in a
new window or tab.
Note
You can also use the keyboard shortcuts CTRL + C and CTRL + V .
• If you want to reuse the widget and script within the same application, select Duplicate.
The widget is automatically added to the canvas.
Alternatively, you can select Copy and follow the same steps as copying to another application using
the paste function from the toolbar.
Results
You can see that the script has also been copied from the original widget to the pasted one, as the script
indicator next to the pasted widget name in the Outline panel is already active. You can select it to check
the copied script.
You can copy one or several supported story widgets to an analytic application in the same browser page.
Context
You can only copy widgets from a story to an analytic application, but not the other way around, from an
analytic application to a story.
The places that you can copy widgets to can be the canvas, popup or widget container such as panel or tab. You
can directly copy a chart or table widget from a story, or one from the Smart Insights panel or Examine panel to
an analytic application.
After you've copied the widgets successfully, their state, for example the applied filters, hierarchy,
ranking, and sorting, will look exactly as it does in the story, and all relevant settings such as the data
binding, necessary variables, styling settings, and other properties are kept as well.
Note
Some story widgets and features that are not yet supported in analytic applications won't be pasted to
the target analytic application.
Other restrictions:
Procedure
You can copy a widget either via Ctrl + C and Ctrl + V or the following steps:
1. In the story, choose the widget or group of widgets you want to copy.
You can restore the latest deleted widget or scripting object when designing an analytic application.
When you delete one or more widgets or scripting objects at one time, a Restore option will be displayed in the
confirmation dialog:
Note
Only the widgets or scripting objects deleted in the last operation will be restored.
After you restore a widget, all its previous state, including ID, measure and dimension binding, filters, order and
settings will be kept, as well as the relationships with filter lines. However, its position in the canvas or popup
won't be kept.
The system will also try to keep the previous name of the widget or scripting object if it hasn't been occupied
elsewhere. If the name is occupied and needs to be changed, you'll be informed.
For technical objects such as bookmarks and export to PDF, if you add a new object right after the deletion,
the deleted one won't be restored.
As a user of both SAP Analytics Cloud and SAP Datasphere, you can create an analytic application starting in
SAP Datasphere, either based on SAP Analytics Cloud space or SAP Datasphere space.
There are several scenarios to work with SAP Analytics Cloud and SAP Datasphere. For further information,
please see chapter Integration to SAP Analytics Cloud in the Integration Guide of SAP Datasphere.
Prerequisites
• To use SAP Analytics Cloud within the context of an SAP Datasphere space (Space Aware Connectivity),
SAP Datasphere and SAP Analytics Cloud need to be on the same tenant.
• To use an SAP Datasphere analytical dataset, at least one live connection to an SAP Datasphere system
needs to be configured in SAP Analytics Cloud.
You can choose an SAP Datasphere space and navigate to SAP Analytics Cloud to create the application by
selecting a dataset that is connected to the space and the space connection for a data-based widget such as
table and chart.
Procedure
The analytics designer of SAP Analytics Cloud is displayed where you can start to create an application. In
the URL header you can also see the space ID of the SAP Datasphere space you've selected in step 2.
5. Insert a model-based widget like a table or chart.
A dialog is displayed where you can select an analytical dataset that is connected to the space and the
space connection.
6. Select the dataset you would like to work with.
7. Choose OK.
8. Create the analytic application according to your needs.
9. Save the analytic application.
10. Select Run Analytic Application in the upper right corner.
You can choose the SAP Analytics Cloud space and navigate to SAP Analytics Cloud, where you can choose
whether the table or chart in your application is based on an existing data model or on an analytical dataset.
For the dataset option you're asked to select a configured connection, space and finally the dataset itself.
Procedure
The analytics designer of SAP Analytics Cloud is displayed where you can start to create an application. If
an application is created in the SAP Analytics Cloud space, the space ID won't be shown in the URL.
5. Insert a model-based widget like table or chart.
As an application designer, you can use popups and dialogs to design interactive analytic applications and
dashboards.
Context
You can add multiple popups to your analytic application to help application users quickly enter
information, perform configurations, or make selections, for example. A popup can also be used to display
more specific data for a selected item on the canvas. You can turn a popup into a dialog, which has a look
and feel consistent with other dialogs in SAP Analytics Cloud.
Because the popup acts as a container, you can put any other widgets into it, such as table, button, or
checkbox.
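Popups are typically opened and closed from scripts, for example from a button's onClick handler. The sketch below mocks the popup object; Popup_1 and the open/close calls follow the analytics designer's scripting pattern, but the mock itself is illustrative only:

```javascript
// Hypothetical sketch: opening and closing a popup from button handlers.
// Popup_1 mocks the SAC popup runtime object for illustration.
var Popup_1 = {
  visible: false,
  open: function () { this.visible = true; },   // show the popup
  close: function () { this.visible = false; }  // hide it again
};

// Body of an "open details" button's onClick handler in the analytics designer:
Popup_1.open();
console.log(Popup_1.visible); // true

// Body of the dialog's Cancel button handler:
Popup_1.close();
console.log(Popup_1.visible); // false
```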
Procedure
You can see that an empty popup opens with the default name Popup_1.
2. If you want to turn the popup into a dialog, in the Builder panel of the popup, select Enable header & footer.
Otherwise, skip to step 5.
3. Optional: You can change the title and configure buttons for the dialog.
4. Select Apply.
5. Add widgets to the popup or dialog, such as dropdown boxes, tables or charts.
For dialogs, you can only add widgets to areas other than the header and footer.
Note
If no widgets are added, even if you save the analytic application with the empty popup or dialog,
application users won't see it when triggering it at application runtime.
6. Edit the styling of the popup in the Styling panel. For example, you can change its height and width.
8. To return to the popup itself or canvas, select in the tab and choose your popup or canvas.
9. To delete the popup, choose next to it in the Outline panel and then (Delete).
Related Information
You add the text widget to your analytic application to enter text or apply dynamic text to it.
The text widget is used in both stories and analytic applications. Only the properties specific to the
analytics designer are listed here.
Note
You can define the following elements as the source of dynamic text in analytic applications as in stories:
Note
For how to add dynamic text and more information, see Add Dynamic Text to a Story [page 1098].
In addition, you can choose Script Variables as the source of dynamic text.
Note
Only variables of string, number, or integer types can be used. Variables set as arrays aren't supported.
You can use the getPlainText() API to get the text with line breaks kept and the applyText() API to change
the text in the text widget. You can use \n in the applyText() API to add line breaks in the text widget.
Example
In this example, when the application user clicks on the button at runtime, a new line This is a new
paragraph. appears under the current text in the text widget Text_1.
To achieve this, write the following script in the button's onClick event:
Sample Code
Before:
After:
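The script body itself isn't shown above; a minimal sketch using the getPlainText() and applyText() APIs just described might look like this, with Text_1 mocked so the effect can be demonstrated:

```javascript
// Hypothetical sketch of the button's onClick handler.
// Text_1 mocks the SAC text widget runtime object for illustration.
var Text_1 = {
  text: "Current text.",
  getPlainText: function () { return this.text; },
  applyText: function (t) { this.text = t; }
};

// Body of the button's onClick event in the analytics designer:
// append a new line below the current text, using \n as the line break.
Text_1.applyText(Text_1.getPlainText() + "\nThis is a new paragraph.");

console.log(Text_1.text);
// Current text.
// This is a new paragraph.
```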
You can configure input controls to dynamically change which dimension members to display for widgets in
analytic applications without scripting and compare specific measure values side by side.
Currently in SAP Analytics Cloud analytics designer, there are two types of input controls: dimension
member input controls and calculation input controls.
Furthermore, you can turn on the cascading effect of input controls so that any value selection changes you
make to one input control will affect related ones from the same data source, and vice versa.
Create an input control to control which dimension members to display for your widgets via filtering.
Procedure
For time-based dimensions, you can either choose Filter by Member or Filter by Range to set the input
control.
For more information about input control settings, you can refer to Step 6 to Step 10 on Applying a Story or
Page Filter [page 1530].
5. Select OK.
You’ve created an input control. By default all the widgets from the same data source will display any selected
dimension members from the input control. You can further use linked analysis to modify the widgets affected
by filtering. For more information, see Create Linked Analyses in Analytic Applications [page 1903].
From (More Actions) on the input control, you can select Edit Input Control to change the dimension
members. To change the display information of the input control, select Display As and choose one of the
options: Description, ID and Description and ID.
You can create a calculation input control within a widget to display specific measures of restricted dimension
members via filtering.
Context
A calculation input control can dynamically control which dimension members to be included in measure
calculation.
Procedure
1. Select a widget you want to apply the calculation to and then Designer.
2. Create calculation in the widget’s Builder panel:
• For a chart, under Measures select + Add Measure, and then from the dropdown, select + Create
Calculation.
• For a table or R visualization, under Columns select when hovering over Account, and then select +
Add Calculation.
• For a value driver tree, select + Add Account, and then from the dropdown select + Create Calculation.
• name
• measure
• dimensions related to the measure
5. Under Values, select New Calculation Input Control from the dropdown.
• name
• values: restricted dimension members for selection
7. Select OK.
You can see that the restricted dimension members are displayed in the dropdown.
8. Select OK to close the calculation editor.
Results
You can see that the widget displays the restricted measure with the calculation input control beside it. You and
application users can select any dimension members from the input control to dynamically compare the
measure values.
Note
You can change the name of a calculation input control directly beside it or in the Calculation Input Control
dialog. However, you can't change it in the Outline or Styling panel.
You can turn on cascading effect for specific input controls, so that any value selection changes in one input
control will affect others from the same data source and vice versa.
Prerequisites
All the input controls related to the cascading effect must come from the same data source.
Context
With the cascading effect on for multiple input controls, you can make them affect each other in your
application. For example, if you have both Country and Region filters, and you change the Country filter value
from All to Sweden, the Region filter will be updated to show only regions within Sweden while all other region
names are hidden.
Note
Calculation input controls can’t affect dimension member input controls, but can be affected.
Procedure
Option Description
All Input Controls from This Data Source (default option) Any value selection changes you make to
this input control will affect all the input controls from the
same data source and vice versa.
Only Selected Input Controls Any value selection changes you make to this input control will affect only
specific input controls from the same data source and vice versa.
You’ve created the cascading effect. Any value selection changes made to this input control will simultaneously
update other ones depending on the interaction option you’ve selected. To display hidden values in the affected
input controls, select Show Inactive Values on them.
From (More Actions) on the input control, you can turn off the cascading effect. When there’s no
checkmark next to Cascading Effect, any changes made to this input control won’t affect others or vice versa.
Note
At design time, cascading effect won't be applied to any input controls in popups.
You can create linked analysis on an input control, chart, table or geo map, so that when you filter or drill
through hierarchical data on it, other widgets included in the analysis can be simultaneously updated in your
analytic applications.
Prerequisites
All the widgets linked to each other must come from the same data source. This also includes blended data
source.
Procedure
Your selection will be the source widget in linked analysis, whose changes will affect other widgets.
2. From the widget's (More Actions) menu, select (Linked Analysis) Settings.
3. Select an interaction option to configure its target widgets to be affected:
Option Description
All Widgets from This Data Source Filtering or drilling through hierarchical data on this widget
will update all the widgets in the application.
Note
At runtime, when a chart’s pause mode is
switched off or it’s called by refreshData()
API, its data points can’t be selected to filter the
target widgets.
Only Selected Widgets Filtering or drilling through hierarchical data on this widget
will update only specific widgets. You can select the target
widgets to be included in linked analysis.
Note
Chart types including histogram, waterfall and time series chart aren’t supported for linked analysis
settings. They can only be configured as target widgets.
4. Select Apply.
Results
You’ve created linked analysis, which can make any change to the widget affect other ones, depending on the
interaction option you’ve selected. For geo maps, any polygon filter created or shape selection can affect other
widgets to re-render.
You can further access the linked widgets diagram, which visualizes the relationships you’ve set and helps you
adjust the existing settings.
Note
When you copy an input control with linked widgets to another analytic application or from canvas to a
popup, linked analysis won’t be copied.
You can create a filter line, which can be composed of multiple dimension member filters, either for an
individual widget or a group of widgets in your analytic application.
Context
Filter lines let you filter data-bound widgets. There are two modes for filter lines:
• Individual Widget Filter, which can be applied to an individual widget in the application. The widget can be a
chart, table or R visualization.
The widget's local filters are also displayed. You cannot select the dimension members in it at design time.
• Group Filter, which can be applied to multiple widgets in the application at one time. Supported widget
types are charts and tables.
You can preselect the members in it at design time.
A filter line displays the selected members once it's applied to the widgets. Otherwise, it appears empty.
Note
We recommend disabling the filter tokens of data-bound widgets. The concept of data-bound widget filter
tokens and that of the filter line don't align well with each other, so using both at the same time could cause
unexpected behavior.
To disable the filter tokens of a data-bound widget, from the (More Actions) menu of the widget, select
Show/Hide Chart Details/Table Details and uncheck Filters.
If you define a filter line for a group of widgets, the filter line's filters and each widget's local filter tokens
can exist at the same time. The final filtering result for such a widget is the combined data range filtered by
the corresponding filter line filters and its own filter tokens. The filters of your filter line won't be affected if the
widget's filter tokens change.
Procedure
1. From the Insert area of the toolbar, select Filter Line to add a filter line to the canvas or popup.
2. Select the filter line under the canvas or corresponding popup and then Designer.
If you select an R visualization widget, additionally you need to specify the Input Data as the data
source for dimensions in the filters.
b. (Optional) Select Add Dimension to define one or more dimension member filters.
If no dimensions are added, the local filters in the widgets will be the default ones.
Note
If you've specified dimensions in the previous step, you can only select the members from them. If no
dimensions are specified, all dimensions and members from the data source will be available.
d. Choose the widgets you want to apply the filters to.
By default they are all data-bound widgets from the selected data source. You can select specific
widgets to be filtered.
Next Steps
Save and run the application. When you select (Set Filter), all the dimensions for filtering are displayed
in the dropdown. After you've selected the dimension members, the filters are applied to the corresponding
widgets, and the selections are displayed next to . You can remove or modify the filters:
Remove filter
Hover over the filter and select .
Modify filter Select the filter. A dialog pops up where you can make mem-
ber selection changes.
The linked widgets diagram is a graphic representation of the interaction among widgets, which you can drag,
zoom in on, and zoom out on to view how widgets affect each other. It helps you manage relationships,
especially when there are many widgets in your application.
Note
Make sure cascading effect is turned on in the input controls’ context menu if you want to modify the
relationships on the diagram.
Note
The links created via scripting and widgets without data source won't be shown on linked widgets diagram.
Invisible widgets can be shown on the diagram but are greyed out.
Unlike the filter line and linked analysis, which can only set widgets to be affected, this interactive diagram lets
you set all the widgets linked to a specific widget, which include:
• From Tools in the analytic applications toolbar, select Linked Widgets Diagram.
The first time you open it, you'll see an overview page with all the linked widgets in your application. If you
reopen the diagram, it stays in its previous state.
Each widget on the diagram is represented by a node with a widget icon and ID. When you hover over a widget,
its tooltip shows its ID and, if applicable, the full title, visibility status, and filters in detail.
When you hover over a link between two widgets, you'll see a tooltip indicating the relationship: either Linked
Analysis, Filtering from Filter Line, or Cascading Effect.
With linked widgets diagram, you can both view and modify the existing widget relationships.
To view linked widgets to a specific widget, you can directly select its ID from the dropdown. You’ll see all its
source and target widgets on the diagram.
Option Description
Manage Linked Widgets Manage linked widgets to the selected central widget. A
panel will open where you can add or delete its source or
target widgets (also available for all widgets in overview).
View Linked Widgets Switch to the selected source or target widget to view its
linked widgets (also available for all widgets in overview).
Remove Link Remove the selected source or target widget from the existing relationship.
Note
When configuring relationships on the diagram, take account of the following for specific widget types:
When you hover over a widget, you can select the + icon on either side to directly add its source or target
widgets. The dropdown also lets you switch to the overview or to any widget for viewing and managing
relationships.
Note
Filter lines in individual widget filter mode aren’t supported for adding or removing links.
In the relationships overview, you can select Include Unlinked Widgets to see whether any widgets need to be
added to the existing relationships.
Related Information
As an application designer, with Linked Dimensions you can join a primary model with secondary ones that
contain common dimensions in your analytic application. You can therefore create visualizations with blended
data source, as well as filters that can simultaneously update all the charts and tables.
Blending is available in both analytic applications and stories. Only the specifics in analytics designer are listed
here. For more background information, prerequisites and steps about blending, refer to Blend Data [page
1066].
Restrictions
All the restrictions in stories are applicable to analytic applications as well. Also refer to Blend Data [page
1066].
In addition, the following features in analytic applications are not yet supported for blending:
• universal model
• filter line
• data point comment
• navigation panel
• Currently, the APIs and value help only work on the primary model of a blended chart or table; secondary
models aren't supported. For example, you can't use setDimensionFilter to filter dimension members
in a secondary model.
• For blended charts or tables, the IDs of dimensions and measure members contain modelId. Therefore,
when you need to use the output from one API as the input parameter for the consequent one, remember
to check whether the dimension IDs in the two APIs match.
• Calculated measures don’t belong to either primary or secondary models, so they don’t have modelId but
UUID.
Example
You’d like to use the product selections in Table_3, which is a blended table, to filter data in Table_1,
which uses a single model. The value help for the product dimension doesn’t include the modelId, so
you need to use console.log to get the right ID.
Sample Code
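The original sample isn't reproduced here, so the following is a minimal sketch of the approach. Analytics designer scripts are JavaScript-based; Table_3 and Table_1 are host-provided widget objects at runtime, and the stand-ins below, together with the modelId-prefixed dimension and member IDs, are illustrative assumptions only:

```javascript
// Sketch only: stand-ins for the host-provided widget objects, so the
// logic can be followed outside the analytics designer runtime.
const Table_3 = {
  getSelections: function () {
    // In a blended table, dimension and member IDs carry a modelId prefix
    // (the "[t.1]:" prefix here is illustrative).
    return [{ "[t.1]:Product": "[t.1]:Product&[P01]" }];
  }
};
const Table_1 = {
  appliedFilters: {},
  getDataSource: function () {
    const applied = this.appliedFilters;
    return {
      setDimensionFilter: function (dimension, member) {
        applied[dimension] = member;
      }
    };
  }
};

// Log the selection first to see the exact IDs the blended table returns.
const selections = Table_3.getSelections();
for (const selection of selections) {
  for (const dimensionId in selection) {
    console.log(dimensionId, selection[dimensionId]);
    // Table_1 uses a single model, so its dimension IDs have no modelId
    // prefix; strip the prefix before filtering (illustrative only).
    const plainDimension = dimensionId.split(":").pop();
    const plainMember = selection[dimensionId].split(":").pop();
    Table_1.getDataSource().setDimensionFilter(plainDimension, plainMember);
  }
}
```

The key point is the console.log call: inspect the IDs the blended widget actually returns before passing them to setDimensionFilter on the single-model widget.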
Unsupported APIs
A popup whose size is larger than the main canvas will not display correctly. Therefore, we recommend not
placing such a large popup on the canvas.
Add at least two widgets to a popup to run the popup as designed
We recommend adding at least two widgets to a popup, as widgets are the visualization of the popup. If
no widgets are added, the popup won't be displayed when you trigger it while running the analytic
application. If only one widget is added, the height and width you set for the popup won't take effect.
When a table or chart in the canvas acts as the source widget of a filter line
widget in a popup, source widget cannot find the filter line as its reference
after reloading the analytic application
If a table or chart in the canvas acts as the source widget of a filter line widget in a popup and you reopen or
refresh the analytic application, you will find that the filter line is not listed in the reference list of the table or
chart widget after you choose Find Reference. This is because currently the filter line widget in the popup isn't
initialized when an analytic application is reloaded.
To solve this, for now we recommend activating the popups by clicking on each of them. The reference list will
then display all relevant results.
Calling the setTheme API in a popup does not affect the theme settings in the
popup
Currently, calling the setTheme API in a popup does not affect the theme settings in the popup. To work around
this, you can add a panel to the popup and include all widgets in it. When you then trigger the setTheme API in
the popup, all widgets added to the panel will apply the new theme settings.
Sometimes when designing an application, if you apply a new theme and run the application directly without
clicking to preview the popups, the new theme settings won't apply to the popups at application runtime.
To solve this, for now we recommend activating the theme settings for the popups by clicking on each of
them when designing the application. The theme will then be correctly applied to the popups.
Some keyboard keys are not supported when working with more than one
table in a popup
When a popup has more than one table with the option Optimized Presentation checked, the Enter and
Backspace keys on the keyboard don’t work on the other table.
As a workaround, when a table is used in a popup, don’t check the option Optimized Presentation for the table.
When you work with tables in a popup, some functions in planning applications aren’t available such as mass
data entry, distribute values, value lock management and version history.
Therefore, we don’t recommend using planning-related functions in popups. Alternatively, you can place the
table on your canvas or a panel widget and then use the setVisible API to mimic the effect of a popup.
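The suggested setVisible approach can be sketched as follows. Analytics designer scripts are JavaScript-based; Panel_1 is a host-provided widget object at runtime, so the stand-in below (and the widget name) is an illustrative assumption:

```javascript
// Sketch only: a stand-in for the host-provided panel widget.
const Panel_1 = {
  visible: false,
  setVisible: function (v) { this.visible = v; },
  isVisible: function () { return this.visible; }
};

// onClick script of a button that shows the panel, mimicking opening a popup:
Panel_1.setVisible(true);

// An onClick script of a close button inside the panel would call:
// Panel_1.setVisible(false);
```

Because the table stays on the canvas (inside the panel) rather than in a popup, the planning functions listed above remain available.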
When you use the Planning Model API createMembers() for dimensions other than a generic dimension,
such as a Date dimension, the operation will not be successful.
Calling the version management APIs is not supported for BPC writeback-enabled versions. Use the Version
Management option and/or the Publish Data toolbar button instead.
The DataSource function setVariableValue doesn't validate the specified values, either at runtime or at
design time. All values and value combinations that are accepted in the prompt dialog are supported; other
combinations might lead to errors or to an inconsistent state.
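Since setVariableValue performs no validation, a script can guard the value itself before calling the API. A minimal sketch follows; the data source stand-in, the variable name IP_VERSION, and the allowed values are illustrative assumptions, not part of the product API:

```javascript
// Sketch only: a stand-in for a host-provided DataSource object.
const dataSource = {
  variables: {},
  setVariableValue: function (name, value) {
    // The real API applies the value without validating it.
    this.variables[name] = value;
  }
};

// Check the value against what the prompt dialog would accept
// before handing it to setVariableValue.
const allowedVersions = ["Actual", "Budget", "Forecast"];
function setVersionVariable(value) {
  if (allowedVersions.indexOf(value) === -1) {
    console.log("Rejected value: " + value); // avoid an inconsistent state
    return false;
  }
  dataSource.setVariableValue("IP_VERSION", value);
  return true;
}

setVersionVariable("Budget"); // accepted
setVersionVariable("Target"); // rejected before the API is called
```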
When you apply a dimension filter with the DataSource function setDimensionFilter, then setting ranges
on fiscal hierarchies isn’t supported.
After applying a dimension filter with the DataSource function setDimensionFilter, users can modify
filters in a table using the <n> Filter link in the table’s title area, even though the setting Allow viewers to modify
selections is turned off for the filter in the filter panel.
Runtime filters comply with the filter selection mode set at design time
When multiple values are set via API such as setDimensionFilter at runtime, if the filter was set to Single
Selection mode at design time, it won’t switch to the multiple selection mode and the value selection change
won’t be triggered. Even if you remove and then reconfigure a filter, its original selection mode will be kept.
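The behavior above can be sketched as follows. Analytics designer scripts are JavaScript-based; Table_1 and its data source are host-provided at runtime, so the stand-in below, which mimics a filter configured as Single Selection at design time, is an illustrative assumption:

```javascript
// Sketch only: a stand-in mimicking a single-selection filter.
const Table_1 = {
  selectionMode: "single", // set at design time, kept at runtime
  filterValue: null,
  getDataSource: function () {
    const widget = this;
    return {
      setDimensionFilter: function (dimension, members) {
        if (widget.selectionMode === "single" && Array.isArray(members)) {
          // Multiple values don't switch the filter to multi selection;
          // the value selection change isn't triggered.
          return;
        }
        widget.filterValue = members;
      }
    };
  }
};

// Ignored: the filter stays in single selection mode.
Table_1.getDataSource().setDimensionFilter("Location", ["CT1", "CT2"]);
// Applied: a single member matches the design time selection mode.
Table_1.getDataSource().setDimensionFilter("Location", "CT1");
```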
Can’t run smart discovery in a smart discovery dialog launched after calling
SmartDiscoveryAPIs as no analytic Entity is specified
For smart discovery defined before version 2021.1, if you call the SmartDiscovery API and set
setSettingsPanelVisible to true, you need to first enter a value in the new field Entity before you can
successfully run the smart discovery.
Styling panel cannot control whether the quick menu options of a chart
widget are visible in Explorer
In the Styling panel of a chart widget, the section Quick Menus controls whether the options in the chart's quick
menu bar are visible when running the analytic application. However, these settings currently don't take effect
in Explorer.
Some theme settings of a table widget won't take effect once you have
customized styling options
After you have modified the styling options of a table widget or any of its contents, the settings below
won't take effect:
• In the Styling panel, choosing Restore to Theme Preference won't restore the table to the theme
preference.
• Setting the theme of the table back to the default theme won't apply the default theme.
Caution
After you delete a theme, as a result of browser cache, sometimes you won't be able to save the theme as a
new theme again.
Changing sort order in navigation panel is not supported for SAP Analytics
Cloud Models connected to SAP HANA connections
When working with the navigation panel API openNavigationPanel, and thus displaying the navigation panel,
reordering measures is not supported for SAP Analytics Cloud models connected to SAP HANA data
sources.
R visualization widgets with HTML content are not supported in Safari on Mac when third-party cookies are not
allowed in Safari.
Currently in Safari on a mobile device, Request Desktop Website for an analytic application is not supported.
While you can embed web pages, stories and other analytic applications into your analytic application via the
web page widget, embedding an analytic application into stories or digital boardroom isn't officially supported.
Currently, analytics designer only supports dimension member input controls and calculation input controls
based on restricted measures. For more information, see Configure Input Controls in Analytic Applications
[page 1899].
Calculation input controls for controlling the version or the cut-over date for
the forecast table layout are not supported in Analytics Designer
When you work with dynamic forecast note that calculation input controls for controlling the version or the
cut-over date for the forecast table layout are not supported in Analytics Designer.
If an input control is collapsed, has All Members as the selected members, and shows the hierarchy in flat
presentation, it always shows the member ID after the setSelectedMembers API is called. To show the
description on the input control as configured, expand it, or don't select All Members or flat presentation in the
design time settings.
When triggered by an Export to PDF scripting object using the exportReport API, or by the Export function
from a table’s context menu with All for Scope in the settings, the PDF generation may produce garbled
characters in the table for some languages, such as Chinese, Japanese, Korean, Russian, and Arabic.
When exported to PDF, widgets might not keep all CSS styling settings, for example, text decoration and
opacity.
Due to restrictions of the JavaScript library and the JavaScript PDF generator library that custom widgets are
based on, custom widgets might not be fully exported to PDF. For instance, the chart axes of a custom widget
based on D3.js won’t be exported. Custom widgets with CSS settings might not keep all their styling when
exported to PDF, either.
For panels, flow layout panels, tab strips and page books, only contained widgets that can be seen at once
without scrolling will be exported to PDF via the exportView() API. The same applies to data in scrollable
charts and tables.
Number format settings and related APIs don’t apply to tooltip measures for
charts
Number format settings only control the in-context chart measures, where the measure selection doesn’t
include tooltip measures. To make all measures’ scaling consistent, we recommend changing the tooltip
measures into calculated measures so that you can modify their format.
Similarly, number format APIs only apply to measures on axes, Feed.ValueAxis and Feed.ValueAxis2.
Tooltip measures on axes, Feed.TooltipValueAxis, aren’t supported, for example.
addMeasure, removeMeasure and addMember APIs for charts support a limited number
of feeds
For charts, addMeasure and removeMeasure APIs only support Feed.ValueAxis, Feed.ValueAxis2,
Feed.Color and Feed.bubbleWidth. When the structure member is a measure, addMember API only
supports these feeds as well.
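The feed restriction can be sketched as follows. Analytics designer scripts are JavaScript-based; Chart_1, the Feed enumeration, and addMeasure are host-provided at runtime, so the stand-ins below (including the feed string values and the measure IDs) are illustrative assumptions:

```javascript
// Sketch only: stand-ins mimicking the documented feed restriction.
const Feed = {
  ValueAxis: "valueAxis",
  ValueAxis2: "valueAxis2",
  Color: "color",
  bubbleWidth: "bubbleWidth",
  TooltipValueAxis: "tooltipValueAxis"
};
const SUPPORTED_FEEDS = [Feed.ValueAxis, Feed.ValueAxis2, Feed.Color, Feed.bubbleWidth];
const Chart_1 = {
  feeds: {},
  addMeasure: function (measureId, feed) {
    if (SUPPORTED_FEEDS.indexOf(feed) === -1) {
      throw new Error("addMeasure does not support feed: " + feed);
    }
    this.feeds[feed] = measureId;
  }
};

// Supported: adding a measure to the value axis.
Chart_1.addMeasure("[Account].[parentId].&[GrossMargin]", Feed.ValueAxis);

// Not supported: tooltip feeds are rejected.
let rejected = false;
try {
  Chart_1.addMeasure("[Account].[parentId].&[Discount]", Feed.TooltipValueAxis);
} catch (e) {
  rejected = true;
}
```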
You can use various widgets in analytics designer, including table, image, text, dropdown, checkbox group,
radio button group, button and so on.
For widgets that are used both in stories and analytic applications, only the properties specific for analytics
designer are mentioned here.
12.2.17.1 Chart
Add charts to an application to present data in a graphical way. Charts can emphasize irregularities or trends in
data, and help you focus your business analysis on those areas.
The chart widget is used in both stories and analytic applications. Only the properties specific for the analytics
designer are listed here.
To make it possible to create a new story for the chart directly from the chart widget, select the option
Open in New Story in the section Quick Menus of the chart widget's Styling panel. When you run the analytic
application, you'll be able to select the icon next to the chart and choose Open in New Story.
12.2.17.2 Table
The table widget is used in both stories and analytic applications. Only the properties specific for the analytics
designer are listed here.
To make it possible to create a new story for the table directly from the table widget, select the option Open in
New Story in the section Quick Menus of the table widget's Styling panel. When you run the analytic application,
you'll be able to select the icon next to the table and choose Open in New Story.
Using the geo map widget, you can overlay multiple layers of business data on a geo map with detailed
geographic information to perform analysis on your geographic data.
The geo map widget is used in both stories and analytic applications. Only the properties specific for the
analytics designer are listed here.
Create Story from Widget Checkbox Not enabled by default. If you enable
the option as an application designer,
the option will be available to application users.
After adding a geo map widget to the canvas or popup, as an application designer you can write scripts to
trigger changes to different layers of the geo map. For example:
Sample Code
//click the button to set the bottom layer of the geo map to invisible
GeoMap_1.getLayer(0).setVisible(false);
For more detailed information about the geo map APIs, refer to Analytics Designer API Reference in the SAP
Help Portal.
12.2.17.4 R Visualization
You use the R visualization widget to create and edit visualizations based on R scripts. By setting the relevant
properties, you can change the size and style of the widget.
Caution
R visualizations are supported in restricted scenarios. Therefore, we strongly recommend using custom
widgets to create your custom visualizations. For more information about custom widgets, see Use Custom
Widgets in Analytic Applications [page 1927].
The R visualization widget is used in both stories and analytic applications. Only the specifics for the analytics
designer are listed here. For how to create an R visualization and general restrictions, you can refer to Add R
Visualizations to Stories [page 1506].
For a managed R server, three more language strings are now supported: Chinese, Japanese, and Korean. To
display these language strings, make sure you add extra R script like the following:
Sys.setlocale("LC_ALL", "zh_CN.UTF-8")
Sys.setlocale("LC_ALL", "ja_JP.UTF-8")
Sys.setlocale("LC_ALL", "ko_KR.UTF-8")
After adding an R script and displaying the running result as visualization on the main canvas, you can write
scripts, for example, to customize the local data source, set input parameters as numbers or strings, get
input parameter as numbers or strings and check if there's any error when executing R scripts. More detailed
information about the APIs is available in Analytics Designer API Reference.
Related Information
12.2.17.5 Image
Using the image widget, you can enhance analytic applications by adding images with hyperlinks.
The image widget is used in both stories and analytic applications. Only the properties specific for the analytics
designer are listed here.
Besides adding a hyperlink to an image in the same way as you do in a story, you can call the hyperlink API
in your scripts to cover more scenarios. If you set different hyperlinks for an image in both of the ways above,
the hyperlink defined via the API takes precedence. Note that, unlike hyperlinks added in a story, hyperlinks
defined in your scripts only support external URLs and always open in a new tab.
More detailed information about the API is available in SAP Help Portal at Analytics Designer API Reference.
Using the shape widget, you can enhance analytic applications by adding shapes with hyperlinks.
The shape widget is used in both stories and analytic applications. Only the properties specific for the analytics
designer are listed here.
Besides adding a hyperlink to a shape in the same way as you do in a story, you can call the hyperlink API in
your scripts to cover more scenarios. If you set different hyperlinks for a shape in both of the ways above, the
hyperlink defined via the API takes precedence. Note that, unlike hyperlinks added in a story, hyperlinks
defined in your scripts only support external URLs and always open in a new tab.
More detailed information about the API is available in SAP Help Portal at Analytics Designer API Reference.
12.2.17.7 Symbol
The symbol widget provides a way for application designers to easily add special characters or symbols into text.
You can insert symbols in different kinds of texts in an analytic application, including but not limited to:
• text widget
• Chart/Table title
• Names of reference line
• Other places in Builder Panel
Add an RSS Reader to your analytic application to present relevant articles from an RSS feed alongside your
data and visualizations.
The RSS reader widget is used in both stories and analytic applications. Only the properties specific for the
analytics designer are listed here.
You can leverage the open APIs to dynamically update the list of RSS feeds according to your actions—for
example, to show blogs relevant to your area of interest. For detailed information about all available open APIs,
refer to Analytics Designer API Reference.
You can embed content via the web page widget into your analytic application to combine your analytics with
additional live, hosted content.
The web page widget is used in both stories and analytic applications. Only the specifics for SAP Analytics
Cloud, analytics designer are mentioned here. For more information about embedding and specifics for stories,
refer to Adding an Embedded Web Page [page 1114].
You can apply additional sandbox restrictions to allow for more activities from the embedded web page by
selecting allow-downloads and allow-popups in the Builder panel.
You can embed web pages, stories and other analytic applications into your application via this widget.
However, before embedding either web pages or SAP Analytics Cloud contents, consider potential issues such
as:
• Performance issues: Using embedding may slow down your application’s runtime performance.
• Session logon issues: Sessions can’t be shared across web page widgets. For instance, application users
may see separate SSO popups from each web page widget that embeds stories with a BW live data
connection, even if these stories are bound to the same data source.
Restriction
Embedding other SAP Analytics Cloud content into stories via the web page widget isn’t officially
supported. This includes, but is not limited to, other stories and analytic applications.
Related Information
Understand Message Communication Between Host and Embedded Web Pages [page 1947]
With a text area widget, analytic application users can enter multiple lines of text or comments and
descriptions retrieved from other widgets.
Analytic application users can use both the text area and the input field to enter text. The difference between
them is that the input field displays content only in one line while the text area can automatically wrap lines
according to the size of the widget. The maximum text length of a text area widget is 1000 characters.
Display Hint toggle, string If you disable this option, no hint will be displayed in the analytic application, either
at design time or at runtime. After enabling this option and entering a hint, when application users later start to
enter a value in the text box at analytic application runtime, the hint will be overwritten by the user-entered text
automatically.
After adding a text area widget to the canvas or popup, as an application designer you can write scripts to
change the text area, such as set it to editable or not editable.
More detailed information about the API is available in SAP Help Portal at Analytics Designer API Reference.
12.2.17.11 Button
Button widgets let the application user interact with the analytic application.
Button Styles
• Standard Button
• Lite Button
• Emphasized Button
• Positive (Accept) Button
• Negative (Reject) Button
These styles bring default border and background colors for different states including Default, Mouse Hover
and Mouse Down. Based on the default settings, you can further customize colors of your buttons in different
states.
After specifying the colors, you can choose to upload an icon and decide whether to show or hide the icon in
the button. The size of the uploaded icon will be automatically adjusted to adapt to the size of the button.
The icon can be placed either to the left or right of the text in the button. Both the icon and text will be center
aligned.
12.2.17.12 Dropdown
Dropdown widgets let the application user select items, for example to set a filter. To enable user interaction,
you need to add a script to the onSelect event of the dropdown. The script is triggered when the application
user selects an item in the dropdown widget.
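Such an onSelect script can be sketched as follows. Analytics designer scripts are JavaScript-based; Dropdown_1 and Table_1 are host-provided widget objects at runtime, so the stand-ins below, together with the dimension and member IDs, are illustrative assumptions:

```javascript
// Sketch only: stand-ins for the host-provided widget objects.
const Dropdown_1 = {
  selectedKey: "CT1",
  getSelectedKey: function () { return this.selectedKey; }
};
const Table_1 = {
  appliedFilters: {},
  getDataSource: function () {
    const applied = this.appliedFilters;
    return {
      setDimensionFilter: function (dimension, member) {
        applied[dimension] = member;
      }
    };
  }
};

// onSelect event body: filter the table by the item chosen in the dropdown.
Table_1.getDataSource().setDimensionFilter("Location", Dropdown_1.getSelectedKey());
```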
Value list of values With the value property, you can add values, change the order of values, or remove them.
Each value has the following properties:
• value
• text (optional)
If you enter a text for a value, the text is displayed. If no text is entered, the value is displayed.
Checkbox group and radio button group widgets display options for application users to select items to set a
filter, for example.
The checkbox group or radio button group widget has the following specific properties:
Display Option Vertical Option, Horizontal Option Specifies the display options.
Label Text on, off, String If you select the option, you can enter a
label text for the widget.
Style Properties
Value list of values With the value property, you can add values, change the order of values, or remove them.
Each value has the following properties:
• value
• text (optional)
If you enter a text for a value, the text is displayed. If no text is entered, the value is displayed.
A tab strip has two tabs by default. As an application designer, you can add more tabs to a tab strip and insert
any widgets into different tabs according to your needs. When you move, copy, delete, show, or hide a tab strip,
all the tabs and widgets in the tab strip will change accordingly.
After adding a tab strip widget to the canvas or popup, as an application designer you can write scripts to set
the tab's text or acquire the tab's key or text. For example:
Sample Code
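A script along these lines reads the active tab's key and sets that tab's text. This is only a sketch: the widget name TabStrip_1 is an assumption, and the exact method names should be checked against the Analytics Designer API Reference:

```js
// Sketch: acquire the selected tab's key, then set that tab's text.
var key = TabStrip_1.getSelectedKey();
TabStrip_1.getTab(key).setText("Overview");
```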
For more information about the tab strip APIs, refer to Analytics Designer API Reference.
You can use custom widgets, which extend the functionalities of SAP Analytics Cloud, analytics designer and
complement the standard palette of widgets according to your needs.
Note
Custom widgets work in Google Chrome and Microsoft Edge (version 79 and higher) only.
As a developer you can build your own widgets in addition to the widgets delivered in SAP Analytics Cloud,
analytics designer. For more information, see SAP Analytics Cloud Custom Widget Developer Guide.
Follow these steps to upload a custom widget to SAP Analytics Cloud, analytics designer so that you can use it like other widgets in your analytic application.
Prerequisites
To create and upload custom widgets, the Create permission for Custom Widget must be selected in the role
that you are assigned.
Procedure
1. On the Analytic Applications start page, choose the Custom Widgets tab.
2. Select (Create).
3. In the Upload File dialog, choose Select File.
4. Select the custom widget file, for example box.json.
Results
When you want to insert the custom widget into your analytic application, you can find it via (Add)
Custom Widgets.
Next Steps
Note that when you add a custom widget that differs in minor version from the one present in analytics designer, it replaces the present one. For example, if a custom widget of version 1.5.0 is present in analytics designer, adding either version 1.4.0 or 1.6.0 replaces version 1.5.0.
Exported archives of analytic applications by default contain all referenced custom widgets.
If you use a custom widget in an analytic application and the application is transported between different SAP
Analytics Cloud systems, the application archive automatically includes the referenced custom widget (as it
works for models as well). In this way only one archive needs to be imported to the target system to get the
application up and running.
You can see which applications are actively using a custom widget when you try to delete it from an SAP Analytics Cloud system in the Custom Widgets tab. The Delete Custom Widgets dialog lists all related applications and warns you that deleting the custom widget will break those applications.
Example
In the following example, the first custom widget is used by an application named Ticker Test, the second
one isn't used in other applications, and the third one is used in two applications for which you have no authorization.
Note
If you choose to ignore the warning and delete the custom widget regardless of active use and later realize
that this was a mistake and re-upload the custom widget, the references between applications and this
custom widget will stay lost.
This means that the export of the custom widget with the analytic application actively using it and the
display of related applications in the dialog won't work anymore. In this case, you'll see a warning when
opening an affected application:
In your analytic application, you can style a widget by changing fonts, colors, axis scaling for chart and so on,
or change its size and set its position in its parent container. You can also set the size of the canvas as fixed or
dynamic.
The styling option is available in both stories and analytic applications. Only the styling options specific to
analytics designer are listed here.
You can define the size of a widget and set its position in its parent container in the Size and Position section of
the Styling panel. The parent container can be a popup, canvas, panel or tab strip.
Left, Right, Top and Bottom define the widget's distance to the four borders of its parent container. The margin values can be defined as auto, pixel, or percent. When a widget is smaller than its parent container, the sum of its left margin, right margin, and width should be no greater than the width of the parent container. However, when the parent container is a panel or tab strip, the widget's width alone can be greater than the width of the parent container. The same applies to the top margin, bottom margin, and height of the widget.
You can define the dynamic layout of your canvas under the Canvas Size section of its Styling panel.
When you select Dynamic, Canvas Content Layout displays, from which:
• If you select Center Aligned, all contents in the canvas will always be centered horizontally at runtime.
Note
Before wave 2019.20, the canvas content layout of an analytic application is by default center aligned.
Now it's by default fit to device.
In the Styling panel of a widget, under Quick Menus you can configure whether the quick menu bar itself and the options in it are visible at runtime. The quick menu here refers to both the quick menu of a widget and that of data points in a table or chart.
You can align the positions of multiple widgets in design time to improve the visual appeal of your analytic
applications.
To align widgets, first select all the widgets that you want to reposition. Then, in the context menu select
(Align Widgets) and select one of the following options according to your needs:
• Align Left: Align the left edge of each selected widget to the leftmost edge of all selected widgets.
• Align Right: Align the right edge of each selected widget to the rightmost edge of all selected widgets.
• Align Top: Align the top edge of each selected widget to the topmost edge of all selected widgets.
• Align Bottom: Align the bottom edge of each selected widget to the bottommost edge of all selected
widgets.
• Align Center: Center the selected widgets horizontally, ranging from the leftmost edge to rightmost one of
all.
• Align Middle: Center the selected widgets vertically, ranging from the topmost edge to bottommost one of
all.
• Distribute Horizontally: Align the selected widgets equidistantly in horizontal direction.
• Distribute Vertically: Align the selected widgets equidistantly in vertical direction.
The height or width of the widget will not change whether you define it as auto, pixel or percentage values.
When you align widgets, the settings of the widgets' left, right, top or bottom margins may change under
different conditions. Take aligning widgets to the leftmost edge as an example:
• If the left margins of one or more widgets are defined in pixels, or the left margins of all widgets are defined as auto, after alignment the units of all widgets' left margins will be changed to pixels.
• If no widget's left margin is defined in pixels and the left margins of one or more widgets are defined in percentage, after aligning the left edges the units of all widgets' left margins will be changed to percentage.
This rule also applies when you align widgets to the right, top, and bottom.
As an application designer you can use a combination of panels and flow layout panels, which are widget
containers, to design analytic applications that can adapt to different screen sizes.
You can use a panel or flow layout panel as a container to group any type of widgets together, so that when a
panel is moved, copied, deleted, shown, or hidden, all the widgets in the panel will follow it. In addition, you can
nest one panel or flow layout panel inside another.
Compared with a common panel, a flow layout panel provides a responsive layout. For example, if a screen is too small to fit two widgets in one row, the widget on the right flows to the next row automatically.
Note
Widgets that are set to invisible at runtime don’t take up any space inside a flow layout panel.
To be more specific:
• If a widget’s width is set as static, such as pixel, the widget will flow to the next row if the screen size is too
small for it to fit in.
• If a widget’s width is set to percentage (recommended for mobile), when the screen size gets smaller, the
widget’s size will shrink as well by its percentage until its minimal width is reached. After that it will flow to
the next row.
Note
Layout APIs such as setLeft, setTop, setRight and setBottom won’t take effect in flow layout panels.
You can drag and drop any widget to the panel or flow layout panel directly on the canvas or Outline. For flow
layout panels, you can multi-select widgets and then from (More Actions) select Group in Flow Layout Panel
to place them all at once. You can also set the width or height of all the selected widgets to percentage from the
menu.
To let application users show or hide a panel or flow layout panel, you can leverage the script APIs
isVisible() and setVisible(). For detailed information, refer to Analytics Designer API Reference.
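For example, a toggle button's onClick event could flip a panel's visibility using these two APIs (a sketch; the widget name Panel_1 is an assumption):

```js
// onClick of a toggle button (sketch): show Panel_1 if it is
// hidden, hide it if it is shown.
Panel_1.setVisible(!Panel_1.isVisible());
```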
You can also configure responsive rules in the Builder panel of a flow layout panel by setting breakpoints. There you can define how the width and height of certain widgets are adjusted, as well as which widgets are hidden at runtime, when the screen width is smaller than a certain threshold.
You have the flexibility to keep some contents at a fixed position and others in responsive layout, such as
always leaving a header on the top or a navigation panel on the right of the application.
Besides, you can nest common panels in a flow layout panel to create more sophisticated layouts.
Themes provide you with an efficient and reusable way to define the styles of your analytic applications. You
can create a customized theme, use and modify it, and leverage related API to allow application users to
change theme at runtime.
A theme is a one-stop solution to your enterprise's branding. It provides a consistent look and feel that
complies with the corporate standard, while differentiating itself from hundreds of other applications.
You can create a theme to store your favorite styles for canvas, popups and different types of widgets, which
can also be reused in other analytic applications. Or you can choose an existing theme in your file repository to
instantly change the application's look and feel.
Note
If a widget, canvas, or popup in your analytic application doesn't have the corresponding styling after you apply the theme, the styling defined in the Styling panel may be overwriting the overlapping settings defined in the theme preferences. In that case, go to the Styling panel and select (Restore to Theme Preferences) so that the theme can be fully applied to the application.
Prerequisites
To create a theme, ensure that you have the object permission. For example, the creator and admin roles have
access to themes.
Procedure
Results
A customized theme is created and applied to your analytic application. You can also reuse it in any other
applications.
You can directly use an existing theme to define your analytic applications or modify and override the existing
theme preferences according to your needs.
Prerequisites
To use a theme, ensure that you have the object permission. For example, the creator and admin roles have
access to themes.
If the theme is created by another user, make sure you have access to the theme in the Files repository.
1. Select (Theme) in the Display section of the toolbar and choose a theme.
If the theme you want doesn't appear in the dropdown list, select Browse for More Themes… to choose
from the Files repository.
Note
The styling settings that override the theme preferences will only be applied to the current application.
Even if you change to another theme, what you defined in the Styling panel will be kept.
To restore to the current theme's default settings, select (Restore to Theme Preferences) on the
upper-right corner of the Styling panel.
3. To modify the preferences of the existing themes:
a. Select (Preferences) to the right of the theme in the Theme dropdown list.
b. Change the theme settings according to your needs.
c. Save the changes:
• If you want to save these changes in the current theme, select Save, then Save and Apply.
Note
Theme updates will not only be applied to your current theme, but also to all the analytic
applications using this theme.
• If you want to save these changes to a new theme, select Save As.
4. If you want to delete the theme, go to Files, select it and then (Delete).
Results
• In edit mode, you can decide whether to apply the updated theme, or keep the previous theme by saving as
a new theme.
• In view mode, the application will display the latest theme settings.
• In edit mode, you can decide whether to apply the updated theme, or keep the previous theme by saving as
a new theme.
• In view mode, the application will display the Light Theme (Default) setting.
You can leverage the theme-related API to allow application users to flexibly change the theme of the
application at runtime.
Code Syntax
Application.setTheme()
To specify the theme in the script, press Ctrl + Space to open the theme selector. After you choose a
theme, the corresponding theme ID will be displayed in the syntax. If you choose not to define a theme in the
syntax, the default light theme will be applied.
Note
Currently, calling the setTheme() API from a popup doesn't affect the theme settings in it. To solve this,
you can add a panel to the popup and include all the widgets in it.
Here's an example that shows how to leverage the API to allow application users to switch between different
themes for your application.
Example
First add a dropdown widget Theme_Dropdown to the canvas. In its Builder panel, fill the ID column with the theme IDs and the Text column with the corresponding theme names.
Sample Code
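In the onSelect event of the dropdown, the selected theme ID can then be passed to Application.setTheme(). A minimal sketch, using the widget named above:

```js
// onSelect event of Theme_Dropdown (sketch): the ID column holds theme
// IDs, so the selected key can be passed directly to setTheme().
Application.setTheme(Theme_Dropdown.getSelectedKey());
```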
Now, users can select from the dropdown list to change to a different theme for the application.
At runtime, you can apply a specific theme by directly adding the theme ID to the application URL.
Based on the URL pattern you're working with, use either an ampersand & or a semicolon ; to separate URL
parameters.
• If the URL pattern is /bo/application/, use a semicolon ; as the URL parameter separator.
https://<TENANT>/sap/fpa/ui/bo/application/4FA12EC04829FDC682399273A7A3A0C?
mode=view;themeId=D991AAEEC518947626D749EDFF57D64C
• If the URL pattern is /app.html#/analyticapp, use an ampersand & as the URL parameter separator.
https://<Tenant>/sap/fpa/ui/app.html#/
analyticapp&/aa/2F602684051BC2914F0661E45399B0A3/?
mode=view&themeId=BA20000457CA6331EC089B6775F5DB51
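The separator rule can be expressed as a small helper function. This is our own illustration, not an SAP API, and it assumes the URL already contains at least one parameter such as mode=view:

```javascript
// Append a themeId URL parameter, choosing the separator according to the
// URL pattern: /bo/application/ URLs use ";", app.html#/analyticapp uses "&".
// Illustrative only; assumes a parameter is already present in the URL.
function appendThemeId(url, themeId) {
  var sep = url.indexOf("/bo/application/") !== -1 ? ";" : "&";
  return url + sep + "themeId=" + themeId;
}
```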
As an application designer, you can define multiple CSS classes in the CSS editor either as global default or per
widget, which brings more flexibility to the styling in your analytic applications. The styling settings conform
to CSS standards and are no longer limited to the existing ones provided by SAP Analytics Cloud, analytics
designer.
Click through the interactive tutorial illustrating how to change and customize your application theme in step-by-step instructions (2:00 min); all tutorials are captioned exclusively in English.
You can define a CSS class in the CSS editor and apply it to individual widgets, popups or the whole application.
Procedure
A CSS class definition is usually composed of a custom class name, an element selector, a property, and a value.
.my-theme-1 .sap-custom-button:hover {
background-color: #00ff00;
border-color: #0000ff;
}
/**
 * In this example, my-theme-1 is a custom class name, sap-custom-button is
 * an element selector, and :hover is a pseudo-class.
 * The !important syntax is NOT supported.
 */
Note
An element selector must come from the supported classes shown in the editor.
4. Apply the CSS class you defined in the CSS editor to individual widgets, popups or the whole application:
• To set the CSS class as the application’s default CSS class, go to the Styling panel of the canvas first. In
the Application Settings section, enter the custom class name under Global Default Class Name.
Then, all widgets and popups in the application apply the corresponding CSS settings.
• To set the CSS class for a specific popup or widget, go to its Styling panel. For example, you’ve defined
CSS class named my-button-1 for button-specific settings. Under Analytics Designer Properties, you
just need to enter my-button-1 for CSS Class Name. Then the button applies the corresponding CSS
settings.
Note
When both CSS classes are defined, the widget-specific one overwrites the global default one.
Results
At design time, you can see that your application immediately applies the styling after you've defined and
assigned the CSS class.
Later to let application users set CSS styling at application runtime, you can use script APIs such as
setCssClass() and getCssClass(). For more information, refer to Analytics Designer API Reference.
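For example, a script could assign the class defined above at runtime (a sketch; the button name Button_1 is an assumption, and my-button-1 is the class from the earlier example):

```js
// onClick (sketch): assign the CSS class my-button-1 to a button at runtime.
Button_1.setCssClass("my-button-1");
```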
Note
The CSS you've defined for a widget can't be carried to the new window when you create a story from it or
open it in explorer.
Note
Some CSS you've defined for a table might be different if its optimized presentation mode is enabled.
You can enable theme CSS for your application and load it into the theme that can be applied across
applications. The CSS settings overwrite the existing theme and settings in the Styling panel of any widget
or popup.
Procedure
1. From the Display section of the toolbar, select (Theme). Hover over the theme you want to modify, and
Results
The theme can then be reused in other applications. If you modify CSS and want to load the latest CSS, select
Reload Application CSS in Theme Preferences.
Note
The priority of styling methods from highest to lowest is: widget styling defined in the CSS editor, widget
styling from theme CSS, widget styling defined in the styling panel and widget styling defined in the theme
preferences.
You can schedule publication to distribute analytic application views to selected users or user groups via email
at a predefined time or with frequency.
Prerequisites
To be able to generate PDF for the publication, you must add an Export To PDF technical object to your
application.
You can create a single or recurring schedule for publication of your application. Later you can view, modify or
delete the schedule in Calendar.
Only the publication settings and configurations specific for the Analytics Designer are listed.
There are two ways of triggering the PDF export when scheduling a publication for an analytic application:
• Automatically generate the PDF export. This is the default setting: the PDF is generated automatically when the rendering of the application is considered complete. In this case, you don't have to write scripts. An application is considered fully rendered when the last line of script in the onInitialization event is executed. Any event scripting that happens after that is bypassed.
• Manually generate the PDF export via API. This is the advanced setting that we recommend if you want to control the timing of the PDF export more accurately and not miss any relevant event scripting that happens after the onInitialization event is completed.
In this case, we recommend you write the following script as the last line of code for the
onInitialization event of the Canvas:
if (Application.isRunBySchedulePublication()) {
  Scheduling.publish();
}
The two schedule publication settings can be configured in the Styling panel of the Canvas.
Procedure
1. Use either of the two ways to schedule publication for an analytic application:
• Go to (Files), select the analytic application that you want to schedule and from the toolbar, select
• In the application’s run time (Exit Fullscreen), from the toolbar, select Tools and choose
Schedule Publication.
Then you can add one or more script variables for application users to customize, as well as define a
default value for each of them.
Only the variables that are exposed via URL parameters and can be configured via URL will be included.
8. Add one or more views in the Distribution part to schedule and share customized views or bookmarks of the analytic application. You can specify the same or different SAP Analytics Cloud recipients or non-SAP Analytics Cloud recipients for each view.
Note
For each view, you'll have up to 60 seconds to generate the PDF export. If the time exceeds that limit, scheduling will skip the current task and continue to publish the next view.
9. (Optional) Edit the name of the PDF file from the File Name field.
10. Click PDF Settings to configure the PDF settings such as the paper size and choose whether to include
appendix or comment or not.
11. Click OK.
To check the status of the publication task, in the Schedule Publication panel of Calendar, click the (Detailed status) button.
When you embed an analytic application in a host HTML page, or embed a web page in an analytic application through the web page widget, you can follow this guide to enable message communication between the host and embedded web pages.
Using the posting message related APIs, you can realize either of the following scenarios:
Before embedding an analytic application via an iframe in the host HTML page, you first need to make sure the host HTML page is added as a trusted origin in System Administration App Integration Trusted Origins . Then you can trigger bidirectional communication between the host HTML page and the analytic application using the following functions.
the following functions.
The postMessage event posts messages from the analytic application to the host HTML page. When an end user triggers a callback function in the analytic application, it sends data to the parent page that hosts the iframe, or to the top-level page of a specific target origin when there are multiple levels of pages embedded in one another.
Code Syntax
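On the host HTML page side, a message posted by the embedded application can be received with the standard browser message event. This is a generic browser-API sketch; the tenant origin is a placeholder:

```js
// Host HTML page: receive messages posted by the embedded analytic application.
window.addEventListener("message", function (event) {
  // Always verify the sender's origin before using the data.
  if (event.origin !== "https://<TENANT>") {
    return;
  }
  console.log("Message from analytic application:", event.data);
});
```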
The onPostMessageReceived event is to handle messages sent from the host HTML page to the analytic
application. It can also handle messages from an HTML page embedded in an analytic application.
Caution
We advise you to always check the origin when receiving an event-triggered message, because a malicious
site can change the location of the window and therefore intercept the data sent via iFrame's postMessage
without your knowledge.
In the current scenario, the parent window which hosts the iFrame can post messages to the analytic
application. The messages are then retrieved by the application and trigger its changes accordingly such as
updating some input data.
Example
In this example, you'd like to embed your analytic application in a host HTML page, whose URL is http://
localhost:8080.
Sample Code
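The host page could embed the application and post a message to it along these lines. The iframe URL, element ID, and message text are placeholders, not values from the product:

```html
<!-- Host page at http://localhost:8080 (sketch): embed the application
     and post a message to it once the iframe has loaded. -->
<iframe id="sacApp" src="https://<TENANT>/sap/fpa/ui/bo/application/<APP_ID>?mode=view"></iframe>
<script>
  var frame = document.getElementById("sacApp");
  frame.addEventListener("load", function () {
    frame.contentWindow.postMessage("Hello from the host page", "https://<TENANT>");
  });
</script>
```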
Then you want to allow end users to view the message received from the host HTML page in a text box of
the embedded application. Write the script below for the onPostMessageReceived event of the canvas:
Sample Code
if (origin === "http://localhost:8080") {
  Text_ReceivedMessage.applyText(message);
}
You can trigger bidirectional communication between the host analytic application and the web application embedded via the web page widget.
You can use the following API to allow for messages from the host analytic application posted to the embedded
web application:
Sample Code
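A sketch of such a script, assuming a web page widget named WebPage_1 and a hypothetical message string; check the exact signature in the Analytics Designer API Reference:

```js
// Post a message from the host analytic application to the embedded web
// application. targetOrigin is optional; if omitted, the URL configured in
// the web page widget is used as the target origin.
WebPage_1.postMessage("refresh-request", "http://localhost:8080");
```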
Note
The parameter targetOrigin is optional. If it's left empty, the URL defined in the web page widget will be
taken as the target origin by default.
The event for handling messages sent from the host analytic application depends on the type of the embedded
application:
• If the embedded application is an SAP Analytics Cloud application, it handles the received messages via the onPostMessageReceived event.
• If the embedded application is any other web application, it can handle the received messages with the standard browser event listener window.addEventListener("message", ...).
The host analytic application can use the onPostMessageReceived event to handle messages from the
embedded web application.
• If the embedded application is an SAP Analytics Cloud application, use the postMessage event to post
messages.
Because several users can work on an application at the same time, there could be conflicts when the
application is saved. In this case you'll receive system notifications to inform you how to proceed.
Note
You can see the notifications only if you use the supported web browsers. As of today, they are Google
Chrome and Microsoft Edge.
If you edit an application simultaneously with one or more other users at design time, you'll get a system notification telling you that other users are editing the analytic application. This notification also provides a link to immediately save the application.
If another user saves the application while you're editing, you'll get a system notification that the application has been updated by another user. You can save a copy of the application or open the latest revision.
Note
You can't directly save the application or use the corresponding keyboard shortcuts.
Note
If you've made changes to the application and selected Open Latest Revision, all your changes will be
lost.
You can continue to edit the application based on the newest version.
• If you ignore this notification and close it, you are able to continue the work on your application. You may,
however, only save a copy of the application.
Follow the tips to help your analytic applications run smoothly with optimized performance.
Top Tips
To significantly improve the analytic application performance at startup and save system resources, first consider the following four tips.
Loading invisible widgets in the background postpones their initialization and thus increases your analytic application's performance. To do this, from the toolbar select (Edit Analytic Application) Analytic Application Settings and then Load invisible widgets in the background in the dialog. Invisible widgets aren't only widgets set to be hidden at runtime but also those inside invisible containers or on inactive tabs in tab strips, pages in page books, and popups.
You can pause the refresh of specific widgets in your analytic application. For example, you can
use setRefreshPaused API to pause the initial refresh of a chart or table until it’s updated by
setDimensionFilter API in the onInitialization event. You can select Refresh Active Widgets Only from
the Builder panel to pause the refresh of invisible widgets except when they are called by fetch data APIs.
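The pause-and-resume pattern described above can be sketched like this; the widget name Chart_1, the dimension "Region", and the member "EMEA" are assumptions:

```js
// onInitialization (sketch): pause the chart's initial refresh, apply a
// filter, then resume so that only one backend query is sent.
Chart_1.getDataSource().setRefreshPaused(true);
Chart_1.getDataSource().setDimensionFilter("Region", "EMEA");
Chart_1.getDataSource().setRefreshPaused(false);
```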
For more detailed information, refer to Use Pause Refresh Options and APIs [page 1742].
You can deselect Planning Enabled in the Builder panel of a table using a planning model if the table isn't used for planning at all. You can also leverage the getPlanning().setEnabled API to enable planning at runtime whenever it's needed.
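For example (a sketch; the table name Table_1 is an assumption):

```js
// Enable planning on the table only at the moment the user needs it,
// for example in the onClick event of a "Start planning" button.
Table_1.getPlanning().setEnabled(true);
```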
For more detailed information, refer to Use Planning Enabled Property and setEnabled API [page 1719].
Use a MemberInfo object in setDimensionFilter() API, which contains member description in addition to
member ID, so that there’s no roundtrip to the backend to fetch the member’s description.
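A sketch of such a call, with an assumed table name and hypothetical member ID and description:

```js
// Passing a MemberInfo object (id plus description) avoids a backend
// roundtrip to fetch the member's description.
Table_1.getDataSource().setDimensionFilter("Location",
  { id: "CT1", description: "Quebec" });
```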
For more detailed information, see Use MemberInfo Object in setDimensionFilter() API [page 1823].
When using the setVariableValue() API on models based on an SAP BW or SAP BW/4HANA live data connection, you can set loadDescriptions to false to avoid a description request for each member value.
Design Tips
You can improve your analytic application's performance by design. You'll also receive related system
reminders when building your application.
Expanded input controls are convenient because they allow you to quickly search for or select members.
However, an expanded input control refreshes often and may affect the application's performance.
When you have an input control with high cardinality dimension, for example, with 100 dimension members or
more, leaving it expanded can affect your application's performance.
Therefore, we recommend collapsing input controls. When an input control is collapsed, you must select it to
display all the members.
When auto top N is used, all the data are transferred from the system, which takes a while to render the top N values. However, when you set top N, the data are sorted and the top N values are selected in the system first. Only the top N values are sent, and they are rendered immediately.
To get a good quality background image, you could use a compressed web image or an SVG image.
Related Information
You can enable optimized view mode in your analytic application to make the contents load faster at runtime.
Optimized view mode includes several usability improvements, and it can also reduce application loading times
in certain scenarios. However, not all features are available within this mode.
Only the specifics for SAP Analytics Cloud, analytics designer are mentioned here.
Optimized view mode isn't enabled by default. You need to enable it for each analytic application individually.
Note
Optimized view mode only applies to runtime, while you can enable it only at design time.
You'll see a dialog notifying you that the analytic application is being optimized. The application will be saved
and reloaded when you've successfully enabled optimized view mode.
If the application can't be optimized or some items can't be included, you'll see a warning message. Review the
details and, if possible, resolve the issues.
Note
After you enable or disable Optimized View Mode for an analytic application that’s already been scheduled
for a recurring publication, in Calendar the file settings for this publication are still from the previous
mode. In such cases, we recommend deleting the scheduled publication and rescheduling one after the
conversion.
Unsupported Features
Currently in SAP Analytics Cloud, analytics designer, the following features aren't yet supported in optimized
view mode. For other restrictions that exist in both stories and analytic applications, refer to Optimized Story
Experience Restrictions [page 1160].
Note
Undo or redo operations for planning workflows at runtime aren't yet supported.
Feature Restrictions
Note the following respective restrictions for features in SAP Analytics Cloud, analytics designer, which are
different from classic view mode:
• Data access language: Measure and dimension descriptions are based on the data access language used
by the analytic application when it was optimized, but not when it’s opened.
• Theme: Some theme settings might not be applied to these widgets: filter line, chart and input control.
• Export: Exporting widgets to CSV or tables to XLSX only supports Point of View as scope, while All isn’t
supported.
• Filter line:
• In group filter mode, dynamic time range filters don't align with the system time when it’s changed.
• Filters in group filter mode aren't shown in the appendix of the PDF export triggered by the table’s
context menu.
• Data refresh: The onResultChanged event isn't triggered when the chart or table is invisible and set
to Refresh Active Widgets Only. When the widget becomes visible, the necessary query is sent and
onResultChanged event is triggered.
• Table navigation panel: On the table navigation panel, which is triggered by openNavigationPanel()
API, the option to deselect all account dimensions isn't available.
• APIs:
• When triggered by setVariableValue API, the variable dialog doesn’t display member description
for charts and tables.
• The following APIs might not work well for tables if the
hierarchical dimensions used in the script haven’t been used as
the dimensions in the tables: Table.getDataSource().setDimensionFilter(),
Table.getDataSource().copyDimensionFilterFrom(),
Table.getDataSource().getDimensionFilters(),
Table.getDataSource().getHierarchy(),
Table.getDataSource().setHierarchyLevel().
• At runtime, setting multiple values on tables and R visualizations via API such as
setDimensionFilter changes the selection mode of a filter even if it was set to Single Selection
at design time.
• Scripting: Value help for dimensions, measures, and all their members isn't supported for geo map related scripting, but you can manually input the parameters.
Geo map (not with point of interest/feature layer) • APIs that work in classic view mode:
• GeoMap.openInNewStory()
• GeoMap.setContextMenuVisible()
• GeoMap.setQuickActionsVisibility()
• APIs that only work in optimized view mode but don't work with a data source associated with geo map layers:
• GeoMap.getLayer().getDataSource().getMemberDisplayMode()
• GeoMap.getLayer().getDataSource().setMemberDisplayMode()
• GeoMap.getLayer().getDataSource().getDataExplorer()
• GeoMap.getLayer().getDataSource().getDimensionProperties()
• GeoMap.getLayer().getDataSource().setHierarchy()
• GeoMap.getLayer().getDataSource().expandNode()
• GeoMap.getLayer().getDataSource().collapseNode()
Chart
• getEffectiveAxisScale()
Background loading of widgets can improve the startup performance of analytic applications that contain invisible widgets. By default, when you start your application, all widgets (whether set as visible or invisible) are initialized before the application starts.
As an application designer, you can change this default behavior and activate the loading of invisible widgets in
the background. This way, all widgets that are initially visible are initialized at application start and displayed to
the application user. In a second step, the invisible widgets are initialized in the background to be available if the
application user works in the application and changes the visibility of the widgets.
Note
When we talk about invisible widgets, we mean widgets with the property Show this item at view time switched off, but also widgets inside invisible panels or on tabs in tabstrips that aren't active at startup.
The background loading increases the perceived startup performance of the application as the startup screen
appears faster.
To improve the performance and change the standard behavior, in the Edit Analytic Application ( ) menu choose Analytic Application Settings Load Invisible Widgets on Initialization .
For testing purposes (to see which mode works better for a particular application), or if you want to enable the application user to overwrite your default setting (if the mode doesn't match the application user's expectations), the application user can use the URL parameter loadInvisibleWidgets. The URL parameter has two possible values:
• loadInvisibleWidgets=onInitialization
This forces the classical behavior and loads all widgets on initialization before the widget shows up.
• loadInvisibleWidgets=inBackground
This forces the background loading of invisible widgets after the initial visible widgets have shown up for
the application user.
If the mode Load Invisible Widgets in Background is used, it's possible that a script tries to access a widget that doesn't exist at that time. This is especially true for the onInitialization script. Therefore, some best practices help you realize the full potential of the optimization:
In SAP Analytics Cloud, analytics designer you can use the widget action Always initialize on startup to
influence the performance of your analytic application if the application runs using background loading.
In the widget's styling panel you can set the action Always initialize on startup. This allows you for example,
to access widgets that are used in the onInitialization script without the need to wait for these widgets to be
initialized in background.
If you set a widget to Always initialize on startup, the widget will be initialized together with all the visible widgets if background loading has been enabled. The widget initialization on startup takes place before the onInitialization script is performed. This brings a performance advantage if the widgets are accessed while those scripts (or any other script that is started) are performed and before the background loading has finished.
Note
• Setting a widget to Always initialize on startup has no effect if the widget is embedded in a container
that is invisible and Always initialize on startup on the container is not set. So if you use this function for
widgets that are embedded in a container, please make sure that either the container itself is visible at
runtime or you have enabled Always initialize on startup for the container.
• Always initialize on startup doesn’t work on widgets in popups. To use this option, move the widget from
a popup to the canvas.
• If the widget for which you set Always initialize on startup is used as a data pool only and never becomes visible at runtime, SAP strongly recommends placing this widget directly on the canvas instead of embedding it in a container.
• Please be aware that in the context of background loading "widget is visible" means the following:
• You have set the widget action Show this item at view time in the widget's styling panel.
You can improve the performance in terms of saving time, round trips, and memory by merging backend calls
for multiple charts within an analytic application.
To activate the query merge for SAP BW queries in your application, select (Edit Analytic Application)
Query Settings . Switch on Enable Query Merge.
Note
To activate the merge for SAP HANA calculation views in your application, select (Edit Analytic
Application) Query Settings and switch on Enable Query Batching. Here you can also determine the
number of queries to be merged (at least 4 and at most 20).
In general, charts within an application need to have the same query and the same variables as a basis in order
to merge the queries.
• Queries contain the same dimensions with the same hierarchies and drill.
• Queries use the same sorting and ranking, but don't have any filters.
• The application does not have filters defined inside the threshold panel.
• The widget does not have a local filter on a dimension that is also used in a restriction in SAP BW.
• Queries are part of the same receiver group if they use linked analysis.
• The widget does not have a local filter on a member of the secondary structure of a query.
• The widget filter is not defined as complex tuple or range filter.
• Presentation types are the same.
• Zero Suppression for the query is not active when any dimension is selected in the chart.
Instead of sending single queries for each chart widget, you can combine requests with each other. The
following chart types support the query merge:
When you view your analytic or planning application in SAP Analytics Cloud, the toolbar helps you to navigate
to the functions that are most important to you in your current workflow.
At first you don't see the toolbar, as the application is opened in fullscreen mode. However, when you hover the mouse cursor over the top of the page, you can access a condensed version of the toolbar. To switch to view mode, click the button in the toolbar to leave fullscreen. In view mode the toolbar is fixed and always displayed in the application in its full version, depending on the application's context.
Note
Your choice of menu options in the toolbar changes depending on the type of application and the data it
contains (analytic or planning application). The most important functions can be reached by quick access
buttons.
Functions related to planning applications such as Publish Data and Version Management are only shown if
there is a planning-enabled table available in the application. If not, the functions are hidden. An exception
for this is a planning application that is based on a BPC model: In this case only Publish Data is displayed.
• Edit
This section contains the following functions:
Function Description
• Tools
This section contains the following functions:
Predictive Forecast If this option is enabled, you can select a single cell that
can be edited, to start a predictive time series forecast.
For more information, please see Work with Predictive
Time Series Forecast in Tables in Analytic Applications
[page 1972].
Edit Prompts If this option is enabled, you can filter application data at the data source level by setting application variables that you want to display in the application. Charts or tables that are built from that data source will get updated based on the details you entered.
• Distribute Values
For more information, see Assign and Distribute Values with the Planning Panel [page 2233].
• Execute Allocation Process
Version History Use the version history function to quickly undo or redo
private or public version changes.
Version Management With this option you can copy, publish, delete and share
public and private versions. For more information, see
About the Version Management Panel [page 2174].
The Value Lock Management function can be found in the context menu of the table. Use this option
to create value locks so that data in the selected cell can't be changed or edited. This option is only
displayed if the table is based on a planning model that supports frontend value locks. For more
information, see About Value Lock Management [page 2206].
Note
The options Version History and Version Management display a check symbol if one of them is selected.
• Display
This section contains the following functions:
Function Description
Note
You can only view and create comments on a widget
or a cell of a table that is based on a planning model.
• For planning applications, you'll find the Publish Data button prominently displayed in the toolbar. This button lets you quickly publish or revert your edits to public versions.
You can directly view your analytic application in embed mode in Safari on an iPad with an optimal user
experience.
Note
Currently, for viewing analytic applications in Safari on iPad, only embed mode is supported. The URL to access the application should be composed with the embed parameter appended. For more information, see Use Application URL [page 1965].
Prerequisites
First, the application should be marked as mobile enabled. To do this, at design time go from File in the toolbar to (Edit Analytic Application) Analytic Application Details and make sure that Enable mobile support is on. If your application contains custom widgets, the custom widget developer needs to mark them as supported for mobile by setting supportsMobile to true in the contribution JSON file.
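For reference, the relevant flag in the custom widget's contribution JSON might look like this (a minimal fragment; the id value is hypothetical and other required properties of the file are omitted):

```json
{
  "id": "com.example.mycustomwidget",
  "supportsMobile": true
}
```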
On your device to view the analytic application, open Settings, select Safari and turn off the two options
Prevent Cross-Site Tracking and Block All Cookies.
Also make sure that you've met the minimum requirements for iOS and iPad. For details, refer to Mobile
Requirements section in System Requirements and Technical Prerequisites [page 2723].
Restrictions
Currently in Safari on iPads, the following widgets in analytic applications aren't supported:
• geo map
• R visualization
• comment widget
For restrictions on SAP Analytics Cloud mobile app, see iOS Mobile App Feature Compatibility [page 131] and
Android Mobile App Feature Compatibility [page 148].
In theory, it returns false when you view the application on desktop or request a desktop website in Safari.
You can use the application URL to open and view the analytic application or share it with other application
users. The URL uses a fixed structure and you can add parameters to it.
You can get the application URL either from the Share Application dialog or by composing it yourself.
Note
For backward compatibility reasons, we don't recommend copying the URL directly from your browser address bar.
If you have the permission to share the application, you can get its URL by selecting it and choosing
Share . In the Share Application dialog, you can find the URL under Default Link.
Compose by Yourself
You can use the following structure to compose the application URL:
https://<TENANT>/sap/fpa/ui/tenants/<TENANT_ID>/bo/application/<APPLICATION_ID>
Replace <TENANT> with the public SAP Analytics Cloud URL for your tenant, <TENANT_ID> with your tenant ID
in lowercase letters, and <APPLICATION_ID> with the ID of the application you want to open.
• To find your tenant ID, go to System About . Your tenant ID is listed under System Name.
Note
Please convert your tenant ID to lowercase letters. Otherwise, the URL won't work correctly.
• To get your application ID, go to Files to find and open the application. Copy the application
ID from the URL. For example, in a URL https://.../aa/19F03902279139EC3896DB38A0F60557, the
application ID is 19F03902279139EC3896DB38A0F60557.
Display Parameters
You can add a parameter to the application URL to modify how an application is displayed.
Parameter Options
mode
Indicates the mode the application will open in. By default the application will open in edit mode if you haven't specified the URL parameter.
• edit opens the application in edit mode. You can see the main menu and toolbar. You can switch to present mode by selecting Run Application. If you don't have the edit permission, you'll have a read-only view of the application.
• present opens the application in presentation mode. The runtime toolbar will appear if you hover over the top of the application, while the main menu is removed.
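Putting the pieces together, a composed URL that opens an application directly in presentation mode could look like this (the tenant host and tenant ID are placeholders; the application ID is the example from above):

```
https://mytenant.sapanalytics.cloud/sap/fpa/ui/tenants/abc123/bo/application/19F03902279139EC3896DB38A0F60557?mode=present
```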
You can add variable parameters if your application uses a model with variable constraints.
Note
Variable parameters can be combined with display parameters. Any variables will be applied to the
application before filters are applied.
Note
If you have linked variables, your URL needs to include all the models with their respective variables.
For the list of all variable parameters, refer to SAP Analytics Cloud URL API Developer Guide.
You can add a parameter that defines a script variable's value, so that your application will open with the value
assigned to this variable. For more information, see Use Script Variables [page 1664].
In SAP Analytics Cloud, analytics designer you can create applications for planning your business data.
These planning applications support both manual and automated data entry and changes. You can enter the
planning data manually in the table (in cells or rows) or use data actions to enter data automatically.
For planning data, you have to use a planning model for the table in your application. The property Planning Enabled in the table's Builder panel then controls whether the table is input-enabled or not. Unlike in stories, the table is only input-enabled at runtime.
Enter Data
You can enter data by selecting the cell and entering values. After that, the changed cells are highlighted in the
table. For more information, see Entering Values in a Table [page 2192].
The table's More Actions menu lets you use planning-related functions, which are only displayed when the table uses a planning model. Here are the functions; some of them are only shown at runtime:
(Allocate) Distribute Values from the view time toolbar. For more information, see Assign and
Distribute Values with the Planning Panel [page 2233].
If you run your application in view mode, you find a fixed toolbar in its full version, depending on the application's context. This toolbar makes it easier for you to work with planning model data, as common functions like Version Management and Allocate are prominently displayed, and Publish Data lets you quickly publish your data changes or revert your edits to public versions.
Note
The menu options in the toolbar change depending on the type of application, analytic or planning, and the
data it contains.
You can find the following planning related functions in the toolbar if there's a planning-enabled table in the
application. If not, the functions are hidden.
• Publish Data
For more information, see Planning on Public Versions [page 2188].
Before leaving the analytic application, check whether to publish your data changes. You'll also be
reminded when leaving the application without publishing:
To not show this dialog at runtime, go at design time to the Styling panel of the canvas, and under Planning Settings deselect Remind of publishing all data changes before leaving.
• Version Management
For more information, see Create, Publish, and Manage Versions of Planning Data [page 2170].
• Version History
For more information, see Undo, Redo, and Revert Changes to Versions [page 2179].
• Allocate
There are two ways to allocate values:
• distributing a value using the planning panel
For more information, see Assign and Distribute Values with the Planning Panel [page 2233].
• running an allocation process
For more information, see Run Data Actions, Multi Actions, and Allocations [page 2260]. For
background information on allocations, please see Learn About Allocations [page 2513].
You can use a Data Action Trigger widget to automate planning functions for local planning models.
To add the widget, from the Insert section of the toolbar, select (Add) More Widgets Data
Action Trigger to open the Data Action Trigger configuration panel.
Note
A data action trigger cannot be executed at design time, but only at runtime.
You can add a Multi Action Trigger widget to automate a sequence of operations including data actions,
predictive steps, and version management steps on one or more versions and planning models.
To add the widget, from the Insert section of the toolbar, select (Add) More Widgets Multi
Action Trigger to open the Multi Action Trigger configuration panel.
Note
A multi action trigger cannot be executed at design time, but only at runtime.
You can use the BPC Planning Sequence Trigger widget to run planning sequences for BPC live data connection
models. In this way, application users can run a planning sequence defined in the BPC system, which can
contain one or more planning functions.
To add this widget, from the Insert section of the toolbar, select (Add) More Widgets BPC Planning
Sequence Trigger , and then select a BPC live connection model and planning sequence.
After adding the widget, also add a table with the same model to the application. The planning sequence is
applied to the table when application users select the BPC planning sequence at runtime.
Note
BPC Planning Sequence cannot be executed at design time, but only at runtime.
For more information, see Run Planning Sequences from BPC [page 2553].
In addition, as an application designer you can use related APIs to offer application users more options to interact with the planning sequence. For more information, see Work with Planning Applications [page 1967].
You can use a Value Driver Tree widget to visualize the value chain of your business and carry out driver-based
planning.
From the Insert section of the toolbar, select (Add) More Widgets Value Driver Tree to pick a
data source and add the widget to your application.
For detailed steps to set up and use value driver trees, see Set Up Value Driver Trees for Planning [page 2282].
Note
If you want to add a legacy value driver tree to your analytic application, you need to transform it first. For
details, see Transforming Legacy Value Driver Trees [page 2291].
Related Information
You use the calendar to view, create and manage your processes and tasks. By integrating applications in the
calendar, you can accomplish your due tasks and receive an overview of your data more conveniently.
To access analytic applications from the calendar, see Complete Your Tasks in Stories and Analytic Applications
[page 2644].
If you're an assignee and you access an application from a task, then you can submit, decline, view, or set the
progress of your task.
If you're a reviewer and you access an application from a task, then you can perform the Approve or Reject
actions within your application.
If a context has been defined for your task, the application will be opened with the analytic application filters
applied for you. You can find (Analytic Application Filter) in the view time toolbar, and the number on it
indicates the number of applied application-level filters.
When you select , you can see the filters under the toolbar. You can select it again to hide the filters.
The Controls panel informs you of the filters applied to any widget in the application. To open the panel, select
View Controls from the dropdown menu of the applied analytic application filter or (More Actions) menu of
a widget. Then, select any widget to view the applied filters on different levels.
When working with planning models in analytic applications, you can enable predictive time series forecasts on
the data within tables.
A predictive time series forecast runs an algorithm on historical data to predict future values for a specific
measure in a planning model.
To enable predictive forecasts at runtime, you as an application designer have to use a planning model for the table in the application. In addition, you need planning rights and a planning license to run a predictive time series forecast: either SAP Analytics Cloud for planning, standard edition, or SAP Analytics Cloud for planning, professional edition.
At runtime application users can select a single cell that can be edited and then (Predictive Forecast) in the
Tools section of the toolbar.
You can use APIs to enable predictive time series forecasts on the data within time series or line charts in your
analytic applications.
A predictive time series forecast runs an algorithm on historical data to predict future values for a specific
measure.
You can leverage the following API to let application users enable automatic forecast on a chart Chart_1:
Sample Code
Chart_1.getForecast().setType(ForecastType.Auto);
You can also customize the number of forecast periods via API. For more detailed information, refer to
Analytics Designer API Reference.
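As a sketch combining the call above with a custom number of forecast periods — the setForecastPeriods method name is an assumption here; verify the exact name and signature in the Analytics Designer API Reference:

```
// Enable automatic time series forecast on Chart_1 (API shown above).
Chart_1.getForecast().setType(ForecastType.Auto);
// Assumption: a method like this sets the number of forecast periods;
// check the Analytics Designer API Reference for the exact name.
Chart_1.getForecast().setForecastPeriods(4);
```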
Related Information
SAP Analytics Cloud provides powerful tools to let you explore and gain insights into your data. These tools are
used in both stories and analytic applications with some minor differences in usage. Only the usage specific to
analytics designer is described here.
Launch Explorer
When you run an analytic application, you can launch explorer for a chart or table to select different dimensions
and measures, experiment with filters and so on.
Note
Before running an application and initiating the explorer, you need to enable the explorer for the charts or
tables in the Builder panel.
• Save and run the application. Select a chart or table for data exploration. From (More Actions) select
(Open Explorer).
• Launch via script APIs that can also add additional dimensions and measures to the data exploration scope. For more information about related APIs, refer to Analytics Designer API Reference.
In the facet panel of the explorer, you can change dimensions and measures to generate charts or tables for
preview. You can also change chart types, show or hide elements and do sorting in tables, export the data to file
and so on in the explorer. For related information, you can refer to Explore the Data in a Story Shared with You
(Classic Story Experience) [page 1195].
To be able to save a chart or table to a new story directly from the explorer, select the option Open in New Story
in the section Quick Menus of the widget's Styling panel when designing the application. You'll then be able to
select the icon in the explorer's visualization area and choose Open in New Story.
As an application designer, you can add custom menu items to explorer's visualization area to let application
users apply the data exploration results to tables or charts, for example.
1. In the Scripting section of the Outline panel, select to the right of (Data Explorer Configuration) to
add a scripting object.
The Data Explorer Configuration panel is opened on the right side.
2. In this panel, add custom menu items in the Menu Settings.
3. Write script for the onMenuItemSelect() function of this DataExplorerConfiguration scripting
object to define the activities you want to trigger when selecting these menu items.
Sample Code
Say you want to apply the explorer results of Chart_1 to another chart Chart_2. Write the scripts below:
Note
Carrying over all the show/hide settings in the explorer or the original widget's styling settings isn't supported.
5. Select the icon in the explorer's visualization area and choose the menu item you’ve defined.
Finally, if you want to save the updated table or chart, you can bookmark the application or publish it to PDF.
Smart insights automatically discovers key insights, including correlations, exceptions, clusters, links and
predictions of data, which brings you straight to the results instead of requiring time-consuming manual data
exploration.
In analytic applications, smart insights is available only when explorer is enabled. Also make sure you've
selected Smart Insights in the Quick Menus section of the Styling panel of the corresponding chart or table
widgets.
In both application view and explorer view, you can trigger smart insights by selecting a data point in a chart or
a data cell in a table and then (Smart Insights) in its context menu.
Note
You cannot trigger smart insights for charts and tables in popups.
In analytics designer, smart discovery can only be accessed via script APIs. It runs a machine learning algorithm to help you explore your data in a specific context and uncover new or unknown relationships between columns within a dataset.
For more detailed information about the APIs, refer to the component SmartDiscoveryDimensionSettings
and SmartDiscoveryStructureSettings in Analytics Designer API Reference.
• Launching via user interaction events such as clicking a button. A dialog will pop up displaying the URL as a
hyperlink, which application users can click to launch smart discovery in a new browser page.
• Launching via selecting a widget with hyperlink property. You can create a smart discovery URL and use it
as a hyperlink in images, shapes and texts, so that application users can click the widget to launch smart
discovery in a new browser page.
For more detailed information about the APIs, refer to the component SmartDiscovery in Analytics Designer
API Reference.
Note
For smart discovery defined before version 2021.1: if you call the SmartDiscovery API and set setSettingsPanelVisible to true, you need to first enter a value in the new field Entity in the Smart Discovery configuration panel before you can successfully run it.
You can use script APIs to let application users use search to insight in your analytic applications, which is a
natural language query interface offering quick insights about the data.
Search to insight displays query results for indexed live HANA data models and acquired data models that you have access to, including any dimensions that either have no data access control or that you have data access control on.
Note
Currently search to insight can only identify and work with words in English.
• Simple mode – this is designed for application users who have little knowledge of the models and want to
perform searches simply by asking questions.
• Advanced mode – this is designed for application users who have some knowledge of the models and
dimensions behind the data and want to switch models and select dimensions according to their needs
when querying data.
You need to create a search to insight scripting object before application users can query data.
1. In the Scripting section of the Outline, select right next to (Search to Insight) to create a scripting
object.
The side panel Search to Insight opens, with the default name of the object SearchToInsight_1 shown. You
can change the name if you want to.
2. Under Models to Search, select Add Model to add one or more models for query.
Note
Only models supported by search to insight can be added to the Search to Insight component. For live
data models, only indexed live HANA data models are supported. Acquired data models don’t need to
be indexed manually. Indexing is automatically triggered for acquired data models when you perform a
query on the models for the first time.
Related APIs
After adding the scripting object to the canvas or popup, you can leverage the following APIs to open or close a
search to insight dialog:
Code Syntax
By using the applySearchToChart() API, you can apply the same questions from a search to a chart. You can also leverage the following variable-related APIs to save a variable value in a search to insight object and then apply it to a chart when calling applySearchToChart():
Code Syntax
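As an illustrative sketch of applySearchToChart() (the question text and chart name are hypothetical):

```
// Apply a natural language question to Chart_1 and make it visible.
SearchToInsight_1.applySearchToChart("top 5 products by sales", Chart_1);
Chart_1.setVisible(true);
```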
Example
In this example, you can design a simple application that enables application users to choose different
modes of search to insight according to the questions they enter.
First, in addition to the scripting object SearchToInsight_1, add an input field InputField_1, a button
Button_1 and a checkbox group CheckboxGroup_1 to the canvas as below:
Sample Code
Result: At runtime, application users can use search to insight simple mode by entering a question and selecting Search to Insight. They can select Advanced mode to trigger advanced mode and then perform searches.
2. Receive Question from Host HTML Page and Apply Search to Insight Results to a Chart
Example
In this example, you can build your own search to insight user interface and integrate search to insight
results to your own portal.
First, embed your analytic application in your own portal via iFrame. Maintain the corresponding code
to get what users input in the portal and post it to the embedded application.
Then go back to the analytic application. Besides adding the scripting object SearchToInsight_1, write
the following script for the onPostMessageReceived event of the application:
Sample Code
SearchToInsight_1.applySearchToChart(message, Chart_1);
Chart_1.setVisible(true);
Result: In your own portal, users can ask a question and see a chart appear in the embedded analytic
application displaying the corresponding search to insight results.
Smart grouping is available for both bubble and scatterplot charts for correlation analysis. You can use related
APIs to trigger grouping of data points in charts in analytic applications based on similar properties.
After enabling smart grouping for a chart Chart_1 in its Builder panel, you can leverage the following API to let
application users choose the number of groups:
Sample Code
Chart_1.getSmartGrouping().setNumberOfGroups(3);
Related Information
Use the Microsoft Office add-ins to analyze your SAP Analytics Cloud data with Microsoft Excel. You can
deepen your analysis and add data for planning and save it back to the cloud. During your analysis, you can
update your table with actual data from SAP Analytics Cloud at any time.
With SAP Analytics Cloud, add-in for Microsoft Excel, you can bring your SAP Analytics Cloud data into
Microsoft Excel and continue your analysis there.
Integrated with Microsoft 365, the add-in lets you format, analyze, and enrich your data based on the latest
content from the SAP Analytics Cloud platform.
You can analyze SAP Analytics Cloud Analytic and Planning models based on Import Data connections. For
planning, you can also add data and save it back to the cloud.
You can also use live data connections to analyze SAP Datasphere analytic models, SAP S/4HANA Cloud
queries, SAP S/4HANA queries, and SAP BW queries.
You can use SAP Analytics Cloud for Microsoft Excel with Excel for Web and Excel desktop on Windows and
Mac.
• Analyze SAP Analytics Cloud models, SAP Datasphere analytic models, SAP S/4HANA Cloud queries, SAP
S/4HANA queries, and SAP BW queries in Excel
• Deepen your analysis with filters, totals and Excel formulas
• Sort and rank your data
• Update your table with the latest data from the cloud at any time
• Add data for planning and save it back to the cloud
• Share your Excel workbooks with others
You can download the add-in from the Microsoft Office store at SAP Analytics Cloud, add-in for Microsoft
Excel .
Analysis is a Microsoft Office add-in that allows multidimensional analysis of data sources in Microsoft Excel, as well as MS Excel workbook application design.
It enables you to analyze and plan with data sources from SAP BW, SAP BW/4HANA, SAP HANA, SAP Analytics Cloud, and SAP S/4HANA Cloud.
In Microsoft Excel, Analysis is available in two separate tabs in the ribbon: Analysis and Analysis Design.
Using the design panel, you can analyze the data and change the view on the displayed data. You can easily add and remove the dimensions and measures to be displayed using drag and drop. To avoid a refresh after each single step, you can pause the refresh while you build a crosstab. After ending the pause, all changes are applied at once.
You can refine your analysis using conditional formatting, filters, prompts, calculations, and display hierarchies.
You can also add charts to your analysis. If you want to keep the state of your navigation, you can save it as an analysis view. Other users can then reuse your analysis.
For more sophisticated workbook design, Analysis contains a dedicated set of functions in Microsoft Excel to access data and metadata of connected systems. Several API functions are available that you can use with the Visual Basic Editor to filter data and set values for variables.
You can also plan business data based on the current data in your data source. You can enter planning data manually, or have it entered automatically using planning functions and planning sequences of SAP BW Integrated Planning.
Analysis must be installed on your local machine. You can connect directly to a system, or you can connect via a platform to include data sources. You can use the following platforms to store and share workbooks: SAP BusinessObjects Business Intelligence platform, SAP BW / SAP BW/4HANA, and SAP Analytics Cloud.
The add-in is available in the SAP Software Download Center. For more information about the installation and the update cycles, see its Product Availability Matrix.
You can find more information about Analysis on the SAP Help Portal at SAP Analysis for Microsoft Office,
edition for SAP Analytics Cloud.
You can find more information about that version on the SAP Help Portal at SAP Analysis for Microsoft Office.
Augmented Analytics comprises a set of SAP Analytics Cloud features that enhance the analytics process
using machine learning.
The Augmented Analytics features include Smart Insights, Search to Insight, Smart Discovery, and Smart
Predict.
Restriction
For more information on non-standard date dimensions, see Customize Date Dimensions [page 676].
Just Ask
Smart Insights
Smart Predict
Search to Insight
Powered by artificial intelligence, Just Ask enables you to search your data easily and efficiently using business
terms you are familiar with. Simply ask your question using natural language, and Just Ask will instantly provide
answers as simple charts and tables.
Working with data from acquired models or SAP Datasphere, you can query Just Ask to get quick answers to questions and incorporate the results into a story, export them to CSV or Excel files, or explore them further using the Data Analyzer tool. You can ask questions such as:
The Just Ask search field identifies and auto-completes relevant search strings, including dimension names, dimension values, and measure names. You can use synonyms for measure and dimension names if they have been defined in your system by admin users.
Just Ask is only available with tenants hosted on Cloud Foundry (AWS), Google Cloud Platform, and Microsoft
Azure. It is not available for tenants hosted on Neo or AliCloud environments.
Access quick insights about your data by searching with the Just Ask tool.
Context
Just Ask works with either models from SAP Datasphere or acquired data models that an admin user has added to the system. For more information, see Manage Just Ask [page 1994].
To launch Just Ask, select the corresponding icon from the main toolbar.
Just Ask can also start directly from your Home screen if the corresponding tile has been activated.
Restriction
Procedure
You can search Just Ask without selecting specific models. The underlying artificial intelligence
will search the most relevant models for your search prompt.
b. Select Add Model to search to add models to the current Just Ask session.
These models can be sourced from the SAP Analytics Cloud My Files repository or SAP Datasphere.
Browse through the Select Model dialog to your desired model and select it.
Note
To add models from SAP Datasphere to your search session, you need an SAP Datasphere account and login credentials, and you must choose the tenant of the target model.
To access an SAP Datasphere model, choose one of the tenants listed under Datasphere.
Note
When choosing an SAP Datasphere model, you must have the READ privilege on the CONNECTION
objects.
When selecting models to add to the current search session, you cannot search with model
selections in both My Session Models and Indexed for Just Ask. A warning message is displayed to
inform you of this scenario.
Models selected using Add Model to search will be listed under My Session Models for the duration of
your session. They will not be available once you log off or reload the page.
c. If a specific model you want to search is not listed, you can request to add it to Just Ask. At the bottom
of the Select model list, use the Click here to create a model request to an admin link to initiate the
request.
To help you start a query, select the + Add to Search icon to the left of the search field.
Select any entry from the list of available measures and dimensions to populate the search field.
d. If you know what you want to query, enter your question directly in the Ask a question search field.
As you type, auto-complete suggestions are displayed; select the most appropriate suggestion to
populate the search field. In addition, keyword suggestions are displayed below the search field. Select
any of them to add them to your search question.
When you are ready to run the search, press Enter or select the Search icon in the search field. For
more information on structuring query prompts, see Just Ask Search Inputs [page 1993].
e. Alternatively, under the search field, use any of the suggested searches listed in Sample Questions to
populate the search field. Sample questions are listed under separate section headings.
Search results are displayed as cards under the search field. Each card contains the following information:
• The search query used to generate the result.
Next Steps
Once the search cards are displayed you can do any of the following:
1. Interact with the data displayed in a card. The following interaction options are available:
• You can select the more options icon to set or reset drill levels, sort the display in ascending or
descending order, set a rank, or export the data as CSV or Excel files. For more detailed information on
the chart context menu options, see Chart Context Menu Options [page 1239].
• You can opt to switch the chart to a table. For more information on these interaction options for tables,
see Table Menu Options on Story Pages [page 1374].
Note
To explore the data further using the Data Analyzer tool, select the Analyze Data icon. Data Analyzer
will open in a separate tab displaying the data from your search result.
• Use the Copy icon to copy the card and paste the contents into a story page using the Ctrl + V
shortcut.
You can currently only copy charts. Copy for tables is not supported in this release. The copy
functionality is currently not available for SAP Datasphere models.
2. If you want to continue your data journey, use any of the Suggestions to the right to run a search for
the suggested search input. Essentially, these are alternative queries generated by the system to help you
explore different aspects of the current data model, and possibly lead to more accurate results.
Note
Use the search inputs listed below to query your data in the Just Ask search field.
The section below describes which search inputs and formats are supported.
Note
To achieve the best experience with Just Ask, it is recommended that you use these supported inputs.
It is also recommended that you use the provided sample questions or the alternative suggestions once
you have executed your search. As a Natural Language Querying (NLQ) tool, Just Ask works best when
measures, dimensions, supported formats, and keywords are used to search through the data.
• Provide a sum of one or more measure quantities. Example: Gross margin by store and location
• Sum one or more measures for a different time aggregation. Example: Let me see the gross margin by month
• Apply one or more entity filters on one or more dimensions. Example: Show me gross margin by Subregion and store
• Filter a measure for a specific time period or range. Example: Gross margin until previous month
• Sum one or more measures over dimensions and apply entity filters. Examples: Profit in 2013 and 2015 monthly; Profit by year/quarter in 2022
• Filter and sum over time. Example: Profit in 2013 and 2015 monthly
• Trend visualizations of one or more measures. Example: Trend gross margin and discount by year
Note
Use configuration settings to prepare and maintain Just Ask for your system users.
• The Just Ask toolbar icon is displayed for all users in a system where conversational analytics is enabled
for Just Ask.
Ensure the Default Mode is set to Just Ask in the Conversational Analytics section. This sets Just Ask as
the only conversational analytics tool available to all users. If Search to Insight is selected, as an admin
user you will have access to both conversational analytics tools.
1. Select the Edit Description icon above the search field. You can use this free text field to provide any general
information about the system for your end users.
2. Under Sample Questions, select Add section to create a new section of questions (queries) for your end
users. Within any section, select Add Question to add input prompts for all system users to search their
data. Alternatively, you can select the X within a sample question to remove it.
Select the Manage Models button to display Just Ask configuration settings. Only users with the
JustAskManage privilege can access these settings. Use the Manage models section to display all the available
models.
by system users. Select More Exclude to prevent users from searching the model. An
excluded model will display as grayed out but will still be available for future use in Just Ask. Use
More Delete to remove a model from Just Ask.
• Pending request: the models listed in this tab have activation requests pending and are not yet available for
use.
The previous section focused on model management in general. This section details the various options for
managing individual models.
To access all the configuration settings for a model, select the More Edit option in the All
Models tab. The hardcoded and configurable settings for the model are listed in four separate tabs:
• Description: Select edit to display a text box in which you can provide a description and other
information about this model, and then Save.
• Use the Data section to explore the data available in the model. Information for your data columns
includes:
• Parameter Type: column type specified as dimension or measure.
• Category: Name used for the internal data type.
• Name: column name.
• View Values: When available, select View to explore the actual values (children) of the column.
• Synonyms: alternative name identifiers for the column.
Explore Data: Allows you to explore and preview the model's data or a subset of the data.
Under Select Parameters use Add Object to choose the dimensions or measures you want to preview. Use
Add a Filter to specify what values to exclude from the model's dimensions and measures.
Your selected parameters and filters are displayed under Preview Data.
The following quick actions are available for the currently selected model.
Note
To help users with their queries in Just Ask, you can define synonyms for measure and dimension names.
These synonyms can in turn be used when querying the data. The synonyms are also suggested by the
auto-complete in the search field, and become part of subsequent results and visualizations when used in a query.
• Dimensions or measures. For example, the dimension store can have shop, outlet, or kiosk as
synonyms. Shorthand expressions for dimension names can also be added as synonyms. For example,
inc stmt is potentially a synonym for income statement.
• The values of a data column can also have synonyms defined. For example, if a location dimension has New
York City as a value, NYC could serve as a synonym.
The synonym creation process differs slightly for the two entity types.
1. In All Models, select the model containing the target column for which you want to create a synonym.
2. Select the row for the target column. Four tabs associated with the target column appear.
3. Navigate to Overview Synonyms. This tab displays the synonyms for your selected column.
4. Enter the synonym for the column in Synonyms and Save.
1. In All Models, select the model containing the values for which you want to create synonyms.
2. Select the View link in the row for your target value under the View Values heading.
3. In the displayed list of values, select the specific value for which you want to create a synonym.
4. Add a synonym to the Synonyms field and Save.
Rules in Just Ask are the application of specific business logic to a model when defined conditions are met. The
rules are applied before the execution of the search questions.
An example of a rule: when a search question contains sales managers as an input, a rule specifies that all
charts and tables only refer to sales personnel in a specific region.
1. In All Models, select the model to which you want to apply a rule.
2. Open the Rules tab and select + Add Rules.
The following dialog is displayed:
3. After naming your rule in the Set Rule Conditions tab, use the provided fields to define how and when the
rule is activated. Under When search contains select one of the conditions in the drop-down list on the left,
and then associate it with measures or dimensions on the right.
Note
You can create a rule as either an OR or an AND statement for measures or dimensions.
General
• Please do not add more than five models to your Just Ask system.
• Interpretation of number formatting is not currently supported. When using numbers in your query, use
100000 or 10 instead of 100,000 or 10.00, respectively.
• Metadata localization is not currently available.
• Results may vary depending on the selection of single, multiple, or all models.
• While sorting and ranking are supported in charts, they are not currently supported in tables.
• "Exclude" is not supported for time filters.
The following keywords and formats are not supported when querying Just Ask.
• Queries that include measures or dimensions that do not exist in the searched models. Example: Show me the SALES in Paris last years (unsupported if SALES is a measure not present in the target models)
• Mathematical functions (count of, average, maximum, minimum, …). Examples: Number of products for customer; How many employees there are in this region
• Mathematical operations between measures (sum, difference, multiplication, division, ratio, etc.). Examples: Differences of gross margin and discount; Sum of profit and revenues
• Questions that include For each. Example: For each country give me the sales
• Comparisons involving more than two query statements. Example: Compare gross margin by product with profit in 2022 vs 2023
• Queries on the model metadata. Example: How many measures and dimensions are included in the model
Smart Insights uses statistical analysis to automatically provide insights to better help you understand your
data.
When you look at a chart or table, there's always more to understand about what's going on in your data. With
Smart Insights, you can explore beyond the data that is readily visible to you at first glance.
Smart Insights take the form of a textual insight combined with a visualization to provide you with as much
information as possible about your selected data point or variance. These insights are presented to you within
your story in the Smart Insights panel.
You can get Smart Insights on your acquired data or live SAP HANA connections when working with most chart
types, table cells, and all variance types in stories. Smart Insights supports both classic account models and
new model types.
Note
For information on using Smart Insights with live SAP HANA connections, see Setting up Smart Insights for
Live SAP HANA Direct Connections [page 325].
To learn more about the different model types, see Get Started with the New Model Type [page 627].
If Explorer is enabled on the visualization from which it's triggered, Smart Insights are also available in the
Explorer view, and the Digital Boardroom.
Note
When using the context menu to access Smart Insights from a cell in a table in Digital Boardroom, the
(Explorer) icon appears in the wheel once Enable Explorer is selected in the builder panel. This opens
Explorer mode, and Smart Insights is then enabled at this level. If Enable Explorer is not selected in the
builder panel, then the wheel can still appear, without the (Explorer) option.
Note
Smart Insights isn't supported in the following cases:
• Cross calculations
• Account dimensions that have multiple hierarchies
• Calculated columns created in tables. For more information about these types of calculations, please
see Create Custom Calculations Within Your Table [page 1469].
• Blended models
• Embedded mode
• Multiple data points selected in a chart or table
Smart Insights is available for the following chart types:
• Comparison:
• Bar/Column
• Combination Column & Line
• Combination Stacked Column & Line
• Stacked Bar/Column
• Trend:
• Area
• Line
• Distribution:
• Heat Map
• Radar
• Tree Map
• More:
• Donut
• Marimekko
• Pie
Restrictions
• If a dimension in your hierarchy has over 200 K nodes, the top contributor insight for that specific
dimension is based on the level of the leaf node, not the other hierarchical levels.
• Smart Insights won't return top contributor insights if the result set returned for the top contributor
queries from the backend is more than 1 million rows.
Related Information
To access Smart Insights, you need to trigger the Smart Insights panel when working with acquired data, or live
SAP HANA direct connections. You can do one of the following to access the Smart Insights panel:
• Select a data point or a variance in your visualization, and use the context menu to choose
(Smart Insights...)
• Use the context menu in a cell in a table and select (Smart Insights)
• Select View more... from the Smart Insights dynamic text in a chart footer
Smart Insights are displayed in the Smart Insights panel within your story. Each type of insight quickly offers
you information from a different perspective on your selected data point or variance. To find out more, you can
check out the chapter called Types of Smart Insights.
Note
To access Smart Insights when working with live SAP HANA connections, there are some specific steps to
follow. For more information, see the chapter called Setting up Smart Insights for Live SAP HANA Direct
Connections [page 325].
You can generate Smart Insights from an integrated variance or a data label.
When working with an integrated variance or a data label, you can select either of these options to generate
Smart Insights:
• Data Point
• Variance
Note
Related Information
When you trigger the Smart Insights panel, the panel shows you varying information about your selected data
point. Depending on the data point you select, one or more of the following types of Smart Insights will display
in the Smart Insights panel:
Check out the chapters below for more information about each of these types of Smart Insights.
Related Information
Smart Insights: How Your Data Point Changed Over Time [page 2007]
Smart Insights: Top Contributors to Your Data Point [page 2009]
Smart Insights: How Your Data Point is Calculated [page 2012]
Smart Insights can show you how your data point has changed over time.
What's happening in the background is that Smart Insights performs a live analysis of your selected data point
to highlight its most recent interesting changes over time. These changes are selected based on the date
dimension hierarchies in your model (quarter/month/year...). Smart Insights uses the percentage variance of
the current time period compared to all other historic time periods. The most interesting time hierarchy in the
variance chart is also highlighted, so you can focus on the most important elements of your data.
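As a rough illustration of this percentage-variance idea, the current period can be compared against each historic period as sketched below. This is a minimal sketch with invented period labels and values, not the actual Smart Insights implementation:

```python
# Illustrative sketch of the percentage-variance comparison described above.
# All period labels and values are invented sample data, not from a real model.

def percentage_variance(current, historic):
    """Percentage change from a historic period's value to the current value."""
    return (current - historic) / historic * 100

values = {"Q1": 100.0, "Q2": 120.0, "Q3": 90.0, "Q4": 135.0}
current = values["Q4"]  # treat Q4 as the current time period

variances = {
    period: round(percentage_variance(current, value), 1)
    for period, value in values.items()
    if period != "Q4"
}

# The historic period with the largest absolute variance is the most
# "interesting" change to highlight.
most_interesting = max(variances, key=lambda p: abs(variances[p]))

print(variances)         # {'Q1': 35.0, 'Q2': 12.5, 'Q3': 50.0}
print(most_interesting)  # Q3
```

In this sample, Q3 shows the largest percentage change relative to the current period, so it would be the change a tool like this highlights first.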
The data point you select must be based on a model that contains at least one date dimension. To get
Smart Insights about how the data point has changed over time, you must make sure that the date
dimension isn't on one of the axes in your chart.
If date dimension filters are applied to your data point at the chart, page, or story level, the Smart Insights
panel considers these filters when returning information. The information is displayed using a textual insight
and a chart. The textual insight describes a significant change Smart Insights has identified in your data over
time.
The chart visualizes the changes in the values of your data point for different date dimension hierarchies. The
options displayed for date dimension hierarchies in the Smart Insights panel depend on the date dimension
hierarchies in your model. Depending on your model, the potential options are day, week, month, period,
quarter, half-year, and year.
The chart also shows the variance for the selected time hierarchy.
You can explore changes to your data point. Choose your preferred date dimension hierarchy within the Smart
Insights panel.
Example
If you select 'Month' as the time hierarchy, the chart shows a 13-month time range of data point values and
matching variance points.
Note
• The 'How has this changed' data point analysis over time isn't available in the following scenario:
• working with the calculated measure type Difference From or variance.
Related Information
Smart Insights can show you the main contributors to your data point.
What's happening in the background is that Smart Insights performs a live analysis of your data point to
highlight its most interesting dimensions. It also obeys any applied filters and user authorizations. The Smart
Insights algorithm ranks the highlighted dimensions in order, depending on how much their member values
deviate from the average with regard to the selected dimension. You can see the top 5 dimensions and their top
10 member values displayed as top contributors.
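The ranking idea described above can be sketched in a few lines. This is a simplified illustration with invented store names and values; the actual Smart Insights algorithm is more sophisticated:

```python
# Simplified sketch of the top-contributor ranking described above: members
# whose values deviate most from the average rank highest. Store names and
# values are invented sample data.

def top_contributors(member_values, n=10):
    avg = sum(member_values.values()) / len(member_values)
    # Rank members by how far their value deviates from the average.
    ranked = sorted(member_values.items(),
                    key=lambda kv: abs(kv[1] - avg),
                    reverse=True)
    return ranked[:n]

stores = {"Store A": 500.0, "Store B": 480.0, "Store C": 150.0, "Store D": 470.0}
print(top_contributors(stores, n=2))  # [('Store C', 150.0), ('Store A', 500.0)]
```

Here the average across the four stores is 400, so Store C (deviation 250) ranks ahead of Store A (deviation 100) even though its absolute value is the smallest.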
As a story creator, you can customize the top contributor Smart Insights results, to make sure they are as
relevant as possible to your business context. If certain dimensions or dimension members aren't interesting to
you, you can exclude them. Top contributors found for other dimensions are then shown in the Smart Insights
panel instead. The selected dimensions or dimension members aren't factored into your top contributor
results. The exclusion of these dimensions or dimension members persists for the lifetime of all charts
connected to the same model in your story, until you select to include them again.
The excluded dimension members are added to the Others category in the summary chart in the Smart
Insights panel, even if there are fewer than 10 contributing dimension members.
1. Select Edit mode within your story. If the toolbar is collapsed, choose File (Edit Story) Smart
Insights Settings. If the toolbar is expanded, select File (Edit Story) Smart Insights Settings.
The Smart Insights Settings dialog opens.
2. Identify the relevant model from the list, and select + Select Dimension Members to open the list of
dimensions you could potentially exclude from your top contributor Smart Insights results.
3. Select the relevant dimension from the list.
The Select Members to Exclude from Smart Insights for <selected dimension> dialog appears.
4. Select All Members to exclude the entire dimension.
Your selection All Members then appears in the dialog box with a strikethrough, to indicate that the entire
dimension is excluded from your top contributor Smart Insights results.
5. To exclude specific dimension members, select instead the relevant checkbox beside the dimension
member name. You can also use Search to find specific dimension members.
Smart Insights results, choose File (Edit Story) Smart Insights Settings , and deselect the
dimensions or dimension members.
8. Select OK.
When you run Smart Insights again, these dimensions or dimension members are again factored into your
top five contributor results.
Hierarchical Data
All dimension members, not just at the leaf node level, are considered by Smart Insights as potential top
contributors to your selected data point. The dimension can even have multiple hierarchical levels, or already
appear in your chart and still be considered. If a level in your dimension hierarchy has only one member, Smart
Insights discards it as uninteresting. If Smart Insights calculates the same score for two levels in your hierarchy,
it favors the level closer to the root of your dimension hierarchy. Every member in a hierarchy is looked at.
To see at what level in your hierarchy a dimension occurs, simply hover over the dimension in the chart or the
title of the Smart Insights panel.
Note
• If a dimension in your hierarchy has over 200 K nodes, the top contributor insight for that specific
dimension is based on the level of the leaf node, not the other hierarchical levels.
• If your dimension hierarchy levels run deeper than five levels down from the root, the graphical
display of the hierarchy is truncated to only show the bottom four levels.
• If you apply a filter to a specific dimension in a dimension hierarchy, the top contributor insight for that
specific dimension is based on the level of the leaf node, not the other hierarchical levels.
• Smart Insights won't return top contributor insights if the result set returned for the top contributor
queries from the backend is more than 1 million rows.
The Smart Insights panel highlights the contributing factors (dimensions and dimension members from your
model), to the data point you selected. Each contributing factor is explained with a textual insight and a chart.
If your selected data point has a large number of generic dimensions, you can see two charts within the top
contributor section of the Smart Insights panel.
The first chart shows you the top 10 (or fewer) contributors to your selected data point, sorted from largest to
smallest. If your selected data point has fewer than 10 dimension members, this chart also displays an average value.
When there are more than 10 contributors, or if there are any unassigned values, the second chart shows a
summary of how the total measure value of your data point breaks down into different categories. Depending
on your data model, the summary chart displays a combination of these categories:
Note
• When working with models on a live SAP HANA connection, unassigned values are considered as top
contributors to your selected data point, and so don't appear in the summary chart.
• The summary chart can't currently be copied to your story.
• Working with variance based on a calculated measure is not supported for top contributor insights.
Drill into the insights in your chart and see another chart offering further insights. Want to know more? Simply
select another data point of interest within the chart, and select (Smart Insights) again. The Smart
Insights panel offers more insights. Go deeper into your data, and watch as other insights in the panel adapt to
your new data exploration path.
Want to go back to the previous panel view before you started to explore your insights? Select (Back) in the
Smart Insights panel.
Related Information
If the data point you selected is based on a calculation, the Smart Insights panel displays information about
how your data point was calculated, including any formula used, and the aggregation type.
You can also see the value of each measure in your formula, by hovering over it. If any filters are applied on your
chart, these filters are taken into account in the displayed measure value. You can get further insights on each
measure in your formula by hovering and selecting Smart Insights in the context menu. The Smart Insights
panel then displays insights specific to the measure you've selected. You can continue to drill further into your
data for more insights, and of course return to where you began by selecting (Back) in the Smart Insights
panel.
Aggregation Types
If your data point is calculated using the aggregation type AVERAGE, AVERAGE excl. NULL, AVERAGE excl.
0, NULL, MIN, or MAX, the Smart Insights panel displays further insights. You can see an overview of the
minimum, maximum, and average values of your data point in a numeric point chart. The Smart Insights panel
also displays a histogram chart, which lets you see how the measures and dimensions used to create your
calculated data point are distributed.
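The difference between these average variants can be illustrated with a small sketch. The numbers are invented sample data, None stands in for NULL, and treating NULL as 0 in the plain average is an assumption made here purely to contrast the three variants:

```python
# Sketch of the average aggregation variants named above, using None for NULL.
# Invented sample data; the plain average treating NULL as 0 is an assumption
# made for illustration, not a statement about the product's exact behavior.

def average(values):
    vals = [0 if v is None else v for v in values]
    return sum(vals) / len(vals)

def average_excl_null(values):
    vals = [v for v in values if v is not None]
    return sum(vals) / len(vals)

def average_excl_zero(values):
    vals = [v for v in values if v is not None and v != 0]
    return sum(vals) / len(vals)

data = [10, 0, None, 20, 30]
print(average(data))            # 12.0 (60 divided by all 5 rows)
print(average_excl_null(data))  # 15.0 (60 divided by the 4 non-NULL rows)
print(average_excl_zero(data))  # 20.0 (60 divided by the 3 non-zero rows)
```

The same total (60) produces three different averages depending on which rows are excluded, which is exactly the distinction the aggregation type names express.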
Outliers
If outliers exist for your data point, an Outliers button is available in the Smart Insights panel. Select the Outliers
button to include these outliers in the histogram chart. It's good to know about outliers, so you fully understand
the effect they have on your data values. Outliers tend to skew the average value. If your dataset has outliers,
we suggest using other types of calculations in your story. Both the numeric point chart and the histogram
chart can be copied to your story.
Note
Outliers are shown in the histogram chart by default, if your data point is calculated using the aggregation
types MIN or MAX. You can exclude them from the histogram chart by selecting the Outliers button.
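The skewing effect mentioned above is easy to see in a tiny example with invented numbers:

```python
# Invented sample data showing how a single outlier skews the average.
values = [10, 11, 9, 10, 12]
with_outlier = values + [200]

mean = sum(values) / len(values)
mean_with_outlier = sum(with_outlier) / len(with_outlier)

print(mean)               # 10.4
print(mean_with_outlier)  # 42.0: one outlier pulls the average far from the bulk of the data
```

A single extreme value moves the average from about 10 to 42, which is why other calculation types can be a better choice for datasets with outliers.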
Account Dimension
If your data point represents the account dimension of your model, the Smart Insights panel also displays a
textual insight and a chart. The textual insight explains that the data point representing the account dimension
Note
To learn more about formulas and calculations, you can select the link to the chapter called Formulas and
Calculations in SAP Analytics Cloud Help on the SAP Help Portal at https://help.sap.com/viewer/index.
To learn more about account dimensions, you can select the link to the chapter called Attributes of an Account
Dimension in SAP Analytics Cloud Help on the SAP Help Portal at https://help.sap.com/viewer/index.
Related Information
You can add Smart Insights charts to your story or Smart Insights tokens that display dynamic text in chart
footers.
Visualizations that are generated in the Smart Insights panel can be added to your story.
The Smart Insights panel generates textual and visualization insights to help you better understand your
data. To add the visualization information to your story, select one of the following options in the copy chart
dropdown:
• Copy
• Copy to New Canvas Page
• Copy to New Responsive Page
• Copy to Page 1
You can enhance the information displayed in your chart by adding a chart footer containing some Smart
Insights text. The chart footer text gives you a better understanding of what's going on in your chart and can
help you make better business decisions. The text calls out the highest contributor to your chart and provides a
helpful preview of the types of insights available to you in the Smart Insights panel.
Note
To add a dynamic text token, you need to be working in your story in edit mode.
To add Smart Insights text as a chart footer, you can do the following:
2. From the chart action menu, select (More Actions) Add Smart Insights .
The chart footer text is added to the bottom of your chart.
3. Select View more... to open the Smart Insights panel, so you can explore the insights in more detail.
Note
The Smart Insights dynamic text is automatically updated based on the data point selected in your chart.
Related Information
Let Smart Discovery analyze your data by running a machine learning algorithm to help you explore your data
in a specific context, to uncover new, or unknown relationships between columns within a dataset.
In SAP Analytics Cloud, you can use Smart Discovery to clearly define the context of the business question you
want to ask your data. To form the context, or topic of this business question, you must choose values for some
settings in the Smart Discovery side panel.
Some of the settings options that you see in the Smart Discovery side panel are determined by the type of
underlying model you're using - a classic account model or a new model type. The main difference between the
model types is how they handle measures, which are the structures in your model that hold numeric values. To
learn more about the different model types, you can refer to the chapter called Get Started with the New Model
Type [page 627].
In a classic account model, model values are stored in a single default measure. To define the context of your
business question when using a classic account model, you must select a value for these settings:
Target The Target is the measure or dimension you’d like to know more about.
Entity The Entity defines the dimension or dimensions you’d like to explore in relation to the target, and the
level to which Smart Discovery must aggregate your data to analyze it for you. Within the dimension
or dimensions you choose to define your entity, you can include all, or exclude some dimension
members. There's no need for any data preparation. Simply select the Entity value from the Smart
Discovery side panel.
Example
If you want to explore revenue by customer, you'd select 'revenue' as the Target, and 'customer' as the
Entity.
You can also refine the scope of your topic by adding Filters, but the filter selection is optional.
The new model type exposes measures as single entities and lets you add and configure multiple measures
with aggregation and units to fit your data. To define the context of your business question when using a new
model type, you must select a value for these settings:
Target The Target is the account or dimension you’d like to know more about.
Entity The Entity defines the dimension or dimensions you’d like to explore in relation to the target, as for the classic account model above.
Measure The Measure you select determines the values that are assigned to the account dimension in the
model, to be analyzed by Smart Discovery, ignoring all other measures in the model.
Example
Select 'revenue' as a Target, 'customer' as an Entity, and 'amount in euros' as a Measure to get Smart
Discovery results only for revenue by customer, where the accounts consume the values for the
measure 'amount in euros'. So, your Smart Discovery results show only those accounts in the model that
have values for 'amount in euros'.
If your new model type contains an account dimension, all accounts that appear in your Smart Discovery
results are based on the Measure you select in your settings.
Note
You can run Smart Discovery for measures associated with the "average", "minimum", and "maximum"
exception aggregation types. These measures are included in the Smart Discovery analysis and can be
selected as Key Influencers.
You can also refine the scope of your topic by adding Filters, but the filter selection is optional.
You can always ask more than one business question. Explore your data from different angles by asking Smart
Discovery to analyze the same target in relation to different entities; each combination produces different results.
You can use custom names for the target, target group, and the entity. The custom names you choose are then
used instead of the default name, wherever they appear in Smart Discovery.
To narrow down the scope of the topic, or to optimize running Smart Discovery on large datasets, you can
create filters to exclude specific records from dimensions in your model.
Smart Discoveries are driven by users working on stories. You can exclude particular dimensions or measures
from the analysis, focus on particular dimension members, and display relationships from a list of influencers.
Smart Discovery always gives an overview of your data by automatically building charts to begin discovering
more about your data. The generated content shows the target for the selected entity in relation to interesting
or relevant dimensions and measures. Depending on the data you select to analyze, Smart Discovery can also
show the influencers on the topic of the discovery, how they relate to one another, and key members or value
ranges. Smart Discovery also alerts you if no significant or insightful relationships are present in your data.
After running Smart Discovery, you can simulate numeric targets, using the results of the key influencer
analysis. You can also view data records that are highlighted by the predictive model as being unexpected.
The Smart Discovery results can be saved as story pages that you can share within your organization.
Note
Currently, you can't run Smart Discovery on more than one million cells. By filtering the data first, you can
reduce the number of cells included in your dataset. The number of cells is calculated by multiplying the
number of measures by the number of records.
The Export/Import functionality via the Content Network does not work for Smart Discovery.
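The cell calculation described in the note is simple arithmetic, so you can check a dataset before running Smart Discovery. A minimal sketch in Python; the function names are illustrative and not part of the product:

```python
CELL_LIMIT = 1_000_000  # current Smart Discovery limit, per the note above

def cell_count(num_measures, num_records):
    # Cells = number of measures x number of records
    return num_measures * num_records

def within_limit(num_measures, num_records):
    return cell_count(num_measures, num_records) <= CELL_LIMIT

# 8 measures x 150,000 records = 1,200,000 cells: filter the data first.
```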
Related Information
You can run Smart Discovery to gain insights into how underlying variables influence your data in a specific
context.
You can launch Smart Discovery when you start a new story or open a story containing the dataset you want to
explore.
You can launch a Smart Discovery from the Stories start page or from the Files page.
• From the side navigation, choose (Stories). To create a new story, select From a Smart Discovery. To
use an existing story, go to the Recent Files section, where you can see the last 25 files you worked on.
• From the side navigation, choose Files. On the Files page, you can search for the story in a public,
private, or workspace folder.
Once the story is loaded, change the mode to Edit. When the toolbar is collapsed, select Tools Smart Discovery .
In the Topic section of the Smart Discovery panel, you can configure different settings. Target, Entity, and
Measure are required, but Filters are optional.
Target
The Target is the data that you want to explore.
• New model type with an account dimension: select the Target dimension or account from the + Select a
dimension or an account list under the Topic area.
• Classic account model or a new model type without an account dimension: select the Target dimension
or measure from the + Select a dimension or a measure list under the Topic area.
Note
When you select a dimension as your target, the Select Target Group Members for <selected dimension>
dialog is displayed with all the dimension members listed under Available Members.
1. Select the members you want to use as part of the target group definition: they're automatically added
to the Selected Members column.
It's not possible to add all members to the Selected Members column, as that leaves nothing for
comparison during the analysis.
You must have two groups - one representing the dimension members that belong to your selected
target group, and one representing the dimension members that don't belong to your target group.
2. Select OK to continue.
Tip
Your selection is then represented in the Smart Discovery panel as your selected target group members.
Smart Discovery analyzes the characteristics of your data selection that differentiate the dimension
members that belong to your selected target group from the dimension members that don't belong to
your target group.
Entity
The Entity defines the dimensions you would like to explore in relation to your Target.
Measure
To add a Measure, choose + Select a measure.
Note
The Measure settings appear only when your underlying model is a new model type with an account
dimension.
Filters
You can refine your Topic by adding Filters.
Under Topic, you can select Filters from the + Add filters list.
Note
Currently, you can't run a Smart Discovery on more than one million cells. By filtering the data first, you can
reduce the number of cells included in your dataset.
The number of cells is calculated by multiplying the number of measures by the number of records.
You can configure additional settings through the Advanced Settings section of the Smart Discovery panel.
Version
Included Columns
Use the Included Columns section to specify which measures (for classic account models, or new model types
without account dimensions), accounts (for new model types with account dimensions), and dimensions to
include in the discovery. By default, all measures, accounts, and dimensions in the dataset are included in the
discovery scope.
Note
The option to select Measures appears only when your underlying model is a classic account model, or
a new model type without account dimensions. You can run Smart Discovery for measures associated
with the "average", "minimum", and "maximum" exception aggregation types. These measures are
included in the Smart Discovery analysis and can be selected as Key Influencers.
Note
The option to select Accounts appears only when your underlying model is a new model type with
account dimensions.
The number of selected measures, accounts and dimensions is indicated as well as a listing of the selected
items under Measures, Accounts, and Dimensions.
You can select Run straight away after defining your target, entity, measure, filters, and advanced settings.
If you decide to select Preview first, you can choose Run from the Smart Discovery Data Preview dialog.
Once Smart Discovery completes the data analysis, the available Smart Discovery results are displayed under
separate pages:
• Overview
• Key Influencers
• Unexpected Values
• Simulation
After you have finished analyzing the results from your discovery, you can:
• Save the Smart Discovery results as a part of the new or existing story.
• Share the story with other users in your organization.
Related Information
Choose the Preview button from the Smart Discovery side panel to get a brief summary of the data you’ve
selected to explore.
Selecting the Preview button opens the Smart Discovery Preview dialog, which presents a summary of the type
of question you'd like to ask your data. The type of Smart Discovery results presented depends on your data
selection.
Note
If Smart Discovery assesses that your data selection is unlikely to return good results, you're advised in
advance that only an overview page can be generated.
When you select a dimension as your target, and define a target group, Smart Discovery analyzes the
characteristics of your entity that differentiate the dimension members that belong to your selected target
group, from the dimension members that don't belong to your target group.
Example
If your selected Entity is Customer, your selected Target is Product, and your selected target group is
IPA, Beer, Pale Ale, Smart Discovery analyzes the characteristics of Customer that have bought
IPA, Beer, Pale Ale compared with the ones that haven't bought IPA, Beer, Pale Ale. The
following is an example of the type of textual information you'd then see in the Smart Discovery Preview
dialog:
Smart Discovery analyzes your data using machine learning to identify the characteristics of Customer that
differentiate the two Product [IPA, Beer, Pale Ale] groups - (Yes) and (No)
- (Yes) represents Customer with at least one Product in [IPA, Beer, Pale Ale]
- (No) represents Customer without any Product in [IPA, Beer, Pale Ale]
One chart represents the total number of instances of your selected entity in your data, which also shows
how many rows the dataset will be aggregated to. The other chart shows a summary of the number of entities
that have members in your target group versus the number of entities that have none, expressed as
percentages.
Regression
If you've selected a measure as your target, Smart Discovery clearly indicates that it will analyze your target
measure in the context of your entity. You can also see two numeric point charts. One numeric point chart
represents the total number of instances of your selected entity in your data. The other numeric point chart
shows the total target measure in the context of your entity.
Example
If your selected Entity is Customer, and your selected Target is Order Value, Smart Discovery analyzes
all instances of Order Value in relation to Customer.
In both cases (selecting a dimension or a measure as your target), the Smart Discovery Data Preview dialog
also gives you an indication of the types of story pages that are going to be provided to you, once you run Smart
Discovery. The types of story pages that can be produced depend on your data selection.
You can also hover over the target value, entity value, and any applied filters to get more information on your
selection.
Selecting the Preview button is optional. As you gain confidence in using Smart Discovery, you can always
select the Run button straight away after selecting your Target and Entity.
Smart Discovery generates results that provide you with an overview of your data.
• Overview
• Key Influencers
• Unexpected Values
• Simulation
The Overview page provides visualizations to summarize your Smart Discovery results.
The Overview page is split into two halves. The first half provides a summary of your data. The visualizations in
the second half of the page change according to the type of data you've selected.
Even if the Smart Discovery's insight quality is considered to be poor or low, this page is still created, as it's just
an overview of your data.
All the relevant measures and dimensions are aggregated to the level of the entity. For dimensions that have
multiple values for a single entity, the number of dimension members of each entity is counted.
The Key Influencers page lists (ranked from highest to lowest) up to 10 dimensions and measures or accounts
that significantly impact your target in relation to your entity.
The key influencers are displayed in a table and sorted by their degree of influence from highest to lowest:
• Strong
• Moderate
• Weak
The second column indicates the name of the dimension or measure determined to be a key influencer, and the
last column shows the dimensions or measures correlated with each influencer.
Note
We aggregate all the relevant measures and dimensions to the level of the entity. If one of your dimensions
has several values for one entity, we count the number of dimension members of each entity. We rename
these types of dimensions as Number of <Dimension>, where <Dimension> is the original name in your
dataset.
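The counting and renaming rule in the note can be sketched in plain Python. This is an illustrative reconstruction, not SAP's implementation: records are grouped by entity, and a dimension with several values per entity is replaced by a distinct-member count named Number of <Dimension>:

```python
from collections import defaultdict

def aggregate_to_entity(records, entity_key, dimension):
    """Count distinct members of `dimension` per entity, mimicking the
    'Number of <Dimension>' columns described in the note above."""
    members = defaultdict(set)
    for row in records:
        members[row[entity_key]].add(row[dimension])
    new_name = f"Number of {dimension}"
    return {entity: {new_name: len(vals)} for entity, vals in members.items()}

orders = [
    {"Customer": "Acme", "Product": "IPA"},
    {"Customer": "Acme", "Product": "Pale Ale"},
    {"Customer": "Bolt", "Product": "IPA"},
]
summary = aggregate_to_entity(orders, "Customer", "Product")
# Acme has 2 distinct products, Bolt has 1.
```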
The Unexpected Values page includes a table listing the existing values and the predicted values, displayed
along with the other corresponding dimensions.
To add or remove columns from the displayed table, select Edit Columns.
This page isn't displayed if the insight quality is considered to be too low or there are no unexpected values.
The underlying machine learning model assesses the quality of the analysis. Each chart has an insight quality
checkbox indicating when the quality of the Smart Discovery results is good. If the underlying machine learning
model analysis indicates low confidence in the analysis quality, you're also alerted in the chart.
Two additional interactive charts (scatterplot and bar/column) are provided to help visualize the difference
between the predicted and existing values listed in the table. Use either the Search or Edit Columns options to
focus or filter out columns from the table. When you sort the table on a particular column, the columns in the
bar or column charts reflect this sort order. Any changes in the table are automatically reflected in the charts.
You can select a specific outlier in the charts and see the related ID or Entity. For example, if your selected Entity
is Customer, you see the Customer ID when you select the outlier in the chart.
Note
Unexpected Values are available only when you select a measure or account as your target.
The simulation view helps you understand how changing the values of your key influencers could have an
impact on the value of your target measure in relation to your entity. For more information on the simulation
view, see Smart Discovery Results: Simulation [page 2024].
Related Information
The simulation view helps you understand how changing the values of your key influencers could have an
impact on the value of your target measure in relation to your entity.
Note
To change a key influencer value and simulate the impact on your target in relation to your entity, use
either the displayed slider or dropdown to specify a new value.
Note
The Simulate button is enabled only when you change one of your influencer values.
The underlying machine learning model assesses the quality of the Smart Discovery results. Each chart has an
insight quality checkbox indicating when the quality of the analysis is good. If the underlying machine learning
model analysis indicates low confidence in the quality of the Smart Discovery results, you're also alerted in the
chart.
When you select the Simulate button, the effect of your changes is reflected in the target numeric point chart.
Potential changes include:
• The value of the Target for Entity column increases, decreases, or stays the same
• The percentage increase or decrease is displayed
• A textual insight that offers a brief description of the positive or negative influence of one or more of your
influencer value changes. The level of impact is described in one of these ways:
• Strongly negative
• Weakly negative
• Negative
• Neutral
• Weakly positive
• Positive
• Strongly positive
Example
Expected Gross Margin is €0.14 Million, positively influenced mainly by Discount [252.29 K] + Product
[Orange Juice].
Border Select the border option you want to apply. Options include:
• Bottom Border
• Left Border
• Top Border
• Right Border
• All Borders
• Top and Bottom Borders
• Left and Right Borders
• No Border
Text Selection Select the text element you want to customize. Options include:
• All Text
• Title
• Header
• Input Label - the name of the influencer
• Impact Description - for example, Neutral, Weakly Negative, Strongly Positive, and so on.
Font Select the font you want to apply to the selected text.
Size Select the font size you want to apply to the selected text.
Color Select the font color you want to apply to the selected text.
Style Select the font style you want to apply to the selected text.
Related Information
While working with indexed models based on acquired data and on live SAP HANA, SAP S/4HANA, SAP Universe, and
SAP BW data, you can query Search to Insight to get quick answers to questions and incorporate these insights
into a story. See Live Data Restrictions for Search to Insight [page 2038] for more information on working with
live data.
Note
Both new model types and classic account models are supported in Search to Insight. For more
information on the new model types and classic account models, see Get Started with the New Model
Type [page 627].
The Search to Insight search field identifies and auto-completes relevant search strings including dimension
names, dimension values, and measure names. Suggested strings are based on the entire contents of what
was typed in the search field. You can use synonyms for measure and dimension names if they are defined in
your system. For more information see Defining and Working with Synonyms [page 2035].
Note
During autocomplete, users will see dimension values regardless of their access rights. Any visualization
resulting from Search to Insight will respect access rights.
Note
The interface can only identify and work with words in English.
Supported keywords
The section below describes which keywords and formats are supported.
• Rank and sort: Show top N values for the specified measure.
Syntax: top/highest/largest/best N [Measure] by [Dimension]
Example: Show top five Texas sales by priority
• Hierarchical Navigation: Show a hierarchical breakdown if modeled as part of a hierarchy.
Example: Show Taxes for all Regions
• Describe model metadata: Identify all dimensions and measures for the specified model.
Example: Describe sales data
• Limit search to a single model: Specify the model name.
Example: … within sales data
• Use time phrases: Create a time series chart across a time dimension.
Example: Show net revenue by time
• Measure refinements: Filter values above a given threshold.
Syntax: [Measure] higher than/larger than/bigger than/greater than/>/above N
Example: Show me sales above 300 dollars
• Use number formats: Include numbers in queries using any of the following formats.
Thousands – K, k, Thousand, or thousand
As number phrases: Half a Million/Billion/Thousand, A Quarter Million/Billion/Thousand
Example: Show me Product sales above 3 million
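To make the number formats above concrete, here is an illustrative parser covering the listed suffixes and phrases. It is a sketch of the idea, not SAP's actual implementation:

```python
def parse_amount(text):
    """Convert phrases like '3 million', 'half a million', or '300k'
    to numbers, following the formats listed above (illustrative only)."""
    scales = {"k": 1e3, "thousand": 1e3, "million": 1e6, "billion": 1e9}
    words = text.lower().split()
    factor = 1.0
    if words[:2] == ["half", "a"]:
        factor, words = 0.5, words[2:]
    elif words[:2] == ["a", "quarter"]:
        factor, words = 0.25, words[2:]
    value = factor
    for w in words:
        if w in scales:
            value *= scales[w]
        elif w.endswith("k") and w[:-1].replace(".", "", 1).isdigit():
            value *= float(w[:-1]) * 1e3  # handle '300K'/'300k' fused forms
        else:
            value *= float(w)
    return value
```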
Related Information
Access quick insights about your data by querying the Search to Insight tool.
Context
Search to Insight works with indexed (local) acquired and live SAP HANA, SAP S/4HANA, SAP Universe, and
SAP BW data models. By default, data models that do not have Model Privacy enabled and that have no
dimensions with Data Access Control activated are indexed. You can choose to index or unindex a model in Set Up
Model Preferences [page 664]. For information on working with Search to Insight on the mobile app see Using
Search to Insight on the Mobile App [page 125].
You can launch Search to Insight by selecting the corresponding icon from the main toolbar;
it can also be launched from your Home screen if the corresponding tile has been activated.
Restriction
Procedure
• If you know what you want to query, enter your question directly in the Ask a question search field.
As you type, auto-complete suggestions are displayed; select the most appropriate suggestion. If you
select the "n top matches" link for a suggestion, you can then set the search scope to a particular
model. You can use synonyms for measure and dimension names if they are defined in your system.
Note
For a summary of how and what you can format as a Search to Insight question, see Search to
Insight [page 2027]. To clear the search field, use the provided (delete) icon.
Note
Newer model types will list accounts, measures, and dimensions to help you generate a query
question. For more information on the newer model types see Get Started with the New Model
Type [page 627].
Note
When you select a search recommendation, a search is automatically launched. The search scope
will be the model on which the recommendation is based.
The search result is displayed above the search field. When you run subsequent searches, previous results
will continue to display, with the most recent result appearing at the bottom. To clear all displayed results,
Note
A data source for a result may prompt you to add variables before any data can be displayed. A prompt
will appear when you get the first result that uses the data source. After you've set the variables, that
information will be used by all results that use the same data source. To change story variables:
• In the Search to Insight toolbar, select (Edit Prompts) and choose the data source with the
prompts you want to edit.
• To override Search to Insight variables for individual results, under the chart name select
(Edit Chart Prompts) and then select Set Chart Variables.
• To switch to using Search to Insight variables, open the Edit Chart Prompts dialog again,
and select Use Search To Insight Variables.
• Use Recommended follow ups under the chart to display the data using different chart types.
Note
Recommended follow ups will not be available for every search result.
• Copy options
• Reuse Question: use this option to populate the search field with the question that generated the
result.
• Copy: use this option to copy the displayed data for use in a story.
Related Information
Use the Synonym Definitions page to specify and manage synonyms for your terms.
Prerequisites
To work in the Synonym Definitions page, you need Read and Update rights to the Synonym Dictionary. By default, Admin
roles already have the rights.
Context
To help users with their queries in Search to Insight you can define synonyms for measure and dimension
names in your searchable data. These synonyms can in turn be used when querying for insights. The synonyms
are suggested in the search field auto-complete, and become part of subsequent results and visualizations if
used in a query.
Procedure
Defining Synonyms
5. Repeat steps 2-4 for all your terms. When ready select (Save).
Using the term Income Statement as an example, the synonym inc stmt will be suggested and used as an
alternate term when entering a query in Search to Insight.
If a synonym is used to generate a search result, you can hover the synonym within the result title to
display information on the target term as used in the data source.
6. To delete terms, select the entries for the terms you want to delete, then choose (Delete) from the toolbar.
7. Select the Deploy Synonym Definitions icon to choose one of the following:
• Import File: to import a CSV file containing terms and corresponding synonyms.
• Export File: to export a CSV file of your current synonym definitions for deployment in another system.
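The export/import step above exchanges a CSV file of terms and synonyms. A minimal sketch of producing such a file, assuming a simple two-column layout with a semicolon-separated synonym list (the actual column headings used by SAP Analytics Cloud may differ):

```python
import csv
import io

def synonyms_to_csv(definitions):
    """Write term/synonym pairs as CSV text.
    The 'Term'/'Synonyms' headers and the layout are assumptions."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Term", "Synonyms"])  # assumed header row
    for term, synonyms in definitions.items():
        writer.writerow([term, ";".join(synonyms)])
    return buf.getvalue()

csv_text = synonyms_to_csv({"Income Statement": ["inc stmt", "P&L"]})
```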
The following restrictions apply when you run Search to Insight on live SAP HANA, SAP S/4HANA, SAP
Universe, and SAP BW data models.
Data Access
• The available indexed metadata (dimension and measure names, and dimension members) for all users is
determined by the data access security of the user who indexed the model.
• Visualizations based on indexed data will respect individual user access rights.
Metadata
BW Models
• Indexing is based on the flat representation of data. Index results do not account for hierarchies.
• Models containing two-structure dimensions are not supported.
Variables
After indexing a model containing variables, the query interpretation will be based on those variables. The
variable values cannot be modified when using Search to Insight. For example, a dynamic date variable will
effectively be static as any visualizations rendered will be using the indexed variable values.
A Predictive Scenario helps address a business question requiring predictions. It is a workspace where you
create and compare predictive models to find the one that produces the best predictions for your business
question.
The following types of predictive scenarios are available. You choose the one that best fits your business
question.
Classification What is the likelihood that a future event occurs? This event is observed at an individual level
(customer, asset, product, ...) and at a certain horizon (in the year, before the end of week, in
the month after a customer contact, ...) .
Example
Who is likely to buy or not buy your new product? Which client is or isn't a candidate for
churn?
Regression What could be the prediction of a business value, taking into account the context of its
occurrence?
Example
What will be the revenue generated by a product line, based on planned transport
charges and tax duties?
Time Series What are the future values of a business value over time, at a certain granularity/place?
Example
How much ice cream will I sell daily over the next 12 months? I have my historical daily
sales information, but I'd like to be able to include other factors such as vacation months,
and the seasons.
You can create one or several predictive models within a predictive scenario. Each predictive model produces
intuitive visualizations of the results making it easy to interpret its findings. Once you have compared the key
quality indicators for different models, you choose the one that provides the best answers to your business
question, so you can apply this predictive model to new data sources for predictions.
To verify that Smart Predict is available in your SAP Analytics Cloud system, check out the SAP Note
2661746 .
14.5.1.1 Restrictions
Availability of Smart Predict Smart Predict is available in all the regions and for most tenant types.
For more details on exceptions and general availability, refer to the SAP Note
2661746 .
Data Sources (acquired datasets You can create predictive scenarios on datasets that use the following data sources:
and planning models)
• local file (.txt, .csv, .xlsx)
Note
Files with extension .xls are not supported.
Note
We recommend that you upgrade your SAP Analytics Cloud agent to version 1.0.43
to get the drop parent hierarchy nodes functionality. Although you can import a
dataset with a lower C4A Agent version, hierarchy selection will be disabled
and a corresponding message will be shown in the query builder.
• Google Drive
• SAP Integration Suite Open Connectors
Dataset - Storage formats • The following data types are currently supported:
• Date and Date & Time, in the following formats:
• YYYY-MM-DD
• YYYY/MM/DD
• YYYY/MM-DD
• YYYY-MM/DD
• YYYYMMDD
• YYYY-MM-DD hh:mm:ss
Where YYYY stands for the year, MM for the month, and DD for the day of
the month, hh stands for hours from 0 to 24, mm stands for minutes from 0
to 59, and ss stands for seconds from 0 to 59.
Example
January 25, 2018 will take one of the following supported formats:
• 2018-01-25
• 2018/01/25
• 2018/01-25
• 2018-01/25
• 20180125
• The column name restrictions are the same as the SAP HANA ones. If some
characters are not supported, the column name is automatically converted to
a supported name. The original name is kept as a column description in the
metadata.
• UTF-8 encoding is supported.
Note
Time variables are currently not supported by Smart Predict. If your dataset
(acquired or live dataset) contains a column that contains only time variables,
this column won't be included in the training process.
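The supported date layouts above can be checked programmatically. A minimal sketch using Python's strptime; the pattern list mirrors the formats in the table, and the helper itself is illustrative, not SAP code:

```python
from datetime import datetime

# The supported Date and Date & Time layouts, as strptime patterns.
SUPPORTED_FORMATS = [
    "%Y-%m-%d", "%Y/%m/%d", "%Y/%m-%d", "%Y-%m/%d",
    "%Y%m%d", "%Y-%m-%d %H:%M:%S",
]

def matches_supported_format(value):
    """Return True if the string parses under one of the listed layouts."""
    for fmt in SUPPORTED_FORMATS:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            continue
    return False
```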
Dataset Maximum Sizes and Limits See System Sizing, Tuning, and Limits [page 2743]
Time Series Predictive Scenario • The Date and Date & Time formats that should be used in your dataset are the
following:
• YYYY-MM-DD
• YYYY/MM/DD
• YYYY/MM-DD
• YYYY-MM/DD
• YYYYMMDD
• YYYY-MM-DD hh:mm:ss
Note
While you can use this format in both live and acquired datasets, the
seconds (ss) won't be taken into account during the training of your
predictive models.
Where YYYY stands for years, MM stands for months, DD stands for the day of the
month, hh stands for hours from 0 to 24, mm stands for minutes from 0 to 59,
and ss stands for seconds from 0 to 59.
Note
Regardless of the date granularity you select in your time series predictive
scenarios with a dataset as your data source, every date format should
include years, months and days. This means that even if you just want a
quarterly or monthly forecast, the date format in your dataset still needs to
include days.
Example
Let's say you use the YYYY-MM-DD date format, you can create time series
predictive scenarios where the date granularity can be:
• Year expressed as YYYY-01-01 where YYYY is variable (moving year).
• Quarter or Month expressed as YYYY-MM-01 where YYYY-MM is variable (moving month).
• Weekly data in the date format YYYY-MM-DD taking for instance the 1st
day of the week as the characters DD (moving week).
• Day (calendar dates) expressed as YYYY-MM-DD where YYYY-MM-DD is
variable (moving day).
• Smart Predict expects one date per period to learn on: if you want to forecast
your monthly sales, you provide one date per month representing the value of the
corresponding month.
• In a time series predictive scenario, you can define entities, each of which generates its
own predictive model; the models are trained simultaneously.
For example, if you define a column with countries as an entity, Smart Predict will
generate as many predictive models as there are countries in your data source.
• The following limits are recommended when using a time series forecast model:
• Number of forecast periods (independent of the number of entities): 500
maximum
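The granularity rules above (every date must carry year, month, and day parts, even for yearly or quarterly data) can be sketched as a small helper. This is an illustrative utility for preparing input dates, not part of SAP Analytics Cloud:

```python
def period_date(granularity, year, month=1, day=1):
    """Build a YYYY-MM-DD string following the granularity rules above:
    yearly and monthly periods still carry month and day parts."""
    if granularity == "year":
        return f"{year:04d}-01-01"          # moving year
    if granularity in ("quarter", "month"):
        return f"{year:04d}-{month:02d}-01"  # moving month
    return f"{year:04d}-{month:02d}-{day:02d}"  # moving week or day

# Monthly forecast input: one date per month, e.g. March 2024 -> "2024-03-01"
```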
Time Series Forecasts Smart Predict time series forecasts don't persist the settings for Number Formatting
selected by the user in the User Preferences section of SAP Analytics Cloud Profile
Settings.
Classification Predictive Scenario: In a classification predictive scenario, the target can only be a binary column, that is, a column that takes only two values, for example, true or false, yes or no, male or female, or 0 and 1. For this type of scenario, Smart Predict considers that the positive target value, or positive target category, of this column is the least frequently occurring value in the training dataset. However, to make sure your trained predictive model is reliable, you need to make sure that this category has a minimum representation in your training dataset. For example, if your dataset contains very few failures, your predictive model won't be able to predict the under-represented category, failures.
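The convention that the positive category is the least frequent value in the training dataset can be illustrated with a short sketch (the function name and sample data below are illustrative, not part of Smart Predict):

```python
from collections import Counter

def positive_target_value(target_column):
    """Return the least frequent value in a binary target column,
    mirroring the convention that the positive category is the
    rarer one in the training dataset."""
    counts = Counter(target_column)
    if len(counts) != 2:
        raise ValueError("A classification target must be binary.")
    # min() keyed on the count picks the under-represented value
    return min(counts, key=counts.get)

# "churn" occurs least often, so it is treated as the positive category
print(positive_target_value(["stay", "stay", "churn", "stay", "stay"]))
```

If the rare category is too thinly represented, this convention still picks it as the positive class, but the trained model has too few examples to learn from, which is exactly the reliability caveat above.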
Training a Predictive Model: Smart Predict currently excludes the following columns when training your predictive model:
Note
Date & Time is supported by Smart Predict.
Please note the following restrictions when using live datasets with your predictive models:
SAP HANA SQL Views using row-level security: You should not allow the creation of live datasets on top of SAP HANA SQL Views using row-level security (see Structure of SQL-Based Analytic Privileges). In Smart Predict, you access the dataset using the SAP HANA technical user configured at the data repository level, not using the SAP Analytics Cloud user profile. This could result in a security issue, as all SAP Analytics Cloud users would get access to the data accessible by the SAP HANA technical user. For more information, see Configuring a SAP HANA technical User in the On-Premise SAP HANA System [page 344].
Number of columns for live datasets: There is a limit of 1000 columns when using live datasets with predictive models.
Live Data Sources: Cloud deployments of SAP HANA systems are currently not supported.
Privileges for a SAP HANA technical user: A maximum of 4000 tables/SQL views are displayed when creating a live dataset through browsing. It is recommended that the SELECT privileges for a SAP HANA technical user be limited to only the tables/SQL views required for the predictive models. For more information, see Configuring a SAP HANA technical User in the On-Premise SAP HANA System [page 344].
Train and Apply steps with live datasets: Train or Apply operations using live datasets that last longer than 8 hours don't complete.
Date Format: For live datasets, the following default SAP HANA date formats are supported:
• DATE
• SECONDDATE
• TIMESTAMP
Related Information
Predictive Content Creator and Predictive Admin are the roles in the application that include permissions for
Predictive Scenarios.
The following tables detail what you can do in Predictive Scenarios, and which roles and permissions you need
to perform the action.
Note
If you are an admin, you can create custom roles. For more information, see Create Roles [page 2877] and
Standard Application Roles [page 2838].
Connections
Delete a connection x
Users
Delete a user x
Predictive Models
Predictive Scenarios
Data Repositories
* The Predictive Content Creator can effectively view the existing data repositories when creating a live dataset, but this does not mean the Predictive Content Creator can view the data repositories in the Administration User Interface.
** The Predictive Content Creator cannot access the Administration User Interface and therefore can't effectively see the data repositories.
For more information, see Adding and Configuring the Data Repository in SAP Analytics Cloud [page 346].
To consume a generated dataset in a story or model, please refer to the following chapter:
To consume predictions saved in a planning model version, you can create story tables or charts on your
planning model, selecting the private version used for predictive model application. For more information on
planning in tables, and creating a chart, please refer to the following chapters:
For required permissions on datasets, models, and stories please refer to the following chapters:
Note
The BI Admin and Planning Admin roles include all Predictive Admin permissions by default.
Related Information
Here are some explanations of terms that you'll encounter when you create a predictive scenario:
• Predictive Scenario: A workspace where you create and compare predictive models to find the one that provides the best insights to help solve a business question requiring predictions. Currently, you can choose between three different types: classification, regression, and time series forecasting.
• Predictive Model: The result found by Smart Predict after exploring relationships in your data using SAP
automated machine learning. Each predictive model produces visualizations and performance indicators
based on certain requirements that you have set, so you can understand and evaluate the accuracy of the
predictive results. You'll probably want to experiment a bit with different predictive models, varying the
input data, or the training settings, until you are satisfied with the accuracy and relevance of the results.
• Data source: The form and origin of the data that you'll use to create a predictive model. This could be a
dataset in a database or a planning model in an SAP Analytics Cloud story.
• Target: The variable that you want to explain or predict values for. Depending on your data source, this is the column or dimension that you're interested in.
Note
In the Smart Predict documentation the term variable is used to mean either column or dimension.
However, in the user interface and messages, you'll see the specific term for the data source being
used: columns in datasets and dimensions in planning model versions.
• Entity: Only used in time series forecasting predictive scenarios. You can split up a population into distinct
sections called entities. A predictive model is created for each entity allowing you to get more accurate
forecasts aligned with the entity's particular characteristics.
• Influencers: The variables that have an influence on the target. By default, the predictive model considers all the columns or dimensions as influencers, and during training will only retain the significant ones. You can choose to exclude influencers that you consider not worth including in the training. This is useful when dealing with large data sources.
• Training: The process that uses SAP automated machine learning to explore relationships in your data and
find the best combinations. The result is a formula, your predictive model, that can be applied to new data
to obtain predictions.
Related Information
A Predictive Scenario is a preconfigured workspace that you use to create predictive models and reports to
address a business question requiring the prediction of future events or trends. You choose the one that is
relevant to the type of predictive insights you are looking for.
Restriction
Smart Predict is not available in all regions or for all tenant types.
You can find examples of how to create and use the predictive scenarios available in Smart Predict in our playlist on YouTube or by looking at the individual videos:
• Overview on the Predictive Scenarios available (Basic Concept in Smart Predict: The Predictive Scenario): Explore the different predictive scenarios currently available in Smart Predict: Classification, Regression, and Time Series Predictive Scenarios.
• Classification (Smart Predict: Finding the best classification predictive model): Using a simple scenario, you will create a Classification Predictive Scenario. You will create a first predictive model and check the accuracy of your predictions. Then, you will duplicate your predictive model and train it using another dataset. You will finally compare the two predictive models to find the best one.
• Classification (Smart Predict: Understanding the confusion matrix): In the video Smart Predict: Finding the best classification predictive model, you have built a predictive model to answer your business issue. Now, you will interpret the results of the confusion matrix for this business case and understand how to set a threshold (or cut-off point) to best categorize your target group.
• Classification (Smart Predict: Using the profit simulation): In the previous videos, you have created a predictive model to answer your business issue, interpreted the confusion matrix, and set a threshold. Now, you will use the profit simulation of Smart Predict to calculate the return on investment using this predictive model, and set the ideal threshold that allows you to optimize your profit.
• Classification (Smart Predict: Applying a classification predictive model): In a previous video, we created a classification predictive model to identify which customers to contact with a marketing campaign. Now, we'll use this predictive model on actual customer data to create the output dataset containing the answer to this question.
• Regression (Smart Predict: Debriefing a Regression Predictive Model): Using a simple scenario, we will create a Regression Predictive Scenario and go through the generated reports to evaluate the accuracy of the predictive model.
• Time Series Forecasting (Smart Predict: Creating a Time Series Predictive Scenario): Using a simple scenario, you will create a Time Series Predictive Scenario. You will create a first predictive model and go through the different settings to be filled in to create the scenario.
• Time Series Forecasting (Smart Predict: Debriefing a Time Series Predictive Model): In the video Smart Predict: Creating a Time Series Predictive Scenario, you have created a Time Series Predictive Model in your Predictive Scenario. Now you will go through the generated reports and evaluate the accuracy of your predictive model.
• Time Series Forecasting (Smart Predict: Creating a segmented time series predictive model): Using a simple scenario and the segmented variable of a Time Series predictive model, you will create several predictive models at once. You will get accurate predictions per predictive model, considering the characteristics of each individual segment.
Once you've created a predictive scenario, you add its first predictive model. You start by selecting a data
source that contains historical data, then specify the target that you want to explain or predict the values for.
The predictive model is built using SAP automated machine learning algorithms that explore relationships in
the data, and find the best combinations. This is called training the predictive model, and the result is the
predictive model that can be applied to new data to obtain predictions.
Each predictive model produces visualizations and KPIs that help you understand and evaluate the accuracy of
a predictive model.
Depending on your business question, you will probably want to experiment a bit with different predictive
models, varying the training data and settings to deliver a more accurate or relevant predictive output.
When you are confident that you have a trained predictive model that generates results that satisfy your
business question, you can apply that predictive model to new data.
In the Smart Predict documentation the term variable is used to mean either column or dimension. However,
in the user interface and messages, you'll see the specific term for the data source being used: columns
in datasets and dimensions in planning model versions. Rows contain the observations for the variable. For
example, in a database containing information about your customers, the <name> and <address> of those
customers are variables.
In a predictive scenario, variables have different roles that you assign when defining the predictive goal and
the training requirements for a predictive model. For example, a variable can be a target, another can be
Related Information
Continuous: Values are numerical, continuous, and sortable. They can be used to calculate measures; for example, mean or variance. During modeling, a continuous variable may be grouped into significant discrete bins.
Example: The variable <salary> is both a numerical variable and a continuous variable. It may, for example, take on the following values: <$1,050>, <$1,700>, or <$1,750>. The mean of these values may be calculated.
Ordinal: Values are discrete. They can be regrouped into categories and are sortable.
Example: The variable <school grade> is an ordinal variable. Its values actually belong to definite categories and can be sorted.
Nominal: Values are discrete. They can be regrouped into categories.
Caution: Binary variables (variables with 2 distinct values only) are considered as nominal variables. They are the ones that can be used as the target for classification predictive models.
Example: The variable <zip code> is a nominal variable. The set of values that this variable may assume are clearly distinct, non-ranked categories, although they happen to be represented by numbers. For example: <10111>, <20500>, or <90210>. The variable <eye color> is a nominal variable. The set of values that this variable may assume are clearly distinct, non-ordered categories, and are represented by character strings. For example: <blue>, <brown>, <black>.
Textual: A type of nominal variable containing phrases, sentences, or complete texts. Textual variables are used for text analyses.
Note: These variables are currently not supported by Smart Predict, and are therefore excluded from the training of a predictive model.
Example: The variable <Bluetooth Headphones Customer Feedback> is a textual variable. The values for this variable can be <Durable cord, connect easy to phone and plug.>, <Great fit and great sound!> or <Great length and color. Super fast charging.>.
Note
During training, the values of the categorical variables are regrouped into homogeneous categories. These
categories are then ordered as a function of their relative contribution with respect to the values of the
target variable. For more information, see Category Influence [page 2093].
• String
• Integer
• Number
• Boolean
• Date
• Date and Time
• Time
• Angle
Note
Variables with Time (not timestamp) and Textual storage formats aren't currently supported by Smart
Predict. These variables won't be considered when training the predictive model.
A variable corresponds to a column in a dataset or a dimension in a planning model. The observations relating
to each variable correspond to the rows. Variables that have been specified as a target, or an entity identifier,
are not considered as influencers. Unless you exclude certain influencers, all other variables are treated as
influencers. The training retains the most significant ones for the predictive model reports for debriefing.
Date: The variable used for the date values.
Note
This variable is mandatory for a time series predictive scenario.
The date formats that should be used in your dataset are the following:
• YYYY-MM-DD
• YYYY/MM/DD
• YYYY/MM-DD
• YYYY-MM/DD
• YYYYMMDD
• YYYY-MM-DD hh:mm:ss
Note
If you use the YYYY-MM-DD date format, for example, you can create Time Series Predictive Scenarios where the date granularity can be year, quarter, month, week, or day.
Example
If a predictive model has the target variable <has bought the product Yes/No>, you should exclude the influencer <Billing amount> if it contains the cost for the product.
Tip
If there is a variable that is influencing the prediction at a very high level, then there is a chance that it is a leak variable.
When you create a new predictive model, you can edit properties and specify descriptive information for a
column, to ensure that the columns are identified in the right category for the predictive model generation.
Note
Any updates you make in the dataset are permanent: if you (or another user) reuse this dataset in another predictive model or another predictive scenario, the changes will remain.
Storage (data type for the column):
• String
• Integer (note that telephone or account numbers should not be considered numbers)
• Number
• Boolean
• Date
• Date and Time
• Time
• Angle
Type (statistical data type):
• Continuous: columns whose values are numerical, continuous, and sortable. They can be used to calculate aggregations (such as min, median, or max).
• Nominal: columns that label data. They have no quantitative value (such as 1 and 2 to indicate male or female).
Tip
While creating a predictive model, if the column you want to select as target or entity (time series forecasting) isn't available, it is likely that this column wasn't assigned the right data type. You can correct this here in Edit Column Details.
Missing: For example, if you enter #Empty, then any rows with no entries will receive #Empty as a value.
Key: Your dataset needs to have at least one key column if you use a regression predictive model.
In Smart Predict, you need to provide data so that your predictive model can be trained or applied to generate
the predictions. This data can be organized using different formats depending on the type of predictive model:
Related Information
When you create a predictive model, you initially specify a training data source, a target variable, and then
define additional training settings. Training is a process that uses SAP automated machine learning algorithms
to find the best relationships or patterns of behavior in the data source. The result is a predictive model that
you can apply to a new data source to predict with a probability what could be the value of the target for each
element of the data source.
While creating the predictive model, you've selected a training data source. As the values of the target variable are known in this data source, the data can be used to evaluate the accuracy of the predictive model's results.
During the training process, the data source is cut into sub-sets using a process called partition strategy, with
a final partition used to validate the predictive model's performance, using a range of performance indicators
and graphical tools. For more information regarding the partition strategy, refer to the related link.
The following graphics summarize what happens when you click Train with a training dataset as the data source:
• If the training is successful, the predictive model produces a range of performance indicators and graphical charts that allow you to analyze the training results. Assessing the accuracy and robustness of the training is called debriefing the predictive model. You can find more information on debriefing by clicking the related link Look for the Best Predictive Model at the end of this page.
• If the training is not successful, use the Status Panel (click the Settings panel icon) to access detailed information on why warnings and errors were generated during the training process.
Smart Predict takes into account neither the textual variables nor the time variables that are contained in the columns of your training dataset. Therefore, these variables are excluded from the training of your Predictive Model. For more information, see the Restrictions.
Once you are satisfied with the accuracy and robustness of your predictive model, you can apply it to a new
data source for predictive insights, ensuring that the new data source has the same information structure as
the training data source. For more information on the apply step, click the related link below Generating Your
Predictions.
Related Information
A partition strategy is a technique that decomposes a training data source into two distinct subsets:
• A training subset
• A validation subset
Thanks to this partition strategy, the application can cross-validate the predictive models generated to ensure
the best performance.
The following table defines the roles of the two data subsets obtained using partition strategies.
For Time Series Forecast, the validation subset allows you to calculate the confidence interval (Error Min
and Error Max) of the predictions.
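As a rough illustration, a partition into training and validation subsets can be sketched as a simple chronological split (the actual split Smart Predict performs is internal and may differ; the ratio below is an assumption for illustration):

```python
def partition(rows, validation_ratio=0.2):
    """Split a training data source into a training subset and a
    validation subset. The validation subset is held out so the
    trained model can be checked against known target values."""
    cutoff = int(len(rows) * (1 - validation_ratio))
    return rows[:cutoff], rows[cutoff:]

train, validation = partition(list(range(100)))
print(len(train), len(validation))  # 80 20
```

The model is fitted on the training subset only; comparing its predictions against the held-out validation subset is what makes the cross-validation described above possible.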
Related Information
As the values of the target variable are known in your training data source, the data can be used to evaluate the
accuracy of the predictive model's results. Thanks to the partition strategy, Smart Predict first creates several predictive model versions and then cross-validates the results of these generated predictive models in order to keep the one with the best performance.
In the case of a classification model, Smart Predict bases the selection on the performance indicators Predictive Power and Prediction Confidence. It selects the model with the best compromise between perfect quality and perfect robustness.
In the case of a regression model, Smart Predict looks at the performance indicators Root Mean Squared
Error (RMSE) and Prediction Confidence. It selects the model with the best compromise between these two
performance indicators.
The selection of the best time series predictive model is based on the horizon-wide MAE: The time series
predictive model is applied on the past observations found in the validation set. For each period, the predictive
model calculates as many forecasted values as requested by the analyst. This is called the horizon of forecasts.
Each of those forecasted values is compared to the corresponding actual one. Then, for each possible horizon,
a per-horizon MAE can be calculated. The horizon-wide MAE is the mean of all per-horizon MAE values.
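The horizon-wide MAE calculation described above can be sketched in a few lines (an illustrative reimplementation, not Smart Predict's internal code):

```python
def horizon_wide_mae(actuals, forecasts_by_origin, horizon):
    """Compute the horizon-wide MAE.

    forecasts_by_origin[i][h] is the forecast made at period i for
    h+1 steps ahead; the matching actual is actuals[i + h + 1].
    For each horizon h, a per-horizon MAE is computed; the
    horizon-wide MAE is the mean of those per-horizon values.
    Assumes every forecast has a matching actual in the validation set."""
    per_horizon_mae = []
    for h in range(horizon):
        errors = [
            abs(forecasts[h] - actuals[i + h + 1])
            for i, forecasts in enumerate(forecasts_by_origin)
            if i + h + 1 < len(actuals)
        ]
        per_horizon_mae.append(sum(errors) / len(errors))
    return sum(per_horizon_mae) / len(per_horizon_mae)

# Two forecast origins, each forecasting 2 periods ahead
print(horizon_wide_mae([10, 12, 14, 16], [[11, 15], [13, 17]], 2))  # 1.0
```

Averaging over all horizons rewards models that stay accurate across the whole requested horizon of forecasts, not just at the first step.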
Create a predictive scenario and work with one or more predictive models.
Context
You create a predictive scenario that corresponds to the type of information you need to answer a business
question. The predictive scenario is where you create and store one or several predictive models.
Tip
• From the side navigation, select Predictive Scenarios. On the Predictive Scenarios start page, go to
the Recent Files section, where you can see the last 25 files you worked on.
• From the side navigation, select Files. On the Files page, you can go to a public, private, or
workspace folder to find your files.
Procedure
1. From the Predictive Scenarios start page, select the type of predictive scenario that best fits your
business question.
Tip
You can create a predictive scenario from the Files page by choosing (Create) Predictive Scenarios.
The New Predictive Scenario dialog appears. It lists files and folders in the Files repository. If you already
have a user folder, you can store your predictive scenario there, or you can create a new folder.
2. Select a location (for example, public, private, or workspace folder) to save the file.
3. Enter a meaningful and unique name and an optional description. The description can provide the business
intent of the predictive scenario, such as a description of the problem the predictive scenario is trying to
solve. For example, predicting who will buy a product, or creating groups of customers.
4. Select OK.
Your predictive scenario is created and the Settings pane opens automatically. The pane contains the parameters and options that define your predictive scenario's first predictive model. The settings available depend on the type of predictive scenario you select.
Predictive scenarios can also be added as predictive steps in multi actions. They can then be triggered in a story or analytic application, or scheduled as a calendar task. For more information on multi actions, refer to Automate a Workflow Using Multi Actions [page 2347] - SAP Help Portal.
Once you've created your predictive scenario, you can add your first predictive model.
Depending on the type of predictive scenario you have created and the training data source type you will use to
train your predictive model, the predictive model settings can be different.
Note
• Using an acquired dataset as data source: Don't forget that an acquired dataset must be uploaded to SAP Analytics Cloud, under the Files area, before you can use it.
• Using a live dataset as data source: Before you start using live datasets as a data source, you need to check with your administrator that your live SAP HANA data repository works. For more information, see Setting up Live SAP HANA Data Access for Smart Predict [page 337].
• Using a planning model as data source (Time series predictive scenario only): Before you can use your planning model as a data source, you must ensure that you have at least read access to it. There are also some specificities when currency conversion is enabled. See Use Currencies with Predictive Planning [page 2309].
Note
Keep in mind that your training and application data source must come from the same data source
location. You can't apply a predictive model on a live dataset if it was trained with an acquired dataset, nor
can you apply a predictive model on an acquired dataset if it was trained using a live one.
Regarding planning models: the same planning model you use as a source is the one where you save the predictive forecasts back.
Related Information
Before you train your classification or regression predictive model, you need to specify how you want your
predictive model to be trained through the Settings panel.
The following sections mirror the sections of the Settings pane you need to complete to create your predictive
model.
Description: Enter what your predictive model is trying to do. For example, predict if a customer will churn or not.
Training Data Source: Browse and select the data source that contains your historical data. The data source can be an acquired dataset or a live dataset.
Edit Column Details: Check and update if necessary the columns contained in your data source. You might need to check the statistical type if you cannot select the column as your target at the next step.
Predictive Goal
Target: Select the column from your data source that contains the information you want to get predictions for. For a classification predictive model, the target column must contain binary values only (for example: yes or no).
Influencers
Exclude as influencer: Select the influencers that should not be taken into consideration by the predictive model. All of the influencers contained in your training data source can influence the target to a greater or lesser extent.
Example
Imagine that you want to launch a phone survey. You decide to limit the survey to 3 questions. In this case, as you need to focus on the questions that best influence the prediction, you check the option Limit Number Of Influencers and set Maximum Number of Influencers to 3.
Click the Train button. Thanks to the generated reports, you can analyze the predictive model performance and decide if you need to further refine your predictive model or if you can use it with confidence. For more information, see Looking for the Best Predictive Model [page 2074].
Before you train your Time Series predictive model using a dataset as data source, you need to specify how you
want your predictive model to be trained through the Settings panel.
The following sections mirror the sections of the Settings pane you need to complete to create your predictive
model.
Note
Don't forget that the date formats in your time series data source must be:
• YYYY-MM-DD
• YYYY/MM/DD
• YYYY/MM-DD
• YYYY-MM/DD
• YYYYMMDD
• YYYY-MM-DD hh:mm:ss
where YYYY stands for the year, MM stands for the month, DD stands for the day of the month, hh stands for hours, mm stands for minutes, and ss stands for seconds.
Example
January 25, 2018 will take one of the following supported formats:
• 2018-01-25
• 2018/01/25
• 2018/01-25
• 2018-01/25
• 20180125
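These layouts can be checked with standard date parsing. In the hypothetical helper below, the documentation's tokens YYYY, MM, and DD are mapped to Python's %Y, %m, and %d format codes:

```python
from datetime import datetime

# Date-only layouts supported for time series data sources
SUPPORTED_FORMATS = ["%Y-%m-%d", "%Y/%m/%d", "%Y/%m-%d", "%Y-%m/%d", "%Y%m%d"]

def is_supported_date(value):
    """Return True if the string matches one of the supported layouts."""
    for fmt in SUPPORTED_FORMATS:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass  # try the next layout
    return False

# All the January 25, 2018 spellings above parse successfully
print(all(is_supported_date(d) for d in
          ["2018-01-25", "2018/01/25", "2018/01-25", "2018-01/25", "20180125"]))
```

A layout such as DD-MM-YYYY (for example, 25-01-2018) would be rejected by this check, matching the restriction that the year must come first.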
General
Description: Enter what your predictive model is trying to do. For example, forecast product sales by city.
Time Series Data Source: Browse and select the data source that contains your historical data.
Edit Column Details: Check and update if necessary the columns contained in your data source. You might need to check the statistical type if you cannot select the column as your target at the next step.
Predictive Goal
Number Of Forecast Periods: Select the number of predictive forecasts you want. See How Many Forecasts can be Requested? [page 2069].
Entity: Select up to five columns from your data source for which you want to get distinct forecasts. This field is optional. It identifies each entity that you want to get predictive forecasts for. The predictive model will capture specific behaviors for each entity and will generate distinct predictive forecasts.
Train Using: Select whether you want to train your predictive model using all observations or a window of observations. If you choose to use a window of observations, you'll need to specify the window size you want to use. It can be useful to define the range of observations that will be used to train the predictive model. You may want to ignore very old or inappropriate observations so that your predictive model doesn't learn from obsolete or inappropriate behavior.
Example
If you want to forecast travel costs for next year, you might want to ignore a couple of months in your past data where travel has been frozen for budget reasons.
Until: Select whether you want to train your predictive model until the last observation or another date of your choice. Last Observation lets the application use the last training reference date as a basis.
Force Positive Forecasts: Switch the toggle on if you want to get positive forecasts only. This turns negative predictive forecasts to zero, which can be useful when predictive forecasts only make sense as positive values. For example, if you need to forecast the number of sales for one of your main products by major cities for a region, it makes no sense to get negative values: either you sell a number of products or you sell none of them. Negative values will be turned to 0.
Click the Train & Forecast button. Thanks to the generated reports, you can analyze the predictive model performance and decide if you need to further refine your predictive model or if you can use the predictive forecasts with confidence. For more information, see Analyzing the Results of Your Time Series Predictive Model [page 2113].
The field Number of Forecast Periods allows you to select the number of predictive forecasts to generate.
You can set the number of predictive forecasts that corresponds to your business needs. However, this number is subject to some limits:
• If your time series data source is a dataset that contains future values for influencers, your number of predictive forecasts needs to be less than or equal to the number of future value observations you have in your data source.
Example
If you have future values for the next six months, then the number of predictive forecasts requested cannot exceed six.
• The number of predictive forecasts delivered with confidence intervals is determined as follows:
• If the time series data source size is equal to or fewer than 12 periods, it is treated as a small data source and by default the number of predictive forecasts with confidence intervals is set to 1.
• In the other cases, the number of predictive forecasts with confidence intervals is set to 1/5 of the time series data source size.
Example
If your time series data source contains 1,000 observations, Smart Predict can provide up to 200
predictive forecasts with confidence intervals. If you ask for more than 200 predictive forecasts,
the accuracy of the forecasts starting from the 201st cannot be evaluated.
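The rule for forecasts with confidence intervals can be summarized in a small helper (an illustrative sketch of the rule as stated above, not an official API; the rounding for non-multiples of 5 is an assumption):

```python
def forecasts_with_confidence(n_observations):
    """Number of predictive forecasts delivered with confidence
    intervals: small data sources (12 periods or fewer) get 1;
    otherwise 1/5 of the data source size."""
    if n_observations <= 12:
        return 1
    # integer division assumed for data sizes not divisible by 5
    return n_observations // 5

print(forecasts_with_confidence(1000))  # 200, as in the example above
print(forecasts_with_confidence(10))    # 1, small data source case
```

Requesting more forecasts than this helper returns is still possible; the extra forecasts simply come without an evaluated confidence interval.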
Related Information
One way to increase your predictive model's accuracy is to add influencers to your dataset. These influencers can then be used by Smart Predict to improve its understanding of the relationships in your data.
Example
Your company notices that the maintenance costs of its stores are getting too high. You need to analyze them to see where to cut costs, but also to better predict future maintenance costs to avoid going over budget.
You create your first predictive scenario with a Time Series predictive model to assess the maintenance costs
per store. You choose the overall expenses as target, the date of these expenses as date variable and the store
ID as entity.
You train your first predictive model excluding the twenty-three possible influencers.
The Expected MAPE of your first predictive model in the debrief is at 26.71%.
Note
You want the percentage of your predictive model’s Expected MAPE to be as low as possible as it indicates
the percentage of error you can expect in your predictive forecasts.
You notice that some of the variables excluded as influencers, such as the number of Saturdays and Sundays, have a direct relation to the date dimension you used in your predictive model. You realize they impact the insights and could improve the accuracy of your predictive forecasts if they were included as influencers.
You create a second predictive model by duplicating your first predictive model. However, this time you include
all influencers and train your second model.
The Expected MAPE of your second predictive model in the debrief drops to 20.77%.
Related Information
Once you have set up the settings of your predictive model, you need to complete two additional steps before
you can evaluate its accuracy:
1. You save your predictive model to add it to the predictive model list of your predictive scenario. The
predictive model is saved with the status Not Trained.
Note
2. You train your predictive model. During training, Smart Predict explores and learns from relationships in
the data source to find the best combination or patterns of behaviour and generates the predictive model.
Its status is updated immediately in the predictive model list as Trained.
Note
It can happen that the training fails. In this case, check the Status panel by clicking the status icon. You can
also click the Status directly in the predictive model list.
Related Information
You can keep informed about what is happening during the modeling process by regularly checking out the
Status panel.
At the top of the Settings panel, click the status icon to access information on the predictive model and any
errors that occurred during the training or apply steps.
Note
The Status panel stays empty until your predictive model is trained.
View / Displays
Model Status: The area where you can access the error messages. For example, if the training failed, you can
get some information on what went wrong.
Detailed Logs: Logs that display the details of each step of the process. In case of a problem, they allow you
to provide SAP support professionals with information.
Once your predictive model is saved, it is added to the list of predictive models contained in your predictive
scenario with the status Not Trained. From this area, you are able to monitor the status of your predictive model
at each step. The predictive model list is located at the bottom of the screen.
Note
For Time Series predictive models that are split up into entities, errors and warnings are displayed directly in
the debriefing reports, and the exact error or warning is displayed per entity.
The Status panel is opened from the top of the Settings panel by clicking the status icon.
You've trained your first predictive model using your training data source and it's now time to evaluate if you
can use it with confidence to generate your predictions.
You can evaluate your predictive model's performance using a range of performance indicators. You can also
add new predictive models with different settings and compare their performances if you want to refine your
results.
As soon as your predictive model is saved, it is added to its predictive model list at the bottom of the screen.
You can view each predictive model's status and perform the following actions from the list:
Context
Procedure
Duplicate a predictive model to create a new version in the same predictive scenario.
Context
You want to experiment with new settings starting from a predictive model version that already exists. You can
duplicate the predictive model's settings to create a new version of the current predictive model to:
Procedure
1. Open the predictive scenario, which contains the predictive model you want to duplicate.
2. Open the Predictive Model list.
3. Click the icon next to the predictive model version you want to duplicate, and select Duplicate.
You create an exact copy of the original version of your predictive model.
Note
You can delete one or several predictive models from any of your predictive scenarios.
Context
Procedure
1. Open the predictive scenario, which contains the predictive model you want to delete.
2. Open the Predictive Model list.
You want to create a new predictive model from scratch in your predictive scenario.
Context
Your predictive scenario is created and you've already created at least a first predictive model. Now you want to
create a new predictive model to test new training settings and compare the results.
Procedure
The new predictive model is added to the predictive scenario and appears in the predictive model list. You
can now easily compare the existing predictive models to find the one that best fits your needs.
Predictive model performance indicators help you assess the performance and robustness of your predictive
model.
You can check the performance and robustness of your predictive model using several performance indicators.
The indicators available depend on the type of predictive model you have set up:
Regression predictive model: Root Mean Squared Error (RMSE) [page 2088]
You can assess the performance of your Time Series Forecasting model using one of the performance
indicators below:
The Expected MAPE (Mean Absolute Percentage Error) is the evaluation of the error made when using the
predictive model to estimate the future values of the target, over the horizon. For each actual observed value,
the predictive model calculates as many forecasted values as requested by the analyst. This is called the
horizon of forecasts. Each of those forecasted values is compared to the corresponding actual ones to calculate
a performance indicator. Then, for each horizon, a per-horizon performance indicator can be calculated. The
Expected value of the performance indicator is the mean of all per-horizon values that have been calculated
(Horizon-Wide performance indicator).
Example
If the Expected value of a performance indicator is 12%, the value of that same performance indicator is
expected to be close to 12% when using a forecasted value.
Related Information
The Expected MAPE (Mean Absolute Percentage Error) is the evaluation of the error made when using the
predictive model to estimate the future values of the target, over the horizon.
The MAPE is the mean of the absolute differences between actual and forecasted values, expressed as a
percentage of actual values.
The lower the MAPE, the better the model. A MAPE of zero indicates a perfect predictive model.
An Expected MAPE of 12% indicates that the error made when using a forecasted value will be close to 12%
of the actual value.
MAPE is often preferred because it's a simple percentage indicator and values of MAPE can be compared
between different entities.
However, the MAPE can be misleadingly high when the actual values are close to zero (because the error is
divided by a value close to zero), wrongly suggesting that the model is not accurate. In such cases, other
performance indicators should be preferred.
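The MAPE definition above translates directly into code (a generic Python sketch, not Smart Predict's internal implementation):

```python
def mape(actuals, forecasts):
    # Mean of the absolute differences between actual and forecasted
    # values, expressed as a percentage of the actual values.
    # Dividing by abs(a) is why MAPE explodes when actuals are near zero.
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

# 10% error, 10% error, and a perfect forecast average to 6.67%.
print(round(mape([100.0, 200.0, 400.0], [110.0, 180.0, 400.0]), 2))  # 6.67
```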
Related Information
The Expected MAE (Mean Absolute Error) is the evaluation of the error made when using the predictive model
to estimate the future values of the target, over the horizon.
The MAE is the mean of the absolute differences between actual and forecasted values, expressed in the unit of
the actual values.
The lower the MAE, the better the model. An Expected MAE of zero indicates a perfect predictive model.
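As a sketch (generic Python, not Smart Predict's internal implementation):

```python
def mae(actuals, forecasts):
    # Mean of the absolute differences between actual and forecasted
    # values, in the unit of the actual values.
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Errors of 10, 20, and 0 units average to an MAE of 10 units.
print(mae([100.0, 200.0, 400.0], [110.0, 180.0, 400.0]))  # 10.0
```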
Related Information
The Expected MASE (Mean Absolute Scaled Error) is the evaluation of the error made when using the
predictive model to estimate the future values of the target, whatever the horizon.
The MASE is the MAE of the model (mean of the absolute differences between actual and forecasted values),
divided by the MAE of a naive lag1 model (that is a model that would always use the last known value
as prediction). The MASE provides a comparison between the model performance and the naive model
performance.
The lower the MASE, the better the model performance. An Expected MASE greater than 1 indicates that the
model performance is worse than that of the naive model.
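The comparison against the naive lag-1 model can be sketched as follows (generic Python, not Smart Predict's internal implementation):

```python
def mase(actuals, forecasts):
    # MAE of the model: mean absolute difference between actual and
    # forecasted values.
    model_mae = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)
    # MAE of a naive lag-1 model that always predicts the last known value.
    naive_errors = [abs(actuals[i] - actuals[i - 1]) for i in range(1, len(actuals))]
    naive_mae = sum(naive_errors) / len(naive_errors)
    return model_mae / naive_mae

# Model errors of 5 units vs. naive errors of 20 units: MASE = 0.25,
# i.e. the model is four times better than the naive model.
print(mase([100.0, 120.0, 140.0, 160.0], [105.0, 115.0, 145.0, 155.0]))  # 0.25
```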
Related Information
The Expected RMSE (Root Mean Squared Error) is the evaluation of the error made when using the predictive
model to estimate the future values of the target, over the horizon.
The RMSE is the square root of the mean of the squared differences between actual and forecasted values,
expressed in the unit of the actual values.
The lower the value of the Root Mean Squared Error, the better the predictive model is. A perfect predictive
model would have a Root Mean Squared Error value of 0.
The Root Mean Squared Error has the advantage of representing the amount of error in the same unit as the
predicted column making it easy to interpret. If you are trying to predict an amount in dollars, then the Root
Mean Squared Error can be interpreted as the amount of error in dollars.
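As a sketch (generic Python, not Smart Predict's internal implementation):

```python
from math import sqrt

def rmse(actuals, forecasts):
    # Square root of the mean of the squared differences between
    # actual and forecasted values, in the unit of the actual values.
    n = len(actuals)
    return sqrt(sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / n)

# Squaring penalizes the 20-unit error more heavily than MAE would.
print(round(rmse([100.0, 200.0, 400.0], [110.0, 180.0, 400.0]), 2))  # 12.91
```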
14.5.1.5.1.5.1.5 Expected R²
The Expected R² (Coefficient of Determination or "R squared") is the evaluation of the error made when using
the predictive model to estimate the future values of the target, over the horizon.
The R² is the proportion of the variation of the target variable that is explained by the predictive model.
An Expected R² of 1 indicates a predictive model that perfectly captures the variation of the signal. Note that
R² does not indicate goodness of fit: a model with a high R² can predict the variation of the signal without
necessarily being precise on its level.
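The R² definition can be sketched as follows (generic Python, not Smart Predict's internal implementation):

```python
def r_squared(actuals, forecasts):
    # Proportion of the variation of the target explained by the model:
    # 1 - (residual sum of squares / total sum of squares).
    mean_actual = sum(actuals) / len(actuals)
    ss_total = sum((a - mean_actual) ** 2 for a in actuals)
    ss_residual = sum((a - f) ** 2 for a, f in zip(actuals, forecasts))
    return 1.0 - ss_residual / ss_total

# Forecasts that closely track the signal's variation score near 1.
print(round(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9]), 3))  # 0.992
```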
Related Information
You can assess the performance of your classification or regression model using one of the performance
indicators below:
The predictive power measures the ability of your predictive model to predict the values of the target variable
using the influencers present in the training data source.
The predictive power indicator takes a value between 0% and 100%. This value should be as close as possible
to 100%, without being equal to 100%.
A predictive power of 100% is a hypothetically perfect predictive model, where the influencers are capable of
accounting for 100% of the information in the target variable. In practice, however, this is usually an indication
that an influencer that is 100% correlated with the target variable was not excluded from the data source
analyzed. A good practice is to exclude this influencer when you define the settings of your predictive model.
Tip
To improve the predictive power of a predictive model, try adding new influencers to the training data
source.
Example
A predictive model with a predictive power of 79% is capable of accounting for 79% of the variation in the
target variable using the influencers in the data source analyzed.
No exact threshold exists to separate a “good” predictive model from a “bad” one in terms of predictive
power, as this depends on your business case. The predictive model of a customer-based scenario can be
considered "good" with a predictive power of 40%, while the predictive model of a finance-based scenario
usually requires a predictive power above 70% to be considered "good".
The prediction confidence indicates the capacity of the predictive model to achieve the same performance
when it is applied to a new data source, which has the same characteristics as the training data source. If the
distribution of data is different between the two data sources, the predictive model is no longer useful.
Prediction Confidence takes a value between 0% and 100%. This value should be as close as possible to 100%.
To improve your Prediction Confidence, you can add new rows to your data source, for example.
Predictive Power and Prediction Confidence are the two main indicators of performance for your predictive
model.
The following graph displays the predictive power and the prediction confidence calculation:
For classification predictive models, the classification rate corresponds to the ratio of correctly classified rows
to the total number of rows.
Example
A classification rate of 0.82 means that 82% of the rows in the training dataset are correctly classified by
the predictive model.
The classification rate is not very well adapted to unbalanced cases, when the target category is not
very frequent. For example, if there is only 1% of the target category, it's very easy to have a very high
classification rate. In such a case, check the Predictive Power or the Area Under the ROC Curve (AUC).
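Both the metric and its unbalanced-data pitfall are easy to see in code (generic Python, not Smart Predict's internal implementation):

```python
def classification_rate(actual, predicted):
    # Ratio of correctly classified rows to the total number of rows.
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)

# Unbalanced pitfall: with only 1% positives, a model that always
# predicts the negative class still scores a 99% classification rate.
actual = [1] + [0] * 99
always_negative = [0] * 100
print(classification_rate(actual, always_negative))  # 0.99
```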
The Area Under the ROC Curve (AUC) is another way to measure predictive model performance. It calculates
the area under the Receiver Operating Characteristic (ROC) curve. The AUC is linked to Predictive Power (PP)
according to the following formula: PP = 2 * AUC - 1. For a simple scoring predictive model with a binary
target, this represents the probability that a randomly chosen signal observation will have a higher score than a
randomly chosen negative observation (non-signal).
One advantage of the Area Under the ROC Curve is its independence from the target distribution. For
example, even if you duplicate each positive observation twice by duplicating rows in the dataset, the AUC of
the predictive model stays the same.
Tip
AUC is a good measure for evaluating a binary classification system. It is useful for cases where the target
category is not very frequent, unlike the Classification Rate.
Example
Sensitivity, which appears on the Y axis, is the proportion of CORRECTLY identified signals (true positives)
found (out of all true positives on the validation data source).
[1 – Specificity], which appears on the X axis, is the proportion of INCORRECT assignments to the signal
class (false positives) incurred (out of all false positives on the validation data source). (Specificity, as
opposed to [1 – specificity], is the proportion of CORRECT assignments to the class of NON-SIGNALS –
true negatives.)
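The probabilistic interpretation of the AUC, and the PP = 2 * AUC - 1 relation, can be checked with a small sketch (generic Python with hypothetical scores, not Smart Predict's internal implementation):

```python
def auc(positive_scores, negative_scores):
    # Probability that a randomly chosen positive observation scores
    # higher than a randomly chosen negative one (ties count as half).
    wins = 0.0
    for sp in positive_scores:
        for sn in negative_scores:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

pos, neg = [0.9, 0.8, 0.4], [0.7, 0.3, 0.2]
a = auc(pos, neg)
print(round(a, 3), round(2 * a - 1, 3))  # 0.889 0.778

# Independence from the target distribution: duplicating every
# positive observation leaves the AUC unchanged.
assert auc(pos + pos, neg) == a
```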
The Error Mean or Standard Error of the Mean quantifies the precision of the predictive model's estimations.
It's used to determine how precisely the mean of the predictive model's predicted values estimates the
population mean.
A mean of zero indicates that the mean of the predictive model is the same as the actual data. A mean value
close to 0 is better.
A negative mean value indicates that the predictive model always underestimates the values, and often
generates values under the actual values.
A positive mean value indicates that the predictive model overestimates the target values, and often generates
values above the actual values.
To improve the accuracy of your predictive model, you can bring additional influencers that make the target
clearer to the training data source.
The Error Standard Deviation or Standard Deviation is a measure of variability that quantifies how much the
errors vary from one another.
A low standard deviation is better because it indicates that the data points tend to be close to the mean of the
data source.
A high standard deviation indicates that the data points are spread out over a wider range of values.
To improve the accuracy of your predictive model, you can bring additional influencers that make the target
clearer to the training data source.
The Gini index is a measure of the predictive power based on the Lorenz curve. It is proportional to the area
between the random line and the predictive model curve: the Gini index is twice that area.
A value of 0 corresponds to a random predictive model; a value of 1 corresponds to an ideal predictive model.
The Maximum Error is the highest value resulting from the calculation of the absolute differences between the
predicted and the actual values for each row of the data source.
It's always positive and values closer to zero are better. It gives an idea of how far the prediction can be (at
worst) from the actual value.
The Root Mean Squared Error (RMSE) is one of the two main performance indicators for a regression predictive
model. It measures the average difference between values predicted by a predictive model and the actual
values. It provides an estimation of how well the predictive model is able to predict the target value (accuracy).
The lower the value of the Root Mean Squared Error, the better the predictive model is. A perfect predictive
model (a hypothetic predictive model that would always predict the exact expected value) would have a Root
Mean Squared Error value of 0.
The Root Mean Squared Error has the advantage of representing the amount of error in the same unit as the
predicted column making it easy to interpret. If you are trying to predict an amount in dollars, then the Root
Mean Squared Error can be interpreted as the amount of error in dollars.
You can improve the Root Mean Squared Error by adding more influencers to the training data source.
What is the formula used to calculate the Root Mean Squared Error?
The Root Mean Squared Error is calculated using the following formula:

RMSE = sqrt( (1/N) * Σ (predicted_i - actual_i)² )

where:
N = Number of observations
The Root Mean Squared Error can be interpreted as the standard deviation of the error (it's the square root of
the error variance).
The Influencer Count indicates the number of influencers used in the predictive model.
Tip
To improve the predictive power of your predictive model, you may want to add Influencers to the training
data source.
Tip
To improve the prediction confidence of a predictive model, you may want to add new observation rows to
the training data source. In case of a classification predictive model, keep in mind that the number of rows
of the class less represented must ideally be higher than 1000.
Once you've trained your classification predictive model, you can analyze its performance to make sure it's as
accurate as possible.
Use the dropdown list to access and analyze the reports on influencers and predictive model performance.
• Predictive Power is your main measure of predictive model accuracy. It takes a value between 0% and
100%. This value should be as close as possible to 100%, without being equal to 100% (100% would
be a hypothetically perfect predictive model; 0% would be a random predictive model with no predictive
power). To improve your Predictive Power, you can add more influencers, for example.
For more information, refer to Predictive Power [page 2083].
• Prediction Confidence indicates the capacity of your predictive model to achieve the same degree of
accuracy when you apply it to a new data source, which has the same characteristics as the training data
source.
Note
Depending on your business issue, you can look at the other provided performance indicators for the
predictive model, and also review the profile of the detected curve. For more information, refer to Assessing
Your Predictive Model With the Performance Indicators [page 2078] and The Detected Target Curve [page
2102].
Does the target value appear in sufficient quantity in the different data sources?
Get an overview of the frequency in each data source of each target class (positive or negative) that belongs to
the target variable.
It's usually recommended that you have at least 1000 records of each class in your data source. Under this
threshold, the validity of the prediction confidence is no longer guaranteed.
• If the influence value is positive, we are more likely to get the "minority class".
• If the influence value is negative, we are less likely to get the "minority class".
For more information, refer to Category Influence [page 2093], Grouped Category Influence [page 2094] and
Grouped Category Statistics [page 2094].
Is my model producing accurate predictions? Can I evaluate the costs/savings using this
model?
Use the Confusion Matrix tab and assess the predictive model performance in detail, using standard metrics
such as specificity.
Use the Profit Simulation tab and estimate the expected profit, based on costs and profits associated with the
predicted positive and actual positive targets.
For more information, refer to Confusion Matrix [page 2095], The Profit Simulation [page 2099].
Can I see any model errors? Is my predictive model producing accurate predictions?
Use a large panel of performance curves in the Performance Curves tab, to compare your predictive model to a
random predictive model and a hypothetical perfect predictive model:
• Determine the percentage of the population to contact to reach a specific percentage of the actual positive
target with The Detected Target Curve [page 2102].
What's next?
You have two possibilities:
• You are satisfied with your predictive model's performance after checking the performance indicators.
Then you can use it: See Generating Your Predictions [page 2128].
• You would like to see if you can improve your predictive model's performance:
• Duplicate your current predictive model and experiment with updated settings. You can then compare
the two versions and find the best one. See Duplicating a Predictive Model [page 2076].
• Update the settings of your predictive model and retrain it. See Define Settings and Train a
Classification or Regression Predictive Model [page 2151]
Caution
• Delete your predictive model. See Deleting a Predictive Model [page 2077]
Target Statistics are expressed as percentages and show how often each target class appears in the data source.
For each data source, the Target Statistics report lists the categories of the target variable. For each category,
the table indicates how often it appears compared to the other category. The target category (positive or
negative target) is by default the least frequent one.
The Influencer Contributions view allows you to examine the influence on the target of each influencer used in
the predictive model.
The most contributive ones are those that best explain the target.
Only the contributive influencers are displayed in the reports; influencers with no contribution are hidden.
The sum of their contributions equals 100%.
The number of influencers displayed depends on the predictive model settings you defined at the creation
step. For example, if you decided at the creation step to Limit Number Of Influencers to 3, then you get
information on at most the 3 most important influencers.
Related Information
Category influence is an analysis of the influence of different categories of an influencer on the target,
computed from basic information:
The higher the absolute value of the influence, the stronger the influence of the category is: categories with
values equal to 0 or close to 0 are categories with no influence on the target.
• Categories with positive values are categories where observations are more likely to be in the positive
category of the target: The percentage of positive targets within this category is above the percentage of
positive target in the whole data source.
• Categories with negative values are categories where the observations are more likely to be in the negative
category of the target: the percentage of positive target within this category is below the percentage of
negative target in the whole data source.
The influence is computed for each category and provided by the engine.
Related Information
Grouped Category Influence shows groupings of categories of an influencer, where all the categories in a group
share the same influence on the target variable. You can quickly see which category group has the most
influence.
The X axis represents the influence of the grouped categories on the target variable. The Y axis represents the
grouped categories of the influencer.
The length and direction of a bar show whether the category has more or fewer observations that belong to the
target category:
• A positive bar (influence on target greater than 0) indicates that the category contains more observations
belonging to the target category than the mean (calculated on the entire validation data source).
• A bar at 0 indicates that the category has no specific influence on the target.
• A negative bar (influence on target less than 0) indicates that the category contains fewer positive cases
(%) than the percentage of positive cases in the overall validation data source.
Grouped Category Statistics show the details of how the grouped categories influence the target variable over
the selected data source.
You can use the Validation and Prediction dropdown list to compare the results obtained by the predictive
model on training (training subset) and those obtained on validation (validation subset).
The Confusion Matrix, also known as an error matrix, is a table that shows a classification predictive model's
performance by comparing the predicted value of the target variable with its actual value.
Each column of the Confusion Matrix represents the observations in a predicted category, while each row
represents the observations in an actual class:
Predicted 1 / Predicted 0
Actual 1 (= Actual Positive Targets): Number of correctly predicted positive targets (True Positive = TP) /
Number of actual positive targets that have been predicted negative (False Negative = FN)
Actual 0 (= Actual Negative Targets): Number of actual negative targets that have been predicted positive
(False Positive = FP) / Number of correctly predicted negative targets (True Negative = TN)
The observations are classed into positive and negative target categories:
• Positive target (Predicted 1 and Actual 1): An observation that belongs to the population you want to target.
• Negative target (Predicted 0 and Actual 0): An observation that does not belong to this target population.
The Confusion Matrix reports the number of false positive, false negative, true positive, and true negative
targets. It is a good estimator of the error that would occur when applying the predictive model on new data
with similar characteristics.
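Counting the four cells from predicted and actual values can be sketched as follows (generic Python, not Smart Predict's internal implementation):

```python
def confusion_matrix(actual, predicted):
    # Cell counts for a binary target, where 1 is the positive category.
    counts = {"TP": 0, "FN": 0, "FP": 0, "TN": 0}
    for a, p in zip(actual, predicted):
        if a == 1:
            counts["TP" if p == 1 else "FN"] += 1
        else:
            counts["FP" if p == 1 else "TN"] += 1
    return counts

actual    = [1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0, 0, 0]
print(confusion_matrix(actual, predicted))  # {'TP': 2, 'FN': 1, 'FP': 1, 'TN': 4}
```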
By default, the Total Population is the number of records in the validation data source. This is a part of your
training data source that Smart Predict keeps separate from the training data, and uses to test the predictive
model's performance.
The classification model allows you to sort the Total Population from the lowest to the highest probability. To
get the predicted category, which is what you are interested in, you need to choose the threshold that
determines which observations get into that category and which don't. Sliding the threshold bar allows you
to experiment with this number to see the resulting Confusion Matrix for the population on which you want to
apply your predictive model.
Note
Refer to the section How is a Decision Made For a Classification Result? [page 2096] for information on how
Smart Predict automatically sets the threshold.
• Get a detailed assessment of your predictive model's quality. This is because it takes into account a
selected threshold that transforms a range of probability scores into a predicted category. You can also use
standard metrics such as specificity. For more information, see the related link.
• Estimate the expected profit, based on costs and profits associated with the predicted positive and
actual positive targets. For more information, see the related link.
In some cases, assessing the predictive model quality based on the error matrix is more relevant than using
metrics like the classification rate.
Example
In a business scenario where you want to detect fraudulent credit card transactions, the False Negative
(FN) class can be a better metric than the Classification rate. If your predictive model to detect the
fraudulent transactions always predicts "non-fraudulent", the Classification rate will be 99.9%.
The classification rate is excellent, but it isn’t a reliable metric to evaluate the real performance of your
predictive model because it gives misleading results. These results are usually due to an unbalanced data
source, where there is a lot of variation in the number of samples in different classes.
This performance issue will show up in the error matrix as a high False Negative (FN) count (the number of
actual fraudulent transactions detected as non-fraudulent by the predictive model).
Related Information
A predictive model predicts a probability for a targeted value, but the decision as to whether the value
belongs to one category or another (0 or 1) is based on setting a threshold, so that anything above it is Yes
and anything below is No. Smart Predict sets this threshold automatically for you.
The automatically determined threshold is the point where you have the same % of positive observations for
the applied data source population as you do for the training data source population.
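Turning probabilities into categories with a threshold can be sketched as follows (generic Python; the threshold and probabilities here are arbitrary examples, not ones Smart Predict would determine):

```python
def classify(probabilities, threshold):
    # Anything above the threshold is predicted positive (1),
    # anything at or below it is predicted negative (0).
    return [1 if p > threshold else 0 for p in probabilities]

probs = [0.15, 0.40, 0.55, 0.80, 0.92]
print(classify(probs, 0.5))  # [0, 0, 1, 1, 1]
```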
You can use the Confusion Matrix to compute metrics to associate with different needs.
Definition:
N = Number of observations
TP (True Positive) = Number of correctly predicted positive targets.
TN (True Negative) = Number of correctly predicted negative targets.
FN (False Negative) = Number of actual positive targets that have been predicted negative.
FP (False Positive) = Number of actual negative targets that have been predicted positive.
This example of a Confusion Matrix is based on one specific threshold and associated with a specific
percentage of the population and a specific percentage of attained positive target.
Example
A company wants to run a marketing campaign. They would like to target the campaign at the customers
who will answer positively to the campaign and to avoid unnecessary costs. They have built a model to
classify the customers into two categories:
• Positive Targets (Predicted 1 and Actual 1): The customers will respond positively to the campaign and
need to be contacted.
• Negative Targets (Predicted 0 and Actual 0): The customers will respond negatively to the campaign
and don't need to be contacted.
• By default, the application proposes to contact 24.1% of the population (see 1 on the graphic below).
Note
Above this threshold, customers will not be targeted for marketing actions.
• 24.1% of the population (see 3) is considered as positive cases and is selected in the marketing
campaign.
• The percentage of "True Positives" is 16.31% (see 4) whereas the percentage of actual positives is
23.86% (see 5).
• The Classification rate is 84.6%. This means that almost 85% of the customers will be correctly
classified into the two categories (answer positively/answer negatively to the campaign) when you apply
the predictive model to the validation data source.
• The sensitivity is 68.35%. This means that almost 70% of customers that will answer positively to
the campaign are correctly predicted as positive targets. These customers will be selected for the
campaign.
• The Specificity is 89.77%. This means that almost 90% of customers that will answer negatively to the
campaign are correctly predicted as negative targets. These customers will not be contacted for the
campaign.
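The metrics quoted in this example derive directly from the confusion matrix counts. A sketch with hypothetical counts (chosen to approximate the percentages above, not taken from the product):

```python
def matrix_metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    return {
        "classification_rate": (tp + tn) / n,  # share of all rows classified correctly
        "sensitivity": tp / (tp + fn),         # share of actual positives found
        "specificity": tn / (tn + fp),         # share of actual negatives found
    }

# Hypothetical counts: 100 actual positives, 300 actual negatives.
m = matrix_metrics(tp=68, fn=32, fp=30, tn=270)
print({k: round(v, 3) for k, v in m.items()})
# {'classification_rate': 0.845, 'sensitivity': 0.68, 'specificity': 0.9}
```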
Associate a profit/cost with the positive categories (observations that belong to the population you want to
target) of the Confusion Matrix.
You can visualize your profit based on the selected threshold, or automatically select the threshold based on
your profit parameters.
Set the threshold that determines which values are considered positive (see the related link) and provide the
following:
• a Cost Per Predicted Positive: you define a cost per observation classified as positive. This covers the costs
both for True Positive Targets (actual positive targets that have been predicted positive) and False Positive
Targets (actual negative targets that have been predicted positive).
• a Profit Per Actual Positive: you define a profit per True Positive Target (targets correctly predicted as
positive) identified by the Confusion Matrix.
The Total Profit table is updated accordingly to calculate your profit/cost. You obtain an estimation of the gap
between the gain of the action based on a random selection (without any predictive model) and the gain based
on the selection.
To see the threshold that will give you a maximum profit for the profit parameters you have set, click Maximize
Profit.
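The profit calculation and the Maximize Profit option can be sketched as follows (generic Python; the candidate thresholds and counts are hypothetical, not values from the product):

```python
def total_profit(tp, fp, cost_per_predicted_positive, profit_per_actual_positive):
    # Every predicted positive (TP + FP) incurs the contact cost;
    # every true positive earns the profit.
    contacted = tp + fp
    return tp * profit_per_actual_positive - contacted * cost_per_predicted_positive

def maximize_profit(candidates, cost, profit):
    # Pick the (threshold, TP, FP) candidate with the highest total profit,
    # mirroring what the Maximize Profit option does over thresholds.
    return max(candidates, key=lambda c: total_profit(c[1], c[2], cost, profit))

# Hypothetical (threshold, TP, FP) triples with a 2 EUR cost per
# contacted customer and a 20 EUR profit per actual positive.
candidates = [(0.3, 1800, 1500), (0.5, 1630, 780), (0.7, 1200, 300)]
best = maximize_profit(candidates, cost=2.0, profit=20.0)
print(best[0], total_profit(best[1], best[2], 2.0, 20.0))  # 0.3 29400.0
```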
This example of a Profit Simulation is based on one specific threshold and associated with a specific
percentage of the population, a specific percentage of attained positive target and a specific cost.
Associate a cost/profit
Example
As an example to understand how the profit simulation works, we will consider the same example as for the
error matrix.
In our Confusion Matrix example (see the related link for more information), we have decided on the following
threshold:
The marketing department has estimated that the cost per contacted customer is 2€ and that the profit per
customer who actually answers positively is 20€ (see 3).
The total profit matrix is updated accordingly and displays the following results:
You obtain an estimate of the gap between the profit of the action based on a random selection (without
any predictive model): 8,314€ (see 4), and the profit based on this selection: 34,634€ (see 5).
Example
If, with those unit cost/profit values (see 1 on the graphic below), you select the option Maximize Profit, the matrix
is updated as follows:
Evaluate the accuracy of your predictive model using the performance curves.
Use the Performance Curves tab to compare the performance of your predictive model to a random and a
hypothetical perfect predictive model.
Related Information
Determine the percentage of the population to contact to reach a specific percentage of the actual positive
target.
The Detected Curve compares your predictive model to the ideal and random predictive models. It lets you
determine the percentage of the population to contact to reach a specific percentage of the actual positive
target.
Example
A company wants to do a mailing campaign. They have built a predictive model to determine which
customers to target with the campaign. The predictive model will classify the customers into two categories:
The predictive model debrief displays the following Detected Target curve:
• With a random predictive model, you would reach 30% of the positive population (= population that will
respond to the mailing).
• With a perfect predictive model, you would reach 100% of the positive population (= population that
will respond to the mailing).
• With the Smart Predict predictive model (the validation curve), you would reach 78% of the positive
population (= population that will respond to the mailing).
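A point on a Detected Target curve like the one above can be reproduced with a short sketch. The scores and outcomes are invented for illustration:

```python
def detected_fraction(scores, actuals, contact_fraction):
    """Fraction of all actual positives captured when contacting the
    top `contact_fraction` of the population, ranked by predicted score."""
    ranked = sorted(zip(scores, actuals), key=lambda p: p[0], reverse=True)
    n_contacted = round(len(ranked) * contact_fraction)
    captured = sum(actual for _, actual in ranked[:n_contacted])
    return captured / sum(actuals)

# Toy population: predicted scores and actual outcomes (1 = positive)
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
actuals = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]  # 4 actual positives
# Contacting the top 30% of this toy population captures 50% of the positives
```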
Related Information
Use the lift curve to see how much better your predictive model is than the random predictive model.
The lift is a measure of effectiveness, calculated as the ratio between the results obtained with and without
a predictive model. The lift curve evaluates predictive model performance in a portion of the population.
The X axis shows the percentage of the population and is ordered from highest probability to lowest probability.
The Y axis shows how much better your model is than the random predictive model.
Example
A company wants to do a mailing campaign. They have built a predictive model to determine which
customers to target with the campaign.
The predictive model will classify the customers into two categories:
• You would reach 3.09 times more positive cases with your predictive model than with a random
predictive model.
• A perfect predictive model would reach 4.19 times more positive cases than the random predictive
model.
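The lift ratio for a given slice of the population can be sketched as follows, again with invented scores and outcomes:

```python
def lift_at(scores, actuals, fraction):
    """Positive rate in the top `fraction` of the population (ranked by
    score) divided by the overall positive rate (the random model)."""
    ranked = [actual for _, actual in
              sorted(zip(scores, actuals), reverse=True)]
    top_n = round(len(ranked) * fraction)
    top_rate = sum(ranked[:top_n]) / top_n
    base_rate = sum(actuals) / len(actuals)
    return top_rate / base_rate

# Toy population: predicted scores and actual outcomes (1 = positive)
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
actuals = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
# The top 20% of this toy population is all positives: lift = 1.0 / 0.4 = 2.5
```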
See how your classification model handles the compromise between sensitivity and specificity.
This curve shows the True Positive rate against the False Positive rate as the detection threshold is varied:
• The X Axis shows the [1-Specificity]. It represents the proportion of actual negative targets that have been
predicted positive (False Positive targets).
• The Y Axis shows the Sensitivity. It represents the proportion of actual positive targets that have been
correctly predicted (True Positive targets).
Each point on the curve represents a Sensitivity/[1-Specificity] pair corresponding to a particular threshold, so
the closer the curve is to the upper left corner, the higher the overall accuracy of the predictive model.
Example
At 40% of False Positive targets (observations incorrectly assigned to the negative target) we see the
following:
• A random predictive model (that is, no predictive model) would classify 40% of the positive targets
correctly as True Positive.
• A perfect predictive model would classify 100% of the positive targets as True Positive.
• The predictive model created by Smart Predict (the validation curve) would classify 96% of the positive
targets as True Positive.
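Each Sensitivity/[1-Specificity] pair on the curve comes from one threshold, which a small sketch can make concrete (toy data, not the product's computation):

```python
def roc_point(scores, actuals, threshold):
    """One point of the curve: Sensitivity (Y) against [1-Specificity] (X)
    for the given detection threshold."""
    tp = sum(1 for s, a in zip(scores, actuals) if s >= threshold and a == 1)
    fp = sum(1 for s, a in zip(scores, actuals) if s >= threshold and a == 0)
    positives = sum(actuals)
    negatives = len(actuals) - positives
    sensitivity = tp / positives            # True Positive rate
    false_positive_rate = fp / negatives    # 1 - Specificity
    return false_positive_rate, sensitivity

# Toy population: predicted scores and actual outcomes (1 = positive)
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
actuals = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
fpr, sens = roc_point(scores, actuals, threshold=0.5)
# With this toy data: sensitivity 0.75 at a false positive rate of 1/3
```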
Check [1-Sensitivity] or Specificity against the population using the Lorenz Curve.
Using the selector, you can display the cumulative percentage for:
• [1-Sensitivity], where Sensitivity is the proportion of the actual positive targets that have been correctly
predicted,
• Specificity, which is the proportion of actual negative targets that have been correctly predicted.
The [1-Sensitivity] curve, also known as the "Lorenz Good" curve, displays the cumulative proportion of false negative
targets with regard to the selected population threshold.
The X Axis shows the percentage of the population ordered from the lowest to the highest probability. The Y
Axis shows the [1-Sensitivity], that is [1- the proportion of positive targets classified as True Positive]. This is
equivalent to the proportion of the missed positive targets.
The results are ordered from the lowest probability (on the left) to the highest probability (on the right).
The Specificity curve, also known as the "Lorenz Bad" curve, displays the cumulative proportion of actual
negative targets that have been correctly predicted with regard to the selected population threshold.
The X Axis shows the percentage of the population ordered from the lowest to the highest probability whereas
the Y Axis shows the Specificity.
• The positive targets would represent the population with a high risk: this population should not be granted
a credit.
• The negative targets would represent the population with a low risk: This population could be granted a
credit.
For the following examples, we will consider a threshold set at 80% of the population, selecting the customers
with the lowest probability of being unable to reimburse the credit.
Example
• A random predictive model (that is, no predictive model) would not identify 80% of the population
with a high risk (= population that should not be granted a credit).
• A perfect predictive model would not identify 17% of the population with a high risk (= population that
should not be granted a credit).
• The predictive model created by Smart Predict (the validation curve) would not identify 40% of the
population with a high risk (= population that should not be granted a credit).
• A random predictive model would classify 80% of the population with a low risk as True Negative (=
population that could be granted a credit).
• A perfect predictive model would classify 100% of the population with a low risk as True Negative (=
population that could be granted a credit).
• The predictive model created by Smart Predict (the validation curve) would classify 93% of the
population with a low risk as True Negative (= population that could be granted a credit).
The Density curves display the density function of the score (probability that an observation belongs to each
class) for positive and negative targets.
The length of an interval is its upper bound minus its lower bound.
The X axis shows the score and the Y axis shows the density.
As a default view, a line chart is displayed with the following density curves:
• The blue curve, Positives: This curve displays the distribution of population with positive target value per
score value.
As an example, check the density curves below. The first example is a good model because there is a small
overlapping zone with low density. This means the predictive model separates the positive and negative
cases well. In the second example, by contrast, you see a large zone with high density for both positive
and negative cases.
The fewer observations there are and the smaller the score interval for the overlap zone, the better it is.
Example
Example
Once you've trained your regression predictive model, you can analyze its performance to make sure it's as
accurate as possible.
Use the dropdown list to access and analyze the reports on influencers and predictive model performance.
• Root Mean Squared Error (RMSE) measures the average difference between values predicted by your
predictive model and the actual values. The smaller the RMSE value, the more accurate the predictive
model is.
• Prediction Confidence indicates the capacity of your predictive model to achieve the same degree of
accuracy when you apply it to a new data source, which has the same characteristics as the training data
source. It takes a value between 0% and 100%. This value should be as close as possible to 100%. To
improve your Prediction Confidence, you can add new rows to your data source, for example.
For more information, refer to Root Mean Squared Error (RMSE) [page 2088] and Prediction Confidence [page
2083].
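RMSE itself is a standard formula, which a minimal sketch can make concrete (the values below are arbitrary):

```python
import math

def rmse(actual, predicted):
    """Root Mean Squared Error: square root of the mean squared
    difference between actual and predicted values."""
    squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Each prediction below is off by exactly 1, so the RMSE is 1.0
```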
Get some descriptive statistics on the target value per data source.
Check how the top five influencers impact the target. Only the top five contributing influencers are displayed
by default.
In the Influencer Contributions report, analyze the influence of different categories of an influencer on the
target:
• If the influence value is positive, we are more likely to get the "minority value".
• If the influence value is negative, we are less likely to get the "minority value".
For more information, refer to Category Influence [page 2093], Grouped Category Influence [page 2094] and
Grouped Category Statistics [page 2094].
Can I see any errors in my predictive model? Is my predictive model producing accurate
predictions?
Compare the prediction accuracy of your predictive model to a perfect predictive model using a graph and
detect the predictive model errors very quickly.
What's next?
• You are satisfied with your predictive model's performance. Then you can use it: Generating Your
Predictions [page 2128].
• You would like to see if you can improve your predictive model's performance:
• Duplicate your current predictive model and experiment with updated settings. You can then compare
the two versions and find the best one. See Duplicating a Predictive Model [page 2076].
• Update the settings of your predictive model and retrain it. See Define Settings and Train a
Classification or Regression Predictive Model [page 2151].
Caution
• Delete your predictive model. See Deleting a Predictive Model [page 2077].
Quickly identify the predictive model errors thanks to the Predicted vs. Actual chart.
This chart shows the accuracy of your predictive model. It displays the actual target value as a function of the
prediction.
During the training phase, predictions are calculated using the training data source.
To build the graph, Smart Predict groups these predictions into 20 segments (or bins). Each segment represents
roughly 5% of the population.
By default, the following curves are displayed (but you can still customize the graph to fit your needs):
• The Validation - Actual curve shows the actual target values as a function of the predictions.
• The hypothetical Perfect Model curve shows that all the predictions are equal to the actual values.
• The Validation - Error Min and Validation - Error Max curves show the range for the actual target values.
For each curve, a dot on the graph corresponds to the segment mean on the X-axis, and the target mean on the
Y-axis.
The area between the Error Max and Error Min represents the possible deviation of your current predictive
model: It's the confidence interval around the predictions.
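The 20-segment grouping described above can be sketched as follows. The binning details (equal-population bins ordered by prediction) follow the description here; the exact product algorithm may differ:

```python
def bin_predictions(predictions, actuals, n_bins=20):
    """Order (prediction, actual) pairs by prediction and split them into
    n_bins segments of roughly equal population. Return, per segment:
    (mean prediction, mean actual, min actual, max actual)."""
    pairs = sorted(zip(predictions, actuals))
    size = len(pairs) / n_bins
    segments = []
    for i in range(n_bins):
        seg = pairs[round(i * size):round((i + 1) * size)]
        if not seg:
            continue
        preds = [p for p, _ in seg]
        acts = [a for _, a in seg]
        segments.append((sum(preds) / len(preds), sum(acts) / len(acts),
                         min(acts), max(acts)))
    return segments

# Toy data: 40 predictions, each actual value off by +1 or -1
predictions = list(range(40))
actuals = [p + (1 if p % 2 else -1) for p in predictions]
segments = bin_predictions(predictions, actuals)  # 20 segments of 2 points
```

The per-segment min and max play the role of the Error Min and Error Max curves in this sketch.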
What can the chart tell you about your predictive model's accuracy?
You can draw three main conclusions from your Predicted vs. Actual chart depending on the relative positions
of the curves on the graph:
If your Predicted vs. Actual chart falls between any of these three cases, the prediction confidence indicator
remains the best way to assess your predictive model's accuracy.
Example
You are working for an insurance company. You want to adapt clients' premium rates according to their age
while accounting for their risk of sudden death. You want to make sure the age tiering is accurate.
The predictive model debrief displays the following Predicted vs. Actual graph:
In our example, when the prediction (in blue) is 45 years old, the actual value ("validation value" taken from
our historical data) is 44.75 years old. The error min and error max calculated by our predictive model are
respectively 33.17 years old and 56.34 years old.
As you can see, the blue curve (our predictive model) and the green curve (the hypothetical perfect model)
are very similar, which means that you can rely on the predictions.
For a continuous target, the Target Statistics give descriptive statistics for the target variable in each data source:
Name Meaning
Minimum: Minimum value found in the data source for the target variable.
Maximum: Maximum value found in the data source for the target variable.
Standard Deviation: Measure of the extent to which the target values are spread around their average.
Once you've trained your Time Series predictive model, you can analyze its performance to make sure it's as
accurate as possible.
Analyze the reports to get information on your predictive model composition and evaluate your predictive
model performance.
Is the main performance indicator high enough to consider my predictive model robust and
accurate?
Check the quality of your predictive model performance over one of the proposed performance indicators:
The performance indicators evaluate the "error" that would be made if the forecast was calculated in the past
where the actual values are known. Usually, the lower the performance indicator, the better your predictive
model performance (except for the R² indicator).
The Expected MAPE is the default performance indicator but you can choose any of the proposed performance
indicators to evaluate your model performance.
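As an illustration of how a MAPE-style indicator works, here is the plain MAPE formula with arbitrary values (SAP's Expected MAPE is a product-specific variant; this sketch only shows the underlying idea):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: average of |actual - forecast|
    relative to |actual|. Lower is better; undefined if an actual is 0."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

# Both forecasts below are off by 10% of the actual, so the MAPE is 0.1 (10%)
```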
What are the predicted forecast values provided by the predictive model?
Analyze the predicted values for the predictive model over a set of known data from the training data source.
Check if there are outliers in the forecast and detect anomalies on the target.
For more information, refer to The Predictive Forecasts [page 2116], The Time Series Outliers [page 2117] and
The Time Series Outliers (Future) [page 2117].
For more information, refer to The Forecast vs. Actual Graph [page 2115] and The Time Series Outliers [page
2117].
What are the past data that most influences the target?
Identify whether the target is influenced by the recent or distant past in the case of an autoregressive
component.
The lags are numbered with negative integers representing their distance in the past from the forecast. Lag -1 is
the point in the past just before the forecast. Lag -2 is two points in the past.
For more information, refer to Past Target Value Contributions [page 2125].
What's next?
You have two possibilities:
• You are satisfied with your predictive model's performance. Then you can use it: Saving Predictive
Forecasts Generated by a Time Series Predictive Model into a Dataset [page 2141] or Save Predictive
Forecasts Back into your Planning Model [page 2317].
• You would like to see if you can improve your predictive model's performance:
• Duplicate your current predictive model and experiment with updated settings. You can then compare
the two versions and find the best one. See Duplicating a Predictive Model [page 2076].
• Update the settings of your predictive model and retrain it. See Define Settings and Train a Time Series
Predictive Model [page 2152].
• Delete your predictive model. See Deleting a Predictive Model [page 2077].
If you choose to get predictive forecasts per entity, the reports for each entity are available in the Forecast
and Explanation tabs. If there are fewer than 20 entities, these reports are available automatically following
training. To view the report for an entity (for example, Product X, Store Y), select the column values that
together form that entity from the top left dropdown list in either tab.
If a predictive model contains more than 20 entities, the reports for each entity are not available automatically
following training, but are accessed on demand. You just have to select the entity, and after a slight delay, the
reports are created and made available. This ensures that time isn't lost creating reports for predictive
models with a high number of entities, when not all of those reports may be required at once. Once a report
is available, you can access it immediately at any time afterwards.
The Target Statistics is described in detail in the Forecast tab of your report. The Target Statistics describe
information on the target, the minimum, maximum, and average (mean) values, as well as the standard
deviation measure.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
The Forecast vs. Actual graph appears in the Forecast tab of your report. The Forecast vs. Actual graph shows
curves for the predicted values (forecast) and actual values (target) for the time series data source. You can
then quickly see how accurate your predictive model is. The predictions are displayed at the end of the graph.
For each forecasted value, the predictive model shows an estimation of the minimum and maximum error. The
area between this upper and lower limit of the possible errors in the predictive forecasts produced by your
predictive model is called the confidence interval. It's only displayed for the predictive forecasts.
Outliers are values marked with a red circle on the graph (see The Time Series Outliers [page 2117] for
more information). The forecasting error indicator is the absolute difference between the actual and predicted
values.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
Note
You can display the data as a table. See Customizing the Visualization of Your Debrief [page 2126].
Related Information
Visualize a graphic representation of how a specific prediction is impacted by the various model components.
The Prediction Breakdown graph appears in the Forecast tab of your report. The Prediction Breakdown graph
shows a waterfall chart representing the impact of the model components on the predicted value for a specific
data point.
The total bar represents the predicted value. The other bars represent the impact of a specific component. A
bar with a positive value means that the associated model component increases the predicted value. A bar with
a negative value means that the associated model component decreases the predicted value. For example, if
the value of the Trend bar is 50, it means that the model trend increases the predicted value by 50.
The bars are ordered by decreasing impact on the predicted value, independently of the direction of the
impact. This order cannot be changed.
To maintain optimal readability, the Prediction Breakdown is limited to 10 bars. To ensure that limit is never
exceeded, the model components with the smallest impacts can be aggregated into positive or negative Others
bars.
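The folding of small components into Others bars can be sketched as follows. The exact grouping rules in the product may differ; this only illustrates keeping the 10-bar limit while preserving the total impact:

```python
def breakdown_bars(components, max_bars=10):
    """Order component impacts by decreasing absolute value; if there are
    too many, fold the smallest ones into 'Others' bars (one positive,
    one negative) so that at most max_bars bars remain."""
    ranked = sorted(components.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if len(ranked) <= max_bars:
        return ranked
    # Reserve up to two slots for the Others bars
    kept, rest = ranked[:max_bars - 2], ranked[max_bars - 2:]
    positive = sum(v for _, v in rest if v > 0)
    negative = sum(v for _, v in rest if v < 0)
    if positive:
        kept.append(("Others (+)", positive))
    if negative:
        kept.append(("Others (-)", negative))
    return kept

# 12 hypothetical component impacts on a predicted value
components = {f"component_{i}": i + 1 for i in range(12)}
bars = breakdown_bars(components)
```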
Restriction
The Prediction Breakdown graph is available only for the first 24 forecast points.
The Predicted Forecasts are described in the Forecast tab of your report.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
The Time Series Outliers (Future) are described in the Forecast tab of your report.
Anomalies are time series values that are outside the zone of possible error for the predictive forecast, which is
defined by the upper and lower limit.
Example
Your facilities department wants to monitor the electrical consumption of your building. The time series is
very regular with consumption peaks in the day time, low consumption in the night, and some seasonalities
related to vacations, for example.
A predictive model based on this time series will forecast a very low consumption at 11:00 PM.
At 11:15 PM, the predictive model is run again and the actual consumption for 11:00 PM is known. It is
very far from what the predictive model expected: an anomaly is detected.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
The Time Series Outliers (Past) are described in the Forecast tab of your report.
The default view (table) displays details about the outliers on the time series and the predictive forecasts.
An actual time series value is qualified as an outlier once its corresponding forecasting error is considered to be
abnormal relative to the forecasting error mean observed on the estimation data source. The forecasting error
indicator is the absolute difference between the actual and predicted values. This is also called the residue. The
residue abnormal threshold is set to 3 times the standard deviation of the residue values on an estimation (or
validation) data source.
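The 3-standard-deviations rule described above can be sketched directly (toy series; the product computes the threshold on the estimation or validation data source):

```python
def detect_outliers(actual, predicted, k=3.0):
    """Flag indices whose residue |actual - predicted| exceeds k times
    the standard deviation of all residues (k = 3 per the rule above)."""
    residues = [abs(a - p) for a, p in zip(actual, predicted)]
    mean = sum(residues) / len(residues)
    std = (sum((r - mean) ** 2 for r in residues) / len(residues)) ** 0.5
    return [i for i, r in enumerate(residues) if r > k * std]

# A flat series with one large spike at index 5
actual = [10.0] * 20
actual[5] = 50.0
predicted = [10.0] * 20
```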
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
The Time Series Breakdown is described in detail in the Explanation tab of your report. The elements in the
report can be different depending on the type of modeling technique used to build your predictive model.
Note
We describe the modeling techniques used by Smart Predict in the two tables below. In the first table, we
describe the time series breakdown modeling technique that may be applied when you're working with a
time series with limited disruptions. In the second table, we describe the smoothing technique that may be
applied when you're working with a disrupted time series that doesn't follow a regular trend or cycle.
Information Description
Trend The Trend is the general orientation of the time series. The
report can show linear or piece-wise trends.
Influencers These represent the part of the time series impacted by the
influencers specified in the field Influencers of the predictive
model settings.
For example, the predictive model can detect that the previous 2 values have an impact on the actual values.
For more information you can refer to the chapter called Past
Target Value Contributions [page 2125].
Residuals Residuals refer to what is left when the trend, cycles, and fluctuations have been extracted from the
initial time series. Residuals are neither systematic nor predictable. They reflect the part of the time series that
Smart Predict can't explain or model. The smaller the residuals, the better the predictive model. A good
predictive model produces residual data that contains no pattern.
Information Description
Residuals Residuals refer to what is left when the trend and cycles have been extracted from the initial time
series. Residuals are neither systematic nor predictable. The smaller the residuals, the better the predictive
model. A good predictive model produces residual data that contains no pattern.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
Related Information
The Relative Impact of a component represents the weight of that component in the absolute value of
the actuals across the time series. A Relative Impact of n% means that the considered component alone
represents n% of the actuals. The Relative Impact for the components adds up to 100%, including the Final
Residuals.
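A decomposition that reproduces a 91% / 7% / 2% split like Example 1 below can be sketched like this. The component values are invented; only the normalization logic (impacts summing to 100%, Final Residuals included) is illustrated:

```python
def relative_impacts(components):
    """Weight of each component (including Final Residuals) as a share of
    the summed absolute component values, normalized to 100%."""
    totals = {name: sum(abs(v) for v in values)
              for name, values in components.items()}
    grand_total = sum(totals.values())
    return {name: 100 * t / grand_total for name, t in totals.items()}

# Invented 4-point decomposition of a time series
components = {
    "Trend": [91, 91, 91, 91],
    "Yearly Cycle": [7, -7, 7, -7],
    "Final Residuals": [1, 2, -2, -3],
}
impacts = relative_impacts(components)
# impacts == {"Trend": 91.0, "Yearly Cycle": 7.0, "Final Residuals": 2.0}
```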
Examples
Example 1
In this example, the trend alone weighs for 91% of the actuals. The yearly cycle represents only small
variations around the trend and weighs for only 7% of the actuals.
Trend 91%
Yearly Cycle 7%
Final Residuals 2%
Example 2
Trend 71%
Final Residuals 6%
The section Impact of Cycles is displayed when some cycles are detected in the forecasted time series (see The
Time Series Breakdown [page 2118]). It provides details about how the target is impacted by cycles both for
the seasonal cycles and for the fixed length cycles. The cycles are named after their recurrence (Yearly Cycle, 6
Days Cycle, and so on).
Each bar represents the impact of the cycle for a given period, that is how much the cycle increases or
decreases the value predicted for this period.
Some cycles have a constant amplitude. This means that for a given period within the cycle the impact of
this period will be the same for any occurrence of the cycle. In such cases only one occurrence of the cycle is
displayed (there is only one series displayed in Impact of Cycles) as the impact for each period is identical for
any occurrence of the cycle.
In the example below, the impact on the prediction for Saturday (Sat) is -85027.87 and this impact is the same
every week.
Some cycles repeat over time with an amplitude which can change. For a specific period of the cycle the impact
is different for each occurrence of the cycle. To illustrate this evolution, the impact of the last 3 occurrences of
the cycle is displayed in Impact of Cycles (the chart has 3 series).
In the example below we can see that the amplitude of the cycle increases over time.
Smoothing Techniques
When a smoothing technique is used, Smart Predict always displays the values of the last 3 occurrences for
each period.
The Past Target Value Contributions identify the past observations that most influence the forecast.
The Past Target Value Contributions are described in the Fluctuations section of the Explanation tab of your
report.
At the step of identifying the model components, Smart Predict found that previous values of the time series
have an impact on the actual values.
The Past Target Value Contributions graph shows how the target is influenced by the recent past, or distant past
in the case of an autoregressive component.
The lags are numbered with negative integers representing their distance in the past from the predictive
forecast. Lag -1 is the point in the past just before the forecast. Lag -2 is two points in the past.
Example
Let's take the following example: we have created a predictive model to forecast the ozone rate for the next
12 months.
Thanks to this graph, you can identify whether the ozone rate is influenced by observed values in the recent
past. It also shows the most important dates. The lags are numbered with negative integers that represent
how far back in the past they are from the predictive forecasts. Smart Predict found that the 2 previous values
have an impact on the subsequent values, which is why the graph stops at lag -2. Using these lags, you can
analyze how the previous values influenced the subsequent ones. Here you see that lag -2 is more influential
than lag -1.
Note
If you choose to get predictive forecasts per entity, you have this information for each entity.
For each type of debrief, Smart Predict proposes a default overview. You can customize the generated debrief.
Context
To customize the generated debrief, use the Visualizations Settings dialog box. Depending on the metrics you
want to display, different options are available.
Remember
The customized settings are stored for each type of predictive model and user. For example, if you
customize a classification debrief for a given predictive model, the debrief of all the other classification
predictive models is updated with the same change. However, the debrief settings for other users are not
affected; their customizations are unchanged.
Note
Procedure
Note
For example, a scatter plot visualization is only available if the metrics include at least two measures.
4. In the Analysis section, select the information for the type of chart that you want to display.
According to the type of visualization you have chosen at step 2, you can assign data fields to the feeding
area of your visualization.
For example, if you select a column chart to display the metrics, you define which data field to display in
the X and Y axis. Also, you can map the data series of the chart to different metrics using the One color per
option.
Note
When you change the type of visualization, the displayed information fields change accordingly. For
example, if you choose a bar chart, you can select the category influencers to display on the X axis, the
influencer or measure for the Y axis, and the colors for either the influencer or measure values. You can
5. In the Interactivity section, select an element that limits the chart display to data that interacts with only
that element. Depending on the type of chart, this could be a single influencer or measure.
Note
Some elements are mandatory because they are data fields that qualify the metrics. You must assign
the mandatory elements to the boxes that specify the chart elements. Otherwise, the missing data field
is automatically entered in the Selectors box.
Related Information
The type of chart available depends on whether it is appropriate for visualizing the type of influencer analysis in
your debrief. Depending on how you are visualizing your model data, you can choose from the following chart
types:
• Pie: Compare categorical data as percentages. If you have more than 10 contributing influencers, a bar or
column chart shows a clearer view of the data spread. Example: Show the predicted percentage breakdown of
contributing voting regions to the results of a national election.
• Bar: Compare categorical data along the vertical axis, with the category count or percentage displayed as
bars along the horizontal axis.
• Column: Show the same information as a bar chart, with the axes interchanged: influencers along the
horizontal axis, with the group count or percentage values displayed as columns on the vertical axis.
Example (bar or column): You have an employee turnover predictive model that predicts the potential churn
level for staff. Your target variable is Employee Churn Estimate. The influencers Marital-status, Age,
Qualification Level, Salary, Recent Promotion, and Training Participation are plotted along one axis, and the
percentage contribution of each category to Churn Estimate is plotted as a bar or column on the other. For a
table, the columns would be the category influencers Marital-status, Age, Qualification Level, Salary, Recent
Promotion, and Training Participation, and the percentage contribution of each category to Employee Churn
Estimate appears in each cell row.
• Radar: Display data for multiple influencers in two dimensions, with multiple categories represented on
radial axes. Example: You have a predictive model to predict sales of candy. Your target variable is Chocolate
Sales, and you plot different chocolate flavors around the radial axes of a radar chart. Your categorical
influencer that is measured over the axes is three brands of chocolate. The spread of sales figures around the
axes would give a good idea of which brands would do better for the same flavor than others.
• Tag Cloud: Represent category influencer names as text juxtaposed graphically on a canvas, where the font
size of each text label indicates the influence on the target variable. Tag charts are useful when the influencer
names have semantic significance, for example keywords in a Twitter feed, country names, companies' stock
market values, or different television shows' audience ratings for a night's viewing. Example: A retail chain
selling multimedia and cultural products wants to venture into publishing to produce a compilation of "retro"
styled detective stories. The target audience is younger readers not familiar with traditional detective
characters. They develop a predictive model including influencers such as education level, age, and buying
history for DVDs, books, games, MP3s, and streaming video, to predict a possible taste for different detective
profiles. The results could easily be represented as a tag cloud with the names most or least likely to appeal,
for example Sherlock Holmes, Father Brown, Miss Marple, Hercule Poirot, Auguste Dupin, Philip Marlowe, and
others.
• Line: Show a model performance curve. Example: You want to see the performance curve of your training
predictive model compared to the validation and random plots for a predictive model that predicts what
percentage of a population are identified positively as having a disease, after being tested using a new
screening test.
• Bubble: See the correlation between two influencers, one dependent on the other. The correlation is
represented by a third influencer at the plot position, and the area of the plot shows the magnitude of the
relationship. Example: You have a predictive model to predict fatal car accidents. Using a bubble chart, you
could evaluate the dependency between influencers such as "Car Accident Frequency" and "Speed", with a
categorical influencer of Yes or No for Fatality.
You've assessed the performance of your predictive model and you're confident using it to generate
predictions.
To generate the predictions, the process can differ depending on the type of predictive model and the type of data source used.
Context
You want to generate and save the predictions for a predictive model of type classification or regression.
Procedure
Restriction
If your application dataset contains more columns than your training dataset, the additional columns are ignored by the application process.
• Statistics & Predictions: This is information about your predictive model that you want to have in the
generated dataset.
Apply Date: The start date of the predictive model application. The type of the column is TIMESTAMP.
Train Date: The start date of the predictive model training. The type of the column is TIMESTAMP.
• Statistics: select the statistics regarding the influencers you want to save in your dataset:
Statistic Description
Assigned Bin When selected, individuals in the application population are assigned to referring
quantiles defined on the validation population.
Assigned bins explained: The validation population during training is spread out
in quantiles (bins), each defined by a range of scores, to serve as references
(assigned bins) to an application population. When a predictive model is applied,
each individual in the application population is allocated to an assigned bin
based on its predicted score. As each assigned bin represents 10% of the training population, if the population structure is unchanged, this percentage should remain stable on the application population. If this is not the case, it doesn't mean that the predictive model is no longer accurate, but rather that the structure of the population has changed. For example, there are more or fewer potential churners now than in the past. The accuracy of the predictions should be monitored to back up the decisions.
Note
The number of bins is set to 10 and isn't customizable.
See the section How does Smart Predict Create Assigned Bins? [page 2136] for
information on using assigned bins.
Outlier Indicator For each row in the application dataset, the Outlier Indicator is 1 if the row is an
outlier with respect to the target, otherwise 0.
Prediction Description
• Predicted Category (classification predictive models, nominal target with 2 values only): For each row in the application dataset, the Predicted Category is the target category determined by the predictive model. The percentage of predicted target categories found in the application dataset corresponds to the Contacted Population percentage that is set by default when entering the Confusion Matrix. Any change done by the user in the Confusion Matrix does not affect the Predicted Category in the generated dataset.
• Prediction Probability (classification predictive models, nominal target with 2 values only): For each row in the application dataset, the Prediction Probability is the probability that the Predicted Category is the target value.
• Predicted Value (regression predictive models, continuous target): For each row in the application dataset, the Predicted Value is the value predicted for the target.
• Prediction Explanations (classification and regression predictive models): For each row of the application dataset, the Prediction Explanations is a set of explanations for the prediction.
Note
If you do not select any statistics or predictions, only the target and the key influencer(s) are
included.
5. Click Apply.
The status of your predictive model is updated to <Applied>. You can find your generated dataset with the forecasts by viewing the recent files: from the side navigation, choose (Datasets) and view Recent Files.
The Prediction Explanations can be used to display the reasons explaining why Smart Predict has generated a
specific prediction for a specific entity of the application dataset.
An explanation (or reason) is a combination of a variable and its value, for instance: age, 35. It corresponds to the value assigned to a given variable in order to produce a specific prediction. The strength tells how much this value impacts the prediction, and the direction of this impact:
• For classification, a positive strength pushes the prediction (score or probability) towards the positive value
of the target, while a negative value pushes the prediction towards the negative value of the target.
• For regression, a positive strength increases the predicted value, while a negative strength decreases the
predicted value.
Each explanation corresponds to a row of the output dataset associated with the following columns:
Column Description
• Explanation Rank (classification and regression): For each prediction, the explanations are ranked according to their impact (absolute strength value) on the prediction. The lower the rank, the more important the impact of the specific explanation on the prediction.
• Explanation Influencer (classification and regression): The name of the influencer for the explanation.
• Explanation Influencer Value (classification and regression): The value associated with the influencer for this prediction. This column can mix different types of values depending on the associated influencer (number, string, date, and so on). Therefore, the type of this column is always string, and no locale-specific formatting is applied to the value (the default English locale is always used).
• Explanation Contribution (regression only): The contribution of the influencer for the prediction. The contribution has the same scale as the target. If you are predicting an amount in US dollars and the contribution is "10", the contribution can be interpreted as "10 US dollars".
• Smart Predict includes both other positive reasons and other negative reasons. Hence, it can generate fewer than 10 explanations in certain scenarios.
• When the Prediction Explanations option is used, the size of the input dataset is limited to 100 columns and 90,909 rows. This comes from the 1 million row restriction for the dataset: as 11 rows are generated for each entity when the explanations are enabled, an input of 90,909 rows leads to 90,909 * 11 ≈ 1 million rows in the output.
Smart Predict is configured to generate a maximum of 10 explanations per prediction so the amount of explanations is not overwhelming. When the predictive model uses more than 10 influencers to generate the predictions, Smart Predict aggregates the explanations with the lowest absolute strength (the least contributing influencers) into two groups: Positive Others and Negative Others.
The strength associated with Positive Others and Negative Others is the sum of the strengths of the aggregated explanations.
When the predictive model uses 10 or fewer influencers to generate the predictions, the Others groups are not generated, as the provided list of explanations is complete.
To allow the explanation for regression predictive models to be visualized using waterfall charts, Smart
Predict introduces a Baseline pseudo influencer. The contribution for the baseline influencer is the mean of
the predicted target and represents the average prediction. The predicted value is the sum of this baseline
contribution with the contributions of the other influencers used by the predictive model. In other words, the
predictions are centered on this baseline contribution.
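The additive relationship between the Baseline contribution and the influencer contributions can be checked with simple arithmetic. The numbers below are made up for illustration.

```python
# Illustrative check (not Smart Predict code): for a regression predictive
# model, the predicted value is the Baseline contribution (the mean of the
# predicted target) plus the per-influencer contributions.

baseline = 52000.0          # assumed mean predicted deal value
contributions = {           # hypothetical Explanation Contribution values
    "Region": 6500.0,
    "Deal Age": -1200.0,
    "Industry": 300.0,
}
predicted_value = baseline + sum(contributions.values())
print(predicted_value)  # 57600.0
```

This is exactly the decomposition that a waterfall chart visualizes: the bar for each influencer moves the total up or down from the baseline.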
Example
Let us assume we have generated some attrition predictions for a set of employees, enabling the Prediction Probability and Prediction Explanations.
The table below shows the rows that have been generated for the employee GOV92027, whose Prediction Probability is 0.64.
• The influencer that had the greatest impact on the predicted probability to leave the company for the
employee GOV92027 specifically is Age. Age=67 has a very strong positive influence on the likelihood to
leave (it strongly increases the chances for this employee to leave).
• The second strongest influencer that impacted the predicted probability to leave the company for
GOV92027 is Salary. Salary=86000 has a meaningful negative impact on the likelihood to leave the
company (because the strength is negative, it decreases the chances for this employee to leave).
• The existence of a Positive Others influencer indicates that, because there were more than 10 influencers involved in the prediction, some small positive influencers have been aggregated, resulting in a total strength of 0.06.
• For a binary prediction (classification) a positive strength increases the predicted probability while a
negative strength decreases the predicted probability.
• For a numeric prediction (regression) a positive strength will make the predicted value higher while a
negative strength will make the predicted value lower.
As a normalized measure of the contribution, the strength measures how far the individual contributions are
from the average contribution baseline.
• The strength values don't depend on the scale or unit of the target. This is true for all the influencers, and helps you to compare the strengths of two distinct influencers.
• It is possible to use thresholds to discretize the contribution into categories, such as neutral contribution
and low contribution.
Because the strength is a normalized value it cannot be interpreted in the probability space.
During the training step, Smart Predict uses past observations compiled in a training dataset to create a
predictive model.
• For a classification predictive model: Smart Predict associates to each observation (customer, product, etc.) a probability that an event (target) occurs. Then, it uses this probability to group the list of observations, ranked in decreasing order from the most probable to the least probable, into 10 bins (or groups). Each bin represents 10% of those observations, and in each bin, the observations have the same level of probability.
• For a regression predictive model: Smart Predict associates to each observation a predicted value. Based on this value, it groups the list of observations, ranked from the highest to the lowest predicted value, into 10 bins (or groups). Each bin represents 10% of those observations, and in each bin, the observations have the same value or range of values.
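The binning logic described above can be sketched in a few lines. This is an assumed simplification for illustration, not the actual Smart Predict implementation.

```python
# Illustrative sketch: rank observations by predicted score (probability or
# predicted value) and split them into bins of equal size, bin 1 holding the
# highest scores.

def assign_bins(scores, n_bins=10):
    """Return a parallel list of bin numbers (1 = highest scores)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    bins = [0] * len(scores)
    size = len(scores) / n_bins          # observations per bin
    for pos, i in enumerate(order):
        bins[i] = min(int(pos // size) + 1, n_bins)
    return bins
```

Each bin's boundaries (its range of scores) are what serve later as the reference assigned bins for the application population.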
Example
Let's take the following example: you want to know if customers will buy your new product "P". You train your predictive model using a training dataset containing past observations for 1,000 customers. As a result, Smart Predict has ranked your observations as follows:
Bin number Number of customers in the bin Average probability to buy "P"
Then, you use your predictive model to get predictions on a new set of customers. Let's say your
application dataset contains observations on 700 customers.
Smart Predict will give you the following result in the generated dataset:
6 45 customers (~ 6%) 8%
7 50 customers (~ 7%) 7%
8 35 customers (~ 5%) 4%
9 32 customers (~ 5%) 3%
10 88 customers (~13%) 1%
Simulating/Estimating the number of positive cases
At the training step, Smart Predict has assigned each observation to a bin (one bin equals 10% of the dataset), which corresponds to a probability of being a positive case. Smart Predict associates to each customer his/her probability to buy the product P, and checks whether this probability makes the customer belong to bin 1, 2, 3, and so on, by referring to the bins defined in the training step. As each bin is associated with an average percentage of positive cases, you can easily estimate the number of positive cases.
Example
Let's have a look at our example above. At the training step, you know the actual number of positive targets by bin, as you train your predictive model on known data. At the application step, you don't know that. But once the predictive model is applied, you know for each customer of the application dataset which bin it belongs to. You can therefore estimate the total number of customers who would buy "P".
Note
It can happen that the distribution of the observations is not similar (10% of observations in each bin). The fact that the structure of the population has changed does not mean that the predictive model is not relevant anymore (see next point).
Monitoring the population structure
Dividing the dataset into bins means that each bin should contain +/-10% of the observations. However, if this changes, it indicates that your population is changing. For example, advertising on social media sites might influence and attract more young customers than other age groups. It doesn't mean that the predictive model is not efficient anymore, but it may be an alert to check this performance with more data from the recent past (than the data used to train the model).
Example
Having a look back at the example above, you can see that the distribution per bin in the generated dataset is not the same as in the training dataset. For example, for bin 1, we have 200 customers, which corresponds to 28% of the dataset. It could simply be because you have more young customers, but with the same buying behaviour as young customers in the training population.
Monitoring the predictive model performance
Once the predictive model has been applied, it is easier to analyze the classification performance by bin, rather than interpreting the performance curve. Use the classification rate (see The Metrics [page 2097]) calculated at the training step, associated with each bin, and detect any variation of this rate when applying your predictive model.
Example
In the following example, you want to predict the deal values for the next quarter. Your training dataset
contains observations on 3,000 customers.
1 300 customers (= 10% of the dataset) Predicted values between 90,001 and 100,000 $
2 300 customers (= 10% of the dataset) Predicted values between 80,001 and 90,000 $
3 300 customers (= 10% of the dataset) Predicted values between 70,001 and 80,000 $
4 300 customers (= 10% of the dataset) Predicted values between 60,001 and 70,000 $
5 300 customers (= 10% of the dataset) Predicted values between 50,001 and 60,000 $
6 300 customers (= 10% of the dataset) Predicted values between 40,001 and 50,000 $
7 300 customers (= 10% of the dataset) Predicted values between 30,001 and 40,000 $
8 300 customers (= 10% of the dataset) Predicted values between 20,001 and 30,000 $
9 300 customers (= 10% of the dataset) Predicted values between 10,001 and 20,000 $
Then, you use your predictive model to get predictions on a new set of customers. Let's say your
application dataset contains observations on 800 customers.
Smart Predict will give you the following result in the generated dataset:
1 110 customers (~ 14% of the dataset) Predicted values between 90,001 and 100,000 $
2 100 customers (~ 13% of the dataset) Predicted values between 80,001 and 90,000 $
You can use Assigned Bins to monitor the population structure: as each bin should contain +/-10% of the observations, if these figures increase or decrease for one or several bins, it indicates that your population is changing and you might need to retrain your predictive model with more recent data. For example, having a look back at the example above, you can see that the distribution per bin is quite similar in the generated dataset and in the training dataset. However, we could have different results: for example, for bin 1, we could have 300 customers, which corresponds to 37.5% of the dataset.
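A monitoring check like the one described above can be sketched as follows. This is illustrative Python, not an SAP Analytics Cloud API; the tolerance threshold is an assumption you would tune for your own data.

```python
# Illustrative sketch: compare the share of observations per assigned bin in
# the application dataset against the expected ~10% from training, and flag
# bins whose share has drifted beyond a chosen tolerance.

from collections import Counter

def bin_shares(assigned_bins):
    """Share of observations per bin, e.g. {1: 0.28, 2: 0.08, ...}."""
    counts = Counter(assigned_bins)
    total = len(assigned_bins)
    return {b: counts[b] / total for b in sorted(counts)}

def drifted_bins(assigned_bins, expected=0.10, tolerance=0.05):
    """Bins whose share deviates from the expected 10% by more than tolerance."""
    return [b for b, share in bin_shares(assigned_bins).items()
            if abs(share - expected) > tolerance]
```

A non-empty result would suggest the population structure has changed and the predictive model may be worth retraining on more recent data.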
You've assessed the performance of your predictive model and you're confident saving the predictive forecasts
into a dataset.
Context
Procedure
To find the generated dataset with the forecasts, from the side navigation, choose (Datasets) and view the list of recent files, or choose Files and search for the predictive model in a private, public, or workspace folder.
Here are the columns that are added to your generated dataset:
Forecast This is the column where you find the forecast values for
the target based on the number of requested forecasts
specified in the predictive model settings.
Error Min For each requested forecast at a given horizon H, the predictive model calculates a confidence interval. The Error Min value is the lower bound of this confidence interval. It is equal to the forecasted value - sigma(RMSE)*1.96, where sigma(RMSE) represents the standard deviation of the RMSE between the actual and forecasted target values at horizon H. The multiplier 1.96 corresponds to a confidence level of 95%.
Error Max For each requested forecast at a given horizon H, the predictive model calculates a confidence interval. The Error Max value is the upper bound of this confidence interval. It is equal to the forecasted value + sigma(RMSE)*1.96, where sigma(RMSE) represents the standard deviation of the RMSE between the actual and forecasted target values at horizon H. The multiplier 1.96 corresponds to a confidence level of 95%.
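The Error Min and Error Max formulas above amount to the following. This is illustrative Python; sigma(RMSE) is supplied as an input here, since its computation happens inside the predictive model.

```python
# Illustrative sketch of the confidence interval described above:
# forecast +/- sigma(RMSE) * 1.96 for a 95% confidence level.

def confidence_interval(forecast, sigma_rmse, z=1.96):
    """Return (error_min, error_max) around a forecasted value."""
    return forecast - z * sigma_rmse, forecast + z * sigma_rmse

lo, hi = confidence_interval(1000.0, 50.0)
print(lo, hi)  # 902.0 1098.0
```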
Once the predictions are generated, a new dataset is created. You can augment your SAP Analytics Cloud
stories or models using the data available in this generated dataset. The process might differ depending on
the type of dataset (acquired or live) you used to generate this dataset and the insights you want to reuse in
SAP Analytics Cloud.
• Save Predictive Forecasts Back into your Planning Model [page 2317]
• Creating a Predictive Model [page 2063]
• Looking for the Best Predictive Model [page 2074]
• Generating Your Predictions [page 2128]
• Using Your Acquired Generated Dataset in a Story [page 2143]
• Using Your Live Generated Dataset in an SAP Analytics Cloud Model [page 2148]
Using Your Acquired Dataset Generated by a Classification or Regression Predictive Scenario [page 2145]
Using Your Acquired Dataset Generated by a Time Series Predictive Scenario [page 2147]
Using Your Acquired Generated Dataset in a Story [page 2143]
Using Your Live Generated Dataset in an SAP Analytics Cloud Model [page 2148]
You can use your acquired generated dataset either directly in a story, or in a story via an SAP Analytics Cloud
model. Depending on the type of predictive model, you need to keep different elements in your generated
dataset to consume it.
The generated dataset containing your predictions can be consumed in a story using some features available in
SAP Analytics Cloud.
Note
If you want to consume your generated dataset in a story or via an SAP Analytics Cloud model in SAP
Analytics Cloud, only the first 100 columns will be taken into account.
Note
You need to keep specific information in your generated dataset in order to consume it in a story. The type
of information you need, depends on the predictive scenario type. For more information, refer to the related
links.
• Directly in a story:
Note
When you upload the generated dataset in a story, it implicitly becomes an "embedded" model. For
more information, see Models in Stories [page 1058].
If you update the generated dataset later on, the story will be updated as well.
Note
The SAP Analytics Cloud model can be shared with other users.
Related Information
Using Your Acquired Dataset Generated by a Classification or Regression Predictive Scenario [page 2145]
Using Your Acquired Dataset Generated by a Time Series Predictive Scenario [page 2147]
Depending on how you want to use your Classification or Regression Predictive Scenario, you need to keep
different information in your generated dataset containing predictions.
To consume your generated dataset in a story, you need to keep some information:
• The application dataset influencers, if these influencers are not available from another source.
• The predictions
• The key influencers
Note
If you have specified the key influencers during the training of your predictive model, they are
automatically added.
From the side navigation, choose (Stories) . For more information on working with stories, you can refer
to the topic Create a New Story (Classic Design Experience) [page 974]. As soon as a change is made in the generated dataset, the story is automatically updated.
Note
When you upload the generated dataset in a story, it implicitly becomes an "embedded" model.
To combine existing data with your predictions, you need to keep the following information in your generated
dataset:
• The predictions
• The key influencer(s)
Note
If you have specified key influencers during the training of your predictive model, they are automatically
added.
Thanks to the key influencer(s), you can blend the predictions with other data sources, in the context of a story.
You can first use the generated dataset to create an SAP Analytics Cloud model and then consume it in a story.
Note
You will then be able to easily share this model with other users.
For more information on working with models, you can refer to the chapters called Create a New Model [page
645] and Set Up Model Preferences [page 664].
Related Information
Depending on how you want to use your Time Series Predictive Scenario, you need to keep different
information in your generated dataset containing the predictive forecasts.
To consume your generated dataset in a story, you need to keep at least the predictive forecasts.
Note
From the side navigation, choose (Stories) . For more information on working with stories, you can refer
to the topic Create a New Story (Classic Design Experience) [page 974]. As soon as a change is made in the generated dataset, the story is automatically updated.
Note
When you upload the generated dataset in a story, it implicitly becomes an "embedded" model.
For more information, refer to the related link Creating a new story.
To combine existing data with your predictions, you need to keep the forecasts in your generated dataset.
Note
Thanks to the date variable, you can blend the predictions with other data sources, in the context of a story.
You can first use the generated dataset to create a model and then consume it in a story.
You will then be able to easily share this model with other users.
For more information on working with models, you can refer to the chapters called Create a New Model [page
645] and Set Up Model Preferences [page 664].
Related Information
You have generated the predictions in a live dataset and you now want to use the predictions in SAP Analytics
Cloud.
Context
Live datasets cannot be consumed in SAP Analytics Cloud as they are. To be able to use your predictions, you
need the help of an IT administrator and to go through some additional steps.
Caution
As of Google Chrome version 80, you need to configure your SAP on-premise data source to issue
cookies with SameSite=None; Secure attributes. If the SameSite attribute is not set, cookies issued
by your SAP data source system will no longer work with SAP Analytics Cloud. Refer to SameSite Cookie
Configuration for Live Data Connections [page 266] for more information.
Restriction
Your IT administrator user has to create calculation views before you can consume the generated tables
containing your predictions. For more information, see Creating Calculation Views to Consume Live Output
Datasets [page 347].
Procedure
Short tour guide around Smart Predict with the Help topics accessible from the application interface.
These topics are accessed directly from the Smart Predict application. We've grouped them together as sort
of a short tour guide around the application interface. You'll be able to create a predictive scenario, as well as
add and train a new predictive model without needing to have a background in the predictive analytics field.
However, to get the most out of your new predictive scenarios, we suggest that you take a bit of time to browse the more in-depth topics available in this guide as well.
As you're creating a new predictive scenario, you need to choose one that is relevant to the type of predictive
insights you are looking for. Read the descriptions and examples to help you decide which one is best for you.
If you'd like more information to help you choose your predictive scenario, then go to our playlist on
YouTube . You'll find worked examples showing how to create and use predictive scenarios.
You'll define settings for the initial predictive model for the predictive scenario.
Restriction
Smart Predict features are not available in all regions or for all tenant types.
Related Information
You enter values for parameters in the Settings tab. These are used to train a predictive model. Training is a
process that takes these values and uses SAP machine learning algorithms to explore relationships in your data
source to come up with the best combinations for the predictive model.
General section
• The predictive model name is a default one and can't be edited, but you can add descriptive text if required.
• Training Data source: Browse to and select a dataset that contains the historical data you want to use to
train the predictive model. This dataset must already be available to SAP Analytics Cloud.
• Edit Column Details: Click to open the list of columns in the dataset. Properties such as data type, statistical type, and others can be changed in the predictive model. This is for users who are familiar with the dataset. In general, you won't have to consider this dialog unless you are troubleshooting. More information can be found here: Editing Column Details [page 2057].
• Target: Browse to and select the variable that you want to predict values for.
Influencers section
• Columns that can have an influence on the target values. Certain columns may have too much influence on the target, and so can cloud the effect of other columns that are related to your business question.
• Exclude as Influencers: You can select the columns that you don't want to be taken into account when the
predictive model is trained.
• Limit Number of Influencers: You can specify the maximum number of columns that the predictive
model will consider as influencers. Only the most contributive influencers are retained.
Train: Click to start training the predictive model with your settings. There is also a Train Predictive Model icon
in the toolbar.
What's next?
The training produces performance indicators that you will use to evaluate the results. This is called debriefing
the predictive model.
Related Information
You enter values for parameters in the Settings tab. These are used to train a predictive model. Training is a
process that takes these values and uses SAP machine learning algorithms to explore relationships in your data
source to come up with the best combinations for the predictive model.
General section
• The predictive model name is a default one and can't be edited, but you can add descriptive text if required.
• Time Series Data Source: Browse to and select a dataset or a planning model that contains the historical
data you want to use to train the predictive model. This data source must already be available in SAP
Analytics Cloud.
• Version: For a planning model data source, select the version to use for training. It must be a public version not in edit mode, or a private version. You must have at least read access to it.
• Target: Select the variable that you want to forecast values for.
• Date: Select the date variable. Check here for supported date formats: Restrictions [page 2040].
• Time Granularity: The level of time granularity available in the data source.
• Number of Forecast Periods: Select the number of forecast periods that you want the predictive model to
generate.
• Entity: Allows you to split a population into distinct entities. You select one or more nominal variables
that identify each entity that you want to get forecasts for. For example you want to know sales forecasts
for each country. A predictive model is generated for each entity that captures a specific behavior and
produces distinct forecasts for each available combination of variable values. If this type of prediction is
useful for you, then click the box, and select up to five variables to identify the entity that you want to split
the population by.
• Train Using: The range of observations that will be considered for training. You can select all observations
or define a window of time that the observations will be taken from.
• Until: The date of the last observation to be considered. You can select to use the last observation in the
data set, or you can define a date.
Click Train and Forecast to start training the predictive model and generate the forecasts.
What's next?
Training and forecasting produces performance indicators that you will use to evaluate the results. This is
called debriefing the predictive model.
Related Information
Define Settings and Train a Time Series Predictive Model [page 2152]
Defining the Settings of a Time Series Predictive Model Using a Dataset as Data Source [page 2066]
A predictive model produces performance indicators and reports as a result of a successful training. Here is
a short summary of the different components that you can use to debrief your results so you can verify the
accuracy of your predictive model.
• What does Predictive Power measure?: It's your main measure of predictive model accuracy. The closer its
value is to 100%, the more confident you can be when you apply the predictive model to obtain predictions.
You can improve this measure by adding more variables.
• What is Prediction Confidence?: It's your predictive model's ability to achieve the same degree of accuracy
when you apply it to a new dataset that has the same characteristics as the training dataset. It takes
a value between 0% and 100%. This value should be as close as possible to 100%. To improve your
Prediction Confidence, you can add new rows to your data source, for example.
• Does the target value appear in sufficient quantity in the different datasets? Get an overview of the
frequency in each dataset of each target class (positive or negative) that belongs to the target variable. For
more information, refer to Target Statistics [page 2092].
• Which influencers have the highest impact on the target? Check how the top five influencers impact on the
target. For more information, refer to Influencer Contributions [page 2092].
• Which group of categories has the most influence on the target? In Influencer Contributions, you can analyze
the influence of different categories of an influencer on the target. For more information, refer to Category
Influence [page 2093], Grouped Category Influence [page 2094] and Grouped Category Statistics [page
2094].
• Using the Confusion Matrix: It's the only way to assess the model performance in detail, using standard metrics such as specificity. It allows you to quickly see the correctly detected cases and the falsely detected cases.
• You can use the Profit Simulation tab to estimate the expected profit, based on costs and profits associated
with the predicted positive and actual positive targets.
For more information, refer to Confusion Matrix [page 2095], The Profit Simulation [page 2099].
• Can I see any errors in my predictive model? Is my predictive model producing accurate predictions? Use a
large panel of performance curves in the Performance Curves tab, to compare your predictive model to a
random model and a hypothetical perfect predictive model:
• Determine the percentage of the population to contact to reach a specific percentage of the actual positive
target with The Detected Target Curve [page 2102].
• Check how much better your predictive model is than the random predictive model with The Lift Curve
[page 2103].
• Check how well your predictive model discriminates, in terms of the compromise between sensitivity and
specificity with The Sensitivity Curve (ROC) [page 2104].
• Check the values for [1-Sensitivity] or for Specificity against the population with The Lorenz Curves [page
2104].
• Understand how positive and negative targets are distributed in your predictive model with The Density
Curves [page 2106].
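Several of the metrics behind these reports, such as sensitivity and specificity, derive from the four counts of a confusion matrix. As an illustrative sketch only (the counts below are hypothetical and this is not tied to any SAP Analytics Cloud API):

```python
# Illustrative computation of confusion-matrix metrics, using
# hypothetical counts of true/false positives and negatives.
def confusion_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # share of actual positives correctly detected
    specificity = tn / (tn + fp)          # share of actual negatives correctly detected
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = confusion_metrics(tp=80, fp=30, tn=870, fn=20)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.8 0.97 0.95
```

A model that discriminates well keeps both sensitivity and specificity high; the Sensitivity Curve (ROC) visualizes this compromise across thresholds.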
What's next?
If you are not satisfied, you can try to improve your predictive model by changing the settings, or if necessary
changing the data source.
Related Information
A predictive model produces performance indicators and reports as a result of a successful training. Here is
a short summary of the different components that you can use to debrief your results so you can verify the
accuracy of your predictive model.
• Prediction Confidence indicates the capacity of your predictive model to achieve the same degree of
accuracy when you apply it to a new data source, which has the same characteristics as the training data
source. It takes a value between 0% and 100%. This value should be as close as possible to 100%. To
improve your Prediction Confidence, you can add new rows to your data source, for example.
• Root Mean Squared Error (RMSE) measures the average difference between values predicted by your
predictive model and the actual values. The smaller the RMSE value, the more accurate the predictive
model is.
• How does the target value appear in the different datasets? Get some descriptive statistics on the
target value per dataset. For more information, refer to Target Statistics [page 2112].
• Which influencers have the highest impact on the target? Check how the top five influencers impact the
target. For more information, refer to Influencer Contributions [page 2092].
• Which group of categories has the most influence on the target? In Influencer Contributions, you can
analyze the influence of different categories of an influencer on the target: If the influence value is positive,
we are more likely to get "minority value". If the influence value is negative, we are less likely to get "minority
value". For more information, refer to Category Influence [page 2093], Grouped Category Influence [page
2094] and Grouped Category Statistics [page 2094].
• Can I see any errors in my predictive model? Is my predictive model producing accurate predictions?
Compare the prediction accuracy of your predictive model to a perfect predictive model using a graph and
detect the model errors very quickly. For more information, refer to Predicted vs. Actual [page 2111].
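The RMSE indicator described above can be sketched in a few lines. This is illustrative only; SAP Analytics Cloud computes the indicator for you during training:

```python
import math

def rmse(actual, predicted):
    # Root Mean Squared Error: the average magnitude of the differences
    # between predicted and actual values. The smaller the value, the
    # more accurate the predictive model.
    squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(squared_errors) / len(actual))

print(round(rmse([100, 120, 130], [110, 115, 130]), 2))  # 6.45
```

Because errors are squared before averaging, a few large misses raise RMSE more than many small ones.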
What's next?
If you are satisfied with the results of your predictive model, use it. For more information, see Generating and
Saving the Predictions for a Classification or Regression Predictive Model [page 2129].
If you are not satisfied, try to improve your predictive model by changing the settings, for example.
A predictive model produces performance indicators and reports as a result of a successful training. Here is
a short summary of the different components that you can use to debrief your results so you can verify the
accuracy of your predictive model.
• Is the selected performance indicator high enough to consider my predictive model robust and accurate?
Check the quality of your model performance using one of the proposed performance indicators. They
evaluate the "error" that would have been made if the forecast had been calculated in the past, where the
actual values are known. The Expected MAPE is the default and preferred performance indicator, but you can choose
any of the proposed performance indicators to evaluate your model performance. Please refer to the Time
Series Forecasting Performance Indicators [page 2078] page for more details.
• Which forecasts are provided by the predictive model? Have a close look at the actuals and forecasts. The
Explanation tab of the report shows trends, cycles, and fluctuations in the signal, each with a description.
In the Forecast tab of the report, check if there are outliers in the forecasts and detect anomalies in the
actuals. For more information, refer to The Predictive Forecasts [page 2116], The Time Series Outliers [page
2117], and The Time Series Outliers (Future) [page 2117].
• How accurate is my predictive model? Use the Forecast vs. Actual graph to visualize the predicted values
(forecast) and actual values for the data source. You can then quickly see how accurate your predictive
model is, what the outliers are, and what the confidence interval is. For more information, refer to The
Forecast vs. Actual Graph [page 2115] and The Time Series Outliers [page 2117].
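As an illustrative sketch, plain MAPE averages the absolute percentage errors over past periods where actuals are known. Note that Expected MAPE in SAP Analytics Cloud is a related but distinct indicator, so treat this as a simplification:

```python
def mape(actual, forecast):
    # Mean Absolute Percentage Error over past periods where the
    # actual values are known. Lower values indicate a more accurate
    # forecast. (Expected MAPE in SAP Analytics Cloud is a related,
    # horizon-aware variant of this plain formula.)
    errors = [abs((a - f) / a) for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors) * 100

print(round(mape([200, 250, 400], [210, 240, 380]), 2))  # 4.67
```

A MAPE of 4.67 means the forecast missed the known actuals by about 4.67% on average.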
What's next?
If you are satisfied with the results of your predictive model, you can then go ahead and use it. For more
information, see Saving Predictive Forecasts Generated by a Time Series Predictive Model into a Dataset [page
2141] or Save Predictive Forecasts Back into your Planning Model [page 2317].
If you are not satisfied, try to include more historical data when preparing your data.
Related Information
You can keep informed about what happened during the modeling process once it's complete by checking out
the Status panel.
At the top of the Settings panel, click to access information on the predictive model, and any errors that
occurred during the generation or apply actions:
• Model Status: The area where you can access the error messages. For example, if the training failed, you
can get some information on what went wrong.
• Detailed Logs: Logs that display the details of each step of the process. In case of a problem, they allow
you to provide SAP support professionals with information.
Related Information
Learn about business planning and how SAP Analytics Cloud helps you to carry out your plans by bringing a
broad range of planning features together with analytics, predictive, and collaboration capabilities.
• Use the Calendar to Organize Your Collaborative Planning Processes [page 2279]
• Understand General Rules for Advanced Formula Calculations for Planning [page 2364]
• Add Advanced Formulas to Your Data Action [page 2376]
• Design Advanced Formulas Using Scripts [page 2379]
• Design Advanced Formulas Using the Visual Tool [page 2381]
• About Script Formulas and Calculations in Advanced Formulas for Planning [page 2391]
• Change Values in Advanced Formulas with DATA, RESULTLOOKUP, and LINK [page 2473]
• Optimize Advanced Formulas for Better Performance [page 2479]
A live connection to SAP BPC provides an alternative, letting you use the powerful planning engine in BPC
embedded to carry out advanced and complex enterprise-wide planning. You can perform BPC planning
activities directly in an SAP Analytics Cloud story.
With SAP Analytics Cloud, you can cover a range of planning operations like scheduling tasks, kickstarting
forecasts with predictive features, building custom planning applications, carrying out data entry and version
management, and writing powerful scripted calculations.
As you’re working in the application, you can collaborate with your team and apply advanced analytics to your
planning data. This way, you know that everyone’s aligned on the same goal, and you can get more value out of
your plan.
What Is Planning?
Reporting and analysis often focuses on historical data (known as actuals), but a business needs a clear vision
of the future, too.
Planning is all about setting strategic goals and then determining how to meet those goals by creating annual
budgets, tracking progress in forecasts, and simulating scenarios to find new opportunities. These plans are
formed by projecting actuals into the future, by gathering input from different departments, and by considering
trends, risks, and opportunities in the market.
Executive boards and finance departments play a big role in planning. But an effective plan needs input and
support from the whole organization. Maintaining close integration between departmental plans and the overall
strategic goals and financial plans is known as collaborative enterprise planning.
• Planning modelers: These specialists have a strong understanding of the organization’s data and the other
systems in their landscape. They’ll play a big role in setting up the planning solution in SAP Analytics Cloud,
including creating models and structured planning operations.
• Planning reporters: These users can be finance specialists who are working with the data on a daily basis
to create budgets and forecasts, and to provide financial reports and analysis to business users and upper
management.
Planning reporters also include many users who just need to contribute planning data for their specific
area as part of a bottom-up planning process.
• Planning viewers: These are stakeholders in the planning process, such as business users and executives,
who need to check to see how their team or organization is doing. Sometimes they might run their own
simulations or contribute some planning data as part of a calendar task.
In this video, you’ll get an overview of SAP Analytics Cloud features you can use during your planning
processes, including topics such as stories, models, version management, data access control, currency
conversion, using the Calendar, working with the SAP Analytics Cloud, add-in for Microsoft Office, and creating
data actions.
Applying analysis and predictive features while you're planning can help you plan faster and more accurately,
and get a better understanding of your business.
For example:
• When starting a plan, you might use predictive scenarios to set the initial values, letting you quickly identify
the overall trend as well as expected fluctuations in your data. The predicted values give you a quick start
that's grounded in past performance.
• While you're adjusting this plan with manual data entry, you might refer to overall KPIs as well as variance
charts for each data point to get quick feedback about whether the plan is on target or not.
• If you want to understand the trends and drivers in your data, or investigate the root cause of a specific
issue, you may want to do some free-form exploration of your data, and analyze it with smart features that
identify key influencers and outlying data points.
You can integrate these features closely with the planning process so that data analysis, manual planning, and
predictive features all reinforce each other without making your process slower or more complicated.
Planning modelers do most of their work in this phase. After the initial setup of the solution, they can update
and enhance it during later cycles.
Use this diagram to get descriptions of each step and click to open detailed instructions.
• Define Valid Member Combinations for Planning Using Validation Rules [page 2547]
• Configuring Data Locking [page 2525]
• Create a Data Action [page 2322]
• Set Up Your First Allocation Process [page 2516]
• Create a Multi Action [page 2353]
• Set Up Value Driver Trees for Planning [page 2282]
• Schedule Data Actions in the Calendar [page 2265]
Next, planning reporters work on putting the figures into the plan and sharing their work with viewers.
Use this diagram to get descriptions of each step and click to open detailed instructions.
There are often iterative changes in this part of the process as planners and stakeholders at different levels and
in different departments collaborate on the plan.
When the data is ready, there are a few steps to wrap up the process.
Since SAP Analytics Cloud combines planning with analytics and predictive features, you can also support your
planning workflows with several features that aren't specific to planning:
• Tables: You'll often use tables to keep track of your data, and several planning features are based here.
While working with tables, you can also customize your layout and formatting and enhance the table with
calculations, data point comments, and in-cell charts. For more information, refer to Use Tables to Visualize
Data [page 1351].
Related Information
When you’re planning in SAP Analytics Cloud, you’ll usually work with data from a planning model. Read on to
get familiar with this type of model.
Types of Models
When a model is created, the modeler chooses whether it’s an analytic model or a planning model.
A table shows unbooked cells with dashes (1) and grayed-out calculated cells (2) that don't support data entry.
Tables also give you an entry point for version management. Through the version dimension, you can work
with multiple versions of the data that could reflect different categories (such as actual and plan), different
scenarios (pessimistic and optimistic), and different periods (Q1 forecast and Q2 forecast).
Structured Planning
As well as manual data entry, most planning processes involve structured planning operations like data
actions, multi actions, and allocation processes. These are created in advance by modelers to reflect your
organization's business rules and to automate repetitive parts of the planning process. You can run them to
make a set of changes to the model data all at once. For more information, refer to Run Data Actions, Multi
Actions, and Allocations [page 2260].
Most planning models are import data models. Unlike live data models, your source data is copied into SAP
Analytics Cloud so that you and your team can adjust it during planning tasks. (One exception is live data
connections to SAP BPC embedded.)
Planning models store data at the lowest levels of each dimension hierarchy, known as the leaf level. A model's
data foundation is made up of a row of data for each combination of leaf dimension members. Each table cell
usually represents data from many of these rows.
You can book values directly to leaf members, or to parent members. For data entry on parent members,
the system automatically disaggregates your data to leaf members. For details, see Disaggregation of Values
During Data Entry [page 2207].
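A simplified sketch of proportional disaggregation follows. The actual rules, including how unbooked cells and weights are handled, are described in the linked topic, so treat this as an illustration only:

```python
def disaggregate(parent_value, leaf_values):
    # Spread a value entered on a parent member across its booked leaf
    # members, in proportion to each leaf's existing value (simplified
    # sketch of proportional disaggregation).
    total = sum(leaf_values.values())
    return {leaf: parent_value * v / total for leaf, v in leaf_values.items()}

# Entering 1200 on the parent spreads it 1:3 across the two leaves.
print(disaggregate(1200, {"East": 100, "West": 300}))  # {'East': 300.0, 'West': 900.0}
```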
Unassigned Data
By default, dimensions in planning models usually have an unassigned member with the number sign (#) as
its ID. It's always a leaf member. You can store data in this member if you don’t want to distribute it among the
other dimension members.
For example, if you're planning capital expenses but don't want to decide which locations will receive the new
equipment yet, you can book the data to the unassigned member for the organization dimension.
Some dimensions like account, version, and date don’t have an unassigned member, because all of your data
needs to be explicitly assigned to one of the dimension members.
Publishing your changes to a version saves those changes back to the model. For details, see Create, Publish,
and Manage Versions of Planning Data [page 2170].
Note
Planning viewers or users with BI licenses can change planning data for their own simulations, but they
can’t publish their changes.
SAP Analytics Cloud currently supports two types of import data models for planning: a classic account model
and a new model type with measures.
The main difference is that the new model type can have multiple measure values for each row of model data.
Classic account models have a single measure. Because of this, the new model type offers some different
configuration options. For details, see Get Started with the New Model Type [page 627].
For planning on models with measures, take note of the following points:
Account dimensions are optional in planning models with measures. This is because each measure sets many
of the same properties of an account dimension member, such as aggregation, scale, units, and whether there
are currency values or not. For background information about different model configurations, see Choosing a
Model Configuration [page 631].
A model with measures can contain decimal measures or integer measures. Different restrictions apply to each
data type, so you should keep these restrictions in mind while planning on measures, especially when copying
data across them.
In tables, data entry and copy and paste operations can't exceed a measure's value range or number of decimal
places.
When using the planning panel or creating data actions, messages will also let you know about changes in data
types across different measures.
Keep this in mind if you’re copying measures using parameters, too. For example, if you’re copying a decimal to
a measure parameter with any data type, no warning shows up when you’re creating the data action, but a user
might select an integer measure when running the data action. This could result in decimal place data being
discarded during the copy step, or in the data action failing if source values exceed the integer value range.
When using measure parameters for source or target members, it’s recommended to define them with specific
data types to avoid this case.
For details about parameters, see Add Parameters to Your Data Actions and Multi Actions [page 2338].
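The failure modes described above can be sketched as follows. The integer bounds used here are assumptions for illustration only, not the documented limits of SAP Analytics Cloud measures:

```python
def copy_to_integer_measure(value, int_min=-2**31, int_max=2**31 - 1):
    # Hypothetical sketch of copying a decimal value into an integer
    # measure: values outside the (assumed) integer range make the copy
    # step fail, and decimal places are otherwise discarded.
    if not (int_min <= value <= int_max):
        raise ValueError("source value exceeds the integer value range")
    return int(value)  # decimal place data is discarded

print(copy_to_integer_measure(1234.56))  # 1234
```

Defining measure parameters with specific data types avoids running into these cases at execution time.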
In some cases when figuring out whether a value has changed or remained the same, the software doesn’t
distinguish between different measures for a specific set of leaf members.
This happens because models with measures store the values for each measure on the same row in the fact
table. (Each row is associated with a combination of leaf members from all dimensions.)
• When you enter data for two measures that belong to the same set of leaf members, a message appears
showing that one database record was changed.
• When data locks are applied with measures as a driving dimension, this issue affects importing or deleting
values in the modeler. For any set of dimension members with a mix of locked and unlocked measure
values, you won’t be able to change any of the measure values by importing or deleting data in the modeler.
(Appending unbooked values to these measures is allowed, since it doesn’t change the data.) For details
about data locking, see Configuring Data Locking [page 2525].
• When you copy a public version and publish the resulting private version to a different public version,
you can choose to only publish the changed dimension combinations (Publish member combinations with
changed data). In this case, all measures will be published for each set of leaf members where any measure
was changed. For details, see Create, Publish, and Manage Versions of Planning Data [page 2170].
Restrictions
Importing data from SAP BPC is currently not supported for models with measures.
With a planning model in SAP Analytics Cloud, you can use version management to organize, compare, and
maintain different versions of your data.
In this section, you will learn about the following topics related to working with versions:
You may not need many versions of your historical data. Often, a single actuals version is created and updated
by importing data. But when planning for the future, organizations might create many plans for various
reasons.
These plans could each use a different version of the data. When working with versions, you can change data in
one version without impacting the data in other versions.
Version management lets you manage different versions and types of plans. For example, your organization
might create a new version for each new forecast period. You might also need to separate your forecasts from
other types of plans like budgets and strategic plans.
Versions exist as a dimension in the model. Modelers can set up multiple versions when importing data to
the model, and you can create new versions while planning.
Conduct a variance analysis between the actual and budget versions in a table.
Often, you’ll need to check whether your actuals data meets the planned amounts by analyzing the variance
between the two versions.
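Per data point, such a variance analysis comes down to a simple calculation, sketched here for illustration:

```python
def variance(actual, budget):
    # Absolute and percentage variance between an actual value and the
    # corresponding budget value for one data point.
    abs_var = actual - budget
    pct_var = abs_var / budget * 100
    return abs_var, pct_var

# Actuals exceed the budget by 80, an 8% positive variance.
print(variance(actual=1080, budget=1000))  # (80, 8.0)
```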
In this example, the Forecast Layout shows actual data for past time periods and forecast data for the future,
with a specific cutover date. For more information, refer to Creating a Forecast or Rolling Forecast Layout [page
1428].
After data action tracing is run, a tracing version will be automatically created based on the target version
selected during tracing.
Public versions can be viewed by users with the required permissions, but private versions can only be viewed
by the owner unless the version is shared. Tracing versions can only be viewed by the owner.
The following table describes the differences between public, private, and tracing versions:
How is the version created?
• Public version: Created by importing it to the model from a data source, or by publishing a private version
during planning.
• Private version: Created by a user copying another version during planning.
• Tracing version: Created when a user runs the data action tracing during planning.
Who can view the data?
• Public version: Any user with access to the model, unless additional security is applied to the version.
• Private version: Only the version owner, unless they share the version. When sharing a private version, you
can assign permissions as Read Only or Read and Write.
• Tracing version: Only the version owner.
Who can edit the data?
• Public version: Any user who can view the data, unless additional security has been applied to the version
dimension.
• Private version: Only the version owner, unless they share the version.
• Tracing version: Only the version owner.
Who can publish the data?
• Public version: Users with the required permissions. Publishing might override changes made by other
users, so it is best to keep public edits short and publish often.
• Private version: Users with the required permissions can publish their private versions to public versions.
• Tracing version: Data in a tracing version can't be published.
Who can delete the version?
• Public version: Users with the required permissions.
• Private version: Only the version owner.
• Tracing version: Only the version owner.
Additional security measures applied to the model or version dimension can impact permissions. For example,
Data Access Control can be applied to the version dimension to set read or write permissions for public
versions. For more information about securing your data, refer to Learn About Data Security in Your Model
[page 799].
You can use the category property to organize your versions by type.
• Actual
• Budget
• Plan
• Forecast
• Rolling Forecast
You can create multiple versions in the same category. For example, you could create different forecast
versions to explore scenarios such as optimistic, baseline, and so on.
A model can only have one public actuals version, which is usually imported from a file or data source.
However, you can still create private copies of this version.
A modeler can also define currency conversion rates across a category. For more information, refer to Learn
About Currency Conversion Tables [page 788].
In this video, you will open the Version Management panel, change data in a public version, see how to publish
changes or revert them, create and edit a private version, and publish it to make the data public.
In SAP Analytics Cloud, you can use the version management panel to view and manage all the versions you
have access to.
Some version management functions can also be accessed by right-clicking a version name in a table and
selecting Version from the menu.
Under More, select Start Edit Mode to start Edit Mode on a public version. For more information, see Planning
on Public Versions [page 2188].
For public versions, the following details are available:
• Name
• Access Rights
• Category
• Type
• Model
• Last Synced with BPC
• Currency
• Currency Conversion Version
• Size
• Planning Area
• View Comment Statistics
For private versions, the following details are available:
• Name
• Category
• Type
• Model
• Created Date
• Last Synced with BPC
• Currency
• Currency Conversion Version
• Shared By
• Shared With
• Public Source Version
• Size
• Planning Area
• View Comment Statistics
For tracing versions, the following details are available:
• Name
• Category
• Type
• Model
• Created Date
• Last Synced with BPC
• Currency
• Currency Conversion Version
• Public Source Version
• Size
• Planning Area
Note
If you perform an action on a version that is currently being used in a background process, you will
have the option to either stop the background process and try your action again, or wait until the
background process is completed and try your action again. It is recommended that you wait for the
current background process to finish and then try your action again. You can view information about the
current running process by choosing Show Details.
If you choose to cancel the background process, it will take some time to stop the process. Canceling
a running data action may result in data loss. For more information, see the Note in the "Cancel a Data
Action" section of Monitor Data Actions [page 2344].
When working in the version management panel for tables, Show Versions in Use Only is enabled by default.
This option shows only the versions that are currently being used and that match the current settings. To show
all versions, disable Show Versions in Use Only.
Note
Show Versions in Use Only is not supported for value driver trees.
Additionally, if versions of a table are filtered by cross-calculation or hidden by the Unbooked Data option,
these invisible versions might still be shown in the version management panel even when Show Versions in
Use Only is enabled.
You can use the version management panel to view the size of a version.
• Public Edit Mode: The number of records in the public edit version is displayed. Public Version shows the
number of database records in the public version. Initial Planning Area is the area where you started editing.
Your Edits (Unaggregated) are the database records that you have edited but have not yet published.
• Private Version or Tracing Version: The number of records in the private version or tracing version is
displayed. Initial Planning Area is the area where you started editing. Edits (Unaggregated) are the database
records that you have edited but have not yet published.
Note
Edits (Unaggregated) is displayed as Fewer than 1k if there are fewer than 1,000 edits.
Comment statistics, under Version Management, give you information on the comments associated with a
planning model, both at the version and model level. Information such as the number of data point comments,
dimension comments, and threads in the model and the version selected are all part of the statistics.
Prerequisites
• You must have permission to Manage and Read comments at the role-level.
• You must have Read permission on planning models in the tenant to access Version Management.
Procedure
You can use the Version Management panel to view the comment statistics.
Note
In SAP Analytics Cloud, you can use the Version History panel to display, undo, redo, or revert changes to
versions.
Context
You can use the Version History panel to display all changes made to a private version, a public version in
edit mode, or a tracing version. This includes changes made in a story and unpublished changes made by
processes like data actions or allocations. You can undo or redo those changes by selecting the revision
(change) that you want to go back to.
Note
After you publish changes to a version, you won’t be able to revert the changes.
You can access the history of all versions for any models present in the story or analytic application. Only the
owner of a shared private version can see its version history.
You can also undo or redo changes outside of the Version History panel in the following ways:
Procedure
1. Select a table that contains a private version, a tracing version, or unpublished changes to a public version.
2. Choose one of the following options:
• In View mode, from the toolbar, select Tools Version History . In Edit mode for a classic story,
from the Data section of the toolbar, select (History). For an optimized story, from the Tools
The current state is indicated on the timeline and selecting a different state will revert the version to that state.
To create a new version in SAP Analytics Cloud, copy an existing version to a private version and then edit the
new version, or create a blank public or private version.
Context
You have two options for creating versions. The first option is to create a private copy of a public version.
The second is to create a blank public or private version. Creating a blank public or private version has fewer
restrictions than copying an existing version and allows more flexibility in how you want to manage your
versions.
Copying Versions
Context
To copy an existing public version, you can select the version in the version management panel and drag it into
the private versions list, or right-click a version header and select Version Copy Version . All newly
copied versions are private until they are published.
To see an example of how to create a private version by copying a public version, you can watch Video: How to
Manage Versions and Categories for Planning [page 2993].
Note
Tracing versions can only be created by running the data action tracing. For details, see Run Data Action
Tracing and Check Tracing Results [page 2505].
• Copy data in recommended planning area: If this option is enabled on your model, you can copy a
recommended subset of data defined in the model settings. The recommended planning area is based
on data access, data locking, or both. For more information about the recommended planning area, see
Optimize Planning Models Using the Planning Area [page 695].
• Copy all data: Copy all data from the original version.
• Copy visible data: Copy the data visible within the current table filters.
• Choose which data to copy: Manually choose which data you want to copy.
1. Select a table.
2. From the toolbar, select (Version Management).
3. Locate the version that you want to copy and choose (Copy).
4. In the Copy Data to a Private Version dialog, enter a name for the version.
5. Leave the default Category, or choose a different category.
6. To copy values from the version, select one of the following options:
Option Description
Copy data in recommended planning area Copies a recommended subset of data defined in the model settings.
Copy visible data Copies the data visible within the current table filters.
Choose which data to copy Change the filter values for the new private version.
Note
If your private version exceeds the size limit set up for the model, you’ll see a warning. You can still
create the version, but performance may be affected due to its size. For better performance, consider
creating a private version and selecting Copy data in recommended planning area or Choose which data
to copy to reduce the size of the version.
If you chose Refine Filter, the Set Filters for Account dialog appears; select your new filter options, and then
select OK.
Results
You can now modify the data in the new private version.
Context
You can create blank public or private versions by selecting (New) in the version management panel.
For newly created private versions, choosing a source version is optional. If you choose a source version, data is
not copied to the new version, but data locking settings and dimension property settings are carried over. If no
source version is chosen, then the dimension property will be blank in the new version.
Procedure
1. Select a table.
2. From the toolbar, select (Version Management). The version management panel is displayed.
3. For either a Public Version or a Private Version, choose (Create Blank Public Version or Create Blank
Private Version).
Note
Create Blank Public Version is only available if you have the role-based Maintain permission for the
version dimension and the Maintain access to the model.
4. In the Create Blank Public Version or Create Blank Private Version dialog, enter a name for the version.
5. Leave the default Category, or choose a different category.
6. If you are creating a blank private version, decide whether you want to choose a Source Version. This is
optional, and if you choose a source version, data locking settings and dimension properties will be carried
over but data will not be copied.
7. If currency conversion is enabled on your model:
• Choose the Rate Version.
• If you are creating a blank private version with an account model, choose the Change Conversion.
8. Choose Create.
Results
The newly created blank public or private version will appear in the version management panel and the
corresponding column will appear in the table.
In SAP Analytics Cloud, you can save a private version to make it public, either in the same category or as a
different category. You can also save edits to public versions.
Context
When you save a private version that has multiple currencies, the data is displayed in the correct currencies.
If you have added comments to the private version, those comments will also be published to the public
version. If you don't want the comments published, deselect the Include comments option. (You can't deselect
the comments option when you drag the private version to the public versions section; in that case, the
comments are always published.)
One way to make a private version public is to publish the changes on a private version to an existing public
version. If you make changes to a private version that conflict with another user's changes published to the
existing public version while you were editing, you can choose whether to publish only your non-conflicting
changes and discard your conflicting changes, or publish all your changes while overwriting the other user's
changes.
Note
The choice of whether to discard or overwrite conflicting changes is only available to models with
measures. It is not supported for classic account models.
Note
To have this choice, go to System Administration System Configuration , and turn on the Display options
to resolve version publish conflicts toggle.
Procedure
1. Select a table.
Note
Publish As is only available if you have the role-based Maintain permission for the version
dimension and the Maintain access to the model.
If you chose to save a new version, the version is now available in the Public Versions section of the Version
Management panel. If you chose to update a version, the version is updated. If you saved or dragged a
private version, the original private version is no longer available in the Version Management panel.
Note
If data access control is enabled for a dimension in the model, it restricts changes to data in public
versions, but not in private versions. In a private version, you may want to simulate a scenario that
involves changing dimension members for which you do not have Write permissions, for example. In
this case, you cannot save the changes to those members to a public version. Only members for which
you have Write permissions will be updated. For more information about data access control, see Set
Up Data Access Control [page 803].
In SAP Analytics Cloud, you can share a private version with other users. You can choose to make the version
read-only, or allow others to edit it.
Context
Data access control settings on dimensions other than the version may restrict other users from viewing the
data if they do not have the same permissions as you. For details, refer to Learn About Data Security in Your
Model [page 799].
Procedure
1. Select the table that has the private version that you want to share.
3. Select the private version from the list and then select More Share : choose either Read Only
or Read and Write.
4. In the dialog that appears, choose the users and then choose OK.
Results
The private version is shared and a notification is sent. The users can now modify the data.
Note
If you gave the users write access to a shared private version, they may be able to delete it.
Restriction
Only the owner of the private version can see the version history details.
In SAP Analytics Cloud, public versions are often controlled by data privileges and other security measures,
such as data locks or validation rules. These features control who can update public versions and what data
can be edited.
When working with public versions, permissions applied to a user’s role indicate whether they can read, update,
maintain, or delete data from a model.
While all users with read access for a public version can create a private version to plan on, other security
settings may be enforced when you publish the version. Security settings will also be enforced when publishing
changes to a public version.
If some of your changes are restricted by existing security feature settings, those changes will be lost when
you publish to the public version. You will receive a warning message asking whether you want to publish the
remaining data or cancel the publish.
Note
If you choose Show Unpublished Data in the warning message, you can see how much data was unpublished,
along with a reference list of the unpublished data and the reasons it could not be published. The reference
list is limited to 1000 rows of data. You can export the reference list to a CSV file; however, the CSV file will
contain the same 1000 rows of unpublished data as the table, not all unpublished data.
Note
To use the Publish As option that will create a new public version, you need to have the role-based Maintain
permission for the version dimension and the Maintain access to the model. For the Publish option, you
need to have the Maintain access to the model and Write access to the public version to be updated (if Data
Access Control is switched on).
Context
When you make edits to a public version, the version is put into Edit Mode. This creates a private version of the
public version that only you can see until the changes are published.
You can leave edits unpublished and resume later; however, it is best to publish changes often when editing
public versions. If other users are editing the same version, your changes could overwrite theirs when you
publish, because the most recent update overwrites previous states.
If you make changes to a version that conflict with another user's changes published to the same version
while you were editing, you can choose whether to publish only your non-conflicting changes and discard your
conflicting changes, or publish all your changes while overwriting the other user's changes.
Note
The choice of whether to discard or overwrite conflicting changes is only available to models with
measures. It is not supported for classic account models.
Note
To have this choice, go to System Administration System Configuration , and turn on the Display options
to resolve version publish conflicts toggle.
If your model has a recommended planning area applied, you can still choose to put all version data into
Edit Mode by manually selecting Start Edit Mode on a public version in a table or the Version Management
panel. However, write permissions still apply and you will only be able to publish changes within your write
permissions.
You can also choose to Customize Planning Area when putting the public version into edit mode. This option will
let you choose which data will be put into edit mode. However, you will only be able to publish changes within
your write permissions.
If a model has a recommended planning area defined, the Builder panel for any table built on the model
includes an Auto-generate based on the table context checkbox. After you select this checkbox, the scope of
the recommended planning area when you put a public version into Edit Mode takes into account data access
control, data locking, or both, as well as the current table context (story, page, and table filters).
Note
Data actions will only run on data within a public version's planning area when the version is in Edit Mode. If
you use the recommended planning area, data actions will only run on this data. If you put all version data
into Edit Mode, data actions may run on all version data; however, write permissions still apply.
Procedure
If a recommended planning area has been defined on the model, this data will be put into Edit Mode once
the first changes are made. If the checkbox Auto-generate based on the table context has been selected,
the recommended planning area will also take into account the table context.
• If you want to manually decide which data to put into Edit Mode, do one of the following:
• Choose your public version in the Version Management panel and select Start Edit
Mode .
• Right-click the version name in a table and select Version Start Edit Mode .
If your model has a recommended planning area defined, you can choose from the following options:
• Recommended Planning Area: Only a recommended subset of data will be put into Edit Mode,
which can help optimize performance on large versions.
Note
If the table has its Auto-generate based on the table context checkbox selected, you can't manually
decide which data to put into Edit Mode. A recommended subset of data that also takes into account
the table context will be put into Edit Mode.
2. After you have finished your edits, do one of the following actions:
Option: Publish all the public versions that you are editing.
Action: Choose one of the following options:
• Select Publish Data from the toolbar. In the Publish Data window, you'll have a chance to check which
versions are involved.
• Navigate to another part of the application, then select Publish and Leave. You can select Show Details to
check which versions are involved.
Option: Publish one public version.
Action: 1. Select a table with the public version you want to publish.
Option: Revert changes for one public version.
Action: 1. Select a table with the public version you want to publish.
Option: Publish or revert changes to multiple versions.
Action: 1. Select Publish Data Advanced from the toolbar. A dialog appears, prompting you to revert your
changes or publish them. Choose one of the following options:
• To update all the versions at once, select either Revert All or Publish All.
2. Select OK.
Related Information
When working with a table based on a planning model in SAP Analytics Cloud, you can create and edit model
values by typing in the table cells.
In this section, you will learn about the following topics related to editing values in a table:
You can also explore more advanced topics related to data entry:
In this video you’ll explore the range of planning options in tables by reviewing key options in the Builder panel,
entering values in empty leaf and parent cells, adjusting the values in booked cells, copying values, distributing
values using the planning panel, and creating new dimension members.
When working with a table based on a planning model in SAP Analytics Cloud, you can create and edit model
values by typing in the table cells.
• You have permissions and authorizations to make changes to the model values.
Users with BI roles can create private versions and change single booked values. Planner Reporter,
Modeler, and Admin roles can use any of the planning tools to enter data on public or private versions.
For more information, see Permissions [page 2844]. For more information about working with versions,
refer to Create, Publish, and Manage Versions of Planning Data [page 2170].
• The cell is not locked, either by a value lock or by data locking.
For more information on value locks in stories, see About Value Lock Management [page 2206]. For more
information on data locking, see Configuring Data Locking [page 2525].
• If the cell is calculated by a formula, the formula must have one or more inverse formulas defined, and
the target cells must be booked. For more information, see Inverse Formulas [page 917]. For restricted
measures and accounts, you can also change the values of cells that are included in the scope of the
restriction.
• The value can be disaggregated from the cell to one or more leaf members for each dimension in the
model. For more information, see Disaggregation of Values During Data Entry [page 2207].
Tip
When you modify data in a table cell, all of the cells in the currently visible (processed) area – including any
new cells (records) – will be highlighted. If your table has a vertical scroll bar, cells in areas that haven't been
processed yet won't be highlighted.
Note
You can choose to show tooltips that explain why a selected cell does not allow data entry. To enable these
tooltips, from your table select Show/Hide Reason for unplannable data .
Data entry and copy and paste operations do not affect members that have been excluded from the table
by story, page, or table filters on dimensions or dimension attributes. However, data actions and allocation
processes can still affect these members.
Members that are selected in a filter but set to invisible are treated as visible members when you type or paste
values in the table.
When entering values in table cells you can overwrite the existing value with an absolute or relative value.
Absolute values are used when you want to type the exact number, and relative values are used when you want
the system to help with changes such as increments or decrements.
Whether you do absolute or relative changes you can also choose to use the scale notation.
• Thousand: enter 1T or 1K (for example, 1T = 1 Thousand).
• Million: enter 1M (1M = 1 Million).
• *nn: multiply the current value by nn.
• /nn: divide the current value by nn.
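As an illustration of the notations above, here is a minimal sketch of how such entries could be interpreted. The function name and parsing rules are illustrative assumptions for this example only, not the actual SAP Analytics Cloud parser.

```python
# Hypothetical helper illustrating the entry notations described above.
# Not SAP Analytics Cloud code: the parsing rules are simplified assumptions.

def apply_entry(current: float, entry: str) -> float:
    """Return the new cell value after typing 'entry' into a cell holding 'current'."""
    entry = entry.strip().upper()
    # 1K and 1T both stand for one thousand; 1M stands for one million.
    scales = {"K": 1_000, "T": 1_000, "M": 1_000_000}
    if entry.startswith("*"):              # *nn: multiply the current value by nn
        return current * float(entry[1:])
    if entry.startswith("/"):              # /nn: divide the current value by nn
        return current / float(entry[1:])
    if entry and entry[-1] in scales:      # absolute value with a scale suffix
        return float(entry[:-1]) * scales[entry[-1]]
    return float(entry)                    # plain absolute value

print(apply_entry(500.0, "*2"))   # relative change: 1000.0
print(apply_entry(0.0, "1K"))     # absolute scaled value: 1000.0
```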
To modify the existing value with a specific value, do one of the following:
• Use the backspace key to delete some or all digits of the current value.
• Use the mouse (or arrow + shift key) to select the specific digits that you want to replace or delete.
Example
You can change cell values by copying and pasting, cutting and pasting, or using allocation features. For more
information, see the following topics:
To have a smoother data entry experience in your table, you can use fluid data entry mode instead. In this
mode, data values entered in fast sequence are processed together as a batch; you don't need to wait for the
system to update between entries.
To enable this mode, in the Builder panel look for the property Default Data Entry Mode, and then change the
default option to Fluid Data Entry Mode.
Note
Default Data Entry Mode is set to Fluid Data Entry Mode by default for newly created tables.
The option Fluid Data Entry Mode is only available for an optimized presentation table.
If you use the Add Member functionality in fluid data entry mode, enter data on the newly added member and
wait for it to be processed before moving on; this gives you a better data entry experience.
When you use cells outside the data region (custom cells) in the fluid data entry mode, cell references won’t
update after data entry.
Note
To set or change the time interval for fluid data entry batches, go to System Administration System
Configuration , and update the value for Time interval of fluid data entry batches (in milliseconds). Data values
are treated as belonging to the same fluid data entry batch when the time between every two successive
entries is within this interval.
For example, suppose you set the time interval to 500 milliseconds (0.5 seconds). During your data entry, if
there is a 0.5-second period of inactivity, the current batch is processed, and the next data value you enter is
treated as the first one of a new batch.
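The batching rule in this example can be sketched as follows. This is an assumption about the described behavior (grouping entries by inactivity gaps), not actual product code; timestamps are in milliseconds.

```python
# Group data entry timestamps (in milliseconds) into fluid data entry
# batches: a gap longer than the configured interval starts a new batch.
# Illustrative sketch only, based on the behavior described above.

def group_into_batches(entry_times_ms, interval_ms=500):
    batches, current = [], []
    for t in entry_times_ms:
        if current and t - current[-1] > interval_ms:
            batches.append(current)   # inactivity gap exceeded: close the batch
            current = []
        current.append(t)
    if current:
        batches.append(current)
    return batches

# Entries at 0, 200, and 400 ms arrive in fast sequence; the entry at
# 1200 ms follows an 800 ms pause, so it starts a second batch.
print(group_into_batches([0, 200, 400, 1200]))  # [[0, 200, 400], [1200]]
```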
Note
What you need to know about the fluid data entry mode:
• During fluid data entry, after you change a cell, its parent and child cells are locked, data changes are
processed, and then locked cells are unlocked. However, there are rare cases where cells may be locked
slowly or unlocked too fast, making both parent and child cells editable at the same time. Please pay
attention as you may get unexpected values in the parent or child cells. Note that parent and child cells
won't be locked during fluid data entry on a table based on an SAP BPC live data model.
Note
During fluid data entry, if the table layout is changed (removal or addition of rows/columns), you may get
unexpected results like misplaced values or loss of values. Here are some of the scenarios where this issue
may arise:
• changes of data that lead to the removal of rows or columns in booked mode.
• data entry on a parent node that makes visible its child members and thus a whole row or column in
booked mode.
• selecting or deselecting in page filters that results in the removal or addition of rows/columns.
If you run into this issue, please manually refresh your table to make sure the table layout is shown
correctly and your data entry will then work as expected.
When you change a booked value, the leaf members that aggregate up to that value are adjusted proportionally
to reflect your change.
When you enter a new value in an unbooked cell, which displays a dash character (-) instead of a value,
values are also booked to one or more leaf members for each dimension that aggregates up to the cell. The
Unassigned member usually receives these values. For more information, see Disaggregation of Values During
Data Entry [page 2207].
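The proportional adjustment described above can be sketched with simple arithmetic. This illustrates only the core scaling; the actual behavior also involves unbooked cells, locks, disaggregation to the Unassigned member, and rounding.

```python
# Minimal sketch of proportional disaggregation: leaf values are scaled by
# new_total / old_total so they still aggregate up to the entered value.
# Illustrative only; not the actual SAP Analytics Cloud algorithm.

def disaggregate(leaves, new_total):
    old_total = sum(leaves)
    if old_total == 0:
        # No existing proportions: an even split is one plausible choice here.
        return [new_total / len(leaves)] * len(leaves)
    factor = new_total / old_total
    return [v * factor for v in leaves]

# January, February, and March book 100, 200, and 300 (Q1 = 600).
# Entering 1200 on Q1 doubles each month, keeping the 1:2:3 ratio.
print(disaggregate([100, 200, 300], 1200))  # [200.0, 400.0, 600.0]
```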
Normally, versions can be specified in the table axis and filters. They can also be specified in calculations:
either in account formulas or calculated measures in the Modeler using the RESTRICT or LOOKUP formulas, or
in restricted accounts or measures in the story. You can refer to the following scenarios to see how the version
is determined for a table cell, and which corresponding public edit mode is created when you do data entry.
Data entry is allowed on both Restricted_Version Actual and Restricted_Version Plan, and the public edit mode
of Actual and Plan will be created, respectively.
Note
When it comes to data entry on calculated measures whose formulas involve multiple versions, data entry
is made on the version of the target of the inverse formula, and a corresponding public edit mode of that
version is created. For example:
Data entry is allowed on Nested_ResctictActualAddPlan for the Plan version, which is the version restricted
in the target of the inverse formula, and a public edit mode of the version Plan is created.
Version Conflicts
When there are conflicts between versions specified in different places for a table cell, data entry follows these
rules:
If the versions specified in different places are the same, data entry is allowed and a public edit mode for the
version is created. If the versions conflict, data entry is not allowed. Note that constant
Data entry is allowed on CrossCalc_Plan for Plan but is not allowed for versions Actual, Budget, and Budget
N+1, because version is restricted in CrossCalc_Plan to be Plan.
For the cross calculation of CrossCalc_Plan_CS, because Constant Selection is enabled and the constant
dimension set to version, the Plan version has priority over Actual, Budget, and Budget N+1 specified in the
columns.
The version for all table cells of the CrossCalc_Plan_CS is treated as Plan and the value of CrossCalc_Plan_CS
for Plan automatically fills the cells for other three versions. Data entry can be made on CrossCalc_Plan_CS for
all versions, which will always lead to the public edit mode of version Plan being created.
As shown in the table, data entry is only allowed on cells where versions specified in calculations in the row and
column are the same.
Note
Let’s dive into an example of data entry on calculated measures using RESTRICT and LOOKUP that restrict
a particular version.
For Restict_Version Actual, data entry is only allowed on version Actual (the version restricted in the
formula) but not allowed on other versions. Data entry on version Actual will lead to the creation of its
public edit mode. Any data entry attempt on other versions will lead to this error message:
Sorry, we couldn’t carry out your data entry because all leaf member combinations
for this cell are excluded by filters. Please check any advanced filters or
restricted members on the account or cross calculation dimensions, or contact the
story owner.
For LookUp Version Actual, data entry is allowed on version Actual (the version restricted in the formula).
Once a data entry is made on version Actual, the public edit mode of version Actual is created and the value is
filled in for the other versions. After that, data entry is allowed on all versions for LookUp Version Actual, and
the entered value is automatically filled in for the other versions. For how the LOOKUP formula works, see
Restrict and Lookup [page 906].
The updated cell cannot be changed as it is not input enabled: The target version
is not a plan version.
For more information about planning data entry errors, see About Planning Data Entry Errors [page 2216].
In tables and grids in SAP Analytics Cloud, you can copy and paste cell values, the underlying values of their
leaf members, and formulas. You can also create references between cells.
Data can be copied within or across grids and tables, and you can also copy data from an external source.
In a table, you can paste data into cells that can receive data input.
You can also cut and paste values. For details, see Cutting and Pasting Cell Values [page 2202].
You can copy and paste from an external source, such as an Excel spreadsheet. To do this, select the source
cells in the external data source and press Ctrl+C in Windows or Cmd+C on a Mac, then select the target cells
and press Ctrl+V or Cmd+V.
When copying data within a table, there are two types of paste operations: pasting underlying values or pasting
overall values.
When you copy a source cell with model data, all of the underlying values that aggregate up to it are copied as
well. By default, all of these values are pasted if possible.
For example, you might copy a Q1 member and paste it to Q2 for the same version. On the month level for a
calendar year, the value for January will be copied to April, the value for February will be copied to May, and the
value for March will be copied to June.
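The Q1-to-Q2 example above amounts to a positional mapping of leaf members. The sketch below illustrates that idea with hypothetical member names; it is not an SAP Analytics Cloud API.

```python
# Positional mapping of underlying leaf values when pasting across two
# periods at the same level (illustrative example, not product code).
q1 = {"Jan": 100, "Feb": 200, "Mar": 300}   # source leaves under Q1
q2_members = ["Apr", "May", "Jun"]          # target leaves under Q2

# Each source leaf value is copied to the target leaf in the same position.
q2 = dict(zip(q2_members, q1.values()))
print(q2)  # {'Apr': 100, 'May': 200, 'Jun': 300}
```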
Underlying data can be pasted when the following conditions are met:
• You are copying across two time periods at the same level, such as December and January, or you are
copying across two different leaf members, such as from one sales manager to another.
If you are using non-standard user-managed time hierarchies on the date dimension, the target cell and
source cell need to have the same time hierarchy. For example, if you are copying data from December
to January, and there are more weeks in December, the underlying values won’t be pasted.
When these two conditions are not met, only overall values are pasted. In this case, the behavior is the same as
typing the value into the cell. The distribution of the copied value among the leaf members is determined by the
existing proportions between those members if the cell already has a value.
Note
Copying and pasting underlying values isn’t supported with a BPC live data connection. You can still copy
and paste overall values, though.
If you want to paste overall values only, you can select Edit Paste Overall Values Only from the toolbar.
You can also right-click and choose Paste Overall Values Only from the context menu.
Note
Choosing Paste or Paste Overall Values Only from the context menu does not support pasting data from
external sources such as Excel.
You can copy and paste to and from table cells that show percentages. When copying and pasting between
percentage cells and numerical cells in a table or in an external spreadsheet, 100% is treated as a numerical
value of 1.
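The 100%-equals-1 convention above can be shown in a couple of lines. This is a sketch of the described equivalence, not product code.

```python
# When pasting between percentage cells and numerical cells, 100% is
# treated as the numerical value 1 (so 25% corresponds to 0.25).
def percent_to_number(percent_value: float) -> float:
    return percent_value / 100.0

print(percent_to_number(100.0))  # 1.0
print(percent_to_number(25.0))   # 0.25
```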
A rectangular group of selected cell values can also be copied and pasted to multiple cells in a table.
Note
If the area that you are pasting to contains cell references, you cannot paste multiple cells to that area. You
will need to copy and paste values individually, or paste to a different area.
This type of operation can also be performed by selecting a source area and dragging the bottom right corner
of the region horizontally or vertically to include the target area. For example, you could select several account
values for January and February 2019 and drag them to the December 2019 cell to paste the January and
February data to alternating months for the rest of the year. If the target region is larger than the source region,
the copied values are repeated. If the target region is smaller, not all of the source values are pasted.
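The repeat-or-truncate behavior described above can be sketched as a cyclic fill. This is a hedged, one-dimensional illustration only; the real drag operation works on two-dimensional cell regions.

```python
# When the target region is larger than the source, the copied values
# repeat cyclically; when it is smaller, the extra source values are
# simply not pasted. Illustrative one-dimensional sketch.

def fill_region(source, target_len):
    return [source[i % len(source)] for i in range(target_len)]

print(fill_region([10, 20], 4))  # [10, 20, 10, 20]  (values repeat)
print(fill_region([10, 20], 1))  # [10]              (values truncated)
```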
When copying from a grid or an external spreadsheet, cells that are empty, contain the en dash (–) character,
or contain the minus (-) character are treated as unbooked. You can therefore copy and paste data from an
external spreadsheet to a region of unbooked cells while keeping some of the target cells unbooked. Pasting an
unbooked value will replace the booked cell with an unbooked cell, and delete the underlying values.
If your model is using non-standard user-managed time hierarchies, only the overall values can be pasted to
cells with different time hierarchies. Refer to Customize Date Dimensions [page 676] for more information
about customizing date dimensions.
For more information on improving performance when conducting mass data entry, refer to Performance
Considerations [page 2205].
You can paste values to a calculated measure or account if you've defined an inverse formula for it.
Pasting to currency conversions is also supported. For a model with measures, you can copy and paste values
to or from base currency measures and conversion measures. However, the values won't be converted. See
Plan with Currency Conversion [page 2239] for details.
For example, you may have created a blank version with an empty top-level account like Income Statement,
which has several calculated subaccounts. To quickly fill in the values, you can copy and paste the value of this
account, including its underlying records, from an existing version to the new version.
Say one of the Income Statement subaccounts is Revenue, calculated as Price * Units Sold. If Price and Units
Sold don't aggregate up to Income Statement, Revenue doesn’t receive any values in the new version. To get
those values, you can copy the calculation inputs: Price and Units Sold.
When you copy an aggregated account with calculated subaccounts, any empty target cells will be
highlighted. Source and target cells need to belong to the same account and meet the requirements for
copying and pasting details. Also, the target cell can't be a restricted measure or account.
You can copy and paste between restricted accounts and measures. The target cells must be included in the
restriction applied to the account or measure, however.
In many cases, such as when you are copying from one period to another in a forecast layout, it's helpful to
copy the underlying values of the source cell. You can copy underlying values across restricted accounts and
measures when a few conditions are met:
In SAP Analytics Cloud, you can cut and paste cells to paste the values while deleting them from the source
cells.
Cut and paste by selecting source cells and pressing Ctrl + X or Cmd + X , then selecting target cells and
pressing Ctrl + V or Cmd + V .
In most cases, the cell that you cut will become unbooked when you paste the value. There are a few
exceptions, though:
• If the value of the source cell cannot be deleted, it will only be copied. You'll see a message in this case. See
Deleting Values in a Table [page 2203] for a list of restrictions.
• If you cut a group of cells and paste to an overlapping group of cells, new values are pasted to the
overlapping cells.
• If you cut a cell and paste it to a cell that aggregates up to the source cell, the source cell keeps its original
value. For example, you can cut from 2020 and paste to December 2020. In this case, 2020 keeps the
same value but only December is booked.
For more details, see Copying and Pasting Cell Values [page 2199].
In SAP Analytics Cloud, you can delete values in a table to remove them from a public or private version of a
planning model, or to simulate changes to embedded data.
You can delete data values from a table by selecting a cell or multiple cells and then pressing Delete . You are
able to delete from public versions and private versions. Pasting an unbooked cell to a booked cell (from a table
or an external spreadsheet) will also delete the underlying data values.
When you choose to delete a value, keep the following information in mind:
• Delete removes the value instead of setting the cell value to zero.
• When publishing, the deletions are propagated to the public version for those values that were already
present in the private version when it was created.
• You can use the history feature to undo and redo delete actions.
• In Mass Data Entry mode, delete followed by copy & paste operates as a simple value change, not a
redistribution.
• For first or last accounts, delete removes all data in the time span of the node (for example, the whole
year).
Restrictions
There are some things that you need to be aware of when deleting values:
• Deleting values may affect performance for queries and planning. For more information, refer to
Performance Considerations [page 2205].
• You can't use delete in assignee versions of input schedules or in BPC write-back models.
• Delete doesn't work for formula accounts with inverse formulas.
• Delete doesn’t work in tables that have cell locks. Unlock all cells in your table before deleting values.
You can also cut and paste cell values. The cut cell values are deleted when you paste them. For more
details, see Cutting and Pasting Cell Values [page 2202].
In SAP Analytics Cloud, you can enter multiple values in a table without waiting for the system to update
between entries.
Context
Adding multiple values to a table can be time-consuming if you have to wait for the system to update the data
source after each entry. This feature provides an editing session that allows you to enter multiple values before
processing the updates. You also have the option to process and save some updates and then continue to make
changes.
Procedure
Tip
You can also select this option by using the following keyboard shortcut: Ctrl + Alt + M (for
Windows) or Control + Option + M (for Mac).
You can enter values in unbooked cells and change existing cell values. Each time you enter a cell value,
the cells that aggregate up to it are locked until after you have left the editing session. Both cells directly
changed by you and those with values updated by calculations will have a background color applied (for
example, a blue background color).
Note
You can copy and paste data to multiple cells during a data entry session, but there are some
restrictions:
• Underlying values cannot be copied across versions; only values can be copied.
• The source cell used for the copy is disabled to avoid conflicting entries.
• Underlying values cannot be copied from a cell that was originally NULL.
Note
During a mass data entry session for a table built on an SAP BPC Embedded model, after you change the
value of a member, we suggest that you don't change the values of its parent members (including any totals it
contributes to) or its child members.
3. To control how your data changes are applied, select one of the following:
• Process Data - Process the changes you've made so far and then continue adding data. When you
process the changes, all the changes resulting from data entry will have a different background color
applied (for example, a yellow background color).
Tip
You can also select this option by using the following keyboard shortcut: Ctrl + Enter (for
Windows) or Control + Return (for Mac).
• Exit Mass Data Entry - Any changes that have not been processed won't be saved.
Results
For data entry processing performance reasons, it is better to avoid updating too many cells at the same
time. When updating multiple cells, the application runs dependency checks and verifies potential restrictions
pairwise, which in several cases leads to quadratic growth in the number of changed cells. Although it may
seem inconvenient, it is actually faster to split the update into multiple smaller ones than to run a single
sizable update in one go.
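A back-of-the-envelope calculation shows why splitting a large update helps. The pairwise-check count below is an illustrative cost model, not the product's actual one: k batches of n/k cells cost roughly n²/k checks instead of n².

```python
# Pairwise dependency checks among changed cells grow quadratically, so
# several smaller updates are cheaper than one large one. Illustrative
# cost model only, based on the quadratic growth described above.

def pairwise_checks(cells: int) -> int:
    return cells * (cells - 1) // 2   # number of unordered pairs

one_update = pairwise_checks(1000)       # single update of 1000 cells
four_updates = 4 * pairwise_checks(250)  # same cells in four updates
print(one_update)    # 499500
print(four_updates)  # 124500
```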
A technical limit restricts the number of low-level data changes to avoid runtimes growing too long. When
that limit is exceeded, the application produces an error message and data entries are rejected. This can also
Simple Scenarios
For simpler cases of mass data entry, the above-mentioned dependency checks are not necessary as the grid
setup isn't as complex. Dependency checks can be skipped if the grid meets the following conditions:
Tip
This can be achieved either by selecting leaves manually, or by selecting the Show only leaves in widget
option in the hierarchy settings for the given dimension, if available.
Note that for performance reasons, the actual area of changed cells is not specifically considered with respect
to these conditions. The whole grid needs to fulfill them.
Larger copy-and-paste mass data entries often meet these conditions. If they don’t already, a few changes
may be enough to make them.
This can significantly improve performance. If you notice a sudden drop in performance, it can be worth
checking whether the grid still meets the conditions.
Bulk Deletions
This also applies when deleting a large number of cells. However, there are two main differences:
• For deletions to run quicker, visible dimensions don’t have to be restricted to leaves only.
• Performance gains are only possible if the cells you delete form a rectangle with no gaps.
The latter is met automatically if you mark an area for deletion by dragging from one edge to another.
However, removing a few cells from a large area can significantly slow down the processing, even if they are still
unbooked and don’t necessarily require deletion.
In SAP Analytics Cloud, you can prioritize locks to control data entry for table cells.
When you lock a table cell, the value in that cell will not be updated when you make changes to the data. By
default, data entry processes have lower priority than cell locks. If you want to give data entry processes higher
priority, you can change the order of the locks.
You can rearrange locks in the Value Lock Management panel to change their priority. After you move one lock,
priority values appear for all locks. The first item in the list (lock or data entry process) shows priority 1, which
is the highest priority.
You can set locks individually in the table (by selecting a cell and then using Lock cell for each lock) or you can
use the Value Lock Management panel to quickly lock multiple cells.
In SAP Analytics Cloud, when you enter or change data for a planning model cell, the value is automatically
spread to leaf members that aggregate up to it. This process is called disaggregation.
For planning models, data is stored in leaf members of each dimension hierarchy. Parent members only
show the aggregated values of their children, and generally don't contain values on their own. Because of
this, disaggregation happens whenever you change data that represents more than one combination of leaf
members for all dimensions.
Tip
Keep in mind that disaggregation doesn’t just happen over the dimensions added to your table axes. For the
other dimensions, disaggregation will occur unless they’re filtered to a single leaf member.
Note
If your model contains both measures and dimensions, a modeler can choose whether to use aggregation
settings from the accounts or measures. To learn more, see Set Structure Priority and Create Custom Solve
Order [page 701].
Classic account models don’t have separate measures, so the accounts always set the aggregation type.
There are a few ways that you can get more control over disaggregation while planning in a table:
• Change the data disaggregation in the Builder panel to choose how data disaggregation occurs: Set Up
Model Preferences [page 664]
• Change the input enablement settings to control if cells will appear as editable or read-only in a table: Set
Up Model Preferences [page 664]
• Apply locks to cells that shouldn’t receive values: About Value Lock Management [page 2206]
• Copy and paste a cell that has the correct underlying distribution: Copying and Pasting Cell Values [page
2199]
• Use the planning panel to specify values and proportions among a group of cells: Assign and Distribute
Values with the Planning Panel [page 2233]
When you change a booked value, the leaf members that aggregate up to that value are adjusted to reflect your
change. Usually, this happens proportionally.
Some aggregation types don’t support data entry on booked parent members, so the corresponding cells
won’t be input enabled in a table. You can check the reason that a cell is greyed out by selecting Show/Hide
Reason for unplannable data from the table action menu ( ) and selecting the cell.
When you enter a value in an unbooked cell, which displays a dash character (-) instead of a value, the following
rules are used to determine how to disaggregate the value along each dimension:
• If the unassigned member (#) is available as a leaf member of the source cell, this member receives the
same value as the source cell, and other members of the dimension remain unbooked.
• The unassigned member may not be available, for example, because it is filtered out of the table, because
it does not aggregate up to the source cell, or because it does not exist for dimensions such as date and
account. In these cases, the value is spread to leaf members of the dimension based on the aggregation
type.
• This aggregation type is determined either by account or measure settings, depending on which has
priority. When the account or measure just has a default aggregation type, all dimensions will use it for
disaggregation:
• SUM: The source value is divided equally among the leaf members. For example, if you enter one million
in a cell with two leaf members, the leaf members receive 500,000 each.
• AVERAGE and AVERAGE excl. NULL (available as exception aggregation; to support disaggregation, they
need to be used with SUM): Each leaf member receives the same value as the source cell. In this case, the
leaf members receive one million each.
• NONE: Each leaf member receives the same value as the source cell. The leaf members receive one
million each.
• FIRST (available as exception aggregation): The first leaf member receives the same value as the source
cell. If you enter one million for Q1 using the calendar year, for example, January receives one million.
• LAST (available as exception aggregation): The last leaf member receives the same value as the source
cell. If you enter one million for Q1, March receives one million.
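These rules can be summed up in a short sketch (illustrative Python only, not SAP code; the member names and the one-million example match the descriptions above):

```python
# Minimal sketch of the disaggregation rules for a few aggregation types.
# Assumed simplification: each leaf receives a single numeric value.
def disaggregate(value, leaves, aggregation="SUM"):
    """Return the value each leaf member receives from the source cell."""
    if aggregation == "SUM":
        share = value / len(leaves)              # divided equally
        return {leaf: share for leaf in leaves}
    if aggregation in ("AVERAGE", "NONE"):
        return {leaf: value for leaf in leaves}  # every leaf gets the source value
    if aggregation == "FIRST":
        return {leaves[0]: value}                # only the first leaf is booked
    if aggregation == "LAST":
        return {leaves[-1]: value}               # only the last leaf is booked
    raise ValueError(f"unsupported aggregation: {aggregation}")

print(disaggregate(1_000_000, ["Jan", "Feb"], "SUM"))       # 500,000 each
print(disaggregate(1_000_000, ["Jan", "Feb", "Mar"], "LAST"))  # Mar: 1,000,000
```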
For some exception aggregation types, data can’t be disaggregated from unbooked cells. In this case, you can
enter data directly to leaf members of the exception aggregation dimensions, but not to their parents. If the
exception aggregation dimensions aren't added to a table axis, they need to be filtered to a single leaf member
to enable data entry:
• SUM
• NONE
• MIN
• MAX
• AVERAGE excl. 0, Null
• MEDIAN (and variants that exclude null and zero values)
• FIRST QUARTILE (and variants that exclude null and zero values)
• THIRD QUARTILE (and variants that exclude null and zero values)
• AVERAGE when used with NONE as default aggregation
• COUNT (when booked, leaf members will only show a value of 1)
For other aggregation types, data entry is not supported at all, for example:
• LABEL
• COUNT excl. NULL and COUNT excl. 0, NULL
• NONE when used with SUM as default aggregation
Note
If you enter data for an aggregated account member, its children may include different account types
such as Income and Asset accounts as well as Expense and Liabilities and Equity accounts.
For more information on the aggregation types and sign switching, see Attributes of an Account Dimension
[page 606].
If you do not want to book values to the Unassigned member for a dimension, you can use one of the following
methods:
• After the value is booked to an Unassigned member in the table, select it and select Distribute
Values to distribute the value to other members.
• To prevent values from being booked to the Unassigned member, filter the dimension and select all
members except the Unassigned member. You can apply this filter to an individual table by selecting Add
Filters in the Builder panel, or to the entire story by selecting (Story Filter) from the top navigation
panel.
Note
• If your model contains many dimensions where the Unassigned member is not available, spreading
data to all leaf members for each dimension may result in slow performance. In this case, you'll either
get a warning about slower performance, or a message to filter the data or change the target cell to
create fewer data records in a single operation.
• Limits apply to data entry on unbooked cells: data can't be disaggregated to more than 650,000
records from a single cell. For a mass data entry session, the combined limit for all changed cells is
6,500,000 records.
• In some cases where you are working with more than one hierarchy for a dimension, data may be
booked directly to parent nodes. For more information, see Entering Values with Multiple Hierarchies
[page 2211].
In a table based on a planning model in SAP Analytics Cloud, you can enter data into cells calculated by
dynamic time filters, including member functions such as YTD.
When you enter data on a YTD, QTD, or MTD value, the change in value is assigned to the last booked member
of the date dimension.
For example, consider a table that shows YTD values for 2018, with $1 million booked to both January and
February 2018 and the rest of the months unbooked:
If you change the YTD value for April 2018 from $2 million to $3 million, the extra $1 million will be booked to
February 2018:
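The logic of this example can be sketched as follows (an assumed simplification in Python, not SAP code: the delta between the old and new YTD values is booked to the last booked month):

```python
# Sketch: a YTD change is assigned to the last booked member of the date dimension.
def book_ytd_change(monthly, new_ytd):
    """monthly: dict of month -> booked value (None = unbooked)."""
    booked = [m for m, v in monthly.items() if v is not None]
    old_ytd = sum(v for v in monthly.values() if v is not None)
    monthly[booked[-1]] += new_ytd - old_ytd  # delta lands on the last booked month
    return monthly

months = {"Jan": 1_000_000, "Feb": 1_000_000, "Mar": None, "Apr": None}
# Changing the April YTD value from 2 million to 3 million books the
# extra 1 million to February, the last booked month:
print(book_ytd_change(months, 3_000_000))
```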
Inverse functions can be defined for calculated members based on restrictions that use dynamic time filters.
This allows you to simulate planning scenarios by changing KPIs such as YOY growth, for example. For more
information, see the Inverse formulas with dynamic time filters section of Inverse Formulas [page 917].
In SAP Analytics Cloud, when you enter planning data for a dimension with multiple hierarchies, you can end up
with values booked directly to parent nodes in some of the hierarchies.
For example, you might be working with a geography dimension displayed in one table using a hierarchy
that shows the regions as leaf members, and in another table that shows regions as parent members of the
countries. If you use the first table to book a value to North America, the second table will show the value
booked directly to that parent node.
You might use this type of workflow if you want global leads to book values to the regions only, and then have
regional managers spread those values to the country level. But the tables can look confusing side-by-side. (If
you don’t want any booked parents to show up, you can turn on the Show only leaves in widget option. See
Modifying a Table [page 1363] to learn how.)
Tip
To set up this workflow, you can add hierarchies to the geography dimension that exclude the leaf members
of the original hierarchy. For more information, see Parent-child hierarchies with Not in Hierarchy members
[page 616]. You can then exclude the “Not In Hierarchies” member in your stories.
Working with the second table, you can continue performing data entry, for example, by changing the value for
North America or booking data to other members in the hierarchy. Because data is aggregated differently for
each hierarchy, the value of North America may appear inconsistent between the two tables when you book
data to its children.
If you want to disaggregate the value of North America to its leaf members, you can select the North America
cell in the second table and choose Distribute Values . The source cell is automatically locked, and
you can spread its value to leaf members of that hierarchy. See Assign and Distribute Values with the Planning
Panel [page 2233] for details.
In this case, all of the value is spread to the United States member. Since it's a leaf member in both hierarchies,
the data is consistent across the two tables.
If you need to get rid of the booked parent node and can’t use spreading, you can also delete the cell value
[page 2203] or Undo, Redo, and Revert Changes to Versions [page 2179].
Related Information
If you encounter an error message when entering data in a table in SAP Analytics Cloud, check here for
solutions and suggestions.
In some tables with advanced filters or multiple restricted measures and filters, you might see the following
message after trying to enter data:
Sorry, we couldn’t carry out your data entry because all leaf member combinations
for this cell are excluded by filters. Please check any advanced filters or
restricted members on the account or cross calculation dimensions, or contact the
story owner.
There are a few possible reasons that data entry didn’t work for a specific cell:
• The table has two or more restricted accounts or measures whose filters don’t overlap at all. For example, a
cell with a restricted account that shows 2021 data and a restricted measure that shows 2022 data.
• A restricted account or measure excludes members but those cells still appear in the table because they
are included for other accounts or measures.
• An advanced filter excludes a cell but the cell still appears in your table because of the structure of the
filter. For example, the filter includes 2019 plan data and 2020 forecast data, and you enter data on the
2019 forecast cell.
For details about more error messages in cases where versions defined in calculations like restricted measures
are involved, see Working With Versions [page 2195].
You entered planning data into a table and an error message appeared telling you to check if cells are blocked
by validation rules or cell locks, or if your data changes conflict with each other.
If the aggregation type of the account dimension for this story is None, an error message appears telling
you that some member values couldn't be assigned the correct value due to validation rules, cell locks,
or other technical reasons.
Cause: There's a validation rule that defines this data cell as an invalid member combination for data entry.
Solution: Please ask your planning modeler to check the validation rules defined for this model. See: Define
Valid Member Combinations for Planning Using Validation Rules [page 2547]

Cause: One or more child cells are locked by cell locks and prevent you from entering data to this parent
data cell.
Solution: Please check the cell locks in the table and unlock the affected cells. Then it will be possible to
assign the member value and the parent cell can be updated with your data entry. See: About Value Lock
Management [page 2206]
When entering planning data into a table or triggering a planning action, such as a data action or multi action,
you may have received an error message telling you that the data is outside of the planning area for that
version.
Data can’t be changed because it is outside of the planning area for this version.
The planning area is the data in a version that can be used for planning actions. When editing a public version
or creating a private version, you may have the option to use a recommended planning area to edit a defined
subset of data. If you choose to use the recommended planning area, or you filter the data copied to your
private version, only the data within this range can be changed.
To change data outside of the recommended planning area on a public version, you will need to publish your
changes and restart edit mode on All Version Data. Permission requirements for publishing changes will still
apply.
For details about planning on public versions or creating private versions, see Planning on Public Versions
[page 2188] and Creating Versions [page 2182]. For more information about the recommended planning area,
see Optimize Planning Models Using the Planning Area [page 695].
A model with measures can contain decimal measures or integer measures. Different restrictions apply to each
data type, and data entry and other planning operations need to meet these restrictions:
When an integer measure receives a decimal value, the largest integer no larger than the decimal value is
stored.
When an integer measure receives a value that’s outside the supported value range, there will be a numeric
overflow and an error message will be returned.
Sorry, we can't process your data change since there is a numeric overflow.
When a decimal measure receives a value that’s outside the supported value range or has over 7 decimal
places, the value may be rounded to an approximate value.
• For decimals, the restrictions don’t apply to trailing zeros. For example, 100.00 counts as a single digit
with no decimal places.
• If you’re using scale suffixes, the restrictions apply to the actual value that’s booked to the cell. For
example, entering 1.1 million results in a value with no decimal places.
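The described behavior for integer and decimal measures can be sketched like this (an illustrative approximation in Python, not SAP code; the 7-decimal-place rounding is taken from the text above, and the exact rounding mode SAP applies is an assumption):

```python
import math

# Integer measures store the largest integer no larger than the entered value.
def store_integer(value):
    return math.floor(value)

# Decimal measures keep at most 7 decimal places; values beyond that
# may be rounded to an approximate value.
def store_decimal(value):
    return round(value, 7)

print(store_integer(3.9))          # 3
print(store_decimal(0.123456789))  # rounded to 7 decimal places
```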
Dimension attributes give you additional details about each dimension member, like the color of a product or
the employee responsible for a region. There are two ways to add attributes to a table:
• Display the attribute of each dimension member without changing the table’s structure.
• Add attributes to the rows or columns like a dimension (this can add new cells to the table).
Each dimension member has only a single value for each attribute. However, if you’re showing unbooked data
and you add attributes to the rows or columns, your table may show cells that don’t exist in your model.
For example, in the image below each customer has a single customer status (shown in the italicized Customer
Status column), but cells are available for all three customer statuses. This is because customer status is
added to the table’s columns, too.
If you try to enter data in one of the cells highlighted in yellow, you’ll see the following message:
Sorry, we couldn’t carry out your data entry because the combination of dimension
members and attribute values is not valid for the cell that you tried to change.
Please check your filters or contact the story owner.
To avoid this issue, you can remove the attribute from the table's rows or columns. If you remove the
attribute, you can still display it by selecting Properties for the dimension in the Builder panel and choosing
the property.
When planning in a table, there are circumstances that don’t support planning operations, such as data entry
or planning features. The Reasons for Unplannable Data option lets you see why you can’t plan on a cell.
When values can’t be changed in a table, the cell will appear grayed out. You can learn more about why data
entry isn’t possible on a cell by selecting Show/Hide Reason for unplannable data from the table action
menu ( ).
For more information about the hint, look for the message in this table.
Hint: Dimension attribute not defined
Description: Add the dimension attribute used in the formula, inverse formula, or its condition to one of
the axes.
Related links: For more information about dimension attributes, refer to Data Entry with Dimension
Attributes [page 2219].

Hint: Post aggregation
Description: No data entry possible on post aggregations. This cell uses the following post aggregations:
<post aggregations>

Hint: Dynamic time filter on calculated account
Description: Data entry on an account with a dynamic time filter is only supported if the referenced
account is not calculated or if the dynamic time filter selects exactly one time member.
Related links: For more information on data entry with dynamic time filters, refer to Entering Values with
Dynamic Time Filters [page 2210].

Hint: Unbooked cell with dynamic time filter
Description: For dynamic time filters, data entry on unbooked cells is not supported. Please select the
corresponding cell in an account that is not navigated.
Related links: For more information on data entry with dynamic time filters, refer to Entering Values with
Dynamic Time Filters [page 2210].

Hint: Unbooked cell with LINK formula
Description: For LINK formulas, data entries are supported on booked cells only.
Related links: For more information about the LINK formula, refer to Link Formula Details [page 903].

Hint: No version
Description: This cell belongs to no version.

Hint: Blending result
Description: Data entries on blended queries are not supported. You can still enter data on the original
query.
Related links: For more information on blending data, refer to Blend Data [page 1066].
Data Entry on a Booked Mixed Cell Where All Booked Leaves Are Locked
Data disaggregation based on data locking is enabled and therefore mixed cells are editable. The cells have a
mixed state because some of their children members are locked and some are open. However, if all the booked
leaves are locked and only unbooked leaves are open, the mixed cell will get an error message because none of
the booked leaves can be changed.
Note
The disaggregation algorithm currently makes a distinction between booked and unbooked data. Because
of this, if all disaggregation candidates are locked due to data locking or invalid due to validation rules,
the data entry can't be processed and an error message appears.
Related Information
Check Validation Rule Results and Warnings for Planning [page 2237]
No need to leave your story to add new dimension members: add them to your table and let the system update
the master data.
Context
Working in a table and suddenly realize that the member that you need doesn't exist? You don't need to leave
your story and go to the model. Simply add the new dimension member to your table (sometimes referred to
as adding a “member on the fly”) and keep on working.
When you define the new member, you supply a description and an ID for it, and you can set the read and write
access for others.
Note
You can't create members for the account, version, or date dimensions, or new measures for a model with
measures.
You can't add members to a table that has advanced or tuple filters.
You can't delete members from a table; you'll need to open the model to delete any members.
You can also add a new member to your table for an existing but unbooked member.
To add such a member for the account, version, or date dimensions, or for level-based hierarchies, the
Unbooked Data setting must be turned off.
Note
You can add an existing member or a new member not yet available in the model (a "member on the fly") to
your table in mass data entry mode.
When you add a member in mass data entry mode and enter data on this member, and you also delete
existing cell values or copy and paste existing values in the same mass data entry session, the yellow
background color won't be applied to the data cell of the added member (or to any data cell changed by
simply entering values) after the data changes are processed.
For more details on the mass data entry mode, see Entering Multiple Values in a Table [page 2204].
Note
You can add an existing member to a table built on an SAP BPC Embedded model in mass data entry mode.
Here's what you need to know:
• After you add an existing member to the table and make data entry on the added member, we suggest
that you don't change the values of its parent members or the values of Totals that it contributes to.
• If you've changed the value of a parent member, we suggest that you don't add an existing member to
the table as its child member and then make data entry on the added member.
Note
For a table built on an SAP BPC Embedded model, we suggest that you don't do the following at the same
time:
The value of the locked parent cell may still be changed after data processing is done.
Note
If you add a member, you can’t make data entry on the cells of the added member that are sourced from
formulas. As a workaround, you can make data entry on the base accounts or measures. Then data entry
on these cells sourced from formulas will work, too. Alternatively, you can first enable Unbooked Data for
the corresponding dimension and make data entry on the member that is now being displayed.
Some settings, such as currency and responsibility, may not be available if the existing dimension does not
have those settings.
Example
There are cases where multiple new rows are added after you add a new member.
The first or outer dimension is Region, and the second or inner dimension is Color.
Region    Color    Value
Germany   Red      4
          Green    5
          Blue     6
Canada    Red      -
          Green    7
          Blue     -
• Adding a member to the outer dimension: when you add a member to the outer dimension, it is not
clear which member of the inner dimension you intend to add data to. For that reason, several new
rows are added to the table, all with unbooked values.
Region    Color    Value
Germany   Red      4
          Green    5
          Blue     6
France    Red      -
          Green    -
          Blue     -
Canada    Red      -
          Green    7
          Blue     -
Procedure
A new line appears. Inner dimension members relative to the member where you trigger the Add Member
functionality (trigger member) are automatically replicated and filled into the new line. Outer dimension
members are replicated but just not shown in the new line by default.
If you trigger the Add Member functionality on an existing new line to create a second new line,
members in the existing new line that correspond to the trigger member used to create the existing
new line and its inner dimension members won’t be replicated and filled into the second new line.
2. To add an existing but unbooked member, type its ID or description in the member cell of the new line
and select from a list of potential matches. Alternatively, select the value help icon on the right of the cell to
open the member selector, where you can select the desired member.
Note
Adding an existing but unbooked member is also supported if your table is built on an SAP BPC
Embedded model. However, you can’t add existing members for dimensions defined in the columns in
such a table.
To add existing members for tables built on SAP BPC Embedded models, you need to apply the
following SAP Note: 3238379 .
3. To add a new member that isn't available in the model, type an ID of your choice in the member cell of the
new line. When you see the message indicating that no existing member matches that ID, press Enter .
Note
Adding a new member not available in the model is not supported if your table is built on an SAP BPC
Embedded model.
4. In the dialog that appears, provide a Description and an ID for the new member.
5. Include names of people that are allowed to have read or write access to the new dimension member.
As the creator, you always have write access to the members you create.
6. Click Apply to create the new member.
Results
A member is added to the table. Inner dimension members are also filled into the new line.
Until you exit the story, you can still edit the details of the new member.
In SAP Analytics Cloud, you can use the planning panel to quickly move values in a table.
Context
The panel can help if you’re working on a task that’s too complex for basic data entry and copying and pasting,
but doesn’t need a structured data action or allocation process.
For example, you might have an unassigned overhead cost that you want to spread to a few different cost
centers, or maybe you need to redistribute sales volumes among different regions without changing the overall
volume. These tasks are difficult with basic spreadsheet applications, but you can handle them quickly with the
planning panel.
You can switch between operation types as you work with the panel.
Options such as custom source values and different driver types let you carry out more scenarios with the
panel. You can also allocate along multiple dimensions in one step, and pick target cells from different hierarchy
levels.
You can also interact with the table while the planning panel is open by collapsing or expanding dimension
members, setting the drill level of a dimension, showing or hiding unbooked members, and showing or hiding
rows and columns.
If you like to use your keyboard for planning tasks, you can use shortcuts to navigate in the grid and do all your
data entry and distribution without the mouse. See Keyboard Shortcut List for Tables [page 1395] for details.
You can see previews of the new values as you work, and then apply all the changes together when you finish
the operation.
Context
Distribute operations can help when you need to assign a value to a group of cells, or distribute a value from a
source cell to a group of cells.
For example, if your travel expenses are higher than budgeted partway through the year, you might want to
assign the variance to the upcoming forecast periods. This way, you’ll set new targets that reduce your costs
enough to meet the budget.
1. Select a source cell that contains the amount to distribute. It doesn’t need to be an editable cell.
(If you want to type your own source value, you can do that after opening the panel.)
In this case, you’ll select the variance for travel expenses, which is calculated by a formula.
2. Open the planning panel by right-clicking the cell and selecting Distribute Value.
The Planning Panel opens, showing the location and value of the source cell.
3. Configure the source value as necessary with the following options:
• Typing a new source value: This lets you assign any amount without referencing a source cell.
• Lock the source cell: If you have a value booked directly to a parent member, this option lets you spread
it to its leaf members. It’s enabled by default in this case. See Entering Values with Multiple Hierarchies
[page 2211] for details.
• (Select source cell): Select to pick a different source cell from the table.
In this example, Book as additional amount is mandatory because the variance is calculated by a formula
and isn’t editable.
4. Choose your target cells.
You might also see some recommendations for common operations that you can set up with one click.
Otherwise, activate the (Add Target) button next to Where to? and select cells in the grid. You can also
use the keyboard to navigate through the grid and type values into your target cells.
The target area can include cells on different levels of a hierarchy, from different members of multiple
dimensions, and from different measures.
If you’re choosing different accounts, you won’t be able to select account types that are incompatible with
the source or with other targets, such as income and expenses.
Note
• If you select a group of dependent cells, such as EMEA, Germany, and Frankfurt, or Gross Margin
and Total Revenue, you’ll only be able to enter values for one of them. Cells that are dependent on
the source cell are greyed out and can’t be picked as targets unless Book as additional amount is
selected.
• The planning panel doesn’t carry out currency conversion when your operation involves source
and target cells with different currencies. Only the numeric values are distributed. In a model with
measures, you can use conversion steps in data actions to copy data with currency conversion. For
details, see Convert Currencies in a Data Action [page 2342].
In this case, you’ll use Append to add the variance to the existing forecast.
6. Use the Driver list to choose how to set the target cell values.
• Input Values: Add the exact target values, either in the table or in the panel.
• Input Weights: Set proportional weights for each cell. For example, if you want to get weights from a
different time period, you could copy and paste those values from the table.
• Equally: Divide the source value equally among the cells.
• Proportionally: Use the existing proportions between the cells.
For distributing variance, it makes sense to divide the values proportionally between the remaining
quarters.
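The arithmetic behind the Proportionally driver can be sketched as follows (an assumed simplification in Python, not SAP code; the quarter names and amounts are hypothetical):

```python
# Sketch of the Proportionally driver: the source amount is split among
# the targets in proportion to their existing values.
def distribute_proportionally(amount, targets):
    total = sum(targets.values())
    return {name: amount * value / total for name, value in targets.items()}

# Spread a 30,000 variance over Q3 and Q4 forecasts of 40,000 and 20,000:
print(distribute_proportionally(30_000, {"Q3": 40_000, "Q4": 20_000}))
# Q3 receives 20,000 and Q4 receives 10,000, keeping the 2:1 proportion.
```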
7. Set values for the target cells.
If you chose Input Values or Input Weights, you can now set the target values by typing or copying and
pasting. For Input Values, you can also change values directly in the table. Simple formulas such as +10%
are available in the table or panel, too.
When you select a target, the table will highlight the cell. You can also hover over the cell coordinate to see
its members. If you need to remove any targets, use the (Remove target) button.
Note
If your operation exceeds the number of decimal places or value range of a target measure, you’ll see
an info icon in the panel and some values might be rounded off when booked to the target cell. For
details, see Limits on Value Ranges and Decimal Places for Measures [page 2168].
You can see a preview of the values in the table, and the percentage distribution next to the targets in the
panel. If you used input values that don’t add up to the total amount, you’ll see a warning beneath the
targets. You can still apply the operation in this case, though.
The table is updated to show your changes. The forecast travel expenses are reduced, bringing the
variance back to 0.
Context
Redistribute operations in the planning panel let you change the proportions among a group of cells.
Say you’ve just copied values for marketing expenses from 2020 to your 2021 plan, but as part of a new
marketing strategy, you want to change how the expenses are spread to different promotion channels.
1. Select a group of source cells that includes the entire amount you want to redistribute.
The source area can include cells on different levels of a hierarchy, or from different members of multiple
dimensions and measures.
Note
• If you select a group of dependent cells, such as EMEA, Germany, and Frankfurt, or Gross Margin
and Total Revenue, you’ll only be able to enter values for one of them.
• The planning panel doesn’t carry out currency conversion when your operation involves cells with
different currencies. Only the numeric values are redistributed. In a model with measures, you
can use conversion steps in data actions to copy data with currency conversion. For details, see
Convert Currencies in a Data Action [page 2342].
For example, you can select the marketing expenses account for different promotion channels in 2021,
such as Online, Printed Media, TV, and Radio.
2. Open the planning panel by right-clicking the cells and selecting Redistribute Values.
The Planning Panel opens. In the source area, it shows the aggregate value of the cells that you selected.
To change the target cells, activate the (Add target) button and select or deselect cells in the grid.
Use the Driver list to choose how to set the target cell values:
• Input Values: Add the exact target values, either in the table or in the panel.
• Input Weights: Set proportional weights for each cell. For example, if you want to get weights from a
different time period, you could copy and paste those values from the table.
• Equally: Divide the source value equally among the cells.
• Proportionally: Use the existing proportions between the cells.
You just want to set out a ratio between the different promotion channels, so you choose Input Weights.
5. Set values for the target cells.
If you chose Input Values or Input Weights, you can now set the target values by typing or copying and
pasting. For Input Values, you can also change values directly in the table. Simple formulas such as +10%
are available in the table or panel, too.
When you select a target, the table will highlight the cell. You can also hover over the cell coordinate to see
its members. If you need to remove any targets, use the (Remove target) button.
Note
If your operation exceeds the number of decimal places or value range of a target measure, you’ll see
an info icon in the panel and some values might be rounded off when booked to the target cell. For
details, see Limits on Value Ranges and Decimal Places for Measures [page 2168].
You can see a preview of the values in the table, and the percentage distribution next to the targets in the
panel. If you used input values that don’t add up to the total amount, you’ll see a warning beneath the
targets. You can still apply the operation in this case, though.
As a planning user, you need to understand how validation warnings work to successfully navigate the data
entry process in a table based on the validation rules defined for the underlying planning model.
After defining validation rules for a model, in a story based on this model, planners can only enter data for the
valid dimension member combinations specified in the validation rule.
If the planning users add all the dimensions that are defined in at least one of the rules to the table, all invalid
data that couldn't pass the validation rules will be marked with validation warnings. Parent-level data will be
marked as invalid only if all the base-cell data are invalid. Unbooked cells that are invalid are disabled for data
entry.
If the planning users disaggregate the values in such a story, booked data have higher priority than unbooked
data:
• If the planning users disaggregate a value to both valid and invalid booked data, the value will only be
disaggregated to the valid booked cells.
• If the planning users disaggregate a value to both valid and invalid unbooked data, the value will only be
disaggregated to the valid unbooked cells.
• If the planning users disaggregate a value to invalid booked data and valid unbooked data, the value
will be disaggregated to the booked invalid cells and an error message will inform the users that the
disaggregation failed.
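The three rules above boil down to: valid booked cells win; otherwise only valid unbooked cells receive data, unless invalid booked cells are in the way. A rough Python sketch, where the cell representation and function name are assumptions for illustration:

```python
def disaggregation_targets(cells):
    """cells: list of (valid, booked) flags for each target cell.

    Returns the indices of cells that receive a share of the value;
    raises if the value would land only on invalid booked cells.
    """
    valid_booked = [i for i, (v, b) in enumerate(cells) if v and b]
    if valid_booked:
        return valid_booked  # booked data has priority; invalid booked cells are skipped
    invalid_booked = [i for i, (v, b) in enumerate(cells) if not v and b]
    if invalid_booked:
        # the value lands on invalid booked cells and the user sees an error
        raise ValueError("disaggregation failed: only invalid booked cells available")
    return [i for i, (v, b) in enumerate(cells) if v and not b]
```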
When entering data into a table, validation warnings help you to quickly identify which cells are invalid for data
entry due to validation rules that have been defined for the underlying planning model.
Make sure you've added all dimensions to the table axes (either to the rows or to the columns) that are used in
at least one of the validation rules defined for this model.
Invalid cells will be marked with a warning icon in all versions (except the versions of the Actual category), as
you can see in this example:
In these invalid cells, you can delete the values that had been added manually or by a data action before the
validation rule was created, and you can enter plan data for your private version. However, your data entries will
be reverted when you publish your version.
If you don't want to have the warnings displayed, you can turn them off from the table actions menu:
Choose Table Functions in the menu for body cells or choose (More Actions) and then Show/Hide
Validation Warning .
The validation status of parent nodes depends on the validation status of all child members. Independent of
whether the parent node is collapsed or expanded, or if there are filters applied and some child members aren't
visible, the system calculates the validation status of parent nodes based on all its child members available in
the master data.
This means that parent nodes are input-disabled only if all available members (regardless of having booked
or unbooked data) are marked as invalid (booked data) or are input-disabled (unbooked data). Input-disabled
cells are displayed with a grey dash character (-).
Unbooked data cells that are invalid for data entry are input-disabled; a grey dash character (-) indicates that
you can't enter data.
• If one member of a dimension is valid, the unassigned members are also valid for data entry.
• If all assigned members of a dimension are invalid, the unassigned members are invalid, as well.
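The parent-node and unassigned-member rules can be condensed into two small checks. This is a sketch with assumed names, not SAC's implementation:

```python
def parent_input_disabled(child_validity):
    """child_validity: validity flag for every child member in the master data,
    regardless of whether the child is booked, filtered out, or collapsed.
    A parent is input-disabled only if all of its children are invalid."""
    return all(not valid for valid in child_validity)

def unassigned_members_valid(assigned_validity):
    """Unassigned members are valid for data entry iff at least one
    assigned member of the dimension is valid."""
    return any(assigned_validity)
```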
Related Information
Define Valid Member Combinations for Planning Using Validation Rules [page 2547]
For models with different currencies in SAP Analytics Cloud, you have the flexibility to plan on any available
currency. However, most planning operations don’t apply currency conversion when copying data across
currencies or disaggregating it to multiple currencies, so messages will let you know when this happens.
If you plan on the base currency values, you won't be able to enter values outside the cross calculations
showing the base currencies.
If you plan on a currency conversion, all values stored in the version must also be converted to that
currency, even values that are not currently visible, and corresponding rates must be maintained. Other cross
calculations won't show values until you publish the version.
Whenever possible, currency conversion when creating private versions uses direct exchange rates. Publishing
these private versions performs currency conversion with inverted direct exchange rates stored in the rates
table to keep conversion in both directions as stable as possible. Adjustments of exchange rates in the rates
table during the lifetime of a private version, however, can lead to unexpected conversion effects during
publishing. An example is a rates table with a rate granularity of quarters being extended with additional rates
for all months.
For more information about copying and publishing versions, see Create, Publish, and Manage Versions of
Planning Data [page 2170].
Models with measures offer a couple of advantages for planning with currency conversion:
• You don’t have to pick a single planning currency when you start planning on a version. You can enter data
on base measures or currency conversion measures and immediately see the updated values for the other
dependent measures.
• You can copy data across different currencies using a currency conversion step in a data action. For details,
see Convert Currencies in a Data Action [page 2342].
• Models no longer limit the number of conversions that you can analyze and plan on in a story or analytic
application.
Planning on conversion measures performs planning operations such as disaggregation on converted values.
For example, a data entry on a conversion measure first performs the conversion of the affected data region
with the same exchange rates as reporting, followed by the value distribution based on converted weights,
followed by a currency conversion using the calculated inverse rates of reporting (see section Example: A data
entry on a conversion measure). Similarly, copying and pasting a data region performs a conversion upfront,
and performs an inverse conversion using the target context if the target is a conversion measure as well.
Currency conversion is applied as late as possible during the reporting process. Therefore, currency conversion
most often is performed on aggregated values. When planning is involved, however, all affected member
combinations must be converted separately. Depending on the number of decimal places of the participating
measures and the number of affected member combinations, visible rounding issues can occur. You may see
this both in classic account models when creating a private version with currency conversion, as well as in
conversion measures in a model with measures.
You might see these rounding issues more often with very high or very low conversion rates (for example: 1 USD
= 150 JPY). Another factor is base measures with a small number of decimal places (for example: a base
measure in USD with 2 decimal places).
Based on the exchange rate USD – JPY 150, a base measure in USD with two decimal places, and a conversion
measure in JPY with two decimal places, a data entry of 1000 is performed on the conversion measure. Assuming
that the data entry is performed on a single member combination, the inverse reporting rate 1/150 is used to
store 6.66 USD (truncate(1000 / 150, 2)) in the base measure. The updated grid, however, will report 999 JPY
for the conversion measure (as 150 * 6.66 = 999).
If you adjust the assumption so that the data entry is performed on 1000 member combinations, then every
member combination converts 1 JPY and therefore stores 0.00 USD, as 1/150 is less than 1 cent. Reporting will
now show a much larger difference from the expected result: instead of 1000 JPY, 0 JPY will be reported.
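The arithmetic of this example can be reproduced directly. This is a sketch of the described truncation behavior; SAC's internal rounding may differ in detail:

```python
def truncate(value, decimals):
    # drop all digits beyond the given number of decimal places
    factor = 10 ** decimals
    return int(value * factor) / factor

rate = 150.0  # 1 USD = 150 JPY

# Data entry of 1000 JPY on a single member combination:
stored_usd = truncate(1000.0 / rate, 2)  # 6.66 USD via the inverse rate 1/150
reported_jpy = stored_usd * rate         # reported as 999 JPY instead of 1000

# The same entry spread over 1000 member combinations of 1 JPY each:
per_member_usd = truncate(1.0 / rate, 2)  # 0.00 USD, since 1/150 is below 1 cent
reported_total_jpy = per_member_usd * rate * 1000  # 0 JPY instead of 1000
```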
Except with a conversion step in a data action, source values aren't converted when copying or disaggregating
data across different currencies. For example:
• If you copy a value of 500 USD to a cell that displays a value in Euros, the resulting value is 500 Euros.
• If you enter 1 million to an unbooked account for North America while planning in local currencies, it could
result in values of 500,000 CAD for Canada and 500,000 USD for the United States. The numerical value is
divided evenly, but not the monetary value.
In cases such as this one, a message appears to let you know that data has been entered in multiple different
currencies. To avoid this, you can plan on a single fixed currency instead, and the values will be properly
disaggregated and then converted to the local currencies.
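Both cases boil down to copying or splitting the numeric value while ignoring the currency context. An illustrative sketch, not SAC internals:

```python
# Copying across currencies copies the number, not the monetary amount:
source = {"value": 500.0, "currency": "USD"}
copied = {"value": source["value"], "currency": "EUR"}  # becomes 500 EUR, unconverted

# Disaggregating 1,000,000 equally to two local-currency members:
members = ["Canada (CAD)", "United States (USD)"]
shares = {m: 1_000_000 / len(members) for m in members}
# Each member gets the number 500,000.0 - numerically equal, monetarily not
```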
Note
When a parent member has a booked value and child members that display multiple currencies, the parent
member does not support planning operations. Data cannot be disaggregated to multiple currencies with
booked values.
Messages are also displayed when you create an allocation process or a data action that copies data between
different currencies. For example:
A data entry on the conversion measure Global Currency and the Region World distributes with equal weights
to Japan and USA. The currency conversion happens after the distribution. Similarly, in a data entry on booked
values the distribution is based on the weights of the converted values, followed by the inverse currency
conversion and the storage of the distributed values in the base measure (here Local Currency).
In a copy and paste where the conversion measure Global Currency is copied from 2022 to 2023, all matching
member combinations are first converted to USD before they are copied to 2023, where the values are
converted back to be stored in the measure Local Currency. As the exchange rates for 2022 and 2023 changed
from 120 to 150, the effect is that different values are stored for Japan in 2022 and 2023.
If the source measure is not a conversion measure no conversion is performed before the copy. Accordingly, no
conversion is performed after the copy if the target measure is not a conversion measure.
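With the rates mentioned here (120 JPY per USD in 2022, 150 in 2023), the copy can be sketched as follows; the helper and sample values are assumptions for illustration:

```python
rates = {2022: 120.0, 2023: 150.0}  # JPY per USD

def copy_conversion_measure(local_jpy, source_year, target_year):
    usd = local_jpy / rates[source_year]  # convert to the conversion currency first
    return usd * rates[target_year]       # inverse conversion in the target context

local_2022 = 12_000.0  # e.g. 12,000 JPY booked for Japan in 2022
local_2023 = copy_conversion_measure(local_2022, 2022, 2023)
# The Global Currency value stays at 100 USD, but the stored Local Currency
# value for Japan differs: 12,000 JPY in 2022 vs. 15,000 JPY in 2023.
```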
Your organization uses a currency-enabled planning model to perform planning tasks, including forecasting.
Each quarter, you and your colleagues are responsible for creating a new forecast for the year by copying the
forecast from the previous quarter, updating the figures based on changing assumptions, and publishing the
forecast as a new public version. Currently, you have a forecast for the year created in Q1 with the following
data, displayed in a fixed USD currency conversion:
(Table: the Forecast category, version ForecastQ1, with accounts in the rows and organizations in the columns;
for example, the EU organization shows $3 300 000.)
To do this, you can create a new rate version to apply to ForecastQ2. The ForecastQ1 version uses the default
rates for the Forecast category. You can copy these rates to a new rate version for ForecastQ2 and then update
the rates for Q3 and Q4.
1. You select Browse Currencies and choose the currency table for the model.
2. You select next to the Category column header in the currency table, and select Forecast from the Filter
section to show only the Forecast rates. Note that there is no Rate Version, because these are generic rates
for the Forecast category.
3. To quickly create a new rate version, you copy and paste all of these cells below the existing data. To specify
a rate version, set the Category to Specific for the first of the new cells and then select the bottom right
corner of the cell and drag it to the other rate version cells to copy the value. Next, add a name in the Rate
Version column, such as Q2Rate. You can then update the exchange rates for the new rate version by typing
values in the Rate column, and then save the Currency table.
4. You can now return to editing the story to create a new forecast version that uses the Q2Rate exchange
rates. For a model with measures, you can just copy the version and select the Q2Rate from the Rate
Version list.
For a classic account model, the table needs to display local currencies (that is, the unconverted base
currency data in your model). To do this, you add the Cross Calculations dimension to the table by
selecting Add Measures/Dimensions under the Columns heading of the Builder pane, and choosing Cross
Calculations. You then select (Manage Filters) next to the Cross Calculations dimension and choose
Local Currency as well as Default Currency (USD), and select OK.
5. From the Tools menu, you select (Version Management) and choose (Copy) next to ForecastQ1. You
name the version ForecastQ2 and choose Q2Rate in the Rate Version list. (For a classic account model, you
need to select Local Currency in the Change conversion list first).
You can use the same process to continue adding new forecast versions and adjusting exchange rates as
necessary. You can also apply rate versions to versions in different categories, or create currency conversions
using the Calculation Editor to apply specific rate versions.
Related Information
Assign and Distribute Values with the Planning Panel [page 2233]
Displaying Currencies in Tables [page 1559]
Currency [page 670]
When working with planning models in SAP Analytics Cloud, you can run predictive time series forecasts on
your data within a table.
Prerequisites
You will need planning rights and a planning license to run a predictive time series forecast. You'll also need
access to a table with planning model data and a date dimension in the columns.
Context
A predictive time-series forecast runs an algorithm on historical data to predict future values for a specific
measure in a planning model. The forecasted values can be later used in the planning process. The prediction
is only applied to the specific selected measure; however, booked data will be overwritten if you choose to run a
forecast over a cell containing historical data. Other data in the version will not be impacted.
While running a predictive time series forecast, you can determine the time range for the prediction and the
past time periods to use as historical data.
Note
Predictive time series forecasts aren't supported in the following cases:
• calculated measures
• formulas
• blended datasets
• local currencies
• on models with multiple time dimensions
• in tables with locked cells
• on measures with aggregation types Label or None
• on time granularities less than monthly
Procedure
1. Select a single table cell that can be edited. From the toolbar, select Tools Predictive Forecast .
2. Select an option under Granularity to specify the time granularity for the predictive forecast. Use the
Predict From field to specify the starting point for the predictive forecast calculation. Under Predict To,
specify a time period to end the calculation.
Restriction
By default, the lowest Granularity is Month. For fiscal years, Period is the lowest Granularity. Daily or
weekly granularity are not supported.
3. Under Advanced Options, specify options on how you want to run the predictive forecast.
a. Under Algorithm, select one of the following options to create the forecast:
• Automatic Forecast
• Linear Regression
• Triple Exponential Smoothing
b. If you do not want to have any negative values included in your results, under Output Values select
Positive Only.
c. Under Use Data From, specify on what historical data you want to run the algorithm.
d. Select which Version to use for the forecast input data.
By default the Actual version is used.
4. When you are ready to run the predictive forecast, select Preview.
The time series forecast is displayed in the Preview Predictive Forecast dialog. The preview contains the
past actual values together with predicted values displayed as a graph and a table row. Upper and lower
limits for the confidence interval are also provided for the graph. The table rows contain the predicted
values that will be added to the table. A dashed line (originating from the first point of the forecast)
represents the predictive model applied to past periods.
Note
You can preview the predicted values for booked cells after running a time series forecast.
For cells containing values, the cell's existing weighting is respected. For unbooked cells, the weighting of
the reference period is used.
5. To populate the table with the forecasted values, select OK.
A message indicates that the forecasted values were added to the table.
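As a rough illustration of what the Linear Regression option conceptually does, a least-squares trend is fitted to the history and extrapolated forward. This is a sketch, not SAC's actual implementation:

```python
def linear_forecast(history, horizon):
    """Fit y = a + b*t by least squares over the historical points,
    then extrapolate `horizon` future periods."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return [a + b * (n + k) for k in range(horizon)]

# A perfectly linear history continues its trend:
print(linear_forecast([10, 12, 14, 16], 2))  # → [18.0, 20.0]
```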
Related Information
In SAP Analytics Cloud stories, you can create input tasks that request colleagues to provide planning data in
your story.
Input tasks are used to gather feedback or additional information from colleagues, like input about planning
data from managers of different regions, product groups, or cost centers during your planning process. They
let you keep track of the planning process and make sure that everyone has access to the right data. Once
you've created the task, it is sent to the respective colleagues, who add the requested information and then
return the task.
Note
In optimized story experience, input tasks are not supported. We recommend using calendar tasks
instead.
For more information, see From Input Tasks to Calendar Tasks [page 2578].
To create an input task, you must have a table that meets the following criteria:
Example
For example, the model may have an Organization dimension with users responsible for different
regions or departments, and a Product dimension with users responsible for different groups of
products.
Your table can include the Cross Calculations dimension and restricted measure calculations. The assignee fills
in numbers for the restricted calculations, including adding numbers to unbooked cells.
If you include a value driver tree based on the same model as the table, the assignee can also provide input by
performing simulations on the value driver tree.
When your model has multiple currencies, you also need to consider the following criteria for your input task:
• The private version that you create uses the same currency as your source version.
Create an input task for your story to gather planning data from colleagues.
Prerequisites
Make sure you have the Maintain permission for the underlying planning model. You need a table with at least
one active private version before you can create an input task. Also, if your model has multiple currencies, you
can use only one currency when creating the input task.
Note
Any pages that you have hidden will not be included in the input task.
Context
When creating an input task, you and the people you assign to the task work on a private version of the story.
The original story won't be affected; changed data will only be visible to others after the input task has been
completed and you've published the private version.
Procedure
4. To set an end date, choose calendar and pick a date that is in the future.
Note
You can't set a start date; the input task starts immediately once you send it to the assignee(s).
5. Optional: Decide whether to set the task to status Canceled on the end date: If so, select Automatically
cancel the task on the planned end date.
6. Optional: Create a reminder for the task:
Note
Only dimensions that have the Responsible option set to on in the Dimension Settings are available.
10. To assign the task, select assignees from the Available Assignees column.
The available assignees are users who are identified in the model as responsible for a dimension member.
Tip
For a large hierarchy, you can select to search for members or assignees, and you can reduce the
long list of members by turning on the option for Show members with assignees only.
To assign the task to a different assignee, or if there is no responsible person defined for a dimension
member in the model, do the following:
1. Find the current assignee in the Available Assignees list, or select the dimension member that's
To reset all substitute assignees back to the default assignees, on the Summary page, select Reset to All
Original Assignees.
11. Optional: Select Assignee can see other assignees.
12. Select Send.
Results
The task is sent to the assignees and an Input Form is created. The Summary page is added to your story,
showing the current status of the input tasks. You can see at a glance whether assignees have accepted the
tasks or made any progress on them.
Note
Only you as the creator have the overview of the overall status.
You can also review the input task in the Calendar. The calendar details panel of the input task shows dates,
people assigned to the task, and a Context section that lets you see which model is used for the task.
Next Steps
If you need to change an assignee after the task has started, do the following:
1. From your Selected Assignees list, beside the assignee you wish to change, select Change User.
2. In the dialog, select a new user and then select OK.
If needed, you can also remove an assignee by clicking Remove User. If all assignees of the task are
removed, the input task is set to Canceled.
As the creator of an input task, you also act as approver: you mark an input task as completed or provide other
feedback based on how the task is returned to you.
Input tasks are sent to assignees. The assignees make changes to the data, or they decline the task. In both
situations, the task comes back to you and you mark it as completed.
The Summary page for a task shows the current approval status for input tasks.
• Default view: see a high-level view of the number of assignees for each type of status (In Progress,
Open, Declined, and so on).
After an assignee completes an input task, you receive a notification to that effect.
select (Approve).
Tip
If you have received feedback from all assignees and are satisfied with it, select Approve All.
If you decide that the feedback is inaccurate or incomplete, you can select (Reject). The system sends the
input task back to the assignee.
An assignee can choose to Decline the input task. When that happens, the approver has the option to exempt
that assignee (select (Approve)), or to (Reject) the task to the assignee for a response.
When the approver exempts the assignee, the status changes to Canceled, and the assignee no longer has
permission to view the input task details.
Sending a Reminder
You can send a reminder to all assignees who have not yet completed the task.
Changing an Assignee
You can change an assignee while the task is in progress:
1. Find the current assignee in the Selected Assignees list and choose (Change User).
2. From the Change User dialog, select the substitute.
3. Select OK.
All dimensions that were assigned to the original person are now assigned to the substitute.
Removing an Assignee
You can remove an assignee from the task – even if the task is in progress – as long as that assignee has not
already submitted their results.
To remove the assignee, find the assignee in the Selected Assignees list and choose (Remove User). The
assignee is removed; they will also receive a notification that they have been removed.
If all assignees of the task are removed, the input task is set to Canceled.
Once you have dealt with the input from all assignees, either by approving or exempting, and all assigned tasks
are accomplished, the data that has been entered by the assignees is transmitted to the original story. After
you've published your private version of the story, the data will be public for everyone who can see the story.
You can cancel an input task at any time: Click Cancel and the input task will be set to Canceled.
Related Information
When an input task is assigned to you, learn how to work on it and enter your planning data, or how you can
decline the input task.
Context
You received a notification about an input task that has been assigned to you. You have the option to work on
the input task or decline it.
If you decline the input task, the approver can choose to exempt you from that task. When that happens, you
will no longer have access to the input task details.
Procedure
The approver can reopen the input task and send it back to you.
3. To work on the input task, select your organization from Active members.
To undo or redo changes, from the toolbar, select (History). In the history panel, select the revision
that you want to keep.
2. After you have completed working with the task, select the Summary page.
3. Select Submit.
4. (Optional) Add a comment.
5. Select Submit.
Once an input task has been completed, you can view the final details.
Related Information
After you’ve set up a data action or multi action in SAP Analytics Cloud, add a planning trigger to your stories
or analytic applications to let users run it.
Prerequisites
• Permission to edit the story or analytic application where you want to add the planning action trigger.
• Permission to view the models.
For more information, see Permissions and Prerequisites for Multi Actions [page 2349].
If the target version is a public version, the data action or multi action will only run on data within the
planning area (also known as the data in Edit Mode). If the public version was put into edit mode using the
recommended planning area, the data action or multi action will only apply to data within the recommended
planning area. For more information about editing public versions, see Planning on Public Versions [page 2188].
Note
You can also run BPC planning sequences with a planning trigger. For details, see Run Planning Sequences
from BPC [page 2553].
A planning trigger lets users run a structured operation that can include copying and pasting data, running
allocations or scripted calculations, and publishing versions. For complex or repetitive planning tasks, it can
speed up the work and help avoid mistakes.
Planning triggers can run a data action, a multi action, or a BPC planning sequence. This page focuses on data
actions and multi actions. If you still need to create a data action or multi action, refer to the following pages:
In addition to setting up a planning trigger, there are other options to run data actions and multi actions:
• Scheduling data actions in the calendar: Schedule Data Actions in the Calendar [page 2265]
• Scheduling multi actions in the calendar: Schedule Multi Actions in the Calendar [page 2272]
• Adding data actions to analytic application scripts: Use Data Actions Technical Objects [page 1708]
• Adding multi actions to analytic application scripts: Use Multi Actions Technical Objects [page 1711]
If you are setting up your planning trigger using the optimized story experience in stories, you can make
additional formatting choices with the trigger. In the Style panel, you can choose what colors to use for the
trigger button and icon, upload your own icon, and choose the font, style, size, and color of your trigger text.
In this video, you will explore options for running data actions and multi actions using planning triggers or script
objects in stories and analytic applications, and using events in the Calendar.
Procedure
Story
1. On a canvas or responsive page, select (Add) Planning Trigger .
Then set the type of trigger to determine what type of operation the trigger can run:
• Data Action Trigger
• Multi Action Trigger
• BPC Planning Sequence Trigger: For details, refer to Run Planning Sequences from BPC [page 2553].
2. To help planners understand what the trigger does, type a label and a description for it.
3. Choose whether to always run in the background.
If you leave Always Run in Background deselected, users can choose to run the data action or multi
action in the background once it starts, or they can wait for it to finish before returning to the story
or analytic application.
4. Select the data action or multi action to run.
Note that you will only be able to select a data action or multi action that you have Read permission for.
5. If you're running a data action or multi action on a classic account model that uses currency conversion,
choose an option from the Currency Conversion list. The data action or multi action will be executed on the
currency that you choose.
If the target version is a private version or a public version in edit mode, this setting has no effect. Instead,
the data action is applied to the currency that you’re working with in that version. For more information,
see Plan with Currency Conversion [page 2239].
6. For data actions, if the target version is a public version and you want the data action to run on the
recommended planning area (unless otherwise defined in edit mode), select Use recommended planning
area if target version isn’t in edit mode.
If the target version is a public version, the data action will run on the data in edit mode. If edit mode has
not been activated already, the data action will use the recommended planning area.
If you want to run the data action on all the version data, you will need to select Start Edit Mode on the
target version in the Version Management panel, and then choose All Version Data.
7. For a data action trigger, under Publish to Target Version after Execution, choose the desired behavior for
publishing your changes:
• Do not publish: Changes will not be published and the version stays in public edit mode. These changes
can be published manually or published as part of another process.
• Publish and fail if there are warnings: All of your changes will automatically be published. If any
restrictions such as data locks apply to the data you’re trying to publish, then the publish will fail.
• Publish and ignore warnings: All of your changes that are not affected by restrictions will automatically
be published. If any restrictions such as data locks apply to some of the data you’re trying to publish,
then the affected data will be discarded.
Note
• All of your unpublished changes to the target version will be published, even if they weren’t part of
the data action.
• This option only applies to public versions. If the data action runs on a private version or on a BPC
write-back model, you’ll need to publish the data manually.
Setting Options
From the Value list, select how you want to set the value:
• Member Selection: Choose values from a list.
• Default Value: Use the default values set in the data action or multi action, if any are available.
• Story Filter: Link the values to a compatible story filter.
• Input Control: Link the values to a compatible calculation input control or page filter.
If there is a problem with the parameter values, a message will let you know how to fix it. You may need to
select a different value or filter, or open the data action or multi action to see what's causing the problem.
For more information, see Add Parameters to Your Data Actions and Multi Actions [page 2338].
Results
When viewing or editing a story, or when viewing an analytic application, users can select the trigger icon to run
the multi action. For more information, refer to Run Data Actions, Multi Actions, and Allocations [page 2260].
If you selected Fixed Value as the input type for all prompts, the data action runs immediately. For Prompt
selections, the user can select members and choose Run.
After starting the data action, you'll return to your story or analytic application. The Notifications list will show
the data action's progress, and a message will appear when it's complete. In the meantime, you can keep
working with your story or analytic application. You'll have to wait until the data action finishes to make changes
to the same version though, and you may need to refresh to see the results.
Unsupported Cases for Linking Parameter Values to Input Controls and Filters
Values from story filters and input controls aren’t supported in the following cases:
• The data action or multi action trigger is added to an analytic application. (Automatic data action tasks
also don't support this feature.)
• The available members list includes the All Members selection. To avoid this warning, select Edit Filter…
or Edit Input Control and deselect All Members, or deselect at least one member. (Deselecting the All
checkbox in the list of selected members won’t fix the problem.) Note that in the data action trigger, the
All Members selection is allowed if the parameter in question has had its Allow "All Members" checkbox
selected in the data action designer.
• Range selections, advanced filters, and filters that exclude members.
• Calculation input controls and page filters on a different page than the planning trigger.
• Calculation input controls, page filters, and story filters with more than 100 members.
• Measure and account input controls.
Related Information
If data actions, multi actions, or allocations are available in an SAP Analytics Cloud story or analytic
application, you can run them to carry out complex or repetitive planning operations more quickly and
accurately.
Context
Data actions, multi actions, and allocation processes are all structured planning operations. They form a big
part of the business logic for planning – that is, how data is created, stored, and changed during the planning
process.
Structured planning processes are set up by a modeler or admin (or sometimes a planner reporter for
allocation processes). They involve a sequence of changes to planning model data, such as copying data,
running scripted calculations, doing driver-based allocations, and publishing versions.
Planners can still have full control over the data with manual data entry, but structured operations make it
easier to carry out complex and repetitive tasks.
For details about each type of operation, refer to the following pages:
If the target version is a public version, the data action or multi action will only run on data within the
planning area (also known as the data in Edit Mode). If the public version was put into edit mode using the
recommended planning area, the data action or multi action will only apply to data within the recommended
planning area. For more information about editing public versions, see Planning on Public Versions [page 2188].
Prerequisites
The user must have the Execute permission for the data action.
If data access control is enabled for the dimension members you want to perform the data action on, you also
need to have Write access to corresponding dimension members to change any data. For more information,
refer to Set Up Data Access Control [page 803].
Context
Besides using a data action trigger, there are a couple of other options for running data actions:
• Scheduling data actions in the calendar: Schedule Data Actions in the Calendar [page 2265]
• Adding data actions to analytic application scripts: Use Data Actions Technical Objects [page 1708]
Procedure
1. Open the story or analytic application with the trigger for the data action.
2. Select the trigger icon to run it.
3. If there are prompts, set the members or values for them and select Run.
4. Check any warning or error messages and make changes accordingly. You might get a warning or error
message in the following situations:
• If there is a problem with the parameter values and you need to fix them.
Results
Running in Background
If the data action is set to always run in the background, or if you select Run in Background after starting it,
you'll return to your story or analytic application. A message appears when it’s complete, and you can also
check the Notifications list for the results. In the meantime, you can keep working with your story or
analytic application. You'll have to wait until the data action finishes to make changes to the same version
though, and you may need to refresh to see the results.
Prerequisites
To run a multi action, you’ll need access to a story or analytic application with a trigger for it. A set of
permissions and access rights for the multi action and the model data are also required. For more information,
refer to Automate a Workflow Using Multi Actions [page 2347].
To publish changes from a multi action, you’ll need additional permissions. For details, refer to Create, Publish,
and Manage Versions of Planning Data [page 2170]
Context
A multi action is a sequence of data actions, predictive steps, and publishing steps that can run on one or more
versions and models. For more information, refer to Automate a Workflow Using Multi Actions [page 2347]. You
can run it with a planning trigger in a story or analytic application.
You can also run a multi action by scheduling it in your Calendar. For more information, refer to Schedule Multi
Actions in the Calendar [page 2272].
1. Open the story or analytic application with the trigger for the multi action.
2. Select the trigger icon to run it.
3. If there are prompts, set the members or values for them and select Run.
Note
Some parameter values might prevent you from running the multi action. If there are any problems with
the parameter values, a message will let you know how to fix them.
Results
Running in Background
If the multi action is set to always run in the background, or if you select Run in Background after starting it,
you'll return to your story or analytic application. A message appears when it’s complete, and you can also
check the Notifications list for the results. In the meantime, you can keep working with your story or
analytic application. You'll have to wait until the multi action finishes to make changes to the same version
though, and you may need to refresh to see the results.
Incomplete Steps
In some cases, one of the multi action steps may not run successfully. Changes from the previous steps will still
take effect, but the multi action won’t proceed to the following steps.
• A version management step can’t run: You’ll get a message and notification with the version and model
name.
• A data action step can’t run: You’ll get a message and notification with the data action name. You can check
the data action monitor for details.
• A data action step runs too long and times out: The multi action won’t be able to complete. If your multi
action is taking longer than expected to run, check the data action monitor.
For more information about the data action monitor, refer to Monitor Data Actions [page 2344]. If you know the
name of the multi action, you can search for it to check the status of the data actions involved. (Note that its
name can be different from the label on the multi action trigger.)
Final Status
After running, the multi action will have a status of successful, failed, or executed with a warning. A multi action
is executed with a warning if any of the steps have warnings in them.
Context
Once an allocation process is set up, users with a planning license can run it in a story or analytic application to
speed up planning tasks. You can also apply filters on the allocation process if you want to limit which data is
involved.
To learn more about how allocation processes work, see Learn About Allocations [page 2513].
• You need to have access to a story or analytic application with a table based on the appropriate planning
model.
• An allocation process must be set up for this model. See Set Up Your First Allocation Process [page 2516]
and Build an Advanced Allocation Process [page 2519] for details.
• You need to have the Execute permission for planning models to run allocations.
Note
Running legacy allocation processes is not recommended: allocations created in the allocations app will
eventually no longer be available, and all allocations will then need to be created in the data action designer.
Instead, create your allocations as part of a data action and run that data action. To learn more about creating
allocations in the data action designer and setting up data action triggers to run data actions, see Creating an
Allocation Step [page 2332] and Set Up Planning Triggers [page 2256].
Procedure
1. Select the table based on the model where you want to allocate values.
The allocation process begins to run. If you select Run in Background, you'll get a notification when the
process finishes. Select Refresh to show the results of the allocation process in your table.
After you run an allocation process as a planning user, you may need to check its status in the job list. Or, as an
admin, you might need to see how other users' allocation jobs are doing.
To access the job list, select (Allocations) and choose Allocation Jobs. You can also select Go to Job List
from the message that appears when you start running an allocation job in the background.
A list of allocation jobs is shown. You can search the list, sort it, or filter it by job status. To see updated
information about the jobs, select Refresh.
In the SAP Analytics Cloud calendar, you can create automatic data action tasks that help you define the start
time of your data action. Data actions are a flexible planning tool for making structured changes to model data,
including copying data from one model to another.
Prerequisites
To create an automatic data action task, you must have existing data actions in the designer. For more
information, see Create a Data Action [page 2322].
Context
Automatic data action tasks can be for you alone or you can let others manage or view your automatic data
action task. If you don't have the appropriate permissions to run the data action, you can change the assignee
from yourself to another user who has the necessary permissions and send a permission request. Once the
assignee agrees, the task can run automatically at the specified start date or start condition.
• Set the start date. Need to schedule something on a recurring basis? You can set up recurrence, and
change individual occurrences as needed.
• Let the task start once another calendar event reaches a certain status (accomplished, for example), or
several other calendar events reach one of the defined statuses.
• If your task is grouped within a parent process, let it start at the same time as its parent process.
Procedure
Start By:
• Dependency: The start date of your task depends on other events and their status.
1. Select Add Events to define which events your task shall depend on.
2. Under Statuses, select which status these events need to meet before your task can start.
Note
All of the selected events need to meet any of the chosen statuses before your task can start.
• Parent Process: Your task will start at the same time as its parent process.
Note
If you haven't added a parent process to your task yet, you'll be asked to select one.
4. Optional: To specify the duration of the task, you can do the following:
• To change the default duration from 0 minutes to 10 minutes, select Estimated End Date.
• If you expect the duration of the task to be different, select Estimated End Date and specify the
end date you expect.
Note
The duration of an automatic task is fixed. Changes to the parent process won't influence the duration
of the automatic data action task.
• Recurrence Pattern: Set the task to repeat by Minute, Hour, Day, Week, or Month.
Note
A high recurrence frequency like every 5 minutes can put extra load on the system and
may slow it down.
Automatic data action tasks with two preceding task occurrences that are still running are
automatically canceled.
• Start Date of First Occurrence: Set the planned start date and time for the first occurrence.
6. Optional: If you don't want the task to start automatically, turn off the corresponding option that's
available depending on the chosen start condition:
• Automatically activate the task on the planned start date: The task will be activated automatically on
the selected start date.
• Activate the task automatically once any status of the dependencies is met: Once all selected
events your task is dependent on have met any of the selected statuses, the task will be activated
automatically.
• Activate the task automatically once its parent process starts: The task will be activated automatically
on the actual start date of its parent process.
7. Optional: If there's an overall process for the task, add the task to the process:
You can select only the processes that you can edit. You need to be the owner or the assignee of the
parent process.
If you've set up recurrence for your task, you can't add it to a parent process.
To see the relationship between calendar events and their parent processes more easily, use the List
workspace.
8. Optional: You can disable the API access to your task from external applications.
By default, this setting is turned on. For more information, see Access Calendar Events Using the Calendar
API [page 2681].
9. Select Create.
The data action will automatically run according to the selected start conditions, unless you've turned off
the option to have it activated automatically.
Note
The event is added to the calendar and opened in the Details panel.
If you created a recurring event, the Details panel will open with the Series tab selected instead of the
Occurrence tab.
10. In the Data Action section, select a data action from the dropdown list displaying recently used data
actions, or choose Select other data action... and select a data action from a private, public, or workspace
location.
If you want to edit the data action or get familiar with its steps and parameters, select (Open in Designer).
11. Under Publish to Target Version after Execution, choose the desired behavior for publishing your changes:
• Do not publish: Changes will not be published and the version stays in public edit mode. These changes
can be published manually or published as part of another process.
• Publish and fail if there are warnings: All of your changes will automatically be published. If any
restrictions such as data locks apply to the data you’re trying to publish, then the publish will fail.
• Publish and ignore warnings: All of your changes that are not affected by restrictions will automatically
be published. If any restrictions such as data locks apply to some of the data you’re trying to publish,
then the affected data will be discarded.
Note
• All of your unpublished changes to the target version will be published, even if they weren’t part of
the data action.
• This option only applies to public versions. If the data action runs on a private version or on a BPC
write-back model, you’ll need to publish the data manually.
12. Set values for the parameters of the selected data action, if needed.
From the Value list, select how you want to set the value:
Note
There are parameters that can only take one value and parameters that can take more than one value.
• Default Value: Use the default values set in the data action, if available.
For more information, see Add Parameters to Your Data Actions and Multi Actions [page 2338].
Note
If a parameter represents a source or target measure, keep in mind that different measures can have
different value ranges and numbers of decimal places. See Limits on Value Ranges and Decimal Places
for Measures [page 2168] for details.
Select Add Owners: To let more people monitor and manage the task, you can add additional owners.
Change the assignee: If you don't have appropriate permissions to run the data action, you can change the
assignee from yourself to another user who has the necessary permissions and send a permission request.
1. Select You in the Assignee section, and then choose Change from the dropdown menu.
2. In the Change User dialog, select the user name of the person you'd like to send a permission request to.
Select Add Viewers: If you want other people to be able to see the task, you can share the task with them.
• As owners and viewers, you can add individual users or teams that have been created in SAP Analytics
Cloud:
Adding teams instead of individual users may help save you time, and you don't need to define who's
taking care of the task. You can choose to Resolve the team so the individual users are displayed, and
adapt the list according to your needs.
Note
After you resolve a team, the reference to that team is removed. Instead, the system behaves like
you added all the team members individually. Any changes to the team on the SAP Analytics Cloud
Security page (like adding or removing team members) will not be reflected in the People section of
your calendar task.
Tip
If you add someone and then decide to share the task with someone else, select the user name or the
team name, and then choose Change from the dropdown menu. In the Change User dialog, select the
new name.
If you add 7 or more users or teams as owners or viewers, only the first 5 user or team names are
displayed and a link (... 3 More, for example) lets you access a dialog with the complete list of added
users and teams. In this dialog, you can search for user or team names, and you can add, change,
or remove them.
14. When you're finished filling in the details and are ready to send the task out, do one of the following:
• Select Activate.
Tip
This option is also available when you click on the arrow next to the status in the header section.
When you activate the task, and you've added some teams that haven't been resolved yet, the teams
are resolved automatically. You can remove individual users of the team, for example. Users assigned
to multiple teams are only counted once.
• If you replaced yourself with another assignee, select Update and Start Permission Request.
Note
If you don't start the permission request, the task will automatically be canceled on the planned
start date.
Results
After running the data action, its status will automatically change to Successful or Failed, depending on the
outcome. If you replaced yourself with another assignee and sent the permission request, but the assignee
didn’t grant permission before the task started, the data action won't run and the task status will be set to
Canceled.
Tip
To view the data action status in the Data Action Monitor, select Data Action Monitor in the Details panel of
the data action task.
Adding a Description
Under Description, you can add some details or instructions you'd like to share.
Adding Reminders
To set reminders, select Add Reminders in the Time section and fill in the required time settings.
Once you've activated the task, you may want to send an immediate reminder to the assignee: Select Send
Immediate Reminder and enter some text as a message. The assignee will be notified right away.
If you haven't done so already, you can add the task to a process in the Hierarchy section.
Note
You can select only the processes that you can edit. You need to be the owner or the assignee of the parent
process.
If you've set up recurrence for your task, you can't add it to a parent process.
To see the relationship between calendar events and their parent processes more easily, use the List
workspace.
In the title area, you can select a different style for the task or create a new style. Select the arrow next to the
task title and choose Create Style..., or select one of the styles you've already created.
The style is a user-specific setting. The selected style is only visible to you.
If you've adapted an existing task, select View Changes to review all the changes that have been applied. Decide
whether to send them out or delete any of the changes, and then select Update or Update and Notify.
Related Information
In the SAP Analytics Cloud calendar, you can create automatic multi action tasks that help you define the
start time of your multi action. Multi actions are a flexible planning tool for making structured changes across
multiple model data and versions, including copying data from one model to another.
Prerequisites
To create an automatic multi action task, you must have existing multi actions in the designer. For more
information, see Create a Multi Action [page 2353].
Context
Automatic multi action tasks can be for you alone or you can let others manage or view your automatic multi
action task. If you don't have the appropriate permissions to run a multi action, you can change the assignee
from yourself to another user who has the necessary permissions and send a permission request. Once the
assignee agrees, the task can run automatically at the specified start date or start condition.
• Set the start date. Need to schedule something on a recurring basis? You can set up recurrence, and
change individual occurrences as needed.
• Let the task start once another calendar event reaches a certain status (accomplished, for example), or
several other calendar events reach one of the defined statuses.
• If your task is grouped within a parent process, let it start at the same time as its parent process.
Procedure
Start By:
• Dependency: The start date of your task depends on other events and their status.
1. Select Add Events to define which events your task shall depend on.
2. Under Statuses, select which status these events need to meet before your task can start.
Note
All of the selected events need to meet any of the chosen statuses before your task can start.
• Parent Process: Your task will start at the same time as its parent process.
Note
If you haven't added a parent process to your task yet, you'll be asked to select one.
4. Optional: To specify the duration of the task, you can do the following:
• To change the default duration from 0 minutes to 10 minutes, select Estimated End Date.
• If you expect the duration of the task to be different, select Estimated End Date and specify the
end date you expect.
Note
The duration of an automatic task is fixed. Changes to the parent process won't influence the duration
of the automatic multi action task.
5. Optional: If the start condition is set to Time, you can set up recurrence: Select Add Recurrence and
then fill in the settings.
• Recurrence Pattern: Set the task to repeat by Minute, Hour, Day, Week, or Month.
Note
A high recurrence frequency like every 5 minutes can put extra load on the system
and may slow it down.
Automatic multi action tasks with two preceding task occurrences that are still running are
automatically canceled.
• Start Date of First Occurrence: Set the planned start date and time for the first occurrence.
6. Optional: If you don't want the task to start automatically, turn off the corresponding option that's
available depending on the chosen start condition:
• Automatically activate the task on the planned start date: The task will be activated automatically on
the selected start date.
• Activate the task automatically once any status of the dependencies is met: Once all selected
events your task is dependent on have met any of the selected statuses, the task will be activated
automatically.
• Activate the task automatically once its parent process starts: The task will be activated automatically
on the actual start date of its parent process.
7. Optional: If there's an overall process for the task, add the task to the process:
Note
You can select only the processes that you can edit. You need to be the owner or the assignee of the
parent process.
To see the relationship between calendar events and their parent processes more easily, use the List
workspace.
8. Optional: You can disable the API access to your task from external applications.
By default, this setting is turned on. For more information, see Access Calendar Events Using the Calendar
API [page 2681].
9. Select Create.
The multi action will automatically run according to the selected start conditions, unless you've turned off
the option to have it activated automatically.
Note
The event is added to the calendar and opened in the Details panel.
If you created a recurring event, the Details panel will open with the Series tab selected instead of the
Occurrence tab.
10. In the Multi Action section, select a multi action from the dropdown list displaying recently used multi
actions, or choose Select other multi action... and select a multi action from a private, public, or workspace
location.
If you want to edit the multi action or get familiar with its steps and parameters, select (Open in
Designer).
11. Set parameters for your multi action, if needed.
From the Value list, select how you want to set the value:
Note
There are parameters that can only take one value and parameters that can take more than one value.
• Default Value: Use the default values set in the multi action, if available.
For more information, see Add Parameters to Your Data Actions and Multi Actions [page 2338].
If a parameter represents a source or target measure, keep in mind that different measures can have
different value ranges and numbers of decimal places. See Limits on Value Ranges and Decimal Places
for Measures [page 2168] for details.
Select Add Owners: To let more people monitor and manage the task, you can add additional owners.
Change the assignee: If you don't have appropriate permissions to run the multi action, you can change the
assignee from yourself to another user who has the necessary permissions and send a permission request.
1. Select You in the Assignee section, and then choose Change from the dropdown menu.
2. In the Change User dialog, select the user name of the person you'd like to send a permission request to.
Select Add Viewers: If you want other people to be able to see the task, you can share the task with them.
• As owners and viewers, you can add individual users or teams that have been created in SAP Analytics
Cloud:
Adding teams instead of individual users may help save you time, and you don't need to define who's
taking care of the task. You can choose to Resolve the team so the individual users are displayed, and
adapt the list according to your needs.
Note
After you resolve a team, the reference to that team is removed. Instead, the system behaves like
you added all the team members individually. Any changes to the team on the SAP Analytics Cloud
Security page (like adding or removing team members) will not be reflected in the People section of
your calendar task.
Tip
If you add someone and then decide to share the task with someone else, select the user name or the
team name, and then choose Change from the dropdown menu. In the Change User dialog, select the
new name.
If you add 7 or more users or teams as owners or viewers, only the first 5 user or team names are
displayed and a link (... 3 More, for example) lets you access a dialog with the complete list of added
users and teams. In this dialog, you can search for user or team names, and you can add, change,
or remove them.
13. When you're finished filling in the details and are ready to send the task out, do one of the following:
• Select Activate.
This option is also available when you click on the arrow next to the status in the header section.
When you activate the task, and you've added some teams that haven't been resolved yet, the teams
are resolved automatically. You can remove individual users of the team, for example. Users assigned
to multiple teams are only counted once.
• If you replaced yourself with another assignee, select Update and Start Permission Request.
Note
If you don't start the permission request, the task will automatically be canceled on the planned
start date.
Results
After running the multi action, its status will automatically change to Successful or Failed, depending on the
outcome. If you replaced yourself with another assignee and sent the permission request, but the assignee
didn’t grant permission before the task started, the multi action won't run and the task status will be set to
Canceled.
Adding a Description
Under Description, you can add some details or instructions you'd like to share.
Adding Reminders
To set reminders, select Add Reminders in the Time section and fill in the required time settings.
Once you've activated the task, you may want to send an immediate reminder to the assignee: Select Send
Immediate Reminder and enter some text as a message. The assignee will be notified right away.
If you haven't done so already, you can add the task to a process in the Hierarchy section.
Note
You can select only the processes that you can edit. You need to be the owner or the assignee of the parent
process.
If you've set up recurrence for your task, you can't add it to a parent process.
To see the relationship between calendar events and their parent processes more easily, use the List
workspace.
In the title area, you can select a different style for the task or create a new style. Select the arrow next to the
task title and choose Create Style..., or select one of the styles you've already created.
The style is a user-specific setting. The selected style is only visible to you.
If you've adapted an existing task, select View Changes to review all the changes that have been applied. Decide
whether to send them out or delete any of the changes, and then select Update or Update and Notify.
Related Information
Use the SAP Analytics Cloud calendar to keep track of tasks in your planning processes, stay on schedule, and
collaborate with your team. Create, assign, and work on tasks, and add files, approval workflows, and processes
that link tasks together.
Create multiple tasks at once using recurrence, or generate tasks and processes automatically based on model
data. You can also use the calendar to schedule data locks, data actions, or multi actions, and to view your
input tasks.
In this video, you will create the events for a planning process using the integrated calendar in SAP Analytics
Cloud.
In this video, you will explore how different users can complete tasks designated in the calendar to complete a
typical planning workflow.
Click through the interactive, step-by-step tutorials illustrating a data collection scenario with the SAP
Analytics Cloud calendar. All tutorials are captioned in English only:
Related Information
Use a value driver tree (VDT) in SAP Analytics Cloud to visualize the entire value chain of your business, instead
of looking at isolated KPIs.
You may want to take a driver-based approach to planning, where you identify key driver values and model their
impacts on top-level metrics.
In this case, you’ll need to see how value flows through your model. Like a table, a value driver tree shows
numeric values, and supports data entry so that you can adjust the plan. However, it also shows hierarchical
links between different values, and how the values are interconnected.
For example, if you increase marketing expenditures, you’ll expect your overhead costs and your sales volume
to increase. In turn, you’ll see an increase in your sales revenue, and the cost of goods sold. A value driver tree
lets you visualize these different links and see how your changes affect the bottom line.
Value driver trees let you pick which accounts or measures (depending on your model) and cross calculations
to display in each node, and how to filter the node. Then, you can link nodes together to form a directed graph.
You can also quickly expand or collapse different groups of nodes. This way, you get a streamlined look at the
most important values, without any distractions.
Value driver trees are set up to compare values across time periods that you specify, so they can help with
long-term strategic planning or with focusing on a forecast.
You can also perform data entry by keypad, so value driver trees work well with touchscreen-based Digital
Boardroom presentations.
Note
Previously, you created value driver trees in a separate designer outside of stories. These legacy value
driver trees are no longer available, and any changes will not be saved. To transform existing legacy value
driver trees to the new value driver tree in a story, see Transforming Legacy Value Driver Trees [page 2291].
In this video, you will create a value driver tree to simulate how adjusting various planning values would impact
overall outcomes.
Add a value driver tree (VDT) to your story or analytic application to visualize how values flow through your
model, and to help you make strategic planning decisions and simulate different scenarios.
Prerequisites
Both planning and BI users can create value driver trees and enter data in them. Only users with planning
licenses can publish changes to a version, however.
Context
You’ll start by picking your model. Then, you can add and configure nodes manually, or automatically based on
the model’s structure. Finally, you’ll link the nodes to arrange them into a directed graph.
Each node shows at least one row of values. The rows contain a value for each time period that you include.
The row setup also depends on the combination of accounts or measures, and cross calculations that you add
to the node. The following options are available:
• One account or measure + one cross calculation; one account + one measure with no cross calculation; or
one measure with no account or cross calculation: Displays a single row of values. The row title comes from
the cross calculation name, if available, or the measure name.
• One account or measure + multiple cross calculations: Displays a row for each cross calculation. The row
titles come from the cross calculation names.
• Multiple accounts or measures + one cross calculation; multiple measures + one account with no cross
calculation; multiple accounts + one measure with no cross calculation; or multiple measures with no cross
calculation: The node displays a different row for each measure or account. The row titles come from the
account or measure names, depending on which has a larger amount.
If you select measures but no cross calculation, this yields the same results as selecting the cross calculation
Measure Values.
If you select measures but no cross calculation on models that have no accounts, this will yield the same
results as selecting the cross calculation Measure Values.
Displaying multiple measures and multiple cross calculations, or multiple measures and multiple accounts, at
the same time isn’t supported.
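For illustration, the row rules above can be condensed into a small decision function. This is a hypothetical sketch, not SAP Analytics Cloud code; the function name and inputs are assumptions made for the example:

```python
def node_rows(accounts, measures, cross_calcs):
    """Illustrative sketch of the row-display rules for a value driver
    tree node. Inputs are lists of names; returns the row titles the
    node would display. Unsupported combinations (for example, multiple
    measures with multiple cross calculations) are not validated here."""
    # Selecting measures with no cross calculation behaves like
    # selecting the "Measure Values" cross calculation.
    if measures and not cross_calcs:
        cross_calcs = ["Measure Values"]
    if len(cross_calcs) > 1:
        # One account or measure + multiple cross calculations:
        # one row per cross calculation.
        return list(cross_calcs)
    items = accounts + measures
    if len(items) > 1:
        # Multiple accounts or measures + one cross calculation:
        # one row per account or measure.
        return list(items)
    # Single combination: one row, titled by the cross calculation
    # name if available, otherwise the measure name.
    return [cross_calcs[0] if cross_calcs else items[0]]
```

For instance, selecting a single measure with no cross calculation yields one row titled Measure Values, matching the note above.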
By right-clicking in your value driver tree, you can open a context menu and perform actions such as removing,
duplicating, or adding a node. To delete a value in your VDT, right-click on the value and choose Delete Value.
Procedure
1. Add a value driver tree to your story page or analytic application canvas.
• To add a value driver tree to a canvas, select Value Driver Tree from the launch page or select
Add Value Driver Tree from the toolbar.
• To add a value driver tree to a responsive page, select Add Value Driver Tree from the toolbar.
• To add a value driver tree to an analytic application, select Add More Widgets Value Driver
Tree .
2. Select Create New Value Driver Tree and choose a model from the Based on list.
The model needs to be an import data model. You'll usually want to choose a planning model so that you
can do data entry, but analytic models with a Date dimension are available too.
The value driver tree is added. You can right-click it for configuration, styling, and navigation options,
including Undo and Redo. You can also use the Builder and Styling panel to set up your value driver tree.
3. Add and configure the value driver tree nodes.
There are three ways to add and set up nodes: manually, automatically, or by duplicating an existing node.
Auto-create creates nodes based on either the account hierarchy and calculation structure in the model, or
the measures and calculation structure in the model.
• To add nodes automatically, select Auto-create Value Driver Tree from Model. Afterwards, you can
edit, add, and remove individual nodes as required, or right-click and select Auto-create ( ) to
automatically add more nodes.
• To add a single node, right-click the value driver tree and select Add Node ( ). Then change the node
title to help you identify the values.
• To duplicate an existing node, right-click on the node and select Duplicate Node ( ).
Settings are available for individual nodes, as well as the default node configuration that applies to all
nodes. If you’re adding nodes manually, it’s easiest to start with the default node configuration and then
add nodes and change their settings as needed.
As you set up nodes, you can use the Unlink from default node configuration ( ) option for any setting that
was inherited from the default node configuration. This way, you’ll keep that setting when you change the
default node configuration.
Depending on the configuration of your model, you can add two of the three options to a node: Accounts,
Measures, and Cross Calculations.
Setting Description
Filter Here, you can filter the data or cross calculations. You can
filter by any dimension except the account, measure, or
cross calculation dimensions.
Note
Date dimensions using week granularity or
custom time hierarchies can only be filtered by
member.
You can only change the date filter for the entire
value driver tree, not for individual nodes.
If you select a time range, the value driver tree uses
a presentation date dimension to set the periods that
display in each node. By default, it's the planning
date dimension. It filters the values for each node,
and it determines the time granularity and which
periods are displayed for each node unless you
specify a Presentation Date Range, too.
Note
• You can add calculations with input controls to let viewers customize the value driver tree.
However, input controls that let viewers pick the measure, account or cross calculation aren't
available.
• Calculated dimensions and Difference From aren't available for calculations in a value driver tree.
You can also apply filters to the entire widget, story, or page. A couple of restrictions exist:
• Measure-based filters and filters on calculated dimensions won't affect the value driver tree.
• Data entry isn't available when there's an advanced filter on the version dimension.
4. If necessary, set the Presentation Date Range for the value driver tree.
Value driver trees show different values for each time period that you include. It's easiest to set the time
range by filtering the date dimension. In some cases, you'll also want to set a Presentation Date Range. For
example, you might want to work with multiple date dimensions, or you might not want the date filter to
match the time periods that the value driver tree displays.
This setting determines the periods that are displayed for each node. To add one, open the default
node presentation and expand Presentation Date Range. If you're using multiple date dimensions, pick
the dimension that will set the displayed periods from the Date Dimension list. Then, select Add
Presentation Date Range Filter and choose the range and granularity.
You can also use no date filter to see the aggregated data for all time periods.
5. After you’ve added and configured the value driver tree nodes, you can start creating links between them.
The value driver tree flows from right to left. Hover over a node and drag and drop the left link icon ( ) to
link to a parent, or the right icon to link to a child. You can also add and remove parent and child links from
the Relationships section for each node.
Each node can link to multiple parents and children. However, you can’t set up cycles where a node is both
an ancestor and a descendant of another node.
Note
Links are for visualizing data only, and don’t impact the calculation of node values.
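The no-cycle rule means the node links must always form a directed acyclic graph. The sketch below shows the kind of check that rule implies; it is an illustrative example in Python, not the widget's actual validation code:

```python
def has_cycle(children):
    """children maps each node name to the list of its child nodes.
    Returns True if following child links ever revisits an ancestor,
    i.e. if a node would be both an ancestor and a descendant of
    another node."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for child in children.get(node, []):
            state = color.get(child, WHITE)
            if state == GRAY:  # back edge: child is an ancestor of node
                return True
            if state == WHITE and visit(child):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in children)
```

A link setup such as Revenue with children Price and Volume passes the check; linking two nodes as each other's parents would not.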
6. You can also drill into a node by a certain dimension to split the node (and all its descendant nodes) based
on dimension members.
For each selected member, a new node is created as a child of the selected node, with its title prefixed by the
member name and its values filtered by the corresponding selected member. The descendants of the selected
node are duplicated for each selected member and become the descendants of each new node, and the values
in all duplicated nodes are filtered by the corresponding selected member. If you drill into a node that has no
descendants in the first place, only the new child nodes are created.
Note
The Drill into option is available in both view and edit modes. If you make changes in view mode, you
need to switch to edit mode before you can save your changes.
7. If you have a large value driver tree, you can condense the information by collapsing some of the nodes.
Right-click a node and select Collapse Node to hide its descendants. You can expand the node again while
viewing or editing the story.
8. Open the Styling panel to set up colors, borders, and other styling options. If your value driver tree has
longer node titles, you can specify a Minimum Node Width on this panel to make sure that the entire title is
visible.
Results
When you’re finished building your value driver tree, you can save your story or analytic application, begin using
it, or share it with other users.
In SAP Analytics Cloud, value driver trees include several functions to help you simulate scenarios and quickly
navigate your data.
Data Entry
To quickly simulate different scenarios and see how individual value changes affect the overall scenario, you
can enter data directly in a value driver tree based on a planning model.
Select a value to change it. You can use your keyboard or the keypad to edit the value. You can enter a new
value, or change the existing value using the slider or arrow keys. You can also input a mathematical function,
such as +10% (to increase the value by 10%) or -300 (to reduce the value by 300). The new value is entered
to the model, and the rest of the value driver tree updates to show the effects of your change.
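The entry expressions described above resolve to simple arithmetic against the current cell value. The following is a hypothetical illustration of how such expressions could be interpreted, not the product's actual input parser:

```python
def apply_entry(current, entry):
    """Resolve a data-entry expression against the current cell value.
    Supports plain numbers (which replace the value) and +/- offsets
    with an optional % suffix, e.g. "+10%" increases the value by 10%."""
    entry = entry.strip()
    if entry[0] in "+-":
        sign = 1 if entry[0] == "+" else -1
        body = entry[1:]
        if body.endswith("%"):
            # Percentage adjustment relative to the current value.
            return current * (1 + sign * float(body[:-1]) / 100)
        # Absolute adjustment added to or subtracted from the value.
        return current + sign * float(body)
    # A plain number replaces the current value.
    return float(entry)
```

For example, entering +10% against a value of 1000 would resolve to 1100, and entering -300 would resolve to 700.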
To undo or redo data changes, you can open the Version Management panel from a table showing the same
model. For more information, see Undo, Redo, and Revert Changes to Versions [page 2179].
Data entry won't be available when there's an advanced filter on the version dimension. For details about
this type of filter, see Advanced Filtering [page 1537].
Navigation
To navigate the tree, click and drag within the tile, and use the mouse wheel to zoom. You can also right-click
for navigation options. To find individual nodes, use the search button in the upper right corner.
If you want to hide nodes that you don’t need, right-click a node and select Collapse Node to hide its
descendants. Use the right arrow that appears next to the node to expand it again ( ).
A map of the value driver tree is displayed in the lower-left corner by default, and you can drag the selected
area in the map or use the zoom buttons to change the view of the value driver tree, or you can select Fit to
Screen to show all nodes in the tile. You can also hide the map or drag it to reposition it on the tile.
You might want to show node data in a table too, for example to use different planning tools for data entry.
Follow these steps to set up the table.
1. Add the table and filter the measures, accounts or cross calculations dimensions to the same measures,
accounts or cross calculations that the node shows.
2. If the node has any other filters, create the same filters for your table.
3. Add the account, measure and cross calculations dimensions to the rows of the table, and add the
presentation date dimension to the columns.
4. Remove any other dimensions from the table’s rows and columns, including the version dimension.
Related Information
In SAP Analytics Cloud, you can transform a legacy value driver tree into a value driver tree (VDT) widget so
that you can work on it in your story and use newer features.
Context
There are a few things to consider when transforming a value driver tree:
• Calculations from year over year, simple calculation, and union nodes are not included in the
transformation process. The nodes themselves are included, and you can use them to display calculated
measures from your story instead.
• For year over year and simple calculation nodes, the output account will become the node’s measure after
transformation. Union nodes won’t have a measure, though.
• Only nodes shown in the Consumption view (that is, the nodes that appear in the story) will be copied to
the new value driver tree.
To learn more about transforming legacy value driver trees, as well as to view the planned deprecation timeline,
see the blog post Value Driver Tree (VDT) Update – Depreciation of Legacy VDT and Migration Methods .
Procedure
Note
A new value driver tree widget is added to the page based on your legacy value driver tree.
Predictive planning is directly embedded in the Planning process to provide an integrated end-to-end
experience.
With Predictive planning, you can generate accurate forecasts on top of your actuals to accelerate your
day-to-day planning activities.
Predictive scenarios (time series forecasting models) are created on top of planning models, with up to 1000
predictive models at a given time.
Predictive forecasts are persisted into private versions and can be promoted to public versions to expose the
forecasts to wider planner groups.
The data you provide to train your predictive models or apply to generate predictions can be organized using
various formats based on the planning model. You can define how you want your predictive model to be trained
and you can also generate forecasts per entity to get accurate business-oriented insights. The granularity
of the predictive forecasts is determined by the aggregation level of the combined dimensions. For more
information, refer to the topic Get Distinct Predictive Forecasts per Entities For your Planning Model [page
2311].
In the application, the Predictive Content Creator role provides appropriate permissions to do Predictive
Planning.
The following table explains what you can do in the Predictive Scenarios, and which roles and permissions you
need to perform the action.
If you are an admin, you can create custom roles. For more information, see Create Roles [page 2877] and
Standard Application Roles [page 2838].
Data Sources
Dataset
Note
These roles and permissions apply to both live and acquired datasets.
Delete a dataset x x
For more information, see About Datasets and Dataset Types [page 544].
Planning Models
Note
You must have the relevant license to create planning models. For more information, see Features by
License Type for Planning Models [page 68].
What you can do: Predictive Content Creator, Predictive Admin, Planning Modeler
Note
You need read access to the planning model. You
can create private versions once you have read
access.
Note
This applies to the private version of the planning
model only. You must have write access to the
private version related to the planning model. You
can always write to private versions you own, but
you need to be granted write access for the ver-
sions you don't own (shared versions).
Note
To publish a private version of your planning model with Smart Predict forecasts, you need Maintain
permissions at a global level, and at the level of the specific model. Maintain permissions aren't enabled by
default on the Predictive Content Creator and Predictive Admin roles. Planning Professional Admin , Planning
Professional Modeler , and Planning Standard Reporter are the roles in the application that include Maintain
permissions.
Type of predictive models: For Smart Predict - Predictive Planning, that is, the integration of SAP Analytics
Cloud Smart Predict with SAP Analytics Cloud planning models, only time series forecasting is supported.
Input planning models:
• Type: Smart Predict - Predictive Planning only supports standalone planning models (both new model
types and classic account models). All data sources that can be leveraged by SAP Analytics Cloud planning
models are automatically supported, with the exception of SAP BPC. Neither live nor acquired SAP Business
Planning and Consolidation (SAP BPC) planning models are supported.
• Version: You can use public or private versions. The input version must be a public version that is not in edit
mode, or a private version. You need at least read access to it.
Note
• Smart Predict doesn't support predictive forecasting on calculated measures, including currency
conversion measures, when your planning model is a new model type.
Entities (crossing of multiple dimensions)
For more information on entities, see also Get Distinct Predictive Forecasts per Entities For your Planning
Model [page 2311].
• You cannot select more than 5 dimensions or attributes to create a time series predictive model generating
predictive forecast values per entity.
• The maximum number of entities supported is 1000.
• Attributes and Hierarchies:
• You can select the attributes (custom properties) that form part of level-based hierarchies or that can
be freely defined.
• You can select system properties, like the currency, for instance.
• The levels can be selected indirectly by selecting the custom properties that form part of the level-based
hierarchy, whereas for parent-child hierarchies you need to create them, as Smart Predict can generate
one segment for each leaf only.
Tip
It's possible to add custom properties to group members in custom ways: you can use this mechanism
to keep the number of entities under 1000 and perform an intermediate forecasting approach where the
predictive forecast is run on intermediate nodes: for nodes above, predictive forecasts are spread, and for
nodes below, they are summed.
Entity Filters
• (Null) and (No Value) values are not supported for attributes and hence are not listed for member selection
in Available Members.
• When defining a filter, a maximum of 1000 values is displayed under the list of Available Members for
selection. If the selected dimension has more members, you have to search within the list of Available
Members, and the search displays a maximum of 1000 results.
Example
• The granularity of the date dimension of your planning model is defined as monthly.
• You have data for the months from January 2016 to December 2020 (60 months).
• You ask Smart Predict for 12 forecasts.
Influencer Restriction: See How Adding Influencers to Your Planning Model Can Potentially Increase the
Accuracy of Your Predictive Model? [page 2306].
Support of account dimension with multiple account hierarchies: Predictive Planning supports planning
models where multiple hierarchies are defined in the account dimension. It is not possible to select the
account hierarchy that should be used to select the target variable for the predictive scenario. Only the
accounts that are part of the default account hierarchy can be selected in the settings of the predictive
scenario.
Data volume: SAP Analytics Cloud does not allow retrieving more than 1,000,000 cells per query.
User Managed Date Members: Date dimensions with user-managed members are supported by Predictive
Planning. Nevertheless, you must ensure that no adjustment period exists in the training data. As adjustment
periods are meant to collect values that should have been associated with earlier periods, the existence of
adjustment periods in the training data would bias the predictive model, and the forecasts would therefore not
be reliable.
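As a rough illustration of the numeric limits in this table (at most 1000 entities, and no query retrieving more than 1,000,000 cells), a feasibility check could look like the sketch below. The cell-count formula (entities multiplied by periods) is an assumption made for the example, not a documented query shape:

```python
MAX_ENTITIES = 1000
MAX_CELLS_PER_QUERY = 1_000_000

def within_limits(n_entities, n_periods):
    """Rough feasibility check for a Predictive Planning setup:
    the entity count must not exceed 1000, and the query is assumed
    here to retrieve one cell per entity and period, which must stay
    within the 1,000,000-cell retrieval limit."""
    cells = n_entities * n_periods
    return n_entities <= MAX_ENTITIES and cells <= MAX_CELLS_PER_QUERY

# e.g. 800 entities with 60 monthly observations each:
# 800 * 60 = 48,000 cells, well within both limits.
```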
Aggregation Restrictions
SAP Analytics Cloud allows you to create planning models and run predictive time series forecasts on your data
within a story grid or table. For more information, refer to Run Predictive Forecasts on Table Cells [page 2247].
With Smart Predict you can go one step further and generate forecasts per entity to get accurate business-
oriented insights, not only raw forecasts. You can use your planning model directly as the data source, with no
need to extract the data into a dataset.
Smart Predict uses the data available in your planning model to create and train a predictive model. You can
then analyze predictive forecast accuracies across the combined dimension values and understand the time
series breakdown in detail. Once you are satisfied with the accuracy of your predictive model, you can generate
the predictive forecasts: they are saved back directly to the private version of your planning model. It’s then
easy for you to augment your story with actuals and predictive forecasts.
In Smart Predict, you need to provide data so that your predictive model can be trained or applied to generate
the predictions. This data can be organized using different formats depending on the type of predictive model:
For a detailed use case that uses a planning model as a data source, read this blog .
Note
There are restrictions on using planning models as data sources. See Restrictions [page 2040].
There are some settings to specify before you train your time series predictive model using a planning model as
data source.
To define how you want your predictive model to be trained, use the Settings panel as described in the tables
below.
General
Description: Enter a description that explains what your predictive model is trying to do. For example, you
might enter "Forecast product sales by city."
Time Series Data Source: Browse and select the planning model you want to use as a data source. Smart
Predict supports only standalone planning models (both new model types and classic account models).
Note
SAP Business Planning and Consolidation (SAP BPC) planning models are not supported, whether they are
live or acquired.
Version: Browse and select the planning model version you want to use as the data source. The input version
must be a public version that is not in edit mode, or a private version. You must have at least read access to
it. There are also some specificities when currency conversion is enabled. See Use Currencies with Predictive
Planning [page 2309].
Target: Select the numeric value containing the data to be forecasted.
Note
• If your target is related to a currency, you also have the option to select Default Currency or Local
Currency. The option you choose determines the currency used to forecast and report on your target.
Smart Predict doesn't support calculated measures when using a planning model, even if an inverse formula
is provided. For more information on inverse formulas, refer to the chapter Inverse Formulas [page 917]. For
more information about using a planning model as a data source, see the section Restrictions Using Planning
Model as Data Source for Smart Predict in Restrictions [page 2040].
Time Granularity: By default, this refers to the level of date granularity available in the planning model data
source. The forecast points created by Smart Predict respect the Time Granularity of the planning model. For
instance, if the planning Time Granularity is month and 3 forecast periods are requested, the forecasts will be
created for 3 consecutive months.
Number Of Forecast Periods: Select the number of predictive forecasts you would like to get. For more
information, see How Many Forecasts can be Requested? [page 2069].
Note
There are specific restrictions on Entities. For more information, see the section Restrictions Using Planning
Model as Data Source for Smart Predict in Restrictions [page 2040].
Entity Filters: You can select the relevant values in dimensions or attributes that are defined as part of the
entity. It is possible to select values and specific members in the hierarchy (parent-child, level-based, or flat)
for a dimension defined as an entity. This helps you to focus on the predictive forecasting scope. The Entity
Filters help in filtering the entities you want to get the forecasts for.
Note
There are specific restrictions on Entity Filters. For more information, see the section Restrictions Using
Planning Model as Data Source for Smart Predict in Restrictions [page 2040].
Train Using: Select whether you want to train your predictive model using all observations or a window of
observations. If you choose to use a window of observations, you'll need to specify the size of the window you
want to use. It can be useful to define the range of observations that will be used to train the predictive model:
you may want to ignore very old or inappropriate observations so that your predictive model doesn't learn
from obsolete or inappropriate behavior.
Note
If the range of predictive forecasts overlaps existing data in the private version, the data will be overridden.
Until: Select whether you want to train your predictive model until the last observation or until another date of
your choice. If you select a custom observation date, make sure it stays within the time range defined in the
data source planning model.
Convert Negative Forecast Values to Zero: Switch the toggle on if you want to get positive forecasts only. This
turns negative predictive forecasts to zero, which can be useful when predictive forecasts only make sense as
positive values. For example, if you need to forecast the number of sales of one of your main products by major
city for a region, it makes no sense to get negative values: either you sell a number of products or you sell
none. Negative values will be turned to 0.
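The Convert Negative Forecast Values to Zero toggle amounts to a simple clamp over the generated forecasts, sketched here for illustration only (the function name and list-based interface are assumptions, not product code):

```python
def clamp_forecasts(forecasts, convert_negative_to_zero=True):
    """Turn negative predictive forecasts into 0 when the toggle is on,
    mirroring the Convert Negative Forecast Values to Zero setting.
    Positive forecasts pass through unchanged."""
    if not convert_negative_to_zero:
        return list(forecasts)
    return [max(0.0, f) for f in forecasts]
```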
Select the Train & Forecast button. Thanks to the generated reports, you can analyze the predictive model
performance, and decide if you need to further refine your predictive model, or if you can use the predictive
forecasts with confidence. For more information, see Analyzing the Results of Your Time Series Predictive
Model [page 2113].
To learn more about why you would want to use influencers in the context of time series models on top of
planning models, refer to How Adding Influencers to Your Planning Model Can Potentially Increase the
Accuracy of Your Predictive Model? [page 2306].
Related Information
Once you’ve trained your predictive model, the performance indicators may be too low for you to immediately
trust the predictive model's accuracy (see Expected MAPE for a time series predictive model), and you may
also want to gauge the influence of business drivers on the forecasts.
One way to increase your predictive model’s accuracy is to add influencers to your planning model. These
influencers can then be used by Smart Predict to improve its understanding of the relationships between your
data.
Example
Your company noticed that the maintenance costs of their stores are getting too high. You need to analyze
them to see where to cut costs but also predict future maintenance costs better to avoid going over budget.
You create your first predictive scenario with a time series forecasting model to assess the maintenance costs
per store. You choose the overall expenses as the target, the date of these expenses as the date variable, and
the store ID as the entity.
You train your first predictive model without selecting any influencer.
The Expected MAPE of your first predictive model in the debrief is at 26.71%.
Note
You want the percentage of your predictive model’s Expected MAPE to be as low as possible as it indicates
the percentage of error you can expect in your predictive forecasts.
You suspect or know that some of the numeric values in the data source, such as the number of Saturdays and
Sundays, have a direct relation to the predicted target. You realize they impact the insights and could improve
the accuracy of your predictive forecasts if they were included as influencers.
You create a second predictive model by duplicating your first predictive model. However, this time you include
some influencers and train your second model.
Your predictive model gained 22% in accuracy simply by including variables as influencers in your predictive
model. You can try a few influencer combinations to reach the level of accuracy you want.
Note that the selected columns are only candidate or potential influencers. These columns and their relation to
the target to be predicted are analyzed by Smart Predict. The selected influencers can be ignored if using
them doesn’t improve the model accuracy. In particular, when a model is built over multiple entities, the
selected influencers can be relevant for some entities and not for others.
Influencers Validity
It’s important to understand that an influencer is valid only if the future values for that influencer can be known
in advance. Assuming that we are in 2020 and you want to generate 12 forecasts, one for each month in 2021,
then the value of the influencers must be known for each month in 2021.
For instance, if you want to predict the costs for 2021, then
• The budget is a valid influencer because you can assume that you are deciding the budget and it's known
for 2021. The number of advertisement campaigns is also possibly a valid influencer because you are
planning the advertisement campaigns in advance and have future values for number of advertisement
campaigns.
• The gas price, even if it has impacts on your costs is not a valid influencer because it’s unlikely that you can
provide the values for the coming months accurately.
Classic Account Model
• If the predictive model target is currency relevant, then the currency policy used for the target is also
applied to the influencers. It means that if you have selected a “default” currency policy for the target, then
both the target and the influencers are expressed using the default currency. If you have selected the “local”
currency policy, then both the target and the influencers are expressed in the local currency.
• If the predictive model target is not currency relevant, then the “default” currency policy is used for the
influencers.
New Model
The influencer values are always expressed using the local currency. To avoid aggregating values expressed in
different currencies, you must add the dimension holding the currency for the influencer to the entity list.
Restrictions
• The influencers are displayed (in the influencers section) using the description of the account and measure
they are based on. If two influencers have the same description, it won't be possible to distinguish them in
the debrief.
• Currency conversion measures are currently not supported as influencers.
• Smart Predict does not support using influencers with formulas on New models that have an account
dimension.
• LOOKUP calculated measures are not supported as influencers.
The planning model used as a data source might contain currencies. But how does Smart Predict support
these currencies? It depends on how they are configured in your planning model data source, and on what type
of planning model you are using - a classic account model, or a new model type.
• Classic account model: You can generate predictions in Smart Predict using the default or local currency
to read and write data, when your planning model is a classic account model and currency conversion is
enabled. Depending on your selection, the report for all your model entities is consistently expressed in
one default currency, or in multiple local currencies. For example, if the default currency is USD, you always
see numbers expressed in USD in the Smart Predict report. If there are multiple local currencies in your
planning model, the numbers in the Smart Predict report reflect these multiple local currencies.
When you select the Default Currency setting, you can only write to planning versions configured to receive
default currencies.
When you select the Local Currency setting, you can only write to planning versions configured to receive
local currencies.
Note
To understand the currency displayed in the report, you can check the Smart Predict settings, and the
currency definition in your planning model. The default currency is indicated in the planning model.
Example
If you deal in Japanese yen and your Smart Predict currency setting is Local Currency, your report is
expressed in Japanese yen.
• New model type: Smart Predict supports predictive forecasting on measures that use currencies,
provided they are data-entry enabled (for more details on measure support restrictions, see Restrictions
[page 2040]).
Smart Predict doesn't support predictive forecasting on calculated measures, including currency
conversion measures (available in the new model type).
To learn more about the different model types, you can refer to the chapter called Get Started with the New
Model Type [page 627].
To learn more about how currencies can be set up in a planning model, see Plan with Currency Conversion
[page 2239].
For classic account models, Smart Predict can support currencies when they are set at the planning model
level. The dimension holding the currency can be added as an entity to get distinct forecasts per currency
and to prevent mixed aggregations.
Caution
You must have at least read access to the input version, and write access to the output version (private
only). Public versions in edit mode are available for selection, but aren't supported.
Thanks to Smart Predict, you can create distinct predictive forecasts per entity using your planning model
as a data source, where the granularity of the predictive forecasts is determined by the aggregation level of
the combined dimensions. But what does this mean?
Example
Let's take an example to better understand how it works: Imagine that you want to forecast your future
sales by country and by product.
To build a predictive model with distinct forecasts per entity taking your planning model as data source,
Smart Predict needs to match the data contained in your planning model (actuals are used as historical
data) with the variable roles that are mandatory to generate the predictive forecasts:
• Target: a measure that does not involve calculation. It's the measure you want to forecast: <Sales>.
When the predictive model is applied, the predictive forecasts are added to your planning model. In our
example, this means that the generated forecasts for Sales are added for June and July.
Even if the time series predictive model is trained and applied at the lowest level of date granularity,
you can still report data at a higher level in a story.
Going back to our example: the time series has been generated on a monthly basis. You can report by
aggregating the sales by quarter or year (instead of month).
Entities help you create your predictive forecasts at different levels, depending on your business needs. They
can also help you detect performance gaps in your predictive models in some cases.
Entities are subsets of your predictive forecasts that are calculated independently from a combination of one
to five dimensions. Each entity can be seen as an individual predictive forecast. These individual predictive
forecasts can be aggregated at a higher level if needed.
The level you need for your predictive forecasts depends on the insights you want to get from your predictive
forecasts and the data available in your source planning model. You can create entities by combining up to 5
dimensions for varying levels of high-detail predictive forecasts. You can also work without entities to keep your
predictive forecasts high-level.
Let’s take a car sales scenario to explore in more detail how entities can help you find the level of predictive
forecasts you need. Your company wants insights on future car sales. The company sells five car brands across
six countries. You have five months of data available in your source planning model, and you want two months
of predictive forecasts.
Predictive forecasts without entities can give you insights on general trends across your data source, such as
future car sales across all countries and brands.
In this case, you use car sales as your target and Date as your date dimension and you train your predictive
model.
This high-level forecasting of the company’s future car sales is useful if you just need an overview without
looking into subset-specific trends.
Predictive forecasts using entities with multiple dimensions can give you insights on trends within and between
several subsets of your data source such as car sales per brand and country.
In this case, you keep car sales as your target and Date as your date dimension. You add Brand and Country
dimensions as entities and train your predictive model.
Predictive forecasts using entities with one dimension can give you insights on trends within one subset of your
data source, such as car sales per country.
In this case, you keep car sales as your target and Date as your date dimension. You add the Country dimension
as entity and train your predictive model.
These twelve predictive forecasts can also be aggregated at a higher level by Country or Date if needed. This
mid-level forecasting approach is useful when you need to focus on trends and relationships in one specific
subset such as how each country will perform individually and compared to other countries.
Entities are a useful tool to tailor the level of your predictive forecasts to your different business needs. You may
need to try a few forecasting combinations to reach the level of accuracy you want.
Note
Hierarchies are currently not supported as entities. For more information, see Restrictions [page 2040].
You've assessed the performance of your predictive model and you're confident using the predictive forecasts.
You want to integrate the predictive forecasts into your planning model.
Tip
• Private versions are only initially visible to the person who created them.
Note
When the default setting of Save Forecasts Values for Past Period is set to Off, only forecasts for future
periods are saved to the private version of your input planning model, in the measure that was selected
as Target. When the default setting is set to On, it allows you to save forecasts for a past period to the
private version of your planning model. This means you can assess the performance of your predictive
forecast values by using all the visual and modeling powers right there in your story, to compare the
difference between your predictive forecast and the actuals, plans or budget.
Data actions are a flexible planning tool for making structured changes to model data in SAP Analytics Cloud,
including copying data from one model to another.
Data actions are designed by modelers and then run by planners in stories or analytic applications, or
scheduled to run in the calendar.
A data action is created based on a planning model, and consists of one or more steps that are carried out on a
public or private version of the target model.
Data actions can be created from the side navigation or accessed in the file repository (Files). Recent data
actions will also show in the Recent Files section of your home screen.
In this video, you will explore the options available for managing repeatable planning processes using Data
Actions with basic steps that enable you to copy within models and across models, allocate values, re-use data
actions, and perform currency conversion.
The permissions for data actions depend on a combination of role-based permissions for data actions and
folder-based permissions in the file repository.
In order to have certain permissions for the data actions in a file repository folder, you need to have specific
corresponding permissions for data actions that are assigned by the system administrator outside of the file
repository, on the Roles page. For more information, see Create Users [page 2857] and Permissions [page
2844].
If you do not have the required role-based permissions for data actions, then your permissions in the file
repository for data actions are restricted.
There are many possible combinations of role-based permissions and how they impact folder-based
permissions in the file repository. Two examples are data action designer and data action user.
If you are a data action designer with full permissions for your role, assigned by the system administrator, then
you will have full permissions for data actions you access in the file repository.
In the second example, if you are a data action user with Read and Execute permissions assigned by the system
administrator for data actions, you have Read permission in the file repository. This permissions combination
does not allow you to make changes to the data actions, but you can trigger them. As a data action user, you
also manage all permissions only from the file repository settings.
Note
There is no Execute permission for the folder-based permissions, so having Read permission functions the
same as having Read and Execute permission in the roles settings. Having Read permission is a prerequisite
for other permissions.
Note
Having Update and Delete permissions in the roles settings requires a Planning Professional license. If you
have a Planning Standard license, you can have only Read and Execute permissions for the roles-based
permissions.
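The combination described above can be sketched as an intersection: effective access in the file repository is capped by what the role allows. This is a hypothetical, simplified sketch; the real permission model also maps folder Read to Execute and depends on the license type.

```python
# Hypothetical role names and permission sets, for illustration only.
ROLE_PERMISSIONS = {
    "data_action_designer": {"Read", "Execute", "Create", "Update", "Delete"},
    "data_action_user": {"Read", "Execute"},
}

def effective_permissions(role, folder_permissions):
    """Folder-based access is capped by the role-based permissions."""
    return folder_permissions & ROLE_PERMISSIONS[role]

# A data action user keeps Read, but Update is dropped even if the
# folder settings would grant it.
effective_permissions("data_action_user", {"Read", "Update"})      # {'Read'}
effective_permissions("data_action_designer", {"Read", "Update"})  # {'Read', 'Update'}
```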
Setting up and running a data action involves a few general stages: adding steps, optionally adding parameters, and then running the data action. The following types of steps are available:
• Copy step: This type of step copies data within a model based on a set of rules, filters, and aggregation
settings. For example, you can use it to copy data from 2018 to 2019. To learn how to create a copy step,
refer to Adding a Copy Step [page 2324].
• Cross-model copy step: With this type of step, you can copy data from a different source planning model
based on a set of filters and automatic or manual mapping between dimension members. For example,
you can bring together data from headcount and expense planning models into a central finance model. To
learn how to create a cross-model copy step, refer to Adding a Cross-Model Copy Step [page 2327].
• Allocation step: This type of step lets you run an allocation as part of your data action. It allocates
values from a source dimension to a target dimension by driver values, or by direct assignment. For
example, an allocation step can help allocate overhead costs to different departments. To learn more,
see Learn About Allocations [page 2513]. To learn how to add an allocation step to data actions, refer to
Creating an Allocation Step [page 2332].
• Embedded data action step: With this step, you can run another data action as a step in your data action.
For example, you can quickly reuse a common set of steps in several different data actions, or multiple
times within a single data action. (Multi actions are also available to run multiple data actions in sequence,
along with predictive and publishing steps.) To learn how to create an embedded data action step, refer to
Adding an Embedded Data Action Step [page 2335].
• Conversion step: For models with measures, conversion steps let you copy between measures while
applying currency conversion. To learn how to create a currency conversion step, refer to Convert
Currencies in a Data Action [page 2342].
Adding Parameters
You might want to quickly update a value or change a member in several different places throughout a data
action, or allow a user to set their own values when they run the data action. In this case, you can use
parameters for those values.
There's always a default parameter for the target version, but you can add others that represent either
numbers or dimension members. To learn how, see Add Parameters to Your Data Actions and Multi Actions
[page 2338].
When you have finished adding steps for the data action, you can set it up to run in a few different ways:
• Using a data action trigger in a story or analytic application: Run Data Actions, Multi Actions, and
Allocations [page 2260]
• Scheduling it in the Calendar: Schedule Data Actions in the Calendar [page 2265]
• Using a script object in an analytic application: Use Data Actions Technical Objects [page 1708]
• Running a multi action that contains the data action, using a trigger in a story or analytic application: Run
Data Actions, Multi Actions, and Allocations [page 2260]
From the file repository (Files), you can move, edit, copy, delete, show columns, or filter your data actions by
using the toolbar. Additionally, you can check on the status and details of your data actions by going to the Data
Action Monitor. For more information, see Monitor Data Actions [page 2344].
Related Information
About Script Formulas and Calculations in Advanced Formulas for Planning [page 2391]
Write Business Scenarios for Planning with Advanced Formulas Actions [page 2489]
Schedule Data Actions in the Calendar [page 2265]
Work with Planning Applications [page 1967]
In SAP Analytics Cloud, you can create a data action by specifying a source model, and then add steps to the
data action.
Prerequisites
To create a data action, you must have the appropriate permissions for the Data Action item. For example, the
modeler and admin roles contain these permissions.
Context
Follow these steps to create a data action from scratch. You can also copy an existing data action by opening
(Data Actions) from the side navigation, selecting a data action, and choosing (Copy the selected data
action).
Procedure
1. Select (Data Actions) from the side navigation to open the data actions start page, then select Data
Action in the Create New section.
When working on a planning model in the modeler, you can also create a data action based on it by
Most steps will use this model as the data source and target. For cross-model copy steps, this model will
be the target, and you can pick a different source model. You can also read and copy data from different
models in advanced formulas steps.
Note
You also have the option to change the default model after adding steps. If you change the default
model after steps have already been added, you might get some validation errors. The validation errors
will be marked and you can fix them manually.
• Undo or redo any changes by selecting (Undo) or (Redo) in the Operations toolbar.
• To reorder steps, drag them up or down in the left pane.
• To copy a step, select it in the left pane and choose (Duplicate Step).
• To delete a step, select it in the left pane and choose (Delete Step).
Results
You can debug your saved data action with tracing, to make sure it works as expected. For more information,
see Get to Know the Data Action Tracing [page 2500].
When your data action is ready, you can set it up to run in a few different ways:
• Using a data action trigger in a story or analytic application: Run Data Actions, Multi Actions, and
Allocations [page 2260]
• Scheduling it in the Calendar: Schedule Data Actions in the Calendar [page 2265]
• Using a script object in an analytic application: Use Data Actions Technical Objects [page 1708]
In SAP Analytics Cloud you can add steps to your data actions that can be carried out on a planning model.
In this section, you will learn how to create the following data action steps:
In SAP Analytics Cloud, copy steps allow you to copy data from one set of members to another, specifying
options for filters and aggregation.
Context
Copy steps are useful for tasks such as:
• Quickly filling out future data for the model by copying data from the current time period to several future
periods.
• Copying data for a product in one region to another product in multiple regions.
Note
When choosing members or measures for the copy step settings, you can use a parameter. Parameters
let you create prompts, or update several values in your data action from one place. Hover over or select
Parameter in the member selector to see which ones are available. To learn how to create parameters, see
Add Parameters to Your Data Actions and Multi Actions [page 2338].
Currency conversion is not supported for copy steps and all values are copied without currency conversion.
Procedure
Note
• The version dimension is only available in the Filters section of a copy step. You can filter it
to set a fixed source version, for example, if you want to copy data from the forecast version.
c. Continue adding filters on other structures as required. To remove a filter, select the icon beside it.
4. In the Aggregate To area, add any dimension where you want to override the default disaggregation
behavior.
For dimensions that are not included in the Aggregate To area, the copied data will match the distribution of
the source data by default. This behavior is similar to copying and pasting a cell with the Paste underlying
values option enabled. For more information, see Copying and Pasting Cell Values [page 2199].
For example, if you want to remove the distribution of data to different customers for privacy reasons, you
can aggregate to the unassigned member of the customer dimension. In this case, all of the copied data is
booked to the unassigned member.
a. Select Add Dimension and choose a dimension from the list.
Note
The dimensions that you add to this list cannot be added in the Filters or Copy Rule sections of the
step.
b. Choose a single leaf member for this dimension that will receive the value copied during this data
action step, or select Parameter and choose a parameter. Select OK.
c. Repeat these steps to add aggregation members for other dimensions as necessary.
5. Add copy rules to specify source and target members or measures for the copy step.
Depending on the Filter and Aggregate To settings, you may not need to add copy rules. For example, if you
want to copy the 2018 forecast to a target version, you only need to filter the step to the forecast version
and 2018. If you want to copy the 2018 forecast to 2018 and 2019 in the target version, a copy rule is
required.
a. Select Add Copy Rule.
b. For the new rule, choose a dimension or select Measures from the Measures/Dimension list.
(For data actions based on classic account models, this list is called Dimension.)
c. In the From column, select or start typing directly in the text field.
d. Choose a source member or measure from the list, or select Parameter and choose a parameter.
Select OK.
This source data, filtered according to your settings, will be copied to the targets that you specify.
You can select measures and accounts that are calculated or that use exception aggregation, but there
are some restrictions for these types of source values. See Copy Steps with Calculations and Exception
Aggregation [page 2337] for more information.
If your account dimension has multiple hierarchies, copy rules must use the same hierarchy for the
source member of the account dimension. You can use different hierarchies for the target members.
Note
• If you're copying across measures, keep in mind the value range and number of decimal places
for your source and target measures. If you're using a measure parameter for the source or
target, it's recommended to specify the data types where possible. For details, see Limits on
Value Ranges and Decimal Places for Measures [page 2168].
• For copy rules based on the date dimension, you can select one or more non-leaf members at
the same hierarchy level as the source member (for example, Q1 to Q2, or 2018 to 2018 and
2019).
For accounts that use exception aggregation, copying data along the exception aggregation dimension
is supported.
If there are any problems with the rule, a warning icon appears next to it. Hover over the icon to see
more information.
i. Continue to add copy rules as required, by selecting Add Copy Rule, or by selecting (Duplicate) next
to an existing copy rule.
6. To switch between appending or overwriting data, select next to Options and choose a setting from the
Write Mode list:
• Append (default): The copied data will be added to any existing data for the target members.
• Overwrite: Existing data for the target members will be overwritten by the copied data.
7. Choose an Auto-Generation Mode. The auto-generation mode determines how copy rules are applied to
target members based on time dimensions:
• Detect Automatically chooses the most suitable mode automatically.
• Day Granularity applies the copy rule based on the number of days between members, taking leap
years into account.
• Week Granularity applies the copy rule based on the number of days between members, but does not
take leap years into account. Weeks are always calculated as being seven days apart.
• Month/Year Granularity applies the copy rule based on the number of months between members.
8. Select Save Data Action.
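The Aggregate To behavior from step 4 can be sketched as summing the copied values into a single target member instead of preserving the source distribution. This is an illustrative sketch; the member IDs are made up.

```python
# Source data booked per customer for 2018.
source = {
    ("2018", "CustomerA"): 120.0,
    ("2018", "CustomerB"): 80.0,
}

UNASSIGNED = "#"  # hypothetical ID for the unassigned member

# Copy 2018 -> 2019, aggregating the customer dimension to the
# unassigned member: the per-customer split is not preserved, and
# all copied data is booked to a single member.
copied = {}
for (_year, _customer), value in source.items():
    key = ("2019", UNASSIGNED)
    copied[key] = copied.get(key, 0.0) + value

copied  # {('2019', '#'): 200.0}
```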
Results
The copy step is added to the data action. You can continue to add, manage, and reorder steps as required.
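The Month/Year Granularity mode from step 7 can be sketched as plain month arithmetic: the source-to-target offset of the copy rule is applied to every member. A simplified sketch, not the actual implementation:

```python
from datetime import date

def shift_month(member, offset_months):
    """Shift a month-granularity date member by a number of months."""
    total = member.year * 12 + (member.month - 1) + offset_months
    return date(total // 12, total % 12 + 1, 1)

# Copy rule "2018 -> 2019" at month granularity is a 12-month offset.
offset = 12
source_members = [date(2018, 1, 1), date(2018, 6, 1), date(2018, 12, 1)]
targets = [shift_month(m, offset) for m in source_members]
# Each 2018 month maps to the same month in 2019.
```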
With a cross-model copy step in SAP Analytics Cloud, you can copy data from one planning model to another.
You pick the model and version that contains the source data, add filters as necessary, and define how to copy
the data to the target model.
Context
For example, if you have detailed workforce data in a Headcount planning model and you want to add some of
the data to a Finance model for further analysis, you can use this feature to quickly copy the data.
Copying across models is easiest when the relevant dimensions are public dimensions shared by each model,
but it's not required. You can ignore source dimensions that aren't relevant to the target model, and set default
values for target dimensions that can't be mapped to an appropriate source.
If any problems occur while you're creating a cross-model copy step, they are identified by the icon. Hover
over this icon to see how to fix the problem.
Note
If data is booked on non-leaf members, you will get an error message. When you dismiss the error message,
any data booked on non-leaf members will be ignored in future executions. Be sure to check the results of
the data action and confirm that they are correct.
Before getting started on a cross-model copy step, you need to create a data action. For more information, see
Create a Data Action [page 2322].
Note
Cross-model copy steps can't copy data between different types of models. If you need to copy data
between a classic account model and a model with measures, use the LINK function in an advanced
formulas step instead. For details, see About Script Formulas and Calculations in Advanced Formulas for
Planning [page 2391].
When choosing members for the cross-model copy step settings, you can use a parameter instead of
dimension members. Parameters let you create prompts, or update several values from one place. Select
Parameter in the member selector to see which ones are available. To learn how to create parameters, see
Add Parameters to Your Data Actions and Multi Actions [page 2338].
Currency conversion is not supported for cross-model copy steps and all values are copied without
currency conversion.
Procedure
The Target Model is the default model for the data action; it can't be changed here.
4. In the Filter section, select the version dimension to pick the version of the source data that you want to
copy. You can also select Parameter to apply a parameter. This filter is mandatory.
The target version for the data action is set by the TargetVersion parameter.
5. If you want to add additional filters, select +Add Filter and choose a dimension. Then, select the members
whose data you want to include, or select Parameter and choose a parameter.
For example, to copy data for a single cost center in 2018, you would filter the Cost Center and date
dimensions.
When you create mappings between source and target members, you won't need to map the members
that are filtered out.
You can change the hierarchy of the dimension members in the filter. If the dimension has a parent-child
hierarchy, source members in copy rules based on that dimension will usually need to use the same
hierarchy used in the filter.
Note
You can filter based on a dynamic parameter. If the parameter has default values, you only need to map
those ones to finish creating your step. However, if you want to make sure that the data action will run
with any parameter values, you should complete the mapping.
6. In the Mapping area, set how the data will be mapped from the source model to the dimension members of
the target model.
For each of the target dimensions listed in this area, you need to either map a source dimension or select
a default value. You don't need to map source dimensions that aren't relevant for the target model. For
example, you might choose not to map Gender or Office Location from the Headcount model to the
Finance model.
• Set a default value for target dimensions when you don't have an appropriate source dimension to map
to them, or when you want to copy all the data to a single leaf member of the target dimension instead
of splitting it up according to one of the source dimensions.
For example, if you are mapping data from a model that doesn't use currency conversion to a model
that does use currency conversion, you can select the currency of the source model as the default
value for the Currency dimension of the target model.
To set a default value for a target dimension, remove any mapped dimension and choose Select default
value…, and then choose a single leaf member of the dimension, or apply a parameter.
• Map a source dimension to a target dimension when you want to distribute the data across different
members of the target dimension.
The account dimensions are automatically mapped together, and can't be unmapped. The date
dimensions used for planning are also mapped. If you want to map a leaf member of a date dimension
as a default value, or map a different date dimension, you can select (Delete) next to the source
dimension.
Tip
If you want to manually change the mapping, select No Auto-Creation and create copy rules. You
can also use auto-mapping but overwrite parts of the mapping by manually adding copy rules.
• No Auto-Creation: Copy rules are not generated automatically and you must create copy rules
manually.
• Identical Names: Source members are automatically mapped to target members if their names are
identical.
• Identical Names (Including Ancestors): Source members are mapped to target members if the
names of the source members, including the names of any of their ancestors, are identical to the
names of the target members.
• First day of time period: This option uses the first day of the source date member as a reference to
map to the target date member.
• When mapping date dimensions with different time granularities, the first day of the source
member will always be used as a point of reference.
• For example, this option could be used when copying from a source dimension with month-
level granularity to a target dimension with day-level granularity. If a source dimension month
starts on January 1st, 2021, this month will be mapped to the day member "January 1, 2021"
on the target dimension. This pattern will be repeated for all dimension members.
If you are mapping date dimensions with the same time settings and granularities, First day of time
period and Last day of time period will result in the same outcome. For example, if you are copying
across dimensions using months that don’t have week granularity, the data will just copy to the
corresponding month.
b. Choose the hierarchies for the dimensions you want to map together.
Note
Choosing Default Hierarchy uses the current default hierarchy, even if the default hierarchy has
been changed in the dimension settings in modeler.
d. Add copy rules by selecting either +Add Copy Rule, or selecting (Add Rule) next to an unmapped
source member in the list.
You may need to select and search for member names if the dimension contains many members.
You can also select to change the display settings and hierarchy shown for the unmapped source
members.
If the account dimension has multiple hierarchies, source members must use the same hierarchy as
the source context filter. If there isn't a filter on the account dimension, only the default hierarchy can
be used.
e. If necessary, select in the From column and choose a single source member. It doesn't need to be a
leaf member, and you can pick a member that was mapped automatically to copy its data to a different
target instead. You can also use a parameter to set the source member.
You can choose calculated accounts or accounts with exception aggregation as source members, but
the data action may take longer to execute. For fastest performance, filter out these accounts instead.
For more information, see Copy Steps with Calculations and Exception Aggregation [page 2337].
f. Select in the To column and choose one or more target leaf members, or use a parameter.
If you pick multiple targets, the value of the source member is copied to each target.
You can also aggregate data from multiple source members together in a single target member by
using the target member in multiple copy rules.
If you're using the Manual completion option, you must create copy rules that account for all the listed
source dimension members. Note that the list does not include members that you filtered out of the
cross-model copy step, so you can also change the filters to remove the remaining members.
You also don't need to complete the mapping for the date dimension. For example, if the source and
target models have different start or end dates, you can still run the data action without completing the
mapping manually. The unmatched time periods will be ignored.
j. When you have finished configuring the mapping, select Done to return to the cross-model copy step.
8. Continue mapping data to each target dimension. The icon appears on every target dimension that has
an incomplete mapping, and you can hover over this icon to see how to complete the mapping.
9. To switch between appending or overwriting data, select next to Options and choose a setting from the
Write Mode list:
• Append (default): The copied data will be added to any existing data for the target members.
• Overwrite: Existing data for the target members will be overwritten by the copied data.
10. Select Save Data Action.
Results
The copy step is added to the data action. You can continue to add, manage, and reorder steps as required.
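The Identical Names auto-mapping described in the procedure above can be sketched as matching members by exact name, leaving the rest for manual copy rules. The member names here are made up for illustration.

```python
def auto_map_identical_names(source_members, target_members):
    """Map each source member to the target member with the same name;
    unmatched source members must be handled by manual copy rules."""
    target_set = set(target_members)
    mapped = {name: name for name in source_members if name in target_set}
    unmapped = [name for name in source_members if name not in target_set]
    return mapped, unmapped

mapped, unmapped = auto_map_identical_names(
    ["Salaries", "Travel", "Bonus"],
    ["Salaries", "Travel", "Overhead"],
)
# mapped   -> {'Salaries': 'Salaries', 'Travel': 'Travel'}
# unmapped -> ['Bonus']
```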
Context
Allocation steps let you distribute values for members of a source dimension along members of a target
dimension, using driver values or direct assignment. For more information, see Learn About Allocations [page
2513].
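The driver-based distribution described above follows the standard allocation arithmetic: each target receives the source value in proportion to its driver. A minimal sketch, with illustrative names only (not an SAC API):

```python
# Conceptual sketch of a driver-based allocation (illustrative only).
def allocate(source_value, drivers):
    """Distribute source_value across targets in proportion to drivers."""
    total = sum(drivers.values())
    return {target: source_value * d / total for target, d in drivers.items()}

# Distribute 1000 of overhead to regions using headcount as the driver:
print(allocate(1000, {"North": 30, "South": 20}))
# → {'North': 600.0, 'South': 400.0}
```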
Note
If your allocation step has a large number of rules and columns, you might get a warning message.
To reduce the number of allocation rules and improve the performance of the allocation step, you can
Procedure
Note
5. Under Target Context, choose a Booking Account and a Write Mode. To learn more, see the "Booking values
to a specific account" section of Build an Advanced Allocation Process [page 2519].
• Booking account is not available for measure models.
• If you enable Overwrite Target, you will overwrite any existing values in the target cells. To learn more,
see the “Overwriting your target values” section of Build an Advanced Allocation Process [page 2519].
6. Under Allocation Rules, choose your Source, Driver, and Target dimensions, and the hierarchies you want to
use for each. Driver dimensions can be accounts or measures.
7. Using and , continue adding and deleting Source, Driver, and Target dimensions as desired.
Note
You always need at least one source, one driver, and one target dimension, so the delete option will clear the column, but not delete it. You will then need to add a new source, driver, or target dimension.
For driver dimensions, there are default dimensions for account models and measure models.
8. Start adding allocation rules. To add an allocation rule, select the first cell of the allocation rule table, and
continue adding allocation rules as necessary. You can:
• Type part of a member ID or description in the cell to see suggestions.
Results
The allocation step is added to the data action. You can continue to add, manage, and reorder steps as needed.
In SAP Analytics Cloud, you can add an allocation step to run structured legacy allocations as part of your data
action.
Prerequisites
Set up the base legacy allocation step that you want to use before adding it to a data action. To learn how, see
Set Up Your First Allocation Process [page 2516]. You can only add steps that are based on the default model
of your data action.
Note
Currently, you can only edit existing allocation steps that use legacy allocations; you can't create new ones.
You can also migrate existing ones so that they support detailed configuration in the data action designer.
Eventually allocation steps using legacy allocations will no longer be available and all allocations will need to
be created in the data action designer. To learn more about creating allocations in the data action designer,
see Creating an Allocation Step [page 2332].
Context
Allocation steps let you distribute values for members of a source dimension along members of a target
dimension, using driver values or direct assignment. For more information, see Learn About Allocations [page
2513].
(Unlike allocation processes, data actions don't offer the option to apply filters when you run them. You can
filter the allocation step here instead.)
Note
You can't filter the account dimension here, but you can apply an account filter to the base allocation
step. When you pick an allocation step with an account filter, the filter appears in this list.
Only steps that are based on the same model as the data source appear here.
Results
The allocation step is added to the data action. You can continue to add, manage, and reorder steps as
required.
Use an embedded data action step in SAP Analytics Cloud to run another data action as part of the one you’re
working on. Combining these steps with dynamic parameters lets you reuse data actions and set different
source or target members.
Prerequisites
Before carrying out these steps, set up the data action that you want to embed. Create dynamic parameters for
any numbers or dimension members that you want to change when embedding the data action.
An embedded data action step is a reference to another data action. The data action that you embed won’t be
changed as part of this process, and you can still edit it or run it on its own.
You might have a step that you need to run in several different data actions, or that you want to run several
times in a single data action. For example, you might create a data action that calculates subscription revenue
based on booking data, with a dynamic parameter for the product type.
Using embedded data action steps, you can reuse that data action multiple times in another data action. And
you can specify a different product type each time.
With this setup, you don’t need to copy and edit the revenue calculation step multiple times, and you can still
run all the calculations with a single data action.
However, you can’t create a loop where a data action ends up embedded within itself.
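The reuse pattern above is analogous to calling a parameterized function several times instead of copying its body. This Python analogy (all names are illustrative; nothing here is an SAC API) shows why a dynamic parameter avoids duplicated steps:

```python
# Analogy for embedded data actions: one parameterized calculation,
# reused with a different parameter value each time (illustrative only).
def subscription_revenue(bookings, product_type, margin=0.8):
    """The 'embedded data action': one calculation, parameterized."""
    return bookings[product_type] * margin

bookings = {"Cloud": 2000, "OnPremise": 1500}

# The 'container data action' runs the same step once per product type
# instead of copying and editing the calculation for each one:
total = sum(subscription_revenue(bookings, p) for p in ("Cloud", "OnPremise"))
print(total)  # → 2800.0
```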
Note
You can also nest multiple data actions as steps in a multi action. In particular, this is helpful if you need to
run data actions on different versions or models, and if you need to publish data in between different data
actions. For more information, refer to Automate a Workflow Using Multi Actions [page 2347].
Procedure
1. In the data action page, select (Add Embedded Data Action Step).
2. From the Data Action Name list, select the data action that you want to embed.
Only data actions based on the same default model will show up in this list.
3. If you need to, change the Name and Description for the step.
4. If the embedded data action has dynamic parameters, you can set values for them here.
This way, you can set a different fixed value each time you reuse the data action. Or you can assign another
parameter from the container data action.
• The embedded data action will run on the same version, so you don’t need to set the default Target
Version parameter.
• For member parameters, specify a fixed member or use a parameter within the container data action.
(You can select to pick a member or parameter, or just type the ID or description to search. A list of
results appears as you type.)
If you want to use a parameter, make sure that it has the correct settings. You can find these by
• If the parameter has a default value set in the embedded data action, you can select the icon to use
that value.
5. Check for validation messages and then save your data action.
Results
When you run the data action that contains this step, the embedded data action will run as part of it.
For copy steps and cross-model copy steps in SAP Analytics Cloud, source members for copy rules can
be accounts and measures that are calculated or that use exception aggregation. Take note of these
considerations, though.
• You can’t add one of these accounts as a source member for a copy rule if any of its ascendants or
descendants in the hierarchy are also source members.
• You can’t select source members from different hierarchies of the same dimension.
• For cross-model copy steps, if the dimension has parent-child hierarchies, you must choose source
members from the same hierarchy used to filter that dimension in the data action. If you haven’t filtered
the dimension, you must use the default hierarchy.
• The data action may take longer to execute.
If one of these problems occurs, a message will identify the cause of the problem.
Note
These restrictions don’t apply to accounts or measures with exception aggregation when you configure
copy rules so that the data does not need to be aggregated. To do this, create a copy rule for each
leaf member of the exception aggregation dimension, instead of using a default value or non-leaf source
members.
Related Information
Add parameters to your data actions and multi actions in SAP Analytics Cloud to prompt users for values while
running the operations, and to make them easier to update and reuse.
About Parameters
A parameter replaces a numeric value, or one or more dimension members or measures in your data action or
multi action.
You create a parameter within a data action or multi action, and then add it in any of the steps in that object.
When working with multi actions, you can also create a parameter from the designer panel by choosing
(Available Parameters) and selecting Create Parameter from the dropdown menu.
• Fixed values work well if you’re using the same value in many places throughout the steps and you want to
be able to change the value quickly.
• Dynamic values add flexibility. They let users set their own values when running the data action or multi
action.
Users can also set dynamic prompt values when creating a planning trigger, an automatic data action task,
a scripted data action object, an embedded data action, or a data action step in a multi action.
For data actions, prompt values can be set explicitly, or by retrieving values from the filters applied to the
data.
Cross-Model Parameters
If you are creating a parameter based on a public dimension in a multi action, you can choose to make the
parameter reusable in other multi action steps with models using the same public dimension. This type of
parameter is called a Cross-Model Parameter. Cross-model parameters can only be reused across models and
steps within the current multi action.
You can assign a value to the cross-model parameter by choosing Member Selection, Default Value, Story
Filter, or Input Control. The assigned value will be used in steps across models.
Note
Formulas defined in public account dimensions can be model-specific. For more information, see the Public
Account Dimensions section of Learn About Dimensions and Measures.
Cross-model parameters for public account dimensions are model-specific and can be interpreted
differently by different models. When a cross-model parameter for public account dimensions is used
in different data action steps, the formula defined in the model of the corresponding data action steps will
be used instead of the formula defined in the public account dimensions.
• Number: For example, you want to run an advanced formula calculation with different values for revenue growth rate or gross margin percent. See Add a Number Parameter [page 2339].
• Member: For example, you want to embed your data action multiple times in a multi action, and run it on different regions or different products each time. See Add a Member Parameter [page 2340].
• Measure (not supported for classic account models): For example, you want to let users set the target and source currencies in a currency conversion step. See Add a Measure Parameter [page 2341].
Parameters must meet the requirements of the fields where you want to use them. For example, if you want to
set up a parameter for the source member of a copy action, make sure you use the default model and the same
dimension as the copy action, and set the cardinality to Single.
If you want to see the requirements for a field, open the select member window for it and select or hover
over Parameter.
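The matching requirements described above amount to a compatibility check between a parameter's properties and the field's. A minimal sketch, assuming a simplified set of properties (the field and property names here are assumptions for illustration, not an SAC API):

```python
# Conceptual check that a parameter fits a field's requirements
# (property names are assumptions for this sketch, not an SAC API).
def parameter_fits(param, field):
    return (param["model"] == field["model"]
            and param["dimension"] == field["dimension"]
            and param["cardinality"] == field["cardinality"])

# A copy-step source field on the default model, requiring a single member:
field = {"model": "Default", "dimension": "Product", "cardinality": "Single"}
print(parameter_fits({"model": "Default", "dimension": "Product",
                      "cardinality": "Single"}, field))  # → True
print(parameter_fits({"model": "Default", "dimension": "Product",
                      "cardinality": "Any"}, field))     # → False
```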
In these cases, the container object will show the embedded object’s parameters, but the container object
doesn’t inherit the parameters. You still need to set values for these parameters in the container object. To do
this, specify fixed values or apply a compatible parameter from the container object.
Note that you can apply a more restrictive parameter in a container object. For example, a multi action
parameter with Level set to Leaf can be applied to a data action parameter with Level set to Any.
When working with embedded measure parameters, the decimal place setting for the embedded data action or
multi action must match the container object.
Procedure
1. Open a data action or multi action, then select (Show Parameters List).
2. Select Create Parameter.
3. Type an ID using letters, numbers, and underscores (_).
Procedure
1. Open a data action or multi action, then select (Show Parameters List).
2. Select Create Parameter.
3. Type an ID using letters, numbers, and underscores (_).
4. Select Member/Measure as the Parameter Type.
(For classic account models, and for multi actions, the option is called Member.)
5. Specify the properties.
Prerequisites
The parameter needs to be based on a model with measures. For a data action, this means that the default
model needs to be a model with measures.
Procedure
1. Open a data action or multi action, then select (Show Parameters List).
2. Select Create Parameter.
3. Type an ID using letters, numbers, and underscores (_).
4. Select Member/Measure as the Parameter Type.
5. Specify the properties.
Note
If you want to use the parameter as a source or target for copying across measures, keep in mind
that measures can have different data types. For example, if your data action copies a decimal
measure to a measure parameter with any data type, a user might select an integer measure
when running the data action. When using measure parameters for source or target members, it’s
• Decimal Places: Select the number of decimal places to restrict the parameter to measures that match
that number. This setting can help you make sure that a source measure isn’t copied to a target
measure with fewer decimal places.
• Allow "All Members": You can select this checkbox to allow the "All Members" selection for your
parameter. Note that it is only available when both cardinality and data type are set to Any.
6. Choose your Input settings.
• Input: To set a member value that can change, set Input to Dynamic. For example, you might set the
value in a prompt when running the multi action, in the trigger or task that runs it, or in a container
multi action. Otherwise, select Fixed.
• Default Member: Select or type the measure.
If you're setting up a dynamic parameter, the value is optional, but you can still specify it as a default.
If you set the cardinality to Any, you can select more than one measure.
• For dynamic parameters, type a name and description for prompts to identify the parameter in data
action triggers, data action tasks, and prompts.
7. Select Done.
Related Information
If you need to copy data across base measures with different currencies in SAP Analytics Cloud, you can add a
conversion step to your data action.
A conversion step has one or more conversion rules. Each rule copies data from a source measure, converts it,
and writes the values to a target measure.
For example, if your model only has values in a measure with local currencies, it's difficult to plan across
different regions. You can use a conversion step to copy it to a blank base measure with a fixed currency to do
planning. (Another option is creating a currency conversion measure.)
For details about currency conversion, see Work with Currencies in a Model with Measures [page 779].
Other data action step types don’t include currency conversion, so they can give unexpected results when they
copy data across different source and target currencies. See Plan with Currency Conversion [page 2239] for
details.
Prerequisites
• Your data action needs to be based on a planning model with currency conversion and measures.
(Conversion steps don’t apply to classic account models.)
• Create your data action before adding a currency conversion step. For details, see Create a Data Action
[page 2322].
Context
Note
When setting filters, source measures, and target measures, you can use parameters instead of fixed
members or measures. Parameters let you run the data action with different values. Hover over or select
Parameter in the member selector to see which ones are available. To learn more, see Add Parameters to
Your Data Actions and Multi Actions [page 2338].
Procedure
1. Open your data action and select (Add Currency Conversion Step).
2. Type a name and description for your step.
3. In the Context area, filter the source data as required.
a. Select Add Filter and choose the dimension to filter.
b. Select the members whose data you want to include, or select Parameters and choose a parameter.
Select OK.
Note
You can filter the version dimension to set a fixed source version, for example, if you want to copy data
from the forecast version to the target version. Otherwise, data will be copied within the target version
only. (The TargetVersion parameter sets this version.)
You can set fixed measures, or use parameters. Conversion measures and calculated measures aren't
available.
c. Specify the conversion settings: Date and Category.
The conversion step uses these settings to pick conversion rates from the model’s currency table. For
details, see Learn About Currency Conversion Tables [page 788].
• Booking Date: Convert each value with the rate that applies to the value's date.
• Booking Date + 1 or Booking Date - 1: Convert each value with a rate that is offset from the booking date by the specified period. For example, convert a value for March 5 with the rate for April 5.
• Fixed Date: Convert all values with rates from a specific date.
• Fixed category (for example, Actuals or Forecast): Use rates that apply to the category that you choose.
Your currency conversion table will need to have rates for the currencies, dates, categories, and rate
versions involved, as well as the rate types set in the model.
5. Set whether to overwrite values in the target measure, or to append the copied data to the existing target
values.
6. Save your data action.
Results
The conversion step is added to the data action. You can continue to add, manage, and reorder steps as
required.
After a data action is triggered, either in an SAP Analytics Cloud story or analytic application, via data action
tracing, from a calendar task, or as part of a multi action, you can monitor and manage it on the Data Action
Monitor page.
To access the Data Action Monitor page, you can either select the link in the message that appears when you
start to run a data action in background, or select (Data Actions) from the side navigation and choose the
Data Action Monitor tab.
The data action monitor provides an overview of your data action jobs based on your role. If you have Read
or Manage permission for Data Action Monitor Administration, you can view all data action jobs, including the
name, trigger source, and model data name for all data actions.
Note
In the data action monitor, the Triggered From column shows where the data action was originally triggered.
In some cases, like when a multi action with a data action step is triggered from the calendar, there can
be multiple Triggered From sources. When a source in the Triggered From column shows as Unavailable, it
means that you do not have permission to view the source.
Data action failed. When a data action fails, the private ver-
(Failed) sion reverts to what it was before the data action started.
All newly created edit versions are removed.
You can search for data actions in the list view, sort the columns, or filter the list view by data action job status,
triggering date, data action name, triggered from, triggered by, model, and version.
To view updated information about the data action jobs, choose the (Refresh) icon. To cancel all filters,
You can cancel pending and running data actions in the Data Action Monitor.
If you also have Manage permission for Data Action Monitor Administration and Execute permission for data
actions, you can cancel data actions that other users have triggered.
If you do not have those permissions, you can cancel only data actions that you have triggered.
To cancel pending or running data actions, select one or more data actions from the list and then select
(Cancel). You can also choose Select All to select all data actions from the list, except for data actions filtered
by filter or search inputs.
When a pending data action is canceled, the model data is not changed, and the next data action can be processed directly after the cancellation.
When a running data action is canceled, the model data is not changed by the data action. The canceled data
action must be cleaned up and it will take a few minutes before the next data action can be processed.
To view details about the time it took to complete a data action, select Show/Hide columns and select the
Pending Time and Execution Time columns.
• Pending Time: Pending time is the amount of time the data action remained in the queue before starting.
• Preparing to Execute: Preparing to execute is the time needed to validate the data action, including the
parameters and models used in the data action, and prepare the edit version.
• Execution Time: Execution time is the amount of time it took to run the data action.
• Total Duration: Total duration is the overall time it took to complete the data action, including the Pending
Time and Execution Time.
You can also check details about execution status and detailed step information by selecting a data action
process.
In the data action details panel, the header provides additional information about execution, such as pending
time and execution time.
The data action details panel also has tabs for Status, Steps, and Parameters.
• Status tab: The status tab shows the status of the data action and when it was completed.
• Steps tab: The steps tab allows you to view a specific data action step by selecting the step name link. It
also shows the number of database records affected by each step and the duration of each step. If the step
is no longer available, the step name will display as text and you can hover over it to read the tooltip for
more information.
• Parameters tab: The parameters tab shows the parameter values you chose when running the data action,
instead of the default values.
Because the expandable sections for the Embedded Data Actions can be viewed in the tree table, you don't
have to open them in separate columns.
If a step fails, an info icon is displayed next to the name. You can hover over the info icon to learn more details.
To quickly access the place where a data action was triggered, click the corresponding data action name, or in the Triggered From column, select the hyperlink of the corresponding story, analytic application, or calendar.
If no link is available, for example when a story was deleted or a data action task in the calendar was executed without a data action assignment, Unavailable is displayed.
Multi actions help you orchestrate a set of operations across multiple planning models and versions in SAP
Analytics Cloud. They link together a sequence of steps such as data actions, version management steps, and
predictive steps, which all run from a single planning trigger.
Multi actions can help users save time when they need to run multiple data actions in sequence, publish
versions, run predictive scenarios, or combine these operations.
Consider using multi actions if your planning process involves running data actions on multiple versions or
models, or publishing data between data actions.
In this video, you will discover how to use multi actions to link together data actions and other tasks to
orchestrate complicated planning operations.
Multi actions and data actions are similar in some ways. They both consist of a sequence of steps that you set
up in a separate designer. When they’re ready, users can run them in stories or analytic applications using a
planning trigger.
The focus is different, however: Data actions let you design a variety of calculations to manipulate planning
data; multi actions are all about combining existing operations in a way that simplifies a task for users.
To use multi actions, you also should understand how data actions, parameters, and embedded data action
steps work. Refer to Get Started with Data Actions for Planning [page 2318], Add Parameters to Your Data
Actions and Multi Actions [page 2338], and Adding an Embedded Data Action Step [page 2335] for details.
To determine whether to use a data action or a multi action, check this table:
• Run a data action multiple times on the same version, and then publish it afterwards. Example: You want to run depreciation calculations on several fixed assets for a single budget version. Use: Data Actions or Multi Actions.
• Run data actions on multiple different target versions. Example: You plan on multiple versions simultaneously and you want to populate all of them with initial data. Use: Multi Actions.
• Run data actions on multiple different target models. Example: You need to copy data from a strategic planning model to multiple models such as workforce, sales, and finance models. Use: Multi Actions.
• Run a data action, publish data, and then run another data action. Example: You want to copy initial data to a version, publish it, and then run an advanced formulas calculation on the version to prepare for planning. Use: Multi Actions.
• Run data actions to prepare data for predictive planning and refresh the predicted forecasts. Example: You want to refresh the input data for time series forecasting, refresh the predictive model, and the predicted forecasts in the story version. Use: Multi Actions.
Before creating a multi action, set up the models and data actions involved. For more information, refer to
Create a New Model [page 645] and Get Started with Data Actions for Planning [page 2318].
To work with multi actions, you’ll need a role with permissions for the multi actions object. These permissions
can be set in general, and individually for each multi action. For background information, refer to Permissions
[page 2844].
• Create: Lets you create multi actions. Roles: Admin, Modeler. License: SAP Analytics Cloud for Planning, professional edition.
• Read: Lets you open the multi actions start page and open multi actions in the designer. It’s also required to add a multi action to a planning trigger, and to run a multi action. Roles: Admin, Modeler, Planner, Reporter, Viewer. License: SAP Analytics Cloud for Planning, professional edition or standard edition.
• Update: Lets you edit existing multi actions. Roles: Admin, Modeler. License: SAP Analytics Cloud for Planning, professional edition.
• Delete: Lets you delete multi actions. Roles: Admin, Modeler. License: SAP Analytics Cloud for Planning, professional edition.
• Execute: Lets you run a multi action. Roles: Admin, Modeler, Planner, Reporter, Viewer. License: SAP Analytics Cloud for Planning, professional edition or standard edition.
If you do not have the permissions listed above, you can still add a planning trigger with a multi action to a
story, but you will not be able to execute it.
To carry out specific tasks for multi actions, you may also need permissions for the data actions involved, as
well as access to the model data that will be changed.
This can include permissions for the model as well as privileges for data access control, model privacy, and
data locking, if they are set up. For details, refer to Learn About Data Security in Your Model [page 799].
Note
You can read, update, delete, and execute a multi action that you’ve created without multi action
permissions, but you will need the required permissions to publish your changes.
• Add a data action step to a multi action: X X X X
• Add a version management step to a multi action: X X X
• Add a predictive step to a multi action***: X X X
• Set up a multi action trigger*: X
• Run a multi action with a data action step**: X X X X X
• Run a multi action with a version management step**: X X X X
• Run a multi action with a predictive step**: X X X X X X
*You can add a planning trigger to a story without Read permission for the multi action, but you will need this
permission to select the multi action in the multi action designer. Setting up multi action triggers also requires
permissions to edit a story or analytic application.
**If Publish target version automatically was selected on the multi action, a role with the maintain permission
for the planning model is required to successfully publish the changes. You can still run the multi action, but the
changes will not publish. For details about publishing permissions, see Create, Publish, and Manage Versions of
Planning Data [page 2170].
***You will also need predictive scenario permissions to add a predictive step to a multi action. For more
information, refer to Roles and Permissions for Predictive Scenarios [page 2044].
The permissions for multi actions depend on a combination of role-based permissions for multi actions and
folder-based permissions in the file repository.
In order to have certain permissions for the multi actions in a file repository folder, you need to have specific
corresponding permissions for multi actions that are assigned by the system administrator outside of the file
repository, on the Roles page. For more information, see Create Roles [page 2877] and Permissions [page
2844].
If you do not have the required role-based permissions for multi actions, then your permissions in the file
repository for multi actions are restricted.
Role-based permissions can combine with folder-based permissions in the file repository in many ways. Two examples are the multi action designer and the multi action user.
In the second example, if you are a multi action user with Read and Execute permissions assigned by the
system administrator for multi actions, you have Read permission in the file repository. This permissions
combination does not allow you to make changes to the multi actions, but you can trigger them. As a multi
action user, you also manage all permissions only from the file repository settings.
Note
There is no Execute permission for the folder-based permissions, so having Read permission functions the
same as having Read and Execute permission in the roles settings. Having Read permission is a prerequisite
for other permissions.
After the basic permissions of Read and Execute are given in the roles settings, all additional permissions can be managed directly in the file repository settings. This streamlines the process by making it unnecessary to manage the permissions in both places.
Note
Having Update and Delete permissions in the roles settings requires a Planning Professional license. If you
have a Planning Standard license, you can have only Read and Execute permissions for the roles-based
permissions.
You can access and manage multi actions from the file repository. To navigate to the file repository, choose
(Files) from the side navigation.
In the file repository, you can move, edit, copy, delete, show columns, or filter your multi actions by using the
toolbar. If you have ownership of a multi action, you can share it by choosing (Share). For more information,
see Manage Files and Folders [page 50] and Share Files or Folders [page 193].
You can also access your multi actions from the multi action start page or in the Recent Files section of your
home page.
In SAP Analytics Cloud, you can create a multi action by adding steps, rearranging the sequence of steps, and
adding parameters.
Prerequisites
You’ll need permissions to create and update multi actions, and read access to any data actions you want to
include. For details, refer to Automate a Workflow Using Multi Actions [page 2347].
Context
As planning modelers and admins, you can follow these steps to create a multi action.
Note
If there’s an error while running a step, the following steps won’t run. The previous steps will still take effect.
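The run-until-failure semantics in the note above can be sketched as a loop. This is a conceptual illustration of the described behavior only, not how SAP Analytics Cloud is implemented internally:

```python
# Sketch of multi action step semantics: steps run in order, and a
# failure stops the following steps without undoing the earlier ones
# (conceptual illustration only).
def run_multi_action(steps):
    completed = []
    for name, step in steps:
        try:
            step()
        except Exception:
            break               # the following steps won't run...
        completed.append(name)  # ...but previous steps still take effect
    return completed

steps = [("copy", lambda: None),
         ("publish", lambda: 1 / 0),  # simulated failure in step 2
         ("forecast", lambda: None)]
print(run_multi_action(steps))  # → ['copy']
```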
Tip
If you will be adding multiple data actions based on the same model and version, it is recommended to
embed these data actions into a new data action first. You can then add the new embedded data action to
your multi action to improve performance. For more information on embedded data actions, refer to Adding
an Embedded Data Action Step [page 2335].
Note
Additionally, in the Multi Action Settings, you can enable Allow External API Access. With this option enabled, external systems can consume multi action APIs to access various capabilities offered by multi actions, such as data import and planning. Note that this option is enabled by default. To access it, in the
Learning Tutorial
Click through the interactive tutorial for step-by-step instructions on how to create a multi action, add a multi action trigger, and run it (4:00 min); the tutorial is captioned in English only:
Procedure
1. From the side navigation, select (Multi Actions) and choose Multi Action in the Create New section.
If you want to base your multi action on one that already exists, you can select it in the multi actions list
You can apply a parameter from the multi action to each parameter from the data actions that it includes,
and to each target version for a version management step. You can also use fixed values or members.
If you’re not ready to set up parameters yet, you can start by adding steps instead.
For details about setting up parameters, refer to Add Parameters to Your Data Actions and Multi Actions
[page 2338].
4. Start adding your steps to the canvas.
• Adding a Data Action Step [page 2356]
The steps will run in order. You can also insert a step into the sequence by hovering over the preceding step
and selecting the + icon.
To quickly create a step that’s similar to an existing step, hover over the step and select
Duplicate .
5. If necessary, change and reorder your steps.
To change the order, hover over a step and select Move Up or Move Down .
To delete a step, select it and choose (Delete Step) from the toolbar.
6. Check the multi action for validation messages.
If a step has a warning or error icon, select it to see the problem in the panel.
If the Validation section of the toolbar shows a warning icon , hover over it to see where to fix the
problem.
(You can save the multi action while there are errors, but you won’t be able to run it.)
Results
You can now set up a planning trigger in a story or analytic application to run the multi action. For details, refer
to Run Data Actions, Multi Actions, and Allocations [page 2260].
You can also schedule your multi action in the Calendar. For more information, refer to Schedule Multi Actions
in the Calendar [page 2272].