
System Administration Guide

MicroStrategy ONE

September 2024
Copyright © 2024 by MicroStrategy Incorporated. All rights reserved.
Trademark Information
The following are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain other
countries:
Dossier, Enterprise Semantic Graph, Expert.Now, Hyper.Now, HyperIntelligence, HyperMobile, HyperVision, HyperWeb, Intelligent Enterprise,
MicroStrategy, MicroStrategy 2019, MicroStrategy 2020, MicroStrategy 2021, MicroStrategy AI, MicroStrategy Analyst Pass, MicroStrategy Architect,
MicroStrategy Architect Pass, MicroStrategy Auto, MicroStrategy Cloud, MicroStrategy Cloud Intelligence, MicroStrategy Command Manager,
MicroStrategy Communicator, MicroStrategy Consulting, MicroStrategy Desktop, MicroStrategy Developer, MicroStrategy Distribution Services,
MicroStrategy Education, MicroStrategy Embedded Intelligence, MicroStrategy Enterprise Manager, MicroStrategy Federated Analytics, MicroStrategy
Geospatial Services, MicroStrategy Identity, MicroStrategy Identity Manager, MicroStrategy Identity Server, MicroStrategy Insights, MicroStrategy
Integrity Manager, MicroStrategy Intelligence Server, MicroStrategy Library, MicroStrategy Mobile, MicroStrategy Narrowcast Server, MicroStrategy
ONE, MicroStrategy Object Manager, MicroStrategy Office, MicroStrategy OLAP Services, MicroStrategy Parallel Relational In-Memory Engine
(MicroStrategy PRIME), MicroStrategy R Integration, MicroStrategy Report Services, MicroStrategy SDK, MicroStrategy System Manager, MicroStrategy
Transaction Services, MicroStrategy Usher, MicroStrategy Web, MicroStrategy Workstation, MicroStrategy World, Usher, and Zero-Click Intelligence.
The following design marks are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain
other countries:

Other product and company names mentioned herein may be the trademarks of their respective owners.
Specifications subject to change without notice. MicroStrategy is not responsible for errors or omissions. MicroStrategy makes no warranties or
commitments concerning the availability of future products or versions that may be planned or under development.
CONTENTS
Best Practices for MicroStrategy System Administration 13

Understanding the MicroStrategy Architecture 14


Communicating with Databases 19
Managing Intelligence Server 28
Managing and Monitoring Projects 44
Processing Jobs 55
Using Automated Installation Techniques 77

2. Setting Up User Security 79

The MicroStrategy User Model 80


Controlling Access to Application Functionality 88
Controlling Access to Data 113

Merging Users or Groups 143


Security Checklist Before Deploying the System 147

3. Identifying Users: Authentication 151

Workflow: Changing Authentication Modes 152

Authentication Modes 153


Implement Standard Authentication 156
Implement Anonymous Authentication 158


Implement LDAP Authentication 160


Enable Single Sign-On Authentication 198
Enable Badge Authentication for Web and Mobile 606
How to Enable Seamless Login Between Web and Library 611
Set Default Authentication for Library Web Using the Library Server Config File 613
Implement Database Warehouse Authentication 614
Authentication Examples 617

Security Configurations in MicroStrategy 620

Secure Communication in MicroStrategy 622


Configuring Web, Mobile Server, and Web Services to Require SSL Access 630
Configuring Secure Communication for MicroStrategy Web, Mobile Server, and Developer 631
Configuring MicroStrategy Client Applications to Use an HTTPS URL 633
Enable HTTPS Connection Between the Refine Server and Web Server for Data Wrangling 634
Testing SSL Access 641
Certificate Files: Common Extensions and Conversions 642

Self-Signed Certificates: Creating a Certificate Authority for Development 643
Enforce Security Constraints for the Plugin Folder in MicroStrategy Web or Library 652
Configure a Redirect URL Whitelist in MicroStrategy Web and Library 654
Edit Password and Authentication Settings 657
Encryption Key Manager 661


Prevent a CSRF Attack 664


Disallow Custom HTML and JavaScript in Dashboards, Documents, Reports, and Bots 664
Configure Session Idle Timeouts 690
Enable Enforcing File Path Validation 693
Configure SameSite Cookies for Library 694
Configure SameSite Cookies for MicroStrategy Web and MicroStrategy Mobile 699
Enable App Transport Security Using MicroStrategy Mobile SDK or Library SDK 704
Enable Support for HTTP Strict Transport Security (HSTS) 704
Library Administration Control Panel 705
Enable Encryption for trustStore Secret Values 717
Upgrade Metadata Encryption to AES256-GCM 720

5. Manage Your Licenses 722

Manage and Verify Your Licenses 723


Audit and Update Licenses 728
Update CPU Affinity 737

6. Manage Your Projects 744

The Project Life Cycle 746


Implement the Recommended Life Cycle 751
Duplicate a Project 753
Update Projects with New Objects 758
Copy Objects Between Projects: Object Manager 762

Merge Projects to Synchronize Objects 809


Compare and Track Projects 818


Delete Unused Schema Objects: Managed Objects 822


MicroStrategy System Monitors 827
Monitor System Activity: Change Journaling 828
Monitor System Usage: Intelligence Server Statistics 838
Monitor Quick Search Indices 856
Additional Monitoring Tools 857

8. Tune Your System for the Best Performance 892

Tuning: Overview and Best Practices 893

Design System Architecture 1026


Manage System Resources 1032
Managing User Sessions 1062
Governing Requests 1072
Manage Job Execution 1080
Governing Results Delivery 1096
Tune Your System for In-Memory Datasets 1103
Design Reports 1104
Configure Intelligence Server and Projects 1107
Tuning Narrowcast Server and Intelligence Server 1129

9. Cluster Multiple MicroStrategy Servers 1131

Overview of Clustering 1132


The Clustered Architecture 1135
Prerequisites for Clustering Intelligence Servers 1143
Cluster Intelligence Servers 1146

Manage Your Clustered System 1168


Connect MicroStrategy Web to a Cluster 1194


10. Improving Response Time: Caching 1196

Result Caches 1203


Saving Report Results: History List 1240
Element Caches 1261
Object Caches 1276
Viewing Document Cache Hits 1284

11. Managing Intelligent Cubes 1286

Managing Intelligent Cubes: Intelligent Cube Monitor 1287

Governing Intelligent Cube Memory Usage, Loading, and Storage 1296
Supporting Connection Mappings in Intelligent Cubes 1315

12. Scheduling Jobs and Administrative Tasks 1317

Best Practices for Scheduling Jobs and Administrative Tasks 1318
Creating and Managing Schedules 1321
Scheduling Administrative Tasks 1328
Scheduling Reports and Documents: Subscriptions 1333
Configuring and Administering Distribution Services 1351

13. Administering MicroStrategy Web and Mobile 1379

Assigning Privileges for MicroStrategy Web 1380


Defining Project Defaults 1382
Using Additional Security Features for MicroStrategy Web and Mobile 1384

Integrating Narrowcast Server with MicroStrategy Web products 1393
Enabling Users to Install MicroStrategy Office from Web 1395


FAQs for Configuring and Tuning MicroStrategy Web Products 1397

14. Combining Administrative Tasks with System Manager 1401

Creating a Workflow 1402


Defining Processes 1447
Deploying a Workflow 1545

15. Automating Administrative Tasks with Command Manager 1553

Using Command Manager 1554


Executing a Command Manager Script 1560
Command Manager Script Syntax 1568
Using Command Manager from the Command Line 1570
Using Command Manager with OEM Software 1571

16. Verifying Reports and Documents with Integrity Manager 1572

What is an Integrity Test? 1574


Best Practices for Using Integrity Manager 1578
Creating an Integrity Test 1580

Executing an Integrity Test 1584


Viewing the Results of a Test 1598
List of Tags in the Integrity Test File 1606

1. SQL Generation and Data Processing: VLDB Properties 1622

Supporting Your System Configuration 1624


Accessing and Working with VLDB Properties 1625


Details for All VLDB Properties 1636
Default VLDB Settings for Specific Data Sources 1925

2. Creating a Multilingual Environment: Internationalization 1928

About Internationalization 1930


Best Practices for Implementing Internationalization 1933
Preparing a Project to Support Internationalization 1934

Providing Metadata Internationalization 1938


Providing Data Internationalization 1951
Making Translated Data Available to Users 1962
Achieving the Correct Language Display 1982
Maintaining Your Internationalized Environment 1988

3. List of Privileges 2004

Privileges for Predefined Security Roles 2005


Privileges for Out-Of-The-Box User Groups 2029
Privileges by License Type 2038

4. Multi-Tenant Environments: Object Name Personalization 2051

How a Tenant Language Differs from a Standard Language 2052


Granting User Access to Rename Objects and View Tenant Languages 2053
Renaming Metadata Objects 2054

Making Tenant-Specific Data Available to Users 2064


Maintaining Your Multi-Tenant Environment 2083


5. Intelligence Server Statistics Data Dictionary 2086

STG_CT_DEVICE_STATS 2087
STG_CT_EXEC_STATS 2090
STG_CT_MANIP_STATS 2100
STG_IS_CACHE_HIT_STATS 2107
STG_IS_CUBE_REP_STATS 2112
STG_IS_DOC_STEP_STATS 2117
STG_IS_DOCUMENT_STATS 2125

STG_IS_INBOX_ACT_STATS 2133
STG_IS_MESSAGE_STATS 2142
STG_IS_PERF_MON_STATS 2150
STG_IS_PR_ANS_STATS 2153
STG_IS_PROJ_SESS_STATS 2159
STG_IS_REP_COL_STATS 2162
STG_IS_REP_SEC_STATS 2165
STG_IS_REP_SQL_STATS 2168
STG_IS_REP_STEP_STATS 2177
STG_IS_REPORT_STATS 2188
STG_IS_SCHEDULE_STATS 2201

STG_IS_SESSION_STATS 2204
STG_MSI_STATS_PROP 2212

6. Enterprise Manager Data Dictionary 2213

Enterprise Manager Data Warehouse Tables 2214

Relationship Tables 2266


Enterprise Manager Metadata Tables 2267
Enterprise Manager Attributes and Metrics 2269


7. Command Manager Runtime 2860

Statement Reference Guide 2861


Executing a Script with Command Manager Runtime 2861
Syntax Reference Guide 2863

Configuring the Modeling Service 2864

Components and Architecture 2864


Modeling Service Configuration Properties 2865
Manually Configure the Modeling Service on a Linux Server 2869

Manually Configure the Modeling Service on a Windows Server 2870
Change the Communication Port for the Modeling Service 2871
Configure HTTPS Connection Between Library Server and Modeling Service 2872
Configure Modeling Service When Intelligence Server is TLS Enabled 2874
Troubleshooting the Modeling Service 2875

8. MicroStrategy Web Cookies 2889

Session Information 2890


Default User Name 2894

Project Information 2894


Current Language 2895
GUI Settings 2895
Personal Autostyle Information 2896

System Autostyle Information 2896


Connection Information 2896
Available Projects Information 2897


Global User Preferences 2897


Cached Preferences 2898
Preferences 2898
Methodology for Finding Trouble Spots 2907
Memory Depletion Troubleshooting 2908
Authentication Troubleshooting 2916
Fixing Inconsistencies in the Metadata 2924
Object Dependencies Troubleshooting 2930

Date/Time Functions Troubleshooting 2931


Performance Troubleshooting 2931
Project Performance 2931
Troubleshooting Data Imported from a File 2934
Subscription and Report Results Troubleshooting 2935
Drilled-To Report Returns No Data or Incorrect Data 2935
Internationalization Troubleshooting 2940
Troubleshooting Intelligence Server 2941
Logon Failure 2941
Modifying ODBC Error Messages 2944
Clustered Environments Troubleshooting 2946

Problems in a Clustered Environment 2946


Statistics Logging Troubleshooting 2949


Best Practices for MicroStrategy System Administration

MicroStrategy recommends the following best practices to keep your system running smoothly and efficiently:

• Use the project life cycle of development, testing, and production to fully test your reports, metrics, and other objects before releasing them to users.

• If you need to delegate administrative responsibilities among several people, create a user group. A user group (or "group" for short) is a collection of users and/or subgroups. Groups provide a convenient way to manage a large number of users and provide them with certain privileges. MicroStrategy comes with a number of predefined groups for various administration tasks. For more information, see About MicroStrategy User Groups.

• If you have multiple users working on a project with different functionality needs, use security roles. A security role is a collection of project-level privileges assigned to users. Security roles can be used in any project registered with Intelligence Server, and users can have different security roles in each project.

• Once Intelligence Server is up and running, you can adjust its governing settings to better suit your environment. For detailed information about these settings, see Chapter 8, Tune Your System for the Best Performance.

You can use Enterprise Manager to monitor various aspects of Intelligence Server's performance. Enterprise Manager is a MicroStrategy project that uses the Intelligence Server statistics database as its data warehouse. For information, see the Enterprise Manager Help.


• If you have multiple machines available to run Intelligence Server, you can cluster those machines to improve performance and reliability. See Chapter 9, Cluster Multiple MicroStrategy Servers.

• Create caches for commonly used reports and documents to reduce the database load and improve the system response time. See Chapter 10, Improving Response Time: Caching.

Creating reports based on Intelligent Cubes can also greatly speed up the processing time for reports. Intelligent Cubes are part of the OLAP Services features in Intelligence Server. See Chapter 11, Managing Intelligent Cubes.

• Schedule administrative tasks and reports to run during off-peak hours, so that they do not adversely affect system performance. See Chapter 12, Scheduling Jobs and Administrative Tasks.

You can automate the delivery of reports and documents to users with the Distribution Services add-on to Intelligence Server.

Understanding the MicroStrategy Architecture

A MicroStrategy system is built around a three-tier or four-tier structure.

• The first tier consists of two databases: the data warehouse, which contains the information that your users analyze, and the MicroStrategy metadata, which contains information about your MicroStrategy projects. For an introduction to these databases, see Storing Information: the Data Warehouse and Indexing your Data: MicroStrategy Metadata.

• The second tier consists of MicroStrategy Intelligence Server, which executes your reports, dashboards, and documents against the data warehouse. For an introduction to Intelligence Server, see Processing Your Data: Intelligence Server.


If MicroStrategy Developer users connect via a two-tier project source (also called a direct connection), they can access the data warehouse without Intelligence Server. For more information on two-tier project sources, see Tying it All Together: Projects and Project Sources.

• The third tier in this system is MicroStrategy Web or Mobile Server, which delivers the reports to a client. For an introduction to MicroStrategy Web, see Chapter 13, Administering MicroStrategy Web and Mobile.

• The last tier is the MicroStrategy Web client, Library client, Workstation client, or MicroStrategy Mobile app, which provides documents and reports to the users.

In a three-tier system, Developer is the last tier.

Storing Information: the Data Warehouse

The data warehouse is the foundation that your MicroStrategy system is built on. It stores all the information you and your users analyze with the MicroStrategy system. This information is usually loaded into the data warehouse using an extraction, transformation, and loading (ETL) process, with your online transaction processing (OLTP) system as the main source of original data. Projects in one metadata repository can use different data warehouses, and one project can have more than one data warehouse.

As a system administrator, you need to know which relational database management system (RDBMS) manages your data warehouse, how the MicroStrategy system accesses it (which machine it is on, and which ODBC driver and Data Source Name it uses to connect), and what should happen when the data warehouse is loaded (such as running scripts to invalidate certain caches in Intelligence Server).

Indexing your Data: MicroStrategy Metadata

MicroStrategy metadata is like a road map or an index to the information stored in your data warehouse. The MicroStrategy system uses the metadata to know where in the data warehouse to look for information. The metadata also stores other types of objects that allow you to access that information. These are discussed below.

The metadata resides in a database, the metadata repository, that is separate from your data warehouse. The repository is initially created when you run the MicroStrategy Configuration Wizard. All the metadata information is stored in database tables defined by MicroStrategy.

For more information about running the MicroStrategy Configuration Wizard, see the Installation and Configuration Help.

To help explain how the MicroStrategy system uses the metadata to do its work, imagine that a user runs a report totaling revenue for a certain region in a certain quarter. The metadata stores information about how the revenue metric is calculated, which rows and tables in the data warehouse to use for the region, and the most efficient way to retrieve the information.

The physical warehouse schema is a conceptual tool that helps you visualize where information is located in the data warehouse. It includes table and column information about where things are actually stored, as well as maps, such as lookup and relate tables, that help the system efficiently access that information. Anyone who creates schema objects in the MicroStrategy metadata must reference the physical warehouse schema. The schema itself is not stored in the metadata, but it is implicitly present in the definitions of the schema objects.

The role of the physical warehouse schema is further explained in the Basic Reporting Help.

In addition to the physical warehouse schema's implicit presence in the metadata, the following types of objects are stored in the metadata:


• Schema objects are objects created, usually by a project designer or architect, based on the logical and physical models. Facts, attributes, and hierarchies are examples of schema objects. These objects are developed in MicroStrategy Architect, which can be accessed from MicroStrategy Developer. The Project Design Help is devoted to explaining schema objects.

• Application objects are the objects that are necessary to run reports. These objects are generally created by a report designer and can include reports, report templates, filters, metrics, prompts, and so on. These objects are built in Developer or Command Manager. The Basic Reporting Help and Advanced Reporting Help are devoted to explaining application objects.

• Configuration objects are administrative and connectivity-related objects. They are managed in Developer (or Command Manager) by an administrator changing the Intelligence Server configuration or project configuration. Examples of configuration objects include users, groups, server definitions, and so on.
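The three object categories above can be sketched as a small taxonomy. This is purely illustrative; the category names and example objects mirror the text, and none of this is a MicroStrategy API.

```python
from enum import Enum

# Illustrative only: the three metadata object categories described above.
# The example objects mirror the text; none of this is a MicroStrategy API.
class ObjectCategory(Enum):
    SCHEMA = "schema"                # facts, attributes, hierarchies
    APPLICATION = "application"      # reports, filters, metrics, prompts
    CONFIGURATION = "configuration"  # users, groups, server definitions

EXAMPLE_OBJECTS = {
    "fact": ObjectCategory.SCHEMA,
    "attribute": ObjectCategory.SCHEMA,
    "hierarchy": ObjectCategory.SCHEMA,
    "report": ObjectCategory.APPLICATION,
    "metric": ObjectCategory.APPLICATION,
    "prompt": ObjectCategory.APPLICATION,
    "user": ObjectCategory.CONFIGURATION,
    "group": ObjectCategory.CONFIGURATION,
    "server definition": ObjectCategory.CONFIGURATION,
}

def category_of(object_type: str) -> ObjectCategory:
    """Look up the category of a metadata object type."""
    return EXAMPLE_OBJECTS[object_type]
```

In practice, knowing which category an object belongs to tells you who typically maintains it: schema objects belong to project designers, application objects to report designers, and configuration objects to administrators.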

Processing Your Data: Intelligence Server

Intelligence Server is the second tier in the MicroStrategy system. Intelligence Server must be running for users to get information from the data warehouse using MicroStrategy clients, such as MicroStrategy Web or Developer.

Intelligence Server is the heart of the MicroStrategy system. It executes reports stored in the metadata against the data warehouse and passes the results to users. For detailed information about Intelligence Server, including how to start and stop it, see Managing Intelligence Server, page 28.

A server definition is an instance of Intelligence Server and its configuration settings. Multiple server definitions can be stored in the metadata, but only one can run at a time on a machine. If you want multiple machines to point to the same metadata, you should cluster them. For more information about clustering, including instructions on how to cluster Intelligence Servers, see Chapter 9, Cluster Multiple MicroStrategy Servers.

Pointing multiple Intelligence Servers to the same metadata without clustering may cause metadata inconsistencies. This configuration is not supported, and MicroStrategy strongly recommends against it.

Tying it All Together: Projects and Project Sources

A MicroStrategy project is an object in which you define all the schema and application objects that together provide a flexible reporting environment. A project's metadata repository is established by the project source in which you construct the project, and its data warehouse is specified by associating the project with the appropriate database instance. For detailed information about projects, including instructions on how to create a project, see the Project Design Help.

You can manage your projects using the System Administration Monitor. For
details, see Managing and Monitoring Projects, page 44.

A project source is a container stored in Developer that defines how Developer accesses the metadata repository. Think of a project source as a pointer to one or more projects that are stored in a metadata repository.

Two types of project sources can be created, defined by the type of connection they represent:

• Server connection, or three-tier, which specifies the Intelligence Server to connect to.

• Direct connection, or two-tier, which bypasses Intelligence Server and allows Developer to connect directly to the MicroStrategy metadata and data warehouse. This type of connection is primarily for project design and testing. Because it bypasses Intelligence Server, important benefits such as caching and governing, which help protect the system from being overloaded, are not available.

Copyright © 2024 All Rights Reserved 18


Syst em Ad m in ist r at io n Gu id e

In older systems you may encounter a 6.x Project connection (also two-tier) that connects directly to a MicroStrategy version 6 project in read-only mode.

For more information on project sources, see the Installation and Configuration Help.

Communicating with Databases

Establishing communication between MicroStrategy and your databases or other data sources is an essential first step in configuring MicroStrategy products for reporting and analyzing data. This section explains how MicroStrategy communicates with various data sources and the steps required to set up this communication.

ODBC (Open Database Connectivity) is a standard database access method. ODBC enables a single application to access database data, regardless of the database management system (DBMS) that stores the data. A DBMS is a collection of programs that enables you to store, modify, and extract information from a database.

MicroStrategy Intelligence Server, when used in a three- or four-tier configuration, is the application that uses ODBC to access a DBMS. ODBC drivers translate Intelligence Server requests into commands that the DBMS understands. Intelligence Server connects to several databases (at a minimum, the data warehouse and the metadata repository) to do its work.

Users of MicroStrategy Web can also connect to data sources using database connections. A database connection supports connecting to data sources through DSNs, as well as through DSN-less connections, to import and integrate data into MicroStrategy.

This section describes the ODBC standard for connecting to databases and creating data source names (DSNs) for the ODBC drivers that are bundled with the MicroStrategy applications.
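To make the DSN versus DSN-less distinction concrete, the sketch below assembles both styles of ODBC connection string. The DSN name, driver name, and credentials are hypothetical, and the exact keywords an ODBC driver accepts vary by driver, so treat this as an illustration rather than MicroStrategy's implementation.

```python
# Illustration of DSN-based versus DSN-less connection strings. The DSN
# name, driver name, and credentials are hypothetical; real keyword names
# vary by ODBC driver.

def dsn_connection_string(dsn: str, uid: str, pwd: str) -> str:
    """A DSN-based string: driver details live in the named DSN."""
    return f"DSN={dsn};UID={uid};PWD={pwd}"

def dsnless_connection_string(driver: str, server: str, database: str,
                              uid: str, pwd: str) -> str:
    """A DSN-less string: all driver details are spelled out inline."""
    return (f"DRIVER={{{driver}}};SERVER={server};"
            f"DATABASE={database};UID={uid};PWD={pwd}")

print(dsn_connection_string("WarehouseDSN", "mstr_user", "secret"))
print(dsnless_connection_string("Some ODBC Driver", "dbhost01", "sales_dw",
                                "mstr_user", "secret"))
```

With a DSN, the driver, server, and database details are stored once in the data source definition and referenced by name; a DSN-less connection carries all of those details in the string itself.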


The diagram below illustrates the three-tier metadata and data warehouse
connectivity used in the MicroStrategy system.

The diagram shown above illustrates projects that connect to only one data source. However, MicroStrategy allows connections to multiple data sources in the following ways:

• With MicroStrategy MultiSource Option, a MicroStrategy project can connect to multiple relational data sources. For information on MultiSource Option, see the Project Design Help.

• You can integrate MDX cube sources such as SAP BW, Microsoft Analysis Services, and Hyperion Essbase with your MicroStrategy projects. For information on integrating these MDX cube sources into MicroStrategy, see the MDX Cube Reporting Help.

This section provides information and instructions on the following tasks:



Connecting to the MicroStrategy Metadata

MicroStrategy users need connectivity to the metadata so that they can access projects, create objects, and execute reports. Intelligence Server connects to the metadata by reading the server metadata connection registry when it starts. However, this connection is only one segment of the connectivity picture.

Consider these questions:

• How does a Developer user access the metadata?

• How does a user connect to Intelligence Server?

• Where is the connection information stored?

The diagram below illustrates three-tier metadata connectivity between the MicroStrategy metadata database (tier one), Intelligence Server (tier two), and Developer (tier three).

In a server (three-tier) environment, Developer metadata connectivity is established through the project source. For steps to create a project source, see the Installation and Configuration Help.


You can also create and edit a project source using the Project Source
Manager in Developer. When you use the Project Source Manager, you must
specify the Intelligence Server machine to which to connect. It is through
this connection that Developer users retrieve metadata information.

The Developer connection information is stored in the Developer machine registry.

Connecting to the Data Warehouse

Once you establish a connection to the metadata, you must create a connection to the data warehouse. This is generally performed during initial software installation and configuration, but it can also be established with the following procedures in Developer:

• Creating a database instance: A MicroStrategy object created in Developer that represents a connection to the data warehouse. A database instance specifies warehouse connection information, such as the data warehouse DSN, login ID and password, and other data warehouse-specific information. A database instance should have one default database connection with one default database login.

• Creating a database connection: Specifies the DSN and database login used to access the data warehouse. A database instance designates one database connection as the default connection for MicroStrategy users.

• Creating a database login: Specifies the user ID and password used to access the data warehouse. The database login overrides any login information stored in the DSN.

• User connection mapping: The process of mapping MicroStrategy users to database connections and database logins.

For procedures to connect to the data warehouse, see the Installation and
Configuration Help.

Caching Database Connections

Connecting to and disconnecting from databases incurs a small amount of overhead that may cause a small yet noticeable decrease in performance in high-concurrency systems. With connection caching, Intelligence Server is able to reuse database connections. This minimizes the overhead associated with repeatedly connecting to and disconnecting from databases.

Connections can exist in one of two states:

• Busy: connections that are actively submitting a query to a database

• Cached: connections that are still connected to a database but not actively submitting a query to a database

A cached connection is used for a job if the following criteria are satisfied:

• The connection string for the cached connection matches the connection string that will be used for the job.


• The driver mode (multiprocess versus multithreaded) for the cached connection matches the driver mode that will be used for the job.

Intelligence Server does not cache any connections that have pre- or post-SQL statements associated with them, because these options may drastically alter the state of the connection.
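The caching rules above can be sketched as a small pool keyed by connection string and driver mode. This is a minimal illustration of the stated behavior, not MicroStrategy's implementation; all class and method names are hypothetical.

```python
# Minimal sketch of the connection caching rules described above. Cache
# entries are keyed by (connection string, driver mode); connections that
# ran pre- or post-SQL statements are never returned to the cache.
# All names here are hypothetical.

class ConnectionCache:
    def __init__(self):
        self._cached = {}  # (conn_string, driver_mode) -> [idle connections]

    def acquire(self, conn_string, driver_mode):
        """Reuse a cached connection for a matching job, or open a new one."""
        key = (conn_string, driver_mode)
        idle = self._cached.get(key, [])
        # object() stands in for a real ODBC connect call
        conn = idle.pop() if idle else object()
        return conn, key

    def release(self, conn, key, has_pre_post_sql=False):
        """Return a connection to the cache, unless pre/post SQL ran on it."""
        if has_pre_post_sql:
            return  # pre/post SQL may alter connection state; drop it
        self._cached.setdefault(key, []).append(conn)

cache = ConnectionCache()
c1, k1 = cache.acquire("DSN=Warehouse;UID=mstr", "multithreaded")
cache.release(c1, k1)
c2, k2 = cache.acquire("DSN=Warehouse;UID=mstr", "multithreaded")
print(c1 is c2)  # the cached connection is reused for a matching job
```

A job with a different connection string or driver mode misses the cache and opens a fresh connection, matching the two criteria listed above.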

Monitoring Database Instance Connections

A warehouse database connection is initiated any time a user executes an uncached report or browses uncached elements. The Database Connection Monitor enables you to view the number of busy and cached connections to the data warehouse. You can also view the name of the database instance, the user who is using the connection, and the database login being used to connect to the database.

If a database connection is cached, the ODBC connection from Intelligence Server to the data warehouse remains open. However, if the data warehouse connection surpasses the connection time-out or lifetime governors (set in the Database Connections dialog box, on the Advanced tab), the ODBC connection closes and no longer displays in the Database Connection Monitor.

To View the Current Database Connections

1. In Developer, log in to a project source. You must log in as a user with the Monitor Database Connections privilege.

2. Expand Administration, then expand System Monitors, and then select Database Connections. The database connection information displays on the right-hand side.

To Delete a Database Connection

In the Database Connection Monitor, right-click the connection and select
Disconnect.


Benefiting from Centralized Database Access Control


All database connectivity is handled by Intelligence Server, which provides
centralized control of database access. The advantages of centralized
control include:

l Connectionless client—All connections to databases in the system are
made through Intelligence Server. This means that only the Intelligence
Server machine needs to have database connectivity. It also eliminates
the need to rely on identically configured connections on client and server
computers. This makes it easy to set up, deploy, and manage large
systems.

l Connection caching—Connecting to and disconnecting from databases
incurs a small amount of overhead that may cause a small, yet noticeable,
decrease in performance in high-concurrency systems. With connection
caching, Intelligence Server is able to reuse database connections. This
minimizes the overhead associated with repeatedly connecting to and
disconnecting from databases.

l Workload governing—Because only Intelligence Server connects to
databases, it can make sure that no one database becomes overloaded
with user requests. This is especially important for the data warehouse.

l User connection mapping—Intelligence Server can map MicroStrategy
users and user groups to data warehouse login IDs. This allows multiple
users to access the database using a single database login or different
database logins.

l Ease of administration/monitoring—Because all database connectivity is
handled by Intelligence Server, keeping track of all connections to all
databases in the system is easy.

l Prioritized access to databases—You can set access priority by user,
project, estimated job cost, or any combination of these.


l Multiprocess execution—The ability to run in multiprocess mode means
that if one process fails, such as a lost or hung database access thread,
the others are not affected.

l Database optimizations—Using VLDB properties, Intelligence Server is
able to take advantage of the unique performance optimizations that
different database servers offer.

Updating VLDB Properties for ODBC Connections


VLDB properties allow Intelligence Server to take advantage of the unique
optimizations that different databases offer. Depending on the database
type, these properties can affect how Intelligence Server handles things like:

l Join options, such as the star join and full outer join

l Metric calculation options, such as when to check for NULLs and zeros

l Pre- and post-SQL statements

l Query optimizations, such as sub-queries and driving tables

l Table types, such as temporary tables or derived tables

For more information about all the VLDB properties, see SQL Generation
and Data Processing: VLDB Properties.

Upgrading Your Database Type Properties


Default VLDB properties are set according to the database type specified in
the database instance. MicroStrategy periodically updates the default
settings as database vendors add new functionality.

When you create the metadata for a MicroStrategy project, the database-
specific information is loaded from a file supplied by MicroStrategy (called
Database.pds). If you get a new release from MicroStrategy, the metadata
is automatically upgraded using the Database.pds file with the metadata
update process. The Administrator is the only user who can upgrade the
metadata. To do so, click Yes when prompted to update the metadata.


This happens when you connect to an existing project after installing a new
MicroStrategy release.

The MicroStrategy system cannot detect when you upgrade or change the
database used to store the MicroStrategy metadata or your data warehouse.
If you upgrade or change the database that is used to store the metadata or
data warehouse, you can manually update the database type to apply the
default properties for the new database type.

When you update the database type information, this process:

l Loads newly supported database types. For example, properties for the
newest database servers that were recently added.

l Loads updated properties for existing database types that are still
supported.

l Keeps properties for existing database types that are no longer supported.
If there were no updates for an existing database type, but the properties
for it have been removed from the Database.pds file, the process does
not remove them from your metadata.

In some cases, MicroStrategy no longer updates certain DBMS objects as
newer versions are released. These are not normally removed. However, in
the case of Oracle 8i R2 and Oracle 8i R3, the DBMS objects were merged
into "Oracle 8i R2/R3" for both Standard and Enterprise editions because
Oracle 8i R3 is no longer being updated. You may need to select the merged
version as part of your database instance if you are using a version of
Oracle 8i. This will become apparent if date/time functions stop working,
particularly in Enterprise Manager.

For more information about VLDB properties, see SQL Generation and Data
Processing: VLDB Properties.

You may need to manually upgrade the database types if you chose not to
run the update metadata process after installing a new release.


To Manually Upgrade the Database Type Properties

1. In the Database Instance editor, click the General tab.

2. Select Upgrade.

The Readme lists all DBMSs that are supported or certified for use with
MicroStrategy.

Managing Intelligence Server


This section introduces you to basic Intelligence Server operation, including
starting and stopping Intelligence Server and running it as a service or as an
application.

You can improve your system and database performance by adjusting
various Intelligence Server governing settings to fit your system parameters
and your reporting needs. For detailed information about these settings, see
Chapter 8, Tune Your System for the Best Performance.

What Happens When Intelligence Server Starts?


Once a server definition is defined and selected for Intelligence Server using
the Configuration Wizard, the metadata connection information and server
definition name are saved in the machine's registry. When Intelligence
Server starts, it reads this information to identify the metadata to which it
will connect.

When Intelligence Server starts, it does the following:

l Initializes internal processing units

l Reads metadata connection information and server definition name from
the machine registry and connects to the specified metadata database

l Loads configuration and schema information for each loaded project


l Loads existing report cache files from automatic backup files into memory
for each loaded project (up to the specified maximum RAM setting)

This occurs only if report caching is enabled and the Load caches on
startup feature is enabled.

l Loads schedules

l Loads MDX cube schemas

You can set Intelligence Server to load MDX cube schemas when it starts,
rather than loading MDX cube schemas upon running an MDX cube
report. For more details on this and steps to load MDX cube schemas
when Intelligence Server starts, see the Configuring and Connecting
Intelligence Server section of the Installation and Configuration Help.

If a system or power failure occurs, Intelligence Server cannot capture its
current state. The next time the server is started, it loads the state
information, caches, and History Lists that were saved in the last automatic
backup. (The automatic backup frequency is set using the Intelligence
Server Configuration Editor.) The server does not re-execute any job that
was running until the person requesting the job logs in again.

What Happens When Intelligence Server Stops?


When you initiate an Intelligence Server shutdown, it:

l Writes cache and History List information to backup files

l Cancels currently executing jobs

The user who submitted a canceled job sees a message in the History List
indicating that there was an error. The user must resubmit the job.

l Closes database connections

l Logs out connected users from the system


l Removes itself from the cluster (if it was in a cluster)

It does not rejoin the cluster automatically when restarted.

As noted earlier, if a system or power failure occurs, these actions cannot be
performed. Instead, Intelligence Server recovers its state from the latest
automatic backup.

Running Intelligence Server as an Application or a Service


Intelligence Server can be started as a Windows service or as an
application. If you run Intelligence Server as a service, you can start and
stop it from a remote machine with Developer or by logging into the
Intelligence Server machine remotely. In addition, you can configure the
service to start automatically when the machine on which it is installed
starts. For more information about running Intelligence Server as a service,
see Starting and Stopping Intelligence Server as a Service, page 31.

On rare occasions you may need to run Intelligence Server as an
application. This includes occasions when you need precise control over
when Intelligence Server stops and starts or when you need to change
certain advanced tuning settings that are not available when Intelligence
Server is running as a service. For more information about running
Intelligence Server as an application, see Starting Intelligence Server as an
Application, page 37.

Registering and Unregistering Intelligence Server as a UNIX Service

In UNIX, when you configure Intelligence Server you must specify whether it
starts as an application or as a service. If you want to start Intelligence Server
as a service, you must register it as a service with the system. In addition, in
UNIX, if you want to start Intelligence Server as a service after having
started it as an application, you must register it as a service.


To register or unregister Intelligence Server as a service in UNIX, you must
be logged in to the Intelligence Server machine with root privileges.

You can register Intelligence Server as a service in two ways:

l From the Configuration Wizard: on the Specify a Port Number page,
ensure that the Register Intelligence Server as a Service check box is
selected.

l From the command line: in ~/MicroStrategy/bin enter:

mstrctl -s IntelligenceServer rs

If you want to start Intelligence Server as an application after having
registered it as a service, you need to unregister it. Unregistering the
service can be done only from the command line, in
~/MicroStrategy/bin. The syntax to unregister the service is:

mstrctl -s IntelligenceServer us

Starting and Stopping Intelligence Server as a Service


Once the service is started, it is designed to run constantly, even after the
user who started it logs off the system. However, you may need to stop and
restart it for these reasons:

l Routine maintenance on the Intelligence Server machine

l Changes to Intelligence Server configuration options that cannot be
changed while Intelligence Server is running

l Potential power outages due to storms or planned building maintenance

You can start and stop Intelligence Server manually as a service using any
of the following methods:

l MicroStrategy Service Manager is a management application that can run
in the background on the Intelligence Server machine. It is often the most
convenient way to start and stop Intelligence Server. For instructions, see
Service Manager, page 32.

l If you are already using Developer, you can start and stop
Intelligence Server from within Developer. For instructions, see
Developer, page 35.

l You can start and stop Intelligence Server as part of a Command Manager
script. For details, see Command Manager, page 36.

l Finally, you can start and stop Intelligence Server from the command line
using MicroStrategy Server Control Utility. For instructions, see Command
Line, page 36.

l You must have the Configuration access permission for the server definition
object. For information about object permissions in MicroStrategy, see
Controlling Access to Objects: Permissions, page 89. For a list of the
permission groupings for server definition objects, see Controlling Access to
Objects: Permissions, page 89.

l To remotely start and stop the Intelligence Server service in Windows, you
must be logged in to the remote machine as a Windows user with
administrative privileges.

Service Manager

Service Manager is a management tool installed with Intelligence Server
that enables you to start and stop Intelligence Server and choose a startup
option. Service Manager allows you to start, stop, and manage the following
services:

l MicroStrategy Intelligence Server

l MicroStrategy Listener

l MicroStrategy Distribution Manager

l MicroStrategy Execution Engine

l MicroStrategy Enterprise Manager Data Loader


l MicroStrategy Collaboration Service

l MicroStrategy PDF Exporter

For instructions on how to use Service Manager, click Help from within
Service Manager.

Service Manager requires that port 8888 be open. If this port is not open,
contact your network administrator.
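As a quick local sanity check on UNIX or Linux, you can see whether another process is already listening on port 8888 before starting Service Manager. The ss utility used here is a generic system tool, not a MicroStrategy utility, and this check does not verify firewall rules.

```shell
# Check whether port 8888 is already claimed by a local listener.
# (Firewall rules may still block the port even if it is free locally.)
if ss -ltn 2>/dev/null | grep -q ':8888'; then
  echo "port 8888 is already in use"
else
  echo "port 8888 is free locally"
fi
```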

To Open MicroStrategy Service Manager in Windows

1. In the system tray of the Windows task bar, double-click the
MicroStrategy Service Manager icon.

2. If the icon is not present in the system tray, then from the Windows
Start menu, point to All Programs, then MicroStrategy Tools, then
select Service Manager.

To Open MicroStrategy Service Manager in UNIX

In UNIX, Service Manager requires an X-Windows environment.

1. Browse to the folder specified as the home directory during
MicroStrategy installation (the default is ~/MicroStrategy), then
browse to /bin.

2. Type ./mstrsvcmgr and press Enter.


Using the Listener/Restarter to Start Intelligence Server

You can configure Intelligence Server to start automatically when the
Intelligence Server machine starts. You can also configure the Restarter to
restart the Intelligence Server service automatically if it fails, but the
machine on which it is installed is still running. To do this, you must have the
MicroStrategy Listener service running.

To Start a MicroStrategy Service Automatically When the Machine Restarts

1. From the Windows Start menu, point to All Programs, then
MicroStrategy Tools, then select Service Manager.

2. In the Server drop-down list, select the name of the machine on which
the service is installed.

3. In the Service drop-down list, select the service.


4. Click Options.

5. Select Automatic as the Startup Type option.

6. Click OK.

You can also set this using the Services option in the Microsoft
Windows Control Panel.

To Start Intelligence Server Service Automatically when it Fails Unexpectedly

The MicroStrategy Listener service must be running for the Re-starter
feature to work.

1. From the Windows Start menu, point to All Programs, then
MicroStrategy Tools, then select Service Manager.

2. In the Server drop-down list, select the machine on which the
Intelligence Server service is installed.

3. In the Service drop-down list, select MicroStrategy Intelligence
Server.

4. Click Options.

5. On the Intelligence Server Options tab, select the Enabled check box
for the Re-starter Option.

Developer

You can start and stop a local Intelligence Server from Developer. You
cannot start or stop a remote Intelligence Server from Developer; you must
use one of the other methods to start or stop a remote Intelligence Server.


To Start or Stop Intelligence Server Using Developer

1. In Developer, in the Folder List, right-click the Administration icon.

2. Choose Start Server to start it or Stop Server to stop it.

Command Manager

Command Manager is a script-based tool that enables you to perform
various administrative and maintenance tasks with reusable scripts. You can
start and stop Intelligence Server using Command Manager.

For the Command Manager syntax for starting and stopping Intelligence
Server, see the Command Manager Help (press F1 from within Command
Manager). For a more general introduction to MicroStrategy Command
Manager, see Chapter 15, Automating Administrative Tasks with Command
Manager.

Command Line

You can start and stop Intelligence Server from a command prompt, using
the MicroStrategy Server Control Utility. This utility is invoked by the
command mstrctl. By default the utility is in C:\Program Files
(x86)\Common Files\MicroStrategy\ in Windows, and in
~/MicroStrategy/bin in UNIX.

The syntax to start the service is:

mstrctl -s IntelligenceServer start --service

The syntax to stop the service is:

mstrctl -s IntelligenceServer stop

For detailed instructions on how to use the Server Control Utility, see
Managing MicroStrategy Services from Command Line Using Server Control
Utility, page 39.


Windows Services Window

You can start and stop Intelligence Server and choose a startup option using
the Windows Services window.

To Start and Stop Intelligence Server Using the Windows Services Window

1. On the Windows Start menu, point to Settings, then choose Control
Panel.

2. Double-click Administrative Tools, and then double-click Services.

3. From the Services list, select MicroStrategy Intelligence Server.

4. You can do any of the following:

l To start the service, click Start.

l To stop the service, click Stop.

l To change the startup type, select a startup option from the drop-down
list.

l Automatic means that the service starts when the computer starts.

l Manual means that you must start the service manually.

l Disabled means that you cannot start the service until you change
the startup type to one of the other types.

5. Click OK.

Starting Intelligence Server as an Application


While the need to do so is rare, you can start Intelligence Server as an
application. This may be necessary if you must administer Intelligence
Server on the machine on which it is installed and Developer is not installed
on that machine.


Some advanced tuning settings are only available when starting Intelligence
Server as an application. If you change these settings, they are applied the
next time Intelligence Server is started as a service.

MicroStrategy recommends that you not change these settings unless
requested to do so by a MicroStrategy Technical Support associate.

There are some limitations to running Intelligence Server as an application:

l The user who starts Intelligence Server as an application must remain
logged on to the machine for Intelligence Server to keep running. When
the user logs off, Intelligence Server stops.

l If Intelligence Server is started as an application, you cannot administer it
remotely. You can administer it only by logging in to the Intelligence
Server machine.

l The application does not automatically restart if it fails.

In UNIX, if Intelligence Server has previously been configured to run as a
service, you must unregister it as a service before you can run it as an
application. For instructions on unregistering Intelligence Server as a
service, see Registering and Unregistering Intelligence Server as a UNIX
Service, page 30.

The default path for the Intelligence Server application executable is
C:\Program Files (x86)\MicroStrategy\Intelligence Server\MSTRSvr.exe in
Windows, and ~/MicroStrategy/bin in UNIX.

Executing this file from the command line displays an administration menu
in Windows, and a similar menu in UNIX.


To use these options, type the corresponding letter on the command line and
press Enter. For example, to monitor users, type U and press Enter. The
information is displayed.

Managing MicroStrategy Services from Command Line Using Server Control Utility

MicroStrategy Server Control Utility (mstrctl) enables you to create and
manage Intelligence Server server instances from the command line. A
server instance is an Intelligence Server that is using a particular server
definition. For more information about server definitions, see Processing
Your Data: Intelligence Server.

Server Control Utility can also be used to start, stop, and restart other
MicroStrategy services—such as the Listener, Distribution Manager,
Execution Engine, or Enterprise Manager Data Loader services—and to view
and set configuration information for those services.

The following table lists the commands that you can perform with the Server
Control Utility. The syntax for using the Server Control Utility commands is:

mstrctl -m machinename [-l login] -s servicename command [instancename] [(> | <) filename.xml]

Where:


l machinename is the name of the machine hosting the server instance or
service. If this parameter is omitted, the service is assumed to be hosted
on the local machine.

l login is the login for the machine hosting the server instance or service,
and is required if you are not logged into that machine. You are prompted
for a password.

l servicename is the name of the service, such as IntelligenceServer or
EMService.

To retrieve a list of services on a machine, use the command mstrctl -m
machinename ls.

l command is one of the commands from the list below.

l instancename is the name of a server instance, where required. If a
name is not specified, the command uses the default instance name.

l filename is the name of the file to read from or write to.
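As an illustration of the general syntax, the following sketch composes a remote status command from its parts. The machine name and login shown are placeholders, not values from this guide, and the composed command is only printed here, not executed.

```shell
machine="bi-server01"        # placeholder host name
login="mstradmin"            # placeholder login on that host
service="IntelligenceServer"

# Compose: mstrctl -m machinename [-l login] -s servicename command
cmd="mstrctl -m $machine -l $login -s $service gs"
echo "$cmd"
# prints: mstrctl -m bi-server01 -l mstradmin -s IntelligenceServer gs
```

Omitting -m runs the command against the local machine, and omitting -l is valid when you are already logged in to that machine.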

Get information about the Server Control Utility

l List all commands for the Server Control Utility: -h (or --help). This
command does not require a machine name, login, or service name.

l Display the version number of the Server Control Utility: -V (or
--version). This command does not require a machine name, login, or
service name.

Get information about the MicroStrategy network

l List machines that the Server Control Utility can see and affect: lm (or
list-machines). This command does not require a machine name, login, or
service name.

l List the MicroStrategy services available on a machine: ls (or
list-servers). This command does not require a service name.

l List the ODBC DSNs available on a machine: lod (or list-odbc-dsn). This
command does not require a service name.

Configure a service

l Display the configuration information for a service, in XML format: gsvc
instancename [> filename.xml] (or get-service-configuration
instancename [> filename.xml]). You can optionally specify a file to save
the configuration properties to. For more information, see Using Files to
Store Output and Provide Input, page 43.

l Specify the configuration information for a service, in XML format: ssvc
instancename [< filename.xml] (or set-service-configuration
instancename [< filename.xml]). You can optionally specify a file to read
the configuration properties from.

Configure a server

l Display the configuration properties of a server, in XML format: gsc [>
filename.xml] (or get-server-configuration [> filename.xml]). You can
optionally specify a file to save the configuration properties to. For more
information, see Using Files to Store Output and Provide Input, page 43.

l Specify the configuration properties of a server, in XML format: ssc [<
filename.xml] (or set-server-configuration [< filename.xml]). You can
optionally specify a file to read the configuration properties from.

Configure a server instance

l Display the configuration information for a server instance, in XML
format: gsic instancename [> filename.xml] (or
get-server-instance-configuration instancename [> filename.xml]). You
can optionally specify a file to save the configuration properties to. For
more information, see Using Files to Store Output and Provide Input,
page 43.

l Specify the configuration information for a server instance, in XML
format: ssic instancename [< filename.xml] (or
set-server-instance-configuration instancename [< filename.xml]). You
can optionally specify a file to read the configuration properties from.

Manage server instances

l Display the default instance for a service: gdi (or get-default-instance)

l Set an instance of a service as the default instance: sdi instancename
(or set-default-instance instancename)

l Create a new server instance: ci instancename (or create-instance
instancename)

l Create a copy of a server instance, specifying the name for the new
instance as newinstancename: cpi instancename newinstancename (or
copy-instance instancename newinstancename)

l Delete a server instance: di instancename (or delete-instance
instancename)

l Register a server instance as a service: rs instancename (or
register-service instancename)

l Unregister a registered server instance as a service: us instancename
(or unregister-service instancename)

l Display the license information for a service instance: gl instancename
(or get-license instancename)

l Display the status information for a server instance: gs instancename
(or get-status instancename)

Start or stop a server instance

l Start a server instance as a service: start --service instancename

l Start a server instance as an application: start --interactive
instancename. For more information, see Running Intelligence Server as
an Application or a Service, page 30.

l Stop a server instance that has been started as a service: stop
instancename

l Pause a server instance that has been started as a service: pause
instancename

l Resume a server instance that has been started as a service and paused:
resume instancename

l Terminate a server instance that has been started as a service: term
instancename (or terminate instancename)

Using Files to Store Output and Provide Input


Certain Server Control Utility commands involve XML definitions. The
commands to display a server configuration, a service configuration, and a
server instance configuration all output an XML definition. The commands to
modify a server configuration, a service configuration, and a server instance
configuration all require an XML definition as input.


It is difficult and time-consuming to type a complete server, service, or
server instance configuration from the command line. An easier way to
configure them is to output the current configuration to a file, modify the file
with a text editor, and then use the file as input to a command to modify the
configuration.

Configuring Intelligence Server with XML files requires extensive knowledge
of the various parameters and values used to define Intelligence Server
configurations. Providing an incorrect XML definition to configure
Intelligence Server can cause errors and unexpected functionality.

For example, the following command saves the default server instance
configuration to an XML file:

mstrctl -s IntelligenceServer gsic > filename.xml

The server instance configuration is saved in the file filename.xml, in the
current directory.

The following command modifies the default server instance configuration by
reading input from an XML file:

mstrctl -s IntelligenceServer ssic < filename.xml

The XML definition in filename.xml is used to define the server
instance configuration.
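The export-edit-import cycle described above can be sketched as follows. The mstrctl calls appear only as comments because they require an Intelligence Server installation, and the XML element and values in the simulated file are invented for illustration; they do not reflect the real server instance schema.

```shell
# 1. Export the current configuration (on a machine with Intelligence Server):
#      mstrctl -s IntelligenceServer gsic > ServerInstance.xml
# Simulate an exported file here, with a made-up element for illustration:
cat > ServerInstance.xml <<'EOF'
<server_instance><max_connections>100</max_connections></server_instance>
EOF

# 2. Edit the file with a text editor, or script the change:
sed 's|<max_connections>100|<max_connections>200|' ServerInstance.xml > tmp.xml
mv tmp.xml ServerInstance.xml
grep -o '<max_connections>[0-9]*' ServerInstance.xml

# 3. Apply the modified configuration:
#      mstrctl -s IntelligenceServer ssic < ServerInstance.xml
```

Keeping a copy of the original exported file before editing it gives you a known-good configuration to restore if the modified XML causes errors.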

Managing and Monitoring Projects


The System Administration Monitor lists all the projects on an Intelligence
Server and all the machines in the cluster that Intelligence Server is using.
You can monitor the status of the projects on a project source, and load,
unload, idle, and resume projects for the entire project source or for a single
node of the cluster. You can also schedule various system maintenance
tasks from the Scheduled Maintenance view.

The System Administration group contains the following views:


l Project, which helps you keep track of the status of all the projects
contained in the selected project source. For detailed information, see
Managing Project Status, Configuration, or Security: Project View, page
45.

l Cluster, which helps you manage how projects are distributed across the
servers in a cluster. For detailed information, see Managing Clustered
Intelligence Servers: Cluster View, page 47.

l The Scheduled Maintenance monitor, which lists all the scheduled
maintenance tasks. For detailed information, see Scheduling
Administrative Tasks, page 1328.

Managing Project Status, Configuration, or Security: Project View

The Project view helps you keep track of the status of all the projects
contained in the selected project source. It also enables access to a number
of project maintenance interfaces in one place. This makes it faster and
easier to perform maintenance tasks such as purging caches, managing
security filters, or loading or unloading projects from Intelligence Server.

To Access the Project View

1. Expand Administration in the project source's folder list.

2. Expand the System Administration group, and then select Project.
The projects and their statuses display on the right-hand side.

Using the Project View


The Project view lists all the projects in the project source. If your system is
set up as a cluster of servers, the Project Monitor displays all projects in the
cluster, including the projects that are not running on the node from which
you are accessing the Project Monitor. For details on projects in a clustered
environment, see Distribute Projects Across Nodes in a Cluster, page 1163.


To view the status of a project, select the List or Details view, and click the
+ sign next to the project's name. A list of all the servers in the cluster
expands below the project's name. The status of the project on each server
is shown next to the server's name. If your system is not clustered, there is
only one server in this list.

For projects distributed asymmetrically across nodes of a cluster, a primary
server is assigned to each project. A project's primary server handles the
time-based scheduling for that project. The primary server is displayed in
bold, and Primary Server appears after the server name.

From the Project view, you can access a number of administrative and
maintenance functions. You can:

l Manage the users and security filters for a project

l View the change journal for a project (for details, see Monitor System
Activity: Change Journaling, page 828)

l Export and print the project's schema or other project documentation

l Load or unload projects from Intelligence Server, or idle or resume projects for maintenance (for details, see Setting the Status of a Project, page 48)

To load a project on a specific server in a cluster, you use the Cluster Monitor. For details on this procedure, see Managing Clustered Intelligence Servers: Cluster View, page 47.

l Purge report, element, or object caches for projects

These tasks are all available by right-clicking a project in the Project Monitor. For more detailed information about any of these options, see the Help or related sections in this guide.

You can perform an action on multiple projects at the same time. To do this, select several projects (CTRL+click), then right-click and select one of the options.


You can also schedule any of these maintenance functions from the
Schedule Administration Tasks dialog box. To access this dialog box, right-
click a project in the Project view and select Schedule Administration
Tasks. For more information, including detailed instructions on scheduling a
task, see Scheduling Administrative Tasks, page 1328.

Managing Clustered Intelligence Servers: Cluster View


The Cluster view helps you keep track of the status of your clustered
Intelligence Servers. Through the Cluster view, you can view the status of
each node, add or remove nodes in the cluster, and view how projects are
distributed across the nodes.

To Access the Cluster View

1. Expand Administration in the project source's folder list.

2. Expand the System Administration group, and then select Cluster.


The nodes in the cluster and their statuses display on the right-hand side.

3. To see a list of all the projects on a node, click the + sign next to that
node. The status of the project on the selected server is shown next to
the project's name.

Using the Cluster View


From the Cluster view, you can access a number of administrative and
maintenance functions. You can:

l Manage the security policy settings for the project source

l Join or leave a cluster

l Manage the change journaling for projects on a cluster

l Purge the object cache for a server

These tasks are all available by right-clicking a server in the Cluster view.


You can also load or unload projects from a machine, or idle or resume
projects on a machine for maintenance (for details, see Setting the Status of
a Project, page 48) by right-clicking a project on a server. For more detailed
information about any of these options, see Manage your Projects Across
Nodes of a Cluster, page 1169.

Setting the Status of a Project


Each project in Intelligence Server can operate in one of several modes.
Project modes allow for various system administration tasks to occur without
interrupting Intelligence Server operation for other projects. The tasks that
are allowed to occur depend on the job or jobs that are required for that task.

A project's status can be one of the following:

l Loaded, page 48

l Unloaded, page 49

l Request Idle, page 49

l Execution Idle, page 50

l Warehouse Execution Idle, page 50

l Full Idle, page 51

l Partial Idle, page 52

For instructions on changing a project's status, see Changing the Status of a Project, page 52.

For example scenarios where the different project idle modes can help to
support project and data warehouse maintenance tasks, see Project and
Data Warehouse Maintenance Example Scenarios, page 54.

Loaded
A project in Loaded mode appears as an available project in Developer and
MicroStrategy Web products. In this mode, user requests are accepted and processed as normal.

Unloaded
Unloaded projects are still registered on Intelligence Server, but they do not
appear as available projects in Developer or MicroStrategy Web products,
even for administrators. Nothing can be done in the project until it is loaded
again.

Unloading a project can be helpful when an administrator has changed some project configuration settings that do not affect run-time execution and are to be applied to the project at a later time. The administrator can unload the project, and then reload the project when it is time to apply the project configuration settings.

A project unload request is fully processed only when all executing jobs for
the project are complete.

Request Idle
Request Idle mode helps to achieve a graceful shutdown of the project
rather than modifying a project from Loaded mode directly to Full Idle mode.
In this mode, Intelligence Server:

l Stops accepting new user requests from the clients for the project.

l Completes jobs that are already being processed. If a user requested that
results be sent to their History List, the results are available in their
History List after the project is resumed.

Setting a project to Request Idle can be helpful to manage server load for
projects on different clusters. For example, in a cluster with two nodes
named Node1 and Node2, the administrator wants to redirect load
temporarily to the project on Node2. The administrator must first set the
project on Node1 to Request Idle. This allows existing requests to finish
execution for the project on Node1, and then all new load is handled by the
project on Node2.

Copyright © 2024 All Rights Reserved 49


Syst em Ad m in ist r at io n Gu id e

Execution Idle
A project in Execution Idle mode is ideal for Intelligence Server maintenance
because this mode restricts users in the project from running any job in
Intelligence Server. In this mode, Intelligence Server:

l Stops executing all new and currently executing jobs and, in most cases,
places them in the job queue. This includes jobs that require SQL to be
submitted to the data warehouse and jobs that are executed in Intelligence
Server, such as answering prompts.

If a project is idled while Intelligence Server is in the process of fetching query results from the data warehouse for a job, that job is canceled instead of being placed in the job queue. When the project is resumed, if the job was sent to the user's History List, an error message is placed in the History List. The user can click the message to resubmit the job request.

l Allows users to continue to request jobs, but execution is not allowed and
the jobs are placed in the job queue. Jobs in the job queue are displayed
as "Waiting for project" in the Job Monitor. When the project is resumed,
Intelligence Server resumes executing the jobs in the job queue.

This mode allows you to perform maintenance tasks for the project. For
example, you can still view the different project administration monitors,
create reports, create attributes, and so on. However, tasks such as
element browsing, exporting, and running reports that are not cached are
not allowed.

Warehouse Execution Idle


A project in Warehouse Execution Idle mode is ideal for data warehouse
maintenance because this mode restricts users in the project from running
any SQL against the data warehouse. In this mode, Intelligence Server:


l Accepts new user requests from clients for the project, but it does not
submit any SQL to the data warehouse.

l Stops any new or currently executing jobs that require SQL to be executed
against the data warehouse and, in most cases, places them in the job
queue. These jobs display as "Waiting for project" in the Job Monitor.
When the project is resumed, Intelligence Server resumes executing the
jobs in the job queue.

If a project is idled while Intelligence Server is in the process of fetching query results from the data warehouse for a job, that job is canceled instead of being placed in the job queue. When the project is resumed, if the job was sent to the user's History List, an error message is placed in the History List. The user can click the message to resubmit the job request.

l Completes any jobs that do not require SQL to be executed against the
data warehouse.

This mode allows you to perform maintenance tasks on the data warehouse while users continue to access non-database-dependent functionality. For example, users can run cached reports, but they cannot drill if that drilling requires additional SQL to be submitted to the data warehouse. Users can also export reports and documents in the project.

Full Idle
Full Idle is a combination of Request Idle and Execution Idle. In this mode,
Intelligence Server does not accept any new user requests and active
requests are canceled. When the project is resumed, Intelligence Server
does not resubmit the canceled jobs and it places an error message in the
user's History List. The user can click the message to resubmit the request.

This mode allows you to stop all Intelligence Server and data warehouse
processing for a project. However, the project still remains in Intelligence
Server memory.


Partial Idle
Partial Idle is a combination of Request Idle and Warehouse Execution Idle.
In this mode, Intelligence Server does not accept any new user requests.
Any active requests that require SQL to be submitted to the data warehouse
are queued until the project is resumed. All other active requests are
completed.

This mode allows you to stop all Intelligence Server and data warehouse
processing for a project, while not canceling jobs that do not require any
warehouse processing. The project still remains in Intelligence Server
memory.
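Taken together, these modes differ along two axes: whether new user requests are accepted, and what happens to jobs that require warehouse SQL. The sketch below models that decision table for newly submitted jobs; the mode names and function are hypothetical stand-ins for illustration, not part of any MicroStrategy API.

```python
def dispatch_new_job(mode, needs_sql):
    """Fate of a newly submitted job under each project mode (illustrative)."""
    if mode == "loaded":
        return "execute"                    # normal processing
    if mode == "unloaded":
        return "reject"                     # project unavailable to all users
    if mode in ("request_idle", "full_idle", "partial_idle"):
        return "reject"                     # no new user requests accepted
    if mode == "execution_idle":
        return "queue"                      # accepted, held until resume
    if mode == "warehouse_execution_idle":
        return "queue" if needs_sql else "execute"
    raise ValueError(f"unknown mode: {mode}")

# During warehouse maintenance, SQL jobs wait but cache-served jobs still run.
print(dispatch_new_job("warehouse_execution_idle", needs_sql=True))   # queue
print(dispatch_new_job("warehouse_execution_idle", needs_sql=False))  # execute
```

Note that the fate of jobs already executing when the mode changes (complete, queue, or cancel) differs per mode, as described above; this sketch covers only newly submitted jobs.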

Changing the Status of a Project

To Load or Unload a Project

If the project is running on multiple clustered Intelligence Servers, the project is loaded or unloaded from all nodes. To load or unload the project from specific nodes, use the Cluster view instead of the Project view. For detailed instructions, see Using the Cluster View, page 47.

1. In Developer, log in to the project source containing the project.

2. Under that project source, expand Administration, then expand System Administration, and select Project.

3. Right-click the project, point to Administer Project, and select Load or Unload. The project is loaded or unloaded. If you are using clustered Intelligence Servers, the project is loaded or unloaded for all nodes in the cluster.


To Idle or Resume a Project

If the project is running on multiple clustered Intelligence Servers, the project status changes for all nodes. To idle or resume the project on specific nodes, use the Cluster view instead of the Project view. For detailed instructions, see Using the Cluster View, page 47.

1. In Developer, log in to the project source containing the project.

2. Under that project source, expand Administration, then expand System Administration, and then select Project.

3. Right-click the project, point to Administer Project, and select Idle/Resume.

4. Select the options for the idle mode that you want to set the project to:

l Request Idle (Request Idle): all executing and queued jobs finish
executing, and any newly submitted jobs are rejected.

l Execution Idle (Execution Idle for All Jobs): all executing, queued,
and newly submitted jobs are placed in the queue, to be executed
when the project resumes.

l Warehouse Execution Idle (Execution Idle for Warehouse jobs): all executing, queued, and newly submitted jobs that require SQL to be submitted to the data warehouse are placed in the queue, to be executed when the project resumes. Any jobs that do not require SQL to be executed against the data warehouse are executed.

l Full Idle (Request Idle and Execution Idle for All jobs): all
executing and queued jobs are canceled, and any newly submitted
jobs are rejected.

l Partial Idle (Request Idle and Execution Idle for Warehouse jobs): all executing and queued jobs that require SQL to be submitted to the data warehouse are placed in the queue, to be executed when the project resumes, and any newly submitted jobs are rejected. Any executing and queued jobs that do not require SQL to be executed against the data warehouse are executed.

To resume the project from a previously idled state, clear the Request Idle and Execution Idle check boxes.

5. Click OK. The Idle/Resume dialog box closes and the project goes into
the selected mode. If you are using clustered Intelligence Servers, the
project mode is changed for all nodes in the cluster.

Project and Data Warehouse Maintenance Example Scenarios


In addition to the example scenarios provided with the different project idle
modes, the list below describes some other maintenance scenarios that can
be achieved using various project idle modes:

l Database maintenance for a data warehouse is scheduled to run at midnight, during which time the data warehouse must not be accessible to users. At 11:00 P.M., the administrator sets the project mode to Request Idle. All currently executing jobs will finish normally. At 11:30 P.M., the administrator sets the project mode to Warehouse Execution Idle, disallowing any execution against the data warehouse while maintenance tasks are performed. After maintenance is complete, the administrator sets the project to Loaded to allow normal execution and functionality to resume for the project.


l Two projects, named Project1 and Project2, use the same data
warehouse. Project1 needs dedicated access to the data warehouse for a
specific length of time. The administrator first sets Project2 to Request
Idle. After existing activity against the data warehouse is complete,
Project2 is restricted against executing on the data warehouse. Then, the
administrator sets Project2 to Warehouse Execution Idle mode to allow
data warehouse-independent activity to execute. Project1 now has
dedicated access to the data warehouse until Project2 is reset to Loaded.

l When the administrator schedules a project maintenance activity, the impact on users of the project during this time can be reduced. The administrator can set a project's idle mode to Request Idle, followed by Partial Idle, and finally to Full Idle. This process can reduce user access to a project and data warehouse gradually, rather than changing directly to Full Idle and thus immediately stopping all user activity.

Processing Jobs
Any request submitted to Intelligence Server from any part of the
MicroStrategy system is known as a job. Jobs may originate from servers
such as the Subscription server or Intelligence Server's internal scheduler,
or from client applications such as MicroStrategy Library, MicroStrategy
Workstation, MicroStrategy Web, Mobile, Integrity Manager, or another
custom-coded application.

The main types of requests include report execution requests, object browsing requests, element browsing requests, document requests, and dashboard requests.

The Job Monitor shows you which jobs are currently executing and lets you
cancel jobs as necessary. For information about the job monitor, see
Monitoring Currently Executing Jobs, page 76.

By default, jobs are processed on a first-in first-out basis. However, your system probably has some jobs that need to be processed before other jobs.


You can assign a priority level to each job according to factors such as the
type of request, the user or user group requesting the job, the source of the
job (such as Developer, Mobile, or MicroStrategy Web), the resource cost of
the job, or the project containing the job. Jobs with a higher priority have
precedence over jobs with a lower priority, and they are processed first if
there is a limit on the resources available. For detailed information on job
priority, including instructions on how to prioritize jobs, see Prioritize Jobs,
page 1086.
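This priority scheme behaves like a priority queue that falls back to first-in first-out order within a priority level. A minimal sketch of that scheduling policy (illustrative only, not the actual Intelligence Server scheduler):

```python
import heapq
import itertools

class JobQueue:
    """Higher-priority jobs run first; equal priorities keep FIFO order.
    Illustrative sketch only, not the real Intelligence Server scheduler."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()      # tie-breaker preserves FIFO

    def submit(self, job, priority=0):
        # heapq is a min-heap, so negate priority: larger numbers pop first.
        heapq.heappush(self._heap, (-priority, next(self._order), job))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

queue = JobQueue()
queue.submit("report A")                     # default priority
queue.submit("report B")
queue.submit("executive dashboard", priority=10)
print(queue.next_job())                      # prints "executive dashboard"
```

The tie-breaking counter is what preserves the default first-in first-out behavior for jobs submitted at the same priority level.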

Intelligence Server Job Processing (Common to All Jobs)


Regardless of the type of request, Intelligence Server uses some common
functionality to satisfy them. The following is a high-level overview of the
processing that takes place.

1. A user makes a request from a client application such as MicroStrategy Web, which sends the request to Intelligence Server.

2. Intelligence Server determines what type of request it is and performs a variety of functions to prepare for processing.

Depending on the request type, a task list is composed that determines what tasks must be accomplished to complete the job, that is, which components within the server the job must use to handle things such as asking the user to respond to a prompt, retrieving information from the metadata repository, or executing SQL against a database. Each type of request has a different set of tasks in the task list.

3. The components in Intelligence Server perform different tasks in the task list, such as querying the data warehouse, until a final result is achieved.

Those components are the stops the job makes in what is called a
pipeline, a path that the job takes as Intelligence Server works on it.

4. The result is sent back to the client application, which presents the
result to the user.
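The four steps above can be sketched as a tiny pipeline: each request type maps to an ordered task list, and the job's working state is handed from one component to the next. The task names below are hypothetical, not the real server components:

```python
# Toy pipeline model; hypothetical task names, not Intelligence Server internals.
def resolve_objects(state):
    state["resolved"] = True                 # e.g., fetch definitions, prompts
    return state

def generate_sql(state):
    state["sql"] = "SELECT ..."              # SQL Engine step
    return state

def run_query(state):
    state["rows"] = [("Northeast", 100)]     # Query Engine step
    return state

TASK_LISTS = {
    "report": [resolve_objects, generate_sql, run_query],
    "object_browse": [resolve_objects],      # no warehouse SQL needed
}

def process(request_type):
    state = {}
    for task in TASK_LISTS[request_type]:    # step 3: walk the task list
        state = task(state)
    return state                             # step 4: result back to the client
```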


Most of the actual processing that takes place is done in steps 2 and 3
internally in Intelligence Server. Although the user request must be received
and the final results must be delivered (steps 1 and 4), those are relatively
simple tasks. It is more useful to explain how Intelligence Server works.
Therefore, the rest of this section discusses Intelligence Server activity as it
processes jobs. This includes:

l Processing Report Execution, page 57

l Processing Object Browsing, page 62

l Processing Element Browsing, page 64

l Processing Report Services Document Execution, page 67

l Processing Dashboard Execution, page 70

l Client-Specific Job Processing, page 72

Being familiar with this material should help you to understand and interpret
statistics, Enterprise Manager reports, and other log files available in the
system. This may help you to know where to look for bottlenecks in the
system and how you can tune the system to minimize their effects.

Processing Report Execution


Reports are perhaps the most common requests made of Intelligence
Server. All report requests have the following pieces:

l A report instance is a container for all objects and information needed and
produced during report execution including templates, filters, prompt
answers, generated SQL, report results, and so on.

l A task list is a list of tasks that must be accomplished to complete a job. All jobs have a task list associated with them. Intelligence Server coordinates the report instance being passed from one internal Intelligence Server component to another as a report is executed.

The most prominent Intelligence Server components related to report job processing are listed here.


Analytical Engine Server: Performs complex calculations on a result set returned from the data warehouse, such as statistical and financial functions. Also sorts raw results returned from the Query Engine into a cross-tabbed grid suitable for display to the user, and performs subtotal calculations on the result set. Depending on the metric definitions, the Analytical Engine also performs metric calculations that were not or could not be performed using SQL, such as complex functions.

Metadata Server: Controls all access to the metadata for the entire project.

Object Server: Creates, modifies, saves, loads, and deletes objects from metadata. Also maintains a server cache of recently used objects. The Object Server does not manipulate metadata directly; the Metadata Server does all reading from and writing to the metadata, and the Object Server uses the Metadata Server to make any changes.

Query Engine: Sends the SQL generated by the SQL Engine to the data warehouse for execution.

Report Server: Creates and manages all server report instance objects. Maintains a cache of executed reports.

Resolution Server: Resolves prompts for report requests. Works in conjunction with the Object Server and Element Server to retrieve the necessary objects and elements for a given request.

SQL Engine Server: Generates the SQL needed for the report.

Below is a typical scenario of a report's execution within Intelligence Server. The diagram shows the report processing steps. An explanation of each step follows the diagram.


1. Intelligence Server receives the request.

2. The Resolution Server checks for prompts. If the report has one or
more prompts, the user must answer them. For information about these
extra steps, see Processing Reports with Prompts, page 60.

3. The Report Server checks the internal cache, if the caching feature is
turned on, to see whether the report results already exist. If the report
exists in the cache, Intelligence Server skips directly to the last step
and delivers the report to the client. If no valid cache exists for the
report, Intelligence Server creates the task list necessary to execute
the report. For more information on caching, see Result Caches, page
1203.

Prompts are resolved before the Server checks for caches. Users may
be able to retrieve results from cache even if they have personalized
the report with their own prompt answers.


4. The Resolution Server obtains the report definition and any other
required application objects from the Object Server. The Object Server
retrieves these objects from the object cache, if possible, or reads them
from the metadata via the Metadata Server. Objects retrieved from
metadata are stored in the object cache.

5. The SQL Generation Engine creates the optimized SQL specific to the
RDBMS being used in the data warehouse. The SQL is generated
according to the definition of the report and associated application
objects retrieved in the previous step.

6. The Query Engine runs the SQL against the data warehouse. The report
results are returned to Intelligence Server.

7. The Analytical Engine performs additional calculations as necessary. For most reports, this includes cross-tabbing the raw data and calculating subtotals. Some reports may require additional calculations that cannot be performed in the database via SQL.

8. Depending on the analytical complexity of the report, the results might be passed back to the Query Engine for further processing by the database until the final report is ready (in this case, steps 5–7 are repeated).

9. Intelligence Server's Report Server saves or updates the report in the cache, if the caching feature is turned on, and passes the formatted report back to the client, which displays the results to the user.
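Steps 3 through 9 follow a cache-aside pattern wrapped around SQL generation, query execution, and analytical post-processing. A minimal sketch, with hypothetical stand-ins for the Query Engine and Analytical Engine:

```python
# Sketch of steps 3-9: cache check, SQL generation and execution, analytical
# post-processing, then caching. Hypothetical names, not the real components.
result_cache = {}

def run_against_warehouse(sql):               # stands in for the Query Engine
    return [("Northeast", 100), ("Northwest", 80)]

def crosstab_and_subtotal(rows):              # stands in for the Analytical Engine
    return {"rows": rows, "total": sum(value for _, value in rows)}

def execute_report(report_id, caching_on=True):
    if caching_on and report_id in result_cache:
        return result_cache[report_id]        # step 3: serve from result cache
    sql = f"SELECT ... /* {report_id} */"     # step 5: SQL generation
    raw_rows = run_against_warehouse(sql)     # step 6: query execution
    result = crosstab_and_subtotal(raw_rows)  # step 7: cross-tab and subtotals
    if caching_on:
        result_cache[report_id] = result      # step 9: save to the cache
    return result
```

As the steps above describe, a cache hit short-circuits everything after step 3, which is why result caching is such an effective tuning lever.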

Processing Reports with Prompts


If the report has prompts, these steps are inserted in the regular report
execution steps detailed here:

1. Intelligence Server sends the job to the Resolution Server component.


The Resolution Server discovers that the report definition contains a prompt and tells Intelligence Server to prompt the user for the necessary information.

2. Intelligence Server puts the job in a sleep mode and tells the Result
Sender component to send a message to the client application
prompting the user for the information.

3. The user completes the prompt, and the client application sends the
user's prompt selections back to Intelligence Server.

4. Intelligence Server performs the security and governing checks and updates the statistics. It then wakes up the sleeping job, adds the user's prompt reply to the job's report instance, and passes the job to the Resolution Server again.

5. This cycle repeats until all prompts in the report are resolved.

A sleeping job times out after a certain period or if the connection to the client is lost. If the prompt reply comes back after the job has timed out, the user sees an error message.

All regular report processing resumes from the point at which Intelligence
Server checks for a report cache, if the caching feature is turned on.
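The sleep/wake cycle above can be modeled as parking the job with a deadline and waking it when the reply arrives; a reply that arrives after the deadline produces the resubmit error instead. A toy sketch with hypothetical names:

```python
import time

sleeping_jobs = {}   # job_id -> deadline for the prompt reply (illustrative)

def ask_prompt(job_id, timeout_s=3600.0):
    """Park the job and tell the client to prompt the user (hypothetical)."""
    sleeping_jobs[job_id] = time.monotonic() + timeout_s

def answer_prompt(job_id, reply, now=None):
    """Wake the sleeping job with the user's reply, unless it timed out."""
    now = time.monotonic() if now is None else now
    deadline = sleeping_jobs.pop(job_id, None)
    if deadline is None or now > deadline:
        return "error: job timed out, please resubmit the request"
    return {"job": job_id, "prompt_answer": reply}    # execution resumes
```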

Processing Personal Intelligent Cube Reports


Personal Intelligent Cube reports are initially processed the same as a
regular report, and the report instance is held in Intelligence Server's
memory. If the user manipulates the report and that manipulation does not
cause the base report's SQL to change, the Analytical Engine component
services the request and sends the results to the client. No additional
processing from the data warehouse is required.

Reports can also connect to Intelligent Cubes that can be shared by multiple
reports. These Intelligent Cubes also allow the Analytical Engine to perform
additional analysis without requiring any processing on the data warehouse.

For information on personal Intelligent Cubes and Intelligent Cubes, see the
In-memory Analytics Help.


Processing Graph Reports


When processing graph reports, Intelligence Server performs the regular
report processing detailed here. Depending on the connection, the following
happens:

l In a three-tier connection, Intelligence Server sends the report to Developer, which creates the graph image.

l In a four-tier connection, Intelligence Server uses the graph generation component to create the graph image and sends it to the client.

Processing Object Browsing


The definitions for all objects displayed in the folder list, such as folders,
metrics, attributes, and reports, are stored in the metadata. Whenever you
expand or select a folder in Developer or MicroStrategy Web, Intelligence
Server must retrieve the objects from the metadata before it can display
them in the folder list and the object viewer.

This process is called object browsing and it creates what are called object
requests. It can cause a slight delay that you may notice the first time you
expand or select a folder. The retrieved object definitions are then placed in
Intelligence Server's memory (cache) so that the information is displayed
immediately the next time you browse the same folder. This is called object
caching. For more information on this, see Object Caches, page 1276.

The most prominent Intelligence Server components related to object browsing are listed here.

Metadata Server: Controls all access to the metadata for the entire project.

Object Server: Creates, modifies, saves, loads, and deletes objects from metadata. Also maintains a server cache of recently used objects.

Source Net Server: Receives, de-serializes, and passes metadata object requests to the Object Server.


The diagram below shows the object request execution steps. An explanation of each step follows the diagram.

1. Intelligence Server receives the request.

2. The Object Server checks for an object cache that can service the
request. If an object cache exists, it is returned to the client and
Intelligence Server skips to the last step in this process. If no object
cache exists, the request is sent to the Metadata Server.

3. The Metadata Server reads the object definition from the metadata
repository.

4. The requested objects are received by the Object Server, where they are deposited into the in-memory object cache.

5. Intelligence Server returns the objects to the client.
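The object-cache behavior in steps 2 through 4 is essentially memoization of metadata reads: the first browse pays the metadata round trip, and later browses of the same object are served from memory. A loose analogy in code (the function and cache below are hypothetical, not the real Object Server):

```python
from functools import lru_cache

metadata_reads = []              # tracks trips to the metadata repository

@lru_cache(maxsize=1024)         # plays the role of the object cache
def get_object_definition(object_id):
    metadata_reads.append(object_id)          # Metadata Server read
    return (object_id, "folder definition")   # simplified object definition

get_object_definition("rpt-42")  # first browse: reads the metadata
get_object_definition("rpt-42")  # second browse: served from the cache
print(len(metadata_reads))       # prints 1
```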


Processing Element Browsing


Attribute elements are typically stored in lookup tables in the data
warehouse. This includes data that is unique to your business intelligence
system, such as Northeast, Northwest, Central, and Asia in the Region
attribute.

For a more thorough discussion of attribute elements, see the section in the
Basic Reporting Help about the logical data model.

When users request attribute elements from the system, they are said to be
element browsing and create what are called element requests. More
specifically, this happens when users:

l Answer prompts when executing a report

l Browse attribute elements in Developer using the Data Explorer (either in the Folder List or the Report Editor)

l Use Developer's Filter Editor, Custom Group Editor, or Security Filter Editor

l Use the Design Mode on MicroStrategy Web to edit the report filter

When Intelligence Server receives an element request from the user, it sends a SQL statement to the data warehouse requesting attribute elements. When it receives the results from the data warehouse, it then passes the results back to the user. Also, if the element caching feature is turned on, it stores the results in memory so that additional requests are retrieved from memory instead of querying the data warehouse again. For more information on this, see Element Caches, page 1261.

The most prominent Intelligence Server components related to element browsing are listed here.


DB Element Server: Transforms element requests into report requests and then sends the report requests to the warehouse.

Element Net Server: Receives, de-serializes, and passes element request messages to the Element Server.

Element Server: Creates and stores server element caches in memory. Manages all element requests in the project.

Query Engine: Sends the SQL generated by the SQL Engine to the data warehouse for execution.

Report Server: Creates and manages all server report instance objects. Maintains a cache of executed reports.

Resolution Server: Resolves prompts for report requests. Works in conjunction with the Object Server and Element Server to retrieve the necessary objects and elements for a given request.

SQL Engine Server: Generates the SQL needed for the report.

The diagram below shows the element request execution steps. An explanation of each step follows the diagram.


1. Intelligence Server receives the request.

2. The Element Server checks for a server element cache that can service the request. If a server element cache exists, the element cache is returned to the client and Intelligence Server skips to the last step in this process.

3. If no server element cache exists, the DB Element Server receives the request and transforms it into a report request.

The element request at this point is processed like a report request: Intelligence Server creates a report that has only the attributes and possibly some filtering criteria, and SQL is generated and executed like any other report.

4. The Report Server receives the request and creates a report instance.

5. The Resolution Server receives the request and determines what elements are needed to satisfy the request, and then passes the request to the SQL Engine Server.

6. The SQL Engine Server generates the necessary SQL to satisfy the
request and passes it to the Query Engine Server.

7. The Query Engine Server sends the SQL to the data warehouse.

8. The elements are returned from the data warehouse to Intelligence


Server and deposited in the server memory element cache by the
Element Server.

9. Intelligence Server returns the elements to the client.
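The cache-first logic in steps 2 through 8 can be pictured as a lookup that falls back to a warehouse query and then populates the cache. The class and method names below are illustrative only, not actual Intelligence Server APIs:

```python
# Sketch of the element-request flow: serve from the server element
# cache when possible, otherwise run the request like a report against
# the warehouse and deposit the result in the cache.
class ElementServer:
    def __init__(self, warehouse):
        self.warehouse = warehouse   # maps attribute name -> element list
        self.cache = {}              # server element cache, held in memory

    def get_elements(self, attribute):
        # Step 2: check for a server element cache that can serve the request.
        if attribute in self.cache:
            return self.cache[attribute], "cache"
        # Steps 3-8: no cache hit; the request runs like a report (SQL is
        # generated and executed) and the elements are cached afterward.
        elements = sorted(self.warehouse[attribute])
        self.cache[attribute] = elements
        return elements, "warehouse"

warehouse = {"Region": ["South", "North", "East", "West"]}
server = ElementServer(warehouse)
first, source1 = server.get_elements("Region")   # misses, populates the cache
second, source2 = server.get_elements("Region")  # served from the cache
```

A second request for the same attribute never reaches the warehouse, which is the point of the server element cache.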

Processing Report Services Document Execution


A MicroStrategy Report Services document contains objects representing
data coming from one or more reports. The document also holds positioning
and formatting information. A document is used to combine data from
multiple reports into a single display of presentation quality. When you
create a document, you can specify the data that appears and can also
control the layout, formatting, grouping, and subtotaling of that data. In
addition, you can insert pictures into the document and draw borders on it.
All these capabilities allow you to create documents that are suitable to
present to management.

Most of the data on a document is from an underlying dataset. A dataset is a MicroStrategy report that defines the information that Intelligence Server retrieves from the data warehouse or cache. Other data that does not originate from the dataset is stored in the document's definition.

Document execution is slightly different from the execution of a single report, since documents can contain multiple reports.


The following diagram shows the document processing execution steps. An explanation of each step follows the diagram.

1. Intelligence Server receives a document execution request and creates a document instance in Intelligence Server. This instance holds the results of the request.

A document instance facilitates the processing of the document through Intelligence Server, similar to a report instance that is used to process reports. It contains the report instances for all the dataset reports and therefore has access to all the information that may be included in the dataset reports. This information includes prompts, formats, and so on.

2. The Document Server inspects all dataset reports and prepares for execution. It consolidates all prompts from datasets into a single prompt to be answered. All identical prompts are merged so that the resulting prompt contains only one copy of each prompt question.

3. The Document Server, with the assistance of the Resolution Server, asks the user to answer the consolidated prompt. The user's answers are stored in the Document Server.

4. The Document Server creates an individual report execution job for each dataset report. Each job is processed by Intelligence Server, using the report execution flow described in Processing Report Execution, page 57. Prompt answers are provided by the Document Server to avoid further prompt resolution.

5. After Intelligence Server has completed all the report execution jobs, the Analytical Engine receives the corresponding report instances to begin the data preparation step. Document elements are mapped to the corresponding report instance to construct internal data views for each element.

Document elements include grouping, data fields, Grid/Graphs, and so on.

6. The Analytical Engine evaluates each data view and performs the calculations that are required to prepare a consolidated dataset for the entire document instance. These calculations include calculated expressions, derived metrics, and conditional formatting. The consolidated dataset determines the number of elements for each group and the number of detail sections.

7. The Document Server receives the final document instance to finalize the document format:

• Additional formatting steps are required if the document is exported to PDF or Excel format. The export generation takes place on the client side in three-tier and on the server side in four-tier, although the component in charge is the same in both cases.

• If the document is executed in HTML, the MicroStrategy Web client requests an XML representation of the document to process it and render the final output.

8. The completed document is returned to the client.
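The prompt consolidation in steps 2 and 3 amounts to deduplicating identical prompt questions across datasets. A minimal sketch, using an assumed data shape rather than MicroStrategy's internal representation:

```python
def consolidate_prompts(datasets):
    """Merge the prompts of all dataset reports into a single prompt,
    keeping only one copy of each identical prompt question."""
    consolidated = []
    seen = set()
    for dataset in datasets:
        for question in dataset["prompts"]:
            if question not in seen:   # identical prompts are merged
                seen.add(question)
                consolidated.append(question)
    return consolidated

datasets = [
    {"name": "Sales by Region",  "prompts": ["Select a Region", "Select a Year"]},
    {"name": "Sales by Product", "prompts": ["Select a Year", "Select a Category"]},
]
merged = consolidate_prompts(datasets)
```

The user answers "Select a Year" once, even though two datasets ask it.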

Processing Dashboard Execution


A dashboard is a container for formatting, displaying, and distributing
multiple reports from a single request. Dashboards are based on an HTML
template, which allows them to contain any combination of text, images,
hyperlinks, tables, grid reports, and graph reports. Any reports included in a
dashboard are called the child reports of the dashboard.

Because dashboards are collections of multiple reports, their execution process is slightly different from single reports. The most notable differences are shown in the procedure below.

The diagram below shows the dashboard processing execution steps. An explanation of each step follows the diagram.


1. Intelligence Server receives a dashboard execution request and creates a dashboard instance that goes through Intelligence Server and holds the results.

2. The dashboard server consolidates all prompts from child reports into a single prompt to be answered. Any identical prompts are merged so that the resulting single prompt contains only one copy of each prompt question.

3. The Resolution Server asks the user to answer the consolidated prompt. (The user only needs to answer a single set of questions.)

4. The dashboard server splits the dashboard request into separate individual jobs for the constituent reports. Each report goes through the report execution flow as described above.

Prompts have already been resolved for the child reports.

5. The completed request is returned to the client.
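The fan-out in step 4 turns one dashboard request into one job per child report, each carrying the already-resolved prompt answers. A sketch with assumed data shapes:

```python
def split_dashboard(dashboard, prompt_answers):
    """Create one individual execution job per child report, reusing the
    consolidated prompt answers so no further prompt resolution is needed."""
    jobs = []
    for i, report in enumerate(dashboard["child_reports"], start=1):
        jobs.append({
            "job_id": i,
            "report": report,
            "answers": prompt_answers,  # prompts were already resolved once
        })
    return jobs

dashboard = {
    "name": "Executive Dashboard",
    "child_reports": ["Revenue Trend", "Top Products", "Regional Map"],
}
jobs = split_dashboard(dashboard, {"Select a Year": "2024"})
```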


Client-Specific Job Processing


This section explains the job processing steps that certain client applications perform as they deliver user requests to Intelligence Server. It also covers how those clients receive results, and how the results are displayed to the user.

For information about the processing steps performed by Intelligence Server for all jobs, see Intelligence Server Job Processing (Common to All Jobs), page 56.

Processing Jobs from MicroStrategy Web Products


This section provides a high-level overview of processing flow for requests
originating in MicroStrategy Web or Web Universal. It also includes the job
process for exporting reports in various formats.

Job Requests from MicroStrategy Web Products

1. The user makes a request from a web browser. The request is sent to the web server via HTTP or HTTPS.

2. An ASP.NET page or a servlet receives the request and calls the MicroStrategy Web API.

3. The MicroStrategy Web API sends the request to Intelligence Server, which processes the job as usual (see Processing Report Execution, page 57).

4. Intelligence Server sends the results back to the MicroStrategy Web API via XML.

5. MicroStrategy Web converts the XML to HTML within the application code:

• In MicroStrategy Web, the conversion is primarily performed in ASP code.

• In some customizations, the conversion may occur within custom XSL classes. By default, the product does not use XSL for rendering output, except in document objects.

6. MicroStrategy Web sends the HTML to the client's browser, which displays the results.
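The XML-to-HTML conversion in step 5 can be illustrated with a toy transform. The XML layout below is invented for the example and is not the actual MicroStrategy report XML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical report XML, standing in for what Intelligence Server returns.
REPORT_XML = """
<report name="Quarterly Sales">
  <row region="North" sales="120"/>
  <row region="South" sales="95"/>
</report>
"""

def report_xml_to_html(xml_text):
    # Parse the report XML and render a plain HTML table, roughly what
    # the Web application code does before sending HTML to the browser.
    root = ET.fromstring(xml_text)
    rows = ["<tr><td>{}</td><td>{}</td></tr>".format(r.get("region"), r.get("sales"))
            for r in root.findall("row")]
    return "<table>" + "".join(rows) + "</table>"

html = report_xml_to_html(REPORT_XML)
```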

What Happens When I Export a Report from MicroStrategy Web?

Exporting a report from MicroStrategy Web products lets users save the
report in another format that may provide additional capabilities for sharing,
printing, or further manipulation. This section explains the additional
processing the system must do when exporting a report in one of several
formats. This may help you to understand when certain parts of the
MicroStrategy platform are stressed when exporting.

Exporting a report from MicroStrategy Web products causes Intelligence Server to retrieve the entire result set (no incremental fetch) into memory and send it to MicroStrategy Web. This increases memory use on the Intelligence Server machine and increases network traffic.

For information about governing report size limits for exporting, see Limit
the Information Displayed at One Time, page 1098 and the following
sections.

Export to Comma Separated File (CSV) or Excel with Plain Text

Export to Comma Separated File (CSV) and Export to Excel with Plain Text are performed entirely on Intelligence Server. These formats contain only report data and no formatting information. The only difference between the two formats is the internal "container" that is used.

The MicroStrategy system performs these steps when exporting to CSV or to Excel with plain text:


1. MicroStrategy Web product receives the request for the export and passes the request to Intelligence Server. Intelligence Server takes the XML containing the report data and parses it for separators, headers, and metric values.

2. Intelligence Server then outputs the titles of the units in the Row axis. All these units end up in the same row of the result text.

3. Intelligence Server then outputs the title and header of one unit in the Column axis.

4. Repeat step 3 until all units in the Column axis are completed.

5. Intelligence Server outputs all the headers of the Row axis and all metric values one row at a time.

6. The finished result is then passed to be output as a CSV or an Excel file, which is then passed to the client browser.
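Steps 2 through 5 describe a straightforward crosstab flattening. A simplified sketch, with an invented in-memory layout for the report:

```python
def export_csv(row_titles, col_title, col_headers, rows):
    """Flatten a crosstab report to CSV text following the steps above:
    row-axis titles and column-axis title/headers on the first line, then
    one line per row of row-axis headers plus metric values."""
    lines = []
    # Steps 2-4: row-axis unit titles, then the column-axis title and
    # headers, all on the same first row of the result text.
    lines.append(",".join(row_titles + [col_title] + col_headers))
    # Step 5: row-axis headers and metric values, one row at a time.
    for headers, values in rows:
        lines.append(",".join(headers + [str(v) for v in values]))
    return "\n".join(lines)

csv_text = export_csv(
    row_titles=["Region"],
    col_title="Quarter",
    col_headers=["Q1", "Q2"],
    rows=[(["North"], [120, 130]), (["South"], [95, 101])],
)
```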

Export to Excel with Formatting

Exporting to Excel with formatting allows for reports to be exported to an Excel file and contain the same formatting as shown in the browser window. The report retains all cell coloring, font sizes, styles, and other formatting aspects.

To export to Excel, users must first set their Export preferences by clicking Preferences, then User preferences, then Export, and select the Excel version they want to export to.

The MicroStrategy system performs these steps when exporting to Excel with formatting:

1. MicroStrategy Web product receives the request for the export to Excel
and passes the request to Intelligence Server. Intelligence Server
produces a report by combining the XML containing the report data with
the XSL containing formatting information.


2. Intelligence Server passes the report to MicroStrategy Web, which creates an Excel file and sends it to the browser.

3. Users can then choose to view the Excel file or save it depending on the
client machine operating system's setting for viewing Excel files.

Export to PDF

Exporting to PDF uses Intelligence Server's export engine to create a PDF (Portable Document Format) file. PDF files are viewed with Adobe's Acrobat Reader and provide greater printing functionality than simply printing the report from the browser.

Processing Jobs from Narrowcast Server


MicroStrategy Narrowcast Server performs the following steps to deliver
reports to users.

For detailed information about Narrowcast Server, see the Narrowcast Server Getting Started Guide.

Job Requests from MicroStrategy Narrowcast Server

1. A Narrowcast service execution is triggered by a schedule or external API call.

2. Narrowcast Server determines the service recipients and allocates work to Execution Engine (EE) machines.

3. EE machines determine personalized reports to be created for each recipient by using recipient preferences.

4. Narrowcast Server submits one report per user or one multipage report for multiple users, depending on service definition.

5. Intelligence Server processes the report job request as usual. (See Processing Report Execution, page 57.) It then sends the result back to Narrowcast Server.

6. Narrowcast Server creates formatted documents using the personalized report data.

7. Narrowcast Server packages documents as appropriate for the service's delivery method, such as e-mail, wireless, and so on.

8. Narrowcast Server delivers the information to recipients by the chosen delivery method.
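Step 2's allocation of recipients across Execution Engine machines can be sketched as a simple round-robin. The distribution policy here is an assumption for illustration, not Narrowcast Server's actual algorithm:

```python
def allocate_recipients(recipients, engines):
    """Round-robin recipients across Execution Engine (EE) machines.
    Illustrative policy only; the real allocation is internal to
    Narrowcast Server."""
    work = {engine: [] for engine in engines}
    for i, recipient in enumerate(recipients):
        work[engines[i % len(engines)]].append(recipient)
    return work

work = allocate_recipients(
    ["ann@example.com", "bob@example.com", "carol@example.com"],
    ["EE-1", "EE-2"],
)
```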

Monitoring Currently Executing Jobs


The Job Monitor informs you of what is happening with system tasks.
However, it does not display detailed sub-steps that a job is performing. You
can see jobs that are:

• Executing

• Waiting in the queue

• Waiting for a user to reply to a prompt

• Canceling

• Not completing because of an error

The Job Monitor displays which tasks are running on an Intelligence Server. When a job has completed, it no longer appears in the monitor. You can view a job's identification number; the user who submitted it; the job's status; a description of the status; the name of the report, document, or query; and the project executing it.

To View the Currently Executing Jobs

1. In Developer, log in to a project source. You must log in as a user with the Monitor Jobs privilege.

2. Expand Administration, then expand System Monitors, and then select Jobs. The job information displays on the right-hand side.

3. Because the Job Monitor does not refresh itself, you must periodically refresh it to see the latest status of jobs. To do this, press F5.

4. To view a job's details, including its SQL, double-click it.

5. To view more details for all jobs displayed, right-click in the Job Monitor and select View options. Select the additional columns to display and click OK.

At times, you may see "Temp client" in the Network Address column. This may happen when Intelligence Server is under a heavy load and a user accesses the list of available projects. Intelligence Server creates a temporary session that submits a job request for the available projects and then sends the list to the MicroStrategy Web client for display. This temporary session, which remains open until the request is fulfilled, is displayed as Temp client.
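The monitor's view can be thought of as a filter over active job records. A sketch with assumed record fields, purely to show the idea:

```python
# Hypothetical job records; field names are invented for illustration.
jobs = [
    {"id": 101, "user": "mha", "status": "Executing",         "report": "Daily Sales"},
    {"id": 102, "user": "kpo", "status": "Waiting in queue",  "report": "Inventory"},
    {"id": 103, "user": "mha", "status": "Waiting for prompt","report": "Forecast"},
    {"id": 104, "user": "tlo", "status": "Error",             "report": "Margins"},
]

def jobs_with_status(jobs, status):
    # Completed jobs never appear: the monitor lists only active states
    # such as Executing, Waiting, Canceling, or Error.
    return [j["id"] for j in jobs if j["status"] == status]

executing = jobs_with_status(jobs, "Executing")
errored = jobs_with_status(jobs, "Error")
```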

To Cancel a Job

1. Select the job in the Job Monitor.

2. Press DELETE, and then confirm whether you want to cancel the job.

Using Automated Installation Techniques


You can make installing the MicroStrategy system across your enterprise
easier in several ways. They are mentioned here but more fully explained in
the Installation and Configuration Help.

Using a Response File to Install the Product


The response file installation allows you to automate certain aspects of the installation by configuring a Windows INI-like response file, called response.ini. This option is typically implemented by Original Equipment Manufacturer (OEM) applications that embed MicroStrategy installations in other products. It can also be implemented by IT departments that want to have more control over desktop installations. For more information on how to set up and use a response file, see the Installation and Configuration Help.
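Because response.ini follows the Windows INI format, it can be read or generated with any INI parser. The section and key names below are invented for illustration; the real ones are documented in the Installation and Configuration Help:

```python
import configparser

# Hypothetical response.ini content; actual section and key names differ.
SAMPLE = """
[Installer]
InstallDirectory = C:\\Program Files\\MicroStrategy
LicenseKey = XXXX-XXXX-XXXX
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
install_dir = config["Installer"]["InstallDirectory"]
```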

Using a Response File to Configure the Product


You can also use a response file to automate certain aspects of the
MicroStrategy configuration. This response file supplies parameters to the
Configuration Wizard to set up a metadata repository and statistics tables,
Intelligence Server, and multiple project sources. For steps on setting up
and using a response file for the Configuration Wizard, see the Installation
and Configuration Help.

Running a Silent Installation


Silent installations do not present any graphical user interface (GUI). They are typically implemented by IT departments that perform software distribution and installation across the network, for example, by using Microsoft's Systems Management Server software. This involves configuring a setup.iss file that the MicroStrategy Installation Wizard uses. For steps on setting up and using a setup.iss file for a silent MicroStrategy installation, see the Installation and Configuration Help.

OEMs may use silent installations; however, it is more common for OEMs to
use a response file installation.


SETTING UP USER SECURITY


Security is a concern in any organization. The metadata and data warehouse may contain sensitive information that should not be viewed by all users. It is your responsibility as administrator to make the right data available to the right users.

MicroStrategy has a robust security model that enables you to create users
and groups, and control what data they can see and what objects they can
use. The security model is covered in the following sections:

• The MicroStrategy User Model, page 80

• Controlling Access to Application Functionality, page 88

• Controlling Access to Data, page 113

• Merging Users or Groups, page 143

Authentication, the process by which the system identifies the user, is an integral part of any security model. Authenticating users is addressed in Chapter 3, Identifying Users: Authentication.

The MicroStrategy User Model


This section provides an overview of what users and groups are in the
system and how they can be imported or created.

About MicroStrategy Users

About MicroStrategy User Groups

Privileges

Permissions

Creating, Importing, and Deleting Users and Groups

Monitoring Users' Connections to Projects


About MicroStrategy Users


Like most security architectures, the MicroStrategy security model is built
around the concept of a user. To do anything useful with MicroStrategy, a
user must be authenticated and authorized. The user can then perform tasks
such as creating objects or executing reports and documents, and can
generally take advantage of all the other features of the MicroStrategy
system.

MicroStrategy supports a single sign-on for users in an enterprise environment that consists of multiple applications, data sources, and systems. Users can log in to the system once and access all the resources of the enterprise seamlessly. For more details about implementing single sign-on in MicroStrategy, see Enable Single Sign-On Authentication, page 198.

Users are defined in the MicroStrategy metadata and exist across projects.
You do not have to define users for every project you create in a single
metadata repository.

Each user has a unique profile folder in each project. This profile folder appears to the user as the "My Personal Objects" folder. By default, other users' profile folders are hidden. To view them, in the Developer Preferences dialog box, under the Developer: Browsing category, select the Display Hidden Objects check box.

Administrator is a built-in default user created with a new MicroStrategy metadata repository. The Administrator user has all privileges and permissions for all projects and all objects.

One of the first things you should do in your MicroStrategy installation is to change the password for the Administrator user.

About MicroStrategy User Groups


A user group (or "group" for short) is a collection of users and/or subgroups.
Groups provide a convenient way to manage a large number of users.


Instead of assigning privileges, such as the ability to create reports, to hundreds of users individually, you may assign privileges to a group. Groups may also be assigned permissions to objects, such as the ability to add reports to a folder.

In addition to having privileges of their own, subgroups always inherit the privileges from their parent groups.
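Inheritance here means a user's effective privileges are the union of their own privileges and those of every ancestor group. A sketch, with invented privilege and group names:

```python
def effective_privileges(member, own_privileges, parents):
    """Union a member's own privileges with those inherited from all
    ancestor groups. `own_privileges` maps a user or group to its directly
    assigned privileges; `parents` maps a user or group to parent groups."""
    result = set(own_privileges.get(member, set()))
    for parent in parents.get(member, []):
        # Recurse so privileges flow down from every ancestor.
        result |= effective_privileges(parent, own_privileges, parents)
    return result

own_privileges = {
    "Everyone": {"Use Developer"},
    "Analysts": {"Execute Report"},
    "jsmith":   {"Create Report"},
}
parents = {"jsmith": ["Analysts"], "Analysts": ["Everyone"]}
privs = effective_privileges("jsmith", own_privileges, parents)
```

Here jsmith ends up with three privileges: one assigned directly and two inherited through Analysts and Everyone.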

For a list of the privileges assigned to each group, see the List of Privileges
section.

Do not modify the privileges for an out-of-the-box user group. During upgrades to newer versions of MicroStrategy, the privileges for the out-of-the-box user groups are overwritten with the default privileges. Instead, you should copy the user group you need to modify and make changes to the copied version.

The Everyone Group


All users except for guest users are automatically members of the Everyone
group. The Everyone group is provided to make it easy for you to assign
privileges, security role memberships, and permissions to all users.

When a project is upgraded from MicroStrategy version 7.5.x or earlier to version 9.x, the Use Developer privilege is automatically granted to the Everyone group. This ensures that all users who were able to access Developer in previous versions can continue to do so.

Authentication-Related Groups
These groups are provided to assist you in managing the different ways in
which users can log into the MicroStrategy system. For details on the
different authentication methods, see Chapter 3, Identifying Users:
Authentication.


• Public/Guest: The Public group provides the capability for anonymous logins and is used to manage the access rights of guest users. If you choose to allow anonymous authentication, each guest user assumes the profile defined by the Public group. For more information about anonymous authentication and the Public/Guest group, see Implement Anonymous Authentication, page 158.

• 3rd Party Users: Users who access MicroStrategy projects through third-party (OEM) software.

• LDAP Users: The group into which users that are imported from an LDAP server are added.

• LDAP Public/Guest: This group behaves like the Public/Guest group, except that it is for LDAP anonymous login. When an LDAP anonymous user logs in, it is authorized with the privileges and access rights of both LDAP Public/Guest and Public/Guest.

For information on integrating LDAP with MicroStrategy, see Implement LDAP Authentication, page 160.

• Warehouse Users: Users who access a project through a warehouse connection.

Groups Corresponding to Product Offerings

These groups are built-in groups that correspond to the licenses you have purchased. Using these groups gives you a convenient way to assign product-specific privileges.

• Architect: Architects function as project designers and can create attributes, facts, hierarchies, projects, and so on.

• Analyst: Analysts have the privileges to execute simple reports, answer prompts, drill on reports, format reports, create reports by manipulating Report Objects, create derived metrics, modify view filters, pivot reports, create page-bys, and sort using advanced options.

• Developer: Developers can design new reports from scratch, and create report components such as consolidations, custom groups, data marts, documents, drill maps, filters, metrics, prompts, and templates.

• Web Reporter: Web Reporters can view scheduled reports and interactively slice and dice them. They can also use the printing, exporting, and e-mail subscription features.

• Web Analyst: Web Analysts can create new reports with basic report functionality, and use ad hoc analysis from Intelligent Cubes with interactive, slice-and-dice OLAP.

• Web Professional: Web Professional users have the maximum access to MicroStrategy Web functionality. They can create Intelligent Cubes and reports for users, with full reporting, ad hoc, and OLAP capabilities with seamless ROLAP analysis.

Administrator Groups
• System Monitors: The System Monitors groups provide an easy way to give users basic administrative privileges for all projects in the system. Users in the System Monitors groups have access to the various monitoring and administrative tools.

• System Administrators: The System Administrators group is a group within the System Monitors group. It provides all the capabilities of the System Monitors group, plus the ability to modify configuration objects such as database instances.

Privileges
Privileges allow users to access and work with various functionality within
the software. All users created in the MicroStrategy system are assigned a
set of privileges by default.

For detailed information about privileges, including how to assign privileges to a user or group, see Controlling Access to Functionality: Privileges, page 101. For a list of all user and group privileges in MicroStrategy, see the List of Privileges section.

To see which users are using certain privileges, use the License Manager. See Using License Manager, page 728.

To View a User's Privileges

1. In Developer, log in to a project source. You must log in as a user with the Create And Edit Users And Groups privilege.

2. Expand Administration, then User Manager, and then the group containing the user.

3. Right-click the user and select Grant access to projects. The User Editor opens to the Project Access dialog box. The privileges that the user has for each project are listed, as well as the source of those privileges (inherent to user, inherited from a group, or inherited from a security role).

Permissions
Permissions allow users to interact with various objects in the MicroStrategy
system. All users created in the MicroStrategy system have certain access
rights to certain objects by default.

Permissions differ from privileges in that permissions restrict or allow actions related to a single object, while privileges restrict or allow actions across all objects in a project.

For detailed information about permissions, including how to assign permissions for an object to a user or group, see Controlling Access to Objects: Permissions, page 89.


To View the Permissions for an Object

1. From within Developer, right-click the object and select Properties.

2. Expand the Security category.

Creating, Importing, and Deleting Users and Groups


It is possible to create users individually using the User Manager interface in
Developer, or using Command Manager (for a detailed explanation of how to
use Command Manager, including examples, see Chapter 15, Automating
Administrative Tasks with Command Manager). You can also import users
and groups from a text file, from a Windows user directory, or from an LDAP
directory.

To Create a New User with the User Editor in Developer

1. In Developer, log in to a project source. You must log in as a user with the Create And Edit Users And Groups privilege.

2. Expand Administration, then User Manager, and then a group that you want the new user to be a member of. If you do not want the user to be a member of a group, select Everyone.

3. Go to File > New > User.

4. Specify the user information for each category in the editor.

The user login ID is limited to 50 characters.

To Delete a User

If a Narrowcast user exists that inherits authentication from the user that
you are deleting, you must also remove the authentication definition from
that Narrowcast user. For instructions, see the MicroStrategy Narrowcast
Server Administration Guide.


1. In Developer, log in to a project source. You must log in as a user with the Create And Edit Users And Groups privilege.

2. Expand Administration, then User Manager, and then browse to the group containing the user.

3. Select the user and press Delete.

4. Click OK. You are asked whether to delete the user's profile folder.

5. To keep the folder, click No. The folder and its contents remain on the system and ownership is assigned to Administrator. You may later assign ownership and access control lists for the folder and its contents to other users.

6. To delete the folder and all of its contents, click Yes.

Monitoring Users' Connections to Projects


When a user connects to a project, a user connection is established. You
may want to see a list of all users connected to projects within a project
source. The User Connection Monitor displays a list of all connections and
allows you to disconnect a user.

To View the Active User Connections

1. In Developer, log in to a project source. You must log in as a user with the Monitor User Connections privilege.

2. Go to Administration > System Monitors > User Connections. The user connection information displays on the right-hand side. For each user, there is one connection for each project the user is logged in to, plus one connection for <Server> indicating that the user is logged in to the project source.

• Scheduler: Connections made by Intelligence Server to process scheduled reports or documents appear as <Scheduler> in the Network Address column. Scheduler sessions cannot be manually disconnected as described above. However, these sessions are removed automatically by Intelligence Server when the user session idle timeout value is reached.

• Temp client: At times, you may see "Temp client" in the Network Address column. This may happen when Intelligence Server is under a heavy load and a user accesses the Projects or Home page in MicroStrategy Web (the pages that display the list of available projects). Intelligence Server creates a temporary session that submits a job request for the available projects and then sends the list to the MicroStrategy Web client for display. This temporary session, which remains open until the request is fulfilled, is displayed as "Temp client."

3. To view a connection's details, double-click it.

To Disconnect a User

1. In the User Connection Monitor, select the connection.

2. Press Delete.

If you disconnect users from the project source (the <Configuration> entry in the User Connection Monitor), they are also disconnected from any projects they were connected to.

Controlling Access to Application Functionality


Access control governs the resources that an authenticated user can read,
modify, or write. In addition to controlling access to data (see Controlling
Access to Data, page 113), you must also control access to application
functionality, such as the ability to create reports or which reports are
viewable. The MicroStrategy system provides a rich set of functionality for
access control within Intelligence Server:


Controlling Access to Objects: Permissions


Permissions define the degree of control users have over individual objects
in the system. For example, in the case of a report, a user may have
permission to view the report definition and execute the report, but not to
modify the report definition or delete the report.

While privileges are assigned to users (either individually, through groups, or with security roles), permissions are assigned to objects. More precisely, each object has an Access Control List (ACL) that specifies which permissions different sets of users have on that object.

Intelligence Server includes special privileges called Bypass All Object Security Access Checks and Bypass Schema Object Security Access Checks. Users with these privileges are not restricted by access control permissions and are considered to have full control over all objects and schema objects, respectively. For information about privileges, see Controlling Access to Functionality: Privileges, page 101.

To Modify Permissions for an Object in Developer

1. In Developer, right-click the object and select Properties.

To modify an object's ACL, you must access the Properties dialog box
directly from Developer. If you access the Properties dialog box from
within an editor, you can view the object's ACL but cannot make any
changes.

2. Select the Security category.

3. For the User or Group (click Add to select a new user or group), from
the Object drop-down list, select the predefined set of permissions, or
select Custom to define a custom set of permissions. If the object is a
folder, you can also assign permissions to objects contained in that
folder using the Children drop-down list.

4. Click OK.


To Modify Permissions for an Object in MicroStrategy Web

1. In MicroStrategy Web, right-click an object and select Share.

2. To modify permissions for a user or group, from the Permission Level drop-down list for that user or group, select the predefined set of permissions, or select Custom to define a custom set of permissions.

3. To add new users or groups to the object's access control list (ACL):

l Click Choose Users/Groups.

l Select the users or groups that you want to add to the object's ACL.

l From the Choose a Permission Level drop-down list, select the predefined set of permissions, or select Custom to define a custom set of permissions.

l Click Add.

4. To remove a user or group from the object's ACL, click the X next to the
user or group's name.

5. When you are finished modifying the object's permissions, click OK.

Access Control List (ACL)


The Access Control List (ACL) of an object is a list of users and groups, and
the access permissions that each has for the object.

For example, for the Northeast Region Sales report you can specify the
following permissions:

l The Managers and Executive user groups have View access to the report.

l The Developers user group (people who create and modify your
applications) has Modify access.

l The Administrators user group has Full Control of the report.


l The Everyone user group (any user not in one of the other groups) should
have no access to the report at all, so you assign the Denied All
permission grouping.

The default ACL of a newly created object has the following characteristics:

l The owner (the user who created the object) has Full Control permission.

l Permissions for all other users are set according to the Children ACL of
the parent folder.

Newly created folders inherit the standard ACLs of the parent folder. They
do not inherit the Children ACL.

l When creating new schema objects, if the Everyone user group is not
defined in the ACL of the parent folder, Developer adds the Everyone user
group to the ACL of the new schema object, and sets the permissions to
Custom. If the Everyone user group has permissions already assigned in
the parent folder ACL, they are inherited properly. Please note that
Workstation does not add the Everyone user group to the ACL of the new
schema object.

For example, if the Children setting of the parent folder's ACL includes
Full Control permission for the Administrator and View permission for the
Everyone group, then the newly created object inside that folder will have
Full Control permission for the owner, Full Control for the Administrator,
and View permission for Everyone.

l When a user group belongs to another user group, granting one group
permissions and denying the other any permissions will cause both
groups to have the Denied All permission.

For example, Group A belongs to, or is a member of, Group B. If the ACL on Object A for Group A is assigned Full Control and the ACL on Object A for Group B is Denied All, then the resolved ACL on Object A is Denied All.

l Modifying the ACL of a shortcut object does not modify the ACL of that
shortcut's parent object.


l When you move an object to a different folder, the moved object retains
its original ACLs until you close and reopen the project in Developer.
Using Save As to move an object to a new folder will update the ACLs for
all objects except metrics. When editing or moving a metric, you should
copy the object and place the copy in a new folder so the copied object
inherits its ACL from the Children ACL of the folder into which it is
copied.
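The inheritance behavior for newly created objects can be summarized in a short sketch. This is an illustrative assumption about the rules described above, not a MicroStrategy API; the function and data shapes are hypothetical.

```python
# Illustrative sketch (not a MicroStrategy API) of how a new object's ACL
# is derived from its owner and the parent folder's Children ACL.

def default_acl(owner, parent_children_acl):
    acl = {owner: "Full Control"}          # the creator always gets Full Control
    for trustee, grouping in parent_children_acl.items():
        acl.setdefault(trustee, grouping)  # other entries come from the Children ACL
    return acl

# The worked example above: the parent folder's Children ACL grants Full
# Control to Administrator and View to Everyone.
print(default_acl("owner", {"Administrator": "Full Control", "Everyone": "View"}))
# {'owner': 'Full Control', 'Administrator': 'Full Control', 'Everyone': 'View'}
```

Note that if the owner also appears in the parent's Children ACL, the owner's Full Control entry takes precedence in this sketch, matching the default ACL characteristics listed above.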

What Permissions Can be Granted for an Object?


When you edit an object's ACL using the object's Properties dialog box, you can assign a predefined grouping of permissions or you can create a custom grouping. The list below shows each predefined grouping and the specific permissions it grants.

l View: Grants permission to access the object for viewing only, and to provide translations for an object's name and description. Permissions granted: Browse, Read, Use, Execute.

l Modify: Grants permission to view and/or modify the object. Permissions granted: Browse, Read, Write, Delete, Use, Execute.

l Full Control: Grants all permissions for the object, and also allows modifying the ACL for the object. Permissions granted: Control, plus all other permissions.

l Denied All: Explicitly denies all permissions for the object. None of the permissions are assigned. Permissions granted: none; all are denied.

l Default: Neither grants nor denies permissions. All permissions are inherited from the groups to which the user or group belongs. Permissions granted: none.

l Custom: Allows the user or group to have a custom combination of permissions that you can define. Permissions granted: custom choice.

l Consume (only available in MicroStrategy Web): (Intelligent Cube only) Grants permission to create and execute reports based on this Intelligent Cube. Permissions granted: Browse, Read, Use.

l Add (only available in MicroStrategy Web): (Intelligent Cube only) Grants permission to create and execute reports based on this Intelligent Cube, and to republish/re-execute the Intelligent Cube to update the data. Permissions granted: Browse, Read, Use, Execute.

l Collaborate (only available in MicroStrategy Web): (Intelligent Cube only) Grants permission to create and execute reports based on this Intelligent Cube, republish/re-execute the Intelligent Cube to update the data, and modify the Intelligent Cube. Permissions granted: Browse, Read, Write, Delete, Use, Execute.

The individual permissions assigned to the user or group when you select a permission grouping are explained below.

l Browse: View the object in Developer and MicroStrategy Web.

l Read: View the object's definition in the appropriate editor, and view the object's access control list. When applied to a language object, allows users to see the language in the Translation Editor but not edit strings for this language.

l Write: Modify the object's definition in the appropriate editor, and create new objects in the parent object. For example, add a new metric in a report or add a new report to a document.

l Delete: Delete the object.

l Control: Modify the object's access control list.

l Use: Use the object when creating or modifying other objects. For example, the Use permission on a metric allows a user to create a report containing that metric. When applied to a language object, allows users to edit and save translations, and to select the language for display in their Developer or MicroStrategy Web language preferences. This permission is checked at design time, and when executing reports against an Intelligent Cube. A user with Use but not Execute permission for an Intelligent Cube can create and execute reports that use that Intelligent Cube, but cannot publish the Intelligent Cube. For more information, see Permissions and Report/Document Execution, page 99.

l Execute: Execute reports or documents that reference the object. To execute a report or document, a user must have Execute access to all objects on the report/document. This permission is checked at run time. The user must have Use permission on an Intelligent Cube to execute reports against that Intelligent Cube. For more information, see Permissions and Report/Document Execution, page 99.

When you give users only Browse access to a folder, using the Custom permissions, they can see that folder displayed, but cannot see a list of objects within the folder. However, if they perform a search, and objects within that folder match the search criteria, they can see those objects. To deny a user the ability to see objects within a folder, you must deny all access directly to the objects in the folder.

For example, grant the Browse permission to a folder, but assign Denied All for the folder's children objects, then select the Apply changes in permissions to all children objects check box. This allows a user to see the folder, but nothing inside it. Alternatively, if you assign Denied All to the folder and to its children, the user cannot see the folder or any of its contents.

Permissions for Server Governing and Configuration


A server object is a configuration-level object in the metadata called Server
Definition. It contains governing settings that apply at the server level, a list
of projects registered on the server, connection information to the metadata
repository, and so on. It is created or modified when a user goes through the
Configuration Wizard. Server definition objects are not displayed in the
interface in the same way other objects are (reports, metrics, and so on).

As with other objects in the system, you can create an ACL for a server
object that determines what system administration permissions are assigned
to which users. These permissions are different from the ones for other
objects (see table below) and determine what capabilities a user has for a
specific server. For example, you can configure a user to act as an
administrator on one server, but as an ordinary user on another. To do this,
you must modify the ACL for each server definition object by right-clicking
the Administration icon, selecting Properties, and then selecting the
Security tab.

The list below shows the groupings available for server objects, the permissions each one grants, and the tasks each allows you to perform on the server.

l Connect (permissions granted: Browse): Connect to the server.

l Monitoring (permissions granted: Browse, Read): View server definition properties, view statistics settings, and use the system monitors.

l Administration (permissions granted: Browse, Read, Use, Execute): Start/stop the server, apply runtime settings, update diagnostics at runtime, cancel jobs, idle/resume a project, disconnect users, schedule reports, delete schedules, trigger events, perform cache administration, create security filters, and use Security Filter Manager.

l Configuration (permissions granted: Browse, Read, Write, Delete, Control): Change server definition properties, change statistics settings, delete the server definition, and grant server rights to other users.

l Default (permissions granted: all permissions that are assigned to "Default"): Perform any task on that server.

l Custom... (permissions granted: custom choice): Perform the tasks your custom selections allow.


How Permissions are Determined


A user can have permissions for a given object from the following sources:

l User identity: The user identity is what determines an object's owner when
an object is created. The user identity also determines whether the user
has been granted the right to access a given object.

l Group membership: A user is granted access to an object if they belong to a group with access to the object.

l Special privileges: A user may possess a special privilege that causes the
normal access checks to be bypassed:

l Bypass Schema Object Security Access Checks allows the user to ignore the access checks for schema objects.

l Bypass All Object Security Access Checks allows the user to ignore the
access checks for all objects.

Permission Levels

A user can have permissions directly assigned to an object, and be a member of one or more groups that have a different permission grouping assigned to the object. In this case, user-level permissions override group-level permissions, and permissions that are denied at the user or group level override permissions that are granted at that level. The list below indicates what permissions are granted when permissions from multiple sources conflict.

1. Permissions that are directly denied to the user are always denied.

2. Permissions that are directly granted to the user, and not directly
denied, are always granted.

3. Permissions that are denied by a group, and not directly granted to the
user, are denied.


4. Permissions that are granted by a group, and not denied by another group or directly denied, are granted.

5. Any permissions that are not granted, either directly or by a group, are
denied.

For example, user Jane does not have any permissions directly assigned for
a report. However, Jane is a member of the Designers group, which has Full
Control permissions for that report, and is also a member of the Managers
group, which has Denied All permissions for that report. In this case, Jane is
denied all permissions for the report. If Jane is later directly granted View
permissions for the report, she would have View permissions only.
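The five precedence rules above can be sketched for a single permission as follows. The resolve() helper is hypothetical, not a MicroStrategy API: user_entry is "granted", "denied", or None (no direct assignment), and group_entries holds the values contributed by the user's groups.

```python
# Minimal sketch of the five precedence rules for resolving one permission.
# Hypothetical helper; "Full Control" and "Denied All" groupings map to
# "granted" and "denied" entries for each individual permission.

def resolve(user_entry, group_entries):
    if user_entry == "denied":        # 1. directly denied: always denied
        return "denied"
    if user_entry == "granted":       # 2. directly granted, not directly denied
        return "granted"
    if "denied" in group_entries:     # 3. denied by a group, no direct grant
        return "denied"
    if "granted" in group_entries:    # 4. granted by a group, denied nowhere
        return "granted"
    return "denied"                   # 5. not granted anywhere: denied

# Jane's View permission: no direct assignment, granted via Designers but
# denied via Managers -> denied.
print(resolve(None, ["granted", "denied"]))       # denied
# After Jane is directly granted View -> granted.
print(resolve("granted", ["granted", "denied"]))  # granted
```

The two calls at the end reproduce the Jane example: a group-level denial beats a group-level grant, but a direct grant to the user beats both.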

Default Permissions for Folders in a New Project

By default, in a new MicroStrategy project, users are only allowed to save objects within their personal folders. Only administrative users can save objects within the Public Folder directory in a MicroStrategy project. Folders in a new project are created with these default ACLs:

l Public Objects folder, Schema Objects folder

l Administrator: Full Control

l Everyone: Browse

l Public/Guest: Browse

l Inherited ACL

l Administrator: Default

l Everyone: View

l Public/Guest: View

This means that new users, as part of the Everyone group, are able to
browse the objects in the Public Objects folder, view their definitions
and use them in definitions of other objects (for example, create a
report with a public metric), and execute them (execute reports).


However, new users cannot delete these objects, or create or save new
objects to these folders.

l Personal folders

l Owner: Full Control

This means that new users can create objects in these folders and have
full control over those objects.

Permissions and Report/Document Execution


Two permissions relate to report and document execution: the Use and
Execute permissions. These have the following effects:

l The Use permission allows the user to reference or use the object when
they are modifying another object. This permission is checked at object
design time, and when executing reports against an Intelligent Cube.

l The Execute permission allows the user to execute reports or documents that use the object. This permission is checked only at report/document execution time.

A user may have four different levels of access to an object using these two permissions:

l Both Use and Execute permissions: The user can use the object to create
new reports, and can execute reports containing the object.

l Execute permission only: The user can execute previously created reports
containing the object, but cannot create new reports that use the object. If
the object is an Intelligent Cube, the user cannot execute reports against
that Intelligent Cube.

l Use permission only: The user can create reports using the object, but
cannot execute those reports.


A user with Browse, Read, and Use (but not Execute) permissions for an
Intelligent Cube can create and execute reports that use that Intelligent
Cube, but cannot publish the Intelligent Cube.

l Neither Use nor Execute permission: The user cannot create reports
containing the object, nor can the user execute such reports, even if the
user has Execute rights on the report.

Interpreting Access Rights During Report/Document Execution

The ability to execute a report or document is determined by whether the user has Execute permission on the report and Execute permission on the objects used to define that report. More specifically, Execute permission is required on all attributes, custom groups, consolidations, prompts, metrics, facts, filters, templates, and hierarchies used to define the report or document. Permissions are not checked on transformations and functions used to define the report.

If the user does not have access to an attribute, custom group, consolidation, prompt, fact, filter, template, or hierarchy used to define a report, the report execution fails.

If the user does not have access to a metric used to define a report, the
report execution continues, but the metric is not displayed in the report for
that user.
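These run-time checks can be sketched as follows. The object kinds and the run_report() helper are illustrative assumptions, not Intelligence Server internals.

```python
# Hypothetical sketch of the run-time Execute checks described above.

STRICT_KINDS = {"attribute", "custom group", "consolidation", "prompt",
                "fact", "filter", "template", "hierarchy"}

def run_report(objects, executable):
    """objects: (name, kind) pairs defining the report; executable: names
    the user holds Execute permission on. Returns the visible metrics."""
    visible_metrics = []
    for name, kind in objects:
        if name in executable:
            if kind == "metric":
                visible_metrics.append(name)
        elif kind in STRICT_KINDS:
            # missing Execute on any non-metric defining object fails the run
            raise PermissionError(f"no Execute access to {kind} '{name}'")
        # an inaccessible metric is silently dropped from the results
    return visible_metrics

report = [("Region", "attribute"), ("Revenue", "metric"), ("Cost", "metric")]
print(run_report(report, {"Region", "Revenue"}))  # ['Revenue']  (Cost dropped)
```

The example call shows both behaviors at once: the inaccessible Cost metric is dropped from the results, while a missing permission on the Region attribute would instead abort execution.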

If the user does not have access to objects used in a prompt, such as an attribute in an element list prompt, object prompt, or attribute qualification prompt, or a metric in a metric qualification prompt or object prompt, the prompt is treated as not applied if the prompt answer is optional. If the prompt answer is required, the execution may instead fail with an error about the lack of access.

This behavior allows a finer level of access control when executing reports. The same report can be deployed to many users who experience different results depending on their respective permissions on metrics.


ACLs and Personalized Drill Paths in MicroStrategy Web


You can control what attribute drill paths users see on reports. You can
determine whether users can see all drill paths for an attribute, or only those
to which they have access. You determine this access using the Enable
Web personalized drill paths check box in the Project Configuration
Editor, Project Definition: Drilling category. (In Developer, right-click a
project and select Project Configuration.)

With the Enable Web personalized drill paths check box cleared (and
thus, XML caching enabled), the attributes to which all users in
MicroStrategy Web can drill are stored in a report's XML cache. In this case,
users see all attribute drill paths whether they have access to them or not.
When a user selects an attribute drill path, Intelligence Server then checks
whether the user has access to the attribute. If the user does not have
access (for example, because of Access Control Lists), the drill is not
performed and the user sees an error message.

Alternatively, if you select the Enable Web personalized drill paths check
box, at the time the report results are created (not at drill time), Intelligence
Server checks which attributes the user may access and creates the report
XML with only the allowed attributes. This way, the users only see their
available drill paths, and they cannot attempt a drill action that is not
allowed. With this option enabled, you may see performance degradation on
Intelligence Server. This is because it must create XML for each report/user
combination rather than using XML that was cached.

For more information about XML caching, see Types of Result Caches, page
1209.

Controlling Access to Functionality: Privileges


As discussed earlier in this section, there are different types of users and groups in the user community. It is your responsibility as a system administrator to assign privileges to users and groups; privileges give you full control over the user experience.


Privileges give users access to specific MicroStrategy functionality. For example, the Create Metric privilege allows the user to use the Metric Editor to create a new metric, and the Monitor Caches privilege allows the user to view cache information in the Cache Monitor.

There is a special privilege called Bypass All Object Security Access Checks. Users with this privilege can ignore the access control permissions and are considered to have full control over all objects. For information about permissions, see Controlling Access to Objects: Permissions, page 89.

Based on their different privileges, the users and user groups can perform
different types of operations in the MicroStrategy system. If a user does not
have a certain privilege, that user does not have access to that privilege's
functionality. You can see which users are using certain privileges by using
License Manager (see Using License Manager, page 728).

Most privileges may be granted within a specific project or across all projects. Certain administrative privileges, such as Configure Group Membership, do not apply to specific projects and can only be granted at the project source level.

For a complete list of privileges and what they control in the system, see the
List of Privileges section.

Assigning Privileges to Users and Groups


Privileges can be assigned to users and user groups directly or through security roles. The difference is that direct assignments grant functionality across all projects, while security roles apply only within a specified project (see Defining Sets of Privileges: Security Roles, page 106).


To Assign Privileges to Users or Groups

1. From Developer User Manager, edit the user with the User Editor or
edit the group with the Group Editor.

2. Expand User Definition or Group Definition, and then select Project Access.

3. Select the check boxes to grant privileges to the user or group.

Rather than assigning individual users and groups these privileges, it may
be easier for you to create Security Roles (collections of privileges) and
assign them to users and groups. Then you can assign additional privileges
individually when there are exceptions. For more information about security
roles, see Defining Sets of Privileges: Security Roles, page 106.

Assigning Privileges to Multiple Users at Once

You can grant, revoke, and replace the existing privileges of users, user
groups, or security roles with the Find and Replace Privileges dialog box.
This dialog box allows you to search for the user, user group, or security role
and change their privileges, depending on the tasks required for their work.

For example, your organization is upgrading Flash on all users' machines. Until the Flash update is complete, the users will not be able to export reports to Flash. You can use Find and Replace Privileges to revoke the Export to Flash privilege assigned to users, and when the upgrade is complete you can grant the privilege to the users again.

To access the Find and Replace Privileges dialog box, in Developer, right-
click the User Manager and select Find and Replace Privileges.

How are Privileges Inherited?

A user's privileges within a given project include the following:


l Privileges assigned directly to the user (see Assigning Privileges to Users and Groups, page 102)

l Privileges assigned to any groups of which the user is a member (see About MicroStrategy User Groups, page 81)

Groups also inherit privileges from their parent groups.

l Privileges assigned to any security roles that are assigned to the user
within the project (see Defining Sets of Privileges: Security Roles, page
106)

l Privileges assigned to any security roles that are assigned to a group of which the user is a member
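The union of these four sources can be sketched with a small model. The classes and privilege names below are illustrative assumptions, not MicroStrategy objects or APIs.

```python
# Illustrative model of the four privilege sources above: direct privileges,
# group privileges (including parent groups transitively), and security roles
# assigned to the user or to their groups.

class Group:
    def __init__(self, privileges=(), roles=None, parents=()):
        self.privileges = set(privileges)
        self.roles = roles or {}       # project name -> privileges from a security role
        self.parents = list(parents)   # groups this user/group belongs to

class User(Group):
    pass

def all_groups(member):
    """Every group the member belongs to, including parents transitively."""
    seen, stack = [], list(member.parents)
    while stack:
        g = stack.pop()
        if g not in seen:
            seen.append(g)
            stack.extend(g.parents)
    return seen

def effective_privileges(user, project):
    privs = user.privileges | set(user.roles.get(project, ()))
    for g in all_groups(user):
        privs |= g.privileges | set(g.roles.get(project, ()))
    return privs

# Hypothetical example: a Developer group that inherits from Analyst.
analyst = Group(privileges={"Create Report"})
developer = Group(privileges={"Create Metric"}, parents=[analyst])
jane = User(roles={"Sales": {"Administer Caches"}}, parents=[developer])
print(sorted(effective_privileges(jane, "Sales")))
# ['Administer Caches', 'Create Metric', 'Create Report']
```

In the example, Jane's privileges in the Sales project combine her project-level security role with the privileges inherited through the Developer group and its parent, the Analyst group.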

Predefined User Groups and Privileges


MicroStrategy comes with several predefined user groups. For a complete
list and explanation of these groups, see About MicroStrategy User Groups,
page 81. These groups possess the following privileges:

l Everyone, Public/Guest, 3rd Party Users, LDAP Public/Guest, and LDAP Users have no predefined privileges.

l The predefined product-based user groups possess all the privileges associated with their corresponding products. For a list of these groups, see About MicroStrategy User Groups, page 81.

International Users is a member of the following product-based groups: Analyst, Mobile User, Web Reporter, and Web Analyst. It has the privileges associated with these groups.

l System Monitors and its member groups have privileges based on their
expected roles in the company. To see the privileges assigned to each
group, right-click the group and select Grant Access to Projects.


How Predefined User Groups Inherit Privileges

Several of the predefined user groups form hierarchies, which allow groups
to inherit privileges from any groups at a higher level within the hierarchy.
These hierarchies are as follows:

In the case of the MicroStrategy Web user groups, the Web Analyst inherits
the privileges of the Web Reporter. The Web Professional inherits the
privileges of both the Web Analyst and Web Reporter. The Web Professional
user group has the complete set of MicroStrategy Web privileges.

l Web Reporter

l Web Analyst

l Web Professional

In the case of the MicroStrategy Developer user groups, the Developer inherits the privileges of the Analyst and therefore has more privileges than the Analyst.

l Analyst

l Developer

The various System Monitors user groups inherit the privileges of the System Monitors user group and therefore have more privileges than the System Monitors group itself. In addition, each has its own specific set of privileges that are not shared by the other System Monitors groups.

l System Monitors

l various System Monitors groups

This group inherits the privileges of the Analyst, Mobile User, Web Reporter,
and Web Analyst groups.

l International Users


Defining Sets of Privileges: Security Roles


A security role is a collection of project-level privileges that are assigned to
users and groups. For example, you might have two types of users with
different functionality needs: the Executive Users who need to run, sort, and
print reports, and the Business Analysts who need additional capabilities to
drill and change subtotal definitions. In this case, you can create two
security roles to suit these two different types of users.

Security roles exist at the project source level, and can be used in any
project registered with Intelligence Server. A user can have different
security roles in each project. For example, an administrator for the
development project may have a Project Administrator security role in that
project, but the Normal User security role in all other projects on that server.

A security role is fundamentally different from a user group in the following ways:

l A group is a collection of users that can be assigned privileges (or security roles) all at once, for the project source and all projects in it.

l A security role is a collection of privileges in a project. Those privileges are assigned as a set to various users or groups, on a project-by-project basis.

For information about how privileges are inherited from security roles and groups, see Controlling Access to Functionality: Privileges, page 101.

Managing Security Roles


The Security Role Manager lists all the security roles available in a project
source. From this manager you can assign or revoke security roles for users
in projects, or create or delete security roles. For additional methods of
managing security roles, see Other Ways of Managing Security Roles, page
108.


To Assign a Security Role to Users or Groups in a Project

1. In Developer, log in to the project source containing the security role.

2. Expand Administration, then Configuration Managers, and then select Security Roles.

3. Double-click the security role you want to assign to the user or group.

4. Select the Members tab.

5. From the Select a Project drop-down list, select the project for which
to assign the security role.

6. From the drop-down list of groups, select the group containing a user or
group you want to assign the security role to. The users or groups that
are members of that group are shown in the list box below the drop-
down list.

l By default, users are not shown in this list box. To view the users as
well as the groups, select the Show users check box.

l To assign a top-level group to a security role, from the drop-down list select All Groups.

7. Select a desired user or group.

8. Click the > icon. The user or group moves to the Selected members
list. You can assign multiple users or groups to the security role by
selecting them and clicking the > icon.

9. When you are finished assigning the security role, click OK.

To Create a Security Role

1. In Developer, log in to the project source you want to create the security role in.


2. Expand Administration, go to Configuration Managers > Security Roles.

3. From the File menu, point to New, and select Security Role.

4. Enter a name and description for the new security role.

5. Select the Privileges tab.

6. Select the privileges to add to this security role. For an explanation of each privilege, see the List of Privileges section.

To select all privileges in a privilege group, select the group.

7. To assign the role to users, select the Members tab and follow the
instructions in To Assign a Security Role to Users or Groups in a
Project, page 107.

8. Click OK.

To Delete a Security Role

1. In Developer, log in to the project source you want to remove the security role from.

2. Expand Administration, then Configuration Managers, and then select Security Roles.

3. Click the security role that you want to remove.

4. From the File menu select Delete.

5. Click Yes.

Other Ways of Managing Security Roles

You can also assign security roles to a user or group in the User Editor or
Group Editor. From the Project Access category of the editor, you can
specify what security roles that user or group has for each project.


You can assign roles to multiple users and groups in a project through the
Project Configuration dialog box. The Project Access - General category
displays which users and groups have which security roles in the project,
and allows you to re-assign the security roles.

You can also use Command Manager to manage security roles. Command
Manager is a script-based administrative tool that helps you perform
complex administrative actions quickly. For specific syntax for security role
management statements in Command Manager, see Security Role
Management in the Command Manager on-line help (from Command
Manager, press F1, or select the Help menu). For general information about
Command Manager, see Chapter 15, Automating Administrative Tasks with
Command Manager.

If you are using UNIX, you must use Command Manager to manage your
system's security roles.

Controlling Access to a Project


You can deny user or group access to a specific MicroStrategy project by
using a security role.

To Deny User or Group Access to a Project

1. In Developer, right-click on the project you want to deny access to. Select Project Configuration.

2. Expand the Project Access category.

3. In the Select a security role drop-down list, select the security role
that contains the user or group who you want to deny project access.

4. On the right-hand side of the Project access - General dialog, select the
user or group who you want to deny project access. Then click the left
arrow to remove that user or group from the security role.


5. Using the right arrow, add any users to the security role for whom you
want to grant project access. To see the users contained in each group,
highlight the group and check the Show users check box.

6. Make sure the user or group whose access you want to deny does not appear in the Selected members pane on the right-hand side of the dialog. Then click OK.

7. In Developer, under the project source that contains the project you are
restricting access to, expand Administration, then expand User
Manager.

8. Click the group that contains the user to whom you want to deny project access. Then double-click the user in the right-hand side of Developer.

9. Expand User Definition, then select Project Access.

10. In the Security Role Selection row, under the project you want to
restrict access to, review the Security Role Selection drop-down list.
Make sure that no security role is associated with this project for this
user.

11. Click OK.

When the user attempts to log in to the project, they receive the message
"No projects were returned by this project source."

The Role-Based Administration Model


Beginning with version 9.0, the MicroStrategy product suite comes with a number of predefined security roles for administrators. These roles make it easy to delegate administrative tasks.

For example, your company security policy may require you to keep the user
security administrator for your projects separate from the project resource
administrator. Rather than specifying the privileges for each administrator
individually, you can assign the Project Security Administrator role to one


administrator, and the Project Resource Administrator role to another. Because users can have different security roles for each project, you can use the same security role for different users in different projects to further delegate project administration duties.

The predefined project administration roles cover every project-level administrative privilege except for Bypass All Object Security Access Checks. None of the roles have any privileges in common. For a list of the privileges included with each predefined security role, see the List of Privileges section.

The predefined administration security roles are:

l Analyst, who has authoring capabilities.

l Analytics Architect, who can create, publish, and optimize a federated data layer as the enterprise's single version of the truth. Users can build and maintain schema objects and abstraction layers on top of varied, changing enterprise assets.

l Application Administrator, who has access to all application-specific tasks.

l Application Architect, who creates, shares, and maintains intelligence applications for the enterprise.

l Certifier, who can certify objects in addition to having authoring capabilities.

l Collaborator, who can view and collaborate on a dashboard or document they have access to.

l Consumer, who can only view a dashboard or document they have access
to.

l Database Architect, who can optimize query performance and utilization based on query type, usage patterns, and application design requirements by tuning VLDB settings or configuring schema objects.


l Embedded Analytics Architect, who can inject, extend, and embed analytics into portals, third-party, mobile, and white-labelled applications.

l IntroBI, which is used for the MicroStrategy class "Introduction to Enterprise Business Intelligence."

l Mobile Architect, who builds, compiles, deploys, and maintains mobile environments and applications. This user can also optimize the end user experience when accessing applications via mobile devices.

l Northeast Users, which is used for the MicroStrategy class "Introduction to Enterprise Business Intelligence."

l Platform Administrator, who configures the Intelligence Server, maintains the security layer, monitors system usage, and optimizes architecture in order to reduce errors, maximize uptime, and boost performance.

l Power Users, which have the largest subset of privileges of any security
role.

l Project Bulk Administrators, who can perform administrative functions on multiple objects with Object Manager (see Copy Objects Between Projects: Object Manager, page 762), Command Manager (see Chapter 15, Automating Administrative Tasks with Command Manager), and the Bulk Repository Translation Tool.

l Project Operations Administrators, who can perform maintenance on various aspects of a project.

l Project Operations Monitors, who can view the various Intelligence Server monitors but cannot make any changes to the monitored systems.

l Project Resource Settings Administrators, who can configure project-level settings.

l Project Security Administrators, who create users and manage user and
object security.


l System Administrator, who sets up, maintains, monitors, and continuously supports the infrastructure environment through deployment on cloud, Windows, or Linux.

For instructions on how to assign these security roles to users or groups, see Managing Security Roles, page 106.

Do not modify the privileges for an out-of-the-box security role. During upgrades to newer versions of MicroStrategy, the privileges for the out-of-the-box security roles are overwritten with the default privileges. Instead, you should copy the security role you need to modify and make changes to the copied version.

Controlling Access to Data


Access control governs the resources that an authenticated user is able to read, modify, or write. Data is the primary resource of interest in any security scheme, which determines what source data a user is allowed to access. You may be more familiar with the terms authentication (making sure users are who they say they are) and authorization (making sure users can access the data they are entitled to see once their identity is established).

The following sections discuss the ways in which data access can be controlled.

Controlling Access to the Database: Connection Mappings


Connection mappings allow you to assign a user or group in the
MicroStrategy system to a login ID on the data warehouse RDBMS. The
mappings are typically used to take advantage of one of several RDBMS
data security techniques (security views, split fact tables by rows, split fact
tables by columns) that you may have already created. For details on these
techniques, see Controlling Access to Data at the Database (RDBMS) Level,
page 139.


Why Use Connection Mappings?


Use a connection mapping if you need to differentiate MicroStrategy users
from each other at the data warehouse level or if you need to direct them to
separate data warehouses. This is explained in more detail below.

First it is important to know that, by default, all users in a MicroStrategy project use the same database connection/DSN and database login when connecting to the database. This means that all users have the same security level at the data warehouse, and therefore security views cannot be assigned to a specific MicroStrategy user. In this default configuration, when the database administrator (DBA) uses an RDBMS feature to view a list of users connected to the data warehouse, all MicroStrategy users appear with the same name. For example, if forty users are signed on to the MicroStrategy system and running jobs, the DBA sees a list of forty users called "MSTR users" (or whatever name is specified in the default database login). This is shown in the diagram below, in which all jobs running against the data warehouse use the "MSTR users" database login.

Creating a Connection Mapping


You define connection mappings with the Project Configuration Editor in
Developer. To create a connection mapping, you assign a user or group
either a database connection or database login that is different from the


default. For information on this, see Connecting to the Data Warehouse, page 22.

To Create a Connection Mapping

1. In Developer, log into your project. You must log in as a user with
administrative privileges.

2. Go to Administration > Projects > Project Configuration.

3. Expand the Database Instances category, and then select Connection Mapping.

4. Right-click in the grid and select New to create a new connection mapping.

5. Double-click the new connection mapping in each column to select the database instance, database connection, database login, and language.

6. Double-click the new connection mapping in the Users column. Click ...
(the browse button).

7. Select the desired user or group and click OK. That user or group is
now associated with the connection mapping.

8. Click OK.

Connection Mapping Example


One case in which you may want to use connection mappings is if you have
existing security views defined in the data warehouse and you want to allow
MicroStrategy users' jobs to execute on the data warehouse using those
specific login IDs. For example,

l The CEO can access all data (warehouse login ID = "CEO")

l All other users have limited access (warehouse login ID = "MSTR users")


In this case, you would need to create a user connection mapping within
MicroStrategy for the CEO. To do this:

l Create a new database login definition for the CEO in MicroStrategy so it matches their existing login ID on the data warehouse

l Create the new connection mapping in MicroStrategy to specify that the CEO user uses the new database login

This is shown in the diagram below in which the CEO connects as CEO
(using the new database login called "CEO") and all other users use the
default database login "MSTR users."

Both the CEO and all the other users use the same project, database
instance, database connection (and DSN), but the database login is
different for the CEO.

If we were to create a connection mapping in the MicroStrategy Tutorial project according to this example, it would look like the diagram below.


For information on creating a new database connection, see Connecting to the Data Warehouse, page 22. For information on creating a new database login, see Connecting to the Data Warehouse, page 22.

Connection mappings can also be made for user groups and are not limited
to individual users. Continuing the example above, if you have a Managers
group within the MicroStrategy system that can access most data in the data
warehouse (warehouse login ID = "Managers"), you could create another
database login and then create another connection mapping to assign it to
the Managers user group.
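Connection mappings like the ones in this example can also be scripted with Command Manager rather than created in the Project Configuration Editor. The sketch below assumes the database instance, connection, and login objects already exist; the object names are hypothetical, and the exact clause names should be verified against the Connection Mapping outlines in the Command Manager help for your version.

```text
CREATE CONNECTION MAP FOR GROUP "Managers" DBINSTANCE "Tutorial Data"
DBCONNECTION "Tutorial Connection" DBLOGIN "Managers"
ON PROJECT "MicroStrategy Tutorial";
```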

Another case in which you may want to use connection mappings is if you
need to have users connect to two data warehouses using the same project.
In this case, both data warehouses must have the same structure so that the
project works with both. This may be applicable if you have a data
warehouse with domestic data and another with foreign data and you want
users to be directed to one or the other based on the user group to which
they belong when they log in to the MicroStrategy system.

For example, if you have two user groups such that:

l "US users" connect to the U.S. data warehouse (data warehouse login ID
"MSTR users")


l "Europe users" connect to the London data warehouse (data warehouse login ID "MSTR users")

In this case, you would need to create a user connection mapping within
MicroStrategy for both user groups. To do this, you would:

l Create two database connections in MicroStrategy, one to each data warehouse (this assumes that DSNs already exist for each data warehouse)

l Create two connection mappings in the MicroStrategy project that link the
groups to the different data warehouses via the two new database
connection definitions

This is shown in the diagram below.


The project, database instance, and database login can be the same, but
the connection mapping specifies different database connections (and
therefore, different DSNs) for the two groups.

Linking Database Users and MicroStrategy Users: Passthrough Execution

You can link a MicroStrategy user to an RDBMS login ID using the User
Editor (on the Authentication tab, specify the Warehouse Login and
Password) or using Command Manager. This link is required for database
warehouse authentication (see Implement Database Warehouse
Authentication, page 614) but works for other authentication modes as well.
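As a sketch of the Command Manager approach, the statement below links a user to a warehouse login ID. The user name, warehouse credentials, and the WHLOGIN/WHPASSWORD clause names are assumptions; confirm the exact ALTER USER syntax in the User Management outlines of the Command Manager help for your version.

```text
ALTER USER "jsmith" WHLOGIN "jsmith_dw" WHPASSWORD "jsmith_dw_password";
```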

You can configure each project to use either connection mappings and/or
the linked warehouse login ID when users execute reports, documents, or
browse attribute elements. If passthrough execution is enabled, the project
uses the linked warehouse login ID and password as defined in the User
Editor (Authentication tab). If no warehouse login ID is linked to a user,
Intelligence Server uses the default connection and login ID for the project's
database instance.

By default, warehouse passthrough execution is turned off, and the system uses connection mappings. If no connection mapping is defined for the user,
Intelligence Server uses the default connection and login ID for the project's
database instance.

Why Use Passthrough Execution?


You may want to use passthrough execution for these reasons:

l RDBMS auditing: You can track which users are accessing the RDBMS system down to the individual database query. Mapping multiple users to the same RDBMS account blurs the ability to track which users have issued which RDBMS queries.


l Teradata spool space: If you use the Teradata RDBMS, note that it has a
limit for spool space set per account. If multiple users share the same
RDBMS account, they are collectively limited by this setting.

l RDBMS security views: If you use security views, each user needs to log
in to the RDBMS with a unique database login ID so that a database
security view is enforced.

Enabling Linked Warehouse Logins


You can configure linked warehouse logins with the Project Configuration Editor in Developer. For information on database connections and database logins, see Connecting to the Data Warehouse, page 22.

To Enable Linked Warehouse Logins

1. In Developer, log into your project. You must log in as a user with
administrative privileges.

2. From the Administration menu, point to Projects, and select Project Configuration.

3. Expand the Database Instances category, expand Authentication, and then select Warehouse.

4. Select the Use warehouse pass-through credentials check box.

5. To use warehouse credentials for all database instances, select the For
all database instances option.

6. To use warehouse credentials for specific database instances, select the For selected database instances option. Then select those database instances from the list below.

7. Click OK.


Restricting Access to Data: Security Filters


Security filters enable you to control what warehouse data users can see
when that data is accessed through MicroStrategy. A security filter can be
assigned to a user or group to narrow the result set when they execute
reports or browse elements. The security filter applies to all reports and
documents, and all attribute element requests, submitted by a user.

For example, two regional managers can be assigned different security filters for their regions: one has a security filter that only shows data from the Northeast region, and the other has a security filter that only shows data from the Southwest region. If these two regional managers run the same report, they may see different report results.

Security filters serve a similar function to database-level techniques such as database views and row level security. For information about controlling
data security at the data warehouse level, see Controlling Access to Data at
the Database (RDBMS) Level, page 139.

For more information about security filters, see the following:

l Security Filter Example, page 121

l How Security Filters Work, page 122

l Creating and Applying a Security Filter, page 123

l Security Filters and Metric Levels, page 125

l Using a Single Security Filter for Multiple Users: System Prompts, page
136

l Merging Security Filters, page 131

Security Filter Example


A user in the MicroStrategy Tutorial project has a security filter defined as
Subcategory=TV. When this user browses the Product hierarchy beginning


with the Category attribute, they only see the Electronics category. Within
the Electronics category, they see only the TV subcategory. Within the TV
subcategory, they see all Items within that subcategory.

When this user executes a simple report with Category, Subcategory, and
Item in the rows, and Revenue in the columns, only the Items from the TV
Subcategory are returned, as shown in the example below.

If this user executes another report with Category in the rows and Revenue
in the columns, only the Revenue from the TV Subcategory is returned, as
shown in the example below. The user cannot see any data from attribute
elements that are outside the security filter.

How Security Filters Work


Security filters are the same as regular filters except that they can contain
only attribute qualifications, custom expressions, and joint element lists.
Relationship filters and metric qualifications are not allowed in a security
filter. A security filter can include as many expressions as you need, joined


together by logical operators. For more information on creating filters, see the Filters section in the Basic Reporting Help.

A security filter comes into play when a user is executing reports and
browsing elements. The qualification defined by the security filter is used in
the WHERE clause for any report that is related to the security filter's
attribute. By default, this is also true for element browsing: when a user
browses through a hierarchy to answer a prompt, they only see the attribute
elements that the security filter allows them to see. For instructions on how
to disable security filters for element browsing, see To Disable Security
Filters for Element Browsing, page 125.

Security filters are used as part of the cache key for report caching and
element caching. This means that users with different security filters cannot
access the same cached results, preserving data security. For more
information about caching, see Chapter 10, Improving Response Time:
Caching.

Each user or group can be directly assigned only one security filter for a
project. Users and groups can be assigned different security filters for
different projects. In cases where a user inherits one or more security filters
from any groups that they belong to, the security filters may need to be
merged. For information about how security filters are merged, see Merging
Security Filters, page 131.

Creating and Applying a Security Filter


You create and apply security filters in the Security Filter Manager. Make
sure you inform your users of any security filters assigned to them or their
group. If you do not inform them of their security filters, they may not know
that the data they see in their reports has been filtered, which may cause
misinterpretation of report results.

To create security filters, you must have the following privileges:


l Create Application Objects (under the Common Privileges privilege group)

l Use Report Filter Editor (under the Developer privilege group)

l Use Security Filter Manager (under the Administration privilege group)

To Create and Apply a Security Filter for a User or Group

1. In Developer, from the Administration menu, go to Projects > Security Filter Manager.

2. From the Choose a project drop-down list, select the project that you want to create a security filter for.

3. Select the Security Filters tab.

4. Do one of the following:

l To create a new security filter, click New. The Security Filter Editor opens.

l To convert an existing filter into a security filter, click Import. Browse to the filter you want to convert and click Open. Specify a name and location for the new security filter and click Save.

5. In the left side of the Security Filter Manager, on the Security Filters tab, browse to the security filter that you want to apply, and select that security filter.

6. In the right side of the Security Filter Manager, select Groups/Users.

7. Browse to the user or group that you want to apply the security filter to, and select that user or group.

8. Click > to apply the selected security filter to the selected user or group.

9. Click OK.
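The same task can be scripted with Command Manager. The sketch below creates a security filter from a filter expression and applies it to a user; the filter expression, folder path, and names are illustrative only, and the exact statement syntax should be checked against the Security Filter Management outlines in the Command Manager help for your version.

```text
CREATE SECURITY FILTER "TV Only" IN FOLDER "\Project Objects\MD Security Filters"
FILTER "Subcategory = TV" ON PROJECT "MicroStrategy Tutorial";
APPLY SECURITY FILTER "TV Only" TO USER "jsmith" ON PROJECT "MicroStrategy Tutorial";
```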


To Disable Security Filters for Element Browsing

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select Project Configuration.

3. Expand the Project Definition category, and then select Advanced.

4. Under Attribute element browsing, clear the Apply security filters to element browsing check box.

5. Click OK.

6. Restart Intelligence Server for your changes to take effect.

Security Filters and Metric Levels


In certain situations involving level metrics, users may be able to see a
limited amount of data from outside their security filter. Specifically, if a
metric is defined with absolute filtering on a level above that used in the
security filter's expression, the filter expression is raised to the metric's
level. For information about metric levels and filtering in metrics, see the
Metrics section in the Advanced Reporting Help.

For example, consider a metric called Category Revenue that is defined to return the revenue across all items in each category. Its level expression is
Target=Category, Filtering=Absolute. When a user with a security filter
Subcategory=TV executes a report with the Category Revenue metric, the
Category Revenue metric displays the total revenue for the category. The
user's security filter is effectively changed to show the entire Category in
which TV is a Subcategory.

This behavior can be modified by using the top range attribute and bottom
range attribute properties.


l A top range attribute specifies the highest level of detail in a given hierarchy that the security filter allows the user to view. If a top range attribute is specified, the security filter expression is not raised to any level above the top range.

l A bottom range attribute specifies the lowest level of detail in a given hierarchy that the security filter allows the user to view. If this is not specified, the security filter can view every level lower than the specified top range attribute, as long as it is within the qualification defined by the filter expression.

The top and bottom range attributes can be set to the same level.

For instructions on how to assign range attributes to security filters, see Assigning a Top or Bottom Range Attribute to a Security Filter, page 129.

The examples below use a report with Category, Subcategory, and Item on
the rows, and three metrics in the columns:

l Revenue

l Subcategory Revenue, which is defined with absolute filtering to the Subcategory level

l Category Revenue, which is defined with absolute filtering to the Category level

The user executing this report has a security filter that restricts the
Subcategory to the TV element.

No Top or Bottom Range Attribute

If no top or bottom range attribute is specified, then at the level of the security filter (Subcategory) and below, the user cannot see data outside
their security filter. Above the level of the security filter, the user can see
data outside the security filter if it is in a metric with absolute filtering for
that level. Even in this case, the user sees only data for the Category in
which their security filter is defined.


In the example report below, the user's security filter does not specify a top or bottom range attribute. Item-level detail is displayed for only the items within the TV subcategory. The Subcategory Revenue is displayed for all items
within the TV subcategory. The Category Revenue is displayed for all items
in the Category, including items that are not part of the TV subcategory.
However, only the Electronics category is displayed. This illustrates how the
security filter Subcategory=TV is raised to the category level such that
Category=Electronics is the filter used with Category Revenue.

Top Range Attribute: Subcategory

If a top range attribute is specified, then the user cannot see any data outside their security filter. This is true even at levels above the top level, regardless of whether metrics with absolute filtering are used.

In the example report below, the user's security filter specifies a top range
attribute of Subcategory. Here, the Category Revenue is displayed for only
the items within the TV subcategory. The security filter Subcategory=TV is
not raised to the Category level, because Category is above the specified
top level of Subcategory.


Bottom Range Attribute: Subcategory

If a bottom range attribute is specified, the user cannot see data aggregated
at a lower level than the bottom level.

In the example report below, the user's security filter specifies a bottom
range attribute of Subcategory. Item-level detail is not displayed, because
Item is a level below the bottom level of Subcategory. Instead, data for the
entire Subcategory is shown for each item. Data at the Subcategory level is
essentially the lowest level of granularity the user is allowed to see.


Assigning a Top or Bottom Range Attribute to a Security Filter

You assign top and bottom range attributes to security filters in the Security
Filter Manager. You can assign range attributes to a security filter for all
users, or to the security filters per user.

You can assign the same attribute to a security filter as a top and bottom
range attribute. A security filter can have multiple top or bottom range
attributes as long as they are from different hierarchies. You cannot assign
multiple attributes from the same hierarchy to either a top or bottom range.
However, you can assign attributes from the same hierarchy if one is a top
range attribute and one is a bottom range attribute. For example, you can
assign Quarter (from the Time hierarchy) and Subcategory (from the
Products hierarchy) as top range attributes, and Month (from the Time
hierarchy) and Subcategory as bottom range attributes.

To modify security filters, you must have the Use Security Filter Manager
privilege.


To Assign a Top or Bottom Range Attribute to a Security Filter

1. In Developer, from the Administration menu, point to Projects and then select Security Filter Manager.

2. From the Choose a project drop-down list, select the project that you
want to modify security filters for.

3. Select the Attributes tab.

4. Browse to the attribute that you want to set as a top or bottom range
attribute, and select that attribute.

5. To apply a top or bottom range attribute to a security filter for all users:

l In the right side of the Security Filter Manager, select Security Filters.

l Browse to the security filter that you want to apply the range attribute
to.

l Expand that security filter, and select either the Top range
attributes or Bottom range attributes folder.

l Click > to apply the selected attribute to the selected security filter.

6. To apply a top or bottom range attribute to a security filter for a single user or group:

l In the right side of the Security Filter Manager, select Groups/Users.

l Browse to the user or group that you want to apply the range attribute
to.

l Expand that user or group and select the security filter that you want
to apply the range attribute to.

l Expand that security filter, and select either the Top range
attributes or Bottom range attributes folder.


l Click > to apply the selected attribute to the selected security filter for
the selected user or group.

7. Click OK.

Merging Security Filters


A user can be assigned a security filter directly, and can inherit a security
filter from any groups that they belong to. Because of this, multiple security
filters may need to be merged when executing reports or browsing elements.

MicroStrategy supports the following methods of merging security filters:

l Merging Related Security Filters with OR and Unrelated Security Filters with AND, page 132 (this is the default method for merging security filters)

l Merging All Security Filters with AND, page 135

For the examples in these sections, consider a project with the following
user groups and associated security filters:

Group        Security Filter          Hierarchy

Electronics  Category = Electronics   Product

Drama        Subcategory = Drama      Product

Movies       Category = Movies        Product

Northeast    Region = Northeast       Geography

You control how security filters are merged at the project level. You can
change the merge settings in the Project Configuration Editor for the
selected project, in the Security Filter category. After making any changes to
the security filter settings, you must restart Intelligence Server for those
changes to take effect.


Changing how security filters are merged does not automatically invalidate
any result caches created for users who have multiple security filters.
MicroStrategy recommends that you invalidate all result caches in a project
after changing how security filters are merged for that project. For
instructions on how to invalidate all result caches in a project, see
Managing Result Caches, page 1221.

Merging Related Security Filters with OR and Unrelated Security Filters with AND

By default, security filters are merged with an OR if they are related, and
with an AND if they are not related. That is, if two security filters are related,
the user can see all data available from either security filter. However, if the
security filters are not related, the user can see only the data available in
both security filters.

Two security filters are considered related if the attributes that they derive from belong to the same hierarchy, such as Country and Region, or Year and Month. In the example security filters given above, the Electronics, Drama, and Movies security filters are all related, and the Northeast security filter is not related to any of the others.

Using this merge method, a user who is a member of both the Electronics
and Drama groups can see data from the Electronics category and the
Drama subcategory.

A user who is a member of both the Movies and Drama groups can see data
from all subcategories in the Movies category, not just the Drama
subcategory. A user who is a member of both the Electronics and Movies
groups can see data from both categories.

If a user who is a member of the Movies and Northeast groups executes a
report with Region, Category, and Subcategory in the rows, only data from
the Movies category in the Northeast region is shown.

Data for the Movies category from outside the Northeast region is not
available to this user, nor is data for the Northeast region for other
categories.
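The default merge behavior described above can be sketched in Python. This is an illustrative model only, not MicroStrategy's actual engine code; the merge_security_filters helper and its (hierarchy, condition) tuple format are hypothetical:

```python
# Sketch of the default merge: filters on attributes in the same
# hierarchy are ORed together, and the groups are ANDed across
# hierarchies. Hypothetical helper, not a MicroStrategy API.
from collections import defaultdict

def merge_security_filters(filters):
    """filters: list of (hierarchy, condition) tuples."""
    by_hierarchy = defaultdict(list)
    for hierarchy, condition in filters:
        by_hierarchy[hierarchy].append(condition)
    groups = []
    for conditions in by_hierarchy.values():
        if len(conditions) == 1:
            groups.append(conditions[0])
        else:
            # Related filters (same hierarchy) are unioned with OR.
            groups.append("(" + " OR ".join(conditions) + ")")
    # Unrelated groups (different hierarchies) are intersected with AND.
    return " AND ".join(groups)

# A user in the Movies, Drama, and Northeast groups:
clause = merge_security_filters([
    ("Product", "Category = 'Movies'"),
    ("Product", "Subcategory = 'Drama'"),
    ("Geography", "Region = 'Northeast'"),
])
print(clause)
# (Category = 'Movies' OR Subcategory = 'Drama') AND Region = 'Northeast'
```

The resulting WHERE-style clause matches the behavior described above: the two related Product filters widen each other, while the unrelated Geography filter restricts the result.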

The following examples show how the data engine treats related and
unrelated attributes.

l Related Attributes

l Unrelated Attributes

Related Attributes

Two security filters are considered related if the attributes that they derive
from belong to the same hierarchy with a one-to-one or one-to-many
relation, such as Manager and Call Center, Country and Region, or Year and
Month.


Some advanced cases also count as related: sibling attributes with a
one-to-many relationship to a common child or parent attribute, such as
Region and Distribution Center, or MicroStrategy User and Distribution
Center. The respective security filters are merged using OR.

Unrelated Attributes

Two security filters are considered unrelated if the attributes that they
derive from have a many-to-many relationship, such as Item and Catalog.

Some advanced cases also count as unrelated: sibling attributes whose join
path goes up and down the hierarchy multiple times, such as Employee and
Month of Year. The respective security filters are merged using AND, not
OR. Notice how the join path may start from Employee, go all the way up to
Quarter, come down to Month, and then go up again to Month of Year.


Merging All Security Filters with AND

You can also configure Intelligence Server to always merge security filters
with an AND, regardless of whether they are related.

As in the first method, a user who is a member of both the Movies and
Northeast groups would see only information about the Movies category in
the Northeast region.

A user who is a member of both the Movies and Drama groups would see
only data from the Drama subcategory of Movies. Data for the other
subcategories of Movies is not available to this user.

This setting may cause problems if a user is a member of two groups whose
security filters are mutually exclusive. For example, a user who is a member
of both the Movies and Electronics groups cannot see any data from the
Product hierarchy, because that hierarchy does not contain any data that
belongs to both the Movies and Electronics categories.
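The mutually exclusive case can be demonstrated with an in-memory SQLite database. This is illustrative only; the table and column names are hypothetical:

```python
import sqlite3

# Illustrative demo of why ANDing two mutually exclusive security
# filters returns no data. Hypothetical table and column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("Movies", 100.0), ("Electronics", 250.0)])

# With the intersect-all setting, both filters are ANDed together:
rows = conn.execute(
    "SELECT * FROM sales "
    "WHERE category = 'Movies' AND category = 'Electronics'"
).fetchall()
print(rows)  # [] -- no row can satisfy both conditions at once
```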

To configure how security filters are merged, you must have the Configure
Project Basic privilege.


To Configure How Intelligence Server Merges Multiple Security Filters for a User or Group

1. In Developer, log into a project. You must log in as a user with
   administrative privileges.

2. From the Administration menu, point to Projects, and then select
   Project Configuration.

3. Expand the Security Filter category, and then select General.

4. Under Security Filter Merge Options, select one of the options:

l Union (OR) Security Filters on related attributes, intersect (AND)
Security Filters on unrelated attributes (see Merging Related
Security Filters with OR and Unrelated Security Filters with AND,
page 132)

l Intersect (AND) all Security Filters (see Merging All Security
Filters with AND, page 135)

5. Click OK.

6. Restart Intelligence Server for your changes to take effect.

Using a Single Security Filter for Multiple Users: System Prompts


A system prompt is a special type of prompt that does not require an answer
from the user. Instead, it is answered automatically by Intelligence Server.
System prompts are in the Public Objects/Prompts/System Prompts
folder in Developer.

l Like other prompt objects, answers to system prompts are used to match
caches. Therefore, users do not share caches for reports that contain
different answers to system prompts.

l The system prompts Token 1, Token 2, Token 3, and Token 4 are
provided to support using an XQuery source to authenticate users for a
MicroStrategy project. For steps to report on and authenticate using
XQuery sources, see the Advanced Reporting Guide.

The User Login prompt is a system prompt that is automatically answered
with the login name of the user who executes the object containing the
prompt. It can provide flexibility when implementing security mechanisms in
MicroStrategy. You can use this prompt to insert the user's login name into
any security filter, or any other object that can use a prompt.

If you are using LDAP authentication in your MicroStrategy system, you can
import LDAP attributes into your system as system prompts. You can then
use these system prompts in security filters, in the same way that you use
the User Login system prompt, as described above. For instructions on how
to import LDAP attributes as system prompts, see Manage LDAP
Authentication, page 189.

For examples of how to use system prompts in security filters, see:

l Simplifying the Security Filter Definition Process, page 138

l Implementing a Report-Level Security Filter, page 138

l Using Database Tables That Contain Security Information, page 139

To Create a Security Filter Using a System Prompt

1. In Developer, from the Administration menu, point to Projects and
   then select Security Filter Manager.

2. From the Choose a project drop-down list, select the project that you
want to create a security filter for.

3. Select the Security Filters tab.

4. Click New.

5. Double-click on the text Double-click here to add a qualification.

6. Select Add an advanced qualification and click OK.


7. From the Option drop-down list, select Custom Expression.

8. Type your custom expression in the Custom Expression area. You can
drag and drop a system prompt or other object to include it in the
custom expression. For detailed instructions on creating custom
expressions in filters, see the Filters section of the Advanced Reporting
Help.

9. When you have finished typing your custom expression, click Validate
to make sure that its syntax is correct.

10. Click Save and close. Type a name for the security filter and click
Save.

Simplifying the Security Filter Definition Process

You can use a system prompt to apply a single security filter to all users in a
group. For example, you can create a security filter using the formula
User@ID=?[User Login] that displays information only for the element of
the User attribute that matches the user's login.

For a more complex example, you can restrict Managers so that they can
only view data on the employees that they supervise. Add the User Login
prompt to a security filter in the form Manager=?[User Login]. Then
assign the security filter to the Managers group. When a manager named
John Smith executes a report, the security filter generates SQL for the
condition Manager='John Smith' and only John Smith's employees' data
is returned.
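The substitution Intelligence Server performs for the User Login prompt can be sketched as follows. This is an illustrative model, not the actual server logic; the resolve_security_filter helper is hypothetical:

```python
# Hypothetical sketch: Intelligence Server substitutes the executing
# user's login for the ?[User Login] system prompt token at run time.
def resolve_security_filter(template, user_login):
    return template.replace("?[User Login]", "'" + user_login + "'")

sql_condition = resolve_security_filter("Manager=?[User Login]", "John Smith")
print(sql_condition)  # Manager='John Smith'
```

A single filter definition therefore yields a different SQL condition for each manager who runs a report.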

Implementing a Report-Level Security Filter

You can also use the User Login system prompt to implement security filter
functionality at the report level, by defining a report filter with a system
prompt. For example, you can define a report filter with the User Login
prompt in the form Manager=?[User Login]. Any reports that use this
filter return data only to those users who are listed as Managers in the
system.

Using Database Tables That Contain Security Information

If your organization maintains security information in database tables, you
can use a system prompt to build MicroStrategy security mechanisms using
the database security tables. For example, you can restrict the data returned
based on a user's login by creating a report filter that accesses columns in
your security tables and includes the User Login system prompt. You can
also restrict data access based on two or more unrelated attributes by using
logical views (database views) and the User Login system prompt in a
security filter.

Controlling Access to Data at the Database (RDBMS) Level


Database servers have their own security architectures that provide
authentication, access control, and auditing. As mentioned above, you may
choose to use these RDBMS techniques to manage access to data, or you
may choose to use mechanisms in the MicroStrategy application layer to
manage access to data, or you may use a combination of the two. They are
not mutually exclusive. One advantage of using the database-level security
mechanisms to secure data is that all applications accessing the database
benefit from those security measures. If only MicroStrategy mechanisms are
used, then only those users accessing the MicroStrategy application benefit
from those security measures. If other applications access the database
without going through the MicroStrategy system, the security mechanisms
are not in place.

Security Views
Most databases provide a way to restrict access to data. For example, a
user may be able to access only certain tables, or they may be restricted to
certain rows and columns within a table. The subset of data available to a
user is called the user's security view.


Security views are often used when splitting fact tables by columns and
splitting fact tables by rows (discussed below) cannot be used. The rules
that determine which rows each user is allowed to see typically vary so much
that users cannot be separated into a manageable number of groups. In the
extreme, each user is allowed to see a different set of rows.

Note that restrictions on tables, or rows and columns within tables, may not
be directly evident to a user. However, they do affect the values displayed in
a report. You need to inform users as to which data they can access so that
they do not inadvertently run a report that yields misleading final results. For
example, if a user has access to only half of the sales information in the data
warehouse but runs a summary report on all sales, the summary reflects
only half of the sales. Reports do not indicate the database security view
used to generate the report.

Consult your database vendor's product documentation to learn how to
create security views for your database.
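The security view concept can be illustrated with SQLite. This is only a sketch; the table, view, and column names are hypothetical, and a production RDBMS would additionally GRANT SELECT on the view while revoking access to the underlying table (SQLite has no GRANT statement):

```python
import sqlite3

# Illustrative security view: the user can only query the view,
# which exposes a restricted subset of rows. Hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Northeast", 120.0), ("Southwest", 80.0)])

# The view is the user's "security view" of the orders table.
conn.execute("CREATE VIEW orders_northeast AS "
             "SELECT * FROM orders WHERE region = 'Northeast'")
rows = conn.execute("SELECT * FROM orders_northeast").fetchall()
print(rows)  # [('Northeast', 120.0)]
```

Note that, exactly as the text warns, a summary run against such a view silently reflects only the visible rows.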

Splitting Fact Tables by Rows


You can split fact tables by rows to separate a logical data set into multiple
physical tables based on values in the rows (this is also known as table
partitioning). The resultant tables are physically distinct tables in the data
warehouse, and security administration is simple because permissions are
granted to entire tables rather than to rows and columns.

If the data to be secured can be separated by rows, then this may be a
useful technique. For example, suppose a fact table contains the key
Customer ID, Address, Member Bank and two fact columns, as shown below:

Customer ID   Customer Address   Member Bank      Transaction Amount ($)   Current Balance ($)

123456        12 Elm St.         1st National     400.80                   40,450.00

945940        888 Oak St.        Eastern Credit   150.00                   60,010.70

908974        45 Crest Dr.       People's Bank    3,000.00                 100,009.00

886580        907 Grove Rd.      1st National     76.35                    10,333.45

562055        1 Ocean Blvd.      Eastern Credit   888.50                   1,000.00

You can split the table into separate tables (based on the value in Member
Bank), one for each bank: 1st National, Eastern Credit, and so on. In this
example, the table for 1st National bank would look like this:

Customer ID   Customer Address   Member Bank    Transaction Amount ($)   Current Balance ($)

123456        12 Elm St.         1st National   400.80                   40,450.00

886580        907 Grove Rd.      1st National   76.35                    10,333.45

The table for Eastern Credit would look like this:

Customer ID   Customer Address   Member Bank      Transaction Amount ($)   Current Balance ($)

945940        888 Oak St.        Eastern Credit   150.00                   60,010.70

562055        1 Ocean Blvd.      Eastern Credit   888.50                   1,000.00


This makes it simple to grant permissions by table to managers or account
executives who should only be looking at customers for a certain bank.

In most RDBMSs, fact tables that have been split by rows are invisible to
system users. Although there are many physical tables, the system "sees"
one logical fact table.

Splitting fact tables by rows for security reasons should not be confused
with the support that Intelligence Server provides for splitting fact tables
by rows for performance benefits. For more information about partitioning,
see the Advanced Reporting Help.
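The row split described above can be sketched with SQLite. This is illustrative only; table and column names are hypothetical, and per-table GRANTs would be added in a production RDBMS:

```python
import sqlite3

# Illustrative row split: one physical table per Member Bank value,
# so permissions can be granted per table. Hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_all "
             "(customer_id INTEGER, member_bank TEXT, amount REAL)")
conn.executemany("INSERT INTO fact_all VALUES (?, ?, ?)", [
    (123456, "1st National", 400.80),
    (945940, "Eastern Credit", 150.00),
    (886580, "1st National", 76.35),
])

for bank, suffix in [("1st National", "first_national"),
                     ("Eastern Credit", "eastern_credit")]:
    conn.execute(f"CREATE TABLE fact_{suffix} "
                 "(customer_id INTEGER, member_bank TEXT, amount REAL)")
    # Copy only the rows that belong to this bank.
    conn.execute(f"INSERT INTO fact_{suffix} "
                 "SELECT * FROM fact_all WHERE member_bank = ?", (bank,))

rows = conn.execute("SELECT customer_id FROM fact_first_national "
                    "ORDER BY customer_id").fetchall()
print(rows)  # [(123456,), (886580,)]
```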

Splitting Fact Tables by Columns


You can split fact tables by columns to separate a logical data set into
multiple physical tables. If the data to be secured can be separated by
columns, then this may be a useful technique.

Each new table has the same primary key, but contains only a subset of the
fact columns in the original fact table. Splitting fact tables by columns allows
fact columns to be grouped based on user community. This makes security
administration simple because permissions are granted to entire tables
rather than to columns. For example, suppose a fact table contains the key
labeled Customer ID and fact columns as follows:

Customer ID   Customer Address   Member Bank   Transaction Amount ($)   Current Balance ($)

You can split the table into two tables, one for the marketing department and
one for the finance department. The marketing fact table would contain
everything except the financial fact columns as follows:


Customer ID   Customer Address   Member Bank

The second table used by the financial department would contain only the
financial fact columns but not the marketing-related information as follows:

Customer ID   Transaction Amount ($)   Current Balance ($)
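The column split can likewise be sketched with SQLite (illustrative table and column names only):

```python
import sqlite3

# Illustrative column split: both derived tables keep the primary key,
# and each holds a subset of the fact columns. Hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_all ("
             "customer_id INTEGER PRIMARY KEY, address TEXT, "
             "member_bank TEXT, amount REAL, balance REAL)")
conn.execute("INSERT INTO fact_all VALUES "
             "(123456, '12 Elm St.', '1st National', 400.80, 40450.00)")

conn.execute("CREATE TABLE fact_marketing AS "
             "SELECT customer_id, address, member_bank FROM fact_all")
conn.execute("CREATE TABLE fact_finance AS "
             "SELECT customer_id, amount, balance FROM fact_all")

finance_rows = conn.execute("SELECT * FROM fact_finance").fetchall()
print(finance_rows)  # [(123456, 400.8, 40450.0)]
```

Permissions can then be granted on fact_marketing to the marketing department and on fact_finance to the finance department, with no column-level grants needed.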

Merging Users or Groups


Within a given project source, you may need to combine multiple users into
one user definition or combine a user group into another user group. For
example, if UserA is taking over the duties of UserB, you may want to
combine the users by merging UserB's properties into UserA. The
MicroStrategy User Merge Wizard merges multiple users or groups and their
profiles into a single user or group, with a single profile.

Topics covered in this section include:

How Users and Groups are Merged


The User Merge Wizard combines users and their related objects, from a
single project source. These objects include profile folders, group
memberships, user privileges, security roles, and security filters, among
others. Information from the user or group that is being merged is copied to
the destination user or group. Then the user or group that is being merged is
removed from the metadata and only the destination user or group remains.

For example, suppose you want to merge UserB into UserA. In this case,
UserA is referred to as the destination user.

When you open the User Merge Wizard and select a project source, the
wizard locks that project configuration. Other users cannot change any
configuration objects until you close the wizard. For more information about
locking and unlocking projects, see Lock Projects, page 760.

You can also merge users in batches if you have a large number of users to
merge. Merging in batches can significantly speed up the merge process.
Batch-merging is an option in the User Merge Wizard. Click Help for details
on setting this option.

The User Merge Wizard automatically merges the following properties:
privileges, group memberships, profile folders, and object ownership
(access control lists). You may optionally choose to merge properties such
as a user's or group's security roles, security filters, and database
connection maps. Details about how the wizard merges each of these
properties are discussed below.

Merging User Privileges


The User Merge Wizard automatically merges all of a user's or group's
privileges. To continue with the example above, before the users are
merged, each user has a distinct set of global user privileges. After the
merge, all privileges that had been assigned to UserB are combined with
those of the destination user, UserA. This combination is performed as a
union. That is, privileges are not removed from either user.


For example, if UserA has the Web user privilege and UserB has the Web
user and Web Administration privileges, after the merge, UserA has both
Web user and Web Administration privileges.
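The union behavior described above can be modeled in a couple of lines of Python (an illustrative sketch; merge_privileges is not a MicroStrategy API):

```python
# Hypothetical model of the privilege merge: a set union, so no
# privilege is removed from either user.
def merge_privileges(destination, merged):
    return destination | merged

user_a = {"Web user"}
user_b = {"Web user", "Web Administration"}
print(sorted(merge_privileges(user_a, user_b)))
# ['Web Administration', 'Web user']
```

Group memberships are merged the same way: as a union, with nothing removed.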

Merging User Group Memberships


The User Merge Wizard automatically merges all of a user's or group's group
memberships. Before the merge, each user has a distinct set of group
memberships. After the merge, all group memberships that were assigned to
UserB are combined with those of the destination user, UserA. This
combination is performed as a union. That is, group memberships are not
removed for either user.

Merging User Profile Folders


The User Merge Wizard automatically merges all of a user's or group's
profile folders. Before the merge, UserA and UserB have separate and
distinct user profile folders. After UserB is merged into UserA, only UserA
exists; their profile contains the profile folder information from both UserA
and UserB.

Merging Object Ownership and Access Control Lists


The User Merge Wizard automatically merges all of a user's or group's
object ownerships and access control lists (ACLs). Before the merge, the
user to be merged, UserB, owns the user objects in their profile folder and
also has full control over the objects in the access control list. After the
merge, ownership and access to the merged user's objects are granted to
the destination user, UserA. The merged user is removed from the object's
ACL. Any other users that existed in the ACL remain in the ACL. For
example, before the merge, UserB owns an object that a third user, UserC
has access to. After the merge, UserA owns the object, and UserC still has
access to it.


Merging Project Security Roles


The User Merge Wizard does not automatically merge a user's or group's
security roles. To merge them, you must select the Security Roles check
box on the Merge Options page in the wizard. Before the merge, both users
have unique security roles for a given project. After the merge, the
destination user profile is changed based on the following rules:

l If neither user has a security role for a project, the destination user does
not have a security role on that project.

l If the destination user has no security role for a project, the user inherits
the role from the user to be merged.

l If the destination user and the user to be merged have different security
roles, then the existing security role of the destination user is kept.

l If you are merging multiple users into a single destination user and each of
the users to be merged has a security role, then the destination user takes
the security role of the first user to be merged. If the destination user also
has a security role, the existing security role of the destination user is
kept.
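The four rules above can be condensed into a small function. This is a hypothetical model of the documented behavior, not a MicroStrategy API:

```python
# Hypothetical model of the security role merge rules for one project.
def merge_security_role(destination_role, roles_to_merge):
    """destination_role: the destination user's role, or None.
    roles_to_merge: roles of the users being merged, in merge order."""
    if destination_role is not None:
        return destination_role     # the destination's existing role is kept
    for role in roles_to_merge:
        if role is not None:
            return role             # inherit the first available role
    return None                     # no user has a role for this project

print(merge_security_role(None, [None, "Analyst"]))  # Analyst
print(merge_security_role("Admin", ["Analyst"]))     # Admin
```

Per the sections that follow, security filters and database connection mappings are merged by the same rules.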

Merging Project Security Filters


The User Merge Wizard does not automatically merge a user's or group's
security filters. To merge them, you must select the Security Filters check
box on the Merge Options page in the wizard. When merging security filters,
the wizard follows the same rules as for security roles, described above.

Merging Database Connection Mapping


The User Merge Wizard does not automatically merge a user's or group's
database connection maps. To merge them, you must select the Connection
Mapping check box on the Merge Options page in the wizard. When merging
database connection mappings, the Wizard follows the same rules as for
security roles and security filters, described above.


Running the User Merge Wizard


The following high-level procedure provides an overview of what the User
Merge Wizard does. For an explanation of the information required at any
given page in the wizard, click Help, or press F1.

To Merge Users or Groups

1. From the Windows Start menu, point to All Programs, then
   MicroStrategy Tools, and then select User Merge Wizard.

2. Specify the project source containing the users/groups you want to
   merge.

3. Select whether you want to merge optional user properties such as
   security roles, security filters, and database connection maps. For a
   description of how the User Merge Wizard merges these optional
   properties, see each individual property's section in How Users and
   Groups are Merged, page 143.

4. Specify whether you want to have the wizard select the users/groups to
merge automatically (you can verify and correct the merge candidates),
or if you want to manually select them.

5. In the User Merge Candidates page, select the destination users or
   groups and click > to move them to the right-hand side.

6. Select the users or groups to be merged and click > to move them to the
right-hand side.

7. Click Finish.

Security Checklist Before Deploying the System


Use the checklist below to make sure you have implemented the appropriate
security services or features for your system before it is deployed. All the
security implementations listed below are described in detail in preceding
sections.

Ensure that the Administrator password has been changed. When you install
Intelligence Server, the Administrator account comes with a blank password
that must be changed.

Set up access controls for the database (see Controlling Access to Data,
page 113). Depending on your security requirements you may need to:

l Set up security views to restrict access to specific tables, rows, or
columns in the database

l Split tables in the database to control user access to data by separating a
logical data set into multiple physical tables, which require separate
permissions for access

l Implement connection mapping to control individual access to the
database

l Configure passthrough execution to control individual access to the
database from each project, and to track which users are accessing the
RDBMS

l Assign security filters to users or groups to control access to specific data
(these operate similarly to security views but at the application level)

Understand the MicroStrategy user model (see The MicroStrategy User
Model, page 80). Use this model to:

l Select and implement a system authentication mode to identify users

l Set up security roles for users and groups to assign basic privileges and
permissions

l Understand ACLs (access control lists), which allow users access
permissions to individual objects

l Check and, if necessary, modify privileges and permissions for anonymous
authentication for guest users. (By default, anonymous access is disabled
at both the server and the project levels.) Do not assign delete privileges
to the guest user account.

Assign privileges and permissions to control user access to application
functionality. You may need to:

l Assign the Denied All permission to a special user or group so that, even if
permission is granted at another level, permission is still denied

l Make sure guest users (anonymous authentication) have access to the
Log folder in C:\Program Files (x86)\Common Files\MicroStrategy. This
ensures that any application errors that occur while a guest user is logged
in can be written to the log files.

Use your web application server security features to:

l Implement file-level security requirements

l Create security roles for the application server

l Make use of standard Internet security technologies such as firewalls,
digital certificates, and encryption.

l If you are working with sensitive or confidential data, enable the setting to
encrypt all communication between MicroStrategy Web server and
Intelligence Server. There may be a noticeable performance degradation
because the system must encrypt and decrypt all network traffic.

l Enable encryption for MicroStrategy Web products. By default most
encryption technologies are not used unless you enable them.

Locate the physical machine hosting the MicroStrategy Web application in a
physically secure location.

Restrict access to files stored on the machine hosting the MicroStrategy
Web application by implementing standard file-level security offered by your
operating system. Specifically, apply this type of security to protect access
to the MicroStrategy administrator pages, to prevent someone from typing
specific URLs into a browser to access these pages. (The default location of
the Admin page file is C:\Program Files (x86)\MicroStrategy\Web
ASPx\asp\Admin.aspx.) Be sure to restrict access to:

l The asp directory

l Admin.aspx


IDENTIFYING USERS: AUTHENTICATION


Authentication is the process by which the system identifies the user. In
most cases, a user provides a login ID and password which the system
compares to a list of authorized logins and passwords. If they match, the
user is able to access certain aspects of the system, according to the access
rights and application privileges associated with the user.

Workflow: Changing Authentication Modes


The following is a list of high-level tasks that you perform when you change
the default authentication mode in your MicroStrategy installation.

l Choose an authentication mode, and set up the infrastructure necessary to
support it. For example, if you want to use LDAP Authentication, you must
set up your LDAP directory and server. For the modes of authentication
available, see Authentication Modes, page 153.

l Import your user database into the MicroStrategy metadata, or link your
users' accounts in your user database with their accounts in
MicroStrategy. For example, you can import users in your LDAP directory
into the MicroStrategy metadata, and ensure that their LDAP credentials
are linked to the corresponding MicroStrategy users. Depending on the
authentication mode you choose, the following options are available:

l If your organization's users do not exist in the MicroStrategy metadata:

l You can import their accounts from an LDAP directory, or from a text
file. For the steps to import users, refer to the System Administration
Help in Developer.

l You can configure Intelligence Server to automatically import users
into the metadata when they log in.

l If your organization's users already exist in the MicroStrategy metadata:

l You can use a Command Manager script to edit the user information in
the metadata, and link the users' MicroStrategy accounts to their
accounts in your user directory.

l Enable your chosen authentication mode for MicroStrategy applications at
the following levels:

l Your web server, for example, IIS or Apache.

l Your application server, for example, IIS or WebSphere.

l In Web Administrator, on the Default Server Properties page.

l In Mobile Administrator, on the Default Server Properties page.

l For all project sources that the above applications connect to.

The specific steps to implement an authentication mode depend on the
mode you choose, and are described in the sections that follow.

Authentication Modes
Several authentication modes are supported in the MicroStrategy
environment. The main difference between the modes is the authentication
authority used by each mode. The authentication authority is the system that
verifies and accepts the login/password credentials provided by the user.

The available authentication modes are:

l Standard: Intelligence Server is the authentication authority. This is the
default authentication mode. For more information, see Implement
Standard Authentication, page 156.

l LDAP (Lightweight Directory Access Protocol): An LDAP server is the
authentication authority. For more information, see Implement LDAP
Authentication, page 160.

l Anonymous: Users log in as "Guest" and do not need to provide a
password. This authentication mode may be required to enable other
authentication modes. For more information, see Implement Anonymous
Authentication, page 158.


l Single sign-on: Single sign-on encompasses several different third-party
authentication methods, including:

l OpenID Connect (OIDC) authentication: A modern authentication
protocol built on an authorization protocol called OAuth2. The protocol
allows a client application to securely delegate user authentication to an
Identity and Access Management (IAM) service. The protocol is
designed for the internet and relies on features of the HTTPS protocol.
For more information, see Enabling Single Sign-On with OIDC
Authentication.

l SAML authentication: A two-way authentication setup between your
MicroStrategy server and a SAML Identity Provider. For more
information, see Enable Single Sign-On with SAML Authentication.

l Integrated authentication: A domain controller using Kerberos
authentication is the authentication authority. For more information, see
Enabling integrated authentication.

l MicroStrategy Identity: Users log into Web and Mobile using
MicroStrategy Identity. MicroStrategy Identity enables users to
electronically validate their identity using the Badge app on their
smartphone, instead of entering a password. For steps, see Enable Badge
Authentication for Web and Mobile, page 606.

For examples of situations where you might want to implement specific
authentication modes, and the steps to do so, see Authentication Examples,
page 617.

Configuring the Authentication Mode for a Project Source


You can configure a project source to use a specific authentication mode
using the Project Source Manager. By default, project sources use standard
authentication (see Implement Standard Authentication, page 156).


To Configure the Authentication Mode for a Project Source

1. In Developer, from the Tools menu, select Project Source Manager.

2. Select the appropriate project source and click Modify.

3. On the Advanced tab, select the appropriate option for the default
authentication mode that you want to use.

4. Click OK twice.

5. If the project source is accessed via MicroStrategy Web or MicroStrategy
Office, additional steps are required to configure the authentication mode:

• To set the authentication mode in MicroStrategy Web, use the MicroStrategy
Web Administrator's Default Server Properties page.

• To set the authentication mode in MicroStrategy Office, use the
projectsources.xml file. For detailed instructions, see Determining How
Users Log Into MicroStrategy Office in the legacy MicroStrategy Office User
Guide.

This information applies to the legacy MicroStrategy Office add-in, the
add-in for Microsoft Office applications that is no longer actively
developed.

It was replaced by a new add-in, MicroStrategy for Office, which supports
Office 365 applications. The initial version does not yet have all the
functionality of the previous add-in.

If you are using MicroStrategy 2021 Update 2 or a later version, the legacy
MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.


Importing Users from Different Authentication Systems


You can import users from multiple authentication systems, such as a
database warehouse and an LDAP server, into a single MicroStrategy metadata
repository.

Each user that is imported into MicroStrategy from a single authentication
mechanism is created as a separate user object in the MicroStrategy
metadata. For example, if User A is imported from your LDAP Server into
MicroStrategy, the User A object is created in the MicroStrategy metadata.
If User A is also imported from your NT system, a separate User A object (we
can call it User A-NT) is created in the metadata. Every time a user is
imported into the MicroStrategy metadata, a separate user object is created.

As an alternative, you can import User A from a single authentication system
(LDAP, for example), and then link the User A object that is created to the
same user in your NT system, and to the same user in your database
warehouse, and so on. Using linking, you can "connect" or map multiple
authentication systems to a single user object in the MicroStrategy
metadata.

Sharing User Accounts Between Users


MicroStrategy does not recommend sharing user accounts.

You may decide to map several users to the same MicroStrategy user
account. These users would essentially share a common login to the system.
Consider doing this only if you have users who do not need to create their
own individual objects, and if you do not need to monitor and identify each
individual user uniquely.

Implement Standard Authentication


Standard authentication is the default authentication mode and the simplest
to set up. Each user has a unique login and password and can be identified
in the MicroStrategy application uniquely.


By default, all users connect to the data warehouse using one RDBMS login
ID, although you can change this using Connection Mapping. For more
information, see Connecting to the Data Warehouse, page 22. In addition,
standard authentication is the only authentication mode that allows a user or
system administrator to change or expire MicroStrategy passwords.

When using standard authentication, Intelligence Server is the
authentication authority. Intelligence Server verifies and accepts the login
and password provided by the user. This information is stored in the
metadata repository.

When a project source is configured to use standard authentication, users
must enter a valid login ID and password combination before they can access
the project source.

Password Policy
A valid password is a password that conforms to any specifications you may
have set. You can define the following characteristics of passwords:

• Whether a user must change their password when they first log into
MicroStrategy

• How often the password expires

• The number of past passwords that the system remembers, so that users
cannot reuse a recent password

• Whether a user can include their login and/or name in the password

• Whether characters rotated from the last password are allowed in new
passwords

• The minimum number of characters that must change between passwords

• Rules for password complexity, including:

  • The minimum number of characters that the password must contain

  • The minimum number of upper-case characters that the password must
  contain

  • The minimum number of lower-case characters that the password must
  contain

  • The minimum number of numeric characters, that is, numbers from 0 to 9,
  that the password must contain

  • The minimum number of special characters, that is, symbols, that the
  password must contain

The expiration settings are made in the User Editor and can be set for each
individual user. The complexity and remembered password settings are
made in the Security Policy Settings dialog box, and affect all users.
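The complexity rules above can be sketched as a simple validation routine.
The function name and the default thresholds below are illustrative
assumptions, not MicroStrategy's implementation; the actual values come from
the Security Policy Settings dialog box.

```python
def validate_password(password, login, min_length=8, min_upper=1,
                      min_lower=1, min_digits=1, min_special=1):
    """Illustrative check of password-complexity rules like those
    configurable in the Security Policy Settings dialog box."""
    errors = []
    if len(password) < min_length:
        errors.append("too short")
    if sum(c.isupper() for c in password) < min_upper:
        errors.append("needs more upper-case characters")
    if sum(c.islower() for c in password) < min_lower:
        errors.append("needs more lower-case characters")
    if sum(c.isdigit() for c in password) < min_digits:
        errors.append("needs more numeric characters (0-9)")
    # Count anything that is not a letter or digit as a special character.
    if sum(not c.isalnum() for c in password) < min_special:
        errors.append("needs more special characters")
    if login.lower() in password.lower():
        errors.append("must not contain the login")
    return errors

print(validate_password("jsmith123", "jsmith"))
print(validate_password("Str0ng!pass", "jsmith"))
```

An empty list means the password satisfies every configured rule.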

Steps to Implement Standard Authentication


The procedure below gives the high-level steps for configuring your
Intelligence Server for standard authentication.

High-Level Steps to Configure Standard Authentication

1. In Developer, open the Project Source Manager and click Modify.

2. On the Advanced tab, select Use login ID and password entered by the
user (standard authentication). This is the default setting.

3. In MicroStrategy Web, log in as an administrator. On the Preferences
page, select Project Defaults, select Security, and then enable Standard
(user name & password) as the login mode.

4. In Developer, create a database instance for the data warehouse and
assign it a default database login. This is the RDBMS account that is used
to execute reports for all users.

Implement Anonymous Authentication


When using anonymous authentication, users log in as guests and do not
need to provide a password. Each guest user assumes the profile defined by
the Public group.


This dynamically created guest user is not the same as the "Guest" user
which is visible in the User Manager.

Guest users inherit security settings, including privileges and permissions,
project access, security filter, and connection map information, from the
Public/Guest group; they are not part of the Everyone group.

By default, guest users have no privileges; you must assign this group any
privileges that you want the guest users to have. Privileges that are grayed
out in the User Editor are not available by default to a guest user. Other than
the unavailable privileges, you can determine what the guest user can and
cannot do by modifying the privileges of the Public/Guest user group and by
granting or denying it access to objects. For more information, see
Controlling Access to Functionality: Privileges, page 101 and Controlling
Access to Objects: Permissions, page 89.

All objects created by guest users must be saved to public folders and are
available to all guest users. Guest users may use the History List, but their
messages in the History List are not saved and are purged when the guest
users log out.

To Enable Anonymous Access to a Project Source

By default, anonymous access is disabled at both the server and the project
levels.

1. In Developer, log into the project source with a user that has
administrative privileges.

2. From the Folder List, select Administration.

3. From the File menu, select Properties.

4. In the Security tab, click Add.

5. Select the Public/Guest group.

6. In the Access Permission list, select Connect.


7. Click OK.

8. Follow the procedure in Configuring the Authentication Mode for a Project
Source, page 154 and select Anonymous authentication. When users log into
this project source, they are automatically logged in as guest users and are
not prompted for a login or password.

Implement LDAP Authentication


Lightweight Directory Access Protocol (LDAP) is an open standard Internet
protocol running over TCP/IP that is designed to maintain and work with
large user directory services. It provides a standard way for applications to
request and manage user and group directory information. LDAP performs
simple Select operations against large directories, in which the goal is to
retrieve a collection of attributes with simple qualifications, for example,
Select all the employees' phone numbers in the support
division.

An LDAP authentication system consists of two components: an LDAP server and
an LDAP directory. An LDAP server is a program that implements the LDAP
protocol and controls access to an LDAP directory of user and group
accounts. An LDAP directory is the storage location and structure of user
and group accounts on an LDAP server. Before information from an LDAP
directory can be searched and retrieved, a connection to the LDAP server
must be established.

If you use an LDAP directory to centrally manage users in your environment,
you can implement LDAP authentication in MicroStrategy. Group membership can
be maintained in the LDAP directory without having to also be defined in
Intelligence Server. LDAP authentication identifies users in an LDAP
directory which MicroStrategy can connect to through an LDAP server.
Supported LDAP servers include Novell Directory Services, Microsoft
Directory Services, OpenLDAP for Linux, and Sun ONE 5.1/iPlanet. For the
latest set of certified and supported LDAP servers, refer to the Readme.

The high-level steps to implement LDAP authentication are as follows:


1. Review the LDAP information flow, described in LDAP Information Flow,
page 161.

2. Depending on your requirements, collect information and make decisions
regarding the information in Checklist: Information Required for Connecting
Your LDAP Server to MicroStrategy, page 162.

3. Run the LDAP Connectivity Wizard to connect your LDAP server to
MicroStrategy, as described in Setting up LDAP Authentication in
MicroStrategy Web, Library, and Mobile, page 185.

4. To make changes in your LDAP configuration, use the procedures described
in Manage LDAP Authentication, page 189.

You can also set up MicroStrategy Office to use LDAP authentication. For
information, see the MicroStrategy for Office Help.

LDAP Information Flow


The following scenario presents a high-level overview of the general flow of
information between Intelligence Server and an LDAP server when an LDAP
user logs into Developer or MicroStrategy Web.

LDAP User Login Information Flow


1. When an LDAP user logs in to MicroStrategy Web or Developer, Intelligence
Server connects to the LDAP server using the credentials of the LDAP
administrative user, called the authentication user.

2. The authentication user is bound to LDAP using a Distinguished Name (DN)
and password set up in the user's configuration.

3. The authentication user searches the LDAP directory for the user who is
logging in via Developer or MicroStrategy Web, based on the DN of the user
logging in.

4. If this search successfully locates the user who is logging in, the
user's LDAP group information is retrieved.

5. Intelligence Server then searches the MicroStrategy metadata to determine
whether the DN of the user logging in is linked to an existing MicroStrategy
user.

6. If a linked user is not found in the metadata, Intelligence Server refers
to the configured import and synchronization options. If importing is
enabled, Intelligence Server updates the metadata with the user and group
information it retrieved from the LDAP directory.

7. The user who is logging in is given access to MicroStrategy, with the
appropriate privileges and permissions.
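The steps above can be sketched as pseudologic. The dictionaries and the
function below are illustrative stand-ins for the LDAP directory and the
MicroStrategy metadata; they are not MicroStrategy or LDAP SDK APIs, and the
DN and group names are hypothetical.

```python
# Illustrative sketch of the LDAP login flow described above.
LDAP_DIRECTORY = {
    "cn=jsmith,ou=Sales,dc=example,dc=com": {
        "login": "jsmith",
        "groups": ["Sales", "Employees"],
    },
}

METADATA_LINKS = {}   # DN -> linked MicroStrategy user object
IMPORT_ENABLED = True

def ldap_login(login):
    # Step 3: the authentication user searches the directory for the DN.
    dn = next((d for d, e in LDAP_DIRECTORY.items()
               if e["login"] == login), None)
    if dn is None:
        return None                        # user not found in LDAP
    groups = LDAP_DIRECTORY[dn]["groups"]  # step 4: retrieve groups
    # Step 5: look for a user linked to this DN in the metadata.
    user = METADATA_LINKS.get(dn)
    if user is None and IMPORT_ENABLED:
        # Step 6: import the user and group information.
        user = {"login": login, "groups": groups}
        METADATA_LINKS[dn] = user
    return user                            # step 7: grant access

print(ldap_login("jsmith"))
```

On a second login the linked user object is found in step 5, so no new
import occurs.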

LDAP Anonymous Login Information Flow


When an LDAP anonymous user (one with an empty password) logs into
MicroStrategy Web or Developer, Intelligence Server checks whether an LDAP
anonymous bind to the LDAP server succeeds. When it does, Intelligence
Server authorizes the anonymous login using the LDAP Users and Everyone
groups, and the privileges and permissions of those groups are applied.

Checklist: Information Required for Connecting Your LDAP Server to
MicroStrategy

You can connect to your LDAP server from Intelligence Server using the LDAP
Connectivity Wizard. Before beginning the process, ensure that you have the
following information:

• The connection details for your LDAP server:

  • The machine name or IP address of the LDAP server.

  • The network port that the LDAP server uses.

  • Whether the LDAP server is accessed using clear text or over an
  encrypted SSL connection. If you are using an SSL connection, do the
  following before you begin to set up LDAP:

    • Obtain a valid certificate from your LDAP server and save it on the
    machine where Intelligence Server is installed.

    • Follow the procedure recommended by your operating system to install
    the certificate.

• The user name and password of an LDAP user who can search the LDAP
directory. This user is called the authentication user, and is used by
Intelligence Server to connect to the LDAP server. Typically, this user has
administrative privileges for your LDAP server.

• Details of your LDAP SDK. The LDAP SDK is a set of connectivity file
libraries (DLLs) that MicroStrategy uses to communicate with the LDAP
server. For information on the requirements for your LDAP SDK, and for steps
to set up the SDK, see Setting Up LDAP SDK Connectivity, page 167.

• Your LDAP search settings, which allow Intelligence Server to effectively
search through your LDAP directory to authenticate and import users. For
information on defining LDAP search settings, see Defining LDAP Search
Filters to Verify and Import Users and Groups at Login, page 170.

Additionally, depending on your organization's requirements, it is
recommended that you make decisions and gather information about the
following:

• Determine whether you want to use connection pooling with your LDAP
server. With connection pooling, you can reuse an open connection to the
LDAP server for subsequent operations. The connection to the LDAP server
remains open even when it is not processing any operations (also known as
pooling). This setting can improve performance by removing the processing
time required to open and close a connection to the LDAP server for each
operation.

For background information on connection pooling, see Determining Whether to
Use Connection Pooling, page 174.

• Determine the method that Intelligence Server uses to authenticate users
in the LDAP server. The possible options are described below:

  • Binding: If you choose this method, Intelligence Server attempts to log
  in to the LDAP server with the user's credentials.

  • Password comparison: If you choose this method, Intelligence Server
  verifies the user's user name and password with the LDAP server, without
  attempting to log in to the LDAP server.

For a comparison of the two methods of authentication, see Determining
Whether to Use Authentication Binding or Password Comparison, page 176.

• Determine whether you need to use database passthrough execution. In
MicroStrategy, a single user name and password combination is frequently
used to connect to and execute jobs against a database. However, you can
choose to pass to the database the LDAP user name and password that a user
used to log in to MicroStrategy. The database is then accessed and jobs are
executed using that LDAP user name and password. This allows each user
logged in to MicroStrategy to execute jobs against the database using their
unique user name and password, which can be given a different set of
privileges than other users.

For additional information on database passthrough execution, see
Determining Whether to Enable Database Passthrough Execution with LDAP, page
177.

• Determine whether you want to import LDAP user and group information into
the MicroStrategy metadata. A MicroStrategy group is created for each LDAP
group. The following options are available:

  • Import users and groups into MicroStrategy: If you choose this option, a
  MicroStrategy user is created for each user in your LDAP directory. Users
  can then be assigned additional privileges and permissions in
  MicroStrategy.

  • Link users and groups to MicroStrategy, without importing them: If you
  choose this option, a link is created between MicroStrategy users and
  users in your LDAP directory, without creating new LDAP users in your
  metadata. If you have an LDAP directory with a large number of users, this
  option avoids filling your metadata with new users.

For information on the benefits and considerations of importing LDAP user
and group information into MicroStrategy, see Determining Whether to Import
LDAP Users into MicroStrategy, page 178.

• Determine whether you want to automatically synchronize user and group
information with the LDAP server. This ensures that if there are changes in
the group membership for the users you have imported into MicroStrategy, or
users who are linked to existing MicroStrategy accounts, the changes in the
LDAP directory are applied in MicroStrategy when users log in, or on a
schedule that you determine.

For the benefits and considerations of synchronizing user and group
information, see Determining Whether to Automatically Synchronize LDAP User
and Group Information, page 183.

• If you choose to import LDAP user and group information into the
MicroStrategy metadata, determine the following:

  • Whether you want to import LDAP user and group information into the
  MicroStrategy metadata when users log in, and whether the information is
  synchronized every time users log in.

  • Whether you want to import LDAP user and group information into the
  MicroStrategy metadata in batches, and whether you want the information to
  be synchronized according to a schedule.

  • If you want to import LDAP user and group information in batches, you
  must provide search filters to import the users and the groups. For
  example, if your organization has 1,000 users in the LDAP directory, of
  whom 150 need to use MicroStrategy, you must provide a search filter that
  imports the 150 users into the MicroStrategy metadata. For information on
  defining search filters, see Defining LDAP Search Filters to Verify and
  Import Users and Groups at Login, page 170.

• If your LDAP organizational structure includes groups contained within
groups, determine how many levels of nested groups to import when you import
a user or group into MicroStrategy.

To understand how this setting affects the way users and groups are imported
into MicroStrategy, see the following diagram:

If you choose to import two nested groups when MicroStrategy imports LDAP
groups, the groups associated with each user are imported, up to two levels
above the user. In this case, for User 1, the groups Domestic and Marketing
would be imported. For User 3, Developers and Employees would be imported.

• If you use a single sign-on (SSO) authentication system, such as Windows
authentication or integrated authentication, determine whether you want to
import the LDAP user and group information for users of your single sign-on
system.


• Determine whether the following additional information is imported:

  • The users' email addresses. If you have a license for MicroStrategy
  Distribution Services, then when you import LDAP users, you can import
  these email addresses as contacts associated with those users.

  • The Trusted Authenticated Request User ID for a third-party user. When a
  third-party user logs in, this Trusted Authenticated Request User ID is
  used to find the linked MicroStrategy user.

  • Additional LDAP attributes to import. For example, your LDAP directory
  may include an attribute called accountExpires, which contains information
  about when users' accounts expire. The attributes in your LDAP directory
  depend on the LDAP server that you use and your LDAP configuration.

You can create security filters based on the LDAP attributes that you
import. For example, you import the LDAP attribute countryName, create a
security filter based on that LDAP attribute, and then assign that security
filter to all LDAP users. Now, when a user from Brazil views a report that
breaks down sales revenue by country, they only see the sales data for
Brazil.

For information on setting up security filters based on LDAP attributes, see
Manage LDAP Authentication, page 189.

Once you have collected the above information, you can use the LDAP
Connectivity Wizard to set up your LDAP connection. The steps are
described in Setting up LDAP Authentication in MicroStrategy Web, Library,
and Mobile, page 185.

Setting Up LDAP SDK Connectivity


From the perspective of your LDAP server, Intelligence Server is an LDAP
client that uses clear text or encrypted SSL to connect to your LDAP server
through the LDAP SDK.


The LDAP SDK is a set of connectivity file libraries (DLLs) that
MicroStrategy uses to communicate with the LDAP server. For the latest set
of certified and supported LDAP SDK files, refer to the Readme.

Intelligence Server requires that the version of the LDAP SDK you are using
supports the following:

• LDAP v. 3

• SSL connections

• 64-bit architecture on Linux platforms

For LDAP to work properly with Intelligence Server, the 64-bit LDAP
libraries must be used.

The following image shows how the behavior of the various elements in an
LDAP configuration affects the other elements in the configuration.

1. The behavior between Intelligence Server and the LDAP SDK varies
slightly depending on the LDAP SDK used. The Readme provides an
overview of these behaviors.

2. The behavior between the LDAP SDK and the LDAP server is identical,
no matter which LDAP SDK is used.

MicroStrategy recommends that you use the LDAP SDK vendor that corresponds
to the operating system vendor on which Intelligence Server is running in
your environment. Specific recommendations are listed in the Readme, with
the latest set of certified and supported LDAP SDKs, references to
MicroStrategy Tech Notes with version-specific details, and SDK download
location information.

High-Level Steps to Install the LDAP SDK DLLs

1. Download the LDAP SDK DLLs onto the machine where Intelligence
Server is installed.

2. Install the LDAP SDK.

3. Register the location of the LDAP SDK files as follows:

• Windows environment: Add the path of the LDAP SDK libraries as a system
environment variable so that Intelligence Server can locate them.

• Linux environment: Modify the LDAP.sh file located in the env folder of
your MicroStrategy installation to point to the location of the LDAP SDK
libraries. The detailed procedure is described in To Add the LDAP SDK Path
to the Environment Variable in UNIX, page 169 below.

4. Restart Intelligence Server.

To Add the LDAP SDK Path to the Environment Variable in UNIX

This procedure assumes you have installed an LDAP SDK. For high-level
steps to install an LDAP SDK, see High-Level Steps to Install the LDAP SDK
DLLs, page 169.

1. In a Linux console window, browse to HOME_PATH, where HOME_PATH is the
home directory specified during installation. Browse to the folder /env in
this path.

2. Add Write privileges to the LDAP.sh file by typing the command
chmod u+w LDAP.sh and then pressing Enter.


3. Open the LDAP.sh file in a text editor and add the library path to the
MSTR_LDAP_LIBRARY_PATH environment variable. For example:
MSTR_LDAP_LIBRARY_PATH='/path/LDAP/library'

It is recommended that you store all libraries in the same path. If you
have several paths, you can add all paths to the MSTR_LDAP_
LIBRARY_PATH environment variable and separate them by a colon
(:). For example: MSTR_LDAP_LIBRARY_
PATH='/path/LDAP/library:/path/LDAP/library2'

4. Remove Write privileges from the LDAP.sh file by typing the command
chmod a-w LDAP.sh and then pressing Enter.

5. Restart Intelligence Server for your changes to take effect.
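The colon-separated convention for MSTR_LDAP_LIBRARY_PATH in step 3 can be
illustrated with a short sketch. The library paths below are hypothetical;
on a real system they would be the directories where the LDAP SDK libraries
were installed.

```python
import os

# Hypothetical LDAP SDK library locations.
library_paths = ["/opt/ldap/sdk/lib", "/opt/ldap/sdk/lib64"]

# Multiple paths in MSTR_LDAP_LIBRARY_PATH are separated by a colon,
# exactly as in the LDAP.sh example above.
mstr_ldap_library_path = ":".join(library_paths)
print(mstr_ldap_library_path)

os.environ["MSTR_LDAP_LIBRARY_PATH"] = mstr_ldap_library_path
```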

Defining LDAP Search Filters to Verify and Import Users and Groups at Login

You must provide Intelligence Server with some specific parameters so it can
search effectively through your LDAP directory for user information.

When users attempt to log in to MicroStrategy, Intelligence Server
authenticates users by searching the LDAP directory for the user's
Distinguished Name, which is a unique way to identify users within the LDAP
directory structure.

To search effectively, Intelligence Server must know where to start its
search. When setting up LDAP authentication, it is recommended that you
indicate a search root Distinguished Name to establish the directory
location from which Intelligence Server starts all user and group searches.
If this search root is not set, Intelligence Server searches the entire LDAP
directory.

Additionally, you can specify search filters, which help narrow down the
users and groups to search.

The following sections describe the search settings that you can configure:


• Highest Level to Start an LDAP Search: Search Root, page 171 provides
examples of these parameters as well as additional details of each parameter
and some LDAP server-specific notes.

• Finding Users: User Search Filters, page 172 provides an overview of LDAP
user search filters.

• Finding Groups: Group Search Filters, page 173 provides an overview of
LDAP group search filters.

Highest Level to Start an LDAP Search: Search Root

The following diagram and table present several examples of possible search
roots based on how users might be organized within a company and within an
LDAP directory. The diagram shows a typical company's departmental
structure. The table describes several user import scenarios based on the
diagram.

The following table, based on the diagram above, provides common search
scenarios for users to be imported into MicroStrategy. The search root is the
root to be defined in MicroStrategy for the LDAP directory.

Scenario: Include all users and groups from Operations
Search Root: Operations

Scenario: Include all users and groups from Operations, Consultants, and
Sales
Search Root: Sales

Scenario: Include all users and groups from Operations, Consultants, and
Technology
Search Root: Departments (with an exclusion clause in the User/Group search
filter to exclude users who belong to Marketing and Administration)

Scenario: Include all users and groups from Technology and Operations but
not Consultants
Search Root: Departments (with an exclusion clause in the User/Group search
filter to exclude users who belong to Consultants)

For some LDAP vendors, the search root cannot be the LDAP tree's root. For
example, both Microsoft Active Directory and Sun ONE require a search to
begin from the domain controller RDN (dc). The image below shows an
example of this type of RDN, where "dc=sales, dc=microstrategy, dc=com":
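The scoping effect of a search root can be sketched as a suffix check on
Distinguished Names. Real LDAP servers perform this matching internally; the
helper function and the DNs below are illustrative, not MicroStrategy or
LDAP SDK APIs.

```python
# Illustrative check of whether an entry's DN falls under a search root.
def under_search_root(dn, search_root):
    dn_parts = [p.strip().lower() for p in dn.split(",")]
    root_parts = [p.strip().lower() for p in search_root.split(",")]
    # An entry is in scope when the search root is a suffix of its DN.
    return dn_parts[-len(root_parts):] == root_parts

print(under_search_root(
    "cn=jsmith, dc=sales, dc=microstrategy, dc=com",
    "dc=sales, dc=microstrategy, dc=com"))
```

An entry outside the root's subtree fails the suffix check, which is why an
unset search root (the directory's own root) matches everything.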

Finding Users: User Search Filters

User search filters allow MicroStrategy to efficiently search an LDAP
directory to authenticate or import a user at login.

Once Intelligence Server locates the user in the LDAP directory, the search
returns the user's Distinguished Name, and the password entered at user
login is verified against the LDAP directory. Intelligence Server uses the
authentication user to access, search in, and retrieve the information from
the LDAP directory.

Using the user's Distinguished Name, Intelligence Server searches for the
LDAP groups that the user is a member of. You must enter the group search
filter parameters separately from the user search filter parameters (see
Finding Groups: Group Search Filters, page 173).

User search filters are generally in the form (&(objectclass=LDAP_USER_
OBJECT_CLASS)(LDAP_LOGIN_ATTR=#LDAP_LOGIN#)) where:

• LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For
example, you can enter (&(objectclass=person)(cn=#LDAP_LOGIN#)).

• LDAP_LOGIN_ATTR indicates which LDAP attribute to use to store LDAP
logins. For example, you can enter (&(objectclass=person)(cn=#LDAP_
LOGIN#)).

• #LDAP_LOGIN# can be used in this filter to represent the LDAP user login.

Depending on your LDAP server vendor and your LDAP tree structure, you may
need to try different attributes within the search filter syntax above. For
example, (&(objectclass=person)(uniqueID=#LDAP_LOGIN#)), where uniqueID is
the LDAP attribute name your company uses for authentication.
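A minimal sketch of how such a filter template might be expanded at login
time. The substitution helper is illustrative and not part of MicroStrategy;
the escaping follows the RFC 4515 rules for special characters in LDAP
filter strings.

```python
# Illustrative expansion of an LDAP user search filter template.
USER_FILTER_TEMPLATE = "(&(objectclass=person)(cn=#LDAP_LOGIN#))"

def expand_filter(template, login):
    # Escape characters that are special in LDAP filters (RFC 4515)
    # so a crafted login cannot alter the filter structure.
    for ch, esc in (("\\", r"\5c"), ("*", r"\2a"),
                    ("(", r"\28"), (")", r"\29"), ("\0", r"\00")):
        login = login.replace(ch, esc)
    return template.replace("#LDAP_LOGIN#", login)

print(expand_filter(USER_FILTER_TEMPLATE, "jsmith"))
```

The backslash is escaped first so that the escape sequences themselves are
not double-escaped.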

Finding Groups: Group Search Filters

Group search filters allow MicroStrategy to efficiently search an LDAP
directory for the groups to which a user belongs. These filters can be
configured in the Intelligence Server Configuration Editor, under the LDAP
subject.

The group search filter is generally in one of the following forms (or the
forms may be combined, using a pipe | symbol to separate them):


• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_LOGIN_ATTR=#LDAP_
LOGIN#))

• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_DN_ATTR=#LDAP_DN#))

• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(gidNumber=#LDAP_GIDNUMBER#))

The group search filter forms listed above have the following placeholders:

• LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For
example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).

• LDAP_MEMBER_[LOGIN or DN]_ATTR indicates which LDAP attribute of an LDAP
group is used to store the LDAP logins/DNs of the LDAP users. For example,
you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).

• #LDAP_DN# can be used in this filter to represent the distinguished name
of an LDAP user.

• #LDAP_LOGIN# can be used in this filter to represent an LDAP user's login.

• #LDAP_GIDNUMBER# can be used in this filter to represent the UNIX or Linux
group ID number; this corresponds to the LDAP attribute gidNumber.

You can implement specific search patterns by adding additional criteria.
For example, you may have 20 different groups of users, of which only five
groups will be accessing and working in MicroStrategy. You can add
additional criteria to the group search filter to import only those five
groups.
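As a sketch of the "additional criteria" idea, the following builds a group
search filter that also restricts matches to a named subset of groups using
an LDAP OR clause. The group names and the helper function are hypothetical
illustrations, not MicroStrategy configuration.

```python
# Illustrative construction of a group search filter that adds an OR
# clause so only a named subset of groups is matched.
ALLOWED_GROUPS = ["Sales", "Marketing", "Finance", "Support", "HR"]

def group_filter(member_dn_attr="member"):
    # Build one (cn=...) clause per allowed group, OR-ed together.
    allowed = "".join(f"(cn={name})" for name in ALLOWED_GROUPS)
    return (f"(&(objectclass=groupOfNames)"
            f"({member_dn_attr}=#LDAP_DN#)"
            f"(|{allowed}))")

print(group_filter())
```

Intelligence Server would substitute #LDAP_DN# with the logging-in user's
Distinguished Name before sending the filter to the LDAP server.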

Determining Whether to Use Connection Pooling

With connection pooling, you can reuse an open connection to the LDAP server for subsequent operations. The connection to the LDAP server remains open even when it is not processing any operations (this is known as pooling). This setting can improve performance by removing the processing time required to open and close a connection to the LDAP server for each operation.

If you do not use connection pooling, the connection to an LDAP server is closed after each request. If requests are sent to the LDAP server infrequently, this can help reduce the use of network resources.
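The difference between the two strategies can be sketched as follows. The classes below are a toy illustration of the pooled and unpooled behavior, not the MicroStrategy or LDAP SDK API:

```python
# Minimal sketch of the two connection strategies described above.
# FakeLDAPConnection stands in for a real LDAP connection; it is an
# illustration only and does not open any network resources.
class FakeLDAPConnection:
    opened = 0
    def __init__(self):
        FakeLDAPConnection.opened += 1  # count how many connections are opened

class PooledClient:
    """Keeps one connection open and reuses it for every operation."""
    def __init__(self):
        self._conn = None
    def connection(self):
        if self._conn is None:
            self._conn = FakeLDAPConnection()
        return self._conn

class UnpooledClient:
    """Opens a fresh connection per operation and closes it afterward."""
    def connection(self):
        return FakeLDAPConnection()

pooled, unpooled = PooledClient(), UnpooledClient()
for _ in range(3):
    pooled.connection()      # reuses the same open connection
for _ in range(3):
    unpooled.connection()    # opens (and would close) a new connection each time
print(FakeLDAPConnection.opened)  # 4: one pooled connection plus three unpooled
```

With pooling, three operations cost one connection setup; without it, every operation pays the setup cost, which is the performance trade-off this section describes.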

Connection Pooling with Clustered LDAP Servers

You may have multiple LDAP servers that work together as a cluster.

If connection pooling is disabled, when a request to open an LDAP connection is made, the LDAP server with the lightest load at the time of the request is accessed. The operation against the LDAP directory is then completed and, because connection pooling is not in use, the connection to the LDAP server is closed. When the next request to open an LDAP connection is made, the LDAP server with the lightest load is determined again and chosen.

If you enable connection pooling for a clustered LDAP environment, the behavior is different. On the first request to open an LDAP connection, the LDAP server with the lightest load at the time of the request is accessed. However, the connection to the LDAP server is not closed, because connection pooling is enabled. On the next request to open an LDAP connection, the currently open connection is reused instead of determining again which LDAP server has the lightest load.

The diagrams below illustrate how subsequent connections to a clustered LDAP server environment are handled, depending on whether connection pooling is enabled or disabled.


Determining Whether to Use Authentication Binding or Password Comparison

When MicroStrategy attempts to authenticate an LDAP user logging in to MicroStrategy, you can choose to perform an LDAP bind to authenticate the user, or simply to verify the user name and password.

By implementing authentication binding, MicroStrategy authenticates the user by logging in to the LDAP server with the user's credentials, and assessing the following user restrictions:

l Whether the LDAP password is incorrect, has been locked out, or has expired

l Whether the LDAP user account has been disabled, or has been identified as an intruder and is locked out

If MicroStrategy can verify that none of these restrictions are in effect for this user account, MicroStrategy performs an LDAP bind and successfully authenticates the user logging in. This is the default behavior for users and groups that have been imported into MicroStrategy.

Alternatively, you can choose to have MicroStrategy verify only the accuracy of the password with which the user logged in, and not check for additional restrictions on the password or user account. To support password comparison authentication, your LDAP server must also be configured to allow password comparison.
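The distinction between the two checks can be sketched against a toy in-memory "directory." The account fields below (password, disabled, expired) are assumptions for illustration; a real LDAP server surfaces these restrictions through the bind operation itself:

```python
# Sketch of bind-style authentication versus password comparison.
# DIRECTORY is a stand-in for an LDAP directory; entries are illustrative.
DIRECTORY = {
    "jdoe": {"password": "s3cret", "disabled": False, "expired": False},
    "asmith": {"password": "pass1", "disabled": True, "expired": False},
}

def bind_authenticate(login, password):
    """Bind-style check: the password AND account restrictions must pass."""
    entry = DIRECTORY.get(login)
    return (entry is not None
            and entry["password"] == password
            and not entry["disabled"]
            and not entry["expired"])

def password_compare(login, password):
    """Comparison-style check: only the password is verified."""
    entry = DIRECTORY.get(login)
    return entry is not None and entry["password"] == password

print(bind_authenticate("asmith", "pass1"))   # False: account is disabled
print(password_compare("asmith", "pass1"))    # True: password alone matches
```

This is why bind authentication is the stricter (and default) behavior: a correct password is not sufficient if the account itself is restricted.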

Determining Whether to Enable Database Passthrough Execution with LDAP

In MicroStrategy, a single user name and password combination is frequently used to connect to and execute jobs against a database. However, you can choose to pass the LDAP user name and password that a user logs in to MicroStrategy with through to the database. The database is then accessed and jobs are executed using the LDAP user name and password. This allows each user logged in to MicroStrategy to execute jobs against the database using their unique user name and password, which can be given a different set of privileges than other users.

Database passthrough execution is selected for each user individually. For general information on selecting user authentication, see About MicroStrategy Users, page 81.


If a user's password is changed during a session in MicroStrategy, scheduled tasks may fail to run when using database passthrough execution. Consider the following scenario.

A user with user login UserA and password PassA logs in to MicroStrategy at 9:00 A.M. and creates a new report. The user schedules the report to run at 3:00 P.M. later that day. Since there is no report cache, the report will be executed against the database. At noon, an administrator changes UserA's password to PassB. UserA does not log back in to MicroStrategy, and at 3:00 P.M. the scheduled report is run with the credentials UserA and PassA, which are passed to the database. Since these credentials are now invalid, the scheduled report execution fails.

To prevent this problem, schedule password changes for a time when users are unlikely to run scheduled reports. For users who use database passthrough execution and regularly run scheduled reports, ask them to reschedule all reports if their passwords have been changed.
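The failure mode in this scenario can be sketched in a few lines: the scheduled job captures the credentials in effect when it is created, so a later password change invalidates it. All names here are illustrative:

```python
# Sketch of the passthrough-execution failure described above.
database_passwords = {"UserA": "PassA"}   # what the database currently accepts

class ScheduledReport:
    def __init__(self, login, password):
        # Credentials are captured at scheduling time (9:00 A.M.)
        self.login, self.password = login, password
    def run(self):
        # The stored credentials are passed to the database at run time.
        return database_passwords.get(self.login) == self.password

report = ScheduledReport("UserA", "PassA")   # scheduled with the current password
database_passwords["UserA"] = "PassB"        # noon: administrator changes it
print(report.run())                          # False: the stale credentials fail
```

Rescheduling the report after the password change recreates it with the new credentials, which is the remediation this section recommends.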

Determining Whether to Import LDAP Users into MicroStrategy

To connect your LDAP users and groups to users and groups in MicroStrategy, you can either import the LDAP users and groups into the MicroStrategy metadata, or you can create a link between users and groups in the LDAP directory and in MicroStrategy. Importing a user creates a new user in MicroStrategy based on an existing user in the LDAP directory. Linking a user connects an LDAP user's information to an existing user in MicroStrategy. You can also allow LDAP users to log in to the MicroStrategy system anonymously, without an associated MicroStrategy user. The benefits and considerations of each method are described below.


Import LDAP users and groups

l Benefits: Users and groups are created in the metadata. Users and groups can be assigned additional privileges and permissions in MicroStrategy. Users have their own inboxes and personal folders in MicroStrategy.

l Considerations: In environments that have many LDAP users, importing can quickly fill the metadata with these users and their related information. Users and groups may not have the correct permissions and privileges when they are initially imported into MicroStrategy.

Link users and groups without importing

l Benefits: For environments that have many LDAP users, linking avoids filling the metadata with users and their related information. You can use Command Manager to automate the linking process using scripts.

l Considerations: Users to be linked to must already exist in the MicroStrategy metadata.

Allow anonymous or guest users

l Benefits: Users can log in immediately without having to create a new MicroStrategy user.

l Considerations: Privileges are limited to those for the Public/Guest group and LDAP Public group. Users' personal folders and Inboxes are deleted from the system after they log out.

The options for importing users into MicroStrategy are described in detail in the following sections:

l Importing LDAP Users and Groups into MicroStrategy, page 180

l Linking Users and Groups Without Importing, page 181

l Allowing Anonymous/Guest Users with LDAP Authentication, page 182

You can modify your import settings at any time, for example, if you choose not to import users initially, but want to import them at some point in the future. The steps to modify your LDAP settings are described in Manage LDAP Authentication, page 189.

Importing LDAP Users and Groups into MicroStrategy

You can choose to import LDAP users and groups at login, in a batch
process, or a combination of the two. Imported users are automatically
members of MicroStrategy's LDAP Users group, and are assigned the
access control list (ACL) and privileges of that group. To assign different
ACLs or privileges to a user, you can move the user to another
MicroStrategy user group.

When an LDAP user is imported into MicroStrategy, you can also choose to
import that user's LDAP groups. If a user belongs to more than one group,
all the user's groups are imported and created in the metadata. Imported
LDAP groups are created within MicroStrategy's LDAP Users folder and in
MicroStrategy's User Manager.

LDAP users and LDAP groups are all created within the MicroStrategy LDAP
Users group at the same level. While the LDAP relationship between a user
and any associated groups exists in the MicroStrategy metadata, the
relationship is not visually represented in Developer. For example, looking
in the LDAP Users folder in MicroStrategy immediately after an import or
synchronization, you might see the following list of imported LDAP users and
groups:

If you want a user's group memberships to be reflected in MicroStrategy, you must manually move the user into the appropriate groups.

The relationship between an imported LDAP user or group and the MicroStrategy user or group is maintained by a link in the MicroStrategy metadata, which is in the form of a Distinguished Name. A Distinguished Name (DN) is the unique identifier of an entry (in this case, a user or group) in the LDAP directory.
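A DN is a comma-separated sequence of attribute=value components, read from the entry outward. The parser below is a minimal sketch that handles simple DNs without escaped commas; the sample DN is an illustrative value, not one taken from this guide:

```python
# Sketch: split a simple DN (no escaped commas) into attribute/value pairs.
def parse_dn(dn: str) -> dict:
    pairs = [component.split("=", 1) for component in dn.split(",")]
    result = {}
    for attr, value in pairs:
        # Attributes such as dc can repeat, so collect values in a list.
        result.setdefault(attr.strip().lower(), []).append(value.strip())
    return result

dn = "cn=Joe Doe,ou=Sales,dc=example,dc=com"
parsed = parse_dn(dn)
print(parsed["cn"])   # ['Joe Doe']
print(parsed["dc"])   # ['example', 'com']
```

Because the DN encodes the entry's position in the directory tree (here, a user in the Sales organizational unit), it serves as the stable identifier that MicroStrategy stores to link its user or group to the LDAP entry.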

The MicroStrategy user's Distinguished Name is different from the DN assigned for the authentication user. The authentication user's DN is the DN of the MicroStrategy account that is used to connect to the LDAP server and search the LDAP directory. The authentication user can be anyone who has search privileges in the LDAP server, and is generally the LDAP administrator.

Removing a user from the LDAP directory does not affect the user's presence in the MicroStrategy metadata. Deleted LDAP users are not automatically deleted from the MicroStrategy metadata during synchronization. You can revoke a user's privileges in MicroStrategy, or remove the user manually.

You cannot export users or groups from MicroStrategy to an LDAP directory.

Linking Users and Groups Without Importing

A link is a connection between an LDAP user or group and a MicroStrategy user or group that allows an LDAP user to log in to MicroStrategy. Unlike an imported LDAP user, a linked LDAP user is not created in the MicroStrategy metadata.

An LDAP group can only be linked to a MicroStrategy group, and an LDAP user can only be linked to a MicroStrategy user. It is not possible to link a group to a user without giving the user membership in the group.

When an LDAP user or group is linked to an existing MicroStrategy user or group, no new user or group is created within the MicroStrategy metadata as with importing. Instead, a link is established between an existing MicroStrategy user or group and an LDAP user or group, which allows the LDAP user to log in to MicroStrategy.


The link between an LDAP user or group and the MicroStrategy user or group is maintained in the MicroStrategy metadata in the form of a shared Distinguished Name.

The user's or group's LDAP privileges are not linked with the MicroStrategy user. In MicroStrategy, a linked LDAP user or group receives the privileges of the MicroStrategy user or group to which it is linked.

LDAP groups cannot be linked to MicroStrategy user groups. For example, you cannot link an LDAP group to MicroStrategy's Everyone group. However, it is possible to link an LDAP user to a MicroStrategy user that has membership in a MicroStrategy group.

Allowing Anonymous/Guest Users with LDAP Authentication

An LDAP anonymous login is an LDAP login with an empty login and/or an empty password. A successful LDAP anonymous login is authorized with the privileges and access rights of the LDAP Public and Public/Guest groups. The LDAP server must be configured to allow anonymous or guest authentication requests from MicroStrategy.

Because guest users are not present in the metadata, there are certain actions these users cannot perform in MicroStrategy, even if the associated privileges and permissions are explicitly assigned. Examples include most administrative actions.

When the user is logged in as an anonymous/guest user:

l The user does not have a History List, because the user is not physically
present in the metadata.

l The user cannot create objects and cannot schedule reports.

l The User Connection monitor records the LDAP user's user name.

l Intelligence Server statistics record the session information under the user
name LDAP USER.


Determining Whether to Automatically Synchronize LDAP User and Group Information

In any company's security model, steps must be taken to account for a changing group of employees. Adding new users and removing those who are no longer with the company is straightforward. Accounting for changes in a user's name or group membership can prove more complicated. To ease this process, MicroStrategy supports user name/login and group synchronization with the information contained in an LDAP directory.

If you choose to have MicroStrategy automatically synchronize LDAP users and groups, any LDAP group changes that have occurred within the LDAP server are applied within MicroStrategy the next time an LDAP user logs in to MicroStrategy. This keeps the LDAP directory and the MicroStrategy metadata in synchronization.

By synchronizing users and groups between your LDAP server and MicroStrategy, you can update the imported LDAP users and groups in the MicroStrategy metadata with the following modifications:

l User synchronization: User details, such as the user name in MicroStrategy, are updated with the latest definitions in the LDAP directory.

l Group synchronization: Group details, such as the group name in MicroStrategy, are updated with the latest definitions in the LDAP directory.

When synchronizing LDAP users and groups in MicroStrategy, be aware of the following circumstances:

l If an LDAP user or group has been given new membership in a group that has not been imported or linked to a group in MicroStrategy, and import options are turned off, the group cannot be imported into MicroStrategy and thus cannot apply its permissions in MicroStrategy.

For example, User1 is a member of Group1 in the LDAP directory, and both have been imported into MicroStrategy. Then, in the LDAP directory, User1 is removed from Group1 and given membership in Group2. However, Group2 is not imported or linked to a MicroStrategy group. Upon synchronization, in MicroStrategy, User1 is removed from Group1 and is recognized as a member of Group2. However, any permissions for Group2 are not applied for the user until Group2 is imported or linked to a MicroStrategy group. In the interim, User1 is given the privileges and permissions of the LDAP Users group.

l When users and groups are deleted from the LDAP directory, the corresponding MicroStrategy users and groups that have been imported from the LDAP directory remain in the MicroStrategy metadata. You can revoke those users' and groups' privileges in MicroStrategy and remove the users and groups manually.

l Regardless of your synchronization settings, if a user's password is modified in the LDAP directory, the user must log in to MicroStrategy with the new password. LDAP passwords are not stored in the MicroStrategy metadata. MicroStrategy uses the credentials provided by the user to search for and validate the user in the LDAP directory.

Consider a user named Joe Doe who belongs to a particular group, Sales,
when he is imported into MicroStrategy. Later, he is moved to a different
group, Marketing, in the LDAP directory. The LDAP user Joe Doe and LDAP
groups Sales and Marketing have been imported into MicroStrategy. Finally,
the user name for Joe Doe is changed to Joseph Doe, and the group name
for Marketing is changed to MarketingLDAP.

The images below show a sample LDAP directory with user Joe Doe being
moved within the LDAP directory from Sales to Marketing.


The following table describes what happens with users and groups in MicroStrategy if users, groups, or both are synchronized.

Sync Users?  Sync Groups?  User Name After Synchronization  Group Name After Synchronization
No           No            Joe Doe                          Marketing
No           Yes           Joe Doe                          MarketingLDAP
Yes          No            Joseph Doe                       Marketing
Yes          Yes           Joseph Doe                       MarketingLDAP
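The synchronization outcomes follow directly from which settings are enabled. A minimal sketch, using the Joe Doe example from this section:

```python
# Sketch of the synchronization outcomes described above.
LDAP_USER_NAME = "Joseph Doe"        # current user name in the LDAP directory
LDAP_GROUP_NAME = "MarketingLDAP"    # current group name in the LDAP directory
MSTR_USER_NAME = "Joe Doe"           # user name as originally imported
MSTR_GROUP_NAME = "Marketing"        # group name as originally imported

def after_sync(sync_users: bool, sync_groups: bool):
    """Return the (user name, group name) MicroStrategy shows after sync."""
    user = LDAP_USER_NAME if sync_users else MSTR_USER_NAME
    group = LDAP_GROUP_NAME if sync_groups else MSTR_GROUP_NAME
    return user, group

for sync_users in (False, True):
    for sync_groups in (False, True):
        print(sync_users, sync_groups, after_sync(sync_users, sync_groups))
```

Each setting independently controls whether the LDAP definition or the originally imported definition wins, which is why the four combinations produce four distinct name pairs.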

Setting up LDAP Authentication in MicroStrategy Web, Library, and Mobile

When you have collected the connection information for your LDAP server and your LDAP SDK, you can use the LDAP Connectivity Wizard to set up your LDAP connection. The LDAP Connectivity Wizard steps you through the initial setup of using your LDAP server to authenticate users and groups in MicroStrategy. The steps to set up your LDAP connection are the same for MicroStrategy Web, MicroStrategy Library, and MicroStrategy Mobile.


l You have collected the information for your LDAP server, and made
decisions regarding the LDAP authentication methods you want to use, as
described in Checklist: Information Required for Connecting Your LDAP
Server to MicroStrategy, page 162

l If you want Intelligence Server to access your LDAP server over a secure SSL connection, you must do the following:

1. Obtain a valid certificate from your LDAP server and save it on the machine where Intelligence Server is installed. The steps to obtain the certificate depend on your LDAP vendor and the operating system that your LDAP server runs on. For specific steps, refer to the documentation for your LDAP vendor.

2. Follow the procedure recommended by your operating system to install the certificate.

To Set up LDAP Authentication in MicroStrategy

Connecting Your LDAP Server Using the LDAP Connectivity Wizard

1. In Developer, log in to a project source as a user with administrative privileges.

2. From the Administration menu, select Server, and click LDAP Connectivity Wizard.

3. On the Welcome page, click Next.

4. Type the following information:

l Host: The machine name or IP address of the LDAP server.

l Port: The network port that the LDAP server uses. For clear text connections, the default value is 389. If you want Intelligence Server to access your LDAP server over an encrypted SSL connection, the default value is 636.


5. If you want Intelligence Server to access your LDAP server over an encrypted SSL connection, select SSL (encrypted). The Server Certificate file field is enabled.

6. In the Server Certificate file field, depending on your LDAP server vendor, point to the SSL certificate as follows:

l Microsoft Active Directory: No information is required.

l Sun ONE/iPlanet: Provide the path to the certificate. Do not include the file name.

l Novell: Provide the path to the certificate, including the file name.

l IBM: Use Java GSKit 7 to import the certificate, and provide the key database name with the full path, starting with the home directory.

l OpenLDAP: Provide the path to the directory that contains the CA certificate file cacert.pem, the server certificate file servercrt.pem, and the server certificate key file serverkey.pem.

7. Click Next.

8. Enter the details of your LDAP SDK, and click Next.

9. Step through the LDAP Connectivity Wizard to enter the remaining information, such as the LDAP search filters to use to find users, whether to import users into MicroStrategy, and so on.

10. When you have entered all the information, click Finish to exit the LDAP Connectivity Wizard. You are prompted to test the LDAP connection. It is recommended that you test the connection to catch any errors in the connection parameters you have provided.

Enabling LDAP Authentication for Your Project Source

1. In the Folder List, right-click the project source, and select Modify Project Source.

2. On the Advanced tab, select Use LDAP Authentication.

3. Click OK.

Enabling LDAP Authentication for MicroStrategy Web

1. From the Windows Start menu, go to All Programs > MicroStrategy Tools > Web Administrator.

2. Select Intelligence Server > Default Properties.

3. In the Login area, for LDAP Authentication, select the Enabled check box.

4. Select the Default option to set LDAP as the default authentication mode.

If your environment includes multiple Intelligence Servers connected to one MicroStrategy Web server, users are authenticated to all the Intelligence Servers using their LDAP credentials, and are then shown a list of projects they can access. However, if one or more of the Intelligence Servers does not use LDAP authentication, the projects for those servers may not be displayed. To avoid this scenario, in the Project list drop-down menu, ensure that Show all the projects connected to the Web Server before the user logs in is selected.

5. Click Save.

Enabling LDAP Authentication for MicroStrategy Library

1. Launch the Library Admin page by entering the following URL in your web browser:

http://<FQDN>:<port>/MicroStrategyLibrary/admin

where <FQDN> is the fully qualified domain name of the machine hosting your MicroStrategy Library application and <port> is the assigned port number.

2. On the Library Web Server tab, select LDAP from the list of available Authentication Modes.

3. Click Save.

4. Restart your web server to apply the change.

Manage LDAP Authentication

While working with MicroStrategy and implementing LDAP authentication, you may want to improve performance or troubleshoot your LDAP implementation. The sections below cover steps that can help your LDAP authentication and MicroStrategy systems work as a cohesive unit.

l If your LDAP server information changes, or to edit your LDAP authentication settings in general, see Modifying Your LDAP Authentication Settings, page 190.

l If you want to modify the settings for importing users into MicroStrategy, for example, if you initially chose not to import users and now want to import users and groups, see Importing LDAP Users and Groups into MicroStrategy, page 190.

l If you choose to synchronize users and groups in batches, and want to select a synchronization schedule, see Selecting Schedules for Importing and Synchronizing Users, page 195.

l If you are using single sign-on (SSO) authentication systems, such as Windows NT authentication or trusted authentication, you can link users' SSO credentials to their LDAP user names, as described in Using LDAP with Single Sign-On Authentication Systems, page 195.

l Depending on the way your LDAP directory is configured, you can import additional LDAP attributes for users, for example, a countryCode attribute indicating the user's location. These additional LDAP attributes can be used to create security filters for users, such as displaying data that is relevant to the user's country. For information on creating these security filters, see Using LDAP Attributes in Security Filters, page 196.


Modifying Your LDAP Authentication Settings

Depending on changes in your organization's policies, you may need to modify the LDAP authentication settings in MicroStrategy. To modify your LDAP authentication settings, use the Intelligence Server Configuration Editor. The steps to access the LDAP settings in the Intelligence Server Configuration Editor are described below.

To Access LDAP Authentication Settings in the Intelligence Server Configuration Editor

1. In Developer, log in to a project source as a user with administrative privileges.

2. From the Administration menu, select Server, and click Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category. The LDAP settings are displayed. You can modify the following:

l Your LDAP server settings, such as the machine name, port, and so on.

l Your LDAP SDK information, such as the location of the LDAP SDK DLL files.

l The LDAP search filters that Intelligence Server uses to find and authenticate users.

l If you are importing and synchronizing users or groups in batches, the synchronization schedules.

l If you are importing users and groups, the import settings.

Importing LDAP Users and Groups into MicroStrategy

You can choose to import LDAP users and groups at login, in a batch process, or a combination of the two, described as follows:

l Importing users and groups at login: When an LDAP user logs in to MicroStrategy for the first time, that user is imported into MicroStrategy and a physical MicroStrategy user is created in the MicroStrategy metadata. Any groups associated with that user that are not already in MicroStrategy are also imported and created in the metadata.

l Importing users and groups in batches: The lists of users and groups are returned from user and group searches on your LDAP directory. MicroStrategy users and groups are created in the MicroStrategy metadata for all imported LDAP users and groups.

This section covers the following:

l For information on setting up user and group import options, see Importing Users and Groups into MicroStrategy, page 191.

l Once you have set up user and group import options, you can import additional LDAP information, such as users' email addresses, or specific LDAP attributes. For steps, see Importing Users' Email Addresses, page 193.

l For information on assigning security settings after users are imported, see User Privileges and Security Settings after Import, page 194.

Importing Users and Groups into MicroStrategy

You can choose to import users and their associated groups when a user logs in to MicroStrategy for the first time.

l Ensure that you have reviewed the information and made decisions regarding your organization's policy on importing and synchronizing user information, described in Checklist: Information Required for Connecting Your LDAP Server to MicroStrategy, page 162.

l If you want to import users and groups in batches, you must define the LDAP search filters to return lists of users and groups to import into MicroStrategy. For information on defining search filters, see Checklist: Information Required for Connecting Your LDAP Server to MicroStrategy, page 162.

To Import Users and/or Groups into MicroStrategy

1. In Developer, log in to a project source as a user with administrative privileges.

2. From the Administration menu, select Server > Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand Import, and select Import/Synchronize.

4. If you want to import user and group information when users log in, in the Import/Synchronize at Login area, do the following:

l To import users at login, select Import Users.

l To automatically synchronize MicroStrategy's user information with the LDAP user information, select Synchronize MicroStrategy User Login/User Name with LDAP.

l To import groups at login, select Import Groups.

l To automatically synchronize MicroStrategy's group information with the LDAP group information, select Synchronize MicroStrategy Group Name with LDAP.

5. If you want to import user and group information in batches, in the Import/Synchronize in Batch area, do the following:

l To import users in batches, select Import Users. You must also enter a user search filter in the Enter search filter for importing list of users field to return a list of users to import.

l To synchronize MicroStrategy's user information with the LDAP user information, select Synchronize MicroStrategy User Login/User Name with LDAP.

l To import groups in batches, select Import Groups. You must also enter a group search filter in the Enter search filter for importing list of groups field to return a list of groups to import.

l To synchronize MicroStrategy's group information with the LDAP group information, select Synchronize MicroStrategy Group Name with LDAP.

6. To modify the way that LDAP user and group information is imported, for example, to import group names as the LDAP distinguished name, under the LDAP category, under Import, click User/Group.

7. Click OK.

Once a user or group is created in MicroStrategy, the users are given their own inboxes and personal folders. Additionally, you can do the following:

l Import users' email addresses. For steps, see Importing Users' Email Addresses, page 193.

l Assign privileges and security settings that control what a user can access in MicroStrategy. For information on assigning security settings after users are imported, see User Privileges and Security Settings after Import, page 194.

l Import additional LDAP attributes, which can then be used in security filters for users. For steps, see Using LDAP Attributes in Security Filters, page 196.

Importing Users' Email Addresses

Depending on your requirements, you can import additional information, such as users' email addresses, from your LDAP directory. For example, if you have a license for MicroStrategy Distribution Services, then when you import LDAP users, either in a batch or at login, you can import these email addresses as contacts associated with those users.

To Import Users' Email Addresses from LDAP

1. In Developer, log in to a project source as a user with administrative privileges.

2. From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand Import, and select Options.

4. Select Import Email Address.

5. Select whether to use the default LDAP email address attribute of mail, or to use a different attribute. If you want to use a different attribute, specify it in the text field.

6. From the Device drop-down list, select the email device that the email addresses are to be associated with.

7. Click OK.

User Privileges and Security Settings after Import

Imported users receive the privileges of the MicroStrategy LDAP Users group. You can add additional privileges to specific users in the LDAP Users group using the standard MicroStrategy process in the User Editor. You can also adjust privileges for the LDAP Users group as a whole. Group privileges can be modified using the MicroStrategy Group Editor.

The privileges and security settings assigned to LDAP users imported into MicroStrategy depend on the users' associated MicroStrategy group privileges and security permissions. To see the default privileges assigned to a user or group, in the folder list, expand your project source, expand Administration, and then expand User Manager. Right-click the group (or select the group and right-click the user) and select Edit. The Project Access tab displays all privileges for each project in the project source.

The process of synchronizing users and groups can modify which groups a user belongs to, and thus modify the user's privileges and security settings.

Selecting Schedules for Importing and Synchronizing Users


If you choose to synchronize users and groups in batches, you can select a
schedule that dictates when LDAP users and groups are synchronized in
MicroStrategy. For information on creating and using schedules, see
Creating and Managing Schedules, page 1321. To select a synchronization
schedule for LDAP, follow the steps below.

To Select a Schedule for Importing and Synchronizing Users

1. In Developer, log in to a project source as a user with administrative
privileges.

2. From the Administration menu, select Server, and then select
Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then click Schedules. The available
schedules are displayed. By default, the checkboxes for all the
schedules are cleared.

4. Select the schedules to use as LDAP user and group synchronization
schedules.

5. To synchronize your MicroStrategy users and groups with the latest
LDAP users and groups immediately, select Run schedules on save.

6. Click OK.

Using LDAP with Single Sign-On Authentication Systems


If you are using single sign-on (SSO) authentication systems, such as
Windows NT authentication or trusted authentication, you can link users'
SSO credentials to their LDAP user names, and import the LDAP user and
group information into MicroStrategy. For information about configuring a
single sign-on system, see Enable Single Sign-On Authentication, page 198.

Depending on the SSO authentication system you are using, refer to one of
the following sections for steps:

l If you are using Windows NT authentication, see Implement Windows NT
Authentication, page 540.

l If you are using integrated or trusted authentication, see Linking
integrated authentication users to LDAP users.

Using LDAP Attributes in Security Filters


You may want to integrate LDAP attributes into your MicroStrategy security
model. For example, suppose you want users to see only sales data for their
own country. You import the LDAP attribute countryName, create a security
filter based on that LDAP attribute, and then assign that security filter to
all LDAP users. Now, when a user from Brazil views a report that breaks
down sales revenue by country, they only see the sales data for Brazil.
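Conceptually, the resulting security filter qualification compares the report's attribute against the value of the system prompt created from the LDAP attribute. The expression below is illustrative only; the exact attribute form and prompt token syntax depend on your project and on how the system prompt was created:

```
Country@DESC = ?[countryName]
```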

LDAP attributes are imported into MicroStrategy as system prompts. A
system prompt is a special type of prompt that is answered automatically by
Intelligence Server. The LDAP attribute system prompts are answered with
the related LDAP attribute value for the user who executes the object
containing the system prompt. You import LDAP attributes into
MicroStrategy from the Intelligence Server Configuration Editor.

Once you have created system prompts based on your LDAP attributes, you
can use those system prompts in security filters to restrict the data that your
users can see based on their LDAP attributes. For information about
security filters, including instructions for using system prompts in them,
see Restricting Access to Data: Security Filters, page 121.

To Import an LDAP Attribute into a Project

1. In Developer, log in to a project source.

2. From the Administration menu, point to Server and then select
Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand the Import category, and then
select Attributes.

4. From the Select LDAP Attributes drop-down list, select the LDAP
attribute to import.

5. From the Data Type drop-down list, select the data type of that
attribute.

6. Click Add.

7. Click OK.

Controlling Project Access with LDAP Attributes

By default, an LDAP user can log in to a project source even if the LDAP
attributes that are used in system prompts are not defined for that user. To
increase the security of the system, you can prevent LDAP users from
logging in to a project source if all LDAP attributes that are used in system
prompts are not defined for that user.

When you select this option, you prevent all LDAP users from logging in to
the project source if they do not have all the required LDAP attributes. This
affects all users using LDAP authentication, and also any users using
Windows, Trusted, or Integrated authentication if those authentication
systems have been configured to use LDAP. For example, if you are using
Trusted authentication with a SiteMinder single sign-on system, and
SiteMinder is configured to use an LDAP directory, this option prevents
SiteMinder users from logging in if they do not have all the required LDAP
attributes.

l This setting prevents users from logging in to all projects in a project
source.

l If your system uses multiple LDAP servers, make sure that all LDAP
attributes used by Intelligence Server are defined on all LDAP servers. If
a required LDAP attribute is defined on LDAP server A and not on LDAP
server B, and the User login fails if LDAP attribute value is not read
from the LDAP server checkbox is selected, users from LDAP server B
will not be able to log in to MicroStrategy.

To Only Allow Users with All Required LDAP Attributes to Log In to the
System

1. In Developer, log in to a project source.

2. From the Administration menu, point to Server and then select
Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand the Import category, and then
select Attributes.

4. Select the User login fails if LDAP attribute value is not read from
the LDAP server checkbox.

5. Click OK.

Troubleshooting
You may encounter problems or errors while integrating MicroStrategy with
your LDAP directory. For troubleshooting information and procedures, see
Troubleshooting LDAP Authentication, page 2918.

Enable Single Sign-On Authentication


Enabling authentication to multiple applications using a single login is
known as single sign-on authentication. The topics below explain the
different types of authentication that can be used to enable single sign-on
in MicroStrategy.

Chrome Web Browser version 80 introduces new changes to cross-site
embedding. For more information, see KB484005: Chrome v80 Cookie
Behavior and the Impact on MicroStrategy Deployments.

This page applies to MicroStrategy 2021 Update 4 and newer versions.

Upgrade OpenSAML in MicroStrategy


MicroStrategy prioritizes security and is constantly working to stay abreast
of the latest security standards and enhancements. Upgrading the
OpenSAML component within MicroStrategy is a vital step of this journey.

In MicroStrategy 2021 Update 4, org.opensaml has been upgraded from
v2.6.7 to v4.1.0. The spring-security-saml2-core framework, which reached
end of life on October 6, 2021, has also been replaced with the newer, more
secure spring-security-saml2-service-provider v5.5.3.

What this means for you:

l If the MicroStrategy environment is configured to use SAML without any
customization, the upgrade to MicroStrategy 2021 Update 4 is completely
seamless and no additional steps are required.

l This change does not impact SAML on ASP.

l If the MicroStrategy environment is configured to use SAML and there
have been additional customizations added to this configuration,
additional steps may need to be followed after the upgrade. The steps are
simple and often just require replacing classes with newer, more secure
classes.

Please note the following for this upgrade:

l Single and global logout are not supported in MicroStrategy 2021 Update
4. See the official Spring documentation for details.

l This new SAML version now supports multi-tenant customizations. See
the official Spring documentation for details.

l If AuthnRequest is required to be signed by the IdP server, set
WantAuthnRequestsSigned=true in the IdP configuration of
IDPMetadata.xml. This assertion is required to be signed unless the
response is signed. See the official Spring documentation for details.

Upgrade Customized SAML Configurations


Modify the existing SAML authentication customizations to be compatible
with the new OpenSAML framework for the following:

l MicroStrategy Web and Mobile 2021 Update 4

l MicroStrategy Library 2021 Update 4

Build New Customizations


Build new SAML authentication customizations for the following:

l MicroStrategy Web and Mobile 2021 Update 4

l MicroStrategy Library 2021 Update 4


This page applies to MicroStrategy 2021 Update 4 and newer versions.

SAML Customization for MicroStrategy Library


Starting in MicroStrategy 2021 Update 4, the SAML framework
spring-security-saml2-service-provider v5.5.3 and OpenSAML v4.1.0 are
used. This page illustrates the SAML workflow and the beans you may
leverage for customization.

SAML Login Workflow

The diagrams and workflows below illustrate how authentication-related
requests are handled with different authentication configurations. The
following are true for these workflow diagrams:

l Double-line arrows represent HTTP requests and responses. Single-line
arrows represent Java calls.

l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.

l Only beans involved in request authentication are included. Filters that
simply pass the request along the filter chain or perform actions not
directly involved in request authentication are not included. As described
in the Spring Security architecture, each request passes through multiple
Spring Security filters.
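The filter-chain behavior described above is a chain-of-responsibility: each filter either handles the request itself or passes it along to the rest of the chain. The following dependency-free sketch illustrates that pattern; all names here are illustrative, not MicroStrategy or Spring classes:

```java
import java.util.Iterator;
import java.util.List;

// Minimal chain-of-responsibility sketch of a Spring Security-style filter
// chain. Each "filter" may act on the request and then invoke the rest of
// the chain, or stop the chain by not invoking it.
public class FilterChainSketch {

    interface Filter {
        void doFilter(StringBuilder request, Runnable chain);
    }

    static void run(List<Filter> filters, StringBuilder request) {
        Iterator<Filter> it = filters.iterator();
        Runnable[] next = new Runnable[1];
        next[0] = () -> {
            if (it.hasNext()) {
                // Each filter decides whether to call the rest of the chain
                it.next().doFilter(request, next[0]);
            }
        };
        next[0].run();
    }
}
```

Filters that "simply pass the request along" call `chain.run()` without touching the request; authentication filters act on it first.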

Process to Generate <saml2:AuthnRequest>

1. The multi-mode login page submits a POST:
{BasePath}/auth/login request, which is intercepted by the
mstrMultiModeFilter bean.

2. The multi-mode login filter recognizes this is a SAML login request and
delegates the work to the mstrMultiModeFilter SAML login filter bean.

3. The SAML login filter delegates to the mstrSamlEntryPoint SAML
entry point bean, which performs a redirection to
/saml/authenticate by default.

This redirection is designed to support a multi-tenant scenario. If
you've configured more than one asserting party, you can first redirect
the user to a picker or, in most cases, leave it as is.

4. The browser is redirected and sends a GET:
{BasePath}/saml/authenticate request, which is intercepted by
the mstrSamlAuthnRequestFilter bean.

5. The mstrSamlAuthnRequestFilter bean is the endpoint that
creates, signs, serializes, and encodes a <saml2:AuthnRequest>,
then redirects to the SSO login endpoint.
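
For the HTTP-Redirect binding, the serialized <saml2:AuthnRequest> is deflated (raw DEFLATE, no zlib header) and Base64-encoded before it is placed in the SAMLRequest query parameter. A minimal sketch of that encoding, using only standard JDK classes (the class and method names are illustrative, not MicroStrategy APIs):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterOutputStream;

// Illustrative helper: how a SAMLRequest query parameter value is built for
// the HTTP-Redirect binding (raw DEFLATE, then Base64). Class name is
// hypothetical; Spring's Saml2WebSsoAuthenticationRequestFilter does the
// equivalent internally.
public class Saml2RedirectEncoder {

    public static String deflateAndEncode(String samlXml) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        // nowrap = true produces raw DEFLATE without zlib headers,
        // as the SAML HTTP-Redirect binding requires
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        try (DeflaterOutputStream out = new DeflaterOutputStream(bytes, deflater)) {
            out.write(samlXml.getBytes(StandardCharsets.UTF_8));
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    public static String decodeAndInflate(String encoded) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (InflaterOutputStream out = new InflaterOutputStream(bytes, new Inflater(true))) {
            out.write(Base64.getDecoder().decode(encoded));
        }
        return new String(bytes.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

The encoded value is then URL-encoded into the redirect query string alongside the signature parameters.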

Bean Description

Bean ID: mstrSamlEntryPoint
Java Class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a
redirect to the URL set in the constructor by the String redirectFilterUrl
parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java Class: org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/**
endpoint. The result is a redirect that includes a SAMLRequest parameter
containing the signed, deflated, and encoded <saml2:AuthnRequest>.

Customization

Before AuthnRequest is sent, you can leverage either the
mstrSamlEntryPoint or the mstrSamlAuthnRequestFilter bean,
according to the time you want your code to be executed: create a subclass
and override the corresponding method with your own logic.

Prior to redirection to /saml/authenticate


If you want to customize before the redirection to /saml/authenticate:

1. Create a MySAMLEntryPoint class that extends
com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
and overrides the commence method.

2. Execute your code before calling super.commence.

public class MySAMLEntryPoint extends SAMLEntryPointWrapper {

    public MySAMLEntryPoint(String redirectFilterUrl) {
        super(redirectFilterUrl);
    }

    @Override
    public void commence(HttpServletRequest request,
            HttpServletResponse response, AuthenticationException e)
            throws IOException, ServletException {
        //>>> Your logic here
        super.commence(request, response, e);
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "mstrSamlEntryPoint" to replace the original one, as shown
below.

The constructor argument must be exactly the same as the original if you
don't do any customizations to it.

<!-- Entry point for SAML authentication mode -->
<bean id="mstrSamlEntryPoint"
      class="com.microstrategy.custom.MySAMLEntryPoint">
    <constructor-arg value="/saml/authenticate"/>
</bean>

Prior to SSO IDP Redirection


If you want to customize before the redirection to SSO IDP:

1. Create a MySAMLAuthenticationRequestFilter class that extends
org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter.

2. Override the doFilterInternal method.

3. Execute your code before calling super.doFilterInternal.

public class MySAMLAuthenticationRequestFilter extends
        Saml2WebSsoAuthenticationRequestFilter {

    public MySAMLAuthenticationRequestFilter(
            Saml2AuthenticationRequestContextResolver authenticationRequestContextResolver,
            Saml2AuthenticationRequestFactory authenticationRequestFactory) {
        super(authenticationRequestContextResolver, authenticationRequestFactory);
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
            HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        //>>> Your logic here
        super.doFilterInternal(request, response, filterChain);
    }
}

4. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "mstrSamlAuthnRequestFilter" to replace the original one,
as shown below.

The two constructor arguments and the property must be exactly the same
as the original if you don't customize them.

<bean id="mstrSamlAuthnRequestFilter"
      class="com.microstrategy.custom.MySAMLAuthenticationRequestFilter">
    <constructor-arg ref="samlAuthenticationRequestContextResolver"/>
    <constructor-arg ref="samlAuthenticationRequestFactory"/>
    <property name="redirectMatcher" ref="samlRedirectMatcher"/>
</bean>

Process to Generate <saml2:Response>

1. SSO redirects the user to the MicroStrategy Library application. The
redirected request contains a SAML assertion describing the
authenticated user.

2. The mstrSamlProcessingFilter SAML processing filter bean
extracts the SAML assertion from the request and passes it to the
samlAuthenticationProvider authentication provider bean.

3. The samlAuthenticationProvider bean verifies the assertion and
then calls the Intelligence server credentials provider to build an
Intelligence server credentials object from the SAML assertion
information.

4. The samlAuthenticationProvider bean passes the Intelligence
server credentials to the Session Manager to create an Intelligence
server session.

5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.
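
For reference, a minimal, abridged <saml2:Response> of the kind the processing filter receives might look like the following. This fragment is illustrative only; real responses are signed and carry Conditions, SubjectConfirmation, and IDP-specific attributes:

```xml
<saml2p:Response xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
                 xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml2:Assertion>
    <saml2:Subject>
      <saml2:NameID>jsmith@example.com</saml2:NameID>
    </saml2:Subject>
    <saml2:AttributeStatement>
      <saml2:Attribute Name="DisplayName">
        <saml2:AttributeValue>John Smith</saml2:AttributeValue>
      </saml2:Attribute>
    </saml2:AttributeStatement>
  </saml2:Assertion>
</saml2p:Response>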

Bean Description

Bean ID: mstrSamlProcessingFilter
Java Class: com.microstrategy.auth.saml.response.SAMLProcessingFilter
Description: This is the core filter responsible for handling the SAML login
response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java Class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on
information extracted from the SAML assertion.

Bean ID: samlIserverCredentialProvider
Java Class: com.microstrategy.auth.saml.response.SAMLIServerCredentialsProvider
Description: This bean is responsible for creating and populating an
IServerCredentials instance, defining the credentials for creating
Intelligence server sessions. The IServerCredentials object is passed to the
Session Manager's login method, which creates the Intelligence server
session.

Customization

The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.

You can do the following customizations:


l Retrieve more information from SAMLResponse

l Customize the login process

l Customize SAMLAssertion validation

Retrieve more information from SAMLResponse


The mstrSamlProcessingFilter bean, also the
com.microstrategy.auth.saml.response.SAMLProcessingFilter
Java class, is the first layer that directly accesses the SAML response. It
accepts the raw HttpServletRequest, which contains the
samlResponse, and produces a SAMLAuthenticationToken. This is then
passed to SAMLAuthenticationProviderWrapper to perform
authentication validation in further steps.

If you need to extract more information from HttpServletRequest,
perform the following steps:

1. It is highly recommended that you create a MySAMLConverter class
that extends the SAMLAuthenticationTokenConverter class.

2. Override the convert method and call super.convert, which returns
a Saml2AuthenticationToken, a subclass of
SAMLAuthenticationToken.

3. Extract the information from the raw request and return an
instance that is a subclass of Saml2AuthenticationToken.

The following classes are under the
com.microstrategy.auth.saml.response package:

SAMLAuthenticationTokenConverter

Saml2AuthenticationToken

public class MySAMLConverter extends SAMLAuthenticationTokenConverter {

    public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
        super(delegate);
    }

    @Override
    public Saml2AuthenticationToken convert(HttpServletRequest request) {
        Saml2AuthenticationToken samlAuthenticationToken = super.convert(request);
        // >>> Extract info from request that you are interested in
        return samlAuthenticationToken;
    }
}

4. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlAuthenticationConverter" and keep the original
constructor argument if you don't perform additional customizations.

<bean id="samlAuthenticationConverter"
      class="com.microstrategy.custom.MySAMLConverter">
    <constructor-arg ref="saml2AuthenticationConverter"/>
</bean>

Customize the login process


To verify SAML 2.0 responses, SAMLProcessingFilterWrapper
delegates authenticate work to SAMLAuthenticationProvider. It
authenticates a user based on information extracted from SAML assertion
and logs into the Intelligence server by calling the internal login method.

You can customize this login process at the following three specific time
points, as illustrated in the diagram above:

Point ①: When pre-processing the assertion before validating the SAML
response

1. Create a MySAMLAuthenticationProviderWrapper class that extends
com.microstrategy.auth.saml.response.SAMLAuthenticationProvider
and overrides the authenticate method.

public class MySAMLAuthenticationProviderWrapper extends
        SAMLAuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication authentication)
            throws AuthenticationException {
        // >>>> Do your own work before saml assertion validation ---> Point ① in the above diagram
        Authentication auth = super.authenticate(authentication);
        return auth;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlAuthenticationProvider" to replace the original one,
as shown below.

The two properties must be exactly the same as the original if you don't
customize them.

<bean id="samlAuthenticationProvider"
      class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
    <property name="assertionValidator" ref="samlAssertionValidator"/>
    <property name="responseAuthenticationConverter"
              ref="samlResponseAuthenticationConverter"/>
</bean>

Point ②: When filtering security roles after the Intelligence server login

1. Create a MySAMLAuthenticationProviderWrapper class that extends
com.microstrategy.auth.saml.response.SAMLAuthenticationProvider
and overrides the authenticate method.

public class MySAMLAuthenticationProviderWrapper extends
        SAMLAuthenticationProvider {

    private @Autowired SessionManagerLocator sessionManagerLocator;
    private @Autowired HttpServletRequest request;
    private @Autowired(required = false) OAuthTokenProvider oAuthTokenProvider;

    @Override
    public Authentication authenticate(Authentication authentication)
            throws AuthenticationException {
        Authentication authResult = super.authenticate(authentication);

        // >>>> Do something after assertion validation but before
        // iserver login ---> Point ② in the above diagram
        IServerCredentials credentials = (IServerCredentials) authResult.getDetails();
        if (!Util.wasAdminRequest(request)) {
            // No implicit OAuth after SAML login
            if (oAuthTokenProvider == null) {
                SessionManager sessionManager = sessionManagerLocator.getSessionManager();
                try {
                    sessionManager.login(credentials);
                } catch (Exception ex) {
                    throw new AuthenticationServiceException("IServer authentication failed", ex);
                }
            }
        }
        return new AuthenticationWithIServerCredentials(authResult, credentials);
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlAuthenticationProvider" to replace the original one,
as shown below.

The two properties must be exactly the same as the original if you don't
customize them.

<bean id="samlAuthenticationProvider"
      class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
    <property name="assertionValidator" ref="samlAssertionValidator"/>
    <property name="responseAuthenticationConverter"
              ref="samlResponseAuthenticationConverter"/>
</bean>

Point ③: After the Intelligence server login

1. Create a MySAMLAuthenticationProviderWrapper class that extends
com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
and overrides the authenticate method.

public class MySAMLAuthenticationProviderWrapper extends
        SAMLAuthenticationProviderWrapper {

    @Override
    public Authentication authenticate(Authentication authentication)
            throws AuthenticationException {
        Authentication auth = super.authenticate(authentication);
        // >>>> Do something after iserver login ---> Point ③ in the above diagram
        return auth;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlAuthenticationProvider" to replace the original one,
as shown below.

The two properties must be exactly the same as the original if you don't
customize them.

<bean id="samlAuthenticationProvider"
      class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
    <property name="assertionValidator" ref="samlAssertionValidator"/>
    <property name="responseAuthenticationConverter"
              ref="samlResponseAuthenticationConverter"/>
</bean>

Customize SAMLAssertion validation


To verify SAML 2.0 responses, SAMLProcessingFilter delegates
authentication work to the samlAuthenticationProvider bean, which is
com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper.

You can configure this in the following ways:

l Set a clock skew or authentication age for timestamp validation

l Perform additional validation

l Coordinate with UserDetailsService

Set a clock skew for timestamp validation

It is not uncommon for your web and IDP servers to have system clocks that
are not perfectly synchronized. For that reason, you can configure the
default SAMLAssertionValidator assertion validator with some tolerance.

1. Open the SAMLConfig.xml file under the classes/auth/custom
folder.

2. Set the responseSkew property to your customized value. By default,
it is 300 seconds.

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="responseSkew" value="300"/>
</bean>

Set an authentication age for timestamp validation

By default, the system allows users to single sign on for up to 2,592,000
seconds (30 days) since their initial authentication with the IDP (based on
the AuthnInstant value of the authentication statement). Some IDPs allow
users to stay authenticated for longer periods than this and you may need
to change the default value.

1. Open the SAMLConfig.xml file under the classes/auth/custom
folder.

2. Set the maxAuthenticationAge property in the default
SAMLAssertionValidator assertion validator to your customized
value.

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
</bean>

Perform additional validation

The new Spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, it:

l Validates the <AudienceRestriction> and
<DelegationRestriction> conditions

l Validates <SubjectConfirmation>s, except for any IP address
information

While recommended, it is not necessary to call super.convert(). You
may skip it if you do not need to check the <AudienceRestriction> or
<SubjectConfirmation> conditions because you are validating those
yourself.

1. Configure your own assertion validator that extends
com.microstrategy.auth.saml.response.SAMLAssertionValidator.

2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition.

public class MySAMLAssertionValidator extends SAMLAssertionValidator {

    @Override
    public Saml2ResponseValidatorResult convert(
            OpenSaml4AuthenticationProvider.AssertionToken token) {
        Saml2ResponseValidatorResult result = super.convert(token);
        OneTimeUseConditionValidator validator = ...;
        Assertion assertion = token.getAssertion();
        OneTimeUse oneTimeUse = assertion.getConditions().getOneTimeUse();
        ValidationContext context = new ValidationContext();
        try {
            if (validator.validate(oneTimeUse, assertion, context) ==
                    ValidationResult.VALID) {
                return result;
            }
        } catch (Exception e) {
            return result.concat(new Saml2Error(INVALID_ASSERTION, e.getMessage()));
        }
        return result.concat(new Saml2Error(INVALID_ASSERTION,
                context.getValidationFailureMessage()));
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlAssertionValidator" to replace the original one, as
shown below.

<bean id="samlAssertionValidator"
      class="com.microstrategy.custom.MySAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>

To set properties, see how to set a clock skew or authentication age for
timestamp validation.

Coordinate with UserDetailsService

If you would like to include user details from a legacy
UserDetailsService, use the response authentication converter.

You are not required to call super.convert. It returns a
SAMLAuthentication containing the attributes extracted from the
AttributeStatement, as well as the single ROLE_USER authority.

1. Create a class that extends
com.microstrategy.auth.saml.response.SAMLResponseAuthenticationConverter.

2. Inject your legacy UserDetailsService.

public class MyResponseAuthenticationConverter extends
        SAMLResponseAuthenticationConverter {

    @Autowired
    UserDetailsService userDetailsService;

    @Override
    public AbstractAuthenticationToken convert(
            OpenSaml4AuthenticationProvider.ResponseToken responseToken) {
        SAMLAuthentication authentication =
                (SAMLAuthentication) super.convert(responseToken); // >>> ①
        Assertion assertion = responseToken.getResponse().getAssertions().get(0);
        String username = assertion.getSubject().getNameID().getValue();
        UserDetails userDetails =
                this.userDetailsService.loadUserByUsername(username); // >>> ②
        return new MySaml2Authentication(userDetails, authentication); // >>> ③
    }
}

3. Call super.convert, which extracts attributes and authorities from
the response.

4. Call UserDetailService using the relevant information.

5. Return a custom authentication that includes the user details.

6. Configure your customized bean (Fully Qualified Class Name) in
SAMLConfig.xml under the classes/auth/custom folder with the
bean ID "samlResponseAuthenticationConverter" to replace the
original one, as shown below.

<bean id="samlResponseAuthenticationConverter"
      class="com.microstrategy.custom.MyResponseAuthenticationConverter"/>

This page applies to MicroStrategy 2021 Update 4 and later versions.

SAML Customization for MicroStrategy Web and Mobile


Starting in MicroStrategy 2021 Update 4, the SAML framework
spring-security-saml2-service-provider v5.5.3 and OpenSAML v4.1.0 are
used. This page illustrates the SAML workflow and the beans you may
leverage for customization.

SAML Login Workflow

The diagrams and workflows below illustrate how authentication-related
requests are handled with different authentication configurations. The
following are true for these workflow diagrams:

l Double-line arrows represent HTTP requests and responses. Single-line
arrows represent Java calls.

l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.

l Only beans involved in request authentication are included. Filters that
simply pass the request along the filter chain or perform actions not
directly involved in request authentication are not included. As described
in the Spring Security architecture, each request passes through multiple
Spring Security filters.

Process to Generate <saml2:AuthnRequest>

1. An unauthenticated user accesses a protected endpoint, such as
/servlet/mstrWeb, and is intercepted by the
springSecurityFilterChain bean.

2. The springSecurityFilterChain bean delegates to the mstrSamlEntryPoint bean, which performs a redirection to /saml/authentication by default.

This redirection is designed to support multi-tenant scenarios. If you've configured more than one asserting party, you can first redirect the user to a picker; in most cases, leave it as is.

3. The browser is redirected and sends a GET: {BasePath}/saml/authenticate request, which is intercepted by the mstrSamlAuthnRequestFilter bean.

4. The mstrSamlAuthnRequestFilter bean is the <saml2:AuthnRequest> generation endpoint, which creates, signs, serializes, and encodes a <saml2:AuthnRequest> and redirects to the SSO login endpoint.
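The "signed, deflated, and encoded" wording in step 4 refers to the SAML HTTP-Redirect binding. As an illustration only (this is not MicroStrategy code, and signing is omitted), the deflate-and-Base64 step can be sketched with the JDK alone:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class RedirectBindingSketch {
    // Raw-DEFLATE (no zlib header) and Base64-encode an AuthnRequest XML string,
    // as the HTTP-Redirect binding requires before the value is URL-encoded into
    // the SAMLRequest query parameter.
    static String deflateAndEncode(String authnRequestXml) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // true = raw DEFLATE
        try (DeflaterOutputStream out = new DeflaterOutputStream(bytes, deflater)) {
            out.write(authnRequestXml.getBytes(StandardCharsets.UTF_8));
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    public static void main(String[] args) throws IOException {
        System.out.println(deflateAndEncode("<saml2p:AuthnRequest ID=\"_example\"/>"));
    }
}
```

The filter performs the equivalent transformation (plus signing) before issuing the redirect to the IdP.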


Bean Description

Bean ID: mstrSamlEntryPoint
Java Class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the location set in the constructor by the String redirectFilterUrl parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java Class: org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint and the result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.

Customization

Before the AuthnRequest is sent, you can leverage the mstrSamlEntryPoint bean. According to the time you want your code to be executed, create a subclass and override the corresponding method with your own logic.

Prior to redirection to /saml/authenticate


If you want to customize before the redirection to /saml/authenticate:

1. Create a MySAMLEntryPoint class that extends com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper and overrides the commence method.

2. Execute your code before calling super.commence:

public class MySAMLEntryPoint extends SAMLEntryPointWrapper {
    public MySAMLEntryPoint(String redirectFilterUrl) {
        super(redirectFilterUrl);
    }

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
            AuthenticationException e) throws IOException, ServletException {
        //>>> Your logic here
        super.commence(request, response, e);
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "mstrSamlEntryPoint" to replace the original one, as shown below.

The constructor argument must be exactly the same as the original if it is not customized.

<!-- Entry point for SAML authentication mode -->
<bean id="mstrSamlEntryPoint"
      class="com.microstrategy.custom.MySAMLEntryPoint">
    <constructor-arg value="/saml/authenticate"/>
</bean>


Prior to SSO IDP Redirection


If you want to customize before the redirection to SSO IDP:

1. Create a MySAMLAuthenticationRequestFilter class that extends org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter and overrides the doFilterInternal method.

2. Execute your code before calling super.doFilterInternal.

public class MySAMLAuthenticationRequestFilter extends Saml2WebSsoAuthenticationRequestFilter {

    public MySAMLAuthenticationRequestFilter(
            Saml2AuthenticationRequestContextResolver authenticationRequestContextResolver,
            Saml2AuthenticationRequestFactory authenticationRequestFactory) {
        super(authenticationRequestContextResolver, authenticationRequestFactory);
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        //>>> Your logic here
        super.doFilterInternal(request, response, filterChain);
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "mstrSamlAuthnRequestFilter" to replace the original one, as shown below.

The two constructor arguments and property must be exactly the same as
the original if you don't do any customizations to them.

<bean id="mstrSamlAuthnRequestFilter"
      class="com.microstrategy.custom.MySAMLAuthenticationRequestFilter">
    <constructor-arg ref="samlAuthenticationRequestContextResolver"/>
    <constructor-arg ref="samlAuthenticationRequestFactory"/>
    <property name="redirectMatcher" ref="samlRedirectMatcher"/>
</bean>

Process to Generate <saml2:Response>

1. SSO redirects the user to the MicroStrategy Library application. The redirected request contains a SAML assertion describing the authenticated user.

2. The mstrSamlProcessingFilter SAML processing filter bean extracts the SAML assertion from the request and passes it to the samlAuthenticationProvider authentication provider bean.

3. The samlAuthenticationProvider bean verifies the assertion and then calls the Intelligence server credentials provider to build an Intelligence server credentials object from the SAML assertion information.

4. The samlAuthenticationFilter bean saves the authentication object into the HTTP session.

5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.


Bean Description

Bean ID: mstrSamlProcessingFilter
Java Class: com.microstrategy.auth.saml.response.SAMLProcessingFilter
Description: This is the core filter that is responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java Class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

Bean ID: userDetails
Java Class: com.microstrategy.auth.saml.SAMLUserDetailsServiceImpl
Description: This bean is responsible for creating and populating an IServerCredentials instance, defining the credentials for creating Intelligence server sessions. The IServerCredentials object is saved to the HTTP session, which is used to create the Intelligence server session for future requests.

Customization

For clarity, the following content uses the real class names instead of the bean names. You can find the bean names in SAMLConfig.xml.

You can do the following customizations:


l Retrieve more information from SAMLResponse

l Customize the login process

l Customize SAMLAssertion validation


Retrieve more information from SAMLResponse


The mstrSamlProcessingFilter bean, backed by the com.microstrategy.auth.saml.response.SAMLProcessingFilter Java class, is the first layer that directly accesses the SAML response. It accepts the raw HttpServletRequest, which contains the samlResponse, and produces a SAMLAuthenticationToken. This token is then passed to SAMLAuthenticationProviderWrapper to perform authentication validation in further steps.

If you need to extract more information from HttpServletRequest, perform the following steps:

1. It is highly recommended that you create a MySAMLConverter class that extends the SAMLAuthenticationTokenConverter class.

2. Override the convert method and call super.convert, which returns a Saml2AuthenticationToken, a subclass of SAMLAuthenticationToken.

3. Extract the information from the raw request at the marked comment in the example below and return an instance that is a subclass of Saml2AuthenticationToken:

The following classes are under the com.microstrategy.auth.saml.response package:

SAMLAuthenticationTokenConverter

Saml2AuthenticationToken

public class MySAMLConverter extends SAMLAuthenticationTokenConverter {

    public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
        super(delegate);
    }

    @Override
    public Saml2AuthenticationToken convert(HttpServletRequest request) {
        Saml2AuthenticationToken samlAuthenticationToken = super.convert(request);
        // >>> Extract info from request that you are interested in
        return samlAuthenticationToken;
    }
}

4. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "samlAuthenticationConverter" and keep the original constructor argument if you don't perform additional customizations:

<bean id="samlAuthenticationConverter"
      class="com.microstrategy.custom.MySAMLConverter">
    <constructor-arg ref="saml2AuthenticationConverter"/>
</bean>

Customize the login process


To verify SAML 2.0 responses, SAMLProcessingFilterWrapper delegates authentication work to SAMLAuthenticationProvider. It authenticates a user based on information extracted from the SAML assertion and, if successful, returns a fully populated com.microstrategy.auth.saml.response.SAMLAuthentication object, including granted authorities. Then, SAMLProcessingFilterWrapper saves the authentication result into the HTTP session.


You can customize this login process at the following three specific time
points, as illustrated in the diagram above:

Point ①: When pre-processing the assertion before validating the SAML response

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and overrides the authenticate method.

public class MySAMLAuthenticationProviderWrapper extends SAMLAuthenticationProvider {
    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        // >>>> Do your own work before saml assertion validation ---> Point ① in the above diagram
        Authentication auth = super.authenticate(authentication);
        return auth;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "samlAuthenticationProvider" to replace the original one, as shown below.

The two properties must be exactly the same as the original if you don't customize them.


<bean id="samlAuthenticationProvider"
      class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
    <property name="assertionValidator" ref="samlAssertionValidator"/>
    <property name="responseAuthenticationConverter" ref="samlResponseAuthenticationConverter"/>
</bean>

Point ②: When customizing the logic of authenticating the user

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and overrides the authenticate method.

public class MySAMLAuthenticationProviderWrapper extends SAMLAuthenticationProvider {
    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        Authentication authResult = super.authenticate(authentication);
        // >>>> Do something after assertion validation while before iserver login ---> Point ② in the above diagram
        return new CustomAuthentication(authResult);
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "samlAuthenticationProvider" to replace the original one, as shown below.

The two properties must be exactly the same as the original if you don't customize them.

<bean id="samlAuthenticationProvider"
      class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
    <property name="assertionValidator" ref="samlAssertionValidator"/>
    <property name="responseAuthenticationConverter" ref="samlResponseAuthenticationConverter"/>
</bean>

Point ③: Doing some work before or after saving the authentication result into the HTTP session

1. Create a MySAMLProcessingFilterWrapper class that extends com.microstrategy.auth.saml.response.SAMLProcessingFilterWrapper and overrides the attemptAuthentication method.

public class MySAMLProcessingFilterWrapper extends SAMLProcessingFilterWrapper {
    @Override
    public Authentication attemptAuthentication(HttpServletRequest request,
            HttpServletResponse response) throws AuthenticationException {
        Authentication authResult = super.attemptAuthentication(request, response);
        // >>>> Do something after the user login ---> Point ③ in the above diagram
        return authResult;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "mstrSamlProcessingFilter" to replace the original one, as shown below.

The constructor argument and properties must be exactly the same as the original if you don't customize them.

<bean id="mstrSamlProcessingFilter"
      class="com.microstrategy.custom.MySAMLProcessingFilterWrapper">
    <constructor-arg ref="samlAuthenticationConverter"/>
    <property name="authenticationManager" ref="authenticationManager"/>
    <property name="authenticationSuccessHandler" ref="successRedirectHandler"/>
    <property name="authenticationFailureHandler" ref="failureRedirectHandler"/>
</bean>


Customize SAMLAssertion validation


To verify SAML 2.0 responses, SAMLProcessingFilter delegates authentication work to the samlAuthenticationProvider bean, which is com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper.

You can configure this in the following ways:

l Set a clock skew or authentication age for timestamp validation

l Perform additional validation

l Coordinate with UserDetailsService

Set a clock skew for timestamp validation

It is not uncommon for your web and IDP servers to have system clocks that
are not perfectly synchronized. For that reason, you can configure the
default SAMLAssertionValidator assertion validator with some
tolerance.

1. Open the SAMLConfig.xml file under the classes/resources/SAML/custom folder.

2. Set the responseSkew property to your customized value. By default, it is 300 seconds.

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="responseSkew" value="300"/>
</bean>
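Conceptually, the skew widens the acceptance window for the assertion's timestamps. The check below is an illustration only (not the MicroStrategy implementation), showing how a 300-second skew tolerates a NotBefore instant slightly in the future:

```java
import java.time.Instant;

public class SkewCheckSketch {
    // Illustrative only: with responseSkew seconds of tolerance, a NotBefore
    // timestamp up to that far in the future is still accepted.
    static boolean notBeforeSatisfied(Instant notBefore, Instant now, long skewSeconds) {
        return !now.plusSeconds(skewSeconds).isBefore(notBefore);
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T00:00:00Z");
        // IDP clock 2 minutes ahead of the web server: accepted with a 300s skew
        System.out.println(notBeforeSatisfied(now.plusSeconds(120), now, 300)); // true
        // IDP clock 10 minutes ahead: rejected
        System.out.println(notBeforeSatisfied(now.plusSeconds(600), now, 300)); // false
    }
}
```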

Set an authentication age for timestamp validation

By default, the system allows users to single sign-on for up to 2,592,000 seconds since their initial authentication with the IDP (based on the AuthnInstant value of the authentication statement). Some IDPs allow users to stay authenticated for longer periods than this, so you may need to change the default value.

1. Open the SAMLConfig.xml file under the classes/resources/SAML/custom folder.

2. Set the maxAuthenticationAge property in the default SAMLAssertionValidator assertion validator to your customized value.

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
</bean>
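The effect of maxAuthenticationAge can be illustrated in isolation (again, this is not the MicroStrategy implementation): an authentication statement whose AuthnInstant is older than the configured age is rejected.

```java
import java.time.Instant;

public class AuthAgeSketch {
    // Illustrative only: a single sign-on is still valid while the IDP's
    // AuthnInstant is no more than maxAgeSeconds in the past.
    static boolean authenticationStillValid(Instant authnInstant, Instant now, long maxAgeSeconds) {
        return !authnInstant.plusSeconds(maxAgeSeconds).isBefore(now);
    }

    public static void main(String[] args) {
        Instant authn = Instant.parse("2024-01-01T00:00:00Z");
        // ~23 days after the initial IDP authentication: still accepted
        System.out.println(authenticationStillValid(authn, authn.plusSeconds(2_000_000), 2_592_000)); // true
        // ~35 days after: rejected, so the user must re-authenticate with the IDP
        System.out.println(authenticationStillValid(authn, authn.plusSeconds(3_000_000), 2_592_000)); // false
    }
}
```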

Perform additional validation

The new Spring SAML framework performs minimal validation on SAML 2.0 assertions. After verifying the signature, it:

l Validates the <AudienceRestriction> and <DelegationRestriction> conditions

l Validates <SubjectConfirmation>s, except for any IP address information

While recommended, it's not necessary to call super.convert(). You may skip this if you don't need it to check the <AudienceRestriction> or <SubjectConfirmation>, since you are doing those yourself.

1. Configure your own assertion validator that extends com.microstrategy.auth.saml.response.SAMLAssertionValidator.

2. Perform your own validation. For example, you can use OpenSAML's OneTimeUseConditionValidator to also validate a <OneTimeUse> condition.

public class MySAMLAssertionValidator extends SAMLAssertionValidator {

    @Override
    public Saml2ResponseValidatorResult convert(OpenSaml4AuthenticationProvider.AssertionToken token) {
        Saml2ResponseValidatorResult result = super.convert(token);
        OneTimeUseConditionValidator validator = ...;
        Assertion assertion = token.getAssertion();
        OneTimeUse oneTimeUse = assertion.getConditions().getOneTimeUse();
        ValidationContext context = new ValidationContext();
        try {
            if (validator.validate(oneTimeUse, assertion, context) == ValidationResult.VALID) {
                return result;
            }
        } catch (Exception e) {
            return result.concat(new Saml2Error(INVALID_ASSERTION, e.getMessage()));
        }
        return result.concat(new Saml2Error(INVALID_ASSERTION, context.getValidationFailureMessage()));
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "samlAssertionValidator" to replace the original one, as shown below.

<bean id="samlAssertionValidator"
      class="com.microstrategy.custom.MySAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>

To set properties, see how to set a clock skew or authentication age for
timestamp validation.

Coordinate with UserDetailsService

If you would like to include user details from a legacy UserDetailsService, use the response authentication converter.


You are not required to call super.convert. It returns a SAMLAuthentication containing the extracted attributes from the AttributeStatement, as well as the single ROLE_USER authority.

1. Create a class that extends com.microstrategy.auth.saml.response.SAMLResponseAuthenticationConverter.

2. Inject your legacy UserDetailsService.

public class MyResponseAuthenticationConverter extends SAMLResponseAuthenticationConverter {
    @Autowired
    UserDetailsService userDetailsService;

    @Override
    public AbstractAuthenticationToken convert(OpenSaml4AuthenticationProvider.ResponseToken responseToken) {
        SAMLAuthentication authentication = (SAMLAuthentication) super.convert(responseToken); // >>> ①
        Assertion assertion = responseToken.getResponse().getAssertions().get(0);
        String username = assertion.getSubject().getNameID().getValue();
        UserDetails userDetails = this.userDetailsService.loadUserByUsername(username); // >>> ②
        return new MySaml2Authentication(userDetails, authentication); // >>> ③
    }
}

3. Call super.convert, which extracts attributes and authorities from the response.

4. Call UserDetailService using the relevant information.

5. Return a custom authentication that includes the user details.

6. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the bean ID "samlResponseAuthenticationConverter" to replace the original one, as shown below.

<bean id="samlResponseAuthenticationConverter"
      class="com.microstrategy.custom.MyResponseAuthenticationConverter"/>

This page applies to MicroStrategy 2021 Update 4 and newer versions.

SAML Upgrade Guidance for MicroStrategy Library


Use the procedures below to upgrade your non-customized or customized
SAML infrastructure. Any customizations you have made to your SAML
workflows require manual changes to the SAML configuration file located at:
/<TOMCAT_HOME>/webapps/MicroStrategyLibrary/WEB-
INF/classes/auth/custom/SAMLConfig.xml.

Upgrade a Non-Customized SAML System

1. Back up the following files in <TOMCAT_HOME>\webapps\MicroStrategyLibrary\WEB-INF\classes\auth\SAML\:

l IDPMetadata.xml

l SPMetadata.xml

l SamlKeystore.jks

l MstrSamlConfig.xml

2. Restore the files listed above to the same location after upgrading.

3. Change or add the following values in <TOMCAT_HOME>\webapps\MicroStrategyLibrary\WEB-INF\classes\config\configOverride.properties:

auth.modes.available=1048576
auth.modes.default=1048576
auth.admin.authMethod=2


Upgrade a Customized SAML System

The following is a list of common SAML customization cases for upgrade guidance. If your customization is not in the following list, see SAML Customization for MicroStrategy Library for more information.

1. Remove the spring-security-saml2-core framework.

If you leverage classes in this framework for customizations, you must replace them with the provided parity classes or the ones in the new framework. The following table contains some useful parity classes for your upgrade. If you are using them, directly change their class name to the new one.

Parity Class Transfers

Old: org.springframework.security.saml.SAMLCredential
New: com.microstrategy.auth.saml.response.SAMLCredential
Description: This class is exactly the same as the previous one.

Old: org.springframework.security.saml.userdetails.SAMLUserDetailsService
New: com.microstrategy.auth.saml.SAMLUserDetailsService
Description: An extra loadSAMLProperties method is added. This method is called in SAMLRelyingPartyRegistration's constructor when the app is launched. Subclasses should take advantage of the SAMLConfig instance and set internal properties.

Old: org.springframework.security.providers.ExpiringUsernameAuthenticationToken
New: com.microstrategy.auth.saml.response.SAMLAuthentication
Description: This class is a replacement of the previous authentication token and has the same properties as the old one.

2. Upgrade the org.opensaml framework to v4.1.0.

If you are using utility classes in v2.6.7, you must transfer them to
parities in v4.1.0.

3. If your web server is behind a proxy, remove all previous proxy-related customizations.

In the SAML configuration generation page, located at {ContextPath}/saml/config/open, select Yes from the Behind the proxy drop-down. No additional customization is necessary.


Starting in MicroStrategy 2021 Update 4, older customized proxies must be removed. Otherwise, the app cannot start.

4. If you have customized a SAML response handling process, such as SAMLProcessingFilterWrapper, or leveraged classes in the old framework, such as SAMLProcessingFilter, see SAML Customization for MicroStrategy Library to learn how to achieve the same behavior in the new version.

5. If you have customized the maxAuthenticationAge and responseSkew properties, they are relocated to com.microstrategy.auth.saml.response.SAMLAssertionValidator.

Add the following code to the new version:

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>


See SAML Customization for MicroStrategy Library for details.

6. The new framework performs minimal validation on SAML 2.0 assertions. After verifying the signature, it:

l Validates the <AudienceRestriction> and <DelegationRestriction> conditions

l Validates <SubjectConfirmation>s, except for any IP address information

To perform additional validation, configure your own assertion validator. See SAML Customization for MicroStrategy Library for details.

7. Customizations performed on the logout process must be removed, since the single logout process is not supported in the new framework. This can be added back later.

This page applies to MicroStrategy 2021 Update 4 and newer versions.

SAML Upgrade Guidance for MicroStrategy Web and Mobile


Use the procedures below to upgrade your non-customized (out-of-the-box) or customized SAML infrastructure. You can determine whether your environment is non-customized or customized by looking for manual changes in the SpringSAMLConfig.xml file. You can find this configuration file in the following locations:

l MicroStrategy Web: /<TOMCAT_HOME>/webapps/MicroStrategy/WEB-INF/classes/resources/SAML/SpringSAMLConfig.xml

l MicroStrategy Mobile: /<TOMCAT_HOME>/webapps/MicroStrategyMobile/WEB-INF/classes/resources/SAML/SpringSAMLConfig.xml


Upgrade a Non-customized SAML System

1. Back up the following files in <TOMCAT_HOME>\webapps\MicroStrategy\WEB-INF\classes\resources\SAML\:

Do not back up SpringSAMLConfig.xml.

l IDPMetadata.xml

l SPMetadata.xml

l SamlKeystore.jks

l MstrSamlConfig.xml

2. Restore the files listed above to the same location after upgrading.

3. Change or add the following values in <TOMCAT_HOME>\webapps\MicroStrategy\WEB-INF\xml\sys_defaults.properties:

defaultloginmode=1048576
enableloginmode=1048576
springAdminAuthMethod=2

Upgrade a Customized SAML System

The following is a list of common SAML customization cases for upgrade guidance. If your customization is not in the following list, see SAML Customization for MicroStrategy Web and Mobile for more information.

1. Remove the spring-security-saml2-core framework.

If you leverage classes in this framework for customizations, you must replace them with the provided parity classes or the ones in the new framework. The following table contains some useful parity classes for your upgrade. If you are using them, directly change their class name to the new one.

Parity Class Transfers

Old: org.springframework.security.saml.SAMLCredential
New: com.microstrategy.auth.saml.response.SAMLCredential
Description: This class is exactly the same as the previous one.

Old: org.springframework.security.saml.userdetails.SAMLUserDetailsService
New: com.microstrategy.auth.saml.SAMLUserDetailsService
Description: An extra loadSAMLProperties method is added. This method is called in SAMLRelyingPartyRegistration's constructor when the app is launched. Subclasses should take advantage of the SAMLConfig instance and set internal properties.

Old: org.springframework.security.providers.ExpiringUsernameAuthenticationToken
New: com.microstrategy.auth.saml.response.SAMLAuthentication
Description: This class is a replacement of the previous authentication token and has the same properties as the old one.

2. Upgrade the org.opensaml framework to v4.1.0.

If you are using utility classes in v2.6.7, you must transfer them to
parities in v4.1.0.

3. If your web server is behind a proxy, remove all previous proxy-related customizations.

In the SAML configuration generation page, located at {ContextPath}/saml/config/open, select Yes from the Behind the proxy drop-down. No additional customization is necessary.

Starting in MicroStrategy 2021 Update 4, customized proxies cannot be added back; if they are, the app cannot start.


4. If you have customized a SAML response handling process, such as SAMLProcessingFilterWrapper, or leveraged classes in the old framework, such as SAMLProcessingFilter, see SAML Customization for MicroStrategy Web and Mobile to learn how to achieve the same behavior in the new version.

5. If you have customized the maxAuthenticationAge and responseSkew properties, they are relocated to com.microstrategy.auth.saml.response.SAMLAssertionValidator.

Add the following code to the new version:

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>

See SAML Customization for MicroStrategy Web and Mobile for details.


6. The new framework performs minimal validation on SAML 2.0 assertions. After verifying the signature, it:

l Validates the <AudienceRestriction> and <DelegationRestriction> conditions

l Validates <SubjectConfirmation>s, except for any IP address information

To perform additional validation, configure your own assertion validator. See SAML Customization for MicroStrategy Web and Mobile for details.

7. Customizations performed on the logout process must be removed, since the single logout process is not supported in the new framework. This can be added back later.

Starting in MicroStrategy 2021 Update 4, customized global logout cannot be added back; if it is, the app cannot start.

This page applies to MicroStrategy 2021 Update 4 and newer versions.

Use Signed Authn Requests for SAML in MicroStrategy 2021 Update 4

In the MicroStrategy 2021 Update 4 SAML workflow, there's a change to the Spring Security SAML project for the signature. Follow the steps in this topic if your setup meets the following conditions:

l Your SAML works correctly prior to MicroStrategy 2021 Update 4. No change is done on the IdP side.

l After migrating the same configuration files to 2021 Update 4 environments, the SAML redirection is normal.

l After entering the SAML user name and password in the login page, the
login fails.


Troubleshooting

Check the SAML response in a network trace. If you see the following status and no assertion is appended, follow the solution below.

urn:oasis:names:tc:SAML:2.0:status:Responder
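When reading the trace, note that HTTP-POST binding responses are plain Base64 (not deflated). A minimal, JDK-only sketch (not MicroStrategy tooling) for decoding the captured SAMLResponse form parameter so the status code can be inspected as XML:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SamlResponseDecoder {
    // Decode a SAMLResponse form parameter captured from a network trace so the
    // status code and the (possibly missing) assertion can be read as XML.
    static String decode(String samlResponseParam) {
        byte[] xml = Base64.getDecoder().decode(samlResponseParam);
        return new String(xml, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical captured value, shown here as an encoded placeholder
        String captured = Base64.getEncoder().encodeToString(
                "<samlp:Response>...</samlp:Response>".getBytes(StandardCharsets.UTF_8));
        System.out.println(decode(captured));
    }
}
```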

Solution

In IdPMetadata.xml, add WantAuthnRequestsSigned="true".

MicroStrategy has encountered one ADFS case where this parameter is a must-have for SAML to work properly.
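For reference, the WantAuthnRequestsSigned attribute belongs on the IdP's <IDPSSODescriptor> element in IdPMetadata.xml. The fragment below is a sketch only; your file's namespace prefixes and existing attributes may differ:

```xml
<md:IDPSSODescriptor WantAuthnRequestsSigned="true"
    protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- keep the existing KeyDescriptor and SingleSignOnService elements unchanged -->
</md:IDPSSODescriptor>
```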
This page applies to MicroStrategy 2021 Update 6 and newer versions.

Upgrade SAML Framework to v5.6.3


Starting in MicroStrategy 2021 Update 6, the spring-security-saml2-
service-provider framework has been upgraded from v5.5.3 to v5.6.3,
and SAML single logout functionality is now supported!

What this means for you:

l If the MicroStrategy environment is configured to use SAML without any customization, the upgrade is completely seamless in MicroStrategy 2021 Update 6 and no additional steps are required.

l This change does not impact SAML on ASP.

l If the MicroStrategy environment is configured to use SAML and there have been additional customizations added to this configuration, additional steps may need to be followed after the upgrade. The steps are simple and often just need a replacement of classes with newer and more secure classes.

Note the following for this upgrade:

l SAML single logout is now supported on Tomcat and JBoss. See Generate
SAML Configuration Files to enable single logout for your application.
Local logout is used by default.

l In previous releases, you may have performed customizations after a successful login by leveraging the SAMLCredential object. In 2021 Update 6 and newer, you no longer need to use the details property of SAMLAuthentication to locate the necessary information, just SAMLAuthentication itself.

Enable Single Sign-On with SAML Authentication


SAML is a two-way setup between your MicroStrategy application and
Identity Provider (IdP). SAML support allows MicroStrategy to work with a
wide variety of SAML identity providers for authentication.

To configure a MicroStrategy application for SAML authentication, you will
need to create SAML configuration files for your application, register the
application with your IdP, establish trust with the MicroStrategy Intelligence
Server, and link SAML users to MicroStrategy users.

See the appropriate section for your MicroStrategy application.

Enable SAML Authentication for MicroStrategy Library


You can configure MicroStrategy Library to use SAML authentication for
single sign-on. You will need to generate SAML configuration files for your
Library application, establish a trust relationship between the Library server
and MicroStrategy Intelligence server, register the application with your
SAML Identity Provider (IdP), and link SAML users to MicroStrategy users.


Before you begin, you need the following:

l A SAML Identity Provider

l MicroStrategy Library is deployed

l A running MicroStrategy Intelligence server

Additionally, Chrome Web Browser version 80 introduces new changes to
cross-site embedding. For more information, see KB484005: Chrome v80
Cookie Behavior and the Impact on MicroStrategy Deployments.

It is recommended to configure HTTPS for the web application server
running MicroStrategy Library.

Generate SAML Configuration Files

The following steps generate the application metadata (SPMetadata.xml)
and SAML configuration files (MstrSamlConfig.xml) for configuring
SAML.

To access the configuration page, you need admin privileges.

1. Open a browser and access the SAML configuration page with a URL in
this format:

http://<FQDN>:<port>/<MicroStrategyLibrary>/saml/config/open

where <FQDN> is the Fully Qualified Domain Name of the machine
hosting your MicroStrategy Library application and <port> is the
assigned port number.

2. Fill in the following:

l General:

l Entity ID: This is the unique identifier of the application to be
recognized by the IdP.


Some IdPs may require the Entity ID to be the application URL. SAML
standards state it can be any string as long as a unique match can
be found among the IdP's registered entity IDs. Follow the
requirements for your specific IdP.

l Entity base URL: This is the URL where the IdP sends and receives
SAML requests and responses. The field is automatically generated
when you load the configuration page, but it should always be
double-checked. It should be the application URL end users would
use to access the application.

If the application is set up behind a reverse proxy/load balancer,
the auto-populated URL here may not be correct. Ensure you are
using the front-end URL.

l Do not use "localhost" for the Entity base URL.

l Once configured, remember to always use this URL to access
MicroStrategy Library. Visiting with any alternative host name will
cause SAML authentication to fail.

l Behind the proxy: Using a reverse proxy or load balancer can alter
the HTTP headers of the messages sent to the application server.
These HTTP headers are checked against the destination specified
in the SAML response to make sure it is sent to the correct
destination. A mismatch between the two values can cause the
message delivery to fail. To prevent this, select Yes if
MicroStrategy Library runs behind a reverse proxy or load balancer.
The base URL field is set to the front-end URL. Select No if you are
not using a reverse proxy or load balancer.

l Logout mode: Select Local to prevent users from being logged out
from all other applications controlled by SSO. Select Global to log
out users from other applications controlled by SSO. Make sure that
SSO supports global logout before choosing this option.

Single logout is not supported on WebLogic.

l Encryption:

l Signature algorithm: The default is the industry-standard
"SHA256 with RSA" signature algorithm. Set this value in
accordance with the requirements of your specific IdP.

l Generate Encryption Key: Set to No by default. Setting to Yes
generates an encryption key and stores it in the MicroStrategy Library
metadata XML file.

If you set Generate Encryption Key to Yes, SAML authentication
will not work unless you have the proper Java encryption strength
policy and the correct setup on the IdP side.

l Assertion Attribute mapping:

These options control how user attributes received from the SAML
responses are processed. If the SAML attribute names are
configurable on the IdP side, you may leave all options as default. If your
IdP sends SAML attributes with fixed names, the values must be
changed on the application side to match.

You can also change attribute names in MstrSamlConfig.xml even
after the configuration is done.

l Display Name Attribute: User display name attribute.

l Email Attribute: User email address attribute.

l Distinguished Name Attribute: User distinguished name attribute.

l Group Attribute: User group attribute.


l Group format:

l Simple: The default option takes a user's group information as
plain group names. When using this option, make sure the values
sent over by the IdP in the "Groups" attribute are group names and
nothing else.

l DistinguishedName: Values sent over in the "Groups" attribute are
the LDAP DistinguishedNames of the user's groups. Use this option
only when utilizing LDAP integration or when the IdP only sends
group information as DistinguishedNames.

l Admin Groups: Defines groups that can access the Administrator
page.

To define multiple groups, use a comma to separate them. Do not add
a space in front of or behind the comma.

For example, group information is passed in the SAML response as:

<saml2:Attribute Name="Groups"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">IdPGroupA</saml2:AttributeValue>
    <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">IdPGroupB</saml2:AttributeValue>
</saml2:Attribute>

To allow IdPGroupA and IdPGroupB users to access the
Administrator page, the configuration is:

l Group Attribute: Groups

l Admin Groups: IdPGroupA,IdPGroupB


When the admin pages are protected by SAML authentication, only
members who belong to the admin groups are able to access them.

3. Click Generate config.

The following configuration files are generated in the
WEB-INF/classes/auth/SAML folder of the MicroStrategy Library
installation directory:

l MstrSamlConfig.xml: Contains run-time SAML support
configuration parameters

l SPMetadata.xml: Contains metadata describing your web
application to SSO

l SamlKeystore.jks: Contains necessary cryptographic material

Do not rename any of the generated files.

Register Your SAML Identity Provider with MicroStrategy Library

MicroStrategy Library needs a metadata file from the IdP to identify which
service you are using.

To register your SAML IdP:

1. Download the metadata file and save it as IDPMetadata.xml

This file name is case sensitive and must be saved exactly as shown
above.

2. Place the file in the WEB-INF/classes/auth/SAML folder with the
MicroStrategy Library configuration files you generated previously.

Registering MicroStrategy Library with Your SAML Identity Provider

MicroStrategy Library needs to be registered with the IdP to enable SAML
authentication. The registration methods provided below should apply to
most IdPs. Exact configuration details may differ depending on your IdP.
Consult your identity provider's documentation for specific instructions.

Register by Uploading SPMetadata.xml

Many IdPs provide a convenient way to register an application by uploading
a metadata file.

Use the SPMetadata.xml file generated previously and follow the IdP's
instructions to register the MicroStrategy Library application.

Manual Registration

If uploading a metadata file is not supported by your IdP, manual
configuration is necessary.

The SPMetadata.xml file contains all of the information needed for manual
configuration.

l The entityID= parameter is the same Entity ID you provided on the
SAML config page.

l AssertionConsumerService Location= this URL is located near the
end of the file.

Be aware that there are multiple URLs in this file. The
AssertionConsumerService Location will contain the binding
statement HTTP-POST at the end.

l If the signing certificate is required:

1. Copy the text between the <ds:X509Certificate> and
</ds:X509Certificate> tags.

2. Paste the contents into a text editor.

3. Save the file as file_name.cer and upload it to the IdP.
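Steps 1 to 3 above can also be scripted. The following sketch is a hypothetical helper, not part of MicroStrategy: it assumes SPMetadata.xml is in the current directory and simply copies the Base64 text between the certificate tags into a .cer file. The namespace prefix (ds: or otherwise) may differ in your metadata, so the pattern accepts any prefix.

```python
# Hypothetical helper (not a MicroStrategy tool): extract the signing
# certificate from SPMetadata.xml and save it as a .cer file for the IdP.
import re

def extract_certificate(metadata_path, cer_path):
    with open(metadata_path, encoding="utf-8") as f:
        xml = f.read()
    # Grab the Base64 text between the X509Certificate tags; the namespace
    # prefix (ds:, or none) is allowed to vary.
    match = re.search(
        r"<(?:\w+:)?X509Certificate>(.*?)</(?:\w+:)?X509Certificate>",
        xml,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("no X509Certificate element found in " + metadata_path)
    cert = match.group(1).strip()
    with open(cer_path, "w", encoding="utf-8") as f:
        f.write(cert)
    return cert
```

The resulting file contains only the certificate text, matching the manual copy-and-paste procedure described above.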


SAML Assertion Attributes Configuration

MicroStrategy Library uses information about users from the SAML
response to create Intelligence server sessions. These settings determine
how SAML users are mapped or imported to MicroStrategy.

The user properties that MicroStrategy uses for mapping are:

Required Attributes:
l Name ID - Maps to Trusted Authenticated Request User ID of the
MicroStrategy user as defined in MicroStrategy Developer.

Optional Attributes:
l DisplayName - Used to populate or link to a MicroStrategy user's Full
name

l Email - User email

l DistinguishedName - Used to extract additional user information from the
LDAP server

l Groups - List of groups user belongs to

Attribute names are case sensitive. Make sure any SAML attribute name
configured here is an exact match to the application configuration.

In the case where IdP does not allow customization of SAML attribute
names and provides fixed names instead, you may modify the
corresponding attribute names in MstrSamlConfig.xml generated
previously.

For more information on mapping users between a SAML IdP and
MicroStrategy, see Mapping SAML Users to MicroStrategy.


Enabling SAML Authentication Mode

Chrome Web Browser version 80 introduces new changes to cross-site
embedding. For more information, see KB484005: Chrome v80 Cookie
Behavior and the Impact on MicroStrategy Deployments.

To use SAML authentication, it needs to be enabled on MicroStrategy
Library as a login mode.

1. Launch the Library Admin page by entering the following URL in your
web browser:

http://<FQDN>:<port>/MicroStrategyLibrary/admin

where <FQDN> is the Fully Qualified Domain Name of the machine
hosting your MicroStrategy Library application, and <port> is the
assigned port number.

2. On the Library Web Server tab, select SAML from the list of available
Authentication Modes.

If you use MicroStrategy Identity Server as your SAML identity provider,
select MicroStrategy Identity Server.

3. Click Create Trusted Relationship to establish trusted communication
between the Library Web Server and Intelligence server.

Ensure the Intelligence server information is entered correctly before
establishing this trusted relationship.

4. Click Save.

If you are using 2021 Update 1, follow the instructions in KB485016.

5. Restart your Web server to apply the changes.


Library Admin Authentication

In 2021 Update 2 or later, the Library admin pages support basic and SAML
authentication when only SAML authentication is enabled. The admin pages
authentication is governed by the auth.admin.authMethod parameter in
the WEB-INF/classes/config/configOverride.properties file. If
the parameter is not present in the file, you can add it as shown below.

There are two possible values for the auth.admin.authMethod
parameter:

l auth.admin.authMethod = 1 (Default)

The default value of the auth.admin.authMethod parameter is 1. This
means the Library admin pages are protected by basic authentication.

l auth.admin.authMethod = 2

The Library admin pages are protected by the SAML admin groups
specified in the saml/config/open form. These admin groups are linked
to the groups on the Identity Provider (IdP) side. Only members who belong
to the IdP admin groups can access the admin pages. Users that do not
belong to an admin group receive a 403 Forbidden error.

The administrator can change the parameter value as per the requirements.
A Web application server restart is required for the changes to take effect.

The Library admin pages cannot be protected by the SAML admin groups
when multiple authentication modes are enabled.
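For example, to protect the Library admin pages with the SAML admin groups, the override file would contain the following. This is a sketch; the file path and parameter are as stated above, and the comment summarizes the two documented values:

```properties
# WEB-INF/classes/config/configOverride.properties
# 1 = basic authentication (default); 2 = SAML admin groups
auth.admin.authMethod = 2
```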

Enable SAML Logging

1. Access the machine on which MicroStrategy Library is
installed/deployed and browse to <Library Folder Path>/WEB-INF/classes.

2. Locate and edit logback.xml.


3. Locate <logger name="org.springframework" level="ERROR">.
Remove the comment tag and change the value of level to "DEBUG".

Locate <logger name="com.microstrategy" level="ERROR">.
Remove the comment tag and change the value of level to "DEBUG".

<logger name="org.springframework" level="DEBUG">
    <appender-ref ref="SIFT" />
</logger>
<logger name="com.microstrategy" level="DEBUG">
    <appender-ref ref="SIFT" />
</logger>

4. Locate <filter
class="ch.qos.logback.classic.filter.ThresholdFilter">
and change the level to be "DEBUG".

<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>DEBUG</level>
</filter>

5. Save and close logback.xml.

6. Restart the application server.

7. Additional logging is added to MicroStrategyLibrary-{appName}.log.
By default, this is named MicroStrategyLibrary-MicroStrategyLibrary.log.
You can expect the log file to appear in the folder specified under the
LOG_HOME property of logback.xml. For example,
<property name="LOG_HOME" value="C:/Program Files (x86)/Common Files/MicroStrategy/Log" />.

Once the behavior you are investigating has been reproduced, edit
logback.xml once again and change level="DEBUG" back to
level="ERROR".


Single Sign-On with SAML Authentication for JSP Web and Mobile

You can configure MicroStrategy Web and MicroStrategy Mobile to work with
SAML-compliant single sign-on (SSO). To complete the setup in this
document, a basic understanding of SAML workflows is required.

Though the following prerequisites and procedures refer to MicroStrategy
Web, the same information applies to MicroStrategy Mobile, except where
noted.

Before you begin configuring MicroStrategy Web to support single sign-on,
make sure you have done the following:

l Deployed a SAML-enabled identity provider (IdP) infrastructure

l Verified that MicroStrategy Web is run on a JSP server.

l Deployed MicroStrategy Web on this Web application server. Deploy the
MicroStrategy Web WAR file on the Web application server in accordance
with your Web application server documentation.

The following procedures describe how to configure and integrate SAML
support for MicroStrategy Web to implement single sign-on.

l Generate and Manage SAML Configuration Files

l Existing SAML Configuration Files

l Upload IDPMetadata

l SAML Configuration Generation

l Register MicroStrategy Web with Your Identity Provider

l Configure the Intelligence Server and Enable SAML Authentication

l Configure Logging

l Change the Authentication Mode for the Admin Web Pages


Generate and Manage SAML Configuration Files

MicroStrategy SAML support relies on several configuration files.
MicroStrategy provides a web page that automatically generates the
necessary files based on the provided information. SAML metadata is used
to share configuration information between the Identity Provider (IdP) and
the Service Provider (SP). Metadata for the IdP and the SP is defined in
XML files.

To launch the page that generates the configuration files, open a browser
and enter the following URL:

<web application_path>/saml/config/open

To access the page, you are prompted for the application server's admin credentials.

If you deployed MicroStrategy Web under the name MicroStrategyWeb and
you are launching the configuration page from the machine where you
deployed MicroStrategy Web, the URL is:

http://<FQDN>:<port>/MicroStrategyWeb/saml/config/open

If you deployed MicroStrategy Mobile under the name MicroStrategyMobile
and you are launching the configuration page from the machine where you
deployed MicroStrategy Mobile, the URL is:

http://<FQDN>:<port>/MicroStrategyMobile/saml/config/open

Existing SAML Configuration Files

In MicroStrategy 2021 Update 3, you can download the existing SAML
configuration files without manually connecting to the Web server machine.

If you have already configured SAML, you can download the following SAML
configuration files and verify the content.


l IDPMetadata.xml

l SPMetadata.xml

l SamlKeystore.jks

Upload IDPMetadata

In MicroStrategy 2021 Update 3, you can upload the IDPMetadata.xml
configuration file without manually connecting to the Web server machine.

You can upload or change the existing IDPMetadata.xml file with the
metadata file generated by the Identity Provider.

SAML Configuration Generation

The SAML configuration files are generated by submitting the following
details:

l General

l Entity ID: This is the unique identifier of the web application to be
recognized by the IdP.

Some IdPs may require Entity ID to be the web application URL. SAML
standards state it can be any string as long as a unique match can be
found among the IdP's registered entity IDs. Follow the requirements for
your specific IdP.

l Entity base URL: This is the URL where the IdP sends and receives SAML
requests and responses. The field is automatically generated when
you load the configuration page, but it should always be double-checked.

If you deployed MicroStrategy Web under the name MicroStrategyWeb,
the URL is:

http://<FQDN>:<port>/MicroStrategyWeb


If you deployed MicroStrategy Mobile under the name
MicroStrategyMobile, the URL is:

http://<FQDN>:<port>/MicroStrategyMobile

If the web application is set up behind a reverse proxy or load balancer,
use the FQDN of the proxy or load balancer in this URL.

l Do not use "localhost" for the Entity base URL.

l Do not use a trailing / at the end of the URL.

l Once configured, remember to always use this URL to access
MicroStrategy Web. Visiting with any alternative host name will
cause SAML authentication to fail.

l Behind the proxy: Using a reverse proxy or load balancer can alter the
HTTP headers of the messages sent to the application server. These
HTTP headers are checked against the destination specified in the
SAML response to make sure it is sent to the correct destination. A
mismatch between the two values can cause the message delivery to
fail. To prevent this, select Yes if MicroStrategy Web runs behind a
reverse proxy or load balancer. The base URL field is set to the front-end
URL. Select No if you are not using a reverse proxy or load balancer.

l Encryption

l Signature algorithm: The default is the industry-standard
"SHA256 with RSA" signature algorithm. Set this value in accordance
with the requirements of your specific IdP.

l Generate Encryption Key: Set to No by default. Setting to Yes
generates an encryption key and stores it in the web application
metadata XML file.


If you set Generate Encryption Key to Yes, SAML authentication will
not work unless you have the proper Java encryption strength policy
and the correct setup on the IdP side.

l Assertion Attribute mapping

These options control how user attributes received from the SAML
responses are processed. If the SAML attribute names are configurable on
the IdP side, you may leave all options as default. If your IdP sends
SAML attributes with fixed names, the values must be changed on the web
application side to match.

You can also change attribute names in MstrSamlConfig.xml even
after the configuration is done.

l Display Name Attribute: User display name attribute

l Email Attribute: User email address attribute

l Distinguished Name Attribute: User distinguished name attribute

l Group Attribute: User group attribute

l Group format

l Simple: The default option takes a user's group information as plain
group names. When using this option, make sure the values sent over by
the IdP in the "Groups" attribute are group names and nothing else.

l DistinguishedName: Values sent over in the "Groups" attribute are the
LDAP DistinguishedNames of the user's groups. Use this option only
when utilizing LDAP integration or when the IdP only sends group
information as DistinguishedNames.

l Admin Groups: Defines groups that can access the Administrator page.

To define multiple groups, use a comma to separate them. Do not add a
space in front of or behind the comma.


For example, group information is passed in the SAML response as:

<saml2:Attribute Name="Groups"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">IdPGroupA</saml2:AttributeValue>
    <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">IdPGroupB</saml2:AttributeValue>
</saml2:Attribute>

To allow IdPGroupA and IdPGroupB users to access the Administrator
page, the configuration is:

l Group Attribute: Groups

l Admin Groups: IdPGroupA,IdPGroupB

Click Generate config to generate three configuration files in the
WEB-INF/classes/resources/SAML folder of the MicroStrategy Web
installation directory:

l MstrSamlConfig.xml: Contains run-time SAML support configuration
parameters

l SPMetadata.xml: Contains metadata describing your web application to
SSO

l SamlKeystore.jks: Contains necessary cryptographic material

Do not rename any of the generated files.

Register MicroStrategy Web with Your Identity Provider

To register MicroStrategy Web with your IdP, you need to do the following:

l Register MicroStrategy Web with your IdP using the SPMetadata.xml file
you generated in the previous step.

l Configure the SAML Assertion attributes


Each SAML-compliant IdP has a different way to perform these steps. The
sections below provide a general overview of the process.

1. Register the web application with SSO:

Use the SPMetadata.xml file you generated in the previous step to
register the MicroStrategy Web application with the IdP.

If uploading a metadata file is not supported by your IdP, manual
configuration is necessary.

The SPMetadata.xml file contains all of the information needed for
manual configuration.

l The entityID= parameter is the same Entity ID you provided on the
SAML config page.

l AssertionConsumerService Location= this URL is located near the
end of the file.

Be aware that there are multiple URLs in this file. The
AssertionConsumerService Location will contain the binding
statement HTTP-POST at the end.

l If the signing certificate is required:

1. Copy the text between the <ds:X509Certificate> and
</ds:X509Certificate> tags.

2. Paste the contents into a text editor.

3. Save the file as file_name.cer and upload it to the IdP.

2. Configure SAML Assertion attributes:

MicroStrategy Web uses information about users from the SAML
response to create Intelligence Server sessions. These settings determine
how SAML users are mapped or imported to MicroStrategy.

The user properties that MicroStrategy uses for mapping are:


Required attributes

l Name ID: Maps to Trusted Authenticated Request User ID of the
MicroStrategy user as defined in MicroStrategy Developer.

Optional attributes

l DisplayName: Used to populate or link to a MicroStrategy user's Full
name

l EMail: User email

l DistinguishedName: Used to extract additional user information from
the LDAP server

l Groups: List of groups user belongs to

Attribute names are case sensitive. Make sure any SAML attribute
name configured here is an exact match to the web application
configuration.

In the case where IdP does not allow customization of SAML attribute
names and provides fixed names instead, you may modify the
corresponding attribute names in MstrSamlConfig.xml generated
previously.

For more information on mapping users between a SAML IdP and
MicroStrategy, see Mapping SAML Users to MicroStrategy.

When configuring assertion attributes, make sure you set up users who
belong to a group (for example, admin) with the same group name as
defined when generating configuration files in MicroStrategy Web
(step 2 in Generate and Manage SAML Configuration Files).
Otherwise, no user will be able to access the web administrator page
after the web.xml file has been modified and the Web server
restarted. Use Groups as the SAML Attribute Name.


3. Download the IdP metadata:

Consult the SSO documentation for instructions on how to export or
download the IdP metadata. The IdP metadata file must be named
IDPMetadata.xml. This file can either be uploaded using the Upload
IDPMetadata functionality mentioned in Generate and Manage SAML
Configuration Files or directly placed in the
WEB-INF/classes/resources/SAML folder. Ensure that the EntityID
value in the IDPMetadata.xml file is different from the EntityID
value in the SPMetadata.xml file to avoid web application errors.

MicroStrategy does not automatically update the IDPMetadata.xml
file. If for any reason the metadata changes on the IdP side, you will
need to download and replace IDPMetadata.xml manually.
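Because an EntityID clash between the two metadata files causes web application errors, a quick sanity check can be scripted. The following sketch is a hypothetical helper, not a MicroStrategy tool; it relies only on the fact that entityID is an attribute of the root EntityDescriptor element, and raises an error when the two values match:

```python
# Sketch (not a MicroStrategy tool): confirm the IdP and SP metadata files
# carry different entityID values before deploying them.
import xml.etree.ElementTree as ET

def entity_id(path):
    # entityID is an attribute of the root EntityDescriptor element.
    return ET.parse(path).getroot().attrib["entityID"]

def check_entity_ids(idp_path, sp_path):
    idp, sp = entity_id(idp_path), entity_id(sp_path)
    if idp == sp:
        raise ValueError("entityID %r is used by both metadata files" % idp)
    return idp, sp
```

Run it against the IDPMetadata.xml and SPMetadata.xml files in the WEB-INF/classes/resources/SAML folder before restarting the Web server.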

Configure the Intelligence Server and Enable SAML Authentication

To use SAML authentication, you need to configure the trusted relationship
between the Web server and the Intelligence Server and enable SAML
authentication. This is done through the Administrator page. Open the
admin page for your web application. Then, connect to the Intelligence
Server you want to use.

l Establish trust between the server and Intelligence Server:

1. Open the Server properties editor.

2. Next to Trust relationship between MicroStrategy Web Server
and MicroStrategy Intelligence Server, click Setup.


3. Enter the Intelligence Server administrator credentials.

4. Click Create Trust relationship.

l Enable SAML authentication for 2021 Update 2 or later:

1. In the Default Properties section of the Web Administrator page,
enable SAML authentication and click Save.

2. Restart the Web server.

l Enable SAML authentication for 2021 Update 1:

1. In the Default Properties section of the Web Administrator page,
enable SAML authentication and click Save.


2. Locate the web.xml file in the WEB-INF folder of the MicroStrategy
Web installation directory and open it in a text editor.

3. Comment out the two security constraints to disable basic
authentication for the Administrator page. Surround the constraints
with <!-- and --> tags. Make sure that there are no sub-comments in
the text, as this may cause an error. If you decide to change to
another authentication mode besides SAML in the future, you must
reverse the changes done in this step.
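The constraints themselves are not reproduced in this guide. As a purely illustrative sketch, a commented-out servlet security constraint takes the shape below; the element names follow the standard web.xml schema, but the resource names and URL patterns shown here are placeholders, so copy the actual constraints from your own web.xml rather than from this fragment:

```xml
<!--
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Admin pages (placeholder)</web-resource-name>
        <url-pattern>/servlet/mstrWebAdmin/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>
-->
```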


l Enable SAML authentication for the 2021 platform release or older
versions:

In MicroStrategy Web, the Default properties screen is used for
configuring the default login mode, but the default properties do not apply
to SAML for 2021 or older versions. When SAML authentication is configured
in web.xml, this screen displays SAML settings regardless of the default
property values and all the login fields on the page are disabled. SAML is
chosen unconditionally for trusted mode.

If you decide to configure SAML authentication in web.xml, you must first
enable Trusted Authentication Request.


To enable SAML in the Web application for 2021 or older versions, modify
the web.xml file located in the WEB-INF folder of the MicroStrategy Web
installation directory.

1. Stop the MicroStrategy Web application server.

2. Delete the first and last line of the web.xml fragment shown below to
enable SAML authentication.

<!-- Delete fragment below to enable SAML Authentication mode

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:resources/SAML/SpringSAMLConfig.xml</param-value>
</context-param>

<context-param>
    <param-name>contextInitializerClasses</param-name>
    <param-value>com.microstrategy.auth.saml.config.ConfigApplicationContextInitializer</param-value>
</context-param>

<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/servlet/*</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/saml/*</url-pattern>
</filter-mapping>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
-->

3. Save the web.xml file.

4. Restart the Web server.

To disable SAML in a Web application for the 2021 platform release and
older versions, modify the web.xml file located in the WEB-INF folder of
the MicroStrategy Web installation directory.

1. Replace the web.xml file of the Web application with the original file
that you saved.

2. Open the Web Administrator page.

3. Change the Login mode to the desired mode.

4. Remove the trust relationship between the Web server and
Intelligence Server.

5. Restart the Web server.


Configure Logging

1. Locate the log4j2.properties file in the WEB-INF/classes folder.

2. Modify the property.filename property to point to the folder where
you want the SAML logs stored.

It is not recommended to leave the file as is, since a relative file path
is unreliable and the log can end up anywhere; it often cannot be found
in the Web application folder afterwards. Use full file paths to fully
control the log location.

In a Windows environment, the file path must be in Java format. This means you either need to change each backslash ("\") to a slash ("/"), or you need to escape the backslash with another one ("\\"). You can also shorten the path by referring to the Tomcat base folder as a variable, for example:

${catalina.home}/webapps/MicroStrategy/WEB-INF/log/SAML/SAML.log
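The two ways of writing a Windows path in Java format can be sanity-checked with a short snippet. This is a sketch; the path below is a made-up example, not a real MicroStrategy location:

```python
# A hypothetical Windows log path to convert to Java properties format.
win_path = r"C:\Tomcat\webapps\MicroStrategy\WEB-INF\log\SAML\SAML.log"

# Option 1: change each backslash to a forward slash.
forward = win_path.replace("\\", "/")

# Option 2: escape each backslash with another one.
escaped = win_path.replace("\\", "\\\\")

print(forward)  # C:/Tomcat/webapps/MicroStrategy/WEB-INF/log/SAML/SAML.log
```

Either resulting form is acceptable in a log4j2.properties file.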

For troubleshooting purposes, it is recommended to first change the level of org.opensaml (the logger.d.level property) to debug and leave everything else as the default. This generates a clean log with all SAML messages, along with any errors or exceptions.
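Putting the two recommendations together, a minimal log4j2.properties fragment might look like the following. This is a sketch: the logger.d.name value and the exact path are assumptions based on the defaults described above, so verify them against your shipped file.

```properties
# Hypothetical fragment -- confirm the property names against your log4j2.properties.
# Use a full path instead of the unreliable relative default:
property.filename = ${catalina.home}/webapps/MicroStrategy/WEB-INF/log/SAML/SAML.log

# Raise only org.opensaml to debug for a clean SAML trace:
logger.d.name = org.opensaml
logger.d.level = debug
```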

3. Restart the Web application server to apply all changes.

If you have a problem accessing the MicroStrategy Web Administrator page, close and reopen your web browser to clear the old browser cache.

Change the Authentication Mode for the Admin Web Pages

In MicroStrategy 2021 Update 2 or later, the Web admin pages support SAML and basic authentication when SAML authentication is enabled. Authentication for the admin pages is governed by the springAdminAuthMethod parameter located in the WEB-INF/xml/sys_defaults.properties file.

There are two possible values for the springAdminAuthMethod parameter:

l springAdminAuthMethod = 2

The default value of the springAdminAuthMethod parameter is 2. This means the Web admin pages are protected by the SAML admin groups mentioned in the saml/config/open form. These admin groups are linked to the groups on the Identity Provider (IDP) side. Only members who belong to the IDP admin groups can access the admin pages; users that do not belong to the admin group receive a 403 Forbidden error.

l springAdminAuthMethod = 1

Admin pages are protected with basic authentication.

The administrator can change the parameter value as needed. A Web server restart is required for the change to take effect.
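For example, to switch the admin pages to basic authentication, the relevant line in WEB-INF/xml/sys_defaults.properties would look like the following sketch of the single setting described above:

```properties
# 2 = admin pages protected by SAML admin groups (default)
# 1 = admin pages protected by basic authentication
springAdminAuthMethod = 1
```

Restart the Web server after editing the file for the change to take effect.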

Enabling Single Sign-On with SAML Authentication for ASP Web and Mobile
You can configure MicroStrategy ASP Web and Mobile to support SAML
using Shibboleth Service Provider for IIS.

Shibboleth Service Provider Setup

MicroStrategy Integration

Shibboleth Service Provider Configuration

Identity Provider Configuration

Role-based authentication to secure Admin pages in ASP Web:

In MicroStrategy 9.0 and above, ACL-based protection is supported for Admin pages (asp/Admin.aspx and asp/TaskAdmin.aspx). By default, only administrators have access to Admin pages.

Additionally, MicroStrategy 11.0 introduced a new feature to protect Admin pages using Windows IIS URL Authorization. By default, the URL Authorization feature is not installed by the Windows OS. IIS URL Authorization is supported by IIS 7.0 and above. You can find instructions to install IIS URL Authorization in the Microsoft IIS documentation.

The authorization rule has been added to Web.config out of the box. Once
you install the IIS URL Authorization module, you will automatically get
protection for Admin pages.

Compared to ACL-based protection, IIS URL Authorization has a centralized configuration in Web.config.

Shibboleth Service Provider Setup

1. Install the latest version of Shibboleth Service Provider.

2. Follow the installation instructions from Shibboleth for your version of IIS.

Configuring the New Plugin

This is best done from the command line. You will also need admin
privileges.

Configuring the IIS7 DLL

From the C:\Windows\System32\InetSrv directory, run the following lines:

appcmd install module /name:ShibNative32 /image:"c:\opt\shibboleth-sp\lib\shibboleth\iis7_shib.dll" /precondition:bitness32
appcmd install module /name:ShibNative /image:"c:\opt\shibboleth-sp\lib64\shibboleth\iis7_shib.dll" /precondition:bitness64


The Shibboleth 3.2.3 Windows installer contains a Configure IIS7 module option to automatically install the Shibboleth module into IIS. If this option is selected, you can skip running appcmd.

Verifying the Installation

Open the following URL:

https://localhost/Shibboleth.sso/Status

This must be run as localhost, and should return XML containing information about Shibboleth. The latest Shibboleth version only supports HTTPS connections.
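If you want to script this check, one option is to fetch the status page and confirm the body parses as XML. The snippet below shows only the parsing half, against a stand-in response; the element names are placeholders, not the real Shibboleth schema:

```python
import xml.etree.ElementTree as ET

# Stand-in for the body returned by https://localhost/Shibboleth.sso/Status.
# The real response structure differs; treat this as a placeholder.
sample_response = "<StatusHandler><OK/></StatusHandler>"

# fromstring raises ParseError if the body is not well-formed XML.
root = ET.fromstring(sample_response)
status_ok = root.find("OK") is not None
```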

MicroStrategy Integration

Integration with MicroStrategy ASP Web

1. Set up the trust relationship between MicroStrategy Web and Intelligence Server:

a. Open the admin page at https://localhost/MicroStrategy/asp/Admin.aspx

b. Go to Intelligence Servers > Servers

c. For each Intelligence Server, go to Properties > Modify

d. Click on "Trust relationship between Web Server and MicroStrategy Intelligence Server".

e. Enter credentials. When successfully set up, there should be a check mark next to the trust.

2. Navigate to Intelligence Servers > Default properties > Login.

3. Enable Trusted Authentication Request log-in mode.

4. Under Trusted Authentication Provider select Custom SSO.


5. In C:\Program Files (x86)\MicroStrategy\Web ASPx\WEB-INF\classes\resources\custom_security.properties, configure the LoginParam parameter with the same value associated with the user mapped from the SAML assertion.

If the HTTP header in attribute-map.xml is set to id="SBUSER", then custom_security.properties must contain LoginParam=SBUSER.
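Because a mismatch between the attribute-map.xml header id and LoginParam silently breaks the login, a small consistency check can help. This sketch parses an inline copy of the mapping; the attribute name shown is the Keycloak UID example used later in this section:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for attribute-map.xml (namespaces omitted for brevity).
attribute_map = """<Attributes>
  <Attribute name="urn:oid:0.9.2342.19200300.100.1.1" id="SBUSER"/>
</Attributes>"""

# Value of LoginParam from custom_security.properties.
login_param = "SBUSER"

header_ids = [attr.get("id") for attr in ET.fromstring(attribute_map)]
consistent = login_param in header_ids
```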

MicroStrategy User Mapping

Ensure Intelligence Server users are mapped to your SAML users as identified by the UID. Access User Manager, either with MicroStrategy Developer or the Intelligence Server Administration Portal in MicroStrategy Web.

MicroStrategy Developer
To map users using MicroStrategy Developer, open: User Manager > Edit
User Properties > Authentication > Metadata > Trusted Authentication
Request > User ID.


Intelligence Server Administration Portal on MicroStrategy Web


To map users through the Web Administration Portal, go to: MicroStrategy
Web > Intelligence Server Administration Portal > User Manager > Edit
User Properties > Authentication > Trusted Authentication Login.


Shibboleth Service Provider Configuration

To configure the Shibboleth Service Provider, use the following instructions in conjunction with the Shibboleth documentation.

1. Configure %SHIBBOLETH_INSTALL_DIR%\etc\shibboleth\shibboleth2.xml

l Set useHeaders to true in <ISAPI>

<ISAPI normalizeRequest="true" safeHeaderNames="true" useHeaders="true">

l Replace site name with a fully qualified site name:

shibboleth2.xml – site

<Site id="1" name="sp.example.org"/>

with


<Site id="1" name="FULLY_QUALIFIED_SERVICE_PROVIDER_HOST_NAME"/>

l Replace host name with fully qualified name, and paths:

shibboleth2.xml – host

<Host name="sp.example.org">
<Path name="secure"
authType="shibboleth"
requireSession="true"/>
</Host>

with

<Host name="FULLY_QUALIFIED_SERVICE_PROVIDER_HOST_NAME">
<Path name="MicroStrategy"
authType="shibboleth"
requireSession="true"/>
<Path name="MicroStrategyMobile"
authType="shibboleth"
requireSession="true"/>
</Host>

l Replace entityID value with a suitable entity name for your new
service provider:

Make note of this value, as it will be required by the Identity Provider.

shibboleth2.xml - entityID

<ApplicationDefaults entityID="https://sp.example.org/shibboleth"
REMOTE_USER="eppn persistent-id targeted-id"
cipherSuites=
"ECDHE+AESGCM:ECDHE:!aNULL:!eNULL:!LOW:!EXPORT:!RC4:!SHA:!SSLv2">

with

<ApplicationDefaults entityID="https://FULLY_QUALIFIED_SERVICE_PROVIDER_HOST_NAME/shibboleth"
REMOTE_USER="eppn persistent-id targeted-id"
cipherSuites=
"ECDHE+AESGCM:ECDHE:!aNULL:!eNULL:!LOW:!EXPORT:!RC4:!SHA:!SSLv2">

l Set the SSO entityID to your SAML Identity Provider. This may be obtained from the Identity Provider metadata by replacing:

shibboleth2.xml - Identity Provider

<SSO entityID="https://idp.example.org/idp/shibboleth"
discoveryProtocol="SAMLDS"
discoveryURL="https://ds.example.org/DS/WAYF">
SAML2 SAML1
</SSO>

with the following:

<SSO entityID="YOUR_SSO_SAML_ENTITY_ID">
SAML2 SAML1
</SSO>

Values for discoveryProtocol and discoveryURL are only required with Shibboleth Identity Provider.

l Obtain Identity Provider metadata:

l URL option (recommended): If the IdP exposes a metadata endpoint, this is the preferred solution; otherwise see the File option below. Add the following declaration below the commented out <MetadataProvider> section:

shibboleth2.xml - Identity Provider metadata

<MetadataProvider
type="XML"
url="https://adfs.example.org/federationmetadata/2007-
06/federationmetadata.xml"/>


l File option: Copy the Identity Provider metadata to the file %SHIBBOLETH_INSTALL_DIR%\etc\shibboleth\partner-metadata.xml. Uncomment the following declaration in shibboleth2.xml:

shibboleth2.xml - Identity Provider metadata

<MetadataProvider
type="XML"
file="partner-metadata.xml"/>

2. Configure %SHIBBOLETH_INSTALL_DIR%\etc\shibboleth\attribute-map.xml to extract several fields from the SAML assertion, which MicroStrategy will associate with an Intelligence Server user. See AttributeNaming on the Shibboleth site for more information.

l Add the <Attribute> mappings under the <Attributes> root. Shibboleth will look for this assertion attribute and map it to the HTTP header SBUSER for the MicroStrategy application to consume. Here is a configuration for ADFS where we read the Windows account name claim. This must be consistent with the Identity Provider claim mapping that will be configured later.

attribute-map.xml user mapping - ADFS

<Attribute
name="http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"
id="SBUSER"/>

Here is a sample configuration for Keycloak, where you read the "urn:oid:0.9.2342.19200300.100.1.1" or UID claim:

attribute-map.xml user mapping

<Attribute
name="urn:oid:0.9.2342.19200300.100.1.1"
id="SBUSER"
nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"/>

It is also recommended to comment out the unused <Attribute> declarations in attribute-map.xml.

3. Restart the following services:

l Shibboleth 2 Daemon: May be done with Windows services, or the Windows Command Prompt:

net stop shibd_default


net start shibd_default

l World Wide Web Publishing Service: May be done with Windows services, or the Windows Command Prompt:

net stop w3svc


net start w3svc

4. Verify XML is returned from https://localhost/Shibboleth.sso/Status again. Also, ensure the Application entityID and MetadataProvider source values have been correctly configured in previous steps.

Identity Provider Configuration

It is necessary to (1) add the Service Provider configured above as a new client in the SAML Identity Provider (for example, ADFS), and (2) ensure that the user login/UID is also included in the SAML assertion. Some guidance is provided below for several Identity Providers; refer to their documentation for adding new clients/relying parties for details.


ADFS

1. Run the Microsoft Windows Server Manager.

2. Under Tools run ADFS Management.

3. Expand to the following: ADFS > Trust Relationships > Relying Party
Trusts.

4. Click Add Relying Party Trust to launch the wizard.

5. When you reach the "Select Data Source" option, you need the
Shibboleth Service Provider metadata. Enter:

https://YOUR_MICROSTRATEGY_WEB_URL/Shibboleth.sso/Metadata


If the HTTP URL metadata does not work, you may have to manually
download and upload the metadata file.

6. For "Display name", it is recommended you use YOUR_MICROSTRATEGY_WEB_URL.

7. When finished, you may be prompted to edit claim rules. If not, you can
right-click your new client and select Edit claim rules.

8. Click Add Rule under the tab Issuance Claim Rules. The Add Transform Rule Claim Wizard appears.

9. If your ADFS is backed by LDAP, select Send LDAP Attributes as Claims. Otherwise, refer to the ADFS documentation.

10. Set the following fields to values consistent with the Shibboleth
attribute-map.xml configuration from above.


l Claim rule name: user

l Attribute store: Active Directory

l Mapping: LDAP Attribute=SAM-Account-Name, Outgoing Claim Type=Windows account name

Keycloak

The Identity Provider must ensure the user identity field is also included in the SAML assertion generated when a user is authenticated. The exact field depends on the Identity Provider. The user identity is associated with the SAML parameter name urn:oid:0.9.2342.19200300.100.1.1. This parameter must be consistent with the parameter of the same name in the Shibboleth Service Provider attribute-map.xml declaration.


Library SAML Configuration with Proxy or Load Balancer


Using a reverse proxy or load balancer can alter the HTTP headers of the
messages sent to the application server. These HTTP headers are checked
against the destination specified in the SAML response to make sure it is
sent to the correct destination. A mismatch between the two values can
cause the message delivery to fail.

The SAMLConfig.xml file needs to be altered to force the application to ignore HTTP headers and instead check against a user-defined value. This file is stored as part of a .jar file in the SAML support libraries. You can make this change manually using the procedure below, or modify the Behind the proxy setting on the SAML configuration page. See Generate SAML Configuration Files for more information about the Behind the proxy setting.

To modify the SAMLConfig.xml file:

1. Locate restful-api-1.0-SNAPSHOT-jar-with-dependencies.jar in the WEB-INF/lib folder of the MicroStrategy Library file directory.

2. Find SAMLConfig.xml in auth, inside the JAR file.

3. Copy the file and place it in WEB-INF/classes/auth with your other SAML configuration files. Now any modification of the file will take precedence over the original file inside the JAR file.

4. Find <bean class="org.springframework.security.saml.context.SAMLContextProviderImpl" id="contextProvider"/> in the file and replace it with the following bean:

<bean id="contextProvider"
class="org.springframework.security.saml.context.SAMLContextProviderLB">
<property name="scheme" value="https"/>
<property name="serverName" value="your external hostname"/>
<property name="serverPort" value="443"/>
<property name="includeServerPortInRequestURL" value="false"/>
<property name="contextPath" value="/MicroStrategyLibrary"/>
</bean>

l The properties here are just examples and need to be configured with correct information to match the application's external URL.

l The bean class is different from the original; it has been changed to SAMLContextProviderLB.

l The contextPath stops at the application name.

Integrating SAML Support with Badge


This procedure provides specific details about integrating MicroStrategy
Web or Library with Badge.

1. Download the IdP metadata:

1. Open Identity Manager.

2. Click the Logical Gateways tab.

3. Click Download your network's Badge IdP metadata.

2. Upload the SP metadata to the MicroStrategy Identity Server:

1. Click the large SAML button.

2. Enable the Upload Pre-configured Metadata option.


3. Click Upload Metadata.

3. Configure assertion attributes by selecting the LDAP attributes and mapping them to SAML Assertion attributes.

Select LDAP attributes:

1. Open the Users and Badges tab and click Configure in the User
Management section.

2. On the Active Directory Synchronization page, set the Badge user attributes by mapping the values in the Badge field column to the Active Directory Attribute to be used. You may add custom Badge fields with any given name.

Map LDAP attributes to SAML Assertion attributes:

1. On the Logical Gateways tab, click the Edit link in the Web Application login section.

2. In the Configure SAML Settings dialog, click Configure on SAML Attribute Consuming Service.


3. Map the SAML Attribute Name to the User Field that contains the
appropriate Active Directory Attribute configured in the previous
step.

4. Click Save.

4. Check the group format setting by finding the <groupFormat> tag in the MstrSamlConfig.xml file.

If your Identity network is configured with Active Directory or LDAP, the group information should be sent as DistinguishedNames.


Integrating SAML Support with Azure AD

Create an Application

1. Sign in to the Azure portal. If you have already launched Azure, under
Manage, go to Azure Active Directory and select Enterprise
applications.

2. At the top, select New application > Create your own application.

3. Provide a Name for the application, select the Non-gallery app option,
and click Create.

Configure the Application

1. In the Set up single sign-on tile, click Get Started and select SAML
as the sign-on method.


2. Click Upload metadata file and add SPMetadata.xml from your deployment folder.

The default path of SPMetadata.xml for Library is:

/opt/apache/tomcat/apache-tomcat-9.0.43/webapps/MicroStrategyLibrary/WEB-INF/classes/auth/SAML

The default path of SPMetadata.xml for Web is:

/opt/apache/tomcat/apache-tomcat-9.0.43/webapps/MicroStrategy/WEB-INF/classes/resources/SAML

3. Click Save.

4. Set the user attributes as defined in the configuration file. By default, the Unique User Identifier should be user.mail. Add the group attribute by choosing Edit > Add a group claim > All groups and save the defined group attribute.


5. Download the Federation Metadata XML file and save it as IDPMetadata.xml in the SAML folder of your deployment.

Assertion Attributes

1. View the Federation Metadata document downloaded in the previous section to obtain the URIs for required attributes such as displayName, emailaddress, and groups.

<auth:ClaimType
Uri="http://schemas.microsoft.com/identity/claims/displayname"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Display Name</auth:DisplayName>
<auth:Description>Display name of the user.</auth:Description>
</auth:ClaimType>
<auth:ClaimType
Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Email</auth:DisplayName>
<auth:Description>Email address of the user.</auth:Description>
</auth:ClaimType>
<auth:ClaimType
Uri="http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Groups</auth:DisplayName>
<auth:Description>Groups of the user.</auth:Description>
</auth:ClaimType>

2. Copy these values and paste them between the <userInfo> tags in
MstrSamlConfig.xml, located in the deployment folder.

<userInfo>
<groupAttributeName>http://schemas.microsoft.com/ws/2008/06/identity/clai
ms/groups</groupAttributeName>
<groupFormat>Simple</groupFormat>
<dnAttributeName>DistinguishedName</dnAttributeName>
<displayNameAttributeName>http://schemas.microsoft.com/identity/claims/di
splayname</displayNameAttributeName>
<emailAttributeName>http://schemas.xmlsoap.org/ws/2005/05/identity/claims
/emailaddress</emailAttributeName>
<adminGroups>2109318c-dee4-4658-8ca0-51623d97c611</adminGroups>
<roleMap/>
</userInfo>

Azure AD only sends the IDs. For admin permissions, the Object ID
must also be copied.

<adminGroups>36198b4e-7193-4378-xxx4-715e65edb580</adminGroups>
<roleMap/>
</userInfo>
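Since Azure AD sends group claims as object IDs, the value placed in <adminGroups> must be a GUID, not a group display name. A quick format check can catch mistakes; the helper below is hypothetical, not part of MicroStrategy:

```python
import re

# Matches the 8-4-4-4-12 hex layout of an Azure AD object ID.
GUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)

def looks_like_object_id(value: str) -> bool:
    """Return True if value is formatted like an Azure AD object ID."""
    return GUID_RE.match(value) is not None

print(looks_like_object_id("2109318c-dee4-4658-8ca0-51623d97c611"))  # True
print(looks_like_object_id("My Admin Group"))  # False
```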

Troubleshooting

After Making the Above Changes, the Web Deployment Fails to Start

Once web.xml has been changed to include SAML support, it refers to the
metadata and configuration files in the resources/SAML folder. If the Web
deployment fails to start, it is possible the generated files from the
resources/SAML/stage folder were not copied over. Copy the required
files to the SAML folder and restart the application.

Azure Returns a Login Failure and Assertion Is in Place

This is a result of bad metadata in IDPMetadata.xml. Ensure the correct metadata from the application is copied to the SAML folder.


Azure Returns the Error: Application with identifier "xxx" was not found in the directory "xxx"

The App ID URI does not match the entityID set in the SP metadata.
Review the URIs and correct the names accordingly. Changes can be made
in SPMetadata.xml, MstrSamlConfig.xml, and Azure. Restart the
application after you finalize the changes.

Azure SAML Group Number Limitation

If the number of groups a user belongs to exceeds the limit (150 for SAML), the group attributes carried in the SAML assertion are limited and group mapping will not work as expected. Refer to the Azure documentation for more details.

Integrating SAML Support with AD FS


This procedure provides specific details about integrating MicroStrategy
Web, Library, and Mobile with AD FS. All steps below are performed in AD
FS Management Console. Additionally, the following steps assume that
SAML is already enabled in the AD FS server.


1. Download the IDP metadata.

1. In Server Manager, go to Tools > AD FS Management.

2. Go to the Endpoints menu and locate the Federation Metadata entry point. The entry contains a field similar to /FederationMetadata/2007-06/FederationMetadata.xml.

3. In any browser, enter the URL using the format <ADFS Server
base URL>/<Metadata entry point> to download the
metadata file to the browser's Downloads folder.

In this example, we navigate to https://cl-desp-adfs.techsecurity.com/FederationMetadata/2007-06/FederationMetadata.xml from the ADFS server machine because ADFS access is restricted to that machine.


4. Copy the metadata file into your Web application's WEB-INF/classes/resources/SAML folder.

5. Copy the metadata file into your Library application's MicroStrategyLibrary/WEB-INF/classes/auth/SAML folder.

6. Copy the metadata file into your Mobile application's WEB-INF/classes/resources/SAML folder.

7. Rename the copied metadata file to IDPMetadata.xml.

2. Establish trust between the Web server and Intelligence server. See
Single Sign-On with SAML Authentication for JSP Web and Mobile for
more information.

3. Generate SAML configuration files. See Single Sign-On with SAML Authentication for JSP Web and Mobile for more information.

4. Add the admin user or user groups in WEB-INF/classes/resources/SAML/MstrSamlConfig.xml.


5. Register MicroStrategy Web, Library, and Mobile with the ADFS server.

1. Copy the SPMetadata.xml file from your Web application's WEB-INF/classes/resources/SAML folder to the ADFS server machine.

2. Copy the SPMetadata.xml file from your Library application's WEB-INF/classes/auth/SAML folder to the ADFS server machine.

3. Copy the SPMetadata.xml file from your Mobile application's WEB-INF/classes/resources/SAML folder to the ADFS server machine.

4. In the Console tree, right-click Relying Party Trusts > Add Relying Party Trust.

5. In the Select Data Source pane, select Import data about the relying party from a file.


6. Click Browse and locate the metadata file.

7. Leave the remaining options as default.

6. Add claim rules for the registered relying party trust. See Integrating
SAML Support for ADFS for more information.

1. Right-click your registered application in ADFS > Edit Claim Rules.

2. One by one, add the claim rules and complete the setup on ADFS.
The following are examples of rule creation and a list of created
rules. In this example, the same claim rule is created as specified
in the following screenshot.


a. Select Send LDAP Attributes as Claims as the claim rule template.

b. Select the name shown in the screenshot for the claim rule
name.

c. From the Attribute store drop-down, choose Active Directory.


d. From the LDAP Attribute drop-down, choose an option. This option is the information taken from your Active Directory user.

e. From the Outgoing Claim Type drop-down, choose an option.

f. The following is the list of the mappings for the rules used in
this example:

l User-Principal-Name -> Name

l E-Mail-Addresses -> Name ID


l Token-Groups - Unqualified Names -> Groups

l Display-Name -> DisplayName

l User-Principal-Name -> DistinguishedName

7. Map the OAuth user to the MicroStrategy user for login control.

8. Restart your application server and test to see if login is successful. Upon successful login, the following screen displays and your user appears.


Integrating SAML Support with Okta


This procedure provides instructions about integrating MicroStrategy Web
with Okta. For more information, see the Okta documentation.

Create an Application

1. Log in as an Okta administrator and go to the Admin page.

2. Go to Applications and click Add Application.

3. Select SAML 2.0.

4. Click Create.

Configure the Application

1. Enter your app name.

2. Click Next.

3. Complete SAML Settings.


l Single Sign on URL: Also referred to as the "Assertion Consumer Service URL", this is the MicroStrategy application address that sends and receives SAML messages. If SAML setup is already finished on the MicroStrategy side, it is the URL within the md:AssertionConsumerService tag at the bottom of the SPMetadata.xml file.

The URL usually takes the below form:

http(s)://<host server>/<MSTR application name>/saml/SSO

l Audience URI (SP Entity ID): This corresponds to the entityID value at the top of the SPMetadata.xml file, which is also the first input field on the MicroStrategy SAML configuration page. It is a unique identifier of the MicroStrategy application.

l ATTRIBUTE STATEMENTS (OPTIONAL): This configures which SAML attributes are sent to MicroStrategy. If the default attribute names were used at MicroStrategy SAML configuration, the names are: EMail, DistinguishedName, and DisplayName. The MicroStrategy-side attribute names can be found in the MstrSamlConfig.xml file. For example:

<dnAttributeName>DistinguishedName</dnAttributeName>
<displayNameAttributeName>DisplayName</displayNameAttributeName>
<emailAttributeName>EMail</emailAttributeName>

It is not required to configure all three attributes.

l GROUP ATTRIBUTE STATEMENTS (OPTIONAL): This is used to grant access to the MicroStrategy Web or Mobile Administrator page and manage user privilege inheritance. If the default attribute name was used at MicroStrategy SAML configuration, the name is "Groups". The MicroStrategy-side attribute name can be found in the MstrSamlConfig.xml file. For example:

<groupAttributeName>Groups</groupAttributeName>

Use the filter to select the groups that are sent over. To send over all
the groups, select Regex and enter .* into the field.

You can leave the other fields as default or configure them as needed.
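To double-check which attribute names your deployment actually uses, you can read the tags out of MstrSamlConfig.xml. This sketch parses an inline fragment matching the default names described above; the real file contains more elements:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for the relevant part of MstrSamlConfig.xml.
config_fragment = """<userInfo>
  <dnAttributeName>DistinguishedName</dnAttributeName>
  <displayNameAttributeName>DisplayName</displayNameAttributeName>
  <emailAttributeName>EMail</emailAttributeName>
</userInfo>"""

root = ET.fromstring(config_fragment)
# Map each configuration tag to the SAML attribute name it expects.
attribute_names = {child.tag: child.text for child in root}
```

These are the names that must match the attribute statements configured on the Okta side.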

Finish SAML Setup

1. On the Okta admin page, go to Applications and open the application.

2. Go to Assignments.

3. Click Assign to assign the application to users or groups.

4. Go to Sign On.

5. Click Identity Provider metadata.


6. Save the XML file as IDPMetadata.xml, and place it in the MicroStrategy\WEB-INF\classes\resources\SAML folder.

Integrating MicroStrategy With Snowflake for Single Sign-On Using Okta
Starting in MicroStrategy 2020 Update 2, MicroStrategy supports connection
to Snowflake through OAuth authentication.

OAuth authentication is supported only in MicroStrategy Web, Library, and Mobile with HTTPS enabled. OAuth authentication is not supported in MicroStrategy Workstation or Developer.

MicroStrategy and Snowflake also support single sign-on (SSO) using the SAML protocol, and Okta as an Identity Provider (IdP).

If any of the following steps have already been configured in your environment, you can skip them.


1. Configure MicroStrategy to use single sign-on with Okta

i. Troubleshoot and test the configuration

2. Configure Snowflake to use single sign-on with Okta

i. Troubleshoot and test the configuration

ii. Set up Okta's External OAuth security integration

iii. Test the External OAuth configuration

3. Configure the database instance to use Okta

i. Create a basic authentication database connection

ii. Add warehouse tables to the warehouse using MicroStrategy Developer

iii. Create an OAuth authentication database connection

iv. Create connection mappings for non-admin users

4. Consume data from dashboards and reports

i. Authenticate to Snowflake from MicroStrategy Web

ii. Execute dashboards

5. Troubleshooting

Configure MicroStrategy to Use Single Sign-On with Okta

Refer to the following documentation to configure MicroStrategy Web and Library to use single sign-on.

MicroStrategy only supports JSP Web. IIS is not supported.

1. Enabling SAML Authentication for MicroStrategy Library

2. Enabling SAML Authentication for JSP Web and Mobile


3. Integrating SAML Support with Okta

4. Mapping SAML Users to MicroStrategy

Once you've completed all steps, you can troubleshoot the configuration.

Troubleshoot and Test the Configuration

1. Access your MicroStrategy Web URL. For example, https://tec-w-012480:8443/MicroStrategy/servlet/mstrWeb.

You are redirected to Okta's authentication page.

2. Enter your credentials to authenticate to Okta. You are redirected to


MicroStrategy Web or Library.


Configure Snowflake to Use Single Sign-On with Okta

Refer to the following Snowflake documentation to set up Snowflake single sign-on authentication with Okta.

1. Overview of Federated Authentication and SSO

2. Configuring an Identity Provider (IdP) for Snowflake: Okta Setup

3. Configuring Snowflake to Use Federated Authentication

Troubleshoot and Test the Single Sign-On Configuration

The Okta account used as IdP for Snowflake must be the same account
used to authenticate MicroStrategy.

1. Access Snowflake via the web interface. For example,


https://XXXXX.snowflakecomputing.com/.

2. Click Single Sign On. You are redirected to Okta's authentication


page.


3. Enter your credentials to authenticate to Okta. You are redirected to the


Snowflake web interface and a console appears.

Set Up Okta's External OAuth Security Integration

MicroStrategy automatically authenticates users in Snowflake using OAuth
authentication. To allow OAuth authentication in Snowflake using Okta as
the IdP, refer to the following Snowflake documentation.


1. Introduction to OAuth

2. External OAuth Overview

3. Configure Okta for External OAuth

When creating the Authorization server in Okta (described in Step 2: Create


an OAuth Authorization Server), the following scopes must be specified:

l session:role-any

l openid

l profile

l email

l offline_access
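To illustrate how these scopes appear in the request, here is a minimal sketch of building an Okta authorization URL that carries them. The endpoint, client ID, and redirect URI are placeholder values, not output from any MicroStrategy component:

```python
from urllib.parse import urlencode

# Placeholder Okta authorization server endpoint (dev-XXXXX / YYYYY are
# the same placeholders used elsewhere in this guide).
AUTHORIZE_ENDPOINT = "https://dev-XXXXX.oktapreview.com/oauth2/YYYYY/v1/authorize"

# The scopes the Authorization server must allow for Snowflake.
SCOPES = ["session:role-any", "openid", "profile", "email", "offline_access"]

def build_authorize_url(client_id, redirect_uri):
    """Build an OAuth authorization-code request URL carrying the scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(SCOPES),
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)
```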

Test External OAuth Configuration

Refer to the following Snowflake documentation.

1. Testing Procedure

2. Connecting to Snowflake with External OAuth


Configure the Database Instance to Use Okta

Create a Basic Authentication Database Connection

In MicroStrategy Developer, create a new database instance with a basic


authentication connection.

1. In the Database instance name field, type in a name.

2. From the Database connection type drop-down, select Snowflake.

3. Click New to create a new database connection.

4. In the Database connection name field, type in a name.

5. Select the DSN.

6. Create a database login and save your settings.

Add Warehouse Tables to the Warehouse

Once the database instance is created, it can be used to add tables to the
project schema via MicroStrategy Developer.


Create an OAuth Authentication Database Connection

After adding tables to the project schema, another database connection can
be created for OAuth authentication.

1. Create an OAuth database connection via MicroStrategy Developer:

i. Select the Snowflake_SSO_DSN_OAuth default connection and


click New.

ii. In the Database connection name field, type in a name.

iii. Select the DSN.

iv. Go to the Advanced tab.

v. In the Additional connection string parameters field, enter


TOKEN=?MSTR_OAUTH_TOKEN;AUTHENTICATOR=oauth;.


This will act as a placeholder that will be replaced by a real token


when the user uses the Snowflake database instance.

vi. Click OK.

vii. Click New.

viii. In the Database login, enter a name.

ix. Select the Use network login id (Windows authentication)


checkbox.

2. Set the OAuth parameters in MicroStrategy Web:

i. Log in to MicroStrategy Web as the administrator user.

ii. In the Database Instance menu, select OAuth Parameters.

iii. Fill out the required fields:

l When setting OAuth parameters, select OKTA.

l For Client ID, recover the Client ID saved in Step 1: Configure


Okta for External OAuth.

l For Client Secret, recover the Client Secret saved in Step 1:


Configure Okta for External OAuth.

l For OAuth URL and Token URL, edit the Snowflake's


Authorization Server created in Okta (as described in Step 2:
Create an OAuth Authorization Server).


a. Navigate to the Okta Admin Console.

b. In the Security menu, go to API > Authorization Servers.

c. Edit Snowflake's related authorization server.

d. Copy the value for Issuer. The value should be similar to https://dev-XXXXX.oktapreview.com/oauth2/YYYYY.


e. To obtain the Init OAuth URL and Refresh Token URL, add the following values to the Issuer value:

Init OAuth URL: https://dev-XXXXX.oktapreview.com/oauth2/YYYYY/v1/authorize

Refresh Token URL: https://dev-XXXXX.oktapreview.com/oauth2/YYYYY/v1/token

f. Copy the Callback URL. This will be whitelisted.
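The URL derivation in steps d and e is mechanical; as a rough sketch (the issuer string is the placeholder from the console, and the function is illustrative, not a MicroStrategy API):

```python
def oauth_endpoints(issuer):
    """Derive the Init OAuth and Refresh Token URLs from an Okta Issuer value."""
    base = issuer.rstrip("/")
    return {
        "init_oauth_url": base + "/v1/authorize",
        "refresh_token_url": base + "/v1/token",
    }

endpoints = oauth_endpoints("https://dev-XXXXX.oktapreview.com/oauth2/YYYYY")
```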

3. Whitelist the callback URL:

i. In the Okta Admin Console, go to the application created in Step 1:


Create an OAuth Compatible Client to Use with Snowflake.

ii. Go to the General tab.

iii. Click Edit.

iv. Locate the Login redirect URIs section and click Add URI.

v. Add the copied Callback URL to the list.
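The TOKEN=?MSTR_OAUTH_TOKEN placeholder in the OAuth database connection behaves, conceptually, like a simple substitution at connection time. The sketch below illustrates the idea only; it is not MicroStrategy's actual implementation:

```python
CONNECTION_TEMPLATE = "TOKEN=?MSTR_OAUTH_TOKEN;AUTHENTICATOR=oauth;"

def resolve_connection_string(template, access_token):
    """Substitute the user's real OAuth access token for the placeholder."""
    return template.replace("?MSTR_OAUTH_TOKEN", access_token)
```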


Create Connection Mappings for Non-Admin Users

In this example workflow, the administrator uses basic authentication in
MicroStrategy Developer, while the analyst uses OAuth authentication in
MicroStrategy Web and Library.

A connection mapping can be created for the analyst to use the Snowflake_
SSO_DSN_OAuth connection, and for the administrator to use the
Snowflake_SSO_DSN_Basic connection. For more information on
connection mapping, see Controlling Access to the Database: Connection
Mappings.
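Conceptually, a connection mapping is a per-group lookup with a default fallback. The following toy sketch uses the connection names from this example; the group name and the resolution logic are illustrative, not how Intelligence Server implements mappings:

```python
DEFAULT_CONNECTION = "Snowflake_SSO_DSN_Basic"

# Hypothetical group name; in this example, analysts are mapped to OAuth.
CONNECTION_MAPPINGS = {
    "Analysts": "Snowflake_SSO_DSN_OAuth",
}

def resolve_database_connection(user_groups):
    """Return the mapped connection for the first matching group, else the default."""
    for group in user_groups:
        if group in CONNECTION_MAPPINGS:
            return CONNECTION_MAPPINGS[group]
    return DEFAULT_CONNECTION
```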

1. In MicroStrategy Developer, right-click on Project > Project


Configuration.

2. Go to Database Instances > Connection Mapping.

3. Right-click on the grid > New.

4. Modify the connection mapping to have the appropriate fields.

In this example, the OAuth database connection name is Snowflake_


SSO_DSN_OAuth and the basic database connection name is
Snowflake_SSO_DSN_Basic.

5. Click OK.

6. Go to Administration > Database Instances.

7. Edit the database instance. In this example, the database instance is


Snowflake_SSO.


8. Select Snowflake_SSO_DSN_Basic as the default database


connection.

9. Click OK.

Consum e Data from Dashboards and Reports

Authenticate to Snowflake from MicroStrategy Web

Using an analyst user mapped to the Okta user (as explained in Mapping
SAML Users to MicroStrategy), log in to MicroStrategy Web.

1. In the Data Import dialog, select the primary database instance for the
project. For example, Snowflake_SSO.

The Okta authentication page momentarily appears and then


disappears. If you encounter a 404 error, then the Callback URL is not
correctly whitelisted.

2. Select the database instance. The dialog displays.


At this point, you are authenticated to Snowflake and can access data
and dashboards with your credentials.

Execute Dashboards

Execute a project schema based dashboard.


Troubleshooting

Learn to troubleshoot common errors.


When authenticating to Snowflake from the Data Import dialog, a screen appears with a 404 error

Cause: The callback URL was not added to the whitelist of valid redirect
URLs.

Solution: Add the appropriate callback URL to the whitelist of valid URLs as
described in Whitelist the Callback URL.

Failed to retrieve refresh token for authentication. Error in Process method of Component: QueryEngineServer, Project MicroStrategy TPCH, Job 42, Error Code = -2147212544

Cause: Authentication to Snowflake has not been established yet.

Solution: You need to authenticate to Snowflake via the Data Import dialog.


Client ID or Client Secret not found in metadata. Error in Process method of Component: QueryEngineServer, Project MicroStrategy TPCH, Job 69, Error Code = -2147212544

Cause: Connection mapping resolves to the basic authentication database


connection.

Solution: Confirm the connection mapping is mapped correctly for the user.
Change the default database connection for the database instance.

Intelligence Server Logs

In case of errors, enable WSAuth.log as well as DSSErrors.log.

It is also recommended that you write the WSAuth component log entries directly into DSSErrors.log.

Snowflake Driver Log

To enable the Snowflake driver log, see KB48422: How to enable debug log for newly bundled Snowflake driver.


Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature

Integrating MicroStrategy with Snowflake for Single Sign-On using Azure AD

Integrate MicroStrategy With Snowflake for Single Sign-On With SAML Using Azure AD

Learn how to integrate MicroStrategy with Snowflake for Single Sign-On (SSO) with SAML authentication.

1. Create an Azure AD Enterprise Application and enable single sign-on


with SAML authentication for JSP Web and Mobile

2. Integrate a MicroStrategy Library SAML environment with Azure AD

3. Create Snowflake OAuth applications and integrate with MicroStrategy

4. Create Snowflake database instances

Create an Azure AD Enterprise Application and Enable Single Sign-On with


SAML for JSP Web and Mobile

Steps 1. Create an Azure AD Enterprise Application and assign users to
your application and 2. Create single sign-on with SAML authentication for
JSP Web and Mobile depend on each other. The metadata XML files
generated in these steps are required to continue, so switch between the
two steps as needed.

1. Create an Azure AD Enterprise Application and assign users to your


application.

1. Follow the Microsoft documentation to configure SAML-based


single sign-on to non-gallery applications.


The Application Name cannot include spaces, otherwise you will


not be able to proceed after uploading the SPMetadata.xml file.

2. Add users or user groups to your enterprise application.

2. Create single sign-on with SAML authentication for JSP Web and Mobile.

l Create SAML configuration files for your application.

l Register the application with your Identity Provider (IdP).

l Establish trust to the MicroStrategy Intelligence Server.

l Link SAML users to MicroStrategy users.

1. Refer to Enabling Single Sign-On with SAML Authentication and select from the following topics:

MicroStrategy only supports JSP Web. IIS is not supported.

l Enabling SAML Authentication for MicroStrategy Library

l Enabling SAML Authentication for JSP Web and Mobile

l Integrating SAML Support with Azure AD

l Mapping SAML Users to MicroStrategy

2. Or you can generate and modify configuration files, create an Azure AD Enterprise Application, manage the SAML signing certificate, enable SAML login from Web Admin, and establish trust between the web server and Intelligence Server.

Generate and Modify Configuration Files

a. Open <web application_path>/saml/config/open in


your browser.

b. Enter an Entity ID and click Generate config. The Entity ID is


the same as the Application Name created in IdP.

The URL is as follows:

l For Web: https://<FQDN>:<port>/MicroStrategy/saml/config/open

l For Library: https://<FQDN>:<port>/MicroStrategyLibrary/saml/config/open

l For Mobile: https://<FQDN>:<port>/MicroStrategyMobile/saml/config/open

c. Modify MstrSamlConfig.xml according to the information


from IdP.

i. Locate the XML file in Azure Active Directory >


Enterprise applications > <your application> > Single
sign-on.


ii. Modify the values in userInfo. The values can be found


via the App Federation Metadata URL.


iii. Get the Admin Group ID from Azure AD. Go to Azure


Active Directory > Groups > <your admin group> >
Object Id.

iv. Check to see if the following two sections exist in [Tomcat]\webapps\MicroStrategy\WEB-INF\classes\resources\SAML\SpringSAMLConfig.xml.

<!-- Handler deciding where to redirect user after successful login -->
<bean id="successRedirectHandler"
    class="com.microstrategy.auth.saml.SAMLSuccessRedirectHandler">
    <property name="defaultTargetUrl" value="/"/>
</bean>

<!-- Loads implicit OAuth configuration XML -->
<import resource="custom/SAML2OAuth.xml"/>

If they do not exist in SpringSAMLConfig.xml, add them.

Create an Azure AD Enterprise Application

Follow the Microsoft documentation to configure SAML-based


single sign-on to non-gallery applications.

a. Edit the Basic SAML Configuration.

i. Upload the metadata file created in Generate and


Modify Configuration Files, SPMetadata.xml.


b. Configure User attributes and claims.

i. Add a new group claim or user claims.

Manage the SAML Signing Certificate

a. Download the Federation Metadata XML from Azure


Active Directory > Enterprise applications >
<your application> > Single sign-on.


b. Rename the XML file to IDPMetadata.xml.

c. Upload the XML file to the MicroStrategy/WEB-INF/classes/resources/SAML folder.

For MicroStrategy Library, upload the file to the MicroStrategyLibrary/WEB-INF/classes/auth/SAML folder.

Enable SAML Authentication for 2021 Update 1 or Later Versions

a. In the Default Properties section of the Web


Administrator page, enable SAML authentication
and click Save.


b. Locate the web.xml file in the WEB-INF folder of the MicroStrategy Web installation directory and open it in a text editor.

c. Comment out the two security-constraints as shown below to disable basic authentication for the Administrator page. Surround the constraints with <!-- and --> tags. Make sure that there are no nested comments in the text, as this may cause an error. If you decide to change to another authentication mode besides SAML in the future, you must reverse the changes done in this step.


Enable SAML Authentication for the 2021 Platform Release or Older Versions

a. To enable SAML in the Web application for 2021 or


older versions, modify the web.xml file located in
[Tomcat]/MicroStrategy/WEB-INF/web.xml.

b. Uncomment the following to enable SAML


authentication mode for the file.

<!-- ================== SAML Support ================ -->

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:resources/SAML/SpringSAMLConfig.xml</param-value>
</context-param>

<context-param>
    <param-name>contextInitializerClasses</param-name>
    <param-value>com.microstrategy.auth.saml.config.ConfigApplicationContextInitializer</param-value>
</context-param>

<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/servlet/*</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/saml/*</url-pattern>
</filter-mapping>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<!-- SAML Config GUI -->

c. Optional: Comment out or delete the security


constraints for Administrator,
TaskAdministrator, and TaskDeveloper.


Establish Trust Between the Web Server and Intelligence Server

a. Log in to MicroStrategy Web with your Azure AD account


in the admin group.

b. Connect to the Intelligence Server.

c. Select your Intelligence Server and next to Trust


relationship between Web Server and MicroStrategy
Intelligence Server, click Setup.


d. Enter the administrator account and password to


establish trust.

Integrate a MicroStrategy Library SAML Environment with Azure AD

The configuration for integrating a MicroStrategy Library SAML environment


is similar to the steps in Create an Azure AD Enterprise Application and
enable single sign-on with SAML authentication for JSP Web and Mobile.

1. Generate and modify configuration files.

1. Open https://<FQDN>:<port>/MicroStrategyLibrary/saml/config/open to generate configuration files.

2. Modify MstrSamlConfig.xml according to the information from


IdP.

The file path is [Tomcat path]\MicroStrategyLibrary\WEB-INF\classes\auth\SAML.

2. Create an Azure AD Enterprise Application for Library and Manage the


SAML signing certificate.

3. Create a trusted relationship to establish trusted communication


between the Library Web Server and Intelligence Server.


Create Snowflake OAuth Applications and Integrate with MicroStrategy

1. Configure Snowflake OAuth integration with Azure AD to create OAuth


Applications.

Refer to Configure Microsoft Azure AD for External OAuth.

1. Select the following OAuth flow described in the Pre-Requisites section: The authorization server can grant the OAuth client an access token on behalf of the user.

2. Complete the steps accordingly.

Based on the Snowflake documentation, you will create two applications, Snowflake OAuth Resource Application and Snowflake OAuth Client Application. When configuring the Client Application, add the following redirect URL: https://[MicroStrategy Web Server Hostname]/MicroStrategy/servlet/mstrWeb?evt=3172.

3. Go to the Snowflake OAuth Client Application > Authentication. Locate the Implicit grant section and select the ID tokens checkbox.

2. Configure SAML2OAuth.xml to fetch the ID tokens.

1. Open and edit the following files:

l [MicroStrategy Web Root]\WEB-INF\classes\resources\SAML\custom\SAML2OAuth.xml

l [MicroStrategyLibrary Root]\WEB-INF\classes\auth\SAML\custom\SAML2OAuth.xml

2. Uncomment the following section:

<!-- Beans to add an additional step to fetch idToken after SAML login -->
<bean id="oAuthTokenProvider"
    class="com.microstrategy.auth.saml.implicitoauth.MicrosoftAzureAD">
    <property name="authorizationEndpoint" value=""/>
    <property name="clientID" value=""/>
    <property name="redirectUri" value=""/>
    <property name="responseType" value="id_token"/>
    <property name="scope">
        <list>
            <value>openid</value>
            <value>email</value>
            <value>profile</value>
            <value>offline_access</value>
        </list>
    </property>
</bean>

3. Complete the fields for authorizationEndpoint, clientID,


and redirectUri.

You can find the required information on Azure AD where the


Snowflake OAuth Resource Application and Snowflake OAuth
Client Application were created.

l For authorizationEndpoint, go to App > Overview > Endpoints, copy the OAuth 2.0 authorization endpoint (v2) and paste it in the file.

l For clientID, go to App > Overview > Application (client) ID,


copy the ID and paste it in the file.


l For redirectUri, go to App > Authentication, copy the URL


and paste it in the file. If nothing is available in the Redirect
URIs list, manually add:

l For MicroStrategy Web: https://[MicroStrategy Web Hostname]/MicroStrategy/auth/SAMLOAuthRedirect.jsp

l For MicroStrategy Library: https://[MicroStrategy Library Hostname]/MicroStrategyLibrary/auth/SAMLOAuthRedirect.jsp

3. Restart Tomcat for the MicroStrategy Web and Library configurations to


take effect.
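The bean fields above describe an OAuth implicit-grant request for an ID token. As a simplified sketch of the resulting request (a real request includes additional parameters such as nonce; the endpoint, client ID, and redirect URI are placeholder values):

```python
from urllib.parse import urlencode

def build_id_token_request(authorization_endpoint, client_id, redirect_uri):
    """Build an implicit-grant authorization URL requesting an id_token,
    mirroring the responseType and scope values in SAML2OAuth.xml."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "id_token",
        "scope": "openid email profile offline_access",
    }
    return authorization_endpoint + "?" + urlencode(params)
```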


Create Snowflake Database Instances

You can create Snowflake database instances with or without the


project schema.

With the Project Schema

To use the project schema, you must have a basic authentication


connection:

l In MicroStrategy Developer:

1. In the Database instance name field, type in a name.

2. From the Database connection type drop-down, select


Snowflake.

3. Click New to create a new database connection.

4. In the Database connection name field, type in a name.

5. Select the DSN.

6. Create a database login and save your settings.

l In MicroStrategy Web:


Database instances created via MicroStrategy Web can be used for the project schema, but cannot be used for connection mapping.

1. In the Data Source dialog, select the Standard Connection


option.


Without the Project Schema

To use the database instance without the project schema, you must use either basic or OAuth authentication.

1. Create an OAuth authentication database connection:

l In MicroStrategy Developer:

1. Click New to create a new database connection.

2. In the Database connection name field, type in a name.

3. Select the DSN.

4. Go to the Advanced tab.

5. In the Additional connection string parameters field, enter


TOKEN=?MSTR_OAUTH_TOKEN;.

This will act as a placeholder that will be replaced by a real


token when the user uses the Snowflake database instance.

6. Click OK.

7. In the Database login, enter a name.

8. Select the Use network login id (Windows


authentication) checkbox.

l In MicroStrategy Web:


In the Data Source dialog, select the OAuth Connection option.

2. Set OAuth Parameters.

Users must have the Set OAuth parameters for Cloud App
sources privilege under Client-Web.


If you want to use the DB role in MicroStrategy Workstation, OAuth parameters must be set from Workstation. OAuth parameters in Web and Workstation are separate sets of values.

After the database instance is created, you can set the OAuth parameters in MicroStrategy Web.

1. In the Database Instance menu, select Set OAuth


Parameters.


2. In the Authentication Type drop-down, select Microsoft


Azure AD SSO.

3. Fill out the required fields.

You can find the required information on Azure AD where the


Snowflake OAuth Application was created in Integrate a
MicroStrategy Library SAML environment with Azure AD.


l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.

l For Client Secret, click on the app > Certificates &


secrets, and locate the secret. If necessary, create a new
secret.

l For Directory (tenant) ID, click on the app > Overview,


and locate the ID.

l For Scope, click on the app > API permissions, click on the API/Permission name, and locate the URL. The URL is in a format like https://[AzureDomain]/[id]/session:scope-any.

l The Callback URL is generated by default.

For Web: https://[MicroStrategy Web Hostname]/MicroStrategy/servlet/mstrWeb?evt=3172

For Workstation: http://localhost

The callback URL should be added to the Snowflake OAuth Client Application.


Create Connection Mappings (Optional)

If you have multiple MicroStrategy Users or User Groups and want to give
access to the same database instance but with different database logins,
see Controlling Access to the Database: Connection Mappings

In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.

For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:

1. Create a basic authentication database connection (default).

l In MicroStrategy Developer

a. In the Database instance name field, type in a name.

b. From the Database connection type drop-down, select


Snowflake.

c. Click New to create a new database connection.

d. In the Database connection name field, type in a name.

e. Select the DSN.


f. Create a database login and save your settings.

2. Create an OAuth authentication database connection.

l In MicroStrategy Developer

a. Click New to create a new database connection.

b. In the Database connection name field, type in a name.

c. Select the DSN.

d. Go to the Advanced tab.

e. In the Additional connection string parameters field, enter


TOKEN=?MSTR_OAUTH_TOKEN;.

This will act as a placeholder that will be replaced by a real


token when the user uses the Snowflake database instance.

f. Click OK.

g. Click New.

h. In the Database login, enter a name.

i. Select the Use network login id (Windows authentication) checkbox.

3. Create connection mappings.

a. Assign the new traditional DBRole in Project Configuration >


Database Instance > SQL Data warehouse.

A default database connection mapping is created for all users


when you select the database instance.

b. Assign different user groups with basic and OAuth database connections in Project Configuration > Database instances > Connection mapping.

l Users in group SSO_End_User_DSNless_OAuth will use the


Snowflake_SSO_DSNless_OAuth database connection.

l Users in group SSO_End_User_DSN_OAuth will use the


Snowflake_SSO_DSN_OAuth database connection.

l Users in group SSO_End_User_JDBC_OAuth will use the SSO_


End_User_JDBC_OAuth database connection.

l Other users will use the default database connection. In this


case, the Snowflake_SSO_DSNLess_Basic database
connection is used.

4. Set OAuth parameters via MicroStrategy Web.

After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.


a. In the Database Instance menu, select Set OAuth Parameters.

b. From the Authentication Type drop-down, select Microsoft Azure


AD SSO.

c. Fill out the required fields:


You can find the required information on Azure AD where the


Snowflake OAuth Application was created in Integrate a
MicroStrategy Library SAML environment with Azure AD.

l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.

l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.

l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.

l For Scope, click on the app > API permissions, click on the API/Permission name, and locate the URL. The URL is in a format like https://[AzureDomain]/[id]/session:scope-any.

l The Callback URL is generated by default.

The callback URL should be added to the Snowflake OAuth Client Application.

Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature

Integrate Snowflake OAuth Authentication Through AD FS


Learn how to integrate MicroStrategy with Snowflake for OAuth
authentication through SAML support with Active Directory Federation
Services (AD FS).

1. Configure the AD FS server application

2. Create a Snowflake database instance and configure OAuth


l With the project schema

l Without the project schema

l Enable seamless login

3. Known limitations

4. Appendix

l Configure AD FS SAML for MicroStrategy Web, Library, and Mobile

l Configure Snowflake

l SAML configuration

l OAuth configuration

To troubleshoot an expired refresh token, see KB485176.


Configure the AD FS Server Application

1. In AD FS Management Console, add an application group.

2. Enter a name and select the Server application template.

3. Click Next.

4. Copy Client Identifier as Client ID.

5. Enter a redirect URI and click Add. For example, http://localhost.

The redirect URI does not accept query parameters. In this case,
?evt=3172 is removed in the AD FS setting. However, you must enter
the full URL when adding an OAuth redirect URI, which is
automatically regenerated by MicroStrategy. Some identity providers
(IdPs), like Twitter, prevent using such a URI, but still accept the request.
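Stripping the query string from a redirect URL, as this AD FS field effectively requires, can be sketched as follows (illustrative only; the hostname is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(redirect_url):
    """Drop the query string and fragment from a redirect URL, e.g. the
    evt parameter MicroStrategy appends to its callback URL."""
    parts = urlsplit(redirect_url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```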


6. Click Next.

7. Configure the application credential. In this example, the shared secret


is used as the client secret.


8. Review your configuration and click Finish to save your application.


9. Right-click the application > Properties to add Web API.

10. Click Next.

11. Add Snowflake account identifiers.

The URL is in the format of https://<account_identifier>.snowflakecomputing.com/fed/login. For more information, see Account Identifiers - Snowflake Documentation.


12. Assign the appropriate privilege and configure the application


permission.

13. In Permitted scopes, select email, openid, profile, and user_


impersonation.

14. Add the session:role-any customized scope.

15. Review and click Finish.

16. Issue Transform Rules for Web API applications as shown in the
following screenshots.

Add an extra transform rule to transform the scope of access to
ascope. See Configure Custom Clients for External OAuth — Snowflake
Documentation for more information on the ascope rule.

Upon completing the steps in this section, you should have generated all
OAuth parameters for Snowflake connectivity:

l Client ID: Copied in step 4

l Secret: Generated in step 7

l OAuth URL: https://<ADFS Server base URL>/adfs/oauth2/authorize

l Token URL: https://<ADFS Server base URL>/adfs/oauth2/token

l Resource: https://<account_identifier>.snowflakecomputing.com/fed/login

l Scope: openid session:role-any
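As a quick sanity check, the parameters above can be assembled into the AD FS authorization request. The following Python sketch uses hypothetical values (ADFS_BASE, CLIENT_ID, and the Snowflake account name are placeholders, not values from your deployment) and only builds the URL; it does not perform the flow.

```python
from urllib.parse import urlencode

# Hypothetical stand-ins for your AD FS deployment; the parameter names
# mirror the OAuth settings listed above (Client ID, Resource, Scope).
ADFS_BASE = "https://adfs.example.com"
CLIENT_ID = "a1b2c3d4-0000-0000-0000-000000000000"
RESOURCE = "https://myaccount.snowflakecomputing.com/fed/login"
SCOPE = "openid session:role-any"
REDIRECT_URI = "http://localhost"

def build_authorize_url() -> str:
    """Assemble the AD FS authorization endpoint URL from the OAuth parameters."""
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",
        "redirect_uri": REDIRECT_URI,
        "resource": RESOURCE,
        "scope": SCOPE,
    }
    return f"{ADFS_BASE}/adfs/oauth2/authorize?{urlencode(params)}"
```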

Create Snowflake Database Instances

You can create Snowflake database instances with or without the project
schema.

See Integrate MicroStrategy With Snowflake for Single Sign-On With
SAML Using Azure AD to create a Snowflake database instance.

For a database instance created via MicroStrategy Developer, you must
specify OAuth keywords in the Additional connection string parameters
field. Enter the 'AUTHENTICATOR=oauth; TOKEN=?MSTR_OAUTH_TOKEN;'
parameter as a placeholder that is replaced by a real token when the
user uses the Snowflake database instance.

Sample connection string:

DRIVER={SnowflakeDSIIDriver ODBC};SERVER=sample.snowflakecomputing.com;DATABASE=SNOWFLAKE_SAMPLE;SCHEMA=SAMPLE_SCHEMA;WAREHOUSE=SAMPLE_WH;AUTHENTICATOR=oauth;TOKEN=?MSTR_OAUTH_TOKEN;

JDBC;DRIVER={net.snowflake.client.jdbc.SnowflakeDriver};URL={jdbc:snowflake://sample.snowflakecomputing.com/?authenticator=oauth&db=SNOWFLAKE_SAMPLE&warehouse=SAMPLE_WH&schema=public&token=?MSTR_OAUTH_TOKEN};
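The ?MSTR_OAUTH_TOKEN keyword in both samples is a placeholder that MicroStrategy swaps for the user's real access token at connection time. A minimal Python sketch of that substitution (the function name is illustrative, not a MicroStrategy API):

```python
# Sketch of the placeholder mechanism described above: MicroStrategy
# replaces ?MSTR_OAUTH_TOKEN in the connection string with the user's
# real OAuth access token when the database instance is used. The
# template below is an abbreviated form of the sample ODBC string.
TEMPLATE = (
    "DRIVER={SnowflakeDSIIDriver ODBC};"
    "SERVER=sample.snowflakecomputing.com;"
    "DATABASE=SNOWFLAKE_SAMPLE;"
    "AUTHENTICATOR=oauth;TOKEN=?MSTR_OAUTH_TOKEN;"
)

def resolve_connection_string(template: str, access_token: str) -> str:
    """Substitute the runtime access token for the placeholder."""
    return template.replace("?MSTR_OAUTH_TOKEN", access_token)
```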

With the Project Schema

To use the project schema, you must have a basic authentication
connection:

l In MicroStrategy Developer:

1. In the Database instance name field, type in a name.

2. From the Database connection type drop-down, select Snowflake.

3. Click New to create a new database connection.

4. In the Database connection name field, type in a name.

5. Select the DSN.

6. Create a database login and save your settings.

l In MicroStrategy Web:

Database instances created via MicroStrategy can be used for the project
schema, but cannot be used for connection mapping.

1. In the Data Source dialog, select the Standard Connection option.

Without the Project Schema

To use the database instance without the project schema, you must use
either basic or OAuth authentication.
1. Create an OAuth authentication database connection:

l In MicroStrategy Developer:

1. Click New to create a new database connection.

2. In the Database connection name field, type in a name.

3. Select the DSN.

4. Go to the Advanced tab.

5. In the Additional connection string parameters field, enter
TOKEN=?MSTR_OAUTH_TOKEN;.

This acts as a placeholder that is replaced by a real token when
the user uses the Snowflake database instance.

6. Click OK.

7. In the Database login, enter a name.

8. Select the Use network login id (Windows authentication)
checkbox. For JDBC connections, you must specify Login ID as
the Snowflake login name associated with the access token.
For example, domain\snowflakeuser.

l In MicroStrategy Web:

In the Data Source dialog, select the OAuth Connection option.

2. Set OAuth Parameters.

Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.

If you want to use the DB role in MicroStrategy Workstation, OAuth
parameters must be set from Workstation. The OAuth parameters in Web
and Workstation are separate sets of values.

After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.

1. In the Database Instance menu, select Set OAuth Parameters.
2. In the Authentication Type drop-down, select AD FS.

3. Fill out the required fields.

The required information can be referenced when creating the AD FS
server application in the previous steps.

l For Client ID, click Application Groups > Overview > Properties >
Server application > Edit, and locate the ID.

l For Client Secret, use the secret generated the first time you
created the server application.

l For OAuth URL, use https://<ADFS Server base URL>/adfs/oauth2/authorize.

l For Token URL, use https://<ADFS Server base URL>/adfs/oauth2/token.

l For Resource, use https://<account_identifier>.snowflakecomputing.com/fed/login.
For more information, see Account Identifiers.

l For Scope, use openid session:role-any.

l The Callback URL is generated by default.

For Web: https://[MicroStrategy Web Hostname]/MicroStrategy/servlet/mstrWeb?evt=3172

For Workstation: http://localhost

The callback URL must be added to the Snowflake server client
application.

Create Seamless Login

See Enable Seamless Login Between Web, Library, and Workstation for
more information.

Limitations

l An end-to-end single sign-on (SSO) workflow is not supported.

l To use the database role in MicroStrategy Workstation, OAuth parameters
must be set from Workstation. OAuth parameters in MicroStrategy Web
and Workstation are separate sets of values.

l You must re-authenticate if the access token has expired. The default
expiration time is 8 hours and can be configured on the AD FS server.
Appendix

AD FS Configuration for SAML Authentication

See Integrating SAML Support with AD FS to integrate AD FS with
MicroStrategy.

Snowflake Configuration

SAML Configuration

The following is a sample workflow for configuring SAML for Snowflake.

1. Log in using the Snowflake Web Console.

2. Run the following query to create a Snowflake user.

create user "[email protected]" password = 'PASSWORD' DEFAULT_ROLE = PUBLIC;
grant role SYSADMIN to user "[email protected]";
grant role PUBLIC to user "[email protected]";
GRANT MODIFY , MONITOR , USAGE , OPERATE ON WAREHOUSE "DEMO_WH" to role PUBLIC;

3. Specify the IdP information for Snowflake.

use role accountadmin;

alter account set saml_identity_provider = '{
"certificate": "<Certificate Content>",
"issuer": "http://<ADFS Server base URL>/adfs/services/trust",
"ssoUrl": "https://<ADFS Server base URL>/adfs/ls",
"type" : "ADFS",
"label" : "ADFSSingleSignOn"
}';

See Configuring an Identity Provider (IdP) for Snowflake — Snowflake
Documentation to obtain the certificate and replace <Certificate
Content> with your certificate.

4. Enable Snowflake-initiated SSO.

use role accountadmin;

alter account set sso_login_page = true;

OAuth Configuration
See Configure Custom Clients for External OAuth - Snowflake
Documentation to configure OAuth for Snowflake.

1. Create an OAuth authorization server in Snowflake.

The external_oauth_rsa_public_key value is retrieved from the
certificate obtained in the previous step.

create or replace security integration adfs_oauth_mstr
type = external_oauth
enabled = true
external_oauth_type = custom
external_oauth_any_role_mode = 'ENABLE'
external_oauth_issuer = 'https://<ADFS Server base URL>/adfs/services/trust'
external_oauth_rsa_public_key = '<public key retrieved from certificate>'
external_oauth_audience_list=('https://<account_identifier>.snowflakecomputing.com/fed/login')
external_oauth_scope_mapping_attribute = 'ascope'
external_oauth_token_user_mapping_claim='sub'
external_oauth_snowflake_user_mapping_attribute='login_name';

2. Modify your external OAuth security integration.

alter security integration adfs_oauth_mstr set
external_oauth_scope_mapping_attribute = 'scope';

Mapping SAML Users to MicroStrategy


MicroStrategy Intelligence server uses the SAML assertion attributes
configured in the IdP for authentication. This information is passed from
the SAML response to map the logged-in user to MicroStrategy users and
groups stored in the metadata.

User Mapping

The following pieces of information sent over in the SAML response can be
used to map to a MicroStrategy user:

l Name ID: MicroStrategy looks for a match between the Name ID and User
ID in the Trusted Authenticated Request setting.

This field can be set in Developer by opening User Editor >
Authentication > Metadata. You can also set this field in Web
Administrator by opening Intelligence Server Administration Portal >
User Manager. The Trusted Authentication Login field is found in the
Authentication tab when editing a user.

l DistinguishedName: MicroStrategy looks for a match in the user's
Distinguished name of LDAP Authentication setting.

This setting can be found in Developer by opening User Editor >
Authentication > Metadata.

MicroStrategy checks for matches in the exact order they are presented.

When a match is found in the metadata, MicroStrategy logs the user in as
the corresponding MicroStrategy user, with all of the correct permissions
and privileges granted.

If no match is found, the SAML user does not yet exist in MicroStrategy
and is denied access. To have SAML users imported into MicroStrategy
when no match is found, see Importing and Syncing SAML Users.
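The matching order described above can be sketched as follows. This is an illustrative model, not the Intelligence server implementation; the dictionary keys are hypothetical stand-ins for the metadata fields.

```python
# Hedged sketch of the matching order: Name ID against the Trusted
# Authentication Login first, then DistinguishedName against the LDAP
# distinguished name. Field names here are illustrative only.
def match_saml_user(name_id, distinguished_name, mstr_users):
    """Return the first MicroStrategy user matching the SAML response, else None."""
    for user in mstr_users:
        if user.get("trusted_auth_login") == name_id:
            return user
    for user in mstr_users:
        if distinguished_name and user.get("ldap_dn") == distinguished_name:
            return user
    return None  # no match: access is denied unless import-at-logon is enabled
```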

Group Mapping

The way MicroStrategy maps user groups is determined by the entries made
in the Group Attribute and Group Format fields when the SAML
configuration files were generated for your application. Groups are mapped
between an identity provider and MicroStrategy in one of two ways:

l Simple group names: Group Attribute must contain a list of MicroStrategy
user groups and Group Format must be set to Simple in the MicroStrategy
SAML configuration. The Group Attribute values are used to map to the
MicroStrategy group's Full name.

This setting can be found in Developer by opening Group Editor > Group
Definition > General.

l DistinguishedNames: If MicroStrategy is configured for LDAP integration,
DistinguishedNames can be used for group mapping. Group Attribute must
contain a list of LDAP DistinguishedNames and Group Format must be
set to DistinguishedName in the MicroStrategy SAML configuration.

This setting can be found in Developer by opening Group Editor >
Authentication > Metadata.
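The two mapping modes can be sketched as a single lookup keyed by Group Format. Again, this is an illustrative model; the field names are hypothetical, not the metadata schema.

```python
# Sketch of the two group-mapping modes above. With Group Format = Simple
# the attribute values are compared to MicroStrategy group full names;
# with DistinguishedName they are compared to the groups' LDAP DNs.
def map_groups(group_attribute_values, group_format, mstr_groups):
    """Return the MicroStrategy groups matched by the SAML Group Attribute."""
    key = "full_name" if group_format == "Simple" else "ldap_dn"
    wanted = set(group_attribute_values)
    return [g for g in mstr_groups if g.get(key) in wanted]
```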

Importing and Syncing SAML Users

New users and their associated groups can be dynamically imported into
MicroStrategy during application log in. You can also configure Intelligence
server to sync user information for existing MicroStrategy users each time
they log in to an application. The following settings are accessed from the
Intelligence Server Configuration > Web Single Sign-on > Configuration
window in Developer.

l Allow user to log on if Web Single Sign-on - MicroStrategy user link
not found: Controls access to an application when a MicroStrategy user is
not found when checking a SAML response. If unchecked, MicroStrategy
denies access to the user. If checked, the user obtains the privileges and
access rights of a 3rd Party user and the Everyone group.

Import user and Sync user are not available unless this setting is
turned on.

l Import user at logon: Allows MicroStrategy to import a user into the
metadata if no matching user is found. The imported user populates all the
fields that are used to check user mapping with the corresponding SAML
attribute information.

All users imported this way are placed in the "3rd party users" group in
MicroStrategy, and are not physically added to any MicroStrategy
groups that match their group membership information.

After the configuration is complete, the imported user sees a
privilege-related error when trying to access the project. To resolve this
issue, a MicroStrategy administrator must add the project access privilege
for the imported user in the 3rd Party Users group.

l Synch user at logon: Allows MicroStrategy to update the fields used for
mapping users with the current information provided by the SAML
response.

This option also updates all of a user's group information and imports
groups into "3rd party users" if matching groups are not found. This may
result in unwanted extra groups being created and stored in the metadata.

This page applies to MicroStrategy 2021 Update 6 and newer versions.

Enabling Single Logout with SAML Authentication


Starting in MicroStrategy 2021 Update 6, you can enable single logout with
SAML authentication. See the following sections for more information.

This page applies to MicroStrategy 2021 Update 6 and newer versions.

Enabling SAML Single Logout for Azure


Starting in MicroStrategy 2021 Update 6, you can enable SAML single logout
for Azure for the following products:

l MicroStrategy Web/Mobile Server

l Library Web

See the following sections for more information.

MicroStrategy Web/ Mobile Server

Before following this procedure, SAML should already be configured. See
Single Sign-On with SAML Authentication for JSP Web and Mobile and
Integrating SAML Support with Azure AD for more information.

1. Generate a SAML config file that enables global logout mode.

a. Open the SAML config page:

MicroStrategy Web:
https://<FQDN>:<port>/MicroStrategy/saml/config/open

Mobile Server:
https://<FQDN>:<port>/MicroStrategyMobile/saml/config/open

b. Choose a Logout mode of Global.

c. Enter any other necessary information and click Generate config.

2. Enable the application to initiate single logout using the Azure console.

a. Open the Azure console and in the Single sign-on tab, edit the
Basic SAML Configuration.

b. Add the appropriate Logout Url.

MicroStrategy Web:
https://<FQDN>:<port>/MicroStrategy/saml/SingleLogout

Mobile Server:
https://<FQDN>:<port>/MicroStrategyMobile/saml/SingleLogout

c. Save your changes, re-upload the new IDPMetadata.xml, and
restart the Web server.

Library Web

1. Enable the application to initiate single logout in the Azure console.

a. Enter the necessary information in the Azure console as shown in
Integrating SAML Support with Azure AD.

b. Add the Logout Url:

https://<FQDN>:<port>/MicroStrategyLibrary/saml/SingleLogout

c. Save the configuration.

2. Open Workstation and add the new environment connection to the
Library server.

3. Right-click the environment and choose Configure Enterprise
Security > Configure SAML.

4. On the Configure SAML dialog:

a. Enter the Entity ID.

b. Expand Advanced and General.

c. Enter the Entity Base Url:
https://<FQDN>:<port>/MicroStrategyLibrary

d. Set the Logout Mode to Global.

e. Click Generate Library SPMetadata.

f. Upload IDPMetadata.xml. The file is downloaded from the Azure
console.

g. Click Complete Configuration.

5. Restart the Library server.

This page applies to MicroStrategy 2021 Update 6 and newer versions.

Enabling SAML Single Logout for Okta


Starting in MicroStrategy 2021 Update 6, you can enable SAML single logout
for Okta for the following products:

l MicroStrategy Web

l Library Web

See the steps below for more information.

1. Generate SAML config files with global logout mode enabled.

a. Open the SAML config page:


https://FQDN:port/MicroStrategy/saml/config/open.

b. Choose a Logout mode of Global.

c. Enter any other necessary information and click Generate config.

2. Enable the application to initiate single logout using the Okta console.

a. Open the Okta console.

b. In SAML Settings, click Show Advanced Settings.

l Select Allow application to initiate Single Logout.

l Enter the Single Logout URL:
https://FQDN:port/MicroStrategy/saml/SingleLogout

l In SP Issuer, enter the entity ID if desired.

l Upload the signature certification.

l Create a file named signature.crt. Make sure this file
starts with -----BEGIN CERTIFICATE----- and ends with
-----END CERTIFICATE-----.

l Copy the ds:X509Certificate value from SPMetadata.xml as
shown below.

-----BEGIN CERTIFICATE-----
MIIDoDCCAgigAwIBAgIEFJ1sZDANBgkqhkiG9w0BAQwFADASMRAwDgYDVQQDDAdz
aWduS2V5MB4Xn707jRnJRiDr8qNverYFLJwjNZo=
-----END CERTIFICATE-----

l In Signature Certificate, upload signature.crt.
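The certificate preparation above can be scripted. The following Python sketch (an assumption, not part of the Okta or MicroStrategy tooling) extracts the ds:X509Certificate value from SPMetadata.xml and wraps it in the required PEM header and footer:

```python
import re

def certificate_to_pem(spmetadata_xml: str) -> str:
    """Wrap the ds:X509Certificate value from SPMetadata.xml in PEM markers.

    A regex keeps the sketch independent of XML namespace handling; a real
    deployment could use a proper XML parser instead.
    """
    match = re.search(
        r"<ds:X509Certificate>\s*(.*?)\s*</ds:X509Certificate>",
        spmetadata_xml,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("no ds:X509Certificate element found")
    body = match.group(1).strip()
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"
```

Writing the returned string to signature.crt produces the file Okta expects in the Signature Certificate upload.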

3. Download IDPMetadata.xml.

a. In the Okta console, choose SAML Application - Sign on.

b. Click View SAML setup instructions.

c. Copy the content in Optional - Provide the following... into
IDPMetadata.xml under the SAML folder.

4. Restart the Web server.

Enable Integrated Authentication


Integrated authentication enables a Windows user to log in once to their
Windows machine. The user does not need to log in again separately to
Developer or MicroStrategy Web. This type of authentication uses Kerberos
delegation to validate a user's credentials. Kerberos delegation occurs when
a service needs to provide the Kerberos user's credentials to access another
service. For example, in MicroStrategy when doing integrated authentication
in Web, the web server needs to "delegate" the user's credentials to
Intelligence server so that the user can log in seamlessly. In addition to
authenticating users to Developer and MicroStrategy Web, integrated
authentication also passes user credentials down to the database server.
This allows each user's credentials to be used to return data from the
database.

MicroStrategy also supports an Active Directory configuration that makes
use of Kerberos Constrained Delegation to improve the overall security
associated with service communications. Kerberos Constrained Delegation
is a newer way to delegate a Kerberos user's credentials with improved
security. Implementing Kerberos Constrained Delegation involves specifying
the services that are allowed in terms of Intelligence server Kerberos
delegation, in essence creating a "white list" of allowed services.

For single sign-on with integrated authentication to work, users must have
user names and passwords that are printable US-ASCII characters. This
limitation is expected behavior in Kerberos and is important to keep in
mind when creating a multilingual environment in MicroStrategy.

Active Directory Account Configuration


To configure your Active Directory account, you need to set up a service
account to associate with Intelligence server, as well as create a Service
Principal Name (SPN) and enable delegation for your Intelligence server.

Service Account Setup

For the Active Directory user account that you will associate with the SPN:

1. Go to User Properties > Account.

2. In the Account options section, clear the check box next to Account
is sensitive and cannot be delegated.

The Do not require Kerberos preauthentication option is unchecked by
default and should be kept that way for MicroStrategy service accounts
used for Kerberos Constrained Delegation.

Create the Intelligence Server Service Principal Name (SPN)

Once the user has been created, a Service Principal Name for the
Intelligence server must be attached to the user using the setspn
command.

1. Execute the setspn.exe -L <your_service_account> command
to ensure no other SPN is associated with your service account.

C:\Windows\system32>setspn.exe -L mstrsvr_acct
Registered ServicePrincipalNames for CN=MicroStrategy Server
Account,CN=Users,DC=vmnet-esx-mstr,DC=net:

2. Add the SPN using the setspn.exe -A <your_service_account>
command.

MicroStrategy software expects that the service name will be
MSTRSVRSvc, and that the Intelligence server port number will be
added to the end of the hostname. The SPN should be formatted as
MSTRSVRSvc/<hostname>:<port>@<realm>. The realm does not
need to be specified in the setspn command; it automatically uses
the default realm of the Active Directory machine.

C:\Windows\system32>setspn -A MSTRSVRSvc/exampleserver.example.com:34952 your_service_account
Registering ServicePrincipalNames for CN=your_service_account,CN=Users,DC=example,DC=com
MSTRSVRSvc/exampleserver.example.com:34952
Updated object

If you encounter any errors, contact your Active Directory
administrator before continuing.

Enabling Unconstrained Delegation for the Intelligence Server Service

If single sign-on authentication to a warehouse database is required, an
additional configuration step must be performed on the Active Directory
machine. Kerberos delegation is required for the Intelligence server to
authenticate the end user to the database server.

1. After creating the SPN, open the associated service user account.

2. On the Delegation tab select Trust this user for delegation to any
service (Kerberos only).

3. Click Apply, then OK.

Enabling Constrained Delegation for the Intelligence Server Service

1. After creating the SPN, open the associated service user account.

2. On the Delegation tab select Trust this user for delegation to
specified services only.

3. Click Add.

4. Provide the service account for the destination services then select a
registered service from the list.

5. Repeat steps 3 and 4 until each service requiring delegated access
has been added.

ASP versions of servers hosted on IIS use extra protocols to
make Kerberos Constrained Delegation work, and the Use any
authentication protocol option needs to be enabled for their service
accounts.

6. Click Apply, then OK.

Enabling Constrained Delegation for Intelligence Server to a Data Source

For Intelligence server to delegate to a data source:

l Select the Use any authentication protocol option.

l Add the Intelligence server to the list of services that accept delegated
credentials.

l Add the data source services to the list of services that accept delegated
credentials.

If the data source is an MDX provider, instead of allowing delegation to
database services:

l Add the MDX provider service.

l On the service account of the MDX provider, allow delegation to the
database services.

Intelligence Server Configuration for Integrated Authentication

Configuring Intelligence Server on Windows

Users with Intelligence server deployed on a Windows platform do not
need to perform any additional configuration. Authentication is passed
between libraries, so a Kerberos configuration file and keytab are not
needed. If Intelligence server is running on a domain account, the account
needs to be an administrator or be enabled to act as part of the operating
system.

Continue to Developer Configuration for Integrated Authentication to
complete setup.

Configuring Intelligence Server on Linux for Integrated Authentication

The configurations listed below are required to configure Intelligence server
with your Windows domain controller and Kerberos security.

Kerberos only supports US-ASCII characters. Do not use any special
characters when installing or configuring Kerberos.

Before you begin, ensure you have performed the steps described in
Active Directory Account Configuration.

Install Kerberos 5

You must have Kerberos 5 installed on your Linux machine that hosts
Intelligence server. Your Linux operating system may come with Kerberos 5
installed. If Kerberos 5 is not installed on your Linux machine, refer to the
Kerberos documentation for steps to install it.

Ensure that the Environment Variables are Set

Once you have installed Kerberos 5, you must ensure that the following
environment variables have been created:

The variables must be set when the Intelligence server starts in order to
take effect.

l ${KRB5_HOME}: Location of all Kerberos configuration files.
Default: /etc/krb5. Optional.

l ${KRB5_CONFIG}: Location of the default Kerberos configuration file.
Default: /etc/krb5/krb5.conf. Required.

l ${KRB5CCNAME}: Location of the Kerberos credential cache.
Default: /etc/krb5/krb5_ccache. Optional.

l ${KRB5_KTNAME}: Location of the Kerberos keytab file.
Default: /etc/krb5/krb5.keytab. Required.

For Kerberos Constrained Delegation: the ${KRB5_CLIENT_KTNAME}
environment variable must be set to point to the keytab file used by
Intelligence server.
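Before starting Intelligence server, you can verify the variables with a small preflight check such as the sketch below (a hypothetical helper, not a MicroStrategy utility):

```python
import os

# Required variables must be present; optional ones fall back to the
# documented defaults listed above.
REQUIRED = ("KRB5_CONFIG", "KRB5_KTNAME")
OPTIONAL_DEFAULTS = {"KRB5_HOME": "/etc/krb5", "KRB5CCNAME": "/etc/krb5/krb5_ccache"}

def kerberos_env(environ=os.environ):
    """Return the effective Kerberos settings, or raise if a required variable is unset."""
    missing = [v for v in REQUIRED if not environ.get(v)]
    if missing:
        raise RuntimeError(f"required Kerberos variables unset: {missing}")
    settings = {v: environ[v] for v in REQUIRED}
    for var, default in OPTIONAL_DEFAULTS.items():
        settings[var] = environ.get(var, default)
    return settings
```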

Configure the krb5.keytab File for the Intelligence Server

You must create and configure the krb5.keytab file. The steps to
configure this file on your Linux machine are provided in the procedure
below.

The procedure below requires a few variables to be entered for various
commands. This includes information you can gather before you begin the
procedure. The required variables in the following procedure are described
below:

l ISMachineName: The name of the Intelligence server machine.

l ISPort: The port number for Intelligence server.

l KeyVersionNumber: The key version number, retrieved as part of this
procedure.

l EncryptionType: The encryption type used.

We recommend that you use rc4-hmac as the encryption type. Other
encryption types may cause compatibility issues with the Windows Active
Directory.

l DOMAIN_REALM: The domain realm for your Intelligence server, which must
be entered in uppercase.

To Create a krb5.keytab File

1. Log in to your Linux machine.

2. Retrieve the key version number for your Intelligence server service
principal name, using the following command:

kvno MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM

The key version number is displayed on the command line.

3. In the command line, type the following commands:

ktutil
addent -password -p MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM -k
KeyVersionNumber -e EncryptionType
wkt /etc/krb5/krb5.keytab
exit

4. To verify the keytab file, type the following command:

kinit -k -t /etc/krb5/krb5.keytab MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM

The command should run without prompting you for a username and
password.

Configure the krb5.conf File for the Intelligence Server

You must create and configure a file named krb5.conf. This file is stored
in the /etc/krb5/ directory by default.

If you create a krb5.conf file in a directory other than the default, you
must update the KRB5_CONFIG environment variable with the new location.
Refer to your Kerberos documentation for steps to modify the KRB5_
CONFIG environment variable.

The contents of the krb5.conf should be as shown below:

[libdefaults]
default_realm = DOMAIN_REALM
default_keytab_name = FILE:/etc/krb5/krb5.keytab
forwardable = true
no_addresses = true

[realms]
DOMAIN_REALM = {
kdc = DC_Address:88
admin_server = DC_Admin_Address:749
}

[domain_realm]
.domain.com = DOMAIN_REALM
domain.com = DOMAIN_REALM
.subdomain.domain.com = DOMAIN_REALM
subdomain.domain.com = DOMAIN_REALM

The variables in the syntax above are described below:

l DOMAIN_REALM: The domain realm used for authentication purposes. A
domain realm is commonly of the form EXAMPLE.COM, and must be
entered in uppercase.

l domain.com and subdomain.domain.com: Use these for all domains and
subdomains whose users must be authenticated using the default
Kerberos realm.

l DC_Address: The host name or IP address of the Windows machine that
hosts your Active Directory domain controller. This can be the same
address as DC_Admin_Address.

l DC_Admin_Address: The host name or IP address of the Windows
machine that hosts your Active Directory domain controller administration
server. This can be the same address as DC_Address.
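To see how the variables fit together, the sketch below fills in the krb5.conf template from those values (a hypothetical helper; the subdomain entries are omitted for brevity):

```python
# Renders the krb5.conf template above from the documented variables.
# DOMAIN_REALM must be uppercase, so the function enforces that.
def render_krb5_conf(realm: str, domain: str, dc_address: str, dc_admin_address: str) -> str:
    realm = realm.upper()
    return (
        "[libdefaults]\n"
        f"default_realm = {realm}\n"
        "default_keytab_name = FILE:/etc/krb5/krb5.keytab\n"
        "forwardable = true\n"
        "no_addresses = true\n\n"
        "[realms]\n"
        f"{realm} = {{\n"
        f"kdc = {dc_address}:88\n"
        f"admin_server = {dc_admin_address}:749\n"
        "}\n\n"
        "[domain_realm]\n"
        f".{domain} = {realm}\n"
        f"{domain} = {realm}\n"
    )
```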

Developer Configuration for Integrated Authentication


To enable integrated authentication in a Windows MicroStrategy
environment, you need to configure your MicroStrategy users and the
project sources.

Configure the Project Source

1. In Developer, right-click your project source.

2. Click Modify Project Source.

3. On the Connection tab, under Server Name, type the server name
exactly as it appears in the Service Principal Name created in Active
Directory Account Configuration, with the format
MSTRSVRSvc/<hostname>:<port>@<realm>.

4. On the Advanced tab, select Use Integrated Authentication.

Mapping Users to Active Directory

1. In the project source, open Administration > User Manager.

2. Right-click a user and select Edit > Authentication > Metadata.

3. Enter the Active Directory user login under Trusted Authentication
Request User ID.

4. Click OK.

Linking Integrated Authentication Users to LDAP Users

When users log in to MicroStrategy using their integrated authentication
credentials, their LDAP group memberships can be imported and
synchronized.

By default, users' integrated authentication information is stored in the
userPrincipalName LDAP attribute. If your system stores integrated
authentication information in a different LDAP attribute, you can specify
the attribute when you configure the import.

To Import LDAP User and Group Information for Integrated
Authentication Users

1. In Developer, log in to a project source. You must log in as a user with
administrative privileges.

2. From the Administration menu, select Server, and then select
Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand Import, and then select Options.

4. Select the Synchronize user/group information with LDAP during Windows authentication and import Windows link during Batch Import check box.

5. Select the Batch import Integrated Authentication/Trusted Authentication unique ID check box.

6. By default, users' integrated authentication IDs are stored in the userPrincipalName LDAP attribute. If your system stores integrated authentication information in a different LDAP attribute, click Other, and type the LDAP attribute that contains users' IDs.

7. Click OK.

Configure MicroStrategy Application Servers for Integrated Authentication

Configuration of your MicroStrategy application servers is similar to the process for allowing Intelligence Server to use integrated authentication. You must create a user and an associated Service Principal Name (SPN) in Active Directory for each application server service, and then perform platform-specific configuration steps on each server. See the appropriate section for your application server deployments:

l Enabling Integrated Authentication for J2EE-Compliant Application Servers

l Enable Integrated Authentication for IIS

Enabling Integrated Authentication for J2EE-Com pliant Application Servers

If you use a J2EE-compliant application server to deploy MicroStrategy Web, MicroStrategy Library, MicroStrategy Mobile Server, or MicroStrategy Web Services to support MicroStrategy Office, you can support integrated authentication. If you are configuring integrated authentication on your MicroStrategy Library server, you do not need to perform the steps regarding generation and configuration of jaas.conf files.

Create a Service Principal Name for Your Application Server

You must create a Service Principal Name (SPN) for your J2EE application
server, and map it to the domain user that the application server runs as.
The SPN identifies your application server as a service that uses Kerberos.
For instructions on creating an SPN, see Active Directory Account
Configuration.

The SPN should be in the following format:



HTTP/ASMachineName

The format is described below:

l HTTP: This is the service class for the application server.

l ASMachineName: This is the fully qualified host name of the server where
the application server is running. It is of the form machine-
name.example.com. Integrated authentication will only function when
accessing the application server using the ASMachineName used to
register the SPN. If the fully qualified host name was registered as SPN,
then using the machine name or IP address will not work. Should the
application server be accessible through FQDN and machine name,
additional SPNs will need to be registered to the AD service account.

In your Active Directory, configure the application server's domain user to be trusted for delegation, and map the user to this SPN. For example, if you register the SPN to the Active Directory user j2ee-http, enable the Account is trusted for delegation option for the user. Also, enable the Trust this computer for delegation to any service (Kerberos only) option for the machine where your application server is hosted.

Configure the krb5.keytab File for the Application Server

You must create and configure a krb5.keytab file for the application server. In UNIX, use the ktutil utility to create this file. In Windows, use the ktpass utility to create the keytab file.

The procedure below requires a few variables to be entered for various commands. This includes information you can gather before you begin the procedure. The required variables in the following procedure are described below:

ASMachineName: The name of the machine that the application server is installed on.


KeyVersionNumber: The key version number, retrieved as part of this procedure.

DOMAIN_REALM: The domain realm for the application server. It is of the form
EXAMPLE.COM, and must be entered in uppercase.

EncryptionType: The encryption type used.

It is recommended that you use rc4-hmac as the encryption type. Other encryption types may cause compatibility issues with the Windows Active Directory.

Keytab_Path: For J2EE application servers under Windows, this specifies the
location of the krb5.keytab file. It is of the form
C:\temp\example.keytab.

ASUser and ASUserPassword: The user account for which the SPN was
registered, for example j2ee-http and its password.

To create a krb5.keytab file in Linux

If your application server and Intelligence server are hosted on the same
machine, it is required that you use separate keytab and configuration files
for each. For example, if you are using krb5.keytab and krb5.conf for
the Intelligence server, use krb5-http.keytab and krb5-http.conf for
the application server.

1. Log in to your Linux machine.

2. Retrieve the key version number for your application server service
principal name, using the commands shown below:

kinit ASUser
kvno ASUser

The variables are described in the prerequisites above.

The key version number is displayed on the command line.


3. In the command line, type the following commands:

If your application server is installed on the same machine as the Intelligence server, replace krb5.keytab below with a different file name than the one used for the Intelligence server, such as krb5-http.keytab.

ktutil
addent -password -p ASUser@DOMAIN_REALM -k KeyVersionNumber -e EncryptionType
wkt /etc/krb5/krb5.keytab
exit

4. To verify the keytab file, type the following command:

kinit -k -t /etc/krb5/krb5.keytab ASUser@DOMAIN_REALM

The command should run without prompting you for a password.

To create a krb5.keytab file in Windows

1. Log in to your Windows machine.

2. From a command prompt, type the following command:

ktpass ^
-out Keytab_Path ^
-princ ASUser@DOMAIN_REALM ^
-pass ASUserPassword ^
-crypto RC4-HMAC-NT ^
-pType KRB5_NT_PRINCIPAL

Configure the krb5.conf File for the Application Server

You must create and configure a file named krb5.conf.

For Linux only: If your application server and Intelligence server are hosted on the same machine, it is required that you use a separate configuration file. For example, if you created krb5.conf for the Intelligence server, use krb5-http.conf for the application server.

If you have created a different keytab file in Enabling Integrated Authentication for J2EE-Compliant Application Servers, page 392, replace krb5.keytab below with your own keytab file.

The contents of the krb5.conf should be as shown below:

[libdefaults]
default_realm = DOMAIN_REALM
default_keytab_name = Keytab_Path
forwardable = true
no_addresses = true

[realms]
DOMAIN_REALM = {
kdc = DC_Address:88
admin_server = DC_Admin_Address:749
}

[domain_realm]
.domain.com = DOMAIN_REALM
domain.com = DOMAIN_REALM
.subdomain.domain.com = DOMAIN_REALM
subdomain.domain.com = DOMAIN_REALM

The variables in the syntax above are described below:

l DOMAIN_REALM: The domain realm used for authentication purposes. A domain realm is commonly of the form EXAMPLE.COM, and must be entered in uppercase.

l Keytab_Path: The location of your krb5.keytab file. In Linux, it is of the form /etc/krb5/krb5.keytab. In Windows, it is of the form C:\temp\krb5.keytab.

l domain.com and subdomain.domain.com: Use these for all domains and subdomains where users must be authenticated using the default Kerberos realm.


l DC_Address: The host name or IP address of the Windows machine that hosts your Active Directory domain controller. This can be the same address as DC_Admin_Address.

l DC_Admin_Address: The host name or IP address of the Windows machine that hosts your Active Directory domain controller administration server. This can be the same address as DC_Address.

Configure the jaas.conf File for the Application Server

You must configure the Java Authentication and Authorization Service (JAAS) configuration file for your application server.

This step is not required for MicroStrategy Library server.

Depending on the version of the Java Development Kit (JDK) used by your application server, the format of the jaas.conf file varies slightly. Refer to your JDK documentation for the appropriate format. Sample jaas.conf files for the Sun and IBM JDKs follow. The following variables are entered in the .accept section of the jaas.conf file:

l ASMachineName: The name of the machine that the application server is installed on.

l DOMAIN_REALM: The domain realm used for authentication purposes. It is of the form EXAMPLE.COM, and must be entered in uppercase.

Sample jaas.conf for Sun JDK 1.8 and above

com.sun.security.jgss.krb5.accept {
com.sun.security.auth.module.Krb5LoginModule required
principal="ASUser@DOMAIN_REALM"
useKeyTab=true
doNotPrompt=true
storeKey=true
debug=true;
};

Sample jaas.conf for IBM JDK

com.ibm.security.jgss.initiate {

com.ibm.security.auth.module.Krb5LoginModule required
useDefaultKeytab=true
principal="ASUser@DOMAIN_REALM"
credsType=both
debug=true
storeKey=true;
};

Save the jaas.conf file to the same location as your krb5.conf file.

Configure the JVM Startup Parameters

This step is not required for MicroStrategy Library server.

For your J2EE-compliant application server, you must set the appropriate
JVM startup parameters. The variables used are described below:

l JAAS_Path: The path to the jaas.conf file. In Linux, it is of the form /etc/krb5/jaas.conf. In Windows, it is of the form C:\temp\jaas.conf.

l KRB5_Path: The path to the krb5.conf file. In Linux, it is of the form /etc/krb5/krb5.conf. In Windows, it is of the form C:\temp\krb5.conf.

You must set the JVM startup parameters listed below:

-Djava.security.auth.login.config=JAAS_Path
-Djava.security.krb5.conf=KRB5_Path
-Djavax.security.auth.useSubjectCredsOnly=false
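For example, on an Apache Tomcat deployment these parameters are commonly supplied through a setenv.sh file in the Tomcat bin directory. The following is a hypothetical sketch; the /etc/krb5 paths are assumptions to replace with your own JAAS_Path and KRB5_Path.

```shell
# Hypothetical CATALINA_BASE/bin/setenv.sh fragment for a Tomcat-based
# deployment; adjust the paths to your own jaas.conf and krb5.conf.
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.auth.login.config=/etc/krb5/jaas.conf"
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.krb5.conf=/etc/krb5/krb5.conf"
CATALINA_OPTS="$CATALINA_OPTS -Djavax.security.auth.useSubjectCredsOnly=false"
export CATALINA_OPTS
```

Other application servers expose an equivalent place for JVM arguments; consult your server's documentation for where startup options belong.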

Enable the SPNEGO Mechanism

This step is not required for MicroStrategy Library server.

As part of a MicroStrategy Web or Mobile Server JSP deployment, you must modify the web.xml file for MicroStrategy Web or Mobile to enable the Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO). This is accomplished by removing the comments around the following information in the web.xml file:

For MicroStrategy Web:

<filter>
<display-name>SpnegoFilter</display-name>
<filter-name>SpnegoFilter</filter-name>
<filter-class>com.microstrategy.web.filter.SpnegoFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>SpnegoFilter</filter-name>
<servlet-name>mstrWeb</servlet-name>
</filter-mapping>

For MicroStrategy Mobile Server:

<filter>
<display-name>SpnegoFilter</display-name>
<filter-name>SpnegoFilter</filter-name>
<filter-class>com.microstrategy.mobile.filter.SpnegoFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>SpnegoFilter</filter-name>
<servlet-name>mstrMobileAdmin</servlet-name>
</filter-mapping>
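After editing, you can quickly confirm the filter is no longer commented out, since an XML parser ignores comments. The snippet below is a sketch against a trimmed stand-in document; a real MicroStrategy web.xml may declare an XML namespace, in which case the tag lookups need the namespace-qualified names.

```python
# Hypothetical check that an active (uncommented) SpnegoFilter
# declaration exists in web.xml. The sample document is a trimmed
# stand-in, not the full shipped file.
import xml.etree.ElementTree as ET

sample_web_xml = """<web-app>
  <filter>
    <display-name>SpnegoFilter</display-name>
    <filter-name>SpnegoFilter</filter-name>
    <filter-class>com.microstrategy.web.filter.SpnegoFilter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>SpnegoFilter</filter-name>
    <servlet-name>mstrWeb</servlet-name>
  </filter-mapping>
</web-app>"""

def spnego_enabled(web_xml_text: str) -> bool:
    """True if an uncommented SpnegoFilter <filter> element is present."""
    root = ET.fromstring(web_xml_text)
    names = [f.findtext("filter-name") for f in root.iter("filter")]
    return "SpnegoFilter" in names

print(spnego_enabled(sample_web_xml))
```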

Restart your application server for all of the above settings to take effect.

How to Enable Integrated Authentication for the Library Server

To use the auth.kerberos.useJvmParams=true parameter, you must have created a Service Principal Name and configured the krb5.keytab file, krb5.conf file, jaas.conf file, and JVM startup parameters.

If only Library is enabled for Kerberos, set the parameters as described in Integrated Authentication Login for MicroStrategy Library Applications or KB439598.

1. Launch the Library Admin page by entering the following URL in your web browser:

http://<FQDN>:<port>/MicroStrategyLibrary/admin

where <FQDN> is the Fully Qualified Domain Name of the machine hosting your MicroStrategy Library application and <port> is the assigned port number.

2. On the Library Web Server tab, select Integrated from the list of
available Authentication Modes.

3. Click Save.

4. Restart your Web Server to apply the change.

Restart your application server for all the above settings to take effect.

Enable Integrated Authentication for IIS

Integrated authentication in MicroStrategy requires communication between your Kerberos security system, IIS, and your database.

You must enable integrated authentication on the MicroStrategy virtual directory in IIS to support integrated authentication to MicroStrategy Web, or to MicroStrategy Web Services to support MicroStrategy Office.

If you are using Microsoft Analysis Services, to support report subscriptions, you must use connection mapping to pass users' credentials to Analysis Services. For steps to enable connection mapping, see Connection Maps: Standard Authentication, Connection Maps, and Partitioned Fact Tables, page 618.

Enable Integrated Authentication to the MicroStrategy Virtual Directory

1. On the MicroStrategy Web server machine, access the IIS Internet Service Manager.

2. Browse to and right-click the MicroStrategy virtual folder and select Properties.


3. Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit.

4. Clear the Enable anonymous access check box.

5. Select the Integrated Windows authentication check box.

6. Click OK.

7. If you want to enable integrated authentication for MicroStrategy Mobile, repeat the above procedure for the MicroStrategyMobile virtual folder.

8. If you want to enable integrated authentication for MicroStrategy Web Services, repeat the above procedure for the MicroStrategyWS virtual folder.

9. Restart IIS for the changes to take effect.

Configure Web/Mobile Server for Constrained Delegation

Currently, ASP Web can only delegate users from the same domain.

Using Kerberos constrained delegation requires the following additional configuration on your Web/Mobile Server:

l ASP impersonation needs to be disabled

l Kerberos mode in sys_default.xml needs to be set to DELEGATION

l ASP application pool (if running on a system account): AppPoolIdentity does not work; use LocalSystem

l For IIS version 7 and older: If ASP runs on domain account, the account
needs to be an administrator or be enabled to act as part of the operating
system.


Create a Service Principal Name for IIS

It is recommended that you create a Service Principal Name (SPN) for IIS,
and map it to the domain user that the application server runs as. The SPN
identifies your application server as a service that uses Kerberos. For
instructions on creating an SPN, refer to the Kerberos documentation.

The SPN should be in the following format:

HTTP/ASMachineName

The format is described below:

l HTTP: This is the service class for the application server.

l ASMachineName: This is the fully qualified host name of the server where
the application server is running. It is of the form machine-
name.example.com.

Enable Session Keys for Kerberos Security

To enable single sign-on authentication to MicroStrategy Web from a Microsoft Windows machine, you must modify a Windows registry setting on the machine hosting IIS.

Modification of the allowtgtsessionkey registry setting is required by Microsoft to work with Kerberos security. For information on the implications of modifying the registry setting and steps to modify the registry setting, see Kerberos protocol registry entries and KDC configuration keys in Windows on the Microsoft site.

Configure the krb5.ini File

If you configure Kerberos on IIS to host the web server, you must configure
the krb5.ini file. This file is included with an installation of MicroStrategy
Web, and can be found in the following directory:

C:\Program Files (x86)\Common Files\MicroStrategy\


The path listed above assumes you have installed MicroStrategy in the
C:\Program Files (x86) directory.

Kerberos only supports US-ASCII characters. Do not use any special characters when installing or configuring Kerberos.

Once you locate the krb5.ini file, open it in a text editor. The content
within the file is shown below:

[libdefaults]
default_realm = <DOMAIN NAME>
default_keytab_name = <path to keytab file>
forwardable = true
no_addresses = true

[realms]
<REALM_NAME> = {
kdc = <IP address of KDC>:88
admin_server = <IP address of KDC admin>:749
}

[domain_realm]
.domain.com = <DOMAIN NAME>
domain.com = <DOMAIN NAME>
.subdomain.domain.com = <DOMAIN NAME>
subdomain.domain.com = <DOMAIN NAME>

You must configure the krb5.ini file to support your environment by replacing the entries enclosed in <>, which are described below:

l <DOMAIN NAME> and <REALM_NAME>: The domain realm used for authentication purposes. A domain realm is commonly of the form EXAMPLE.COM, and must be entered in uppercase.

l <IP address of KDC>: The IP address or host name of the Windows machine that hosts your Active Directory domain controller. This can be the same address as <IP address of KDC admin>.

l <IP address of KDC admin>: The host name or IP address of the Windows machine that hosts your Active Directory domain controller administration server. This can be the same address as <IP address of KDC>.


l domain.com and subdomain.domain.com: Use these for all domains and subdomains whose users must be authenticated using the default Kerberos realm.

Integrated Authentication Login for MicroStrategy Applications

Enabling Integrated Authentication Login Mode for MicroStrategy Web

For MicroStrategy Web users to be able to use their Windows credentials to log in to MicroStrategy Web, you must enable integrated authentication as an available login mode. The procedure below describes the required steps for this configuration.

To Enable Integrated Authentication Login Mode for MicroStrategy Web

1. From the Windows Start menu, go to All Programs > MicroStrategy Tools > Web Administrator.

2. On the left, select Default Properties.

3. In the Login area, for Integrated Authentication, select the Enabled check box.

If you want integrated authentication to be the default login mode for MicroStrategy Web, for Integrated Authentication, select the Default option.

4. Click Save.

Enabling Integrated Authentication Login Mode for MicroStrategy Library

1. On the machine where the MicroStrategy Library application is installed, open the configOverride.properties file.


l Windows: C:\Program Files (x86)\Common Files\MicroStrategy\Tomcat\apache-tomcat-8.0.30\webapps\MicroStrategyLibrary\WEB-INF\classes\config

l Linux: <tomcat_directory>/webapps/MicroStrategyLibrary/WEB-INF/classes/config

2. Add following entries to configOverride.properties:

l auth.kerberos.config=: set to the file path of the krb5.conf file

l auth.kerberos.keytab=: set to the file path of the keytab file

l auth.kerberos.principal=: set to the Service Principal Name (SPN) of the Library Web Server

l auth.kerberos.debug=false

l auth.kerberos.isInitiator=true
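Put together, a completed configOverride.properties might look like the following sketch; the file paths and the SPN are placeholders for your own environment.

```properties
# Hypothetical configOverride.properties entries; replace the paths and
# the principal with your own values.
auth.kerberos.config=/etc/krb5/krb5.conf
auth.kerberos.keytab=/etc/krb5/krb5.keytab
auth.kerberos.principal=HTTP/library-host.example.com@EXAMPLE.COM
auth.kerberos.debug=false
auth.kerberos.isInitiator=true
```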

Enabling Integrated Authentication Login Mode for MicroStrategy Mobile

To allow your MicroStrategy Mobile users to use their Windows credentials to log into MicroStrategy, you create a Mobile configuration and select Integrated Authentication as the authentication method. For steps to create a Mobile configuration for your organization, see the MicroStrategy Mobile Administration Help.

Configure Web Browser for Integrated Authentication


Integrated authentication with Kerberos requires that the browser used to access MicroStrategy Web be configured to retrieve the currently logged-in user from the client machine. The steps for enabling this functionality differ among the browsers certified for MicroStrategy.


Kerberos should already be configured on the MicroStrategy Library server, MicroStrategy Web server, and the MicroStrategy Intelligence server.

Google Chrome on Windows

Chrome reads a key, AuthNegotiateDelegateAllowlist, which configures Chrome to allow certain sites to allow delegation and use Kerberos. The key can be implemented as a policy in a group policy object or added manually in the registry on the client machine where Chrome is installed. To learn more about the policy, see the Google Documentation.

To add the key manually to the registry:

1. Close any open instances of Chrome

2. Create a key with the path:

Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome

3. Add a new 'String value' named AuthNegotiateDelegateAllowlist.

4. Populate the value of AuthNegotiateDelegateAllowlist with the host of the MicroStrategy Web site.

5. Add a new 'String value' named AuthServerAllowlist.

6. Populate the value of AuthServerAllowlist with the host of the MicroStrategy Web site.

If you are using Chrome 85 or earlier, use AuthServerWhitelist and AuthNegotiateDelegateWhitelist instead of AuthServerAllowlist and AuthNegotiateDelegateAllowlist.
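If you prefer not to create the values by hand, the same two registry entries can be described in a .reg file and imported. This is a hypothetical sketch; mstrweb.example.com is a placeholder for the host of your MicroStrategy Web site.

```reg
Windows Registry Editor Version 5.00

; Hypothetical .reg file creating both Chrome policy string values;
; replace mstrweb.example.com with your MicroStrategy Web host.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"AuthNegotiateDelegateAllowlist"="mstrweb.example.com"
"AuthServerAllowlist"="mstrweb.example.com"
```

Double-clicking the file (or running it through reg.exe) merges the values; close and reopen Chrome afterward.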

Microsoft Edge on Windows

First, you must configure the browser to recognize Negotiate challenges from Web servers configured to use these types of challenges (as they would be if they were protected by Kerberos).

1. Open the Windows Control Panel and go to Network and Internet >
Internet Options.

2. On the Advanced tab, select Enable Integrated Windows Authentication.

See Troubleshoot Kerberos failures on the Microsoft site for more information.

Second, you must also configure the browser to place the MicroStrategy
Web site in a security zone that can serve credentials. For security reasons,
Edge only allows Kerberos delegation to sites within the Intranet and
Trusted Sites zones. See FAQs about Enhanced Security Configuration on
the Microsoft site for more information about zones. For this reason, if
MicroStrategy Web is not automatically detected as belonging to either of
these zones, you need to add it to one of these zones manually.


1. On the Security tab, click Trusted Sites > Sites.

2. Enter the hostname for MicroStrategy Web and click Add.

3. Click Close.

Third, within the specified zone, double-check the security settings.

1. On the Security tab, click Trusted Sites > Custom Level.

2. Under User Authentication > Logon, confirm that Anonymous logon is not selected. Instead, use a setting that allows the browser to pick up user credentials.

Add Your Account in macOS

macOS has built-in support for Kerberos. You must add your account for Kerberos authentication using either the Ticket Viewer app or the kinit command-line tool.

To add your account in the terminal:


1. Enter the following command:

$ kinit user_name@REALM_NAME

user_name@REALM_NAME is your user name and realm name. The realm name is case sensitive.

2. Enter your password.

user_name@REALM_NAME's Password:
$

Google Chrome on macOS

1. Once you Add Your Account in macOS, you must configure Chrome’s
AuthServerAllowlist with any domains that require Kerberos
authentication.

Run the following command in the terminal:

$ defaults write com.google.Chrome AuthServerAllowlist mywebsite.domain.com

mywebsite.domain.com is the domain name you need to access with Kerberos authentication.

2. You may also need to set AuthNegotiateDelegateAllowlist to ensure Chrome delegates user credentials on Kerberos authentication.

$ defaults write com.google.Chrome AuthNegotiateDelegateAllowlist mywebsite.domain.com

If you are using Chrome 85 or earlier, use AuthServerWhitelist and AuthNegotiateDelegateWhitelist instead of AuthServerAllowlist and AuthNegotiateDelegateAllowlist.

3. You may need to restart your machine for the changes to take effect.

To learn more about the policies, see Chrome Enterprise policy list.

Microsoft Edge on macOS

1. For Edge 77 and later, you must configure Edge's AuthServerAllowlist with any domains that require Kerberos authentication.

To learn more about the policy, see AuthServerAllowlist on the Microsoft site.

Run the following command in the terminal:

$ defaults write com.microsoft.Edge AuthServerAllowlist mywebsite.domain.com

mywebsite.domain.com is the domain name you need to access with Kerberos authentication.

2. You may also need to set AuthNegotiateDelegateAllowlist to ensure Edge delegates user credentials on Kerberos authentication.

$ defaults write com.microsoft.Edge AuthNegotiateDelegateAllowlist mywebsite.domain.com

3. You may need to restart your machine for the changes to take effect.

Mozilla Firefox

Firefox has two flags, network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris, which configure it to trust certain sites to allow delegation and use Kerberos.


1. Navigate to about:config in the browser.

2. Find the two flags in the list of configuration settings.

3. Double-click each flag and enter the host of the MicroStrategy Web site.

Linking Integrated Authentication Users to LDAP Users

When users log in to MicroStrategy using their integrated authentication credentials, their LDAP group memberships can be imported and synchronized.

By default, users' integrated authentication information is stored in the userPrincipalName LDAP attribute. If your system stores integrated authentication information in a different LDAP attribute, you can specify the attribute when you configure the import.

l The LDAP server has been configured, as described in Setting up LDAP Authentication in MicroStrategy Web, Library, and Mobile, page 185.

l You have configured the settings for importing users from your LDAP directory, as described in Manage LDAP Authentication, page 189.


To Import LDAP User and Group Information for Integrated Authentication Users

1. In Developer, log in to a project source. You must log in as a user with administrative privileges.

2. From the Administration menu, go to Server > Configure MicroStrategy Intelligence Server.

3. Go to LDAP > Import > Options. The Import Options are displayed.

4. Select the Synchronize user/group information with LDAP during Windows authentication and import Windows link during Batch Import check box.

5. Select the Batch import Integrated Authentication/Trusted Authentication unique ID check box. The Use Default LDAP Attribute option is enabled.

6. By default, users' integrated authentication IDs are stored in the userPrincipalName LDAP attribute. If your system stores integrated authentication information in a different LDAP attribute, click Other, and type the LDAP attribute that contains users' IDs.

7. Click OK.

Enabling Integrated Authentication to Data Sources

Through the use of integrated authentication, you can allow each user's credentials to be passed to your database server. You must enable this option at the project level.

If your reports or documents use subscriptions, using integrated authentication for your data sources prevents the subscriptions from running.


Your database server must be configured to allow integrated authentication for all MicroStrategy users that use it as a data warehouse. Refer to your third-party database server documentation for instructions on enabling this support.

To Enable Integrated Authentication to Data Sources

1. In Developer, log in to the project whose data sources you want to configure.

2. In the Administration menu, select Projects, then choose Project Configuration.

3. Expand the Database instances category.

4. Expand Authentication, and select Warehouse.

5. Enable the For selected database instances radio button.

6. From the Metadata authentication type drop-down list, choose Kerberos.

7. In the Database Instance pane, select the check boxes for all the database instances for which you want to use integrated authentication.

If you are connecting to a Microsoft SQL Server, Teradata, or TM1 data source, use this setting only if your Intelligence Server is running on Windows.


8. Click OK.

Enabling Integrated Authentication for the MicroStrategy Hadoop Gateway

The MicroStrategy Hadoop Gateway is a data processing engine that you install in your Hadoop® environment. The Hadoop Gateway lets you analyze unstructured data in Hadoop, and provides high-speed parallel data transfer between the Hadoop Distributed File System (HDFS) and your MicroStrategy Intelligence Server.

For specific steps to enable integrated authentication for your Hadoop cluster, refer to the third-party documentation for your Hadoop cluster distribution.

Enable Single Sign-On to Library with Trusted Authentication

You can enable single sign-on (SSO) authentication for MicroStrategy Library using a third-party authentication provider such as IBM Tivoli Access Manager, CA SiteMinder, Oracle Access Manager, or PingFederate®.


Trusted authentication mode cannot be used in combination with any other login mode.

Enable Trusted Authentication Mode


1. Launch the Library Admin page by entering the following URL in your web browser:

http://<FQDN>:<port>/MicroStrategyLibrary/admin

where <FQDN> is the Fully Qualified Domain Name of the machine hosting your MicroStrategy Library application and <port> is the assigned port number.

2. On the Library Web Server tab, select Trusted from the list of
available Authentication Modes.

3. Select your authentication provider from the Provider drop-down menu.

4. Click the Create Trusted Relationship button to establish trusted communication between the Library Web Server and the Intelligence server.

Ensure the Intelligence server information is entered correctly before establishing this trusted relationship.

5. Click Save.

6. Restart your Web Server to apply the changes.

Enable A Custom Authentication Provider


1. Edit Library/WEB-INF/classes/auth/trusted/custom_security.properties in a text editor.

2. Fill in LoginParam and DistinguishedName based on your
authentication provider setup.

l LoginParam is the name of the header variable that your provider
will use for authentication.


l DistinguishedName is the name of the header variable that will
supply the Distinguished Name of the user for LDAP synchronization.
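For example, a minimal custom_security.properties sketch might look like the following. The header names X-Remote-User and X-Remote-DN are placeholders; use the header names your SSO provider actually sends.

```properties
# Hypothetical header names -- substitute the headers your SSO provider sets
LoginParam=X-Remote-User
DistinguishedName=X-Remote-DN
```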

3. Restart MicroStrategy Library to apply the changes.

Enable Single Sign-On with OIDC Authentication


OIDC is a two-way setup between your MicroStrategy application and your
identity provider (IdP). OIDC support allows MicroStrategy to work with a
wide variety of OIDC compliant identity providers (including Azure, Okta and
Ping) for authentication.

To configure a MicroStrategy application for OIDC authentication, you need
to deploy the OIDC application in your IdP, establish a trust relationship with
the MicroStrategy Intelligence server, and link/import OIDC users to
MicroStrategy users.

See the appropriate section for your MicroStrategy application:

Enable OIDC Authentication for MicroStrategy Library


This topic details how to enable OIDC authentication for MicroStrategy
Library.

l Configure OIDC Authentication

l Library Admin Page Authentication

Configure OIDC Authentication

1. Open Workstation and connect to the Library environment using
standard authentication with a user that has admin privileges.

2. Right-click the connected environment and select Configure OIDC
under Configure Enterprise Security.

3. In step 1, select an identity provider from the drop-down.


4. In step 2, click View configuration instruction to view a step-by-step
configuration guide. Use the instructions to complete steps 2 and 3.

If you need assistance from the administrator in charge of enterprise
Identity and Access Management (IAM), click Request access from
your administrator.

5. In step 4, configure User Claim Mapping to identify the IAM user
identity and information.

Primary User Identifier Enter the OIDC claim used to identify users.
By default, the OIDC claim is email.

Login Name Enter the OIDC claim for the login name. By default, the
OIDC claim is email.

Full Name Enter the OIDC claim used to display users' full names in
MicroStrategy. By default, the OIDC claim attribute is name.

Email Enter the OIDC claim used as the user's email in MicroStrategy.
By default, the OIDC claim attribute is email.

Select Import User at Login to allow all users in your AD to use their
credentials to log in to MicroStrategy.
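As an illustration of the defaults above, here is a hedged Python sketch of how the claim mapping resolves MicroStrategy user fields from a decoded ID token. The token values and dictionary names are invented for the example; this is not MicroStrategy's implementation.

```python
# Hypothetical decoded ID token payload; values are illustrative only
id_token_claims = {
    "email": "jane.doe@example.com",
    "name": "Jane Doe",
}

# Default claim mapping described above: each MicroStrategy field
# is filled from the named OIDC claim
claim_map = {
    "primary_user_identifier": "email",
    "login_name": "email",
    "full_name": "name",
    "email": "email",
}

resolved = {field: id_token_claims[claim] for field, claim in claim_map.items()}
print(resolved)
```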

6. In step 5, click Test Configuration to test with the credentials that you
provided above.


The Test Configuration step is only available for Okta and PingOne
identity providers.

7. Click Save. In 2021 Update 2 or later, Workstation displays a
confirmation message, automatically creates a trusted relationship, and
enables OIDC authentication along with standard authentication.

If you are using an older build, you may need to manually create
the trusted relationship and enable OIDC authentication mode on the
Library admin page.

http://<FQDN>:<port>/MicroStrategyLibrary/admin

Library Admin Page Authentication

In 2021 Update 2 or newer, the Library admin pages support basic and OIDC
authentication when only OIDC authentication is enabled. Authentication for
the admin pages is governed by the auth.admin.authMethod parameter in
the WEB-INF/classes/config/configOverride.properties file. If
the parameter is not present in the file, you can add it as shown below.
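A minimal sketch of the line to add to configOverride.properties, shown with the default value:

```properties
# 1 = basic authentication (default); 2 = OIDC admin groups
auth.admin.authMethod = 1
```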

There are two possible values for the auth.admin.authMethod parameter:


l auth.admin.authMethod = 1 (Default)

The default value of the auth.admin.authMethod parameter is 1. This
means the Library admin pages are protected by basic authentication.

l auth.admin.authMethod = 2

The Library admin pages are protected by the OIDC admin groups
specified in the OIDC configuration form. These admin groups are linked
to the groups on the Identity Provider (IDP) side. Only members that
belong to the IDP admin groups can access the admin pages. Users that
do not belong to the admin groups receive a 403 Forbidden error.

The administrator can change the parameter value as required.
A web application server restart is required for the changes to take effect.

The Library admin pages cannot be protected by the OIDC admin groups
when multiple authentication modes are enabled.
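The group check described above behaves, in effect, like the following Python sketch. The function and group names are illustrative only, not MicroStrategy's implementation.

```python
def can_access_admin_pages(user_groups, admin_groups):
    # Access is granted only when the user belongs to at least one
    # of the admin groups configured in the OIDC configuration form
    return bool(set(user_groups) & set(admin_groups))

# A user who is only in "Analysts" is rejected with 403 Forbidden
status = 200 if can_access_admin_pages(["Analysts"], ["WebAdmin"]) else 403
print(status)
```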

Enable OIDC Logging

1. Access the machine on which MicroStrategy Library is installed/deployed
and browse to <Library Folder Path>/WEB-INF/classes.

2. Locate and edit logback.xml.

3. Locate <logger name="org.springframework" level="ERROR"> and
<logger name="com.microstrategy" level="ERROR">. Remove any
comment tags from both and change the value of level to "DEBUG".

<logger name="org.springframework" additivity="false" level="DEBUG">
<appender-ref ref="SIFT" />
</logger>

<logger name="com.microstrategy" additivity="false" level="DEBUG">
<appender-ref ref="SIFT" />
</logger>


4. Locate <filter
class="ch.qos.logback.classic.filter.ThresholdFilter">
and change the level to be "DEBUG".

<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>

5. Save and close logback.xml.

6. Restart the application server.

Additional logging is added to MicroStrategyLibrary-{appName}.log.
By default, this is named MicroStrategyLibrary-MicroStrategyLibrary.log.
You can expect the log file to appear in a folder specified under the
LOG_HOME property of logback.xml. For example,
<property name="LOG_HOME" value="C:/Program Files (x86)/Common Files/MicroStrategy/Log" />.

7. Once the behavior you are investigating has been reproduced, edit
logback.xml once again and change level="DEBUG" back to
level="ERROR".

Enabling OIDC Authentication for JSP Web and Mobile


You can configure MicroStrategy Web and MicroStrategy Mobile to work with
OIDC compliant identity providers. To complete the setup in this document,
a basic understanding of OIDC workflows is required.

Although the following prerequisites and procedures refer to MicroStrategy
Web, the same information applies to MicroStrategy Mobile, except where
noted.

Before you begin configuring MicroStrategy Web to support single sign-on,
make sure you have done the following:


l Deployed an OIDC application in your identity provider (IdP) infrastructure.
Take note of the client ID, client secret key (not required for the PKCE
method), and issuer details for future reference.

l Verified that MicroStrategy Web is running on a JSP server.

l Deployed MicroStrategy Web on this web application server. Deploy the
MicroStrategy Web WAR file on the web application server in accordance
with your web application server documentation.

Configuring OIDC authentication for MicroStrategy Web

To configure OIDC authentication, you must set up a trusted relationship
between the Web and Intelligence servers. This is done on the Administrator
page. Open the admin page for your web application, then connect to the
Intelligence server you want to use.

Establish trust between the server and Intelligence Server:

1. Open the Server properties editor.

2. Next to Trust relationship between MicroStrategy Web Server and
MicroStrategy Intelligence Server, click Setup.

3. Enter the Intelligence Server administrator credentials.

4. Click Create Trust relationship.


Enable OIDC Authentication

1. In MicroStrategy Web Admin, go to the Default Properties screen.

2. Enable the OIDC authentication checkbox.

3. In OIDC Configuration, provide the Client ID, Client Secret, Issuer,
and Native Client ID.

The client ID and native client ID are the same for MicroStrategy Web.

4. Under Claim Map, provide the scope to map IDP users with
MicroStrategy users.

Full Name: User display name attribute

User ID: User distinguished login attribute

Email Attribute: User email address attribute


Group Attribute: User group attribute

Admin Groups: Defines groups that can access the Administrator
page. Use commas to define multiple groups. There should be no
spaces before or after the commas. To allow IdPGroupA and
IdPGroupB users to access the Administrator page, the configuration
is: Admin Groups: ["IdPGroupA,IdPGroupB"].
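To see why the no-spaces rule matters, here is an illustrative Python sketch of how a comma-separated group list is typically split; it is not MicroStrategy's implementation.

```python
# Correctly formatted: no spaces around the commas
groups = "IdPGroupA,IdPGroupB".split(",")

# With a stray space, the second entry becomes " IdPGroupB",
# which no longer matches the IdP group name exactly
groups_with_space = "IdPGroupA, IdPGroupB".split(",")
print(groups, groups_with_space)
```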

For more information on mapping users between an OIDC IdP and
MicroStrategy, see Mapping OIDC Users to MicroStrategy.

5. Click Save and restart the Web server.

Web Admin Page Authentication

In MicroStrategy 2021 Update 2 or later, the Web admin pages support OIDC
and basic authentication when OIDC authentication is enabled. The admin
pages authentication is governed by the springAdminAuthMethod
parameter located in the WEB-INF/xml/sys_defaults.properties file.

There are two possible values for the springAdminAuthMethod parameter:

l springAdminAuthMethod = 1 (Default)

The default value of the springAdminAuthMethod parameter is 1. This
means the Web admin pages are protected by basic authentication.

l springAdminAuthMethod = 2

When springAdminAuthMethod is set to 2, the Web admin pages
are protected by the OIDC admin groups specified in the OIDC
configuration form. These admin groups are linked to the groups on the
Identity Provider (IDP) side. Only members who belong to the IDP admin
groups can access the admin pages. Users that do not belong to the
admin groups receive a 403 Forbidden error.

The administrator can change the parameter value as required. A
web application server restart is required for the changes to take effect.
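For example, a minimal sys_defaults.properties fragment that switches the Web admin pages to OIDC admin-group protection might read:

```properties
# 1 = basic authentication (default); 2 = OIDC admin groups
springAdminAuthMethod = 2
```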


Configure OIDC Logging

1. Locate the log4j2.properties file in the WEB-INF/classes folder.

2. Modify the property.filename property to point to the folder where
you want the OIDC logs stored.

Do not leave the property at its default, since a relative file path is
resolved unpredictably and the log file often cannot be found in the Web
application folder. Use full file paths to fully control the log location.

In a Windows environment, the file path must be in Java format. This
means you either need to change each backslash ("\") to a slash ("/") or
escape the backslash with another one ("\\"). You can also shorten the
path by referring to the Tomcat base folder as a variable, as shown
below.

${catalina.home}/webapps/MicroStrategy/WEB-INF/log/OIDC/OIDC.log

For troubleshooting purposes, it is recommended to first change the
level of org.springframework (that is, the logger.c.level
property) to debug and leave everything else at the default. This
generates a clean log with all OIDC messages, along with any errors or
exceptions.

3. Restart the Web application server to apply all changes.

If you have a problem accessing the MicroStrategy Web Administrator
page, close and reopen your web browser to clear the old browser
cache.

Enabling OIDC Authentication for MicroStrategy for Power BI


Utilize OpenID Connect (OIDC) throughout MicroStrategy for Power BI to
support frictionless authentication and usage. OIDC authentication supports


automatic and scheduled refresh in Power BI.

1. Open Power BI.

2. From File, go to Get data > Get data to get started > MicroStrategy
for Power BI.

3. Click Connect.

4. Enter your REST API URL and add the #OIDCMode parameter to the
end of the URL. For example,
https://mstr.mycompany.com/MicroStrategyLibrary/#OIDCMode.

5. Click OK.


6. Go to the Library/OIDC tab.

7. Click Sign in.

8. Select your account and enter your password.

9. Upon successful login, click Connect.

10. Proceed with data import. See MicroStrategy for Power BI for
information.

Enable OIDC Authentication with Amazon Athena Using Okta and Azure AD
Starting in MicroStrategy ONE Update 10, you can integrate MicroStrategy
with Amazon Athena for Single Sign-On (SSO) with OpenID Connect (OIDC)
authentication.

l Prerequisites

l Install Athena JDBC Driver

l Prepare Your Application in Okta and Azure AD

l Prepare AWS IAM Objects

l MicroStrategy Configuration


l Create and Map Users to Okta/Azure AD

l Configure MicroStrategy Library in Workstation

l Configure MicroStrategy Web

l Create an Enterprise Security Object

l Create an Amazon Athena JDBC Data Source with OAuth On-Behalf-Of
Authentication

l End-to-End Testing

l Test Workstation

l Test Library

l Test MicroStrategy Web

Install Athena JDBC Driver

The Amazon Athena JDBC driver is not installed with MicroStrategy.
Therefore, you must download the driver.


1. Download the Athena JDBC driver with Amazon SDK.

2. Upload the driver into the JDBC folder on the MicroStrategy Intelligence
server machine (<MSTR_INSTALL_HOME>/JDBC). See the example
paths below.

Linux

/opt/MicroStrategy/JDBC

Windows

C:\Program Files (x86)\Common Files\MicroStrategy\JDBC

Prepare Your Application in Okta and Azure AD

Follow the steps below to prepare your application in Okta and Azure AD.


Okta

1. Set up your application.

2. Configure Native SSO for your Okta org.

3. Navigate to the Sign On tab of your newly created application.

4. In OpenID Connect ID Token, click Edit.

5. In Issuer, choose the Okta URL.

6. Click Save.

7. Navigate to the Okta API Scopes tab.

8. Click Grant next to the okta.apps.read, okta.groups.read, and
okta.users.read scopes.


Azure AD

1. See Integrate OIDC Support with Azure AD to create your Azure
application.

2. Go to the newly created app > Authentication.

3. Under the Implicit grant and hybrid flows section, select ID tokens.

Prepare AWS IAM Objects

Follow the procedures below to prepare IAM objects.

1. Create a Custom Policy

2. Create AWS OIDC Identity Providers

3. Create AWS Role for Web Identity or OpenID Connect Federation

Create a Custom Policy

Create a custom policy to grant permissions to an S3 resource for staging
Athena results.

Here is an example policy named get-cc-athena-result:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::tec-gd-gateway-data",
"arn:aws:s3:::tec-gd-gateway-data/athena",
"arn:aws:s3:::tec-gd-gateway-data/athena/*"
]


}
]
}

Create AWS OIDC Identity Providers

See Creating OpenID Connect (OIDC) identity providers to create AWS
OIDC identity providers using the issuer URL and client_id for Azure AD and
Okta.

Create AWS Role for Web Identity or OpenID Connect Federation

1. See Creating a role for web identity or OpenID Connect Federation
(console) to create an IAM role, grant a suitable managed policy such
as AWSQuicksightAthenaAccess with permissions to call the Athena
API, and add the custom policy with permissions to S3 that was created
above.

2. Under Trust Relationships > Trusted Entities for the IAM role, add
the AWS OIDC Identity Providers created above.

Example:


{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GDAzure",
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::xxxxxxx:oidc-
provider/login.microsoftonline.com/4ca8943a-xxxx-xxxx-868e-
c5bdb4d59fee/v2.0"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"login.microsoftonline.com/4ca8943a-xxxx-xxxx-868e-
c5bdb4d59fee/v2.0:aud": "833d15da-xxxx-xxxx-ae3a-ca7a79432950"
}
}
},
{
"Sid": "GDOkta",
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::xxxxxxx:oidc-provider/dev-
xxxxxx.okta.com/oauth2/aus5xhhzgxxxx2ZZ5d7"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"dev-xxxxxx.okta.com/oauth2/aus5xhhzgxxxx2ZZ5d7:aud":
"xxxxxxxx"
}
}
}
]
}

This example includes both Azure AD and Okta. You can find more
details in Configuring a role for GitHub OIDC identity provider.

Create and Map Users to Okta/Azure AD

1. Open the Workstation window.

2. In the Navigation pane, click Environments.

3. Log into your environment. You must have Administrator privileges.

4. In the Navigation pane, click User and Groups.


5. Next to All Users, click .

6. In the left pane, click Privileges and add the following privileges:

l Access data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Create and edit database instances and connections

l Create and edit database logins

l Create configuration objects

l Create dataset in Workstation

l Configure project data source

l Monitor Database Connections

l Use Workstation

7. In the left pane, click Authentication.


8. Enter the user's email address in Trusted Authenticated Request
User ID.

9. Click Save.

Configure MicroStrategy Library in Workstation

1. Open Workstation and connect to the Library environment using
standard authentication with a user that has admin privileges.

2. Right-click the connected environment and choose Configure
Enterprise Security.

3. Configure for Azure AD and Okta:

Azure AD: Under MicroStrategy Configuration, upload the manifest file
you downloaded earlier and provide the OpenID Connect Metadata
Document URL.


Okta: Under MicroStrategy Configuration, provide the Client ID and
Issuer.


4. Click Save. For more information about enabling OpenID Connect
(OIDC) authentication in Workstation, see Configure Enterprise
Security.

5. Restart the web server.

Configure MicroStrategy Web

1. Go to the MicroStrategy Web admin page.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin

2. Locate the connected Intelligence server and click Modify.


3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.

4. Enter the user credentials with admin privileges and click Create Trust
Relationship.

5. In the navigation pane, click Default properties and enable OIDC
Authentication.

6. Under OIDC Configuration, complete the remaining fields. For the Okta
Native app, leave Client Secret empty.

Click here to view details about the OIDC Configuration section.

Client ID Enter the client ID of your Azure application.

Client Secret This field is only required when the Azure application is a
Web app. If you deployed a Public client/native app in Prepare Your
Application in Okta and Azure AD, you can leave this field blank.

Issuer The OpenID Connect metadata document field in the
Endpoints section is the provider's issuer URL suffixed with /.well-
known/openid-configuration. You must remove this suffix from
the OpenID Connect metadata document to get the Issuer information.


Example: If OpenID Connect metadata document is
https://login.microsoftonline.com/901c038b-xxxx-4259-b115-c1753c7735aa/v2.0/.well-known/openid-configuration,
the Issuer is
https://login.microsoftonline.com/901c038b-xxxx-4259-b115-c1753c7735aa/v2.0.
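The suffix-stripping rule above can be sketched in Python, using the placeholder tenant ID from this section's example:

```python
# OpenID Connect metadata document URL (placeholder tenant ID from the example)
metadata_url = (
    "https://login.microsoftonline.com/901c038b-xxxx-4259-"
    "b115-c1753c7735aa/v2.0/.well-known/openid-configuration"
)

suffix = "/.well-known/openid-configuration"

# The Issuer is the metadata URL with the well-known suffix removed
issuer = metadata_url[: -len(suffix)] if metadata_url.endswith(suffix) else metadata_url
print(issuer)
```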

Native Client ID This is the same as the client ID, unless configured
otherwise.

Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.

Scope The scopes used by MicroStrategy to authorize access to a
user. This should not be changed unless configured otherwise.

Claim Map

l Full Name The user display name attribute. The default value for this
field is name.

l User ID The user distinguished login attribute. The default value for
this field is email.

l Email The user email address attribute. The default value for this
field is email.

l Groups The user group attribute. The default value for this field is
groups.

Admin Groups: Select admin groups whose members can access
the admin pages. You can have multiple admin groups.


Example: ["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the
admin pages.

7. Click Save. For more information, see Enabling OIDC Authentication
for JSP Web and Mobile.

8. Restart the web server.

Create an Enterprise Security Object

Follow the steps in Manage OAuth Enterprise Security with Identity and
Access Management (IAM) Objects to create an enterprise security object.

For Okta, choose Okta from the identity provider drop-down and enter the
Client ID, OAuth URL, and Token URL for your Okta application. Use the
following format for the URLs:

https://dev-xxxxxx.okta.com/oauth2/microstrategy/v1/authorize

https://dev-xxxxxx.okta.com/oauth2/microstrategy/v1/token

Create an Amazon Athena JDBC Data Source with OAuth On-Behalf-Of
Authentication

1. Open the Workstation window.

2. In the Navigation pane, click , next to Data Sources.

3. Select Amazon Athena.

4. Enter a Name.

5. Expand Default Database Connection and click Add New Database
Connection.


6. Enter a Name, select OAuth as the connection method, and enter the
required connection information.

Click here to view connection information details.

AWS Region The AWS region of the Athena and AWS Glue instance
that you want to connect to.

AWS S3 Staging Directory The path of the Amazon S3 location where
you want to store query results, prefixed by s3://.

AWS Role Session Name Enter AthenaJWT.

AWS Role ARN The Amazon Resource Name (ARN) of the role that
you want to assume when authenticated through JWT. This is the role
you created in Prepare AWS IAM Objects.

Schema The name of the database schema to use. This field is
optional.

See the Magnitude Simba Amazon Athena JDBC Data Connector
Installation and Configuration Guide for more information.


7. Select OAuth On-Behalf-Of as Authentication Mode.

8. Select the IAM object created in Create an Enterprise Security Object.

9. Click Save.

10. Select the projects to which the data source is assigned and from
which it can be accessed.

11. Click Save.


Test Workstation

1. Open the Workstation window.

2. Verify that the environment is using the default OIDC authentication
mode.

a. Click Environments in the Navigation pane.

b. Right-click the environment you want to use and click Edit
Environment Information.

c. Verify that Authentication Mode is set to "Default OIDC".

3. Log into your MicroStrategy environment using your Okta/Azure AD
username and password.

4. In the Navigation pane, click , next to Datasets.

5. Select Data Import Cube and click OK.

6. Select Amazon Athena.

7. Select any of the import options and click Next.

8. Click the data source created in Create an Amazon Athena JDBC
Data Source with OAuth On-Behalf-Of Authentication.

The namespaces and tables list appears.

Test Library

1. Open MicroStrategy Library and click Log in with OIDC.

2. In the toolbar, click , and choose Dashboard.

3. Click Blank Dashboard.

4. Click Create.

5. Click New Data and select the Amazon Athena gateway.


6. Select any of the import options and click Next.

7. Click the data source created in Create an Amazon Athena JDBC
Data Source with OAuth On-Behalf-Of Authentication.

The namespaces and tables list appears.

Test MicroStrategy Web

1. Open MicroStrategy Web and log in using your Okta/Azure AD
username and password.

2. Click Create.

3. Click Add External Data.

4. Select the Amazon Athena gateway.

5. Select any of the import options and click Next.

6. Click the data source created in Create an Amazon Athena JDBC
Data Source with OAuth On-Behalf-Of Authentication.

The namespaces and tables list appears.

Enable Multiple OIDC and SAML Configurations


Starting in MicroStrategy ONE (September 2024), MicroStrategy supports
multiple OIDC and SAML configurations. This allows different applications to
utilize SAML with various identity providers (IDPs) or OIDC with different
identity access management systems (IAMs).

l Configure Multiple OIDC/SAML Configurations on the Library Server

l Assign Multiple OIDC/SAML Configurations to an Application

l Security Setting


Configure Multiple OIDC/SAML Configurations on the Library Server

1. Open Workstation.

2. In the Navigation pane, click Enterprise Security.

3. Click System Authentication to display your OIDC/SAML
configurations. By default, there are server-level configurations for
OIDC and SAML.

4. In the Navigation pane, click , next to Enterprise Security.

5. In Type, select System Authentication and in Mode, select OIDC or
SAML.


6. In Name the Configuration/Configuration Name, enter a name for
your multi-OIDC/SAML configuration to distinguish it from others. The
remaining configurations are the same as the server-level settings
shown in Enable OIDC Authentication for MicroStrategy Library and
Enable Single Sign-On with SAML Authentication.


7. When you are finished, confirm the new OIDC/SAML configurations are
listed.


Assign Multiple OIDC/SAML Configurations to an Application

1. In the Navigation pane, click Applications.

2. Right-click an application and choose Edit.

3. In Authentication Modes, select Choose specific authentication
modes for the app and choose your multi-OIDC/SAML configuration for
this application.


4. Click Save.

Security Setting

Administrators can enable a security setting that mitigates the risk of two
individuals from different IDP servers sharing the same name ID.

1. In the Navigation pane, click Enterprise Security.

2. Click System Authentication.

3. Turn on Enable user mapping with SSO configuration.


4. Once this security setting is enabled, navigate to Users & Groups >
Edit Specific User > Authentication > Namespace (Multi-SSO only)
and select the desired SSO scopes for the user's login. If a user
attempts to log in with an SSO scope different from the one configured
in the system, the login attempt is rejected.

Integrate OIDC Support with Azure AD


This procedure provides instructions for integrating MicroStrategy
applications with Azure AD using OIDC authentication.


l Create an Application

l Configure MicroStrategy Library in Workstation

l Enable OIDC Auth Mode for MicroStrategy Library

l Configure and Enable OIDC Auth Mode for MicroStrategy
Web/MicroStrategy Mobile

Create an Application

1. Sign in to the Azure portal. If you have already launched Azure Active
Directory, under Manage, select App registrations.

2. Click New registration.

3. In Register an application, enter MicroStrategy as the application
name. Choose the account type that best fits your enterprise identity
access management.

4. Under Redirect URI, select Public client/native (mobile and
desktop). Enter the Library URL suffixed by /auth/oidc/login as
shown below.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login

5. Click Register.

6. In the newly created app registration screen, locate Authentication in
the navigation pane and add the following mobile and desktop
application URIs. Replace the environment-specific URIs with your
environment name.

l http://127.0.0.1

l com.microstrategy.hypermobile://auth


l com.microstrategy.dossier.mobile://auth

l com.microstrategy.mobile://auth

l https://env-xxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/static/oidc/success.html

l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login

l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategyMobile/auth/oidc/login

7. Click Save.

8. In the navigation pane, locate API permissions.

9. Click Add a permission > Microsoft Graph > Delegated
permissions.

10. Search for Directory.Read.All, expand Directory, select
Directory.Read.All, and click Add permissions.


11. Click Update permissions.

12. In the navigation pane, locate Manifest and download the manifest file.

13. In the navigation pane, locate Overview and take note of the Client ID
for later.

14. Click Endpoints and copy the OpenID Connect metadata document
field.

15. Add group claims by choosing Token configuration > Add group
claims > ID and save the defined group claim.


Configure MicroStrategy Library in Workstation

1. Open Workstation and connect to the Library environment using
standard authentication with a user that has admin privileges.

2. Right-click the connected environment and choose Configure
Enterprise Security.

3. Under MicroStrategy Configuration, upload the manifest file you
downloaded earlier and provide the OpenID Connect metadata
document details.


4. Click Save. For more information about enabling OpenID Connect
(OIDC) authentication in Workstation, see Configure Enterprise
Security.

Enable OIDC Auth Mode for MicroStrategy Library

1. Go to the Library Admin page to enable OIDC authentication as the default for MicroStrategy Library.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/admin

2. In the navigation pane, click Library Server.

3. Under Authentication Modes, select OIDC, and click Create Trusted Relationship.

4. Log in, deselect Standard, and click Save. For more information, see
Enable OIDC Authentication for MicroStrategy Library.

Configure and Enable OIDC Auth Mode for MicroStrategy Web/MicroStrategy Mobile

The procedure below refers to MicroStrategy Web. However, the same information applies to MicroStrategy Mobile unless otherwise noted.

1. Go to the MicroStrategy Web admin page.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin

2. Locate the connected Intelligence server and click Modify.

3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.

4. Enter the user credentials with admin privileges and click Create Trust
Relationship.

5. In the navigation pane, click Default properties and enable OIDC Authentication.

6. Under OIDC Configuration, complete the remaining fields.


Click here to view details about the OIDC Configuration section.

Client ID Enter the client ID of your Azure application.

Client Secret This field is only required when the Azure application is a
Web app. If you deployed a Public client/native app in Create an
Application, you can leave this field blank.

Issuer The OpenID Connect metadata document field in the Endpoints section is the provider's issuer URL suffixed with /.well-known/openid-configuration. You must remove this suffix from the OpenID Connect metadata document to get the Issuer information.

Example: If OpenID Connect metadata document is https://login.microsoftonline.com/901c038b-xxxx-4259-b115-c1753c7735aa/v2.0/.well-known/openid-configuration, the Issuer is https://login.microsoftonline.com/901c038b-xxxx-4259-b115-c1753c7735aa/v2.0.
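The suffix removal is mechanical; a minimal Python sketch (not part of the product, using the placeholder tenant ID from the example above):

```python
# Derive the OIDC Issuer from the "OpenID Connect metadata document" URL
# by stripping the well-known discovery suffix.
SUFFIX = "/.well-known/openid-configuration"

def issuer_from_metadata_url(metadata_url):
    if not metadata_url.endswith(SUFFIX):
        raise ValueError("not an OIDC discovery URL: " + metadata_url)
    return metadata_url[:-len(SUFFIX)]

metadata = ("https://login.microsoftonline.com/"
            "901c038b-xxxx-4259-b115-c1753c7735aa/v2.0" + SUFFIX)
print(issuer_from_metadata_url(metadata))
# https://login.microsoftonline.com/901c038b-xxxx-4259-b115-c1753c7735aa/v2.0
```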


Native Client ID This is the same as the client ID, unless configured
otherwise.

Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.

Scope The scopes used by MicroStrategy to authorize access to a user. This should not be changed unless configured otherwise.

Claim Map

l Full Name The user display name attribute. The default value for this
field is name.

l User ID The user distinguished login attribute. The default value for
this field is email.

l Email The user email address attribute. The default value for this
field is email.

l Groups The user group attribute. The default value for this field is
groups.

Admin Groups: Select admin groups whose members can access the admin pages. You can have multiple admin groups.

Example: ["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the admin pages.
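Conceptually, the admin-page check is a set intersection between the token's groups claim and the configured Admin Groups. A hypothetical sketch of that logic, not MicroStrategy's actual implementation:

```python
# Groups configured as Admin Groups in the OIDC Configuration section.
ADMIN_GROUPS = {"WebAdmin", "SystemAdmin"}

def can_access_admin_pages(id_token_claims):
    # "groups" is the claim named in the Claim Map above.
    user_groups = set(id_token_claims.get("groups", []))
    return bool(user_groups & ADMIN_GROUPS)

print(can_access_admin_pages({"groups": ["WebAdmin", "Analysts"]}))  # True
print(can_access_admin_pages({"groups": ["Analysts"]}))              # False
```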

7. Click Save. For more information, see Enabling OIDC Authentication for JSP Web and Mobile.

Integrate OIDC Support with Okta

This procedure provides instructions for integrating MicroStrategy Web with Okta. For more information, see the Okta documentation.


l Create an Application

l Configure MicroStrategy Library in Workstation

l Configure and Enable OIDC Auth Mode for MicroStrategy Web/MicroStrategy Mobile

Create an Application

1. Log in as an Okta administrator and go to the Admin page.

2. Go to Applications and click Create App Integration.

3. Select OIDC - OpenID Connect and Native Application.

4. Click Next.

5. Under General Settings, enter the App integration name.

6. Confirm that Authorization Code and Refresh Token are checked in the Grant type.

7. Add the following Web, Mobile, Library and Desktop application URIs
under Sign-in redirect URIs. Replace the environment-specific URIs
with your environment name.

l https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login

l com.microstrategy.hypermobile://auth

l com.microstrategy.dossier.mobile://auth

l https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/static/oidc/success.html

l local://plugins


l http://127.0.0.1

l http://127.0.0.1:51892

l http://127.0.0.1:51893

l http://127.0.0.1:51894

l http://127.0.0.1:51895

l http://127.0.0.1:51896

l http://127.0.0.1:51897

l com.microstrategy.mobile://auth

l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login

l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategyMobile/auth/oidc/login

8. Click Save.

9. Go to the Assignments tab and assign users.

10. On the General tab, under Client Credentials, take note of the client ID
for future reference.


11. Select the Sign On tab. Under the OpenID Connect ID Token, take note
of the issuer for future reference.

12. Next to OpenID Connect ID Token, click Edit. In the Groups claim
filter, choose Matches regex, enter a value of .*, and click Save.

13. Select the Assignments tab and verify that the users that need to
access the application are assigned.


14. Click Okta API Scopes and grant the okta.apps.read, okta.groups.read, and okta.users.read scopes to the application.

Configure MicroStrategy Library in Workstation

1. Open Workstation and connect to the Library environment using standard authentication with an admin privilege user.

2. Right-click on the connected environment. Under Configure Enterprise Security, select Configure OIDC.

3. Select Okta as the identity provider from the dropdown in the first step.

4. Verify that all URIs mentioned in the second step are already added to
the Okta application.

5. Provide the Client ID and Issuer for the Okta application in the third
step.

6. Verify the default User claim mappings and Import user at Login
setting.

7. Click Save. This automatically creates a trust relationship between the Library Web server and Intelligence server, and enables OIDC authentication mode.

Configure and Enable OIDC Auth Mode for MicroStrategy Web/MicroStrategy Mobile

The procedure below refers to MicroStrategy Web. However, the same information applies to MicroStrategy Mobile unless otherwise noted.

1. Go to the MicroStrategy Web admin page.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin

2. Locate the connected Intelligence server and click Modify.

3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.

4. Enter the user credentials with admin privileges and click Create Trust
Relationship.

5. In the navigation pane, click Default properties and enable OIDC Authentication.

6. Under OIDC Configuration, complete the remaining fields.

Client ID Enter the client ID of your Okta application.

Client Secret This field is only required when the Okta application is a
Web app. If you deployed a Public client/native app in Create an
Application, you can leave this field blank.

Issuer Enter the Issuer of your Okta application.

Native Client ID This is the same as the client ID, unless configured
otherwise.

Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.

Scope The scopes used by MicroStrategy to authorize access to a user. To log into the Mobile admin page with OIDC, add "groups" to the scope.

Claim Map

l Full Name The user display name attribute. The default value for this
field is name.


l User ID The user distinguished login attribute. The default value for
this field is email.

l Email The user email address attribute. The default value for this
field is email.

l Groups The user group attribute. The default value for this field is
groups.

Admin Groups Select admin groups whose members can access the admin pages. You can have multiple admin groups.

["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the
admin pages.

7. Click Save. For more information, see Enabling OIDC Authentication for JSP Web and Mobile.

Restart the Web server after completing all the above steps for the changes
to take effect.

Integrate OIDC Support with Ping

This procedure provides instructions for integrating MicroStrategy Web with Ping. For more information, see the Ping documentation.

Create an Application

1. Log in as a Ping administrator and connect to the desired environment console.

2. Go to Connection and click Add Application.

3. Select WEB APP as the Application type.

4. Select Configure OIDC as Connection type.

5. Enter the Application name.


6. Enter the Redirect URIs. This is the MicroStrategy deployment path followed by "/auth/oidc/login". For example, https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login.

7. On the Grant Resource Access to Your Application page, click the drop-down next to Filtered By and select openid.

8. Add all four scopes including address, email, phone, and profile.

9. Click Next followed by Save and Close.

10. Enable the application.

11. Take note of the Client ID, CLIENT SECRET, and ISSUER under the
Configuration tab. You will need this information later to configure
MicroStrategy with the Ping application.
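The redirect URI in step 6 is just the deployment path plus a fixed suffix, which a short sketch can illustrate (the environment name is the usual placeholder):

```python
# Build the OIDC redirect URI from a MicroStrategy deployment path.
def oidc_redirect_uri(deployment_path):
    return deployment_path.rstrip("/") + "/auth/oidc/login"

base = "https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy"
print(oidc_redirect_uri(base))
# https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login
```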

Integrate OIDC Support with Google

Starting in MicroStrategy ONE (September 2024), you can integrate OpenID Connect (OIDC) support with Google.

Google does not expose an OAuth scope to obtain user groups as part of the OIDC flow. Therefore, MicroStrategy cannot retrieve group information for Google users and cannot map Google groups to MicroStrategy administrator groups.


Create Application

1. Sign in to Google Cloud Console.

2. Under APIs & Services, click Credentials.

3. Click Create Credentials and select OAuth client ID.


4. If your application runs on multiple platforms, each platform will need its own client ID.

Create Web OAuth Client ID

1. In the Create OAuth client ID dialog, under Application type, select Web application.


2. In Name, type an application name.

3. Under Authorized redirect URIs, enter the Library URL and add
/auth/oidc/login to the end of the URL, as shown below.

https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login

4. Click Create.


Create iOS OAuth Client ID

1. In the Create OAuth client ID dialog, under Application type, choose iOS.

2. In Name, type an application name.

3. In Bundle ID, enter your application bundle ID.

4. Click Create.

Create Android OAuth Client ID

1. In the Create OAuth client ID dialog, under Application type, choose Android.


2. In Name, type an application name.

3. In Package name, type an application package name.

4. In SHA-1 certificate fingerprint, enter the SHA-1 certificate fingerprint.

5. Expand Advanced Settings and select the checkbox next to Enable custom URI scheme.

6. Click Create.


Create Workstation OAuth Client ID

1. In the Create OAuth client ID dialog, under Application type, choose Desktop app.

2. In Name, type an application name.

3. Click Create.

Configure MicroStrategy Library in Workstation

1. Open Workstation and connect to the Library environment using standard authentication with an administrator user.

2. Right-click on the environment and choose Configure Enterprise Security.


3. In Select an identity provider, choose Google Cloud Identity.

Configure Web Client

1. In Client ID and Client Secret, enter the values from the client you
created in Create Web OAuth Client ID.

2. The Library Web URI is generated automatically. Ensure you add this
URI to the Authorized redirect URLs in the client you created in
Create Web OAuth Client ID.

3. In Scopes, enter your required scopes.


4. In Additional Parameter, enter access_type with the value offline.
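The additional parameter ends up as an extra query parameter on Google's authorization request. A hypothetical sketch of how such a request URL is composed (client ID, redirect URI, and scopes are placeholders; the composition shown is illustrative, not what MicroStrategy does internally):

```python
from urllib.parse import urlencode

# Hypothetical values; only access_type=offline mirrors the setting above.
params = {
    "client_id": "PLACEHOLDER.apps.googleusercontent.com",
    "redirect_uri": "https://env-xxxxxx.customer.cloud.microstrategy.com"
                    "/MicroStrategyLibrary/auth/oidc/login",
    "response_type": "code",
    "scope": "openid email profile",
    "access_type": "offline",   # the Additional Parameter configured above
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```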

Configure iOS Client

1. In Client ID, enter the value from the client you created in Create
iOS OAuth Client ID.

2. In Redirect URI Scheme, enter the iOS URL scheme from the
iOS client you created in Create iOS OAuth Client ID.

3. In Scopes, enter your required scopes.


Configure Android Client

1. In Client ID, enter the value from the client you created in Create
Android OAuth Client ID.

2. In Package Name, enter the Package name from the client you created
in Create Android OAuth Client ID.

3. In Scopes, enter your required scopes.

Configure Workstation Client

1. In Client ID and Client Secret, enter the values from the client you
created in Create Workstation OAuth Client ID.

2. In Scopes, enter your required scopes.


Enable OIDC Auth Mode for MicroStrategy Library

1. Go to the Library Admin page. For example, https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/admin.

2. In the navigation pane, click Library Server.

3. In Authentication Modes, choose OIDC and click Create Trusted Relationship.

4. Log in and deselect Standard.

5. Click Save.

For more information, see Enable OIDC Authentication for MicroStrategy Library.

Integrate MicroStrategy With Snowflake for Single Sign-On With OIDC using Azure AD

Learn how to integrate MicroStrategy with Snowflake for Single Sign-On (SSO) with OpenID Connect (OIDC) authentication.

1. Prerequisite: Configure Snowflake OAuth integration with Azure AD to create OAuth applications.

2. Enable MicroStrategy Web OIDC with Azure AD

3. Enable MicroStrategy Library OIDC with Azure AD

4. Configure seamless login

5. Enable MicroStrategy Mobile OIDC with Azure AD

6. Create Snowflake database instances

7. Validate OIDC login mode


Prerequisite: Configure Snowflake OAuth integration with Azure AD to create OAuth applications

1. Select the following OAuth flow described in the Pre-Requisite section of Configure Microsoft Azure AD for External OAuth: The authorization server can grant the OAuth client an access token on behalf of the user.

2. Complete the following steps:

1. Based on the Snowflake documentation, create two applications: Snowflake OAuth Resource Application and Snowflake OAuth Client Application. When configuring the Client Application, add the following redirect URLs:

l https://<FQDN>:<port>/MicroStrategy/servlet/mstrWeb?evt=3172

l http://localhost

3. Go to the Snowflake OAuth Client Application > Authentication.

4. Under the Implicit grant and hybrid flows section, select the ID tokens checkbox.


Enable MicroStrategy Web OIDC with Azure AD

1. Establish trust between the Web Server and Intelligence Server.

1. Log into MicroStrategy Web.

2. Connect to the Intelligence Server.

3. Select your Intelligence Server and next to Trust relationship between Web Server and MicroStrategy Intelligence Server, click Setup.

4. Enter the administrator account and password to establish trust.

2. Set OIDC as the login mode and complete the required fields.


For the MicroStrategy application redirect URI, the value must be added to Azure AD > Snowflake OAuth Client Application > Authentication > Web Redirect URIs.

l For Client ID, go to the app > Overview > Application (client) ID,
and locate the ID.


l For Client Secret, go to the app > Certificates & secrets, and locate
the secret. If necessary, create a new secret.

l For Issuer, go to App > Overview > Endpoints, open the URL of OpenID Connect metadata document, and copy the issuer value. For example, https://login.microsoftonline.com/[Directory tenant ID]/v2.0.


l For Native ID, use the same value as Client ID.

l For Redirect URI and Scope, leave the fields unmodified.

l For the Claim Map fields:

l Full Name: name

l User ID: upn

l Email: email

l Groups: groups

l For Admin Groups, go to the app > Groups > Overview, and locate the Object Id. If an Object Id value is set, only users in that group can access the mstrWebAdmin pages.

3. Restart Tomcat for the MicroStrategy Web configurations to take effect.

Enable MicroStrategy Library OIDC with Azure AD

1. Create or modify MicroStrategyLibrary\WEB-INF\classes\auth\Oidc\OidcConfig.json.

{
    "iams": [{
        "clientId": "XXXXXXX",
        "clientSecret": "XXXXXXX",
        "nativeClientId": "XXXXXXX",
        "id": "test",
        "issuer": "https://login.microsoftonline.com/XXXXXXX/v2.0",
        "redirectUri": "https://XXXXXXX/MicroStrategyLibrary/auth/oidc/login",
        "blockAutoProvisioning": true,
        "claimMap": {
            "email": "email",
            "fullName": "name",
            "userId": "upn",
            "groups": "groups"
        },
        "default": true,
        "mstrIam": true,
        "scopes": [
            "openid",
            "profile",
            "email",
            "offline_access"
        ],
        "vendor": {
            "name": "MicroStrategy IAM",
            "version": "Azure AD"
        }
    }]
}

l For clientId, clientSecret, nativeClientId, and issuer, use the same values used for OIDC configuration for MicroStrategy Web.

l For redirectUri, use <FQDN>:<port> to replace XXXXXXX, and add the URL to Azure AD > Snowflake OAuth > Snowflake OAuth Client Application > Web Redirect URLs.
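A malformed OidcConfig.json can quietly break login, so it can be worth parsing the edited file before moving on. A minimal sketch that checks for the key names shown in the example above (the check itself is an assumption, not a MicroStrategy tool):

```python
import json

# Keys the OidcConfig.json example above relies on.
REQUIRED = {"clientId", "clientSecret", "issuer", "redirectUri", "claimMap"}

def check_oidc_config(text):
    """Return a list of problems found in an OidcConfig.json document."""
    cfg = json.loads(text)  # raises ValueError if the JSON is malformed
    problems = []
    for iam in cfg.get("iams", []):
        for key in sorted(REQUIRED - iam.keys()):
            problems.append("missing key: " + key)
        if not iam.get("issuer", "").startswith("https://"):
            problems.append("issuer should be an https URL")
    return problems

sample = json.dumps({"iams": [{
    "clientId": "x", "clientSecret": "y",
    "issuer": "https://login.microsoftonline.com/tenant/v2.0",
    "redirectUri": "https://host/MicroStrategyLibrary/auth/oidc/login",
    "claimMap": {"userId": "upn"},
}]})
print(check_oidc_config(sample))  # []
```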

2. Set OIDC authentication mode.

1. Log into the MicroStrategy Library Admin page.

2. Connect to the Intelligence Server.

3. Go to the Library Server tab > Authentication Modes and select the OIDC checkbox.

4. Click Create Trusted Relationship to create a trusted relationship between the Library Server and Intelligence Server.


5. Enter the administrator account and password to establish trust.

3. Restart Tomcat for the MicroStrategy Library configurations to take effect.

Configure Seamless Login

To navigate between MicroStrategy Web and Library without having to re-authenticate, see How to Enable Seamless Login Between Web, Library, and Workstation.

Enable MicroStrategy Mobile OIDC with Azure AD

1. Establish trust between the Web Server and Intelligence Server.

1. Log into the MicroStrategy Mobile Admin page.

2. Connect to the Intelligence Server.

3. Select your Intelligence Server and next to Trust relationship between Mobile Server and MicroStrategy Intelligence Server, click Setup.

2. Set OIDC as the login mode and complete the required fields using the
same values used for OIDC configuration for MicroStrategy Web.

3. Add the Mobile Redirect URI to Azure AD > Snowflake OAuth Client Application > Authentication > Web Redirect URIs.

4. In Azure AD > Snowflake OAuth Client Application > Authentication > Mobile and desktop applications, add com.microstrategy.mobile://auth as a redirect URI.


5. Restart Tomcat for the MicroStrategy Mobile configurations to take effect.

Create Snowflake Database Instances

You can create Snowflake database instances with or without the project
schema.

With the Project Schema

To use the project schema, you must have a basic authentication connection:

l In MicroStrategy Developer:

1. In the Database instance name field, type in a name.

2. From the Database connection type drop-down, select Snowflake.

3. Click New to create a new database connection.

4. In the Database connection name field, type in a name.

5. Select the DSN.


6. Create a database login and save your settings.

l In MicroStrategy Web:

Database instances created via MicroStrategy can be used for the project
schema, but cannot be used for connection mapping.


1. In the Data Source dialog, select the Standard Connection option.

Without the Project Schema

To use the database instance without the project schema, you must have either basic or OAuth authentication.


1. Create an OAuth authentication database connection:

l In MicroStrategy Developer:

1. Click New to create a new database connection.

2. In the Database connection name field, type in a name.

3. Select the DSN.

4. Go to the Advanced tab.

5. In the Additional connection string parameters field, enter TOKEN=?MSTR_OAUTH_TOKEN;.

This will act as a placeholder that will be replaced by a real token when the user uses the Snowflake database instance.

6. Click OK.

7. In the Database login, enter a name.

8. Select the Use network login id (Windows authentication) checkbox.

l In MicroStrategy Web:


1. In the Data Source dialog, select the OAuth Connection option.

2. Set OAuth Parameters.

Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.


If you want to use the DB role in MicroStrategy Workstation, OAuth parameters must be set from Workstation. The OAuth parameters in Web and Workstation are separate sets of values.

3. After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.


1. In the Database Instance menu, select Set OAuth Parameters.

2. In the Authentication Type drop-down, select Microsoft Azure AD SSO.

3. Fill out the required fields.


You can find the required information on Azure AD where the Snowflake OAuth Application was created in Integrate a MicroStrategy Library SAML environment with Azure AD.

l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.

l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.

l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.

l For Scope, click on the app > API permissions, click on the API/Permission name, and locate the URL. The URL is in a format like https://[AzureDomain]/[id]/session:scope-any.

l The Callback URL is generated by default.

For Web: https://[MicroStrategy Web Hostname]/MicroStrategy/servlet/mstrWeb?evt=3172

For Workstation: http://localhost

The callback URL should be added to the Snowflake OAuth Client Application.
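The TOKEN=?MSTR_OAUTH_TOKEN; placeholder behaves like a literal string substitution performed at connect time. A hypothetical illustration (the connection string format and token value below are assumptions for the sketch, not MicroStrategy's internal representation):

```python
PLACEHOLDER = "?MSTR_OAUTH_TOKEN"

def resolve_connection_string(template, access_token):
    # At query time the placeholder is swapped for the user's live token.
    return template.replace(PLACEHOLDER, access_token)

template = "DSN=Snowflake;AUTHENTICATOR=oauth;TOKEN=" + PLACEHOLDER + ";"
print(resolve_connection_string(template, "eyJ0eXAiOiJKV1Qi..."))
# DSN=Snowflake;AUTHENTICATOR=oauth;TOKEN=eyJ0eXAiOiJKV1Qi...;
```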

Create Connection Mappings (Optional)

If you have multiple MicroStrategy Users or User Groups and want to give access to the same database instance but with different database logins, see Controlling Access to the Database: Connection Mappings.

In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.
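The fallback behavior can be sketched as a lookup with a default; the group and connection names below come from the example mappings in this section, and the lookup itself is illustrative:

```python
# Group-to-connection mappings; unmapped users fall back to the default
# (primary) database connection.
CONNECTION_MAPPINGS = {
    "SSO_End_User_DSNless_OAuth": "Snowflake_SSO_DSNless_OAuth",
    "SSO_End_User_DSN_OAuth": "Snowflake_SSO_DSN_OAuth",
    "SSO_End_User_JDBC_OAuth": "SSO_End_User_JDBC_OAuth",
}
DEFAULT_CONNECTION = "Snowflake_SSO_DSNLess_Basic"

def connection_for(user_groups):
    for group in user_groups:
        if group in CONNECTION_MAPPINGS:
            return CONNECTION_MAPPINGS[group]
    return DEFAULT_CONNECTION

print(connection_for(["SSO_End_User_DSN_OAuth"]))  # Snowflake_SSO_DSN_OAuth
print(connection_for(["Analysts"]))                # Snowflake_SSO_DSNLess_Basic
```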

For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:


1. Create a basic authentication database connection (default).

l In MicroStrategy Developer

1. In the Database instance name field, type in a name.

2. From the Database connection type drop-down, select Snowflake.

3. Click New to create a new database connection.

4. In the Database connection name field, type in a name.

5. Select the DSN.

6. Create a database login and save your settings.

2. Create an OAuth authentication database connection.

l In MicroStrategy Developer

1. Click New to create a new database connection.

2. In the Database connection name field, type in a name.

3. Select the DSN.

4. Go to the Advanced tab.


5. In the Additional connection string parameters field, enter TOKEN=?MSTR_OAUTH_TOKEN;.

This will act as a placeholder that will be replaced by a real token when the user uses the Snowflake database instance.

6. Click OK.

7. Click New.

8. In the Database login, enter a name.

9. Select the Use network login id (Windows authentication) checkbox.

3. Create connection mappings.

1. Assign the new traditional DBRole in Project Configuration > Database Instance > SQL Data warehouse.


A default database connection mapping is created for all users when you select the database instance.

2. Assign different user groups with basic and OAuth database connections in Project Configuration > Database instances > Connection mapping.


l Users in group SSO_End_User_DSNless_OAuth will use the Snowflake_SSO_DSNless_OAuth database connection.

l Users in group SSO_End_User_DSN_OAuth will use the Snowflake_SSO_DSN_OAuth database connection.

l Users in group SSO_End_User_JDBC_OAuth will use the SSO_End_User_JDBC_OAuth database connection.

l Other users will use the default database connection. In this case, the Snowflake_SSO_DSNLess_Basic database connection is used.

4. Set OAuth parameters via MicroStrategy Web. After the database instance is created, you can set the OAuth parameters in MicroStrategy Web.


1. In the Database Instance menu, select Set OAuth Parameters.

2. From the Authentication Type drop-down, select Microsoft Azure AD SSO.

3. Fill out the required fields.


You can find the required information on Azure AD where the Snowflake OAuth Application was created in Integrate a MicroStrategy Library SAML environment with Azure AD.

l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.

l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.

l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.

l For Scope, click on the app > API permissions, click on the API/Permission name, and locate the URL. The URL is in a format like https://[AzureDomain]/[id]/session:scope-any.

l The Callback URL is generated by default.

The callback URL should be added to the Snowflake OAuth Client Application.

Validate OIDC Login Mode

1. Check the ID token saved in the user runtime.

1. Open MicroStrategy Diagnostics and Performance Logging Tool to enable the Kernel XML API log.

2. When logging into the MicroStrategy Web, Library, or Mobile server, the ID token is saved in the user runtime and logs, as shown below:

2021-03-05 04:03:58.020-05:00 [HOST:tec-w-XXX][SERVER:CastorServer][PID:19102


[UID:0][SID:0][OID:0] XML Command: <st><sst><st><cmd><crs uid="[email protected]
twst="TokenE7061FDE496B0B87A797B6B4D00C3665" pwd="***" npwd="***" pgd="" clid


tempclient" clmn=" Client Machine Name: tempclient" amd="64" snf="33554432" r


vr="11.3.0100.17108J"><reg_opt lcl_rsl="1"><reg_int lcl_id="1033" lcl_rsl="1"
rsl="1"/></reg_opt><u n="Snowflake User" eml="[email protected]"
token="eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Im5PbzNaRHJPRFhFSzFqS1doWH
NjExLTQ0MzYtYWJjZC1lZTQzODMxNzYxY2UiLCJpc3MiOiJodHRwczovL2xvZ2luLm1pY3Jvc29md
LTg2OGUtYzViZGI0ZDU5ZmVlL3YyLjAiLCJpYXQiOjE2MTQ5MzQ3MzUsIm5iZiI6MTYxNDkzNDczN
eS84VEFBQUFxWk1ldlZ6RVJOcVovbU8rRTM1QTJueHpOK0Y0djMrNlBNWW51KzBuaHN5eW0vQUNZN
YW5AbWljcm9zdHJhdGVneS5jb20iLCJuYW1lIjoiU25vd2ZsYWtlIFVzZXIiLCJub25jZSI6IlhVa
T3JZc1ZmQkpvTFkiLCJvaWQiOiJkMTIzYzE4Ny0wNWU5LTQwMWUtYTIwYy05OWQ4ODA5ZGM0MjAiL
a2V1c2VyQGdkbXN0ci5vbm1pY3Jvc29mdC5jb20iLCJyaCI6IjAuQUFBQU9wU29US2ZCX2t1R2pzV
QURrLiIsInN1YiI6IlVkanhTNEg0b1owWmZoUDJ0U3RLSHl6TXNqbnUtTzZfZGhyOWcwNzQwcXMiL
ZS1jNWJkYjRkNTlmZWUiLCJ1cG4iOiJzbm93Zmxha2V1c2VyQGdkbXN0ci5vbm1pY3Jvc29mdC5jb
SUFBIiwidmVyIjoiMi4wIn0.I625c3oUxaz9fzCSjHauMDwyooDck9cXa4F0FPycMqwSRxEMqcNwO
hYd48a3apgwLunsL_7Hj0MhWdSlmqZXFK6JleMj2Xeiqj4oTMyi9TPkH1vi7cpHSSx2_8-M6tYyPV
2yc13xOuvgZus9LmTP9SmuToAeII56kz_Pg0OmDUqWkL0IhBWq9MXEMn-RVP6xU-
hkHIPrYIQAgKjR2Snpc8A48hM9igJuRHl3igqW3GuPZvuLv5xYmGqcM212INafxZKFwVHZF0QQGOL
/></crs></cmd></st></sst></st>
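The token value in the log is a JWT; its payload segment can be base64-decoded (without verifying the signature) to inspect the claims being mapped, such as upn and name. A minimal sketch using a synthetic token built for illustration:

```python
import base64
import json

def jwt_claims(token):
    # Decode the payload segment only; this does NOT verify the signature.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64(obj):
    # Encode a dict as an unpadded base64url JWT segment.
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Synthetic header.payload.signature token; values are placeholders.
token = ".".join([
    b64({"alg": "RS256"}),
    b64({"upn": "snowflakeuser@example.com", "name": "Snowflake User"}),
    "sig",
])
print(jwt_claims(token)["upn"])  # snowflakeuser@example.com
```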

2. Check to see if SSO works for Snowflake.

1. Open MicroStrategy Diagnostics and Performance Logging Tool to enable the WSAuth > Info log.

2. Open WSAuth.log and if SSO is being used, the log should have
the following content:

l id token isEmpty=false, usingSSO=true

l Refreshed an access token using id token

The log is shown below:

2021-03-01 21:23:13.999-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]


[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] To refresh an access token:
access token isEmpty=true, refresh token isEmpty=true, id token
isEmtpy=false, usingSSO=true
2021-03-01 21:23:14.297-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] Refreshed an access token
using id token
2021-03-01 21:23:14.298-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]

Copyright © 2024 All Rights Reserved 502


System Administration Guide

[OID:230016B943FD7EB57237F6A7AA185AC2] To refresh an access token:
access token isEmpty=false, refresh token isEmpty=false, id token
isEmpty=false, usingSSO=true
2021-03-01 21:23:14.300-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] The live access token:
isExpired=false, expireTime=1614655378, currentTime=1614651794
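The final log line reports the token lifetime as Unix epoch seconds. As a quick sanity check outside of MicroStrategy, you can compare those values yourself. The helper below is a minimal sketch (the function name is illustrative, not part of the product):

```python
import time

def token_expired(expire_time, current_time=None):
    """Return True once the token's epoch expiry time has passed."""
    if current_time is None:
        current_time = int(time.time())
    return current_time >= expire_time

# Values taken from the log entry above:
print(token_expired(1614655378, 1614651794))  # False, matching isExpired=false
```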

Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature

Integrating MicroStrategy with Snowflake for Single Sign-On using Okta

Integrate Azure SQL Database for Single Sign-On


Learn how to integrate MicroStrategy with Azure SQL Database for Single Sign-On (SSO) with OIDC.

l Prerequisite: Configure Azure SQL Database OAuth Integration with Azure AD to Create OAuth Applications

l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
OIDC

l Enable MicroStrategy Web OIDC with Azure AD

l Enable MicroStrategy Library OIDC with Azure AD

l Enable MicroStrategy Mobile OIDC with Azure AD

l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML

l Enable Seamless Login Between Web, Library, and Workstation


l Create Azure SQL Database DB Instances

l With the Project Schema

l Without the Project Schema

l Create Connection Mappings (Optional)

l Validate the OIDC/SAML Login Mode Environment

Prerequisite: Configure Azure SQL Database OAuth Integration with Azure AD to Create OAuth Applications

You can create an OAuth application using the steps below or follow
Microsoft's documentation.

1. Navigate to the Microsoft Azure Portal and authenticate.

2. Navigate to Azure Active Directory.

3. Click App Registrations.

4. Click New Registration.

5. Enter a name for the client such as Azure SQL OAuth Client.

6. Verify that Supported account types is set to Single Tenant.

7. Click Register.

8. In the Overview section, copy the ClientID from Application (client) ID. This is the <OAUTH_CLIENT_ID> mentioned in the following steps.

9. Click Certificates & secrets and New client secret.

10. Add a description of the secret.

11. Select never expire. Secrets that never expire are suitable for testing purposes only.


12. Click Add. Copy the secret. This is the <OAUTH_CLIENT_SECRET> mentioned in the following steps.

13. For programmatic clients that request an access token on behalf of a user, configure Delegated permissions for Applications as follows:

a. Click API Permissions.

b. Click Add Permission.

c. Search for Azure SQL Database.

d. Click Delegated permissions.

e. Select the permission related to the scope defined in the application that you want to grant to this client. Make sure to use https://database.windows.net/user_impersonation as the scope.

f. Click Add Permissions.


g. Click Grant Admin Consent to grant the permissions to the client.

Permissions are configured this way for testing purposes. However, in a production environment, granting permissions in this manner is not advised.

14. Switch to Authentication and click Add a platform.

15. Select Web and add redirect URIs. Here are some samples:

l https://<FQDN>:<port>/MicroStrategy/servlet/mstrWeb?evt=3172

l http://localhost/

16. In the Implicit grant and hybrid flows section, select ID tokens.


Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
OIDC
l Enable MicroStrategy Web OIDC with Azure AD

l Enable MicroStrategy Library OIDC with Azure AD

l Enable MicroStrategy Mobile OIDC with Azure AD

l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML

Enable MicroStrategy Web OIDC with Azure AD

1. Establish trust between Web server and Intelligence server.

a. Log in to MicroStrategy Web.

b. Connect to the Intelligence server.

c. Select your Intelligence server and next to Trust relationship between Web Server and MicroStrategy Intelligence Server, click Setup.

d. Enter the administrator account and password to establish trust.

2. Set OIDC as the login mode and input the necessary values.


a. For the MicroStrategy application redirect URI, the value must be added to Azure AD > Azure SQL OAuth Client Application > Authentication > Web Redirect URIs.


b. For the clientID, go to App > Overview > Application (client) ID. Copy the ID and paste it into Client ID.

c. For Client Secret, go to App > Certificates & secrets. Copy the
value and paste it into Client Secret.

d. For the Issuer, go to App > Overview > Endpoints. Open the 'OpenID Connect metadata document' URL and copy the issuer value. For example, https://login.microsoftonline.com/[Directory tenant ID]/v2.0.


e. The Native Client ID is the same as the Client ID.

f. There is no need to modify the Redirect URI and Scope.

g. Fill in the Claim Map values as shown in the image.

h. For Admin Groups, enter the groups' Object Id as the value. If the
value is set, only users in the group can access the MicroStrategy
Web Admin page.

3. Restart Tomcat for the MicroStrategy Web configurations to take effect.

Enable MicroStrategy Library OIDC with Azure AD

1. Create or modify MicroStrategyLibrary\WEB-INF\classes\auth\Oidc\OidcConfig.json.


a. The clientId, clientSecret, nativeClientId, and issuer are the same as the ones used in the previous procedure for Web.

b. For the redirectUri, use <FQDN>:<port> to replace XXXXXXX in the sample below. Add the URL into Azure AD > Azure SQL OAuth Client Application > Authentication > Web Redirect URIs.

{
"iams":[{
"clientId":"XXXXXXX",
"clientSecret":"XXXXXXX",
"nativeClientId": "XXXXXXX",
"id":"test",
"issuer":"https://login.microsoftonline.com/XXXXXXX/v2.0",
"redirectUri":"https://XXXXXXX/MicroStrategyLibrary/auth/oidc/lo
gin",
"blockAutoProvisioning": true,
"claimMap": {
"email": "email",
"fullName": "name",
"userId": "upn",
"groups": "groups"
},
"default": true,
"mstrIam": true,
"scopes": [
"openid",
"profile",
"email",
"offline_access"
],
"vendor": {
"name": "MicroStrategy IAM",
"version": "Azure AD"
}
}]
}
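Before restarting Tomcat, it can help to confirm that OidcConfig.json parses and that each IAM entry carries the keys the sample above defines. This is a minimal sketch; the required-key list is a local convention drawn from the sample, not an official schema:

```python
import json

# Keys every IAM entry in the sample configuration defines (local convention):
REQUIRED_KEYS = {"clientId", "clientSecret", "issuer", "redirectUri", "claimMap"}

def check_oidc_config(text):
    """Parse OidcConfig.json content and report missing keys per IAM entry."""
    config = json.loads(text)  # raises ValueError if the JSON is malformed
    problems = []
    for iam in config.get("iams", []):
        missing = sorted(REQUIRED_KEYS - iam.keys())
        if missing:
            problems.append((iam.get("id"), missing))
    return problems

sample = '{"iams":[{"id":"test","clientId":"x","issuer":"y"}]}'
print(check_oidc_config(sample))  # [('test', ['claimMap', 'clientSecret', 'redirectUri'])]
```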

2. Set the OIDC authentication mode.

a. Log in to MicroStrategy Library admin page.

b. Connect to the Intelligence server.

c. Choose OIDC authentication mode and click Create Trusted Relationship to create a trust relationship between the Library server and MicroStrategy Intelligence server. Enter the administrator account and password.

3. Restart Tomcat for the MicroStrategy Library configurations to take effect.

Enable MicroStrategy Mobile OIDC with Azure AD

1. Establish trust between the Web server and Intelligence server.

a. Log in to the MicroStrategy Mobile admin page.

b. Connect to the Intelligence server.

c. In Trust relationship between Web Server and MicroStrategy Intelligence Server, select your Intelligence server and click Setup.


2. Choose OIDC Authentication for the Login mode and enter the
necessary values.

a. These values are the same as the ones used in Web.

b. Add the mobile redirect URI in Azure AD > Azure SQL OAuth
Client Application > Authentication > Web Redirect URIs.

3. Add com.microstrategy.mobile://auth into Azure AD > Azure SQL OAuth Client Application > Authentication > Mobile and desktop applications.

4. Restart Tomcat for the MicroStrategy Mobile configurations to take effect.


Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML

See Integrate MicroStrategy With Snowflake for Single Sign-On With SAML
using Azure AD to integrate MicroStrategy with Azure SQL Database for
Single Sign On with SAML.

l Create an Azure AD Enterprise Application and enable single sign-on with SAML authentication for JSP Web and Mobile

l Integrate a MicroStrategy Library SAML environment with Azure AD.

l Create Azure SQL Database OAuth applications and integrate with MicroStrategy.

Enable Seamless Login

See Enable Seamless Login Between Web, Library, and Workstation for
more information.

Create Azure SQL Database DB Instances

Starting in MicroStrategy 2021 Update 4, you can leverage Azure SQL single sign-on (SSO) for both the ODBC and JDBC drivers.

You can create Azure SQL Database DB instances with or without the
project schema.

l With the Project Schema

l Without the Project Schema

l Create Connection Mappings (Optional)

With the Project Schema

To use the project schema, you must create a basic authentication connection in MicroStrategy Developer or MicroStrategy Web.

In MicroStrategy Developer:

1. In Database instance name, enter a descriptive name.

2. In Database connection type, select Azure SQL Database.

3. Click New to create a new database connection.


4. In Database connection name, enter a descriptive name.

5. Select the DSN.

6. Create a database login and save your settings.


In MicroStrategy Web:

1. Choose Add Data > New Data to open the Data Source dialog.

2. Select the Standard Connection option.

Without the Project Schema

Create an OAuth authentication database connection in MicroStrategy Developer or MicroStrategy Web.

In MicroStrategy Developer:

1. Click New to create a new database connection.

2. In Database connection name, enter a descriptive name.

3. Select the DSN.

4. Open the Advanced tab.

5. In Additional connection string parameters, enter AADToken=?MSTR_OAUTH_TOKEN;. This acts as a placeholder that will be replaced by a real token when the user uses the Azure SQL Database DB instance.


6. Click OK.

7. In Database login, enter a name.

8. Select Use network login id (Windows authentication).
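Conceptually, the ?MSTR_OAUTH_TOKEN placeholder works like a string substitution performed at connection time. The sketch below illustrates the idea only; the function name is hypothetical, not a MicroStrategy internal:

```python
def inject_token(connection_params, access_token):
    """Substitute the live access token for the ?MSTR_OAUTH_TOKEN placeholder."""
    return connection_params.replace("?MSTR_OAUTH_TOKEN", access_token)

params = "AADToken=?MSTR_OAUTH_TOKEN;"
print(inject_token(params, "eyJ0eXAiOiJKV1Qi..."))  # AADToken=eyJ0eXAiOiJKV1Qi...;
```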

In MicroStrategy Web:

1. Choose Add Data > New Data to open the Data Source dialog.

2. Select the OAuth Connection option.


3. In the Database Instance menu, select Set OAuth Parameters.


4. In Authentication Type, select Microsoft Azure AD SSO. Microsoft Azure AD SSO must be fully configured with SAML or OIDC.


5. Fill out the required fields. To locate the Client ID, click on the app. Go to Overview > Application (client) ID, and locate the ID.

6. For the Client Secret, click on the app. Go to Certificates & secrets,
and locate the secret. If necessary, create a new secret.

7. For the Directory (tenant) ID, click on the app. Go to Overview and
locate the ID.

8. For the Scope, use https://database.windows.net/user_impersonation.

9. The Callback URL is generated by default.

Web: https://[MicroStrategy Web Hostname]/MicroStrategy/servlet/mstrWeb?evt=3172

Workstation: http://localhost

Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.


If you want to use the DB role in MicroStrategy Workstation, OAuth parameters must be set from Workstation. OAuth parameters in Web and Workstation are separate sets of values.

Create Connection Mappings (Optional)

If you have multiple MicroStrategy users or user groups and want to give
access to the same database instance, but with different database logins,
see Controlling Access to the Database: Connection Mappings.

In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.
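The resolution rule described above can be pictured as a lookup that falls back to the default connection. The sketch below is illustrative only; the group and connection names are the sample names used later in this section, and the function itself is not a MicroStrategy API:

```python
def resolve_connection(user_groups, group_mappings, default_connection):
    """Return the connection mapped to the first matching group,
    falling back to the default database connection."""
    for group in user_groups:
        if group in group_mappings:
            return group_mappings[group]
    return default_connection

mappings = {
    "SSO_End_User_DSNless_OAuth": "AzureSQLSSODSNless",
    "SSO_End_User_DSN_OAuth": "AzureSQLSSODSN",
}
print(resolve_connection(["SSO_End_User_DSN_OAuth"], mappings, "BasicDefault"))  # AzureSQLSSODSN
print(resolve_connection(["Managers"], mappings, "BasicDefault"))  # BasicDefault
```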

For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:


1. Create a basic authentication database connection (default) in MicroStrategy Developer.

a. In Database instance name, enter a descriptive name.

b. In Database connection type, select Azure SQL Database.

c. Click New to create a new database connection.

d. In Database connection name, enter a descriptive name.

e. Select the DSN.

f. Create a database login and save your settings.

2. Create an OAuth authentication database connection in MicroStrategy Developer.

a. Click New to create a new database connection.

b. In Database connection name, enter a descriptive name.

c. Select the DSN.

d. Open the Advanced tab.

e. In Additional connection string parameters, enter AADToken=?MSTR_OAUTH_TOKEN;. This acts as a placeholder that is replaced by a real token when the user uses the Azure SQL Database DB instance.

f. Click OK.

g. Click New.

h. In Database login, enter a name.

i. Select Use network login id (Windows authentication).


3. Create connection mappings.

a. Assign the new traditional DBRole in Project Configuration > Database Instance > SQL Data warehouse. A default database connection mapping is created for all users when you select the database instance.

b. Assign different user groups with basic and OAuth database connections in Project Configuration > Database instances > Connection mapping.


Users in the SSO_End_User_DSNless_OAuth group use the AzureSQLSSODSNless database connection. Users in the SSO_End_User_DSN_OAuth group use the AzureSQLSSODSN database connection. Other users use the default database connection. In this case, the AzureSQLSSODSNless database connection is used.

4. Set OAuth parameters via MicroStrategy Web.

Validate the OIDC/SAML Login Mode Environment

1. Check to see if the id token was saved into the user run time. Open the
MicroStrategy Diagnostics and Performance Logging Tool and enable
the Kernel XML API log.

2. When logging into MicroStrategy Web, Library, or Mobile servers, the id token is saved into the user run time and logs as shown below.

2021-03-05 04:03:58.020-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:194528][Kernel XML API][Trace][UID:0][SID:0][OID:0] XML
Command: <st><sst><st><cmd><crs uid="[email protected]"
twst="TokenE7061FDE496B0B87A797B6B4D00C3665" pwd="***" npwd="***" pgd=""
clid="Server Machine: XXXX Client Machine: tempclient" clmn=" Client
Machine Name: tempclient" amd="64" snf="33554432" rws="10" sws="1" mid=""
clt="6" vr="11.3.0100.17108J"><reg_opt lcl_rsl="1"><reg_int lcl_id="1033"
lcl_rsl="1"/><reg_num lcl_id="1033" lcl_rsl="1"/></reg_opt><u
n="Snowflake User" eml="[email protected]"
token="eyJ0eXAiOiJKV1QiLxxxxxxxxxAccessToken"
/></crs></cmd></st></sst></st>


3. Check to see if Azure SQL Database SSO is working. In the MicroStrategy Diagnostics and Performance Logging Tool, enable the WSAuth > Info log.

4. Check the WSAuth.log file. If you are using SSO, the log should look
similar to the one below.

2021-03-01 21:23:13.999-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] To refresh an access token: access
token isEmpty=true, refresh token isEmpty=true, id token isEmpty=false,
usingSSO=true
2021-03-01 21:23:14.297-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] Refreshed an access token using id
token
2021-03-01 21:23:14.298-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] To refresh an access token: access
token isEmpty=false, refresh token isEmpty=false, id token isEmpty=false,
usingSSO=true
2021-03-01 21:23:14.300-05:00 [HOST:tec-w-XXX][SERVER:CastorServer]
[PID:191028][THR:176252][WSAuth][Trace]
[UID:D27A5347411556F271A147B8DE2A74B9]
[SID:5EDD58F04C30D75546CB6BC97AE45CF3]
[OID:230016B943FD7EB57237F6A7AA185AC2] The live access token:
isExpired=false, expireTime=1614655378, currentTime=1614651794

Map OIDC Users to MicroStrategy


MicroStrategy Intelligence server uses the OIDC assertion attributes configured in the IdP for authentication. This information is passed from the OIDC response to map the logged-in user to MicroStrategy users and groups stored in the metadata.

User Mapping

User ID information sent in the OIDC response can be used to map to a MicroStrategy user:


l User ID: MicroStrategy looks for a match of the Name ID to the User ID of
the Trusted Authenticated Request setting.

This field can be set in Developer by opening User Editor > Authentication > Metadata. You can also set this field in Web
Administrator by opening Intelligence Server Administration Portal >
User Manager. The Trusted Authentication Login field is found in the
Authentication tab when editing a user.

When a match is found in the metadata, MicroStrategy logs the user in as the corresponding MicroStrategy user with all of the correct permissions and granted privileges.

If no match is found, this means the OIDC user does not yet exist in
MicroStrategy and will be denied access. You can choose to have OIDC
users imported into MicroStrategy if no match is found. See Importing and
Syncing OIDC Users below for more information.
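The claim-to-user mapping can be pictured as a dictionary lookup over the ID token's claims. The claimMap below mirrors the sample Library configuration in this guide; the decoded token payload is a hypothetical example, not a real token:

```python
# claimMap values as used in the sample OidcConfig.json in this guide:
claim_map = {"email": "email", "fullName": "name", "userId": "upn", "groups": "groups"}

# Hypothetical decoded ID token payload (illustrative values only):
id_token_claims = {
    "upn": "[email protected]",
    "name": "Example User",
    "email": "[email protected]",
    "groups": ["group-object-id-1"],
}

def map_user(claims, mapping):
    """Project IdP claims onto the MicroStrategy user fields named in the map."""
    return {field: claims.get(claim) for field, claim in mapping.items()}

print(map_user(id_token_claims, claim_map)["userId"])  # [email protected]
```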

Importing and Syncing OIDC Users

New users and their associated groups can be dynamically imported into
MicroStrategy during application log in. You can also configure the
Intelligence server to sync user information for existing MicroStrategy users
each time they log in to an application. The following settings are accessed
from the Intelligence Server Configuration > Web Single Sign-on >
Configuration window in Developer.

l Allow user to log on if Web Single Sign-on - MicroStrategy user link not found: Controls access to an application when a MicroStrategy user is
not found when checking an OIDC response. If unchecked, MicroStrategy
denies access to the user. If checked, the user obtains privileges and
access rights of a 3rd Party user and Everyone group.

Import user and Sync user are not available unless this setting is turned on.


l Import user at logon: Allows MicroStrategy to import a user into the metadata if no matching user is found. The imported user populates all the
fields that are used to check user mapping with the corresponding OIDC
attribute information.

All users imported this way are placed in the 3rd Party Users group in MicroStrategy and are not physically added to any MicroStrategy groups that match their group membership information.

After the configuration is complete, the imported user sees a privilege-related error when trying to access the project. To resolve this issue, a MicroStrategy administrator must add the project access privilege for the imported user in the 3rd Party Users group.

l Synch user at logon: Allows MicroStrategy to update the fields used for
mapping users with the current information provided by the OIDC
response.

This option also updates all of a user's group information and imports groups into the 3rd Party Users group if matching groups are not found. This may result in unwanted extra groups being created and stored in the metadata.

Integrate MicroStrategy with Azure AD OIDC for Google BigQuery Single Sign-On
The Azure AD OIDC for Google BigQuery Single Sign-On is the simplest
data source connection for users because it leverages MicroStrategy
authentication and users only need to sign in once.

Before following the steps below, you must create an Azure App and note its Client ID, Client Secret, and Directory/Tenant ID. You must also have
access to at least one Azure account.


Create and Map a MicroStrategy User to the Azure AD User

1. Open the Workstation window with the Navigation pane in smart mode.

2. In the Navigation pane, click Environments.

3. Log into your environment. You must have Administrator privileges.

4. In the Navigation pane, click Users and Groups.

5. Click the plus icon (+) next to All Users and enter the required fields.

6. In the left pane, click Privileges and add the following privileges:

l Create dataset in Workstation

l Access data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Create configuration objects

l Monitor Database Connections

l Create and edit database instances and connections

l Create and edit database logins

l Configure project data source

l Use Workstation

7. In the left pane, click Authentication.

8. Enter your Azure AD email address in Trusted Authenticated Request User ID.

For more information on mapping existing users, see Mapping OIDC Users
to MicroStrategy.

Integrate OIDC Support with Azure AD

Integrate your MicroStrategy applications with Azure AD using OIDC authentication by following Integrating OIDC Support with Azure AD.
You do not need to perform the steps in the Configure and Enable
OIDC Auth Mode for MicroStrategy Web/MicroStrategy Mobile section.

Set Up Google Workforce Identity Federation

You must set up the gcloud utility to perform the following procedure. For more information on setting up the gcloud utility, see gcloud CLI overview.

1. Set the default billing/quota project using the following command:

gcloud config set billing/quota_project google-project-id


2. Create a Google Workforce identity pool using the following command and replace the organization value with your Google organization ID:

gcloud iam workforce-pools create azure-ad-workforce-identity-pool \
--organization=123456789012 \
--description="Azure AD Workforce Identity Pool" \
--location=global

3. Create a Workforce pool provider using the following command. Replace 4ca8943a-c1a7-4bfe-868e-c5bdb4d59fee with your Azure AD directory or tenant ID and replace 92e890be-8367-4f57-84ea-9cd34cc0e5cd with your Azure AD application or client ID.

gcloud iam workforce-pools providers create-oidc azure-provider \
--workforce-pool=azure-ad-workforce-identity-pool \
--display-name="Azure AD Provider" \
--description="Azure AD Workforce Identity Pool" \
--issuer-uri="https://login.microsoftonline.com/4ca8943a-c1a7-4bfe-868e-
c5bdb4d59fee/v2.0" \
--client-id="92e890be-8367-4f57-84ea-9cd34cc0e5cd" \
--attribute-mapping="google.subject=assertion.preferred_username" \
--location=global

4. Set the workforce pool privileges according to your organization's needs, with at least the following minimum privileges:

gcloud projects add-iam-policy-binding microstrategy-sr \
--role="roles/bigquery.dataViewer" \
--member="principalSet://iam.googleapis.com/locations/global/workforcePools/azure-ad-workforce-identity-pool/*"

gcloud projects add-iam-policy-binding microstrategy-sr \
--role="roles/bigquery.jobUser" \
--member="principalSet://iam.googleapis.com/locations/global/workforcePools/azure-ad-workforce-identity-pool/*"

gcloud projects add-iam-policy-binding microstrategy-sr \
--role="roles/serviceusage.serviceUsageConsumer" \
--member="principalSet://iam.googleapis.com/locations/global/workforcePools/azure-ad-workforce-identity-pool/*"

5. Make note of your Google Audience URI.
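Google's workforce identity federation audience URIs follow the pattern //iam.googleapis.com/locations/<location>/workforcePools/<pool>/providers/<provider>. The helper below builds one from the names used in the commands above; this is a sketch based on Google's documented convention, so verify the value against your pool's details in the Google Cloud console:

```python
def workforce_audience(pool_id, provider_id, location="global"):
    """Build the audience URI for a workforce identity pool provider."""
    return ("//iam.googleapis.com/locations/%s/workforcePools/%s/providers/%s"
            % (location, pool_id, provider_id))

print(workforce_audience("azure-ad-workforce-identity-pool", "azure-provider"))
```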


Create an Enterprise Security Object

Follow the steps in Manage OAuth Enterprise Security with Identity and
Access Management (IAM) Objects to create an enterprise security object.

Create Google BigQuery JDBC or ODBC Data Source with OAuth OBO

1. Open the Workstation window with the Navigation pane in smart mode.

2. In the Navigation pane, click the plus icon (+) next to Data Sources.

3. Select Google BigQuery.

4. Enter a name for the data source and select the project(s) that will use
it.

5. Expand the Default Database Connection drop-down and click Add New Database Connection.

6. The Create New Database Connection module appears.


7. Enter values in the following fields:

l Name: A name for the database connection.

l Driver Type: Select "JDBC" or "ODBC".

l Billing Project: The Google billing project ID.

l Authentication Mode: Select "OAuth On-Behalf-Of".

l Authentication Service: The enterprise security object that you created above.

l Audience: The Audience URI that you created above.

8. Click Save.

9. Click Save.


Test the Google BigQuery Data Source

1. Open the Workstation window with the Navigation pane in smart mode.

2. Check that the environment is using the Default OIDC authentication mode.

a. Click Environments in the Navigation pane.

b. Right-click the environment you want to use and click Edit Environment Information.

c. Check that the Authentication Mode is set to "Default OIDC".

3. Log in to your MicroStrategy environment using your Azure AD username and password.

4. In the Navigation pane, click the plus icon (+) next to Datasets.

5. Select Data Import Cube and click OK.

6. Select Google BigQuery (Driver) or Google BigQuery (JDBC).

7. Leave Select Tables selected and click Next.

8. Select the GBQ_JDBC_Azure_OBO data source.

9. The projects and datasets list displays.

Integrate MicroStrategy with Google BigQuery for Single Sign-On Using Google
Starting in MicroStrategy ONE (September 2024), MicroStrategy supports
single sign-on to Google BigQuery using OpenID through Google in all
clients out-of-the-box.

Check out the following topics to enable single sign-on using Google:

l Integrate MicroStrategy with Google Using OIDC

l Create and Map a MicroStrategy User to a Google User


l Create an Enterprise Security Object

l Create a Google BigQuery JDBC or ODBC Data Source

l Test the Google BigQuery Data Source

l Known Limitations

Integrate MicroStrategy with Google Using OIDC

To set up OIDC login with Google, see Integrate OIDC Support with Google.

If you want to access BigQuery data, add the https://www.googleapis.com/auth/bigquery scope to each client.

Create and Map a MicroStrategy User to a Google User

1. Open the Workstation window with the Navigation pane in smart mode.

2. In the Navigation pane, click Environments.

3. Log into your environment. You must have Administrator privileges.

4. In the Navigation pane, click Users and Groups.

5. Click the plus icon (+) next to All Users and enter the required fields.

6. In the left pane, click Privileges and add the following privileges:

l Access data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Create and edit database instances and connections

l Create and edit database logins

l Create configuration objects

l Create dataset in Workstation


l Monitor Database Connections

l Use Workstation

7. In the left pane, click Authentication.

8. Enter your Google email address in Trusted Authenticated Request User ID.

9. Click Save.

For more information on mapping existing users, see Mapping OIDC Users
to MicroStrategy.

Create an Enterprise Security Object

1. In the Navigation pane, click the plus icon (+) next to Enterprise Security.

2. Choose the Environment in which you want to create the object.

3. Give the IAM object a Display Name.

4. Select the Google IdP type and register the MicroStrategy environment
as an application with the provided Login Redirect URIs.

5. In the Workstation drop-down, enter the Client ID for each client that
you created in the previous step.

Click Client Type to add a different client type.


6. Enter the Client Secret for Web and Workstation.

Client Secret is not required for iOS and Android.

7. Leave Scope blank.

8. Click Save.

For more information on creating security objects, see Manage OAuth Enterprise Security with Identity and Access Management (IAM) Objects.

Create a Google BigQuery JDBC or ODBC Data Source

1. Open the Workstation window.

2. In the Navigation pane, click the plus icon (+) next to Data Sources.

3. Choose Google BigQuery.

4. Enter a Name.


5. Expand the Default Database Connection drop-down and click Add New Database Connection.

6. Enter a Name.

7. Choose a JDBC or ODBC driver and enter the required information.

8. In Authentication Service, choose the security object you created in the section above.

9. Click Save.

10. Select the Projects to which the data source is assigned and can be
accessed.

11. Click Save.


Test the Google BigQuery Data Source

1. Open the Workstation window.

2. Check that the environment is using the Default OIDC authentication mode:

a. Click Environments in the Navigation pane.

b. Right-click the environment you want to use and choose Edit Environment Information.

c. Check the Authentication Mode is set to Default OIDC.

3. Log in to your MicroStrategy environment using your Google username and password.

4. To test the data source in Library and ensure it displays:

a. Open MicroStrategy Library and click Log in with OIDC.

b. In the toolbar, click the plus icon (+) and choose Dashboard.

c. Click Blank Dashboard.

d. Click Create.

e. Click New Data and select the Google BigQuery gateway.

f. Choose Select Tables and click Next.

g. Select the data source you created.

5. To test the data source in Workstation and ensure it displays:

a. In the Navigation pane, click the icon next to Dataset.

b. Select the Google BigQuery gateway.

c. Select the data source you created.


Known Limitations

Google BigQuery drivers do not support refresh token authentication modes without a client secret. Therefore, the connection on iOS and Android may fail. You can skip the two clients when configuring Enterprise Security and MicroStrategy will use the client information configured for Web to retrieve the refresh token to connect.

Implement Windows NT Authentication


If you use Windows 2003 as your network operating system and your users
are already defined in a Windows 2003 directory, then you can enable
Windows authentication in MicroStrategy to allow users access without
having to enter their login information.

The Apple Safari web browser does not support Windows authentication
with MicroStrategy Web.

Use the procedures in the rest of this section to enable single sign-on with
Windows authentication in MicroStrategy Web. For high-level steps to
configure these settings, see Steps to Enable Single Sign-On to
MicroStrategy Web Using Windows Authentication, page 542.

To use Windows authentication you must create users in the MicroStrategy environment and then link them to Windows users. Linking enables Intelligence Server to map a Windows user to a MicroStrategy user. See Link a Windows Domain User to a MicroStrategy User, page 545.

You can also create MicroStrategy users from existing Windows users by importing either user definitions or group definitions.

To use Windows authentication with MicroStrategy Web, you must be running MicroStrategy Web or Web Universal under Microsoft IIS. Non-IIS web servers do not support Windows authentication. See Enabling integrated authentication for IIS.

If the Windows domain account information is linked to a MicroStrategy user definition, a MicroStrategy Web user can be logged in automatically through MicroStrategy Web. When a user accesses MicroStrategy Web, IIS detects the Windows user and sends the login information to Intelligence Server. If the Windows user is linked to a MicroStrategy user, Intelligence Server starts a session for that user. For information on setting up MicroStrategy Web to allow single sign-on using Windows authentication, see Enable Windows Authentication Login for MicroStrategy Web, page 547.

Enable Windows Authentication in MicroStrategy Web to Allow Single Sign-On
Single sign-on authentication allows users to type their login credentials
once, and have access to multiple software applications securely, because
the system can apply that single authentication request to all the
applications that the user needs access to. It is possible to use Windows
authentication to enable single sign-on for MicroStrategy Web.

There are several configurations that you must make to enable Windows
authentication in MicroStrategy Web. To properly configure MicroStrategy
Web, Microsoft Internet Information Services (IIS), and the link between
Microsoft and MicroStrategy users, follow the procedure Steps to Enable
Single Sign-On to MicroStrategy Web Using Windows Authentication, page
542.

Steps to use Windows authentication with Microsoft SharePoint and MicroStrategy Web are in the MicroStrategy Developer Library (MSDL). The MicroStrategy SDK and MSDL contain information on customizing MicroStrategy Web.

Before continuing with the procedures described in the rest of this section, you
must first set up a Windows domain that contains a domain name for each user
that you want to allow single sign-on access to MicroStrategy Web with
Windows authentication.

In addition, you must be connected to the MicroStrategy Web machine without a proxy. Windows authentication does not work over a proxy connection. For more information, including some possible work-arounds, see Microsoft's IIS documentation.

Steps to Enable Single Sign-On to MicroStrategy Web Using Windows Authentication

1. Enable integrated Windows authentication for Microsoft IIS. See Enable Windows Authentication for Microsoft IIS, page 542.

2. Create a link between a Windows domain user and a MicroStrategy Web user for each person that will be accessing MicroStrategy Web with Windows authentication. See Link a Windows Domain User to a MicroStrategy User, page 545.

3. Define a project source to use Windows authentication. See Define a Project Source to Use Windows Authentication, page 547.

4. Enable Windows authentication in MicroStrategy Web. See Enable Windows Authentication Login for MicroStrategy Web, page 547.

5. Configure each MicroStrategy Web user's browser for single sign-on. See Configure a Browser for Single Sign-On to MicroStrategy Web, page 548.

Enable Windows Authentication for Microsoft IIS

Microsoft Internet Information Services is an Internet server that is integral to Windows authentication. You must configure IIS to enable Windows authentication in the MicroStrategy virtual directory to support integrated authentication to MicroStrategy Web.

The steps to perform this configuration are provided in the procedure below,
which may vary depending on your version of IIS. The following links can
help you find information on how to enable integrated authentication for your
version of IIS:


l IIS 7: See https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754628(v=ws.10) for information on enabling Windows authentication for IIS 7.

If you are using IIS 7 on Windows Server 2008, ensure the following:

l The MicroStrategyWebPool application pool is started, and the Managed Pipeline is set to Integrated.

l ASP.NET Impersonation is enabled. For information on enabling ASP.NET Impersonation in IIS 7, see https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc730708(v=ws.10).

l IIS 6: See https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc780160(v=ws.10) for information on enabling Windows authentication for IIS 6.

The third-party products discussed below are manufactured by vendors independent of MicroStrategy, and the information provided is subject to change. Refer to the appropriate third-party vendor documentation for updated IIS support information.

Enable Windows Authentication in Microsoft IIS

1. On the MicroStrategy Web server machine, access the IIS Internet Service Manager.

2. Navigate to and right-click the MicroStrategy virtual folder, and select Properties.

3. Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit.

4. Clear the Anonymous access check box.

5. Select the Integrated Windows authentication check box.


6. Click OK.

7. Restart IIS for the changes to take effect.

Enable the MicroStrategy ISAPI Filter in IIS 6

1. In IIS, right-click the default web site, and select Properties.

2. Click the ISAPI Filters tab. A list of ISAPI filters for your IIS installation
is shown.

3. Click Add.

4. Browse to the location of the MBWBAUTH.dll file. By default, the file is located in C:\Program Files (x86)\Common Files\MicroStrategy.

5. Select MBWBAUTH.dll and click OK. The MBWBAUTH ISAPI filter is added to the list of ISAPI filters.

6. Restart your IIS server.

Enable the MicroStrategy ISAPI Filter in IIS 7

1. In IIS, select the default web site. The Default Web Site Home page is
shown.

2. In the Default Web Site Home page, double-click ISAPI Filters. A list of
ISAPI filters for your IIS installation is shown.

3. In the Actions pane, click Add.

4. In the Filter name field, type a name for the filter. For example,
MicroStrategy ISAPI Filter.

5. Next to the Executable field, click Browse (...).


6. Browse to the location of the MBWBAUTH.dll file. By default, the file is located in C:\Program Files (x86)\Common Files\MicroStrategy.

7. Select MBWBAUTH.dll and click OK.

8. Click OK.

9. Restart your IIS server.

Link a Windows Domain User to a MicroStrategy User

Once IIS has been configured to allow integrated Windows authentication, a link must be created between a user's MicroStrategy user name and the user's Windows domain user name. The required steps are detailed below.

1. In Developer, log in to a project source using an account with administrative privileges.

2. From the Folder List, expand a project source, then expand Administration, and then expand User Manager.

3. Navigate to the MicroStrategy user you want to link a Windows user to.
Right-click the MicroStrategy user and select Edit.

4. Expand Authentication, then select Metadata.

5. Under Windows Authentication, in the Link Windows user area, provide the Windows user name for the user you want to link the MicroStrategy user to. There are two ways to do this:

l Click Browse to select the user from the list of Windows users displayed.

l Click Search to search for a specific Windows user by providing the Windows login to search for and, optionally, the Windows domain to search. Then click OK to run the search.

6. Click OK.


Link a Windows Login to an LDAP User

When using LDAP with MicroStrategy, you can reduce the number of times a
user needs to enter the same login and password by linking their Windows
system login with their LDAP login used in MicroStrategy.

By creating a link between a Windows system login, an LDAP user, and a MicroStrategy user, a single login into the machine authenticates the user for the machine as well as in MicroStrategy.

For example, a user logs in to their Windows machine with a linked LDAP
login and password and is authenticated. The user then opens Developer
and connects to a project source using Windows authentication. Rather than
having to enter their login and password to log in to MicroStrategy, the
user's login and password authenticated when logging in to their machine is
used to authenticate the user. During this process, the user account and any
relevant user groups are imported and synchronized for the user.

The LDAP Server is configured as the Microsoft Active Directory Server domain
controller, which stores the Windows system login information.

1. In Developer, log in to a project source. You must log in as a user with administrative privileges.

2. From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server.

3. Expand the LDAP category, then expand Import, and then select
Options.

4. Select the Synchronize user/group information with LDAP during Windows authentication and import Windows link during Batch Import check box.

5. Click OK.


Define a Project Source to Use Windows Authentication

For MicroStrategy Web users to gain access to a project in a specific project source using Windows authentication, the project source must first be configured to have Windows authentication enabled. The steps for enabling this configuration are detailed below.

1. In Developer, log in to a project source using an account with administrative privileges.

2. Right-click the project source and select Modify Project Source.

3. On the Advanced tab, select the Use network login id (Windows authentication) option.

4. Click OK.

Enable Windows Authentication Login for MicroStrategy Web

There are two ways to enable access to MicroStrategy Web using Windows
authentication. Access can be enabled for the MicroStrategy Web
application as a whole, or it can be enabled for individual projects at the
project level.

For steps to enable Windows authentication for all of MicroStrategy Web, see Enable Windows Authentication Login for MicroStrategy Web, page 547.

For steps to enable Windows authentication for a project, see Enable Windows Authentication Login for a Project, page 548.

1. From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Web Administrator.

2. On the left, under Intelligence Server, select Default Properties.

3. In the Login area, for Windows Authentication, select the Enabled check box.


If you want Windows authentication to be the default login mode for MicroStrategy Web, for Windows Authentication, select the Default option.

4. Click Save.

Enable Windows Authentication Login for a Project

1. Log into a MicroStrategy Web project as a user with administrative privileges.

2. At the upper left of the page, click the MicroStrategy icon, and select
Preferences.

3. On the left, select Project Defaults, then Security.

4. In the Login modes area, for Windows Authentication, select the Enabled check box.

If you want Windows authentication to be the default login mode for this project in MicroStrategy Web, also select the Default option.

5. Next to Apply, choose whether to apply these settings to all projects, or just to the one you are currently logged into.

6. Click Apply.

Configure a Browser for Single Sign-On to MicroStrategy Web

If a MicroStrategy Web user plans to use single sign-on to log in to MicroStrategy Web, each user's browser must be configured to enable integrated authentication. The process to enable integrated authentication is different depending on the browser they use:

l For Internet Explorer, you must enable integrated authentication for the browser, as well as add the MicroStrategy Web server URL as a trusted site. Depending on your security policy, integrated authentication may be enabled by default for Internet Explorer.

l For Firefox, you must add the MicroStrategy Web server URL as a trusted
site. The URL must be listed in the about:config page, in the settings
network.negotiate-auth.trusted-uris and network.negotiate-
auth.delegation-uris.
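For example, rather than editing about:config by hand on every machine, the two Firefox settings can be preseeded with a user.js file in each profile directory. The server URL below is a placeholder; substitute your own MicroStrategy Web server:

```
// user.js (hypothetical example; the URL is a placeholder)
user_pref("network.negotiate-auth.trusted-uris", "https://webserver.example.com");
user_pref("network.negotiate-auth.delegation-uris", "https://webserver.example.com");
```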

Enable Single Sign-on to Web, Mobile, and Office with Third-Party Authentication
You can enable single sign-on (SSO) authentication for the following MicroStrategy applications using a third-party tool such as IBM Tivoli Access Manager, CA SiteMinder, Oracle Access Manager, or PingFederate®:

l MicroStrategy Web

l MicroStrategy Mobile

l MicroStrategy Web Services, to support MicroStrategy Office (IBM Tivoli Access Manager and CA SiteMinder only)

This information applies to the legacy MicroStrategy Office add-in, the add-in for Microsoft Office applications which is no longer actively developed.

It was replaced by a new add-in, MicroStrategy for Office, which supports Office 365 applications. The initial version does not yet have all the functionalities of the previous add-in.

If you are using MicroStrategy 2021 Update 2 or a later version, the legacy MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the Readme and the MicroStrategy for Office Help.

Once a user is authenticated in the third-party system, the user's permissions are retrieved from a user directory, such as LDAP, and access is granted to the MicroStrategy application.


In this security model, there are several layers. For example, when a user
logs in to Tivoli, Tivoli determines whether the user's credentials are valid. If
the user logs in with valid credentials to Tivoli, the user directory (such as
LDAP) determines whether that valid user can connect to MicroStrategy. The
user's MicroStrategy privileges are stored within the MicroStrategy Access
Control List (ACL). What a user can and cannot do within the MicroStrategy
application is stored on Intelligence Server in the metadata within these
ACLs. For more information about privileges and ACLs in MicroStrategy, see
Chapter 2, Setting Up User Security.

For MicroStrategy to be able to get a user's privileges from the metadata, Intelligence Server must be configured to be a trusted machine in MicroStrategy Web, Mobile, and Office. This allows the information to be passed between the two machines.

The following diagram illustrates the architecture of a security system that uses third-party authentication.

MicroStrategy enables this type of access by passing tokens between MicroStrategy, the user directory, and the third-party authentication provider. Properly configuring these levels of communication is critical to implementing SSO authentication.


The distinguished name of the user passed from the third-party provider is
URL-decoded by default within MicroStrategy Web, Mobile, or Web Services
before it is passed to the Intelligence Server.
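This decoding step behaves like standard percent-decoding. A minimal illustration, using Python's standard library and a made-up distinguished name (not a MicroStrategy API):

```python
# A DN arriving from an SSO provider is typically percent-encoded in an
# HTTP header; MicroStrategy URL-decodes it before passing it on. The DN
# below is made up for illustration.
from urllib.parse import unquote

encoded_dn = "CN%3DJane%20Smith%2COU%3DUsers%2CDC%3Dexample%2CDC%3Dcom"
decoded_dn = unquote(encoded_dn)
print(decoded_dn)  # CN=Jane Smith,OU=Users,DC=example,DC=com
```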

Single sign-on authentication performs the step of allowing a user access to MicroStrategy products. You also must configure MicroStrategy users to define privileges and permissions that control what a user can perform and access within the products.

Setting Up Third-Party SSO Authentication in MicroStrategy Products
The following high-level steps are required to set up third-party SSO authentication in MicroStrategy Web, Mobile, or Web Services, and each is detailed below:

l Creating Users and Links in Third-Party Authentication Systems, page 551

l Enabling Single Sign-On Authentication to MicroStrategy Web, Mobile, or Office, page 552

l Importing and Linking Third-Party Authentication Users in MicroStrategy, page 564

l To Log in to MicroStrategy Web Using Tivoli Single Sign-On, page 568

Creating Users and Links in Third-Party Authentication Systems


Before MicroStrategy can be configured to accept Tivoli, SiteMinder,
PingFederate, or Oracle Access Manager authentication, certain preliminary
settings must be established. This ensures that a link exists between the
authentication provider and MicroStrategy products, and that the link is
functioning as required.

You must complete all of the following steps to ensure proper configuration
of your authentication provider and MicroStrategy products.


Creating a User in Your Third-Party Authentication System

You can enable SSO authentication in MicroStrategy by associating a MicroStrategy user to a user in Tivoli, SiteMinder, PingFederate, or Oracle Access Manager. To test this association, you must create a user in your authentication system to confirm that access has been properly configured in MicroStrategy products.

For steps to create a new user, refer to your authentication provider's documentation.

Creating a Link to MicroStrategy Applications in Your Third-Party Authentication System

You link Tivoli to MicroStrategy applications using junctions, SiteMinder using Web Agents, and Oracle Access Manager using Webgates. These links redirect users from the respective provider to MicroStrategy, and are required to enable SSO authentication. You must create one link each, as applicable, for MicroStrategy Web, MicroStrategy Mobile, and MicroStrategy Web Services to support MicroStrategy Office.

Oracle Access Manager authentication is only available for MicroStrategy Web.

For steps to create a junction (in Tivoli), a Web Agent (in SiteMinder), or a
Webgate (Oracle Access Manager), refer to the product's documentation.

Enabling Single Sign-On Authentication to MicroStrategy Web, Mobile, or Office
Once the initial third-party authentication setup is complete, you must enable trusted authentication in MicroStrategy Web, Mobile, or Office, and establish trust between the MicroStrategy product and Intelligence Server. This allows the authentication token to be passed from one system to the other.


Note that for MicroStrategy Web Services to support MicroStrategy Office, you must establish trust between Office and the Intelligence Server, and enable trusted authentication in the configuration files for Web Services.

This section explains the following required steps to enable SSO authentication in MicroStrategy Web, Mobile, or Web Services:

l Enabling Trusted Authentication in MicroStrategy Web, page 553

l Enabling Trusted Authentication in MicroStrategy Mobile, page 555

l Establishing Trust Between MicroStrategy Web or Mobile and Intelligence Server, page 555

l Establishing Trust Between MicroStrategy Web Services and Intelligence Server, to Support MicroStrategy Office, page 560

l Enabling Trusted Authentication in MicroStrategy Web Services to Support MicroStrategy Office, page 561

If you use Internet Information Services (IIS) as your web server for
MicroStrategy Web or Web Services, you must enable anonymous
authentication to the MicroStrategy virtual directories to support SSO
authentication to MicroStrategy Web, Mobile, or Office. This is discussed in
Enabling Anonymous Authentication for Internet Information Services, page
563.

Enabling Trusted Authentication in MicroStrategy Web

To enable users to log in to MicroStrategy Web using SSO authentication, you must enable trusted authentication as an available authentication mode in MicroStrategy Web.

To Enable Trusted Authentication in MicroStrategy Web

1. From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Web Administrator.


2. On the left side of the page, click Default Properties.

3. Scroll down to the Login area and, under Login mode, select the Enabled check box next to Trusted Authentication Request. Also select the Default option next to Trusted Authentication Request.

4. From the Trusted Authentication Providers drop-down list, select IBM Tivoli Access Manager, CA SiteMinder, PingFederate, or Oracle Access Manager.

To use a custom authentication provider, select Custom SSO. For information about adding custom authentication providers, refer to your MicroStrategy SDK documentation.

5. Click Save.

Using Certificate Authentication with SiteMinder

CA SiteMinder can be configured to use either certificate authentication or basic authentication. MicroStrategy Web's siteminder_security.properties file indicates that the first SiteMinder header variable to be used is SM_UNIVERSALID. This variable provides information for certificate authentication. If this variable is empty, then the information in the variable SM_USER is used for basic authentication. For information about configuring your SiteMinder system to use certificate authentication, see the SiteMinder documentation.

Enabling Trusted Authentication in MicroStrategy Mobile

To enable users to log in to MicroStrategy Mobile using SSO authentication, you must enable trusted authentication as an available authentication mode in MicroStrategy Mobile. For instructions on configuring mobile devices to use trusted authentication, refer to the Administering MicroStrategy Mobile section in the MicroStrategy Mobile Administration Help.

To Enable Trusted Authentication in MicroStrategy Mobile

1. From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Mobile Administrator.

2. On the left side of the page, click Default Properties.

3. From the Trusted Authentication Providers drop-down list, select IBM Tivoli Access Manager, CA SiteMinder, PingFederate, or Oracle Access Manager.

To use a custom authentication provider, select Custom SSO. For information about adding custom authentication providers, refer to your MicroStrategy SDK documentation.

4. Click Save.

To create a mobile configuration to send to users' mobile devices, refer to the Administering MicroStrategy Mobile section in the MicroStrategy Mobile Administration Help.

Establishing Trust Between MicroStrategy Web or Mobile and Intelligence Server

To enable the authentication token to pass from your third-party authentication provider to MicroStrategy Web or Mobile, and then to Intelligence Server, a trust relationship must be established between MicroStrategy Web or Mobile and Intelligence Server. The steps to establish trust are described below.

If you need to delete an established trust relationship, see To Delete a Trust Relationship, page 558.

If you are using multiple Intelligence Server machines in a cluster, you must
first set up the cluster, as described in Chapter 9, Cluster Multiple
MicroStrategy Servers, and then establish trust between Web or Mobile
Server and the cluster.

To establish trust between MicroStrategy Web or Mobile and Intelligence Server, you must have the following privileges:

l Bypass all object security access checks

l Configure security settings

l Enable Intelligence Server administration from Web

l Web administration

For information on assigning privileges to users, see Chapter , Controlling Access to Functionality: Privileges.

To Establish Trust Between MicroStrategy Web or Mobile and Intelligence Server

1. Open MicroStrategy Web Administrator or MicroStrategy Mobile Administrator, as applicable:

l From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Web Administrator.

l From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Mobile Administrator.

2. On the left, click Servers.


3. Confirm that MicroStrategy Web or Mobile Server is currently connected to an Intelligence Server. If an Intelligence Server is not connected, in the Unconnected Servers table, under Action, click Connect for the appropriate Intelligence Server.

4. In the Connected Servers table, under Properties, click the Modify icon.

5. Next to Trust relationship between Web/Mobile Server and MicroStrategy Intelligence Server, as applicable, click Setup.

6. Type a User name and Password in the appropriate fields. The user
must have administrative privileges for MicroStrategy Web or Mobile,
as applicable.

7. From the options provided, select the authentication mode used to authenticate the administrative user.

8. In the Web Server Application or Mobile Server Application field, type a unique name for the trust relationship.

For example, you can use the URLs for the applications using Tivoli, as follows:


MicroStrategy Web:
https://MachineName/JunctionName/MicroStrategy/asp

MicroStrategy Mobile:
https://MachineName/JunctionName/MicroStrategyMobile/asp

9. Click Create Trust Relationship.

10. Click Save.

To Verify the Trust Relationship

1. From the Windows Start menu, point to All Programs, then MicroStrategy Products, and then select Developer.

2. Log in to a project source as a user with administrative privileges.

3. From the Administration menu, point to Server, and then select Configure MicroStrategy Intelligence Server.

4. On the left, expand the Web Single Sign-on category, and verify that
the trusted relationship is listed in the Trusted Web Application
Registration list.

5. Click OK.

To Delete a Trust Relationship

1. Open MicroStrategy Web Administrator or MicroStrategy Mobile Administrator, as applicable:

l From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Web Administrator.

l From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Mobile Administrator.

2. On the left, click Servers.


3. Confirm that MicroStrategy Mobile is currently connected to an Intelligence Server. If an Intelligence Server is not connected, in the Unconnected Servers table, under Action, click Connect for the appropriate Intelligence Server.

4. In the Connected Servers table, under Properties, click the Modify icon.

5. Next to Trust relationship between MicroStrategy Web/Mobile Server and MicroStrategy Intelligence Server, as applicable, click Delete.

6. Provide your login information in the appropriate fields.

7. Click Delete trust relationship.

8. Click Save.


Establishing Trust Between MicroStrategy Web Services and Intelligence Server, to Support MicroStrategy Office

To establish trust between MicroStrategy Office and Intelligence Server, you must use MicroStrategy Office to connect to the project source you want to use trusted authentication for, and then establish the trust relationship between Office and the Intelligence Server. Once you have completed this step, you must edit the projectsources.xml file for Web Services to enable trusted authentication for the project source. Both procedures are described below.

To Establish Trust Between MicroStrategy Web Services and Intelligence Server

1. On a machine where MicroStrategy Office is installed, open a Microsoft Office product, such as Excel.

2. In the Microsoft Office ribbon, under the MicroStrategy Office tab, click
MicroStrategy Office. MicroStrategy Office starts, with a list of project
sources you can connect to.

3. From the list of project sources on the left, select the project source you
want to enable trusted authentication for.

4. In the right pane, enter the login ID and password for a user with
administrative privileges, and click Get Projects. A list of projects is
displayed.

5. Select any project, and click OK.

6. In the MicroStrategy Office toolbar, click Options.

7. Under the General category, select Server.

8. Next to Trust relationship between Web Services and Intelligence Server, click Create.


To Use the Third-Party Authentication URL for Web Services

1. In the Web Services URL field, enter the URL for the Tivoli Junction or
SiteMinder Web Agent, as applicable, that you created for
MicroStrategy Web Services.

2. Click OK.

Enabling Trusted Authentication in MicroStrategy Web Services to Support MicroStrategy Office

To allow users to log in to MicroStrategy Office using single sign-on (SSO), you must do the following:

l Edit the web.config file for Web Services or MWSConfig.properties file for J2EE application servers, to choose a trusted authentication provider.

l Edit the projectsources.xml file for MicroStrategy Web Services and configure the project source to use a third-party security plug-in. For additional information on the settings in the projectsources.xml file, see Determining How Users Log Into MicroStrategy Office in the legacy MicroStrategy Office User Guide.

You need administrative access to the machine where MicroStrategy Web Services is installed.

To Enable Trusted Authentication in MicroStrategy Office

To Choose a Trusted Authentication Provider

1. Depending on your Web Services environment, on the machine where MicroStrategy Web Services is installed, do one of the following:

l If you are using IIS as your application server, open the web.config file in a text editor, such as Notepad. By default, the file is located in C:\Program Files (x86)\MicroStrategy\Web Services.

l If you are using Web Services in a J2EE-compliant application server, open the MWSConfig.properties file in a text editor, such as Notepad. By default, the file is located in the folder where your application server deploys Web Services.

2. Depending on your Web Services environment, locate the following line:

l In the web.config file:

<add key="TRUSTEDAUTHPROVIDER" value="1" />

l In the MWSConfig.properties file:

TRUSTEDAUTHPROVIDER=1

3. Change value or TRUSTEDAUTHPROVIDER, as applicable, to one of the following:

l To use Tivoli as the authentication provider, type 1.

l To use SiteMinder as the authentication provider, type 2.

l To use a custom authentication provider, type 3.

If you are using a custom authentication provider, you must make additional
modifications to the custom_security.properties file, which is located
by default in C:\Program Files (x86)\MicroStrategy\Web
Services\resources. For information on these modifications, refer to the
MicroStrategy Developer Library (MSDL).
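For example, to switch an environment from the default to SiteMinder, the edited web.config entry would read as follows (an illustrative sketch using value 2 from the list above):

```
<add key="TRUSTEDAUTHPROVIDER" value="2" />
```

The J2EE equivalent in MWSConfig.properties would be TRUSTEDAUTHPROVIDER=2.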

To Configure Web Services to Use Trusted Authentication

1. On the machine where MicroStrategy Web Services is installed, open the projectsources.xml file in a text editor, such as Notepad. By default, the file is located in C:\Program Files (x86)\MicroStrategy\Web Services.


2. In the projectsources.xml file, locate the <ProjectSource> tag describing the project source you want to enable SSO for.

3. In the <ProjectSource> tag, replace the content of the <AuthMode> tag with MWSSimpleSecurityPlugIn. The contents of the new <ProjectSource> tag should appear similar to the following:

<ProjectSource>
<ProjectSourceName>Name</ProjectSourceName>
<ServerName>Name</ServerName>
<AuthMode>MWSSimpleSecurityPlugIn</AuthMode>
<PortNumber>0</PortNumber>
</ProjectSource>

4. Save projectsources.xml.

Enabling Anonym ous Authentication for Internet Inform ation Services

If you use Internet Information Services (IIS) as your web server, you must
enable anonymous authentication to the MicroStrategy virtual directory to
support SSO authentication to MicroStrategy Web, Web Services or Mobile.

The steps to perform this configuration are provided below and may vary depending on your version of IIS. For more information about using anonymous authentication, refer to the Microsoft documentation for your version of IIS:

l IIS 7

l IIS 8

l IIS 10

You cannot use Windows authentication to authenticate users in MicroStrategy Web or Mobile if you enable anonymous authentication to the MicroStrategy virtual directory in IIS. The steps below should only be used as part of an SSO authentication solution with Tivoli.


To Configure IIS to Enable Anonymous Authentication to the MicroStrategy Web, Web Services, and Mobile Virtual Directories

1. On the MicroStrategy Web server machine, access the IIS Internet Service Manager.

2. Browse to and right-click the MicroStrategy virtual folder and select Properties.

3. On the Directory Security tab, under Anonymous access and authentication control, click Edit.

4. Select the Allow anonymous access check box.

5. Click OK.

6. Click OK.

7. To enable anonymous authentication to MicroStrategy Web Services, repeat the above procedure for the MicroStrategyWS virtual directory.

8. To enable anonymous authentication to MicroStrategy Mobile, repeat the above procedure for the MicroStrategyMobile virtual directory on the Mobile Server machine.

9. Restart IIS for the changes to take effect.

Importing and Linking Third-Party Authentication Users in MicroStrategy

For third-party authentication users to access MicroStrategy applications,
the users must be granted MicroStrategy privileges. Whether the LDAP DN
is sent in the request to Intelligence Server is configured when the Tivoli
junction or SiteMinder Web Agent is created. For details about creating a
junction or Web Agent, refer to your Tivoli or SiteMinder documentation.

A Tivoli or SiteMinder user can be:


l Imported as a new MicroStrategy user upon logging in to MicroStrategy Web, which assigns the user the privileges that are defined for the MicroStrategy user. For steps to perform this configuration, see Importing Tivoli Users as MicroStrategy Users, page 565.

l Allowed guest access to MicroStrategy Web. The Tivoli user inherits the
privileges of the Public/Guest group in MicroStrategy. Guest access to
MicroStrategy Web is not necessary for imported or linked Tivoli users.
For steps to perform this configuration, see Enabling Guest Access to
MicroStrategy Web or Mobile for Tivoli Users, page 567.

A Tivoli or SiteMinder user can also be associated with an existing MicroStrategy user, using the MicroStrategy User Editor. Associating Tivoli users rather than enabling Tivoli users to be imported when they log in to MicroStrategy Web enables you to assign MicroStrategy privileges and other security settings for the user prior to their initial login. For steps to perform this configuration, see Linking Tivoli Users to Existing MicroStrategy Users, page 566.

If a Tivoli or SiteMinder user has already been imported into MicroStrategy, and a MicroStrategy user has been associated with the Tivoli or SiteMinder user, the MicroStrategy metadata is synchronized with the information from the user directory, such as the LDAP server. The way this synchronization takes place depends upon several factors.

Importing Tivoli Users as MicroStrategy Users

When MicroStrategy is configured to import a Tivoli user, the Tivoli user is imported as a MicroStrategy user the first time that the user logs in to MicroStrategy Web after the configuration is completed. A Tivoli user is imported into MicroStrategy only if the Tivoli user has not already been imported as or associated with a MicroStrategy user.

When a Tivoli user is imported into MicroStrategy:


l The Tivoli user name is imported as the trusted authentication request user ID for the new MicroStrategy user.

l The MicroStrategy user is added to the Everyone group by default. If no privileges are defined through a user directory such as LDAP, then the imported user inherits the privileges associated with the MicroStrategy Everyone group.

l Security privileges are not imported from Tivoli; these must be defined in
MicroStrategy by an administrator.

To Import Tivoli Users as MicroStrategy Users

1. From the Windows Start menu, point to All Programs, then MicroStrategy Products, and then select Developer.

2. Log in to a project source as a user with administrative privileges.

3. From the Administration menu, point to Server, and then Configure MicroStrategy Intelligence Server.

4. On the left, expand the Web Single Sign-on category.

5. On the right, select the Import user at login check box.

6. Click OK.

Linking Tivoli Users to Existing MicroStrategy Users

As an alternative to importing users, you can link (or associate) Tivoli users
to existing MicroStrategy users to retain the existing privileges and
configurations defined for the MicroStrategy users. Linking Tivoli users
rather than enabling Tivoli users to be imported when they log in to
MicroStrategy Web enables you to assign privileges and other security
settings for the user prior to their initial login.


To Link Tivoli Users to Existing MicroStrategy Users

1. From the Windows Start menu, point to All Programs, then MicroStrategy Products, and then select Developer.

2. Log in to a project source as a user with administrative privileges.

3. In the folder list on the left, expand Administration, and then expand
User Manager.

4. Browse to the MicroStrategy user to link to a Tivoli user.

5. Right-click the user and select Edit.

6. Expand Authentication, then select Metadata.

7. Under Trusted Authentication Request, in the User ID field, type the Tivoli user name to link to the MicroStrategy user.

The name you type in the User ID field should be the same as the one
that the user employs when providing their Tivoli login credentials.

8. Click OK.

Enabling Guest Access to MicroStrategy Web or Mobile for Tivoli Users

If you choose to not import or link Tivoli users to a MicroStrategy user, you
can enable guest access to MicroStrategy Web for the Tivoli users. Guest
users inherit their privileges from the MicroStrategy Public/Guest group.

Logging in to MicroStrategy Web Using Tivoli Single Sign-On


Once all of the preliminary steps have been completed and tested, users
may begin to sign in to MicroStrategy using their Tivoli credentials. Sign-on
steps are provided in the procedure below.


To Log in to MicroStrategy Web Using Tivoli Single Sign-On

1. Open a web browser.

2. Type the following URL in the address field:


https://MachineName/JunctionName/MicroStrategyWebURL

Where the variables in italics are as follows:

l MachineName is the name of the machine running Tivoli.

l JunctionName is the name of the junction created in Tivoli.

l MicroStrategyWebURL is the URL to access MicroStrategy Web. For example, MicroStrategy/asp.

3. Type your Tivoli user name and password.

4. Connect to a MicroStrategy project.

5. Click Trusted Authentication.

You are logged in to the MicroStrategy project with your Tivoli user
credentials.

If you are prompted to display both secure and non-secure items on the
web page, you can configure your web browser to hide this warning
message. Refer to your web browser documentation regarding this
configuration.

Spring Security SAML Provider 6.2.3 Upgrade


MicroStrategy ONE (June 2024) includes a major upgrade of the Spring frameworks to 6.x on the Jakarta EE 10 platform. The spring-security-saml2-service-provider framework was upgraded to v6.2.3, which deprecates several classes and methods that were replaced. General upgrade notes can be found in Assessing and Updating MicroStrategy Customizations for Tomcat 10.1.x and Spring 6 Compatibility. See also the MicroStrategy 2024 roadmap.

How this update affects users:


l If your MicroStrategy environment is configured to use SAML without any customization, the upgrade to MicroStrategy ONE (June 2024) is seamless and requires no additional steps.

l The update does not impact SAML on ASP.

l If your MicroStrategy environment uses SAML and includes additional customizations, additional steps may be needed after the upgrade. The steps often require you to replace classes with new and more secure classes.

To upgrade existing SAML customizations or integrate new ones in the latest version, see the following SAML customization topics:

l Spring Security SAML Customization for MicroStrategy Library

l Spring Security SAML Customization for MicroStrategy Web and Mobile

Spring Security SAML Customization for MicroStrategy Library


MicroStrategy ONE (June 2024) includes a Spring Security SAML provider upgrade to 6.2. This major upgrade includes deprecated classes and methods. The following topic illustrates the SAML workflow and the beans you can leverage for customization.

SAML Login Workflow

The diagrams and workflows below illustrate how authentication-related requests are handled with different authentication configurations. The following points should be considered when using these workflow diagrams:

l Double-line arrows represent HTTP requests and responses, and single-line arrows represent Java calls.

l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.


l Only beans involved in request authentication are included. Filters that simply pass the request along the filter chain or perform actions not directly involved in request authentication are not included. Each request passes through multiple Spring Security filters, as described in Configuration files and bean classes.

Generate <saml2:AuthnRequest>

1. The multi-mode login page submits a POST: {BasePath}/auth/login request, which is intercepted by the mstrMultiModeFilter bean.

2. The multi-mode login filter recognizes that this is a SAML login request and delegates the work to the SAML login filter bean.

3. The SAML login filter delegates to the mstrSamlEntryPoint SAML entry point bean, which performs a redirection to /saml/authenticate by default.

The redirect supports multi-tenant scenarios. If you've configured more than one asserting party, you can first redirect the user to a picker or, in most cases, leave it as is.

4. The browser redirects and sends a GET: {BasePath}/saml/authenticate request, which is intercepted by the mstrSamlAuthnRequestFilter bean.

5. The mstrSamlAuthnRequestFilter bean is the <saml2:AuthnRequest> endpoint, which creates, signs, serializes, and encodes a <saml2:AuthnRequest> and redirects to the SSO login endpoint.
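The serialize-and-encode step in this redirect follows the standard SAML HTTP-Redirect binding: the XML is DEFLATE-compressed (raw, without a zlib header) and then Base64-encoded into the SAMLRequest query parameter. The following stdlib-only Java sketch illustrates that encoding; it is not MicroStrategy or Spring code, and signing is omitted:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class SamlRedirectEncoding {

    // DEFLATE (raw, no zlib header) then Base64, as in the SAML HTTP-Redirect binding.
    public static String encode(String xml) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION, true);
        deflater.setInput(xml.getBytes(StandardCharsets.UTF_8));
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    // The reverse: Base64-decode, then INFLATE back to the XML string.
    public static String decode(String samlRequestParam) {
        Inflater inflater = new Inflater(true);
        inflater.setInput(Base64.getDecoder().decode(samlRequestParam));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new IllegalArgumentException("Not valid DEFLATE data", e);
        }
        inflater.end();
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String xml = "<saml2:AuthnRequest Version=\"2.0\"/>";
        String param = encode(xml);
        System.out.println("SAMLRequest=" + param);
        System.out.println("round trip ok: " + decode(param).equals(xml));
    }
}
```

Running the sketch round-trips a sample request through the encoding, which can be handy when inspecting the SAMLRequest parameter in a browser trace.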

Bean Descriptions

Bean ID: mstrEntryPoint
Java Class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the URL set in the constructor by the String redirectFilterUrl parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java Class: org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint. The result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.


Customization

Before the AuthnRequest is sent, you can leverage the mstrSamlEntryPoint or mstrSamlAuthnRequestFilter bean, depending on when you want your code to be executed: create a subclass and override the corresponding method with your own logic.

Prior to /saml/authenticate
To customize before the /saml/authenticate redirect:

1. Create a MySAMLEntryPoint class that extends com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper and overrides the commence method.

2. Execute your code before calling super.commence:

public class MySAMLEntryPoint extends SAMLEntryPointWrapper {
MySAMLEntryPoint(String redirectFilterUrl){
super(redirectFilterUrl);
}
@Override
public void commence(HttpServletRequest request, HttpServletResponse
response, AuthenticationException e) throws IOException, ServletException
{
//>>> Your logic here
super.commence(request, response, e);
}
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the mstrSamlEntryPoint bean ID to replace the existing bean:

The constructor argument must be exactly the same as the original, if it is not customized.

<!-- Entry point for SAML authentication mode -->
<bean id="mstrSamlEntryPoint"
class="com.microstrategy.custom.MySAMLEntryPoint">
<constructor-arg value="/saml/authenticate"/>
</bean>


Prior to SSO IDP Redirect

To customize before the SSO IDP redirect:

1. Create a MySAMLAuthenticationRequestFilter class that extends org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter.

2. Override the doFilterInternal method.

3. Execute your code before calling super.doFilterInternal.

public class MySAMLAuthenticationRequestFilter extends
Saml2WebSsoAuthenticationRequestFilter {
@Override
protected void doFilterInternal(HttpServletRequest request,
HttpServletResponse response, FilterChain filterChain) throws
ServletException, IOException {
//>>> Your logic here
super.doFilterInternal(request, response, filterChain);
}
}

4. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the mstrSamlAuthnRequestFilter bean ID to replace the existing bean:

The constructor argument must be exactly the same as the original, if it is not customized.

<bean id="mstrSamlAuthnRequestFilter"
class="MySAMLAuthenticationRequestFilter">
<constructor-arg ref="samlAuthenticationRequestContextResolver"/>
</bean>

Customize the AuthnRequest Object

The AuthnRequest object is constructed by mstrSamlAuthnRequestFilter as a part of the SAML request. If you want to customize the AuthnRequest object before it is sent to the SSO IDP, you can extend SAMLAuthenticationAuthnRequestCustomizer:


In previous releases, AuthnRequest customization was performed by extending SAMLAuthenticationRequestContextConverter, which is deprecated and removed in MicroStrategy ONE (June 2024) in favor of SAMLAuthenticationAuthnRequestCustomizer.

1. Create a MyAuthnRequestCustomizer class that extends com.microstrategy.auth.saml.authnrequest.SAMLAuthenticationAuthnRequestCustomizer, and override the accept method:

package com.microstrategy.custom.auth;
import ...;
public class MyAuthnRequestCustomizer extends
SAMLAuthenticationAuthnRequestCustomizer {
@Override
public void accept
(OpenSaml4AuthenticationRequestResolver.AuthnRequestContext
authnRequestContext) {
super.accept(authnRequestContext);
AuthnRequest authnRequest = authnRequestContext.getAuthnRequest
();
// Add your AuthnRequest customization here...
}
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with an authnRequestCustomizer bean ID to replace the existing bean:

<bean id="authnRequestCustomizer"
class="com.microstrategy.custom.auth.MyAuthnRequestCustomizer"/>


Generate <saml2:Response>

1. SSO redirects the user to the MicroStrategy Web application. The redirect request contains a SAML assertion that describes the authenticated user.

2. The mstrSamlProcessingFilter SAML processing filter bean extracts the SAML assertion from the request and passes it to the samlAuthenticationProvider authentication provider bean.

3. The samlAuthenticationProvider bean verifies the assertion, then calls the Intelligence server credentials provider to build an Intelligence server credentials object from the SAML assertion information.

4. The samlAuthenticationProvider bean passes the Intelligence server credentials to the Session Manager to create an Intelligence server session.

5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.


Bean Descriptions

Bean ID: mstrSamlProcessingFilter
Java Class: org.springframework.security.saml2.provider.service.web.authentication.Saml2WebSsoAuthenticationFilter
Description: The core filter responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java Class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: Responsible for authenticating a user based on information extracted from the SAML assertion.

Bean ID: samlIserverCredentialsProvider
Java Class: com.microstrategy.auth.saml.SAMLIServerCredentialsProvider
Description: Responsible for creating and populating an IServerCredentials instance that defines the credentials for creating Intelligence server sessions. The IServerCredentials object is passed to the Session Manager's login method, which creates the Intelligence server session.

Customization

The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.

You can perform the following customizations:

l Retrieve more information from SAMLResponse

l Customize the login process

l Customize SAMLAssertion validation

Retrieve More Information from SAMLResponse


The mstrSamlProcessingFilter bean is the first layer that directly accesses the SAML response. The bean accepts the raw HttpServletRequest, which contains the samlResponse, and produces a SAMLAuthenticationToken. It is then passed to SAMLAuthenticationProviderWrapper to perform authentication validation in later steps.


1. MicroStrategy recommends that you create a MySAMLConverter class that extends the com.microstrategy.auth.saml.response.SAMLAuthenticationTokenConverter class.

2. Override the convert method and call super.convert, which can get
com.microstrategy.auth.saml.response.SAMLAuthenticati
onToken, a subclass of Saml2AuthenticationToken.

3. Extract the information from the raw request, then return an instance
that is a subclass of Saml2AuthenticationToken:

public class MySAMLConverter extends SAMLAuthenticationTokenConverter {


public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
super(delegate);
}
@Override
public Saml2AuthenticationToken convert(HttpServletRequest request) {
Saml2AuthenticationToken samlAuthenticationToken = super.convert
(request);
// >>> Extract info from request that you are interested in
return samlAuthenticationToken;
}
}

4. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with a samlAuthenticationConverter bean ID:

The constructor argument must be exactly the same as the original, if it is not customized.

<bean id="samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg ref="saml2AuthenticationConverter"/>
</bean>

Customize the Login Process


To verify SAML 2.0 responses, mstrSamlProcessingFilter delegates authentication work to samlAuthenticationProvider. It authenticates a user based on information extracted from a SAML assertion and logs in to the Intelligence server by calling the internal login method.

You can customize this login process at the following three time points:

Point 1: When Pre-Processing the Assertion Before Validating the SAML Response

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper and overrides the authenticate method:

public class MySAMLAuthenticationProviderWrapper extends
SAMLAuthenticationProviderWrapper {
@Override
public Authentication authenticate(Authentication authentication)
throws AuthenticationException {
// >>>> Do your own work before saml assertion validation --->
Point ① in the above diagram
Authentication auth = super.authenticate(authentication);
return auth;
}
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with a samlAuthenticationProvider bean ID and keep the existing bean:

Copyright © 2024 All Rights Reserved 580


Syst em Ad m in ist r at io n Gu id e

The two constructor arguments must be exactly the same as the original, if they are not customized.

<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>

Point 2: Between Point 1 and Point 3

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and overrides the authenticate method:

public class MySAMLAuthenticationProviderWrapper extends
SAMLAuthenticationProvider {
    private @Autowired
    SessionManagerLocator sessionManagerLocator;
    private @Autowired
    HttpServletRequest request;
    private @Autowired(required = false)
    OAuthTokenProvider oAuthTokenProvider;

    @Override
    public Authentication authenticate(Authentication authentication)
            throws AuthenticationException {
        Authentication authResult = super.authenticate(authentication);
        // >>>> Do something after assertion validation but before
        // iserver login ---> Point ② in the above diagram
        IServerCredentials credentials = (IServerCredentials)
            authResult.getCredentials();
        if (!Util.isAdminSession(request)) {
            // No implicit OAuth after SAML login
            if (oAuthTokenProvider == null) {
                SessionManager sessionManager =
                    sessionManagerLocator.getSessionManager();
                try {
                    sessionManager.login(credentials);
                } catch (Exception ex) {
                    throw new AuthenticationServiceException("IServer authentication failed", ex);
                }
            }
        }
        return new AuthenticationWithIServerCredentials(authResult, credentials);
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the samlAuthenticationProvider bean ID and keep the existing bean:

The two constructor arguments must be exactly the same as the original, if they are not customized.

<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>

Point 3: When Filtering Security Roles After Logging in to the Intelligence Server

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper and overrides the authenticate method:

public class MySAMLAuthenticationProviderWrapper extends
SAMLAuthenticationProviderWrapper {
@Override
public Authentication authenticate(Authentication authentication)
throws AuthenticationException {
Authentication auth = super.authenticate(authentication);
// >>>> Do something after iserver login ---> Point ③ in the
above diagram
return auth;


}
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the samlAuthenticationProvider bean ID and keep the existing bean:

The two constructor arguments must be exactly the same as the original, if they are not customized.

<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>

Customize SAMLAssertion Validation


To verify SAML 2.0 responses, mstrSamlProcessingFilter delegates authentication work to the samlAuthenticationProvider bean, which is com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper.

You can configure this in the following ways:

l Set a clock skew or authentication age for timestamp validation

l Perform additional validation

l Coordinate with UserDetailsService

Set a Clock Skew for Timestamp Validation

It is common for your web and IDP servers to have system clocks that are
not perfectly synchronized. You can configure the default
SAMLAssertionValidator assertion validator with some tolerance.


1. Open the SAMLConfig.xml file under the classes/auth/custom folder.

2. Set the responseSkew property to your custom value. By default, it is 300 seconds.

<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
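Conceptually, responseSkew lets timestamp comparison tolerate clock drift: a SAML timestamp is accepted if it falls within the configured number of seconds on either side of the server's current time. The following stdlib-only Java sketch illustrates that check; it is not the validator's actual implementation:

```java
import java.time.Duration;
import java.time.Instant;

public class SkewCheck {

    // Accepts a timestamp if it lies within +/- skewSeconds of 'now',
    // mirroring the idea behind the responseSkew tolerance.
    public static boolean withinSkew(Instant now, Instant timestamp, long skewSeconds) {
        return Duration.between(timestamp, now).abs().getSeconds() <= skewSeconds;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-06-01T12:00:00Z");
        // A timestamp 2 minutes ahead of this server's clock passes with a 300-second skew.
        System.out.println(withinSkew(now, now.plusSeconds(120), 300));  // true
        // One 7 minutes in the past does not.
        System.out.println(withinSkew(now, now.minusSeconds(420), 300)); // false
    }
}
```

Raising responseSkew widens this window in both directions, so keep it as small as your clock synchronization allows.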

Set an Authentication Age for Timestamp Validation

By default, the system allows users to single sign on for up to 2,592,000 seconds after their initial authentication with the IDP (based on the AuthnInstant value of the authentication statement). Some IDPs allow users to stay authenticated for longer periods of time, and you may need to change the default value.

1. Open the SAMLConfig.xml file under the classes/auth/custom folder.

2. Set the maxAuthenticationAge property in the default SAMLAssertionValidator assertion validator to your customized value:

<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
</bean>

Perform Additional Validation

The new spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, the spring SAML framework:


l Validates the <AudienceRestriction> and <DelegationRestriction> conditions.

l Validates <SubjectConfirmation>, except for any IP address information.

MicroStrategy recommends calling super.convert(). You can skip this call if you don't need it to check the <AudienceRestriction> or <SubjectConfirmation>, since you are checking those yourself.

1. Configure your own assertion validator that extends com.microstrategy.auth.saml.response.SAMLAssertionValidator.

2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition:

public class MySAMLAssertionValidator extends SAMLAssertionValidator {


@Override
public Saml2ResponseValidatorResult convert
(OpenSaml4AuthenticationProvider.AssertionToken token) {
Saml2ResponseValidatorResult result = super.convert(token);
OneTimeUseConditionValidator validator = ...;
Assertion assertion = token.getAssertion();
OneTimeUse oneTimeUse = assertion.getConditions().getOneTimeUse
();
ValidationContext context = new ValidationContext();
try {
if (validator.validate(oneTimeUse, assertion, context) ==
ValidationResult.VALID) {
return result;
}
} catch (Exception e) {
return result.concat(new Saml2Error(INVALID_ASSERTION,
e.getMessage()));
}
return result.concat(new Saml2Error(INVALID_ASSERTION,
context.getValidationFailureMessage()));
}
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the samlAssertionValidator bean ID to replace the existing one:


<bean id="samlAssertionValidator"
      class="com.microstrategy.custom.MySAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>

To set properties, see Set a Clock Skew for Timestamp Validation or Set an
Authentication Age for Timestamp Validation.

Customize Intelligence Server Credentials Object with the SAML Assertion Information

You can override SAMLUserDetailsService to customize Intelligence server credentials.

To adjust the Intelligence server credentials that the default implementation creates, extend com.microstrategy.auth.saml.SAMLIServerCredentialsProvider:

1. Create MySAMLUserDetailsService by extending the SAMLIServerCredentialsProvider class and implementing its methods:

package com.microstrategy.custom.auth;

import ...;

public class MySAMLUserDetailsService extends SAMLIServerCredentialsProvider {
    @Override
    public Object loadUserBySAML(SAMLCredential samlCredential) throws AuthenticationException {
        SAMLIServerCredentials iServerCredentials =
                (SAMLIServerCredentials) super.loadUserBySAML(samlCredential);
        // Customize the Intelligence server credentials object with the SAML
        // credential object and other configuration properties.
        return iServerCredentials;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with a samlIserverCredentialsProvider bean ID and keep the existing bean:

If you do not customize them, the constructor arguments and properties must be exactly the same as the original.

<bean id="samlIserverCredentialsProvider"
class="com.microstrategy.auth.saml.SAMLIServerCredentialsProvider">
<!-- SAML Attribute mapping -->
<property name="displayNameAttributeName" value="DisplayName" />
<property name="dnAttributeName" value="DistinguishedName" />
<property name="emailAttributeName" value="EMail" />
<property name="groupAttributeName" value="Groups" />

<!-- Parser for user group information -->


<property name="groupParser" ref="samlGroupParser" />
<!-- Bean responsible for mapping user groups to roles -->
<property name="roleBuilder" ref="samlRoleBuilder"/>
</bean>

To construct Intelligence server credentials on your own, directly implement com.microstrategy.auth.saml.SAMLUserDetailsService:

1. Create MySAMLUserDetailsService by implementing the SAMLUserDetailsService interface and implementing its methods:

package com.microstrategy.custom.auth;

import ...;

public class MySAMLUserDetailsService implements SAMLUserDetailsService {
    @Override
    public Object loadUserBySAML(SAMLCredential samlCredential) throws UsernameNotFoundException {
        SAMLIServerCredentials iServerCredentials = new SAMLIServerCredentials();
        // Customize the Intelligence server credentials object with the SAML
        // credential object and other configuration properties.
        iServerCredentials.setUsername(samlCredential.getNameID().getValue());
        return iServerCredentials;
    }

    @Override
    public void loadSAMLProperties(SAMLConfig samlConfig) {
        // Load attributes from MstrSamlConfig.xml at startup so that they can
        // be used by loadUserBySAML(...).
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a samlIserverCredentialsProvider bean ID and keep the existing bean:

<bean id="samlIserverCredentialsProvider"
      class="com.microstrategy.custom.auth.MySAMLUserDetailsService"/>

Spring Security SAML Customization for MicroStrategy Web and Mobile

MicroStrategy ONE (June 2024) includes a Spring Security SAML provider upgrade to 6.2. This major upgrade deprecates several classes and methods. The following topic illustrates the SAML workflow and the beans you can leverage for customization.

SAML Login Workflow

The diagrams and workflows below illustrate how authentication-related requests are handled with different authentication configurations. Consider the following points when using these workflow diagrams:

l Double-line arrows represent HTTP requests and responses, and single-line arrows represent Java calls.

l The object names correspond to the bean IDs in the configuration XML files. You must view the configuration files to identify which Java classes define those beans.

l Only beans involved in request authentication are included. Filters that simply pass the request along the filter chain or perform actions not directly involved in request authentication are not included. Each request passes through multiple Spring Security filters, as described in Configuration files and bean classes.

Generate <saml2:AuthnRequest>

1. An unauthenticated user accesses a protected endpoint, such as /servlet/mstrWeb, and is intercepted by the springSecurityFilterChain bean.

2. The springSecurityFilterChain bean delegates to the mstrSamlEntryPoint bean, which redirects to /saml/authenticate by default.

   The redirect is designed to support a multi-tenant scenario. If you have configured more than one asserting party, you can first redirect the user to a picker; in most cases, leave it as is.

3. The browser is redirected and sends a GET {BasePath}/saml/authenticate request, which is intercepted by the mstrSamlAuthnRequestFilter bean.

4. The mstrSamlAuthnRequestFilter bean creates, signs, serializes, and encodes a <saml2:AuthnRequest> and redirects the browser to the SSO login endpoint.
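The "serializes and encodes" step in point 4 follows the SAML HTTP-Redirect binding: the XML is DEFLATE-compressed (raw, without the zlib header) and then Base64-encoded before being placed in the SAMLRequest query parameter. The following self-contained sketch uses only plain JDK classes to illustrate the round trip; it is not MicroStrategy's implementation, and it omits the signing step:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterOutputStream;

public class SamlRedirectEncoding {
    // Raw DEFLATE (no zlib header) followed by Base64, as the binding requires.
    static String encode(String xml) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DeflaterOutputStream out =
                new DeflaterOutputStream(buf, new Deflater(Deflater.DEFAULT_COMPRESSION, true))) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        return Base64.getEncoder().encodeToString(buf.toByteArray());
    }

    // The inverse: Base64-decode, then inflate back to the XML text.
    static String decode(String encoded) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (InflaterOutputStream out = new InflaterOutputStream(buf, new Inflater(true))) {
            out.write(Base64.getDecoder().decode(encoded));
        }
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<saml2p:AuthnRequest ID=\"_example\"/>";
        System.out.println(decode(encode(xml)).equals(xml)); // the encoding round-trips
    }
}
```

The deflated-and-encoded string is what appears as the SAMLRequest parameter in the redirect produced by mstrSamlAuthnRequestFilter.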

Bean Descriptions

Bean ID: mstrEntryPoint
Java Class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the location set in the constructor by the String redirectFilterUrl parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java Class: org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint, and the result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.

Customization

Before the AuthnRequest is sent, you can leverage the mstrSamlEntryPoint bean: depending on when you want your code to be executed, create a subclass and override the corresponding method with your own logic.

Prior to /saml/authenticate Redirect

To customize behavior before the /saml/authenticate redirect:

1. Create a MySAMLEntryPoint class that extends com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper and overrides the commence method.

2. Execute your code before calling super.commence:

public class MySAMLEntryPoint extends SAMLEntryPointWrapper {
    MySAMLEntryPoint(String redirectFilterUrl) {
        super(redirectFilterUrl);
    }

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
                         AuthenticationException e) throws IOException, ServletException {
        // >>> Your logic here
        super.commence(request, response, e);
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a mstrEntryPoint bean ID to replace the existing bean:

If it is not customized, the constructor argument must be exactly the same as the original.

<!-- Entry point for SAML authentication mode -->
<bean id="mstrEntryPoint"
      class="com.microstrategy.custom.MySAMLEntryPoint">
    <constructor-arg value="/saml/authenticate"/>
</bean>

Prior to SSO IDP Redirect

To customize before the SSO IDP redirect:

1. Create a MySAMLAuthenticationRequestFilter class that extends org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter and overrides the doFilterInternal method.

2. Execute your code before calling super.doFilterInternal:

public class MySAMLAuthenticationRequestFilter extends Saml2WebSsoAuthenticationRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        // >>> Your logic here
        super.doFilterInternal(request, response, filterChain);
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a mstrSamlAuthnRequestFilter bean ID to replace the existing bean:

If it is not customized, the constructor argument must be exactly the same as the original.

<bean id="mstrSamlAuthnRequestFilter"
      class="com.microstrategy.custom.MySAMLAuthenticationRequestFilter">
    <constructor-arg ref="samlAuthenticationRequestContextResolver"/>
</bean>

Customize the AuthnRequest Object

The AuthnRequest object is constructed by mstrSamlAuthnRequestFilter as a part of the SAML request. If you want to customize the AuthnRequest object before it is sent to the SSO IDP, you can extend SAMLAuthenticationAuthnRequestCustomizer:

In previous releases, AuthnRequest customization was performed by extending SAMLAuthenticationRequestContextConverter, which is deprecated and removed in MicroStrategy ONE (June 2024) in favor of SAMLAuthenticationAuthnRequestCustomizer.

1. Create a MyAuthnRequestCustomizer class that extends com.microstrategy.auth.saml.authnrequest.SAMLAuthenticationAuthnRequestCustomizer and overrides the accept method:

package com.microstrategy.custom.auth;

import ...;

public class MyAuthnRequestCustomizer extends SAMLAuthenticationAuthnRequestCustomizer {
    @Override
    public void accept(OpenSaml4AuthenticationRequestResolver.AuthnRequestContext authnRequestContext) {
        super.accept(authnRequestContext);
        AuthnRequest authnRequest = authnRequestContext.getAuthnRequest();
        // Add your AuthnRequest customization here...
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with an authnRequestCustomizer bean ID to replace the existing bean:

<bean id="authnRequestCustomizer"
class="com.microstrategy.custom.auth.MyAuthnRequestCustomizer"/>

Generate <saml2:Response>

1. SSO redirects the user to the MicroStrategy Web application. The redirect request contains a SAML assertion that describes the authenticated user.

2. The mstrSamlProcessingFilter SAML processing filter bean extracts the SAML assertion from the request and passes it to the samlAuthenticationProvider authentication provider bean.

3. The samlAuthenticationProvider bean verifies the assertion, then calls the Intelligence server credentials provider to build an Intelligence server credentials object from the SAML assertion information.

4. The mstrSamlProcessingFilter bean saves the authentication object in the HTTP session.

5. The SAML processing filter calls the login success handler, which redirects the browser to the original request.
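As a rough mental model of steps 2 through 5, the following self-contained sketch uses toy types rather than the actual MicroStrategy or Spring classes; the session key and redirect target are illustrative assumptions only:

```java
import java.util.HashMap;
import java.util.Map;

public class ResponsePipelineSketch {
    // Toy stand-in for the samlAuthenticationProvider bean (step 3).
    interface Provider { String authenticate(String assertion); }

    // Toy stand-in for mstrSamlProcessingFilter (steps 2, 4, and 5).
    static String processResponse(Map<String, Object> session, String request, Provider provider) {
        String assertion = request;                            // step 2: extract the assertion
        String credentials = provider.authenticate(assertion); // step 3: verify and build credentials
        session.put("authentication", credentials);            // step 4: save in the HTTP session
        return "redirect:/servlet/mstrWeb";                    // step 5: back to the original request
    }

    public static void main(String[] args) {
        Provider provider = assertion -> {
            if (!assertion.contains("<Assertion")) {
                throw new IllegalArgumentException("invalid assertion");
            }
            return "credentials for authenticated user";
        };
        Map<String, Object> session = new HashMap<>();
        System.out.println(processResponse(session, "<Assertion>...</Assertion>", provider));
        System.out.println(session.get("authentication"));
    }
}
```

The real filter chain performs signature and condition validation inside the provider step; the customization hooks described below let you run code before and after each of these stages.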

Bean Descriptions

Bean ID: mstrSamlProcessingFilter
Java Class: com.microstrategy.auth.saml.response.SAMLProcessingFilterWrapper
Description: The core filter responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java Class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

Bean ID: userDetails
Java Class: com.microstrategy.auth.saml.SAMLUserDetailsServiceImpl
Description: This bean is responsible for creating and populating an IServerCredentials instance that defines the credentials for creating Intelligence server sessions. The IServerCredentials object is saved to the HTTP session, which is used to create the Intelligence server session for future requests.

Customization

The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.

You can perform the following customizations:

l Retrieve more information from SAMLResponse

l Customize the login process

l Customize SAMLAssertion validation

Retrieve More Information from SAMLResponse

The mstrSamlProcessingFilter bean is the first layer that directly accesses the SAML response. The bean accepts the raw HttpServletRequest, which contains the samlResponse, and produces a SAMLAuthenticationToken. The token is then passed to SAMLAuthenticationProviderWrapper, which performs authentication validation in later steps.

To extract more information from HttpServletRequest:

1. MicroStrategy recommends that you create a MySAMLConverter class that extends the com.microstrategy.auth.saml.response.SAMLAuthenticationTokenConverter class.

2. Override the convert method and call super.convert, which returns a com.microstrategy.auth.saml.response.SAMLAuthenticationToken, a subclass of Saml2AuthenticationToken.

3. Extract the information from the raw request, then return an instance that is a subclass of Saml2AuthenticationToken:

public class MySAMLConverter extends SAMLAuthenticationTokenConverter {
    public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
        super(delegate);
    }

    @Override
    public Saml2AuthenticationToken convert(HttpServletRequest request) {
        Saml2AuthenticationToken samlAuthenticationToken = super.convert(request);
        // >>> Extract the information from the request that you are interested in
        return samlAuthenticationToken;
    }
}

4. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a samlAuthenticationConverter bean ID.

If it is not customized, the constructor argument must be exactly the same as the original.

<bean id="samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg ref="saml2AuthenticationConverter"/>
</bean>

Customize the Login Process

To verify SAML 2.0 responses, mstrSamlProcessingFilter delegates authentication work to samlAuthenticationProvider. It authenticates a user based on information extracted from a SAML assertion and returns a fully populated com.microstrategy.auth.saml.response.SAMLAuthentication object, including granted authorities. Then mstrSamlProcessingFilter saves the authentication result in the HTTP session.

You can customize this login process at the following three points:

Point 1: When Pre-Processing the Assertion Before Validating the SAML Response

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and overrides the authenticate method:

public class MySAMLAuthenticationProviderWrapper extends SAMLAuthenticationProvider {
    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        // >>>> Do your own work before SAML assertion validation ---> Point ① in the above diagram
        Authentication auth = super.authenticate(authentication);
        return auth;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a samlAuthenticationProvider bean ID and keep the existing bean:

If they are not customized, the two referenced beans must be exactly the same as the original.

<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>

Point 2: When Customizing the Logic of User Authentication

1. Create a MySAMLAuthenticationProviderWrapper class that extends com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and overrides the authenticate method:

public class MySAMLAuthenticationProviderWrapper extends SAMLAuthenticationProvider {
    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        Authentication authResult = super.authenticate(authentication);
        // >>>> Do something after assertion validation but before the Intelligence
        // server login ---> Point ② in the above diagram
        return new CustomAuthentication(authResult);
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with a samlAuthenticationProvider bean ID and keep the existing bean:

If they are not customized, the two referenced beans must be exactly the same as the original.

<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>

Point 3: Doing Work Before or After Saving the Authentication Result in the HTTP Session

1. Create a MySAMLProcessingFilterWrapper class that extends com.microstrategy.auth.saml.response.SAMLProcessingFilterWrapper and overrides the attemptAuthentication method:

public class MySAMLProcessingFilterWrapper extends SAMLProcessingFilterWrapper {
    @Override
    public Authentication attemptAuthentication(HttpServletRequest request,
            HttpServletResponse response) throws AuthenticationException {
        Authentication authResult = super.attemptAuthentication(request, response);
        // >>>> Do something after the user login ---> Point ③ in the above diagram
        return authResult;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with an mstrSamlProcessingFilter bean ID and keep the existing bean:

If you do not customize them, the constructor argument and properties must be exactly the same as the original.

<bean id="mstrSamlProcessingFilter"
class="com.microstrategy.custom.MySAMLProcessingFilterWrapper">
<constructor-arg ref="samlAuthenticationConverter" />
<property name="authenticationManager" ref="authenticationManager" />
<property name="authenticationSuccessHandler"
ref="successRedirectHandler" />
<property name="authenticationFailureHandler"
ref="failureRedirectHandler" />
<property name="requiresAuthenticationRequestMatcher"
ref="samlSsoMatcher" />
</bean>

Customize SAMLAssertion Validation

To verify SAML 2.0 responses, mstrSamlProcessingFilter delegates authentication work to the samlAuthenticationProvider bean, which is com.microstrategy.auth.saml.response.SAMLAuthenticationProvider.

You can configure this in the following ways:

l Set a clock skew or authentication age for timestamp validation

l Perform additional validation

l Coordinate with UserDetailsService

Set a Clock Skew for Timestamp Validation

It is common for your web and IDP servers to have system clocks that are not perfectly synchronized. You can configure the default SAMLAssertionValidator assertion validator with some tolerance.

1. Open the SAMLConfig.xml file under the classes/auth/custom folder.

2. Set the responseSkew property to your custom value. By default, it is 300 seconds.

<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
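Conceptually, the skew relaxes time-window checks such as a condition's NotOnOrAfter. The following self-contained sketch, written with plain java.time code rather than the validator's actual implementation, shows the effect of a 300-second tolerance:

```java
import java.time.Duration;
import java.time.Instant;

public class SkewCheck {
    // A NotOnOrAfter condition passes if 'now' is still earlier than the
    // condition time extended by the allowed skew.
    static boolean withinNotOnOrAfter(Instant now, Instant notOnOrAfter, Duration skew) {
        return now.isBefore(notOnOrAfter.plus(skew));
    }

    public static void main(String[] args) {
        Instant notOnOrAfter = Instant.parse("2024-01-01T12:00:00Z");
        Duration skew = Duration.ofSeconds(300); // matches the default responseSkew of 300 seconds
        // Two minutes past expiry, but inside the five-minute tolerance window:
        System.out.println(withinNotOnOrAfter(Instant.parse("2024-01-01T12:02:00Z"), notOnOrAfter, skew));
        // Ten minutes past expiry, outside the window:
        System.out.println(withinNotOnOrAfter(Instant.parse("2024-01-01T12:10:00Z"), notOnOrAfter, skew));
    }
}
```

Setting responseSkew too high weakens replay protection, so keep the tolerance only as large as your observed clock drift requires.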

Set an Authentication Age for Timestamp Validation

By default, the system allows users to single sign-on for up to 2,592,000 seconds (30 days) since their initial authentication with the IDP (based on the AuthnInstant value of the authentication statement). Some IDPs allow users to stay authenticated for longer periods of time, and you may need to change the default value.

1. Open the SAMLConfig.xml file under the classes/auth/custom folder.

2. Set the maxAuthenticationAge property in the default SAMLAssertionValidator assertion validator to your customized value:

<bean id="samlAssertionValidator"
      class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
</bean>
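The age check itself amounts to comparing the time elapsed since the assertion's AuthnInstant against the configured maximum. A self-contained sketch of that comparison, illustrative only and not MicroStrategy's code:

```java
import java.time.Duration;
import java.time.Instant;

public class AuthnAgeCheck {
    // The single sign-on session is still acceptable if the time elapsed since
    // the IDP's AuthnInstant does not exceed maxAuthenticationAge.
    static boolean withinMaxAge(Instant authnInstant, Instant now, Duration maxAge) {
        return Duration.between(authnInstant, now).compareTo(maxAge) <= 0;
    }

    public static void main(String[] args) {
        Duration maxAge = Duration.ofSeconds(2_592_000); // the default: 30 days
        Instant authnInstant = Instant.parse("2024-01-01T00:00:00Z");
        // 14 days after the initial IDP authentication: still within the limit.
        System.out.println(withinMaxAge(authnInstant, Instant.parse("2024-01-15T00:00:00Z"), maxAge));
        // 45 days after: the authentication is considered too old.
        System.out.println(withinMaxAge(authnInstant, Instant.parse("2024-02-15T00:00:00Z"), maxAge));
    }
}
```

Increase maxAuthenticationAge only to match what your IDP actually allows; an assertion older than the limit is rejected even if its signature is valid.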

Perform Additional Validation

The new Spring SAML framework performs minimal validation on SAML 2.0 assertions. After verifying the signature, the Spring SAML framework:

l Validates the <AudienceRestriction> and <DelegationRestriction> conditions.

l Validates <SubjectConfirmation>, except for any IP address information.

MicroStrategy recommends calling super.convert(). You can skip this call if you do not need it to check <AudienceRestriction> or <SubjectConfirmation> because you are validating those yourself.

1. Configure your own assertion validator that extends com.microstrategy.auth.saml.response.SAMLAssertionValidator.

2. Perform your own validation. For example, you can use OpenSAML's OneTimeUseConditionValidator to also validate a <OneTimeUse> condition:

public class MySAMLAssertionValidator extends SAMLAssertionValidator {
    @Override
    public Saml2ResponseValidatorResult convert(OpenSaml4AuthenticationProvider.AssertionToken token) {
        Saml2ResponseValidatorResult result = super.convert(token);
        OneTimeUseConditionValidator validator = ...;
        Assertion assertion = token.getAssertion();
        OneTimeUse oneTimeUse = assertion.getConditions().getOneTimeUse();
        ValidationContext context = new ValidationContext();
        try {
            if (validator.validate(oneTimeUse, assertion, context) == ValidationResult.VALID) {
                return result;
            }
        } catch (Exception e) {
            return result.concat(new Saml2Error(INVALID_ASSERTION, e.getMessage()));
        }
        return result.concat(new Saml2Error(INVALID_ASSERTION, context.getValidationFailureMessage()));
    }
}

3. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/auth/custom folder with the samlAssertionValidator bean ID to replace the existing one:

<bean id="samlAssertionValidator"
      class="com.microstrategy.custom.MySAMLAssertionValidator">
    <property name="maxAuthenticationAge" value="2592000"/><!-- 30 days -->
    <property name="responseSkew" value="300"/>
</bean>

To set properties, see Set a Clock Skew for Timestamp Validation or Set an
Authentication Age for Timestamp Validation.

Customize Intelligence Server Credentials Object with the SAML Assertion Information

You can override SAMLUserDetailsService to customize Intelligence server credentials.

To adjust the Intelligence server credentials that the default implementation creates, extend com.microstrategy.auth.saml.SAMLUserDetailsServiceImpl:

1. Create MySAMLUserDetailsService by extending the SAMLUserDetailsServiceImpl class and implementing its methods:

package com.microstrategy.custom.auth;

import ...;

public class MySAMLUserDetailsService extends SAMLUserDetailsServiceImpl {
    @Override
    public Object loadUserBySAML(SAMLCredential samlCredential) throws UsernameNotFoundException {
        SAMLIServerCredentials iServerCredentials =
                (SAMLIServerCredentials) super.loadUserBySAML(samlCredential);
        // Customize the Intelligence server credentials object with the SAML
        // credential object and other configuration properties.
        return iServerCredentials;
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the userDetails bean ID and keep the existing bean:

If you do not customize them, the constructor arguments and properties must be exactly the same as the original.

<bean id="userDetails"
class="com.microstrategy.custom.auth.MySAMLUserDetailsService">
<!-- SAML Attribute mapping -->
<property name="displayNameAttributeName" value="DisplayName" />
<property name="dnAttributeName" value="DistinguishedName" />
<property name="emailAttributeName" value="EMail" />
<property name="groupAttributeName" value="Groups" />

<!-- Parser for user group information -->


<property name="groupParser" ref="groupParser" />
<!-- Bean responsible for mapping user groups to roles -->
<property name="roleBuilder" ref="roleBuilder"/>
</bean>

To construct Intelligence server credentials on your own, directly implement com.microstrategy.auth.saml.SAMLUserDetailsService:

1. Create MySAMLUserDetailsService by implementing the SAMLUserDetailsService interface and implementing its methods:

package com.microstrategy.custom.auth;

import ...;

public class MySAMLUserDetailsService implements SAMLUserDetailsService {
    @Override
    public Object loadUserBySAML(SAMLCredential samlCredential) throws UsernameNotFoundException {
        SAMLIServerCredentials iServerCredentials = new SAMLIServerCredentials();
        // Customize the Intelligence server credentials object with the SAML
        // credential object and other configuration properties.
        iServerCredentials.setUsername(samlCredential.getNameID().getValue());
        return iServerCredentials;
    }

    @Override
    public void loadSAMLProperties(SAMLConfig samlConfig) {
        // Load attributes from MstrSamlConfig.xml at startup so that they can
        // be used by loadUserBySAML(...).
    }
}

2. Configure your customized bean (Fully Qualified Class Name) in SAMLConfig.xml under the classes/resources/SAML/custom folder with the userDetails bean ID and keep the existing bean:

<bean id="userDetails"
      class="com.microstrategy.custom.auth.MySAMLUserDetailsService"/>

Enable Badge Authentication for Web and Mobile

If you use an LDAP directory to centrally manage users in your environment, you can add them to your Identity network and allow them to log into MicroStrategy Web or Mobile by using their badges from MicroStrategy Badge.

The users in your LDAP directory can log into MicroStrategy Web by:

l Scanning a QR code using the Badge app on their smartphones, if Badge is configured as the primary authentication method.

l Supplementing their user name and password with a numerical Badge Code that is provided via the Badge app on their smartphones, if Badge is configured as the second factor of authentication.

The high-level steps to enable Badge authentication for Web and Mobile are as follows:

1. Set up an Identity network. Your network is the group of users in your organization who can use the Badge app on their smartphone to validate their identity to log into MicroStrategy. For steps, see the Identity Help.

2. Add your LDAP directory to your Identity network. For steps to add your LDAP directory to Identity, see the Identity Help.

3. If you are importing users from LDAP, connect LDAP by leveraging the connection between LDAP and your MicroStrategy Identity Server. Alternatively, you can manually connect your LDAP directory to MicroStrategy. Otherwise, import your MicroStrategy user data into the Identity network. For more information, see the Identity Help.

4. Register your MicroStrategy environment with Badge.

5. Configure Badge in MicroStrategy Web and Mobile.

Registering your MicroStrategy Products with Badge

To establish a connection between Badge and your MicroStrategy products, follow the steps below.

You have created an Identity network and badges for your users. Your network is the group of users in your organization who can use the Badge app on their smartphone to validate their identity to log into MicroStrategy. For steps to create an Identity network, see the Identity Help.

You have connected an LDAP user directory to MicroStrategy. For steps to connect your LDAP directory to MicroStrategy, see Implement LDAP Authentication, page 160.

To Register MicroStrategy with Badge

1. In a web browser, log into MicroStrategy Identity Manager.

2. Click Logical Gateways.

3. In the MicroStrategy Platform Login area, click the MicroStrategy icon and click Continue.

4. To change the image that is displayed on the login page when users open MicroStrategy Web, click Import an Icon. Select an image to display and click Open.

5. In the Enter Display Name field, enter a name to display on your MicroStrategy login page.

6. Click Next. The Set Up Your MicroStrategy Platform page is shown, with the details to configure your MicroStrategy Intelligence Server.

7. Note the values for Organization ID, Application ID, and Token. You use these values to configure MicroStrategy Intelligence Server.

8. Click Done.

Configuring Badge in MicroStrategy Web and Mobile

To allow your users to log into MicroStrategy Web and Mobile with MicroStrategy Badge, you must configure Badge as a trusted authentication provider in Web Administrator and Mobile Administrator, as described in the steps below.

You have registered your MicroStrategy products with Badge, as described in Registering your MicroStrategy Products with Badge, page 607, and noted the Organization ID, Application ID, and Token provided.

You have upgraded your MicroStrategy metadata. For steps to upgrade your MicroStrategy metadata, see the Upgrade Help.

Enabling Badge authentication without upgrading your metadata may cause your users to be locked out of MicroStrategy applications.

If you are enabling two-factor authentication for Web using Badge, you have added at least one user to the Two-factor Exempt (2FAX) user group in your MicroStrategy project. MicroStrategy users who are members of the Two-factor Exempt (2FAX) group are exempt from two-factor authentication and do not need to provide a Badge Code to log into MicroStrategy Web. It is recommended that these users have a secure password for their accounts and use their accounts for troubleshooting MicroStrategy Web.

Ensure that you configure your LDAP server information correctly in your Intelligence Server. If it is not configured correctly, two-factor authentication cannot be used, and therefore users will not be able to log into the server.

Enabling Badge Authentication in Web and Mobile

To Configure Intelligence Server for Badge Authentication

1. From the Windows Start menu, select All Programs > MicroStrategy
Tools > Web Administrator.

2. For your Intelligence Server, click Modify.

3. Click Setup.

4. In the Connectivity section, in the MicroStrategy Identity Server URL field, enter the MicroStrategy Identity Server URL and port number for 1-way SSL.

5. In the OrgID field, enter the Organization ID from MicroStrategy Identity Manager.

6. In the AppID field, enter the Application ID from MicroStrategy Identity Manager.

7. If you want to use Badge as a two-factor authentication system, select the Enable two-factor authentication checkbox. The Security token field is enabled.

   MicroStrategy users who are members of the Two-factor Exempt (2FAX) group are exempt from two-factor authentication and do not need to provide a Badge Code to log into MicroStrategy Web. It is recommended that these users have a secure password for their accounts and use their accounts for troubleshooting MicroStrategy Web.

8. In the Security token field, enter the Security Token from MicroStrategy Identity Manager.

9. To use the connection between your MicroStrategy Identity Server and LDAP, check the box labeled Import Badge User. By enabling the import process, the Badge users synchronized from LDAP are added without having to manually add them.

10. Click Save.

To Enable Badge Authentication in Web and Mobile

1. In Web Administrator, click Default Properties.

2. In the Login area, for Trusted Authentication Request, select the
Enabled checkbox.

3. From the Trusted Authentication Providers drop-down menu, select
Badge.

4. Click Save.

5. In Mobile Administrator, click Mobile Configuration.

6. For the configuration name where you want to enable Badge
authentication, click the Modify icon in the Actions column.

7. Click the Connectivity Settings tab.


8. In the Default Project Authentication area, open the drop-down menu
for the Authentication mode setting and select Badge.

9. Click Save.

10. Return to the Mobile Configuration page and repeat the modify steps
for each additional configuration where you want to enable Badge
authentication.

How to Enable Seamless Login Between Web and Library

Enabling seamless login allows you to navigate between MicroStrategy Web
and Library without having to re-authenticate, regardless of your configured
authentication mode. It uses an encrypted (secret) key to securely share the
session among the applications.

For new installations of MicroStrategy ONE, seamless login is configured
and active if the prerequisite components are installed on the same
machine. Distributed environments and customers upgrading to
MicroStrategy ONE need to configure the secret key by following the steps
below.

Important Considerations
The following are some points to keep in mind while configuring seamless
login between Web and Library:

• For Web and Library configuration, use the same Intelligence Server.

• For collaboration to work properly, use the same secret key in
config.json.
Configure Web and Library Applications for Seamless Login

1. Launch MicroStrategy Web.

2. Go to Preferences.

3. Under Project Defaults > MicroStrategy Library configuration, enter your
Library URL:

<FQDN>:<port>/MicroStrategyLibrary/

4. Click Apply.

Configure the Secret Key Between Web and Library

If the secret key is not available in the configOverrides.properties
file, you can add any phrase or passcode to the parameter to be used as
the secret key. A secret key must be a base64 formatted string with a
minimum length of 88 characters.

1. Open the Library configOverrides.properties file with a text
editor.

2. Copy the token value from the identityToken.secretKey
parameter.

3. Open the Web Administration page
(<FQDN>:<port>/MicroStrategy/servlet/mstrWebAdmin).

4. Select Security from the left-side navigation.

5. Under MicroStrategy Library configuration, enter your secret key.

6. Click Save.

7. Restart your MicroStrategy Web server to apply the changes.
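If you need to produce a new key, one way to generate a value that satisfies the 88-character base64 requirement is with OpenSSL. This is a sketch, not the only method; any tool that emits at least 88 base64 characters works. Paste the output into the identityToken.secretKey parameter and into the Web Administration page as described above.

```shell
# 66 random bytes encode to exactly 88 base64 characters (66 / 3 * 4),
# the documented minimum length for the seamless-login secret key.
# tr strips the line wrapping that openssl adds to long base64 output.
openssl rand -base64 66 | tr -d '\n'
```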

Configure Web and Library Applications for Seamless Login

1. In MicroStrategy Web, open Preferences > Project Defaults.

2. Enter your MicroStrategy Library URL
(<FQDN>:<port>/MicroStrategyLibrary) in the Link to
MicroStrategy Library field.

3. Open the Library Administration Control Panel
(<FQDN>:<port>/MicroStrategyLibrary/admin).

4. Open the Library Server tab.


5. Enter your MicroStrategy Web URL into the Link field under
MicroStrategy Web
(<FQDN>:<port>/MicroStrategy/servlet/mstrWeb).

6. Click Save.

Related Topics
KB485196: A seamless login error occurs when launching MicroStrategy
Library from MicroStrategy Web

Set Default Authentication for Library Web Using the
Library Server Config File
Starting in MicroStrategy 2021 Update 8, the administrator can choose the
default authentication mode for the Library server. To make this change
using Workstation instead, see Set Default Authentication for Library Web in
Workstation.

1. Edit the following Library server config file:

MicroStrategyLibrary/WEB-INF/classes/config/configOverrides.properties

2. Set the default mode to one of the corresponding values below. When
setting auth.modes.default, make sure the value is one of the
auth.modes.available values.

To set the default mode to LDAP, use auth.modes.default = 16.


Supported Authentication Mode    Authentication Mode Value

Standard                         1
Guest                            8
LDAP                             16
Trusted                          64
Integrated                       128
SAML                             1048576
OIDC                             4194304
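As an illustration of the settings above, a configOverrides.properties fragment that offers Standard and LDAP login and preselects LDAP might look like the following. The comma-separated list format shown for auth.modes.available is an assumption here; keep whatever format and values your deployment already uses.

```properties
# Modes offered by the Library server (1 = Standard, 16 = LDAP).
# NOTE: the list format is an assumption; mirror your existing file.
auth.modes.available=1,16
# Default mode; must be one of the available modes above.
auth.modes.default=16
```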

Implement Database Warehouse Authentication


This mode of authentication identifies users by means of a login ID and
password stored in the data warehouse database. The RDBMS is the
authentication authority and verifies that the login ID and password are
valid. Each report is executed on the RDBMS under the RDBMS account of
the user who submitted the report from the MicroStrategy system. Users log
in to the MicroStrategy system with their RDBMS login and password, and
each MicroStrategy user is linked to an RDBMS account.

Use database warehouse authentication if you want the data warehouse
RDBMS to be the authority for identifying users and you do not want to
maintain user credentials in Intelligence Server as well as the RDBMS. You
can also use this configuration if you need to keep an audit trail at the
RDBMS level for each query executed by each individual user.

If you use database authentication, for security reasons MicroStrategy
recommends that you use the setting Create caches per database login.
This ensures that users who execute their reports using different database
login IDs cannot use the same cache. You can set this in the Project
Configuration Editor in the Caching: Result Caches: Creation category.


Database Warehouse Authentication Information Flow

The following scenario presents an overview of the general flow of
information between Intelligence Server and a database server when a
database user logs into Developer or MicroStrategy Web.

1. The user anonymously logs in to a project source.

This is done anonymously because the user has not yet logged in to a
specific project. Because a warehouse database is not associated with
the project source itself, users are not authenticated until they select a
project to use. For more information about anonymous authentication,
including instructions on enabling it for a project source, see Implement
Anonymous Authentication, page 158.

• By default, the Public/Guest group is denied access to all projects. A
security role with View access to the projects must be explicitly
assigned to the Public/Guest group, so that these users can see and
log in to the available projects.

• All users logging in to a database warehouse authentication project
source can see all projects visible to the Guest user. Project access
is then granted or denied for each individual user when the user
attempts to log into the project.

2. The user selects a project, and then logs in to that project using their
data warehouse login ID and password. They are authenticated against
the data warehouse database associated with that project.

To enable database authentication, you must link the users in the
MicroStrategy metadata to RDBMS users. Linking causes Intelligence
Server to map a warehouse database user to a MicroStrategy user. If a
user attempts to log in to a project without having been linked to a
MicroStrategy user, a "User not found" error message is returned.


Steps to Implement Database Warehouse Authentication

The procedure below gives the high-level steps for configuring your
Intelligence Server for database warehouse authentication.

High-Level Steps for Configuring Database Warehouse Authentication

1. Create a DSN and a database instance for the authentication database.

2. Configure the project source to allow anonymous authentication (see
Implement Standard Authentication, page 156).

3. Configure the project source to use database warehouse authentication
(see Configuring the Authentication Mode for a Project Source, page
154).

4. Assign a security role to the Public/Guest group for each project to
which you want to provide access (see Defining Sets of Privileges:
Security Roles, page 106).

5. Link each MicroStrategy user to an RDBMS user. In the User Editor, in
the Authentication: Metadata category, type the data warehouse login
ID in the Database Login field.

You can create the MicroStrategy users by importing a list of the
RDBMS users into the MicroStrategy system. For instructions, see
Creating, Importing, and Deleting Users and Groups, page 86.

6. For each project, in the Project Configuration Editor, in the Database
instances: Authentication: Metadata category, specify the database
instance for the authentication database.

7. For each project, enable database execution using linked warehouse
logins (see Linking Database Users and MicroStrategy Users:
Passthrough Execution, page 119).


8. To enable database authentication in MicroStrategy Web, log in as an
administrator. On the Preferences page, select Project Defaults. The
Project Defaults page is displayed.

9. Under Security, select the Database Authentication check box, and
then click Apply.

Authentication Examples
Below are a few examples of how the different methods for user
authentication can be combined with different methods for database
authentication to achieve the security requirements of your MicroStrategy
system. These examples illustrate a few possibilities; other combinations
are possible.

Security Views: Windows Authentication and Linked
Warehouse Login
You may want to use this configuration if you are using security views to
implement access control policies for data. For example, two different users
executing the same SQL query receive different results, reflecting their
different levels of access. For the security views to work, each report is
executed under the RDBMS account of the user who submitted the report
from the MicroStrategy system. Even though this approach requires users to
have accounts on the RDBMS, you may choose to use Windows
authentication so that users do not have to remember their RDBMS login ID
and password when logging in to the MicroStrategy system. With Windows
authentication, users are automatically logged in to the MicroStrategy
system using their Windows ID and password.

For detailed information about security views, see Controlling Access to
Data at the Database (RDBMS) Level, page 139.


To Establish the Configuration

1. In Developer, open the Project Source Manager, and on the Advanced
tab, select Use network login ID (Windows authentication) as the
Authentication mode.

2. From Web, log in as an administrator and select Preferences, select
Project Defaults, select Security, and then enable Windows
Authentication as the login mode.

3. In Developer, in the User Editor, expand Authentication, then select
Warehouse.

4. Link users to their respective database user IDs using the Warehouse
passthrough Login and Warehouse passthrough password boxes
for each user. For details on each option, click Help.

5. On each project where you want database execution to use linked
warehouse logins, enable the corresponding setting. To do this,
right-click the project and select Project Configuration, expand the
Database instances category, click Execution, and select the Use
linked warehouse login for execution check box.

Connection Maps: Standard Authentication, Connection Maps,
and Partitioned Fact Tables
You may want to use this configuration if you implement access control
policies in the RDBMS so that you can have multiple user accounts in the
RDBMS, but not necessarily one for every user. In addition, you must use
connection maps to enable report subscriptions if you are using Microsoft
Analysis Services with integrated authentication.

For example, you are partitioning fact tables by rows, as described in
Controlling Access to Data at the Database (RDBMS) Level, page 139. You
have a user ID for the 1st National Bank that only has access to the table
containing records for that bank, and another user ID for the Eastern Credit
Bank that only has access to its corresponding table. Depending on the user
ID used to log in to the RDBMS, a different table is used in SQL queries.

Although there are only a small number of user IDs in the RDBMS, there are
many more users who access the MicroStrategy application. When users
access the MicroStrategy system, they log in using their MicroStrategy user
names and passwords. Using connection maps, Intelligence Server uses
different database accounts to execute queries, depending on the user who
submitted the report.

To Establish this Configuration

1. In Developer, open the Project Source Manager and click Modify.

2. On the Advanced tab, select Use login ID and password entered by
the user (standard authentication) as the Authentication mode.
This is the default setting.

3. From Web, log in as an administrator and select Preferences, select
Project Defaults, select Security, and then enable Standard (user
name & password) as the login mode.

4. Create a database login for each of the RDBMS accounts.

5. Create a user group in the MicroStrategy system corresponding to each
of the RDBMS accounts and then assign multiple users to these groups
as necessary.

6. Define a connection mapping that maps each user group to the
appropriate database login.


Security Configurations in MicroStrategy
MicroStrategy's software platform ships with hardened security
configurations present by default where possible. In many cases, security
configurations depend on the infrastructure where the system is operated;
in other cases, they depend on organizational requirements and unique
operating environments. This section documents the security configurations
that are available to further harden a MicroStrategy deployment.

Security Configuration (Workstation | Library | Mobile | Web)

Certificate Files: Common Extensions and Conversions   X X X X
Encryption Key Manager   X X X X
Self-Signed Certificates: Creating a Certificate Authority for Development   X X X X
Disallow Custom HTML and JavaScript in Dashboards, Documents, Reports, and Bots   X X
Edit Password and Authentication Settings   X
Enable Encryption for trustStore Secret Values   X X X
Enable Support for HTTP Strict Transport Security (HSTS)   X X X
Configure Session Idle Timeouts   X X X
Configure a Redirect URL Whitelist in MicroStrategy Web and Library   X X
Enable Enforcing File Path Validation   X X
Enforce Security Constraints for the Plugin Folder in MicroStrategy Web or Library   X X
Enable App Transport Security Using MicroStrategy Mobile SDK or Library SDK   X X
Configure SameSite Cookies for Library   X
Configuring Security Settings on Library Administration   X
Configure SameSite Cookies for MicroStrategy Web and MicroStrategy Mobile   X X
Configuring Secure Communication for MicroStrategy Web, Mobile Server, and Developer   X X
Configuring Web, Mobile Server, and Web Services to Require SSL Access   X X
Secure Communication in MicroStrategy   X X
Configuring MicroStrategy Client Applications to Use an HTTPS URL   X
Enable HTTPS Connection Between the Refine Server and Web Server for Data Wrangling   X
Prevent a CSRF Attack   X
Specify URLs and URL Paths to Export   X
Testing SSL Access   X

Secure Communication in MicroStrategy

SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are
encryption technologies that encode communication over the Internet or a
local network so that only the recipient can read it. MicroStrategy
Intelligence Server opens two ports for SSL communication. The default port
39321 uses SSL without client certificate verification. The second port,
39320, provides extra security by enforcing client certificate verification.

MicroStrategy administrators should refer to the information security policy
of their particular organization or IT department before choosing an
encryption configuration for their MicroStrategy environment.

A certificate signing request (CSR) must be generated to obtain an SSL
certificate from a third-party certificate authority (CA). Refer to the
requirements of your CA for the necessary steps to generate a CSR and your
private key, and to obtain your SSL certificate. If you are using a self-signed
certificate, the key algorithm, key size, and signature algorithm should be
set according to your IT administrator's requirements. For steps to generate
your certificate, see Self-Signed Certificates: Creating a Certificate
Authority for Development.

Configuring SSL for Intelligence Server

You must have the SSL certificate you created for Intelligence Server.

You must have the private key file that you created while requesting a
certificate for Intelligence Server.


To Configure SSL for Intelligence Server

1. From the Start menu, choose All Programs > MicroStrategy Tools >
Configuration Wizard.

2. On the Welcome screen, select Configure Intelligence Server, and
click Next.

3. If you have previously configured Intelligence Server, click Next until
you reach the SSL Configuration page. If this is the first time you are
configuring Intelligence Server, click Help for instructions to configure
Intelligence Server.

4. On the SSL Configuration page, select the Configure SSL check box.

5. Click the button next to the Certificate field and browse to the
certificate you created for Intelligence Server.

6. Click the button next to the Key field and browse to the private key file
you created while requesting the certificate for Intelligence Server.

7. In the Password field, type the password that you used while creating
the private key for the certificate.

8. In the SSL Port field, type the port number to use for SSL access. By
default, the port is 39321.

Configuring Web and Mobile Server Truststore

MicroStrategy currently supports three certificate types for setting up SSL
communication between MicroStrategy components. The table below lists
the supported certificate types and the necessary actions to complete setup.

Certificate Type: CA Signed Public
Signing Authority: A public certification authority, such as Verisign or Thawte
Web or Mobile Server Truststore Location: <JRE>/lib/security/cacerts
(the <JRE> location depends on the Application Server being used)
Configuration Actions:

• No additional server configuration required; the default Java cacerts
Truststore is used.

• Configuring secure communication for MicroStrategy Web and Mobile
Server, Developer, and client applications.

• Configure secure communication for MicroStrategy Library on Windows.

Certificate Type: CA Signed Enterprise
Signing Authority: Self-signed by enterprise root CA
Web or Mobile Server Truststore Location: /WEB-INF/trusted.jks
Configuration Actions:

• The enterprise root certificate must be added to each client Truststore.
Contact your IT Administrator for a copy of your enterprise CA
certificate chain.

• Configuring secure communication for MicroStrategy Web and Mobile
Server, Developer, and client applications.

• Configure secure communication for MicroStrategy Library on Windows.

Certificate Type: Self-Signed Certificate
Signing Authority: Self-signed by the certificate creator
Web or Mobile Server Truststore Location: /WEB-INF/trusted.jks
Configuration Actions:

• The certificate must be added to the client Truststore.

• The Truststore must contain a certificate from each Intelligence Server.

• Configuring secure communication for MicroStrategy Web and Mobile
Server, Developer, and client applications.

• Configure secure communication for MicroStrategy Library on Windows.

Steps to Add Certificates to Web or Mobile Server Truststore

Once you have populated the Keystore on Intelligence Server with your SSL
certificate and private key, follow the steps below to add the necessary
certificate to the client Truststore.

1. Locate your MicroStrategy Web or Mobile deployment and find its
WEB-INF directory:

• IIS ASP Web: C:\Program Files (x86)\MicroStrategy\Web
ASPx\WEB-INF\

• IIS ASP Mobile: C:\Program Files
(x86)\MicroStrategy\Mobile ASPx\WEB-INF\

• JSP Web or Mobile: the location depends on your .war file
deployment

2. Open a command line terminal and navigate to the WEB-INF directory.

3. Execute the following keytool command found under MICROSTRATEGY_
JRE:

<MICROSTRATEGY_JRE>/bin/keytool -importcert -trustcacerts
  -alias "<certificate_common_name>" -keystore trusted.jks
  -storepass mstr123 -file cert.pem


• If the file trusted.jks does not exist, it will be created.

• The storepass value refers to your Truststore password. This value
is set to mstr123 by default. Use your unique Truststore password if
it was changed.

If the Truststore password was changed, the
sslTruststorePassword value in microstrategy.xml
should be modified accordingly.

• The cert.pem file refers to the certificate(s) previously obtained.

• Any alias value may be used, but the certificate common name is
recommended, as long as the alias is unique in the Truststore.

SSL with Client Certificate Verification


Client certificate verification, also referred to as mutual authentication, is an
optional step in the SSL protocol. Through this second certificate
verification, Intelligence Server verifies the identity of MicroStrategy Web or
Mobile server (client). Client certificate verification requires that a Keystore
and Truststore are set up on Intelligence Server as well as on the client to
complete the trusted connection. The following sections describe how to set
up your client Keystore and Intelligence Server Truststore by generating
self-signed certificates for your Web and Mobile Server clients.


Steps to Set Up the Client Keystore

1. Open a command line terminal and navigate to the WEB-INF directory.

2. Execute the following keytool command found under MICROSTRATEGY_
JRE:

<MICROSTRATEGY_JRE>/bin/keytool -genkeypair -keyalg RSA
  -keysize 2048 -sigalg sha256withrsa -validity 365
  -alias <client_certificate_common_name>
  -dname "CN=YOUR_FULLY_QUALIFIED_DOMAIN_SERVER_NAME"
  -keystore clientKey.jks -storepass mstr123

If prompted to set a key password, press Enter to default the key
password to match the store password. If your password was changed
from the default of mstr123, update the parameter in your
WEB-INF/microstrategy.xml file. Optionally, you can change the
location or name of the Keystore file clientKey.jks at this time via
the sslClientKeystore parameter in the microstrategy.xml file.


3. Restart MicroStrategy Web and Mobile Server.

4. Extract the certificate information with the following command,
replacing the default password with your own if necessary. The file
created will be needed to set up the Truststore for Intelligence Server.

keytool -exportcert -rfc -keystore clientKey.jks
  -alias <client_certificate_common_name> -file cert.txt
  -storepass mstr123

5. Repeat this process for each Web and Mobile deployment.

Steps to Set Up Intelligence Server Truststore

1. Create a simple text file, for example truststore.txt, and add the
certificate information for each certificate created during client
Keystore creation.

2. Save this file to the certs folder in the Intelligence Server directory.

3. Launch Configuration Wizard and navigate to SSL Configuration in the
Configure Intelligence Server section.

4. Check the Configuring port requires Client Certificate check box.

5. Click the Browse button next to the Truststore field and select the
truststore.txt file containing your client certificate information.

6. Click Next and follow the Configuration Wizard prompts to restart
Intelligence Server.

Configuring Web, Mobile Server, and Web Services to
Require SSL Access

You can configure your application server to require that clients, such as
users' web browsers, access the following applications with SSL, using the
HTTPS protocol:

• MicroStrategy Web, to enable secure communication between Web and
users' browsers.

• MicroStrategy Mobile Server, to enable secure communication between
Mobile Server and Mobile for iPhone, iPad, and Android.

• MicroStrategy Web Services, to enable secure communication between
Web Services and Office.

For steps to configure SSL on your application server, see the link below to
view the official documentation for your server type:

• Apache Tomcat 9.x

• Microsoft IIS 7 and above

• Oracle WebLogic Server 12.x

• IBM WebSphere 8.5.x

• JBoss Enterprise Application Platform Web Server

Configuring Secure Communication for MicroStrategy
Web, Mobile Server, and Developer

Once the certificate stores have been set up on the Intelligence, Web,
and Mobile servers, you can enable SSL communication in your
MicroStrategy applications.

To Configure SSL for Web and Mobile Server

1. From the Start menu, choose All Programs > MicroStrategy Tools and
select Web Administrator or Mobile Administrator.

2. On the left, click Security.

3. Under Traffic to the Intelligence Server, select the SSL option.

4. Click Save.


To Configure SSL for Developer

• You must use the Configuration Wizard to set up SSL for Intelligence Server,
as described in Secure Communication in MicroStrategy.

• For additional security, you can enable Developer to verify Intelligence
Server's certificate with the Certificate Authority (CA) before transmitting any
data. If you want to enable this option, you must obtain the following:

  • Your CA's SSL certificate. If you are using a commercial CA, refer to their
  documentation for instructions to download their certificate.

  • If you are using an enterprise CA that has Microsoft Certificate Services
  installed, visit http://hostname/CertSrv, where hostname is the
  computer on which Certificate Services is installed, and click Download a
  CA certificate, certificate chain, or CRL. Under Encoding method,
  select Base64.

  • The CSR generated when configuring SSL for Intelligence Server, as
  described in Generating an SSL Certificate Signing Request, page 648.

  • A .pem certificate containing both the SSL certificate and the CSR for
  Intelligence Server.

1. In Developer, right-click the server-based project source that you use
to connect to Intelligence Server, and select Modify Project Source.

2. On the Connection tab, select the Use SSL check box.

3. If you want Developer to verify Intelligence Server's certificate with the
CA every time a connection is made, select the Verify Server
Certificate check box.

You must perform the following tasks to verify the server's certificate:

• Download the CA's certificate to the computer running Developer.

• In the Client SSL Certificate Authority Certificate field, enter the
path to the .pem certificate referenced in the prerequisites above. For
example, C:\Certificates\desktop.pem.

4. Click OK.

Configuring MicroStrategy Client Applications to Use
an HTTPS URL

To require iPhones, iPads, and Android devices to use HTTPS to connect to
Mobile Server, you must update your device configurations in Mobile Server.

To require MicroStrategy Office to use SSL to connect to Web Services, in
the Options dialog box, you must add the https:// prefix to the URL for
Web Services, as described in To Configure MicroStrategy Office to Use
SSL, page 634.

To Configure MicroStrategy Mobile for iPhone, iPad, and
Android to Use SSL
1. Open the Mobile Administrator page.

2. Click Mobile Configuration.

3. For the configuration you want to edit, click Modify.

4. Click the Connectivity Settings tab.

5. For the Mobile Server that has SSL enabled, from the Request Type
drop-down list, select HTTPS.

6. Click Save.

7. Repeat this procedure for every configuration that includes the above
Mobile Server.


To Configure MicroStrategy Office to Use SSL

This information applies to the legacy MicroStrategy Office add-in, the
add-in for Microsoft Office applications which is no longer actively
developed.

It was replaced by a new add-in, MicroStrategy for Office, which supports
Office 365 applications. The initial version does not yet have all the
functionality of the previous add-in.

If you are using MicroStrategy 2021 Update 2 or a later version, the legacy
MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.

1. In Windows, go to Start > All Programs > MicroStrategy Tools >
Office Configuration.

2. Under General, select Server.

3. In the Web Services URL field, replace the http:// prefix with
https://.

4. Click OK.

Enable HTTPS Connection Between the Refine Server
and Web Server for Data Wrangling

For an ASP and IIS server, the default connection between the Refine
Server and the Web Server is HTTP, so you must complete the following
steps to manually enable an HTTPS connection.

For a JSP and Tomcat server, MicroStrategy provides an out-of-the-box
keystore and the default connection is HTTPS.


You can enable an HTTPS connection between the Refine Server and the
Web Server to ensure a secure data wrangling service:

1. Prepare the self-signed certificate

2. Enable the Refine Server HTTPS connection

3. Enable the Data Warehouse HTTPS connection on the Web Server

If you run into issues, refer to the Troubleshooting section.

Prepare the Self-Signed Certificate

To establish an HTTPS connection between the Web Server and the Refine
Server, you must have two keystores prepared. One is the Refine Server
keystore, which is used for TLS encryption. The other is the Web Server
client trust store, which is used for validating the server certificate. There
are multiple ways to generate a self-signed certificate, such as the JDK
keytool, OpenSSL, and XCA. For more information, see Self-Signed
Certificates.

Enable Refine Server HTTPS Connection


1. Find the openrefine installation path. For example, C:\Program
Files (x86)\MicroStrategy\Intelligence
Server\openrefine.

2. In the webapp folder, create a new folder and put the server keystore
file in this folder.

3. Open Registry Editor.

4. On Windows, locate Refine Server at [Computer\HKEY_LOCAL_


MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Refine


Server]. If the Refine Server key does not exist, create it.

On Linux, edit the MSIReg.reg file.

5. Create the following string values in your file:

l enableHttps: If this name exists, Refine Server will start an


additional port for HTTPS connection.

l https_port: The default port number for HTTPS connection.

l port: The default port number for HTTP connection.

l keystorePath: The path of the server keystore placed in Step 2.

l keystorePassword: The password for the server keystore.

6. Restart the Intelligence Server.
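On Windows, the string values above can be captured in a .reg fragment like the following sketch. The value names come from this procedure; the port numbers, the keystore folder name under webapp, the password, and the data for enableHttps (only the presence of the name matters) are illustrative assumptions:

```reg
Windows Registry Editor Version 5.00

; Illustrative values only -- adjust the paths, ports, and password for your environment.
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Refine Server]
"enableHttps"="true"
"https_port"="3334"
"port"="3333"
"keystorePath"="C:\\Program Files (x86)\\MicroStrategy\\Intelligence Server\\openrefine\\webapp\\keystore\\server.jks"
"keystorePassword"="changeit"
```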

Enable Data Warehouse HTTPS Connection on Web Server


1. Go to the extracted MicroStrategy.war file and locate the WEB-INF
folder.

2. Create a folder named keystore and put the client trust store file
(created in Prepare the Self-Signed Certificate) in this folder.


3. Edit the microstrategy.xml file in the WEB-INF folder.

1. Locate the parameter tags with the names


refineSSLTruststore and refineSSLTruststorePwd.

2. Remove the comments around the parameter tags mentioned


above to make them valid. Ensure the keystore password is
correct.


3. Restart Tomcat.
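After uncommenting, the two parameter tags might look like the following sketch. The parameter names come from the steps above; the tag form, the truststore path, and the password are illustrative assumptions, so keep the markup already present in your microstrategy.xml and only remove the comments and set the values:

```xml
<!-- Sketch only: the exact tag form in your microstrategy.xml may differ. -->
<parameter>
    <name>refineSSLTruststore</name>
    <value>keystore/client_truststore.jks</value>
</parameter>
<parameter>
    <name>refineSSLTruststorePwd</name>
    <value>changeit</value>
</parameter>
```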

Troubleshooting

Enable the Intelligence Server Refine Process Log


1. Go to [Computer\HKEY_LOCAL_
MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Diagnostic
s\Log2\RefineProcess\error] and [Computer\HKEY_LOCAL_
MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Diagnostic
s\Log2\RefineProcess\info]. If these keys do not exist, create them.

2. Create a string value named after your log file. For example, https.

3. Create a log file with the name you entered in the registry. For example,
https.log.

4. Put the log file under the Intelligence Server installation path,
C:\Program Files (x86)\MicroStrategy\Intelligence
Server\.
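On Windows, steps 1 and 2 amount to a .reg fragment like the following sketch. The value name https and its empty data are illustrative; only the value name (your chosen log file name) matters here:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Diagnostics\Log2\RefineProcess\error]
"https"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Diagnostics\Log2\RefineProcess\info]
"https"=""
```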


Enable the Data Warehouse Log


1. Add the following registry key in the JVM Options file located at [HKEY_
LOCAL_MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\JNI
Bridge\Config for DataServices\JVM Options]:

"OtherOptions"="-Drefine.verbosity=info"

2. Restart the Intelligence Server.

3. The log will show in the DW.log file.

l On Windows: Under the same directory as the DSSErrors.log file


(commonly under BIN\X64).


l On Linux: Under Linux/bin/, where bin is located under the same


directory as the DSSErrors.log file.

If the HTTPS connection starts successfully, a confirmation message is
written to the DW.log file.

Testing SSL Access


Perform the following steps to test SSL access to Web and Web Services.

To Test SSL Access to Web and Web Services

1. In your browser, enter the URL to access Web and Web Services. By
default, these are:

l Web (ASP.NET): http://hostname/MicroStrategy/asp/,


where hostname is the name of the server that Web is running on.

l Web (J2EE):
http://hostname/MicroStrategy/servlet/mstrWeb, where
hostname is the name of the server that Web is running on.

l Web Services:
http://hostname/MicroStrategyWS/MSTRWS.asmx, where
hostname is the name of the server that Web Services is running on.

An error page should be displayed, with a 403.4 error indicating that


SSL is required to access the page.

2. In the above URLs, replace http:// with https://. After a short


delay, Web should open, or the Web Services method list should be
displayed (as applicable), indicating that the SSL access is working.


Certificate Files: Common Extensions and


Conversions
This section briefly explains common extensions for SSL certificate and
keystore files, as well as how to convert these files between formatting
types.

Common File Extensions



Certificate files: .crt, .cer, .ca-bundle, .p7b, .p7c, .p7s, .pem

Keystore files: .key, .keystore, .jks

Combined certificate and key files: .p12, .pfx, .pem

Converting Files
To set up SSL for your MicroStrategy environment, you will need to have
your certificates and key files in .pem, .crt, and .key formats. If you have
files from your IT administrator that do not have these extensions, they must
be converted.

Use the following commands to convert between file types:

l Convert a DER format file to PEM format

openssl x509 -inform der -in certificate.cer -out certificate.pem

l Convert a .pfx or .p12 containing a private key and certificates to


PEM

openssl pkcs12 -in certkey.pfx -out certkey.pem -nodes

l Add -nocerts to only output the private key.

l Add -nokeys to only output the certificates.


l Convert .keystore or .jks to .key: Requires two commands to be run.

1. Convert the file to the .p12 extension

keytool -importkeystore -srckeystore privatekey.keystore -destkeystore privatekey.p12 -srcstoretype jks -deststoretype pkcs12 -srcstorepass password -deststorepass password

2. Convert to the .key extension

openssl pkcs12 -nocerts -nodes -in privatekey.p12 -out keyfile.key
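As a sanity check, the openssl conversions above can be exercised end to end with a throwaway certificate. The keytool step requires a JDK and is skipped here; all file names, the subject, and the passphrase are examples:

```shell
# Create a disposable key and self-signed certificate for the demo:
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out certificate.pem \
  -days 1 -subj "/CN=demo.example.com"
# PEM -> DER, then DER -> PEM, as described above:
openssl x509 -in certificate.pem -outform der -out certificate.cer
openssl x509 -inform der -in certificate.cer -out roundtrip.pem
# Bundle the key and certificate into PKCS#12, then unpack back to PEM:
openssl pkcs12 -export -inkey key.pem -in certificate.pem \
  -passout pass:changeit -out certkey.pfx
openssl pkcs12 -in certkey.pfx -passin pass:changeit -nodes -out certkey.pem
```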

Self-Signed Certificates: Creating a Certificate


Authority for Development
If you are creating demos or proofs-of-concept that require SSL, you can set
up a server that can act as a Certificate Authority (CA) to sign the
certificates for the MicroStrategy applications.

Use self-signed certificates only in demo or development environments.


Self-signed certificates are not recommended in a production environment
for the following reasons:

l If the CA server is compromised, an attacker can use it to sign certificates


for malicious sites.

l By default, users' devices and browsers do not accept self-signed


certificates, which may cause users to receive security warnings and
disrupt their workflows.

You can set up a CA server using the OpenSSL utility. If you are using a
UNIX or Linux machine, OpenSSL should be installed by default. If you are
using a Windows machine, you can download the OpenSSL utility from
http://www.openssl.org/.

To set up a CA, perform the following tasks:


l Create the directories and configuration files for the CA. See Creating the
Directories and Configuration Files for Your CA, page 644.

l Create the server's private key and root certificate. See Creating the
Private Key and Root Certificate for the CA, page 646.

l Add the root certificate as a trusted certificate on your network. See


Adding your enterprise CA as a trusted certificate authority.

l Configure OpenSSL to use the server's private key and certificate to sign
certificate requests. See Configuring OpenSSL to Use your Private Key
and Root Certificate, page 647.

l Generate an SSL Certificate Signing Request (CSR). See Generating an


SSL Certificate Signing Request.

l Create certificates for the MicroStrategy applications. See Signing


Certificate Requests Using Your CA, page 649.

Creating the Directories and Configuration Files for Your CA


To create your CA using OpenSSL, you must create directories to store
important files for the CA, such as the server's private keys, certificates that
have been signed, and so on. In addition, you must create the files that track
the certificates that have been created, and an OpenSSL configuration file
for your CA.

To Create the Directories and Files for the CA

1. Using Windows Explorer or the UNIX Terminal, as applicable, create


the following directories:

Directory                                         Folder name

A root directory for the CA                       A name of your choice. For example, devCA

A subdirectory to store the CA's private key      private. For example, devCA/private

A subdirectory to store new certificates          certs. For example, devCA/certs
issued by the CA

A subdirectory to store the new certificates      newcerts. For example, devCA/newcerts
in an unencrypted format

2. In the root directory for the CA, use a text editor to create the following
files:

Filename                 Description

serial (no extension)    Contains the serial number for the next certificate. When you
                         create the file, you must add the serial number for the first
                         certificate. For example, 01.

index.txt                Used as a database to track certificates that have been issued.

3. Depending on your platform, do one of the following:

l Linux: Open a terminal window, and navigate to the location where


OpenSSL is installed.

The default installation folder may depend on the distribution you are
using. For example, for Red Hat Enterprise Linux, the default folder
is /etc/pki/tls.

l Windows: Open a command prompt window, and navigate to the


location where OpenSSL is installed. By default, this is
C:\OpenSSL-Win32\bin.


4. Create a copy of the OpenSSL configuration file openssl.cnf, and


paste it in the root directory you created for your CA. Use a different file
name, for example, openssl.dev.cnf.
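On Linux, the steps above can be sketched as follows. devCA and openssl.dev.cnf are the example names from this procedure; the openssl.cnf location varies by distribution, so adjust OPENSSL_CNF for your system:

```shell
# Create the CA directory layout:
mkdir -p devCA/private devCA/certs devCA/newcerts
# Seed the serial number for the first certificate and the empty certificate database:
echo 01 > devCA/serial
touch devCA/index.txt
# Copy the default OpenSSL configuration under a new name (path varies by distribution):
OPENSSL_CNF=${OPENSSL_CNF:-/etc/pki/tls/openssl.cnf}
if [ -f "$OPENSSL_CNF" ]; then cp "$OPENSSL_CNF" devCA/openssl.dev.cnf; fi
```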

Creating the Private Key and Root Certificate for the CA


Once you have set up the files and directories for your CA, you can create a
root certificate, which is used to sign certificate requests from MicroStrategy
applications.

This procedure assumes that you have followed all the steps in Creating the
Directories and Configuration Files for Your CA, page 644.

To Create the Private Key and Root Certificate for the CA

1. Depending on your platform, do one of the following:

l Linux: Open a terminal window.

l Windows: Open a command prompt window, and navigate to the


location where OpenSSL is installed. By default, this is
C:\OpenSSL-Win32\bin.

2. To create the private key and root certificate, type the following
command, and press Enter:

openssl req -config devCApath/openssl.dev.cnf -new -x509 -extensions v3_ca -keyout devCApath/private/devCA.key -out devCApath/certs/devCA.crt -days 1825

Where:

l devCApath: The root directory for your CA, which is created as part
of the procedure described in Creating the Directories and
Configuration Files for Your CA, page 644. For example,
/etc/pki/tls/devCA.


l openssl.dev.cnf: The copy of the default OpenSSL configuration


file, created in the root directory for your CA.

l devCA.key: The filename for the private key.

l devCA.crt: The filename for the root certificate.

3. You are prompted for a pass-phrase for the key, and for information
about your CA, such as your location, organization name, and so on.
Use a strong pass-phrase to secure your private key, and type the
required information for the CA. The private key and root certificate are
created.
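For scripted development setups, the same command can be run without prompts. The sketch below also recreates the example layout and copies the system openssl.cnf first; the paths, subject fields, and passphrase are illustrative:

```shell
# Recreate the example CA layout and configuration file:
mkdir -p devCA/private devCA/certs
cp "$(openssl version -d | cut -d'"' -f2)/openssl.cnf" devCA/openssl.dev.cnf
# Create the CA private key and self-signed root certificate non-interactively:
openssl req -config devCA/openssl.dev.cnf -new -x509 -extensions v3_ca \
  -passout pass:changeit -subj "/C=US/O=Example Corp/CN=Dev CA" \
  -keyout devCA/private/devCA.key -out devCA/certs/devCA.crt -days 1825
```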

Configuring OpenSSL to Use your Private Key and Root


Certificate
To start creating certificates for the MicroStrategy applications in your
development environment, you must configure OpenSSL to use your CA's
private key and root certificate to sign certificate requests. For information
on creating certificate requests for applications, see Generating an SSL
Certificate Signing Request, page 648.

This procedure assumes that you have completed the following steps:

l Create the files and directory structure for your CA, including a copy of the
default OpenSSL configuration file, as described in Creating the Directories
and Configuration Files for Your CA, page 644.

l Create a private key and root certificate for your CA, as described in Creating
the Private Key and Root Certificate for the CA, page 646.

To Configure OpenSSL to Use your CA's Root Certificate

1. Use a text editor, such as Notepad, to open the copy of the OpenSSL
configuration file in your CA's root directory. For example,
openssl.dev.cnf.


2. Scroll to the CA_default section, and edit the following values:

l dir: Change this value to the root folder that you created for your
CA. For example, /etc/pki/tls/devCA.

l certificate: Change this value to $dir/certs/devCA.crt,


where devCA.crt is the root certificate that you created for your CA.

l private_key: Change this value to $dir/private/devCA.key,


where devCA.key is the private key that you created for your CA.

3. Save the file.
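After the edits, the relevant lines of the CA_default section might look like the sketch below. The dir path is the example value from above; the other keys in the section remain unchanged:

```ini
[ CA_default ]
dir           = /etc/pki/tls/devCA        # root folder of the CA
certificate   = $dir/certs/devCA.crt      # the CA root certificate
private_key   = $dir/private/devCA.key    # the CA private key
```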

Generating an SSL Certificate Signing Request


You can use the OpenSSL utility to create an SSL Certificate Signing
Request (CSR) for each of your applications.

If you are using a UNIX or Linux machine, the OpenSSL utility should be
installed by default. If you are using a Windows machine, you can download
the OpenSSL utility from http://www.openssl.org/.

To Generate an SSL Certificate Signing Request using OpenSSL

1. Depending on your platform, do one of the following:

l Linux: Open a terminal window.

l Windows: Open a command prompt window, and navigate to the


location where OpenSSL is installed. By default, this is
C:\OpenSSL-Win32\bin.

To Generate a Private Key for the Server

1. Type the following command, and press Enter:

openssl genrsa -des3 -out Server_key.key


Where Server_key.key is the name of the private key file. By default,


the private key file is created in the current location. To create the file
at a different location, replace Server_key.key with a path to create
the new file.

You are prompted for a pass-phrase for the key.

2. Type a secure pass-phrase for the key, and press Enter. The key file is
created.

To Generate the Certificate Signing Request

1. Type the following command, and press Enter:

openssl req -new -key Server_key.key -out Server_CSR.csr

Where Server_key.key is the private key file that you created, and
Server_CSR.csr is the CSR file.

2. You are prompted for information such as your organization's name,


department name, country code, and so on. Type the information about
your organization as you are prompted. When prompted for a Common
Name, type the fully qualified domain name of the server that the
application runs on. For example, if Intelligence Server runs on a
machine called intelligenceserver, and your domain is
yourcompany.com, the fully qualified domain name is
intelligenceserver.yourcompany.com.

When you have entered all the required information, the CSR file is
created.

3. Repeat this procedure for every application that you need a certificate
for.
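Both steps can also be run without prompts, which is convenient when generating requests for several applications. The file names, passphrase, and -subj fields below are examples:

```shell
# Generate an encrypted 2048-bit private key for the server:
openssl genrsa -des3 -passout pass:changeit -out Server_key.key 2048
# Generate the certificate signing request, supplying the subject on the
# command line; the CN must be the fully qualified domain name of the server:
openssl req -new -key Server_key.key -passin pass:changeit \
  -subj "/C=US/O=Example Corp/CN=intelligenceserver.yourcompany.com" \
  -out Server_CSR.csr
```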

Signing Certificate Requests Using Your CA


Once you have configured OpenSSL to use your CA's private key and root
certificate, you can sign certificate requests to create the SSL certificates


for the MicroStrategy applications. The steps to create certificates follow.

This procedure assumes that you have completed the following steps:

l Create the files and directory structure for your CA, including a copy of the
default OpenSSL configuration file, as described in Creating the Directories
and Configuration Files for Your CA, page 644.

l Create a private key and root certificate for your CA, as described in Creating
the Private Key and Root Certificate for the CA, page 646.

l Configure OpenSSL to use the private key and root certificate, as described
in Configuring OpenSSL to Use your Private Key and Root Certificate, page
647.

l Create a certificate signing request (CSR file) for the applications that
require SSL certificates, as described in Generating an SSL Certificate
Signing Request, page 648. Copy the CSR file to the server that hosts your
CA.

To Sign Certificate Requests Using Your CA

1. Depending on your platform, do one of the following:

l Linux: Open a terminal window, and navigate to the location where


OpenSSL is installed.

The default installation folder may depend on the distribution you are
using. For example, for Red Hat Enterprise Linux, the default folder
is /etc/pki/tls.

l Windows: Open a command prompt window, and navigate to the


location where OpenSSL is installed. By default, this is
C:\OpenSSL-Win32\bin.


2. Type the following command, and press Enter:

openssl ca -config devCApath/openssl.dev.cnf -policy policy_anything -out devCApath/certs/mstrapp.crt -infiles CSRpath/mstrapp.csr

Where:

l devCApath: The root directory for your CA, which is created as part
of the procedure described in Creating the Directories and
Configuration Files for Your CA, page 644. For example,
/etc/pki/tls/devCA.

l openssl.dev.cnf: The OpenSSL configuration file for your CA,


configured to use your CA's private key and root certificate, as
described in Configuring OpenSSL to Use your Private Key and Root
Certificate, page 647.

l mstrapp.crt: The filename for the certificate to be generated for


the MicroStrategy application.

l CSRpath: The folder where the certificate signing request is stored.

l mstrapp.csr: The certificate signing request for the MicroStrategy


application.

The certificate is generated, and is stored in the certs folder.

3. Copy the generated certificate to the machine where the MicroStrategy


application is hosted.

4. Repeat this procedure for all MicroStrategy applications that require


SSL certificates.


Enforce Security Constraints for the Plugin Folder in


MicroStrategy Web or Library
Prior to MicroStrategy 2021 Update 8 (11.3.8), you must follow the steps
below to enforce security constraints for the plugin folder.

Starting in MicroStrategy 2021 Update 8 (11.3.8), MicroStrategy enabled


this option by default in Web JSP, so you do not need to follow the steps
below. However, if you are using Web ASP, you must follow the steps below.

If you are using plugins for customization in MicroStrategy Web or Library,


MicroStrategy suggests implementing the security constraints detailed
below to protect sensitive or confidential files, such as passwords or
database connections. These security constraints protect the JSP Web
plugin’s WEB-INF and jsp folders, as well as the asp folder for ASP Web,
from remote access via URL.

l Solution for JSP Deployments

l Solution for ASP Deployments

Solution for JSP Deployments


To prevent the WEB-INF and jsp folders inside the given plugin folder from
being accessed by a web URL, add the following security constraint in
web.xml. This file is located in the Web JSP’s WEB-INF folder, such as
<Web JSP deployment>/WEB-INF/web.xml.

<security-constraint>
    <web-resource-collection>
        <web-resource-name>NoAccess</web-resource-name>
        <url-pattern>/plugins/<plugin name>/jsp/*</url-pattern>
        <url-pattern>/plugins/<plugin name>/WEB-INF/*</url-pattern>
    </web-resource-collection>
    <auth-constraint />
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>


MicroStrategy recommends you place your server-side files for JSP
deployment in the WEB-INF and jsp folders. If your plugin has sensitive
files in other folders, you can add more <url-pattern> entries for those
folders in web.xml to ensure they cannot be accessed.

See the Java Servlet Specification for more information about security-constraint.

Solution for ASP Deployments


To prevent the WEB-INF and asp folders inside the given plugin folder from
being accessed by a web URL, copy the web.config file in <Web ASPx
Deployment>\WEB-INF\web.config to <Web ASPx
Deployment>\plugins\<plugin name>\WEB-INF\web.config and
<Web ASPx Deployment>\plugins\<plugin
name>\asp\web.config.

MicroStrategy recommends you place your server-side files for ASP
deployment in the WEB-INF and asp folders. If your plugin has sensitive
files in other folders, you can copy the same web.config to the
corresponding location.

The contents of the web.config file are shown below.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <handlers accessPolicy="None" />
    </system.webServer>
</configuration>

See the Microsoft IIS Handlers <handlers> documentation regarding
accessPolicy for more information.


Configure a Redirect URL Whitelist in MicroStrategy


Web and Library
Starting in MicroStrategy 2021 Update 1, you can configure redirect URL
whitelists in MicroStrategy Web and Library.

l Whitelist URLs

l Edit the Whitelist Contents

l Block All URLs

l Allow URLs Based on Sub-Domain

l Allow All Domains with HTTPS Protocols

Whitelist URLs
The URL whitelist is enabled by default, but allows all domains and all
protocols. The configuration must be adjusted to be more restrictive. Any
changes require a restart of the web server (Tomcat) to be applied.

The following section is present in the web.xml file used for configuring
the URL whitelist, located in the \WEB-INF\ folder of the exploded WAR
file for MicroStrategy Web and Library:

<filter>
    <filter-name>redirectResponseFilter</filter-name>
    <filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-class>
    <init-param>
        <param-name>allowedProtocols</param-name>
        <param-value>*</param-value>
    </init-param>
    <init-param>
        <param-name>domains</param-name>
        <param-value>*</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>redirectResponseFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>


Edit the Whitelist Contents


The whitelist has the following parameters that can be used to control it:

l allowedProtocols

Specifies the allowed protocols, such as http, https, ftp. By default, an


asterisk (*) is present and all protocols are allowed. Removing the asterisk
and populating the parameter restricts it to only the specified protocols. If
there are no values present, all protocols are blocked.

l domains

Specifies the allowed domains, such as google.com. By default, an


asterisk (*) is present and all domains are allowed. Removing the asterisk
and populating the parameter restricts it to only the specified domains. If
no values are present, all domains are blocked.

After editing the URL whitelist, you must restart the application server.

See the following examples of whitelist configurations in web.xml:

l Block All URLs

l Allow URLs Based on Sub-Domain

l Allow All Domains with HTTPS Protocols

Block All URLs


The file shown below contains no allowed protocols, and no allowed
domains, so all URLs are blocked.

<filter>
    <filter-name>redirectResponseFilter</filter-name>
    <filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-class>
    <init-param>
        <param-name>allowedProtocols</param-name>
        <param-value></param-value>
    </init-param>
    <init-param>
        <param-name>domains</param-name>
        <param-value></param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>redirectResponseFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Allow URLs Based on Sub-Domain


The file shown below allows all URLs that have domains within
*.microstrategy.com with any protocols.

For example, the file below allows https://www.microstrategy.com/,


http://try.microstrategy.com/, ftp://a.microstrategy.com/,
and blocks http://try.microstrategy.top/ and
https://try.microstrategy.cn/.

<filter>
    <filter-name>redirectResponseFilter</filter-name>
    <filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-class>
    <init-param>
        <param-name>allowedProtocols</param-name>
        <param-value>*</param-value>
    </init-param>
    <init-param>
        <param-name>domains</param-name>
        <param-value>*.microstrategy.com</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>redirectResponseFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Allow All Domains with HTTPS Protocols


The file shown below allows all URLs that have https protocols.

For example, the file below allows https://www.microstrategy.com/


and blocks http://www.microstrategy.com/ and
ftp://a.microstrategy.com/.


<filter>
    <filter-name>redirectResponseFilter</filter-name>
    <filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-class>
    <init-param>
        <param-name>allowedProtocols</param-name>
        <param-value>https</param-value>
    </init-param>
    <init-param>
        <param-name>domains</param-name>
        <param-value>*.microstrategy.com</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>redirectResponseFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Restart your application server to apply your changes.

Edit Password and Authentication Settings


You can view and edit server-level security settings for passwords and
authentication.

The ability to view or edit certain settings is determined by a user's


privileges. All necessary privileges are included in the Administrator role by
default. You must belong to the System Administrators group to use this
feature.

1. Open the Workstation window.

2. In the Navigation pane, click Environments.

3. Right-click a connected environment and choose Properties.

Choose Get Info if you are using a Mac.

4. In the left pane, click Security Settings.


Fields

Password Settings
Security Level: Includes the following password settings and provides four
sets of predefined setting values for the administrator to use: Default, Low,
Medium, and High. Select Customize from the drop-down list to view the
following settings for a customized configuration.

l Lock after (failed attempts) Specify the number of failed login attempts
allowed. Once a user has this many failed login attempts in a row, the user
is locked out of the MicroStrategy account until an administrator unlocks
the account. Setting this value to No Limit indicates that users are never
locked out of their accounts. The default setting is No Limit.

l Allow lockout duration (Minutes) Set the amount of time in minutes to
lock an account after a user fails to log in a certain number of times, as
specified in Lock after (failed attempts). The minimum value is 15. The
maximum value is 525960. The default value is No Limit, which indicates
there is no time limit to the account lockout.

l Allow user login and full name in password When this option is
disabled, Intelligence Server ensures that new passwords do not contain
the user's login or part of the user's name. This option is enabled by
default.

l Allow rotating characters from last password When this option is


disabled, Intelligence Server prevents users from using a password that is
a backwards version of the old password. This option is enabled by
default.

l Minimum password length The minimum password length. The minimum


value is 0. The maximum value is 999. The default value is 0.

l Minimum upper case characters in password The minimum number of


upper case (A-Z) characters that must be present in users' passwords. The
default value is 0.


l Minimum lower case characters in password The minimum number of


lower case (a-z) characters that must be present in users' passwords. The
default value is 0.

l Minimum numeric characters in password The minimum number of


numeric (0-9) characters that must be present in users' passwords. The
default value is 0.

l Minimum special characters in password The minimum number of non-


alphanumeric (symbol) characters that must be present in users'
passwords. The default value is 0.

l Minimum number of character changes in password The minimum


number of character changes. The minimum value is 0. The maximum
value is 999. The default value is 3.

l Number of past passwords remembered The number of each user's


previous passwords that Intelligence Server stores. Intelligence Server
prevents users from using a password that is identical to one they have
previously used. The minimum value is 0. The maximum value is 999. The
default value is 0.

l Hash iterations for password encryption Select the number of


iterations that a password is hashed. This provides even greater security
on top of the algorithm by iteratively hashing the hash a configurable
number of times. The minimum value is 1000. The maximum value is
1000000. The default value is 10000.

Authentication Settings
Update pass-through credentials on successful login Select whether to
update the user's database and LDAP credentials on a successful
MicroStrategy login.

Use public/private key to sign/verify authentication token Enable this


toggle button to use a public or private key to sign or verify a token. This


requires the setup of a public or private key. This option is disabled by


default.

Token Lifetime (Minutes) The lifetime, in minutes, of the token. The


minimum value is 1. The maximum value is 99999. The default value is 1440.

Content Settings
Enable custom HTML and JavaScript content in dashboards Enabling
this option allows users with the appropriate access to display third-party
Web applications or custom HTML and JavaScript directly in the dashboard.
This option is enabled by default. Although the ability to display Web
applications or custom HTML and JavaScript directly in a dashboard is
governed by user privileges, MicroStrategy recommends disabling these
features to ensure a secure environment.

Allow URLs for Export


Administrators can specify which URLs or URL paths are permitted when
fetching content to be included in an export. This concept, where only
certain URLs are permitted, is largely referred to as whitelisting.

If the URL matches any of the entries in the whitelist, the information is
retrieved. The wildcard character (*) is allowed in the whitelist as part of
the URL. This allows you to have one URL in the whitelist that encompasses
many target URLs.

Certain URLs typically used by the MicroStrategy product are included by


default. This includes the default locations for maps, images, visualizations,
and so on. When adding your own URLs, take the following information into
consideration:

Relative paths are case sensitive.


l Include URLs external to your own domain where you know content is
required.

l Avoid specifying the URL of the local machine where the MicroStrategy
product is running.

l If you must use the local MicroStrategy server machine to host content,
specify the exact location on the machine for the content.

For example, if you want to place an image in a particular location on the MicroStrategy server, use the URL https://my_machine/images so only the images folder can be accessed.

l A relative path, such as ./images/, can be specified. This specifically accesses a resource in the Intelligence Server installation folder, <Install_Path>/images.

Encryption Key Manager


Encryption Key Manager (EKM) creates and manages unique encryption keys for every MicroStrategy environment. EKM features include creating, importing, and exporting these unique keys through Configuration Wizard. These keys encrypt potentially sensitive information stored in the metadata, cube, cache, history list, and session recovery files.

Terms and Definitions


l Master Key: The master key encrypts the key store and is saved in the master key file. MicroStrategy Intelligence Server looks for the path to the master key in the registry upon startup.

l Key Store: Contains keys used to encrypt the metadata and file caches.
These keys are encrypted by the master key.

l Secure Bundle: A password-protected file that enables administrators to securely deploy encryption keys between clustered Intelligence Servers or servers sharing the same metadata.


l Secure Bundle Code: The password used to protect the Secure Bundle
file.

High Level Steps to Use the Encryption Key Manager


1. Enable the Encryption Key Manager Feature and restart Intelligence
Server.

2. Using Configuration Wizard:

l Update the metadata to apply the new encryption keys.

l Export the Secure Bundle.

3. To configure additional nodes in a clustered environment:

l Enable the Encryption Key Manager Feature and restart Intelligence Server.

l Import the Secure Bundle to each node using Configuration Wizard.

Enable the Encryption Key Manager Feature


The Encryption Key Manager is disabled by default.

Windows
To enable the Encryption Key Manager:

1. Open the Registry Editor.

2. Navigate to HKEY_LOCAL_MACHINE > SOFTWARE > Wow6432Node > MicroStrategy > Feature Flags.

3. Double-click KE/EncryptionKeyManager.

4. Change the Value Data field from 0 to 1.

5. Click OK.


6. Restart Intelligence Server.

If Configuration Wizard is open, it must be restarted as well.

Linux
To enable the Encryption Key Manager:

1. Locate the MSIReg.reg file in your MicroStrategy root install directory.

2. Modify the following in a text editor:

Change

[HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Feature Flags]
"KE/EncryptionKeyManager"=dword:00000000

to

[HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Feature Flags]
"KE/EncryptionKeyManager"=dword:00000001

3. Save and close.

4. Restart Intelligence Server.

If Configuration Wizard is open, it must be restarted as well.
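The substitution in step 2 can also be scripted. A sketch that flips the flag in a copy of the file (the sample content and temporary location stand in for your real MSIReg.reg):

```python
import tempfile
from pathlib import Path

# Illustrative sketch: flipping the KE/EncryptionKeyManager flag in a
# registry export file, mirroring the manual edit described above.
def enable_ekm(reg_path):
    path = Path(reg_path)
    path.write_text(path.read_text().replace(
        '"KE/EncryptionKeyManager"=dword:00000000',
        '"KE/EncryptionKeyManager"=dword:00000001'))

# Demo against a throwaway copy:
with tempfile.TemporaryDirectory() as tmp:
    reg = Path(tmp) / "MSIReg.reg"
    reg.write_text('[HKEY_LOCAL_MACHINE\\SOFTWARE\\MicroStrategy\\Feature Flags]\n'
                   '"KE/EncryptionKeyManager"=dword:00000000\n')
    enable_ekm(reg)
    flipped = 'dword:00000001' in reg.read_text()

print(flipped)  # True
```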

Updating the Metadata with Encryption Key Manager


Once Encryption Key Manager is enabled, the metadata must be updated to become encrypted. The encryption keys and master key are automatically generated and stored locally during the metadata upgrade. See the Update the Metadata chapter in the Upgrade Help for steps to complete this process.


The metadata cannot be decrypted once it is encrypted with the Encryption Key Manager feature enabled, and the encrypted metadata objects cannot be used in an environment with the Encryption Key Manager feature disabled. Ensure you have a full backup of your metadata before an update.

Prevent a CSRF Attack


You can use the Prevention of CSRF attack (Cross-Site Request
Forgery) security feature to prevent CSRF attacks.

To prevent a CSRF attack, you can turn on the validateRandNum parameter:

1. Locate the sys_defaults.xml file in the following file path:

C:\ProgramFile\MicroStrategy\Web ASPx\Web-INF\xml

2. Update the value to 1 in the file, as seen below:

<pr des="Used to show if we use random token check before process request" n="validateRandNum" scp="server" v="1"/>

Once you enable this setting, a dynamic token that is unique to the user session is appended to each request. If this setting is enabled, URL API requests are denied.

3. Restart your web server to apply your changes.
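The edit in step 2 can also be made programmatically. A sketch using Python's standard XML parser (the root element name here is a simplified stand-in for the real sys_defaults.xml layout):

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: setting validateRandNum to "1" programmatically.
# The <prs> root is a stand-in; the real file's layout may differ.
def enable_rand_num_check(xml_text):
    root = ET.fromstring(xml_text)
    for pr in root.iter("pr"):
        if pr.get("n") == "validateRandNum":
            pr.set("v", "1")
    return ET.tostring(root, encoding="unicode")

sample = '<prs><pr n="validateRandNum" scp="server" v="0"/></prs>'
updated = enable_rand_num_check(sample)
print('v="1"' in updated)  # True
```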

Disallow Custom HTML and JavaScript in Dashboards, Documents, Reports, and Bots

Starting in MicroStrategy ONE (March 2024), all custom Web content is disabled by default to ensure a secure platform configuration. To enable custom Web content in dashboards, documents, reports, and Bots, see Enable Custom HTML and JavaScript Content in Dashboards, Documents, Reports, and Bots. Although unadvised due to security risks, you can enable custom Web content without auditing and allow custom HTML content. For more information, see Disable Granular Controls of HTML or JavaScript Content in an Environment, which reverts the content behavior to be similar to MicroStrategy 2021 Update 12 and earlier.

If you are upgrading to MicroStrategy ONE (March 2024) from a previous version, see KB486433: HTML Content Settings When Upgrading to MicroStrategy One (March 2024) or Later From Previous Versions.

If you are using the Content Inspector tool with certified objects, see
KB486729: Use Content Inspector with Certified Objects.

There are additional settings that control the HTML and JavaScript behavior
in Web. For more information, see How to Control the Use of HTML and
JavaScript in Web.

You can disallow custom HTML in dashboards to ensure a secure environment. When editing a dashboard, you can display third-party Web applications or custom HTML and JavaScript directly in the dashboard, if you have the appropriate privileges.

HTML and JavaScript Content Rendering


You can create HTML and JavaScript content using the following methods:

l In the Attribute Editor, assign the attribute form as HTML tag. See
Attribute Editor for more information.


l In the Metric Editor, select the checkbox next to Set as HTML content.

l Insert an HTML container and add HTML text to it. For more information,
see Add an HTML Container.


Security Settings and Privileges to Render HTML and JavaScript Content

HTML and JavaScript content can potentially expose XSS vulnerabilities and is governed by a series of settings and user privileges.

Environment-level Security Settings Starting in MicroStrategy 2021 Update 1
1. In Workstation, connect to your environment.

2. Right-click on your environment and choose Properties.

3. In the left pane, click Security Settings.

4. In Content Settings, find the Enable HTML and JavaScript content in dossiers toggle.

l By default, the setting is toggled on. Therefore, HTML and JavaScript content is enabled for all dossiers (renamed dashboards in MicroStrategy ONE (March 2024)).

l If the setting is toggled off, HTML and JavaScript content is disabled for all dashboards in this environment.

5. Click OK.

MicroStrategy ONE (March 2024) and Later Environment-Level Security Settings and Upgrade Impact
Starting in MicroStrategy ONE (March 2024), the Enable granular control
of HTML and JavaScript content setting replaces the Enable HTML and
JavaScript content in dossiers setting. The new setting provides more
detailed control over all content types including dashboards, documents,
reports, and Bots. It also includes a Content Inspector tool which helps
security administrators perform a security check of the content.


l If the setting is enabled before upgrading to MicroStrategy ONE (March 2024) or later, the Enable granular control of HTML and JavaScript content setting automatically takes effect by default.

l See the following image of the setting before the upgrade:

l See the following image of the setting after the upgrade:

l If the Enable HTML and JavaScript content in dossiers setting was disabled in your environment before upgrading to MicroStrategy ONE (March 2024) or later, the setting is renamed to Enable HTML and JavaScript content in dashboards and no further action is needed. However, there are behavioral changes after upgrading:

o Report, document, and dashboard HTML content cannot render.

o When creating HTML content, ensure you have the Create custom HTML and JavaScript content privilege.
o See the following image of the setting before the upgrade:


o See the following image of the setting after the upgrade:

l If you enable the Enable HTML and JavaScript content in dashboards setting after upgrading to MicroStrategy ONE (March 2024) or later, granular control of HTML and JavaScript content is automatically activated. After exiting Security Settings in Workstation, the setting is automatically updated to the new setting name: Enable granular control of HTML and JavaScript content. This update ensures a seamless transition to the enhanced security and control included in MicroStrategy ONE (March 2024) and later.

l If you disable the Enable granular control of HTML and JavaScript content setting after upgrading to MicroStrategy ONE (March 2024) or later, the behavior reverts to MicroStrategy 2021 Update 12 and earlier.

For best security practices, MicroStrategy recommends that you invalidate all caches after enabling this setting. For more information, see Invalidate All Caches.

Content-Level Security Settings

Starting in MicroStrategy ONE (March 2024), a content-level security setting, Enable HTML and JavaScript content, is introduced at the dashboard, document, report, or Bot level.


By default, the setting is toggled off which means the HTML and JavaScript
content is disabled. The content can be enabled after performing content
inspection (for more information, see Audit and Allow Custom HTML
Content) or at the content level, as shown below.

Edit Content-Level Setting from Object Properties on Workstation

1. Go to Dashboards, Documents, Reports, or Bots.

2. Right-click a piece of content and choose Properties.

3. Click Security Settings in the left pane.

4. Toggle on or off Enable HTML and JavaScript Content.

5. Click OK.

Edit Content-Level Setting from Editors on Web Authoring, Library, or Workstation

Report


This option is available in Web authoring.

1. Open a report.

2. Click Tools > Report Options....

3. Click the Advanced tab.

4. Select or deselect the checkbox next to Enable HTML and JavaScript content.

5. Click OK.

Document

This option is available in Web authoring and Workstation.

1. Open a document.

2. Click Tools > Document Properties.

3. In the left pane, choose Document.

4. Select or deselect the checkbox next to Enable HTML and JavaScript content.

5. Click OK.

Dashboard

This option is available in Web authoring, Library, and Workstation.

1. Edit a dashboard.

2. Click File > Dashboard Properties.

3. Select or deselect Enable HTML and JavaScript content.


4. Click OK.

If you edit and save dashboard, document, report, or Bot content without the Configure server basic and Configure security settings privileges, the content-level setting is automatically disabled. If you rename or change the Access Control List of a dashboard, document, report, or Bot, the content-level setting does not change.

MicroStrategy ONE (March 2024) and Later User Privilege Requirements
l The Create HTML Container privilege is renamed to Create custom
HTML and JavaScript content.

l The Web create HTML container privilege is renamed to Web create custom HTML and JavaScript content.

l Attribute forms with the HTML tag type and HTML containers can only be
added to a dashboard or document if the user has the Create custom
HTML and JavaScript content or Web create custom HTML and
JavaScript content privileges.

l Metrics and derived metrics with the HTML data type can only be added if
the user has the Create custom HTML and JavaScript content or Web
create custom HTML and JavaScript content privileges. Only metrics
configured with the HTML data type will render custom HTML content in
grids.

l The environment-level setting, Enable HTML and JavaScript content in dashboards, and the content-level setting, Enable HTML and JavaScript Content, can only be enabled if the user has the Configure server basic and Configure security settings privileges.

User Privilege Requirements Before MicroStrategy ONE (March 2024)
l HTML containers can only be added to a dashboard or document if the
user has the Web Create HTML Container privilege.

l Project schema attributes with HTML Tag type forms can only be created
by users with the Create schema objects privilege.

l Attributes with the HTML Tag type and text metrics can be created using
Data Import and can be added to a dashboard if the user has the Web
manage Document and Dashboard datasets privilege, in addition to one
of the following privileges:
o Access data (files) from Local, URL, DropBox, Google Drive, Sample
Files, Clipboard
o Access data from Cloud App (Google Analytics, Salesforce Reports, Facebook, Twitter)
o Allow data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Metrics with the data type Text may contain custom markup or code. Metrics with text can be created in a dashboard or document if the user has the following privileges:
o Create Derived Metrics
o Web create Derived Metrics and Derived Attributes

l Project metrics can only be created by users with the Use Metric Editor or
Web Use Metric Editor privilege.

Render Rules for HTML and JavaScript Content

HTML and JavaScript content renders when running a report, dashboard, document, or Bot only if it is enabled. If the content is disabled, it is replaced with a warning symbol, displays as raw data, or displays a warning message. In all cases, HTML and JavaScript is not rendered and malicious code is not triggered.

For example, in the image below, a dashboard contains an HTML container, a bar chart, and a grid. When HTML and JavaScript content is disabled, the HTML container content is replaced with a warning message. The HTML Tag Attribute data and HTML Metric data in the grid are replaced with a warning icon. The HTML Tag Attribute and HTML Metric data on the axis and tooltip of the bar chart display as raw data.


If an owner has the Create custom HTML and JavaScript content or Web
create custom HTML and JavaScript content privileges, the owner is able
to see the HTML or JavaScript content rendered on the dashboard,
document, report, or Bot even if the content is set to disabled.

Disable Granular Controls of HTML or JavaScript Content in an Environment

Starting in MicroStrategy ONE (March 2024), the Enable granular control of HTML and JavaScript content setting replaces Enable HTML and JavaScript content in dossiers.

Starting in MicroStrategy 2021 Update 1, custom HTML or JavaScript can be disabled if you turn off the Enable custom HTML and JavaScript content in dossiers setting:

This setting is only available in Workstation.


1. In Workstation, connect to your environment.

2. Right-click on your environment and choose Properties.

3. In the left pane, click Security Settings.

4. In Content Settings, if you are using MicroStrategy ONE (March 2024) or later, toggle off Enable granular control of HTML and JavaScript content. If you are using a version earlier than MicroStrategy ONE (March 2024), toggle off Enable custom HTML and JavaScript content in dossiers.

MicroStrategy does not recommend disabling this setting.

5. Click OK.

Considerations When Turning Off the Enable HTML and JavaScript Content in Dossiers Setting
The following considerations apply to MicroStrategy versions before
MicroStrategy ONE (March 2024).


Rendering and Exporting

After turning the setting off, all custom HTML or JavaScript is removed or encoded. The custom code stops rendering in Workstation and all Web and Mobile clients. HTML containers in dashboards display the following message:

This content has been disabled. Please contact your administrator.

The HTML Container button disappears from the dashboard toolbar. Users with the appropriate access can edit a dashboard and remove their HTML containers but cannot add them back.

Grids that previously displayed images, links, or other custom content using
attribute forms with the HTML Tag type display a yellow icon with a tool tip
that indicates the content is disabled.


The contents of any text metrics are encoded before they display on the
grid. Any markup or code displays as raw text in the browser or mobile
client. When you export the grid data as a CSV, the rendering behavior is
the same.

You can display hyperlinks in grids using attribute forms with the URL type.

You can display hyperlinks in grids in project schemas when editing an attribute form in Developer, or you can use the Prepare Data interface in Data Import in Workstation and Web.

You can create dynamic hyperlinks in dashboards if you right-click an attribute in the Datasets panel and click Create Links.

For more information, see Create Dynamic Links in a Grid.


Mobile Caching
After you disable the setting, the change immediately applies to all mobile
clients. If a dashboard is open when the setting is disabled, the change
applies when the user closes and reopens the dashboard.

History List
After you disable the setting, the change also applies to any content sent to
a History List. Custom HTML and JavaScript will not be disabled for History
List entries and caches that were created when the setting was enabled.
MicroStrategy suggests that Administrators manually delete all History List
entries and caches when you disable the setting.

Enable Custom HTML and JavaScript Content in Dashboards, Documents, Reports, and Bots

Use the following steps to enable custom Web content in dashboards, documents, reports, and Bots in MicroStrategy ONE (March 2024) and later.

Invalidate All Caches

To ensure all HTML and JavaScript content is governed by the updated granular security controls, MicroStrategy recommends that you invalidate all caches before allowing custom HTML content. There are multiple methods to invalidate all caches:

l Delete Files in the Cache Folder

l Use Workstation

l Use Command Manager

l Use Developer


Delete Files in the Cache Folder

1. Locate your cache folder. For example, ${InstallationPath}\Caches\MicroStrategy Tutorial Server\Servertec-w-1174084_PA13890BC11D4E0F1C000EB9495D0F44F\RWDCache, where:

l tec-w-1174084 is the Intelligence server name.

l A13890BC11D4E0F1C000EB9495D0F44F is the project ID.

2. Select the cache files in bulk and delete them.

3. Repeat steps 1-2 for each project.

4. Optionally, if your environment is in an Intelligence server cluster, repeat steps 1-2 for each node.
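The bulk deletion in steps 1-2 can be sketched as follows (an illustrative helper, not a MicroStrategy tool; the directory layout mirrors the example above):

```python
import tempfile
from pathlib import Path

# Illustrative sketch: delete document cache files from every RWDCache
# folder under a caches root, following the layout
# <caches root>/<project folder>/<server folder>/RWDCache/<files>.
def delete_rwd_caches(caches_root):
    deleted = 0
    for cache_file in Path(caches_root).glob("*/*/RWDCache/*"):
        if cache_file.is_file():
            cache_file.unlink()
            deleted += 1
    return deleted

# Demo against a throwaway directory tree:
with tempfile.TemporaryDirectory() as tmp:
    cache_dir = Path(tmp) / "Tutorial Server" / "Serverhost_P123" / "RWDCache"
    cache_dir.mkdir(parents=True)
    (cache_dir / "cache1.rwd").write_text("x")
    (cache_dir / "cache2.rwd").write_text("y")
    removed = delete_rwd_caches(tmp)

print(removed)  # 2
```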

Use Workstation

1. In Workstation Navigation pane, click Monitors.

2. Click Caches > Contents.

3. Right-click a cache and select Delete.


4. Repeat steps 1-3 for each cache and project.

5. Optionally, if your environment is in an Intelligence server cluster, repeat steps 1-3 for each node.

Use Command Manager

1. In Command Manager, create a new script file and enter the following
content:

DELETE DOCUMENT CACHES IN PROJECT "MicroStrategy Tutorial";


DELETE REPORT CACHES IN PROJECT "MicroStrategy Tutorial";

2. Replace MicroStrategy Tutorial with your project name.

3. Run the script.

4. Repeat steps 1-3 for each project.

5. Optionally, if your environment is in an Intelligence server cluster, repeat steps 1-3 for each node.
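If you have many projects, the script text shown in step 1 can be generated rather than typed by hand. A sketch (the project names are examples):

```python
# Illustrative sketch: generate the Command Manager cache-deletion
# statements shown above for a list of project names.
def cache_delete_script(projects):
    lines = []
    for project in projects:
        lines.append(f'DELETE DOCUMENT CACHES IN PROJECT "{project}";')
        lines.append(f'DELETE REPORT CACHES IN PROJECT "{project}";')
    return "\n".join(lines)

print(cache_delete_script(["MicroStrategy Tutorial"]))
```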

Use Developer

For steps to purge result caches in a project, see Purging all Result Caches in a Project.

Audit and Allow Custom HTML Content


MicroStrategy ONE (March 2024) includes a new Content Inspector tool that
reduces friction when you upgrade to new security features and granular
controls. The Content Inspector tool allows you to query the entire metadata
to find objects that contain custom Web content (via HTML containers or
attributes and metrics with the HTML Tag form type), review custom Web
content, and approve it by enabling the Enable HTML and JavaScript
content option on each object or in bulk. Only dashboards, documents,
reports, and Bots that have the Enable HTML and JavaScript content
option enabled by a security administrator will render custom Web content.


This setting mitigates security risks associated with custom HTML and
JavaScript that execute when users edit or consume content.

After upgrading to MicroStrategy ONE (March 2024) or later, MicroStrategy strongly recommends that you use the Content Inspector to scan HTML content in your environment and enable HTML and JavaScript on demand.

When you use the Content Inspector for the first time, all HTML content is
disabled in your environment.

1. In Workstation, go to Environments.

2. Right-click an environment and choose Open Content Inspector.

3. You can inspect the content from the current view.

You can filter the objects by Content Type or other criteria in the Filter
panel.


4. Right-click an object and choose Inspect.

5. The detailed HTML content in this object appears.

6. Click Open Dashboard to open the object and check the HTML content.


7. Click Enable or Disable to enable or disable the HTML content.

If a green checkmark appears next to the object, the HTML content is enabled. If there is no checkmark next to the object, the HTML content is disabled.


8. To bulk enable or disable HTML content for objects:

a. Click the checkbox next to each object.

b. Right-click a selected object.

c. Choose Enable HTML and JavaScript Content or Disable HTML and JavaScript Content.

Trigger Content Inspector at the Object Level

1. In Workstation, go to Dashboards, Documents, Reports, or Bots.

2. Right-click a piece of content and choose Properties.

3. Click Security Settings in the left pane.

4. Click Inspect... next to Enable HTML and JavaScript Content.


5. The content inspection view appears.

6. Select an object and click Enable or Disable to enable or disable HTML and JavaScript content.


Configure Session Idle Timeouts


To apply session idle timeout settings in MicroStrategy Web:

1. In the MicroStrategy Intelligence Server Configuration window, expand Governing Rules, then Default in the left pane.

2. Click General and navigate to the Web user session idle time (sec) field.

3. Enter the session idle timeout value.

4. Click OK.

Check If a Web User Session Has Expired


The MicroStrategy Intelligence Server performs a periodic check to determine whether a web user's session has expired. The Web user session idle time (sec) and User session idle time (sec) fields in the MicroStrategy Intelligence Server Configuration determine how often the check occurs. The check occurs using the lower of the two fields' values.

In execution, a user session can last longer than what is defined, depending
on when the user session was created. For example, if the session is
checked every 20 minutes and the session is set to expire after 20 minutes,
the session can last up to 39 minutes and 59 seconds if the session was
created just after a session check completes. See below for a graphical
representation:


At 10:21 AM, the user reaches their "soft timeout" or the time when the
session should be terminated. However, because the Intelligence Server
performed a session validity check one minute earlier, the session is not
terminated and lasts 39 minutes until the Intelligence Server ends the
session at 10:40 AM. Once the session is terminated, if a user returns to
their screen, they are presented with the MicroStrategy Web login screen.
The login screen will not display if the web server session is still active and
the Allow automatic login if session is lost setting is enabled.
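The timing above can be sketched as follows (an illustrative model, not Intelligence Server code): the session ends at the first periodic check that falls at or after the soft timeout.

```python
import math

# Illustrative model of the behavior described above. Times are minutes
# since server start; checks run at multiples of check_interval.
def termination_time(created_at, timeout, check_interval):
    soft_timeout = created_at + timeout
    # First check occurring at or after the soft timeout.
    return math.ceil(soft_timeout / check_interval) * check_interval

# Session created just after a check (minute 0.02), 20-minute timeout,
# checks every 20 minutes: terminated at minute 40, so the session lives
# just under 40 minutes -- matching the 39:59 worst case described above.
print(termination_time(0.02, 20, 20))  # 40
```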

For more information on the Allow automatic login if session is lost setting, see KB12867.


Enable Enforcing File Path Validation


MicroStrategy Web and Library are designed not to use any path from user-controlled input. For additional security, Web and Library support enforcing file path validation before accessing files.

To enable enforcing file path validation:

1. Add the following line to the WEB-INF\xml\sys_defaults.properties file:

enableFilePathValidation=1

After you enable the feature, only files under the web application root folder can be accessed. For example, apache-tomcat-9\webapps\MicroStrategyLibrary.

Access to files that are outside of the web application root folder is denied. If you do not need access to such files, you can skip the next step.

2. If you want to allow access to files that are outside of the web application root folder, the file path pattern must be added to the mstrExternalConfigurationFileAllowList file.

The mstrExternalConfigurationFileAllowList file must be placed in a class path that can be loaded by the web application. For example, apache-tomcat-9\webapps\MicroStrategyLibrary\WEB-INF\classes\mstrExternalConfigurationFileAllowList.

Each line in the mstrExternalConfigurationFileAllowList file defines an allowed access path using glob syntax. A line that cannot be parsed is ignored.


For more information on the supported glob syntax, see the FileSystem.getPathMatcher(String) section of Oracle's Class FileSystem documentation.

Examples of file patterns in mstrExternalConfigurationFileAllowList:

l Matches one absolute file path on the Windows platform: D:\\trusted.jks

Note that the backslash is escaped (doubled), as required by glob syntax.

l Matches all files underneath /home on UNIX platforms: /home/**

3. Once your configuration is complete, restart the web server to apply the
changes.
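The allowlist check can be illustrated with a simplified matcher (not the server's implementation, which uses Java's glob PathMatcher; this sketch handles Unix-style paths only): ** crosses directory separators, while * stays within one path segment.

```python
import re

# Simplified sketch of glob-style allowlist matching: "**" matches
# across "/", "*" matches within a single path segment.
def glob_allows(path, patterns):
    for pattern in patterns:
        regex = (re.escape(pattern)
                 .replace(r"\*\*", ".*")      # "**" crosses "/"
                 .replace(r"\*", "[^/]*"))    # "*" stays in one segment
        if re.fullmatch(regex, path):
            return True
    return False

allowlist = ["/home/**", "/etc/app/*.conf"]
print(glob_allows("/home/user/trusted.jks", allowlist))  # True
print(glob_allows("/etc/app/sub/x.conf", allowlist))     # False
```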

Configure SameSite Cookies for Library


Starting in MicroStrategy 2021 Update 7, you can manage SameSite cookies
for Library in Workstation. See Chrome v80 Cookie Behavior and the Impact
on MicroStrategy Deployments for managing SameSite cookies in
MicroStrategy 2021 Update 6 and older.

SameSite prevents the browser from sending cookies along with cross-site
requests. The main goal is to mitigate the risk of cross-origin information
leakage. It also provides protection against cross-site request forgery
attacks. Possible values are as follows:

l Lax Provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after they arrive from an external link. The default option for SameSite is Lax, including when no option is selected.

l Strict Prevents the cookie from being sent by the browser to the target
site in all cross-site browsing contexts, even when following a regular link.

l None Allows cookies in all cross-site browsing contexts.
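For illustration, Python's standard http.cookies module can show how these attributes appear in a Set-Cookie header (a generic sketch, not how the Library server emits its cookies):

```python
from http.cookies import SimpleCookie

# Generic sketch of a Set-Cookie header carrying SameSite and Secure.
# SameSite=None must be paired with the Secure flag, as noted above.
cookie = SimpleCookie()
cookie["JSESSIONID"] = "abc123"           # example session ID
cookie["JSESSIONID"]["samesite"] = "None"
cookie["JSESSIONID"]["secure"] = True
header = cookie.output()
print(header)  # a Set-Cookie line containing SameSite=None and Secure
```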


An HTTPS connection is a prerequisite for the None selection. If the SameSite cookie attribute is set to None, the associated cookie must be marked as Secure.

A SameSite attribute of None is recommended in the following scenarios:

l There are cross-domain compatibility issues.

l MicroStrategy Web and MicroStrategy Library are deployed in a domain other than the one displayed in the user's address bar.

l You are using Security Assertion Markup Language (SAML), OpenID Connect (OIDC), and third-party authentication.

The cookie flag changes vary depending on your server:

l Tomcat Web Servers

l WebLogic Web Servers

l JBoss Web Servers

Due to application server limitations, settings in the user interface only apply to the JSESSIONID cookie on Tomcat application servers.

Tomcat Web Servers


1. In Workstation, connect to the Library environment with an admin user.

2. Right-click the environment and choose Properties.

Choose Get Info if you are using a Mac.

3. In the left pane, click Library and scroll down to the Cookies section.


4. Based on your requirements, select the appropriate SameSite attribute and click OK. The SameSite attribute is unselected by default.

5. Restart the Library server.

Learn more about the other settings on this dialog in View and Edit Library
Administration Settings.

WebLogic Web Servers


1. In Workstation, connect to the Library environment with an admin user.

2. Right-click the environment and choose Properties.

Choose Get Info if you are using a Mac.

3. In the left pane, click Library and scroll down to the Cookies section.

4. Based on your requirements, select the appropriate SameSite attribute and click OK. The SameSite attribute is unselected by default.


5. In your MicroStrategy deployment, navigate to MicroStrategyLibrary\WEB-INF\weblogic.xml.

6. Edit weblogic.xml and add the following code:

<wls:session-descriptor>
<wls:cookie-path>/;SameSite=NONE</wls:cookie-path>
</wls:session-descriptor>

7. Click Save and restart the Web server.

JBoss Web Servers


Setting SameSite as None for the JSESSIONID cookie is only supported by JBoss 7.3.3 and newer. The following procedure was tested using JBoss 7.3.7.

1. In Workstation, connect to the Library environment with an admin user.

2. Right-click the environment and choose Properties.

Choose Get Info if you are using a Mac.

3. In the left pane, click Library and scroll down to the Cookies section.

4. Based on your requirements, select the appropriate SameSite attribute and click OK. The SameSite attribute is unselected by default.


5. In JBoss, navigate to
jboss/standalone/configuration/standalone.xml.

6. Edit standalone.xml and add <session-cookie http-only="true" secure="true"/> to the existing code as shown below.

<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server"
           default-virtual-host="default-host" default-servlet-container="default"
           default-security-domain="other"
           statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http"
redirect-socket="https" enable-http2="true"/>
<https-listener name="https" socket-binding="https"
security-realm="ApplicationRealm" enable-http2="true"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<http-invoker security-realm="ApplicationRealm"/>
</host>
</server>
<servlet-container name="default">
<jsp-config/>
<session-cookie http-only="true" secure="true"/>
<websockets/>
</servlet-container>
<handlers>
<file name="welcome-content"
path="${jboss.home.dir}/welcome-content"/>
</handlers>
</subsystem>

7. Create a new file named undertow-handlers.conf using the code shown below and save it to the WEB-INF folder of the MicroStrategy Library deployment.

samesite-cookie(mode=NONE)

8. Restart the Web server.


Configure SameSite Cookies for MicroStrategy Web and MicroStrategy Mobile

The information in this topic applies to MicroStrategy Mobile, as well as MicroStrategy Web.

Starting in MicroStrategy 2021 Update 7, you can manage SameSite cookies for MicroStrategy Web and Mobile in the MicroStrategy Web and Mobile Administrator pages, respectively. See Configure the SameSite Flag for MicroStrategy Deployments for managing SameSite cookies in MicroStrategy 2021 Update 6 and older.

SameSite prevents the browser from sending cookies along with cross-site
requests. The main goal is to mitigate the risk of cross-origin information
leakage. It also provides protection against cross-site request forgery
attacks. Possible values are as follows:

l Lax Provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. The default option for SameSite is Lax, including when no option is selected.

l Strict Prevents the cookie from being sent by the browser to the target
site in all cross-site browsing contexts, even when following a regular link.

l None Allows cookies in all cross-site browsing contexts.

An HTTPS connection is a prerequisite for the None selection. If the SameSite cookie attribute is set to None, the associated cookie must be marked as Secure.
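To illustrate that requirement, a session cookie sent with SameSite=None must also carry the Secure flag. A compliant response header looks like the following sketch (the cookie value and path are examples):

```
Set-Cookie: JSESSIONID=1A2B3C4D5E; Path=/MicroStrategyLibrary; HttpOnly; Secure; SameSite=None
```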

A SameSite attribute of None is recommended in the following scenarios:


l There are cross-domain compatibility issues.

l MicroStrategy Web and MicroStrategy Library are deployed in a domain other than the one displayed in the user's address bar.

l You are using Security Assertion Markup Language (SAML), OpenID Connect (OIDC), or third-party authentication.

The cookie flag changes vary depending on your server:

l Tomcat Web and Mobile Servers

l IIS Web and Mobile Servers

l WebLogic Web and Mobile Servers

l JBoss Web and Mobile Servers

Due to application server limitations, settings in the user interface only apply to the JSESSIONID cookie on Tomcat application servers.

Tomcat Web and Mobile Servers


1. Access the MicroStrategy Web Administrator page. (How?)

2. In the left pane, select Security.

3. Based on your requirements, select the appropriate SameSite attribute. The SameSite attribute is unselected by default.

4. Click Save and restart the Web server.


IIS Web and Mobile Servers


The application server must support the SameSite cookie changes. Upgrade
the .NET Framework to v4.8 and make sure that the latest updates have been
applied. See KB articles that support SameSite in .NET Framework on the
Microsoft Docs site for more information.

1. Navigate to C:\Program Files (x86)\MicroStrategy\Web ASPx\web.config.

2. Create a backup of web.config.

3. In web.config, add the parameters shown below.

In the <sessionState /> tag, add cookieSameSite="None"

In the <httpCookies /> tag, add sameSite="None"

4. Contact your IT team to configure and enable SSL and apply the
necessary certificates on the IIS server.

5. Go to the SSL Settings and select Require SSL.

6. Restart the IIS Web server.
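Assembled, the additions from step 3 look roughly like the following web.config fragment. The placement inside <system.web> and the requireSSL attribute (which pairs with the SSL requirement in steps 4 and 5) are assumptions to verify for your deployment:

```xml
<system.web>
  <!-- Step 3: emit SameSite=None on the ASP.NET session cookie -->
  <sessionState cookieSameSite="None" />
  <!-- Step 3: default SameSite for other cookies; requireSSL is an assumed addition -->
  <httpCookies sameSite="None" requireSSL="true" />
</system.web>
```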

WebLogic Web and Mobile Servers


1. Access the MicroStrategy Web Administrator page. (How?)

2. In the left pane, select Security.


3. Based on your requirements, select the appropriate SameSite attribute and click Save. The SameSite attribute is unselected by default.

4. In the MicroStrategy deployment, navigate to MicroStrategy\WEB-INF\weblogic.xml.

5. Edit weblogic.xml and add the code shown below.

<session-descriptor>
<cookie-path>/;SameSite=NONE</cookie-path>
<cookie-secure>true</cookie-secure>
</session-descriptor>

6. Click Save and restart the Web server.

JBoss Web and Mobile Servers


Setting SameSite as None for the JSESSIONID cookie is only supported by JBoss 7.3.3 and newer. The following procedure was tested using JBoss 7.3.7.

1. Access the MicroStrategy Web Administrator page. (How?)

2. In the left pane, select Security.

3. Based on your requirements, select the appropriate SameSite attribute and click Save. The SameSite attribute is unselected by default.


4. In JBoss, navigate to
jboss/standalone/configuration/standalone.xml.

5. Edit standalone.xml and add <session-cookie http-only="true" secure="true"/> to the existing code as shown below.

<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server"
           default-virtual-host="default-host" default-servlet-container="default"
           default-security-domain="other"
           statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http"
redirect-socket="https" enable-http2="true"/>
<https-listener name="https" socket-binding="https"
security-realm="ApplicationRealm" enable-http2="true"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<http-invoker security-realm="ApplicationRealm"/>
</host>
</server>
<servlet-container name="default">
<jsp-config/>
<session-cookie http-only="true" secure="true"/>
<websockets/>
</servlet-container>
<handlers>
<file name="welcome-content"
path="${jboss.home.dir}/welcome-content"/>
</handlers>
</subsystem>

6. Create a new file named undertow-handlers.conf using the code shown below and save it to the WEB-INF folder of the MicroStrategy Web/Mobile deployment.

samesite-cookie(mode=NONE)

7. Restart the Web server.


Enable App Transport Security Using MicroStrategy Mobile SDK or Library SDK

To enable App Transport Security (ATS):

1. Open the MicroStrategyMobile SDK project.

2. Locate the Info_IPad.plist file or Info_IPhone.plist file in the project navigator.

3. Locate App Transport Security Settings in the property list and expand it.

4. Locate the Allow Arbitrary Loads property and update the value to
NO.
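In source form, the change from steps 3 and 4 corresponds to the following Info.plist fragment. The surrounding keys in your plist may differ; this shows only the ATS dictionary:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Allow Arbitrary Loads = NO: enforce ATS for all connections -->
    <key>NSAllowsArbitraryLoads</key>
    <false/>
</dict>
```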

Enable Support for HTTP Strict Transport Security (HSTS)
HTTP Strict Transport Security (HSTS) allows websites to force web clients
to interact only using HTTPS and helps protect against protocol downgrade
attacks.

HSTS Implications
After HSTS is enabled, all HTTP requests from a particular domain name (for example, myweb.server.com) are converted to HTTPS requests by the browser.

HSTS will affect all other applications hosted on your domain. Before
enabling HSTS, MicroStrategy suggests that your IT or Network team
evaluate it.

Enable HSTS
Configuring HSTS varies for each application server; see your vendor documentation for more information. You can use the following links to configure HSTS:

l Tomcat
o https://tomcat.apache.org/tomcat-9.0-doc/config/filter.html#HTTP_
Header_Security_Filter
o https://docs.microfocus.com/SM/9.52/Hybrid/Content/security/concepts/
support_of_http_strict_transport_security_protocol.htm

l IIS
o Use the following custom header solution:

<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Strict-Transport-Security" value="max-
age=31536000"/>
</customHeaders>
</httpProtocol>
</system.webServer>

o https://docs.microsoft.com/en-
us/iis/configuration/system.webserver/httpprotocol/customheaders/
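For Tomcat, the filter described in the first link above can be enabled in conf/web.xml. The fragment below is a sketch using the HttpHeaderSecurityFilter parameters from that reference; confirm the parameter names against your Tomcat version:

```xml
<filter>
    <filter-name>httpHeaderSecurity</filter-name>
    <filter-class>org.apache.catalina.filters.HttpHeaderSecurityFilter</filter-class>
    <init-param>
        <!-- Send the Strict-Transport-Security header on secure responses -->
        <param-name>hstsEnabled</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <!-- max-age of one year, as in the IIS example in this section -->
        <param-name>hstsMaxAgeSeconds</param-name>
        <param-value>31536000</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>httpHeaderSecurity</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```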

The third-party product(s) discussed in this technical note is manufactured by vendors independent of MicroStrategy. MicroStrategy makes no warranty, express, implied or otherwise, regarding this product, including its performance or reliability.

Library Administration Control Panel


You can access the Library Admin control panel by navigating to <FQDN>:<port>/MicroStrategyLibrary/admin. The control panel allows you to examine and configure settings for the Library server and Collaboration server. Additionally, you can view the settings related to communication with the Intelligence server and configurations in MicroStrategy Library, such as the Intelligence server cluster it is connected to.

You can also View and Edit Library Administration Settings in Workstation.

Starting in MicroStrategy 2021 Update 8, you can Select an Authentication Mode at the Application Level in Workstation.

Overview Tab
The Library Admin Overview tab provides the ability to examine the machine
name, port, state, and Intelligence and Collaboration servers. The top right
corner of each component displays its current status. Click Edit to change
settings for Intelligence and Collaboration servers. You can also choose to
expose or hide the Collaboration service feature to users.

Configure Collaboration Server


To add a Collaboration server, only the machine name (hostname) and port
are required.


The Collaboration server uses 3000 as the default port. If the administrator
uses 0 as the port number when configuring the Collaboration server, it will
be replaced with 3000.

Configure Intelligence Server


To add an Intelligence server, the administrator needs to specify the
machine name (hostname), port, and TLS status.

The Intelligence server uses 34952 as the default port. If the administrator
uses 0 as the port number, the Library server tries to connect to an
Intelligence server under port 34952.

Ensure the input is the DNS name of an Intelligence Server and not another intermediate component. Load balancers are not supported between the MicroStrategy Library and Intelligence Server layers, as load balancing is handled within these layers by MicroStrategy Library.

Error / Warning Messages / Troubleshooting


Existing configuration issues appear with an error or warning icon on the
Overview page:

l Error icons in red: The system is not running as expected. Click to view a
more detailed message.

l Warning icons in yellow: There are some minor issues that should be
addressed.

When you click an error or warning icon, a pop-up window displays a detailed error message. It provides actions and links regarding why the issue occurred and how to resolve it.


Timeout Settings
If you receive a Timeout Error in the Library Admin control panel, modify the
timeout settings in the Library server configuration file
configOverride.properties to a larger value. For example:

http.connectTimeout=5000
http.requestTimeout=10000

Library Server Tab


The Library Server tab allows an administrator to view their deployment's Library URL and Web Server URL; configure authentication modes and security settings; extend user sessions; access the Mobile configuration link; and configure connectivity settings to the Modeling service.


Library Server
The connection URL for MicroStrategy Library.

MicroStrategy Web
This provides a connection URL to MicroStrategy Web, which enables the
Web options in Library.

Related Content Settings


This section allows you to manage the content displayed in the Related
Content section of your dashboard. You can choose to show all related
content, restrict recommendations to only certified items, or disable the
setting completely.

Authentication Modes
The Authentication Modes section allows an administrator to dictate which authentication modes to allow. When the authentication mode is updated and saved, the server typically requires a restart.

Trusted authentication mode cannot be used in combination with any other login mode.

Starting in MicroStrategy 2021 Update 8, you can Select an Authentication Mode at the Application Level in Workstation.

Security Settings

Chrome Web Browser version 80 and above introduces new changes which
may impact embedding. For more information, see KB484005: Chrome v80
Cookie Behavior and the Impact on MicroStrategy Deployments.

As with any web application, extending a user session may increase security risk and exposure. The extended session is governed by the Intelligence server Authentication Policy/Token Lifetime. By default, a Library HTTP session times out after 30 minutes. Enabling the Keep users logged in feature will extend the user session beyond 30 minutes and up to the Token Lifetime. It is strongly recommended that you do not modify the Token Lifetime beyond the default value of 24 hours.

The first security settings section allows administrators to allow Library embedding in other sites. In other words, this section enables Cross-Origin Resource Sharing (CORS) settings. CORS is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) access selected resources from a server at a different origin. You must enable your CORS settings to use MicroStrategy products like HyperIntelligence or MicroStrategy for Office, or to embed a dashboard in a website.

To enable CORS, select All. When this security setting is updated and
saved, restart the Library application.


Additionally, the security settings section allows an administrator to enter an optional secret key, which is used to allow session sharing between MicroStrategy Web and MicroStrategy Library.

To extend a user session, select the Keep users logged in checkbox.

Mobile Configuration
The mobile configuration link provides the mobile server link, which can be copied and accessed.

Global Search
Select Enable Global Search to allow users to search for content outside
their personal library.

Modeling Service

To share sessions between MicroStrategy Library and the MicroStrategy Modeling service, ensure the secret key in MicroStrategy Library's Security Settings section matches the secret key in the MicroStrategy Modeling service.

For more information on how to configure the secret key in the Modeling
service, see Modeling Service Configuration Properties.

Connection Method


This setting allows the administrator to manage the default approach for how the Library server locates the Modeling service.

l Auto-discovery: Allows the Library server to automatically locate the Modeling service and set up the connection using Consul and/or the connected Intelligence Service by default. (Recommended)

l Specified URLs: Allows the Library server to refer to the specified URL
path to locate the Modeling service and set up the connection. If it fails, it
falls back to Auto-discovery.

Any changes to this section only impact new user sessions. Users with existing sessions must log out and log back in.

Enable TLS

Enable this option if the Modeling service is HTTPS enabled, but with a
private Root CA certificate or self-signed certificate. In this case, a
trustStore file and its password must be configured within
configOverride.properties. For a Modeling service that is HTTPS
enabled with a public Root CA certificate, disable this option.

For more information on how to set up the HTTPS connection between the
Library server and the Modeling service, see Configure HTTPS Connection
Between Library Server and Modeling Service.
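As a sketch, the trustStore entries in configOverride.properties take roughly the following shape. The property keys and values shown here are assumptions for illustration only; take the exact keys from Configure HTTPS Connection Between Library Server and Modeling Service:

```
# configOverride.properties (property names illustrative; verify before use)
trustStore.path=/opt/mstr/certs/truststore.jks
trustStore.passphrase=changeit
```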

Intelligence Server Tab


The Intelligence server tab allows the administrator to manage the Intelligence server machine information.


It includes the following sections:

l Intelligence Server Machine Information

This section allows for the input of Machine name and Port Number for the
Intelligence server.

l Intelligence Server Connection Settings

The Intelligence server Connection Settings tab allows the administrator to configure the following parameters:

l Initial Pool Size

l Maximum Pool Size

l Request Timeout (socket connection request timeout, in milliseconds, for the Intelligence server)

l Working Set Size (indicates the number of recent reports or documents in memory for manipulation)


l Search Working Set Size (indicates the maximum number of concurrent searches which stay in memory)

Collaboration Server Tab


This tab displays the server URL and the state of the Collaboration server machine. It also allows the administrator to update the following settings for the Collaboration server that is currently connected:

l Enable Comments or Discussions

l Enable TLS

l Enable Logging

l Trusted Certificate Setting

To access this section, communication between the Library server and the Collaboration server must be established first, with no errors or warnings from the Library server to the Collaboration server on the Overview page.

Manage Collaboration Panel Features


Manage how your users interact with Collaboration and messaging. This
setting allows you to enable or disable the Comments or Discussions panel
for end users.

You can choose to show Comments only, Discussions only, or hide the Collaboration panel entirely across the environment. To do this, use the checkboxes or the toggle to easily turn off functionality globally. After you make new selections in Library Admin, end users see the changes immediately.


Collaboration Server Setting


Enable Logging

This setting can enable or disable diagnostic logging functionality in the Collaboration server.

Enable TLS

Selecting this setting requires a keystore path and a passphrase.

Any changes to this section require the administrator to restart the Collaboration server manually. Related warnings appear on this page until the restart is complete.

Collaboration Security Setting


Allow Collaboration embedding in other sites

This setting can enable or disable embedding MicroStrategy Collaboration into other sites.


Trusted Certificate Setting


This section is only visible when the Collaboration server targets a TLS-enabled Library server.

Mobile Configuration Tab


The Mobile Configuration tab provides the ability to configure a dashboard
or document as the home screen in Library Mobile. This allows the
organization to enhance and personalize the overall Library workflow for
their end users.

The panel includes the option to create a new configuration.

When creating a new mobile configuration, administrators can customize the following:

l Create a name and description for the new configuration

l Select a dashboard or document to set as the home screen of Library


l Configure Advanced Settings

l Access: Enable access preferences, advanced settings, or set automatic configuration updates

l Connectivity: Specify a time for network timeout

l Logging: Select the maximum log size and logging level

l Cache: Choose to clear caches on logout

Enable Encryption for trustStore Secret Values


Encrypting keystores is a fundamental security practice that helps safeguard
sensitive cryptographic material, maintains confidentiality, and ensures
compliance with regulatory standards. It is an integral part of a
comprehensive security strategy to protect digital assets and maintain the
integrity and trustworthiness of cryptographic systems.

The configuration was last updated in MicroStrategy ONE Update 12. See
the following steps to enable the encryption of secret values in your
environment using one of the following methods:

l MicroStrategy Library

l MicroStrategy Web

l MicroStrategy Modeling Service

l MicroStrategy Collaboration Service

MicroStrategy Library
1. Open the configOverride.properties config file, which is located
in Tomcat Folder/webapps/MicroStrategyLibrary/WEB-
INF/classes/config/configOverride.properties.

2. Add the propertyEncryptionEnabled = true flag and save the file.


3. Restart the Service. The existing values will be encrypted automatically.

MicroStrategy Web
1. Open the sys_defaults.properties config file, which is located in
Tomcat Folder/webapps/MicroStrategy/WEB-INF/xml/sys_
defaults.properties.

If the file does not exist, you must create it manually.

2. Add the propertyEncryptionEnabled=1 flag and save the file.

3. Restart the Service. The existing values will be encrypted automatically.

MicroStrategy Modeling Service


1. Open the modelservice.properties customized file, which is
located in ${installPath}/admin/modelservice.conf.


2. Add the modelservice.featureflag.propertyEncryptionEnabled = true flag and save the file.

3. Restart the Service. The existing values will be encrypted automatically.

MicroStrategy Collaboration Service


1. Run the MicroStrategy\Collaboration Server\node_
modules\mstr-collab-svc\encrypt.js encryption script to
encrypt the identityToken.secretKey string.

2. Copy the encrypted identityToken.secretKey string to MicroStrategy\Collaboration Server\config.json as property "secretKey": XXXXX.

3. Set the "secretKeyEncrypted" : true flag to indicate that the secretKey string is encrypted.


If the flag does not exist, you must create it manually.
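Put together, steps 2 and 3 leave config.json with entries like the following sketch. The placeholder XXXXX stands for the output of encrypt.js, and the surrounding structure of your config.json may differ:

```json
{
  "secretKey": "XXXXX",
  "secretKeyEncrypted": true
}
```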

Upgrade Metadata Encryption to AES256-GCM


MicroStrategy recommends users implement full-disk and full-database level
encryption as part of a comprehensive security practice. Starting in June
2024, you can opt-in to stronger application level encryption at AES-256 for
objects stored in the metadata. The encryption is turned off by default. To
enable application level encryption at AES-256:

You must have metadata version MicroStrategy ONE (June 2024) or later to upgrade your metadata encryption to AES-256.

1. Open the MicroStrategy REST API Explorer by appending /api-docs/index.html?visibility=all to /MicroStrategyLibrary in your browser.

2. Create a session and authenticate it. In the Authentication section, use POST /api/auth/admin/login.

3. Click Try Out and modify the request body by providing your user name
and password.

4. Click Execute.


5. In the response, find X-MSTR-AuthToken.

6. To get the current feature status:

a. Under the Configurations section, look up GET /api/v2/configurations/featureFlags.

b. Click Try Out.

c. Set the proper X-MSTR-AuthToken from step 5. You can also get this by inspecting the browser network XHR requests.

d. Click Execute.

e. Search for CA/EnableAES256GCM in the response body to find its status details.

7. Under the Configurations section, look up PUT /api/configurations/featureFlags/{id}.

8. Click Try Out.

9. Set the proper X-MSTR-AuthToken from step 5. You can also get this by inspecting the browser network XHR requests.

10. Set id to 6DB42B35426C582F7D6023B5B0853061.

11. To enable this preview feature, set the status value to 1.

12. Click Execute.

13. Repeat step 6 to verify that the feature is enabled.
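The REST calls in steps 2 through 12 can also be scripted. The sketch below uses only Python's standard library; the base URL and credentials are placeholders, and the exact request and response shapes should be verified against the REST API Explorer for your version:

```python
import json
import urllib.request

BASE = "https://library.example.com/MicroStrategyLibrary/api"  # placeholder host
FLAG_ID = "6DB42B35426C582F7D6023B5B0853061"  # CA/EnableAES256GCM feature flag

def admin_login(base, username, password):
    """POST /auth/admin/login and return the X-MSTR-AuthToken response header."""
    req = urllib.request.Request(
        f"{base}/auth/admin/login",
        data=json.dumps({"username": username, "password": password}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-MSTR-AuthToken"]

def enable_aes256(base, token):
    """PUT /configurations/featureFlags/{id} with status=1 to enable the flag."""
    req = urllib.request.Request(
        f"{base}/configurations/featureFlags/{FLAG_ID}",
        data=json.dumps({"status": 1}).encode(),
        headers={"Content-Type": "application/json", "X-MSTR-AuthToken": token},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (requires a reachable Library server):
#   token = admin_login(BASE, "administrator", "password")
#   enable_aes256(BASE, token)
```

After running the script, repeat step 6 in the REST API Explorer to confirm the flag status, as the procedure above describes.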


MANAGE YOUR LICENSES


As a system administrator, it is important that you manage your MicroStrategy product licenses to maintain license compliance. Managing your licenses can also help you take full advantage of your licenses. For example, you might have a CPU-based Intelligence Server license for four CPUs, but only be using two CPUs. An audit of your licenses can alert you to this issue and you can then modify your setup so that you use all four of your licensed CPUs.

This section covers how to manage the licenses involved in your MicroStrategy system.

Manage and Verify Your Licenses


MicroStrategy licenses are managed differently according to the license type
that is purchased. Refer to your MicroStrategy contract and any
accompanying contract documentation for descriptions of the different
MicroStrategy license types.

MicroStrategy uses two main categories of licenses:

l Named User Licenses, page 724, in which the number of users with access
to specific functionality are restricted

l CPU Licenses, page 726, in which the number and speed of the CPUs
used by MicroStrategy server products are restricted

MicroStrategy License Manager can assist you in administering your MicroStrategy licenses. For information about License Manager, see Using License Manager, page 728.

When you obtain additional licenses from MicroStrategy, use License Manager to update your license information. For details, see Update Your License, page 736.


Named User Licenses


In a Named User licensing scheme, the privileges given to users and groups
determine what licenses are assigned to users and groups. Intelligence
Server monitors the number of users in your MicroStrategy system with each
privilege, and compares that to the number of available licenses.

For example, the Web Use Filter Editor privilege is a Web Professional
privilege. If you assign this privilege to User1, then Intelligence Server
grants a Web Professional license to User1. If you only have one Web
Professional license in your system and you assign any Web Professional
privilege, for example Web Edit Drilling And Links, to User2, Intelligence
Server displays an error message when any user attempts to log in to
MicroStrategy Web.

The Administrator user that is created with the repository is not considered
in the licensed user count.

To fix this problem, you can either change the user privileges to match the
number of licenses you have, or you can obtain additional licenses from
MicroStrategy. License Manager can determine which users are causing the
metadata to exceed your licenses and which privileges for those users are
causing each user to be classified as a particular license type (see Using
License Manager, page 728).

For more information about the privileges associated with each license type,
see the List of Privileges section. Each privilege group has an introduction
indicating any license that the privileges in that group are associated with.
Every user must be associated with a base server module license type,
either the "Server-Reporter" or "Server-Intelligence" license. For more
information about the license types, see Privileges by License Type.

l The Client - Reporter and Client - Web licenses are linked together in a hierarchy that allows users to inherit specific privilege sets. The hierarchy is set up such that the Client - Reporter license is a subset of the Client - Web license. This means that if you have a Client - Web license, in addition to the privilege set that comes with that license, you will automatically inherit the privileges that come with the Client - Reporter license.

l The Server - Intelligence and Server - Reporter licenses are organized into a hierarchy that allows users to inherit certain privileges too. In this hierarchy, the Server - Reporter license is a subset of the Server - Intelligence license. This means that if you have the Server - Intelligence license, in addition to that license's privilege set, you will have access to the privilege set available in the Server - Reporter license. However, this does not prevent you from using the privilege set of either license individually.

Verifying Named User licenses


To verify your Named User licenses, Intelligence Server scans the metadata
repository daily for the number of users fitting each Named User license
type. If the number of licenses for a given type has been exceeded, an error
message is displayed when a user logs in to a MicroStrategy product.
Contact your MicroStrategy account executive to increase your number of
Named User licenses. For detailed information on the effects of being out of
compliance with your licenses, see Effects of Being Out of Compliance with
Your Licenses, page 727.

For steps to manually verify your Named User licenses using License
Manager, see Audit Your System for the Proper Licenses, page 734. You
can configure the time of day that Intelligence Server verifies your Named
User licenses.

To Configure the Time When Named User Licenses are Verified

1. In Developer, right-click a project source and select Configure MicroStrategy Intelligence Server.

2. Expand the Server category, and select Advanced.


3. Specify the time in the Time to run license check (24 hr format) field.

4. Click OK.

CPU Licenses
When you purchase licenses in the CPU format, the system monitors the
number of CPUs being used by Intelligence Server in your implementation
and compares it to the number of licenses that you have. You cannot assign
privileges related to certain licenses if the system detects that more CPUs
are being used than are licensed. For example, this could happen if you
have MicroStrategy Web installed on two dual-processor machines (four
CPUs) and you have a license for only two CPUs.

To fix this problem, you can either use License Manager to reduce the
number of CPUs being used on a given machine so it matches the number of
licenses you have, or you can obtain additional licenses from MicroStrategy.
To use License Manager to determine the number of CPUs licensed and, if
necessary, to change the number of CPUs being used, see Using License
Manager, page 728.

The ability to deploy Intelligence Server or MicroStrategy Web on specific,
selected CPUs (a subset of the total number of physical CPUs) on a given
machine is called CPU affinity. For details on setting up CPU affinity, see
Update CPU Affinity, page 737.

Verifying CPU Licenses


To verify your CPU licenses, Intelligence Server scans the network to count
the number of CPUs in use by Intelligence Servers. If the number of CPU
licenses has been exceeded, an error message is displayed when a user
logs in to a MicroStrategy product. Contact your MicroStrategy account
executive to increase your number of CPU licenses. For detailed information
on the effects of being out of compliance with your licenses, see Effects of
Being Out of Compliance with Your Licenses, page 727.


For steps to manually verify your CPU licenses using License Manager, see
Audit Your System for the Proper Licenses, page 734.

Effects of Being Out of Compliance with Your Licenses


If your system is determined to be out of compliance with your licenses, an
error message is displayed any time a user accesses an administrative
product, such as the MicroStrategy Web Administrator page or the
Administration icon in Developer. This message describes the specific types
of licenses that are not in compliance and states how many days remain
before Intelligence Server can no longer be restarted. This error message is
only a warning, and users can still use the administrative product.

After the system has been out of compliance for fifteen days, an additional
error message is displayed to all users when they log into a project source,
warning them that the system is out of compliance with the available
licenses. This error message is only a warning, and users can still log in to
the project source.

After the system has been out of compliance for thirty days, the products
identified as out-of-compliance have their privileges marked as unavailable
in the User Editor in MicroStrategy Developer. In addition, if the system is
out of compliance with Named User licenses, the privileges associated with
the out-of-compliance products are disabled in the User Editor, Group
Editor, and Security Role Editor to prevent them from being assigned to any
additional users.

If the time mentioned in the out-of-compliance message is longer than the
validity of your key, your product will not be accessible after the key
expires.
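The escalation above can be summarized as three stages keyed to the number of days out of compliance. The function below merely restates the text; it is not a MicroStrategy API:

```python
# Stages of the out-of-compliance escalation described in the text:
# day 0+:  warning in administrative products only
# day 15+: warning to all users at project source login
# day 30+: out-of-compliance privileges disabled in the editors
def compliance_stage(days_out):
    if days_out >= 30:
        return "privileges disabled in User, Group, and Security Role Editors"
    if days_out >= 15:
        return "warning shown to all users at project source login"
    return "warning shown in administrative products only"

print(compliance_stage(5))
print(compliance_stage(20))
print(compliance_stage(45))
```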

Contact your MicroStrategy account executive to purchase additional
licenses. For information on how Intelligence Server verifies licenses, see
Named User Licenses, page 724 and CPU Licenses, page 726.


Audit and Update Licenses


Once your MicroStrategy system is in place, Intelligence Server verifies how
your system is being used in relation to licenses and users. You can use
License Manager to ensure that your system is in compliance with your
licenses.

You can check for and manage the following licensing issues:

l More copies of a MicroStrategy product are installed and being used than
you have licenses for.

l More users are using the system than you have licenses for.

l More CPUs are being used with Intelligence Server than you have licenses
for.

Using License Manager


License Manager is a tool for auditing and administering your MicroStrategy
licenses and installation. You can run License Manager as a graphical user
interface (GUI) or as a command line tool, in either Windows or Linux
environments.

In both GUI mode and command line mode, License Manager allows you to:

l Audit your MicroStrategy products.

l Request an Activation Code and activate your MicroStrategy installation.

l Update your license key.

Additionally, in GUI mode License Manager allows you to:

l Determine the number of product licenses in use by a specified user
group.

l Display the enabled or disabled licenses used by a particular user group
for selected products.


From this information, you can determine whether you have the number of
licenses that you need. You can also print a report, or create and view a
Web page with this information.

l Update licenses by providing the new license key, without re-installing the
products. For example, you can:

l Upgrade from an evaluation edition to a standard edition.

l Update the number of Intelligence Server processors allowed.

l Update the processor speed allowed.

l Activate or deactivate your MicroStrategy installation.

For more information on activating your MicroStrategy installation, see
the Installation and Configuration Help.

l Change the number of CPUs being used for a given MicroStrategy product,
such as Intelligence Server or MicroStrategy Web, if your licenses are
based on CPUs.

l Trigger a license verification check after you have made any license
management changes, so the system can immediately return to normal
behavior.

l View your machine's configuration including hardware and operating
system information.

l View your MicroStrategy installation history including all license keys that
have been applied.

l View the version, edition, and expiration date of the MicroStrategy
products installed on the machine.

If the edition is not an Evaluation edition, the expiration date has a value
of "Never."

For detailed steps to perform all of these procedures, see the License
Manager Help (from within License Manager, press F1).


To Start License Manager

License Manager can be run on Windows or Linux, in either GUI mode or
command line mode.

l Windows GUI: From the Windows Start menu, point to All Programs,
then MicroStrategy Tools, and then select License Manager. License
Manager opens in GUI mode.

l Windows command line: From the Start menu, select Run. Type CMD and
press Enter. A command prompt window opens. Type malicmgr and
press Enter. License Manager opens in command line mode, and
instructions on how to use the command line mode are displayed.

l Linux GUI: In a Linux console window, browse to <HOME_PATH>,
where <HOME_PATH> is the directory that you specified as the home
directory during installation. Browse to the bin folder and type
./mstrlicmgr, then press Enter. License Manager opens in GUI mode.

l Linux command line: In a Linux console window, browse to <HOME_PATH>,
where <HOME_PATH> is the home directory you specified during installation.
Browse to the bin folder and type ./mstrlicmgr -console, then press
Enter. License Manager opens in command line mode, and instructions on
how to use the command line mode are displayed.

Manage License Compliance


Administrators can monitor system and product usage to ensure
MicroStrategy contract compliance. The embedded dashboard in the
Licenses section of the Workstation window contains information about the
licenses and related privileges currently being used across your
MicroStrategy environment.


How to Manage License Compliance

You must be connected to an environment with Platform Analytics installed and
configured.

1. Open Workstation with the Navigation pane in smart mode.

2. In the Navigation pane, click Licenses.

3. Run the pre-formatted Compliance Telemetry dashboard.

Compliance Telemetry Dashboard Chapters


The dashboard contains three chapters:

Named User Overview

The Named User Overview chapter provides a summary of environment,
account, and product and license information. Pre-formatted thresholds
applied to the Compliance column make out-of-compliance usage instantly
recognizable. Reporter and Intelligence are represented in independent
sections to help quickly pinpoint issues.


Named User Detail

The Named User Details chapter provides two pages: the Product Details
page and the User by Product page.

The Product Details page provides more in-depth analysis of license usage
at the product level, as well as detailed information on each user and their
associated privileges.


The User by Product page consists of a Product-Privilege matrix in
reference to the current MicroStrategy Product Packaging. By rule, each
client license requires a corresponding server license, so a client privilege
automatically consumes a server license, with the exception of Reporter
privileges. The privileges associated with the Reporter product are listed in
their own column in the matrix.


CPU License

The CPU License chapter provides you with the number of CPUs related to
your license.

Troubleshooting
If you're having trouble running the dashboard, see
KB482878: Troubleshooting the Platform Analytics Compliance Telemetry
Dashboard.

Audit Your System for the Proper Licenses


License Manager counts the number of licenses based on the number of
users with at least one privilege for a given product. The Administrator user
that is created by default with the repository is not considered in the count.

To audit your system, perform the procedure below on each server machine
in your system.


In rare cases, an audit can fail if your metadata is too large for the Java
Virtual Machine heap size. For steps to modify the Java Virtual Machine
heap size in your system registry settings, see MicroStrategy Tech Notes
TN6446 and TN30885.

If you are using License Manager on the physical machine on which
Intelligence Server is installed, and a three-tier project source does not
exist on that machine, you cannot log in to the server. To audit your licenses
in this case, you must first create a three-tier project source pointing to the
Intelligence Server. You can use either the MicroStrategy Configuration
Wizard or Developer's Project Source Manager to create this project source.

To Audit Your MicroStrategy Licenses

1. Open MicroStrategy License Manager. For instructions, see Using
License Manager, page 728.

In command line mode, the steps to audit licenses vary from those
below. Refer to the License Manager command line prompts to guide
you through the steps to audit licenses.

2. On the Audit tab, expand the Intelligence Server folder.

3. Double-click a project source name (PSN).

4. Type your MicroStrategy login and password for the selected
Intelligence Server and click Connect. If you are in compliance, a
message appears notifying you that you are in compliance with your
software license agreement. Click OK.

5. Select the Everyone group and click Audit. A folder tree of the
assigned licenses is listed in the Number of licenses pane.

6. Users with no product-based privileges are listed under Users without
license association.


7. Count the number of licenses per product for enabled users. Disabled
users do not count against the licensed user total, and should not be
counted in your audit.

8. Click Print.

9. For detailed information, click Report to create and view XML, HTML,
and CSV reports. You can also have the report display all privileges for
each user based on the license type. To do this, select the Show User
Privileges in Report check box.

10. Total the number of users with each license across all machines.
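The counting rules in this procedure — a user counts toward a product if they have at least one privilege for it, disabled users are excluded, and the default Administrator is excluded — can be restated as a short sketch. The function and data below are invented for illustration, not part of License Manager:

```python
# Illustrative license audit count following the rules above: a user
# consumes a license for a product if they have at least one privilege
# for it; disabled users and the default Administrator are not counted.
def audit_licenses(users):
    counts = {}
    for name, enabled, privileges_by_product in users:
        if not enabled or name == "Administrator":
            continue
        for product, privileges in privileges_by_product.items():
            if privileges:  # at least one privilege for this product
                counts[product] = counts.get(product, 0) + 1
    return counts

users = [
    ("Administrator", True,  {"Server": ["all"]}),   # excluded by rule
    ("amy",           True,  {"Web": ["execute"]}),
    ("bo",            False, {"Web": ["execute"]}),  # disabled: excluded
    ("cy",            True,  {"Web": ["execute"], "Developer": ["design"]}),
]
print(audit_licenses(users))  # {'Web': 2, 'Developer': 1}
```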

Update Your License


If you need to update a license and you receive a new license key from
MicroStrategy, use the License Manager to perform the upgrade. If you have
licenses based on the number of CPUs being used, you can also use the
update process to change the number of CPUs being used by a given
product. For complete details on performing an upgrade in your
environment, see the Upgrade Help.

You must update your license key on all machines where MicroStrategy
products are installed. License Manager updates the license information for
the products that are installed on that machine.

To Update a MicroStrategy License

1. Acquire a new license key from MicroStrategy.

2. Open MicroStrategy License Manager. For instructions, see Using
License Manager, page 728.

In command line mode, the steps to update your license vary from
those below. Refer to the License Manager command line prompts to
guide you through the steps to update your license.


3. On the License Administration tab, select the Update local license
key option and click Next.

4. Type or paste the new key in the New License Key field and click Next.

If you have one or more products that are licensed based on CPU
usage, the Upgrade window opens, showing the maximum number of
CPUs each product is licensed to use on that machine. You can
change these numbers to fit your license agreement. For example, if
you purchase a license that allows more CPUs to be used, you can
increase the number of CPUs being used by a product.

5. The results of the upgrade are shown in the Upgrade Results dialog
box. License Manager can automatically request an Activation Code for
your license after you update.

6. If you have updated your license information, restart Intelligence
Server after the update. This allows the system to recognize the license
key update so that system behavior can return to normal.

Update CPU Affinity


Depending on the number of CPU-based licenses you purchase, you can
have multiple processors (CPUs) running Intelligence Server and
MicroStrategy Web. The ability to deploy Intelligence Server or
MicroStrategy Web on specific, selected CPUs (a subset of the total number
of physical CPUs) on a given machine is called CPU affinity (or processor
affinity). As part of the installation process you must provide the number of
processors to be used by Intelligence Server or MicroStrategy Web on that
machine.

Related Topics

KB484614: How to set the CPU affinity for MicroStrategy Web JSP in Linux


CPU Affinity for Intelligence Server on Windows


Upon installation, if the target machine contains more than one physical
processor and the MicroStrategy license key allows more than one CPU to
run Intelligence Server, you are prompted to provide the number of CPUs to
be deployed. The upper limit is either the number of licensed CPUs or the
physical CPU count, whichever is lower.

After installation you can specify CPU affinity through the MicroStrategy
Service Manager. This requires administrator privileges on the target
machine.

To Change CPU Affinity Settings in Service Manager

1. On the machine whose CPU affinity you want to change, in Windows, go
to Start > All Programs > MicroStrategy Tools > Service Manager.

2. From the Service drop-down list, select MicroStrategy Intelligence
Server.

3. Click Options.

4. Select the Intelligence Server Options tab.

5. In the Processor Usage section, select which processors Intelligence
Server should use.

6. Click OK.

CPU Affinity for Intelligence Server on Linux


CPU affinity behaves in a similar manner in both Windows and Linux
environments. This section describes details for setting up CPU affinity for
running Intelligence Server in a Linux environment.

The ability to set CPU affinity on Linux requires special system-level
privileges. MicroStrategy must be run under the root Linux account;
otherwise, an error message appears.


If the target machine contains more than one physical processor and the
MicroStrategy license key allows more than one CPU to run Intelligence
Server Universal Edition, you are prompted to provide the number of CPUs
to be deployed. The upper limit is either the number of licensed CPUs or the
physical CPU count, whichever is lower.

Each Linux platform exposes its own set of functionality to bind processes to
processors. However, Linux also provides commands to easily change the
processor assignments. As a result, Intelligence Server periodically checks
its own CPU affinity and, whenever the affinity mask does not match the
overall CPU licensing, automatically adjusts it to the number of CPUs that
the license allows.

This automatic adjustment for CPU affinity attempts to apply the user's
specified CPU affinity value when it adjusts the system, but it may not
always be able to do so depending on the availability of processors. For
example, if you own two CPU licenses and CPU affinity is manually set to
use Processor 1 and Processor 2, the CPU affinity adjustment may reset
CPU usage to Processor 0 and Processor 1 when the system is
automatically adjusted.
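The adjustment described above amounts to mask arithmetic: N CPU licenses correspond to an affinity mask with the N low bits set, with a fallback to processors 0 through N-1. The sketch below illustrates only the arithmetic; the actual adjustment is internal to Intelligence Server:

```python
# Affinity-mask arithmetic for N licensed CPUs (illustrative only).
def license_mask(licensed_cpus):
    return (1 << licensed_cpus) - 1      # 2 licenses -> 0b11

def fallback_processors(licensed_cpus):
    return list(range(licensed_cpus))    # 2 licenses -> [0, 1]

print(bin(license_mask(2)))      # 0b11
print(fallback_processors(2))    # [0, 1], as in the Processor 1/2 example
```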

Changing CPU Affinity in Linux


You can specify CPU affinity either through the MicroStrategy Service
Manager, or by modifying Intelligence Server options. If you want to view
and modify Intelligence Server's options, it must be registered as a service.
You can register Intelligence Server as a service using the Configuration
Wizard by selecting the Register Intelligence Server as a Service option;
alternatively, you can follow the procedure below.

To Set Up Intelligence Server to Run as a Service

1. Navigate to the bin directory in the installation location.

2. Enter the following command:


mstrctl -s [IntelligenceServerName] rs

Whenever you change the CPU affinity, you must restart the machine.

CPU Affinity for MicroStrategy Web


If you have CPU-based licenses for MicroStrategy Web, the CPU affinity
feature allows you to match your CPUs and licenses by choosing which
processors MicroStrategy Web uses on a given machine.

This feature is only available in the ASP.NET version of MicroStrategy Web.

This section describes settings that may interact with CPU affinity, and
provides steps to update CPU affinity in your environment.

CPU Affinity and IIS


Before configuring CPU affinity for MicroStrategy Web, you should
understand how the CPU affinity setting behaves on different configurations
of IIS, and how it interacts with other IIS settings such as the Web Garden
mode.

IIS Versions

CPU affinity can be configured on machines running IIS 6.0 or 7.0. The
overall behavior depends on how IIS is configured. The following cases are
considered:

l Worker process isolation mode: In this mode, the CPU affinity setting is
applied at the application pool level. When MicroStrategy Web CPU
affinity is enabled, it is applied to all ASP.NET applications running in the
same application pool. By default, MicroStrategy Web runs in its own
application pool. The CPU affinity setting is shared by all instances of
MicroStrategy Web on a given machine. Worker process isolation mode is
the default mode of operation on IIS 6.0 when the machine has not been
upgraded from an older version of Windows.

l IIS 5.0 compatibility mode: In this mode, all ASP.NET applications run in
the same process. This means that when MicroStrategy Web CPU affinity
is enabled, it is applied to all ASP.NET applications running on the Web
server machine. A warning is displayed before installation or before the
CPU affinity tool (described below) attempts to set the CPU affinity on a
machine with IIS running in IIS 5.0 compatibility mode.

This is the default mode of operation when the machine has been upgraded
from an older version of Windows.

Web Garden Mode

Both IIS 6.0 and IIS 7.0 support a "Web Garden" mode, in which IIS creates
some number of processes, each with affinity to a single CPU, instead of
creating a single process that uses all available CPUs. The administrator
specifies the total number of CPUs that are used. The Web Garden settings
can interact with and affect MicroStrategy CPU affinity.

The Web Garden setting should not be used with MicroStrategy Web. At
runtime, the MicroStrategy Web CPU affinity setting is applied after IIS sets
the CPU affinity for the Web Garden feature. Using these settings together
can produce unintended results.

In both IIS 6.0 and IIS 7.0, the Web Garden feature is disabled by default.

CPU affinity interaction depends on how IIS is configured, as described
below:

l In worker process isolation mode, the Web Garden setting is applied at the
application pool level. You specify the number of CPUs to be used, and IIS
creates that number of w3wp.exe instances. Each of the instances runs all
of the ASP.NET applications associated with the application pool. The Web
Garden feature is configured through the application pool settings. For
more information, refer to your IIS documentation.

l In IIS 5.0 compatibility mode, a single setting affects all ASP.NET
applications. The Web Garden feature is enabled or disabled using the
WebGarden and cpuMask attributes under the processModel node in
machine.config. A given number of CPUs are specified in the mask, and
IIS creates that number of aspnet_wp.exe instances. Each of these
instances runs the ASP.NET applications. For more information, refer to
your IIS documentation.
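A cpuMask value selects processors by bit position, and the number of set bits is the number of worker processes IIS creates. The decoding below is an illustration of that mapping; the mask value itself is hypothetical:

```python
# Decode a hypothetical IIS cpuMask value: each set bit selects one
# processor; the count of set bits is the number of aspnet_wp.exe
# instances created.
def processors_from_mask(cpu_mask):
    return [i for i in range(cpu_mask.bit_length()) if (cpu_mask >> i) & 1]

mask = 0x3                              # hypothetical cpuMask value
print(processors_from_mask(mask))       # [0, 1]
print(len(processors_from_mask(mask)))  # 2 worker processes
```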

IIS provides metabase properties (SMPAffinitized and
SMPProcessorAffinityMask) to determine the CPU affinity for a given
application pool. Do not use these settings in conjunction with the
MicroStrategy Web CPU affinity setting.

Updating CPU Affinity Changes


After MicroStrategy Web is installed in your environment, you can update
MicroStrategy Web's CPU affinity using a tool called MAWebAff.exe. This
tool is located in the root directory of the MicroStrategy Web application,
which is located by default at C:\Program Files (x86)\MicroStrategy\Web
ASPx. The MAWebAff.exe tool allows you to choose the physical CPUs
MicroStrategy Web can use. The number of CPUs that can be used depends
on the limit specified by the license.


The MAWebAff.exe tool lists each physical CPU on a machine. You can add
or remove CPUs or disable CPU affinity using the associated check boxes.
Clearing all check boxes prevents the MicroStrategy Web CPU affinity
setting from overriding any IIS-related CPU affinity settings.

To Update CPU Affinity

1. Double-click the MAWebAff.exe tool to open the CPU affinity tool.

2. Select or clear the check boxes for each processor as desired.

3. Click Apply or click OK.

4. Restart IIS to apply your CPU affinity changes.


MANAGE YOUR PROJECTS


In a MicroStrategy system, a project is the environment in which reporting is
done. A project:

l Determines the set of data warehouse tables to be used, and therefore the
set of data available to be analyzed.

l Contains all schema objects used to interpret the data in those tables.
Schema objects include objects such as facts, attributes, and hierarchies.

l Contains all application objects used to create reports and analyze the
data. Application objects include objects such as reports, metrics, and
filters.

l Defines the security scheme for the user community that accesses these
objects. Security objects include objects such as security roles, privileges,
and access control lists.

The recommended methodology and tools for managing projects in the
MicroStrategy system include:

l The Project Life Cycle, page 746

l Implement the Recommended Life Cycle, page 751

l Duplicate a Project, page 753

l Update Projects with New Objects, page 758

l Copy Objects Between Projects: Object Manager, page 762

l Merge Projects to Synchronize Objects, page 809

l Compare and Track Projects, page 818

l Delete Unused Schema Objects: Managed Objects, page 822

For information about creating a project, creating attributes and facts,
building a logical data model, and other project design tasks, see the Project
Design Help.


The Project Life Cycle


A MicroStrategy business intelligence application consists of many objects
within projects. These objects are ultimately used to create reports that
display data to the end user. As in other software systems, these objects
should be developed and tested before they can be used in a production
system. We call this process the project life cycle. This section discusses
several project life cycle scenarios and the tools you can use to implement
them.

In many cases, an application consists of a single project delivered to an
end user. MicroStrategy OEM developers may choose to bundle several
projects together to make a single application.

l For a description of the recommended scenario, see Recommended
Scenario: Development, Test, and Production, page 746

l For a real-life scenario, see Real-Life Scenario: New Version From a
Project Developer, page 749

l For details on how to implement the project life cycle in your MicroStrategy
environment, see Implement the Recommended Life Cycle, page 751

Recommended Scenario: Development, Test, and Production


This commonly used scenario is the project life cycle that MicroStrategy
recommends you use as you develop your projects. In this scenario, you
typically use three environments: development, test, and production. Each
environment contains a MicroStrategy project.

MicroStrategy recommends that if you want to copy objects between two
projects, such as from the development project to the test project, those
projects should be related. Two projects are considered to be related if one
was originally a duplicate of the other. To establish different development,
test, and production projects, for example, you can create the test project by
copying the development project, and you can create the production project
by copying the test project. All three of these projects are related to each
other. For more information about duplicating a project, see Duplicate a
Project, page 753.

In this scenario, objects iterate between the development and test projects
until they are ready for general users. Once ready, they are promoted to the
production project.

The Development Project


In the development environment project, you create objects. This may be a
project in which developers work. They think about the design of the whole
system as they create the project's schema and application objects. For
detailed instructions on how to design a project schema and create
application objects, see the Project Design Help.

The Test Project


Once the objects' definitions have stabilized, you move them to a test
project that a wider set of people can use for testing. You may have people
run through scripts or typical usage scenarios that users at your
organization commonly perform. The testers look for accuracy (are the
numbers in the reports correct?), stability (did the objects work? do their
dependent objects work?), and performance (did the objects work efficiently,
not producing overload on the data warehouse?).

In this test environment, connect the project to a development data
warehouse for initial testing. Later, for more stringent testing, connect the
test project to the production data warehouse. If objects need further work,
they are changed in the development project and recopied to the test
project, but not changed in the test project.


The Production Project


After the objects have been tested and shown to be ready for use in a
system accessible to all users, you copy them into the production project.
This is the project used by most of the people in your company. It provides
up-to-date reports and tracks various business objectives.

Implementing the Recommended Scenario


When migrating changes into a testing or development environment, be as
thorough as possible. Carefully consider how your business users will
access and use their application, reports, and dashboards on a daily basis.
Anticipate the needs of your business users, and test every type of scenario
before officially migrating to a production environment.

To set up the development, test, and production projects so that they all
have related schemas, you need to first create the development project. For
instructions on how to create a project, see the Project Design Help. Once
the development project has been created, you can duplicate it to create the
test and production projects using the Project Duplication Wizard. For
detailed information about the Project Duplication Wizard, see Duplicate a
Project, page 753.

Once the projects have been created, you can migrate specific objects
between them via Object Manager. For example, after a new metric has
been created in the development project, you can copy it to the test project.
For detailed information about Object Manager, see Copy Objects Between
Projects: Object Manager, page 762.

You can also merge two related projects with the Project Merge Wizard. This
is useful when you have a large number of objects to copy. The Project
Merge Wizard copies all the objects in a given project to another project. For
an example of a situation in which you would want to use the Project Merge
Wizard, see Real-Life Scenario: New Version From a Project Developer,
page 749. For detailed information about Project Merge, see Merge Projects
to Synchronize Objects, page 809.


To help you decide whether you should use Object Manager or Project
Merge, see Compare Project Merge to Object Manager, page 759.

The Project Comparison Wizard can help you determine what objects in a
project have changed since your last update. You can also save the results
of search objects and use those searches to track the changes in your
projects. For detailed information about the Project Comparison Wizard, see
Compare and Track Projects, page 818. For instructions on how to use
search objects to track changes in a project, see Track Your Projects with
the Search Export Feature, page 821.

Integrity Manager helps you ensure that your changes have not caused any
problems with your reports. Integrity Manager executes some or all of the
reports in a project, and can compare them against another project or a
previously established baseline. For detailed information about Integrity
Manager, see Chapter 16, Verifying Reports and Documents with Integrity
Manager.

Real-Life Scenario: New Version From a Project Developer


In this scenario, you have initially purchased a project from a vendor whose
products are specialized for analyzing sales data. This is project version 1.
Over the course of time, your developers have customized objects in the
project, resulting in what you called version 1.1, then version 1.2, and
so on. Now you have purchased version 2 of the project from the same
vendor, and you want to merge the new (Version 2) project with your existing
(Version 1.2) project.

MicroStrategy encourages vendors in these situations to include in the installation of version 2 an "automatic" upgrade to the project using Project Merge. In this way the vendor, rather than the user or purchaser, can configure the rules for this project merge. For information about executing Project Merge without user input, see Merge Projects with the Project Merge Wizard, page 811.


This combination of the two projects creates Project version 2.1, as shown in
the diagram below.

The vendor's new Version 2 project has new objects that are not in yours, which you feel confident in moving over. But some of the objects in the Version 2 project may conflict with objects that you had customized in the Version 1.2 project. How do you determine which of the Version 2 objects you want to move into your system, or which of your Version 1.2 objects to modify?

You could perform this merge object-by-object and migrate them manually
using Object Manager, but this will be time-consuming if the project is large.
It may be more efficient to use the Project Merge tool. With this tool, you can define rules for merging projects that help you identify conflicting objects and handle them in a specific way. Project Merge then applies those rules while
merging the projects. For more information about using the MicroStrategy
Project Merge tool, see Merge Projects to Synchronize Objects, page 809.


Implement the Recommended Life Cycle


The following section provides a high-level, simplified overview of the
procedure for implementing the recommended project life cycle in your
company's MicroStrategy environment. This is a simplified version of the
workflow you are likely to see at your organization. However, you should be
able to apply the basic principles to your specific situation.

1. Create the development project.

Creating the development project involves setting up the database connections and project schema, configuring user security, and building the initial schema and application objects. For information on creating a project, see the Project Design Help.

2. Create the test and production projects by duplicating the development project.

MicroStrategy recommends that you duplicate the development project to create the test and production projects, rather than creating them separately. Duplicating ensures that all three projects have related schemas, enabling you to safely use Object Manager or Project Merge to copy objects between the projects.

For instructions on how to duplicate a project, see Duplicate a Project, page 753.

3. Create objects in the development project.

In the recommended scenario, all objects (attributes, metrics, reports) are created in the development project, and then migrated to the other projects. For more information about the development project, see Recommended Scenario: Development, Test, and Production, page 746.


For instructions on creating schema objects, see the Project Design Help. For instructions on creating application objects, see the Basic Reporting Help and Advanced Reporting Help.

4. Migrate objects from the development project to the test project.

Once the objects have been created and are relatively stable, they can
be migrated to the test project for testing. For instructions on how to
migrate objects, see Update Projects with New Objects, page 758.

Depending on the number of objects you have created or changed, you can use either Object Manager or Project Merge to copy the objects from the development project to the test project. For a comparison of the two tools, see Compare Project Merge to Object Manager, page 759. For a tool to determine what objects have changed, see Compare and Track Projects, page 818.

5. Test the new objects.

Testing involves making sure that the new objects produce the
expected results, do not cause data errors, and do not put undue strain
on the data warehouse. If the objects are found to contain errors, these
errors are reported to the development team so that they can be fixed
and tested again. For more information about the test project, see
Recommended Scenario: Development, Test, and Production, page
746.

Integrity Manager is an invaluable tool in testing whether new objects cause reports to generate different results. For detailed information about Integrity Manager, see Chapter 16, Verifying Reports and Documents with Integrity Manager.

6. Migrate objects from the test project to the production project.

Once the objects have been thoroughly tested, they can be migrated to
the production project and put into full use. For instructions on how to
migrate objects, see Update Projects with New Objects, page 758.


7. Repeat steps 3 through 6 as necessary.

The project life cycle does not end with the first migration of new objects into
the production project. A developer may come up with a new way to use an
attribute in a metric, or a manager may request a specific new report. These
objects pass through the project life cycle in the same way as the project's
initial objects.
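As a sketch, the promotion flow in steps 3 through 6 can be modeled as follows. This is purely illustrative Python, not a MicroStrategy API; the stage names are assumptions.

```python
# Illustrative model of the recommended life cycle (not a MicroStrategy
# API): objects are created in development, migrated to test, and only
# promoted to production once they pass testing.

STAGES = ["development", "test", "production"]

def promote(stage, passed_tests=True):
    """Return the stage an object moves to next."""
    if stage == "production":
        raise ValueError("already in production")
    if stage == "test" and not passed_tests:
        return "development"  # errors go back to the development team
    return STAGES[STAGES.index(stage) + 1]

print(promote("development"))                # -> test
print(promote("test"))                       # -> production
print(promote("test", passed_tests=False))   # -> development
```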

Duplicate a Project
Duplicating a project is an important part of the application life cycle. If you
want to copy objects between two projects, MicroStrategy recommends that
the projects have related schemas. This means that one must have originally
been a duplicate of the other, or both must have been duplicates of a third
project.

Autostyles, which give a uniform appearance to reports, can be freely moved between projects regardless of whether their schemas are related. For instructions on migrating autostyles between projects, see the Advanced Reporting Help.

Project duplication is done using the Project Duplication Wizard. For detailed information about the duplication process, including step-by-step instructions, see The Project Duplication Wizard, page 755.

You can duplicate a MicroStrategy project in one of the following ways:

• From a three-tier (server) project source to a three-tier (server) project source

• From a three-tier (server) project source to a two-tier (direct) project source

• From a two-tier (direct) project source to a two-tier (direct) project source

• From a two-tier (direct) project source to a three-tier (server) project source


A server (three-tier) project source is connected to an Intelligence Server, and has the full range of administrative options available. A direct (two-tier) project source is not connected to an Intelligence Server. For more information on three-tier and two-tier project sources, see the Project Design Help.

Do not refresh the warehouse catalog in the destination project. Refresh the
warehouse catalog in the source project, and then use Object Manager to
move the updated objects into the destination project. For information about
the warehouse catalog, see the Optimizing and Maintaining your Project
section in the Project Design Help.

What Objects are Duplicated with a Project?


When you duplicate a project, all schema objects (attributes, facts, hierarchies, and transformations) are duplicated. By default, all application objects (reports, documents, metrics, and so forth) contained in the project are also duplicated.

If you are copying a project to another project source, you have the option to
duplicate configuration objects as well. Specifically:

• You can choose whether to duplicate all configuration objects, or only the objects used by the project.

• You can choose to duplicate all users and groups, only the users and groups used by the project, no users and groups, or a custom selection of users and groups.

• You can choose to duplicate user, contact, and subscription information.

For each type of configuration object (user/group, security role, schedule, contact/contact group, database connection/instance, database login) you must choose whether to duplicate the object if it already exists in the destination project source metadata. For users/groups and security roles, you can also choose to merge the privileges of the source and destination versions.


Duplicating Projects in Multiple Languages


When you duplicate a project that contains warehouse data in multiple
languages, you have the option of duplicating all, some, or none of those
languages. In addition, you can select the new default language for the
project.

Whenever you duplicate a project or update the metadata, a language check ensures that the language settings in the CURRENT_USER registry key, the LOCAL_MACHINE registry key, and the Project locale property all match before an update takes place. The Language key is located at \Software\MicroStrategy\Language. The system performs the following checks:

• In a direct (two-tier) configuration, without an Intelligence Server, the system checks that the language under the LOCAL_MACHINE registry key matches the language under the CURRENT_USER registry key.

• In a server (three-tier) configuration, with an Intelligence Server, the system checks that the language under the CURRENT_USER registry key on the client machine matches the language under the LOCAL_MACHINE registry key on the server machine.

The MicroStrategy interface obtains the language information from the CURRENT_USER registry key, and the server obtains it from the LOCAL_MACHINE registry key. This can lead to inconsistencies in the language display. The language check prevents these inconsistencies and ensures that the language display is consistent across the interface.
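The rule can be expressed as a simple predicate. The sketch below is illustrative Python, not MicroStrategy code; the argument names merely stand for the values read from the registry keys and project property named above.

```python
# Illustrative model of the language check (not MicroStrategy code).
# Each argument stands for the language value read from one location:
# the CURRENT_USER key, the LOCAL_MACHINE key, and the Project locale.

def language_check(current_user_lang, local_machine_lang, project_locale):
    """The duplication or update may proceed only if all three match."""
    return current_user_lang == local_machine_lang == project_locale

assert language_check("en-US", "en-US", "en-US")      # consistent: proceed
assert not language_check("en-US", "de-DE", "en-US")  # mismatch: blocked
```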

The internationalization settings in Object Manager allow you to create related projects in different languages. For more information on this process, see What Happens When You Copy or Move an Object, page 769.

The Project Duplication Wizard


You should always use the Project Duplication Wizard to duplicate your projects. This ensures that all project objects are duplicated properly, and that the new project's schema is identical to the source project's schema.

To duplicate a project:

You must have the Bypass All Object Security Access Checks privilege for that
project.

You must be a member of the System Administrator group for the target project
source.

You must have the Create Schema Objects privilege for the target project
source.

The following high-level procedure provides an overview of what the Project Duplication Wizard does. For an explanation of the information required at any given page in the wizard, see the Help (from the wizard, click Help, or press F1).

High-Level Steps to Duplicate a Project with the Project Duplication Wizard

1. From Object Manager select the Project menu (or from Developer
select the Schema menu), then select Duplicate Project.

2. Specify the project source and project information that you are copying
from (the source).

3. Specify the project source and project information that you are copying
to (the destination).

4. Indicate what types of objects to copy.

5. Specify whether to keep or merge configuration object properties if these already exist in the destination project source. For example, if properties such as password expiration are different by default between the project sources, which set of properties do you want to use?


6. Specify whether you want to see the event messages as they happen
and, if so, what types. Also specify whether to create log files and, if so,
what types of events to log, and where to locate the log files. By default
Project Duplicator shows you error messages as they occur, and logs
most events to a text file. This log file is created by default in
C:\Program Files (x86)\Common Files\MicroStrategy\.

Scheduling Project Duplication


At the end of the Project Duplication Wizard, you are given the option of
saving your settings in an XML file. You can load the settings from this file
later to speed up the project duplication process. The settings can be loaded
at the beginning of the Project Duplication Wizard.

You can also use the settings file to run the wizard in command-line mode.
The Project Duplication Wizard command line interface enables you to
duplicate a project without having to load the graphical interface, or to
schedule a duplication to run at a specific time. For example, you may want
to run the project duplication in the evening, when the load on Intelligence
Server is lessened. You can create an XML settings file, and then use the
Windows AT command or the Unix scheduler to schedule the duplication to
take place at night.
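For example, a Unix crontab entry like the one below would run a saved duplication nightly. The wrapper script path and time are hypothetical placeholders; adjust both for your environment.

```shell
# Hypothetical crontab fragment: at 11:00 PM daily, run a wrapper script
# that invokes the Project Duplication Wizard with a saved XML settings file.
0 23 * * * /opt/microstrategy/bin/duplicate_nightly.sh
```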

To Duplicate a Project from the Command Line

After saving the settings from the Project Duplication Wizard, invoke the
Project Duplication Wizard executable ProjectDuplicate.exe. By
default this executable is located in C:\Program Files (x86)\Common
Files\MicroStrategy.

The syntax is:

ProjectDuplicate.exe -f Path\XMLFilename [-sp SourcePassword] [-dp DestinationPassword] [-sup] [-md] [-dn OverwriteName]

Where:


• Path is the path to the saved XML settings file.

• XMLFilename is the name of the saved XML settings file.

• SourcePassword is the password for the source project's project source.

• DestinationPassword is the password for the destination project's project source.

• -sup indicates that feedback messages will be suppressed (silent mode).

• -md indicates that the metadata of the destination project source will be updated if it is older than the source project source's metadata.

• -dn OverwriteName specifies the name of the destination project. This overrides the name specified in the XML settings file.

For information on the syntax for the Windows AT command or a UNIX scheduler, see the documentation for your operating system.
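To make the flag handling concrete, the helper below assembles an argument list matching the syntax above. It is illustrative Python, not part of MicroStrategy, and the file names are placeholders.

```python
# Illustrative helper (not a MicroStrategy tool): builds the
# ProjectDuplicate.exe argument list described above.

def build_duplicate_command(settings_file, source_password=None,
                            destination_password=None, silent=False,
                            update_metadata=False, destination_name=None):
    cmd = ["ProjectDuplicate.exe", "-f", settings_file]
    if source_password is not None:
        cmd += ["-sp", source_password]
    if destination_password is not None:
        cmd += ["-dp", destination_password]
    if silent:
        cmd.append("-sup")   # suppress feedback messages (silent mode)
    if update_metadata:
        cmd.append("-md")    # update an older destination metadata
    if destination_name is not None:
        cmd += ["-dn", destination_name]
    return cmd

# Example: a silent nightly run that renames the destination project.
print(" ".join(build_duplicate_command(
    r"C:\Settings\nightly.xml", silent=True, destination_name="Sales_Test")))
```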

Update Projects with New Objects


When you create or modify an object in your development environment, you
eventually need to copy that object to the test project, and later to the
production project.

For example, a developer creates a new metric in the development project. Once the metric is ready to be tested, it needs to be present in the test project. You could re-create the metric in the test project based on the same specifications, but it can be easy to miss an important setting in the metric. A quicker and more reliable method is to use MicroStrategy Object Manager to migrate the new metric from the development project to the test project. Then, when the metric is ready to be rolled out to your users, you can use Object Manager again to migrate it from the test project to the production project.

MicroStrategy has the following tools available for updating the objects in a
project:


• Object Manager migrates a few objects at a time. For information about Object Manager, see Copy Objects Between Projects: Object Manager, page 762.

• An update package migrates a previously specified group of objects. Update packages are part of Object Manager. For information about update packages, see Copy Objects in a Batch: Update Packages, page 786.

• Project Merge migrates all the objects in a project at once. For information about Project Merge, see Merge Projects to Synchronize Objects, page 809.

For a comparison of these tools, see Compare Project Merge to Object Manager, page 759.

• If you want to move or copy objects between projects, MicroStrategy recommends that those projects have related schemas. This means that either one project must be a duplicate of the other, or both projects must be duplicates of a third project. For information about duplicating projects, including instructions, see Duplicate a Project, page 753.

• If one of the projects is updated to a new MicroStrategy release, but another project is not updated, you cannot move or copy objects between the projects. You must first update the other project before you can copy objects between the projects.

Compare Project Merge to Object Manager


Object Manager and Project Merge are both designed for migrating objects
between projects. Both tools involve copying objects between projects in a
definite order according to object types. Which tool you should use depends
on several factors, such as how many objects you need to move at once. The
following are some of the differences between the tools:

• Object Manager can move just a few objects, or just the objects in a few folders. Project Merge moves all the objects in a project.

• Using Object Manager to merge whole projects means moving many objects individually or as a subset of all objects. This can be a long and tedious task. Project Merge packages the functionality for easier use because it moves all objects at one time.

• Object Manager must locate the dependents of the copied objects and then determine their differences before performing the copy operation. Project Merge does not do a dependency search, since all the objects in the project are to be copied.

• The Project Merge Wizard allows you to store merge settings and rules in an XML file. These rules define what is copied and how conflicts are resolved. Once they are in the XML file, you can load the rules and "replay" them with Project Merge. This can be useful if you need to perform the same merge on a recurring schedule. For example, if a project developer sends you a new project version quarterly, Project Merge can make this process easier.

• Project Merge can be run from the command prompt in Microsoft Windows. An added benefit of this feature is that project merges can be scheduled using the at command in Windows and can be run silently in an installation routine.

• The changes to be made through Object Manager can be saved as an update package and applied at a later time. For instructions on how to create and use update packages, see Copy Objects in a Batch: Update Packages, page 786.

• The changes to be made through an Object Manager update package can be reversed using an undo package. For instructions on how to roll back changes using Object Manager, see Copy Objects in a Batch: Update Packages, page 786.

Lock Projects
When you open a project in Project Merge, you automatically place a metadata lock on the project. You also place a metadata lock on the project if you open it in read/write mode in Object Manager, or if you create or import an update package from the command line. For more information about read/write mode versus read-only mode in Object Manager, see Project Locking with Object Manager, page 763.

A metadata lock prevents other MicroStrategy users from modifying any objects in the project in Developer or MicroStrategy Web while objects are being copied with Object Manager or Project Merge. It also prevents other MicroStrategy users from modifying any configuration objects, such as users or groups, in the project source. Locking a project prevents metadata inconsistencies.

When other users attempt to open an object in a locked project using Developer or MicroStrategy Web, they see a message informing them that the project is locked because the user who opened the project first is modifying it. Users can then choose to open the object in read-only mode or view more details about the lock. Users can execute reports in a locked project, but the report definition used is the last definition saved before the project was locked.

If you lock a project by opening it in Object Manager, you can unlock the
project by right-clicking the project in Object Manager, and choosing
Disconnect from Project Source.

Only the user who locked a project, or another user with the Bypass All
Object Security Access Checks and Create Configuration Objects
privileges, can unlock a project.

You can also lock or unlock a project or a configuration manually using Developer.

Command Manager scripts can be used to automate metadata lock management. For information about Command Manager, see Chapter 15, Automating Administrative Tasks with Command Manager. For Command Manager syntax for managing metadata locks, see the Command Manager Help (press F1 from within Command Manager).
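As a sketch, such a script might contain statements like the following. The exact statement syntax shown here is an assumption, so verify it against the Command Manager Help before use.

```text
LOCK PROJECT "Sales Production";
UNLOCK PROJECT "Sales Production";
```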


Copy Objects Between Projects: Object Manager


MicroStrategy Object Manager can help you manage objects as they
progress through your project's life cycle. Using Object Manager, you can
copy objects within a project or across projects.

Object Manager and Project Merge both copy multiple objects between
projects. Use Object Manager when you have only a few objects that need to
be copied. For the differences between Object Manager and Project Merge,
see Compare Project Merge to Object Manager, page 759.

Prerequisites for Copying Objects Between Projects


• To use Object Manager to copy objects between projects, you must have the Use Object Manager privilege for both projects. You do not need to have ACL permissions for the objects you are migrating.

• To create an update package, you must have either the Use Object Manager privilege or the Use Object Manager Read-only privilege for the project from which you are creating an update package.

• If you want to migrate objects between projects with Object Manager, MicroStrategy recommends that those projects have related schemas. This means that either one project must be a duplicate of the other, or both projects must be duplicates of a third project. For information about duplicating projects, including instructions, see Duplicate a Project, page 753.

• To move system objects between projects that do not have related schemas, the projects must either have been created with MicroStrategy 9.0.1 or later, or have been updated to version 9.0.1 or later using the Perform system object ID unification option. For information about this upgrade, see the Upgrade Help.

• If one of the projects is updated to a new MicroStrategy release, but another project is not updated, you cannot move or copy objects from the project using the updated version of MicroStrategy to the older version. However, you can move objects from the older version to the updated project if the older version is interoperable with the updated version. For detailed information about interoperability between versions of MicroStrategy, see the Readme.

Project Locking with Object Manager


Opening a connection to a project with Object Manager causes the project
metadata to become locked. Other users cannot make any changes to the
project until it becomes unlocked. For detailed information about the effects
of locking a project, see Lock Projects, page 760.

If you need to allow other users to change objects in projects while the
projects are opened in Object Manager, you can configure Object Manager
to connect to projects in read-only mode. You can also allow changes to
configuration objects by connecting to project sources in read-only mode.

Connecting to a project or project source in read-only mode has the following limitations:

• A connection in read-only mode may not display the most recent information. For example, if you view a folder in Object Manager over a read-only connection, and then another user adds an object to that folder, the object is not displayed in Object Manager.

• You cannot copy objects into a read-only project or project source. If you connect to a project in read-only mode, you can still move, copy, and delete objects within that project, but you cannot copy objects from another project into it.

• By default, users cannot create update packages in read-only mode. This is because objects, and their used dependencies, may be changed between the time they are selected for inclusion in the update package and the time the package is actually generated. If necessary, you can configure Object Manager to allow the creation of update packages in read-only mode. For information about update packages, see Copy Objects in a Batch: Update Packages, page 786.


To Open Projects or Connections in Read-Only Mode

1. From the Tools menu, select Preferences.

2. Expand the Object Manager category, and then select Connection.

3. To open project sources in read-only mode, select the Open configuration in read-only mode check box.

4. To open projects in read-only mode, select the Open project in read-only mode check box.

5. To allow the creation of update packages in read-only mode, select the Allow update package creation in read-only mode check box.

6. Click OK.

Copy Objects
Object Manager can copy application, schema, and configuration objects.

• Application objects include reports and documents, and the objects used to create them, such as templates, metrics, filters, prompts, and searches. Folders are also considered to be application objects and configuration objects.

• Schema objects include attributes, facts, hierarchies, transformations, functions, partition mappings, columns, and tables.

• Configuration objects include objects that are used by all projects in a project source, such as users and user groups, database instances and logins, security roles, and Distribution Services devices, transmitters, and contacts.

If you use Object Manager to copy a user or user group between project
sources, the user or group reverts to default inherited access for all projects
in the project source. To copy a user or group's security information for a
project, you must copy the user or group in a configuration update package.


For information about update packages, see Copy Objects in a Batch: Update Packages, page 786.

For background information on these objects, including how they are created
and what roles they perform in a project, see the Project Design Help.

In a MicroStrategy system, each object has a unique Object ID. Object Manager identifies objects based on their Object ID, not their name. Hence, objects with different names are treated as versions of the same object if they have the same Object ID.
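The identity rule can be illustrated with a toy comparison. This is illustrative Python, not a MicroStrategy API; the IDs and names are made up.

```python
# Illustrative model (not a MicroStrategy API): objects are matched by
# Object ID, so renaming an object does not change its identity.

def same_object(a, b):
    return a["id"] == b["id"]

source = {"id": "ID-001", "name": "Revenue"}
dest   = {"id": "ID-001", "name": "Revenue (EMEA)"}   # renamed copy
other  = {"id": "ID-002", "name": "Revenue"}          # same name, new ID

assert same_object(source, dest)        # same ID: versions of one object
assert not same_object(source, other)   # a matching name is not enough
```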

Best Practices for Copying Objects


MicroStrategy recommends that you observe the following practices when
copying objects:

• Back up your metadata before copying any objects. Object Manager cannot undo the copying and replacing of objects.

• Ensure that the Dependency Search, Conflict Resolution, International, and Migration options in the Object Manager Preferences dialog box are set to fit your project's needs. For details about the Dependency Search and Migration options, see What Happens When You Copy or Move an Object, page 769. For details about the Conflict Resolution options, see Resolve Conflicts when Copying Objects, page 777. The Object Manager Help also provides a detailed explanation of each of these options.

• Copy application objects into the following project folders:

  • My Personal Objects, or any subfolder of My Personal Objects.

  • Public Objects, or any subfolder of Public Objects.

• Copy schema objects into the appropriate Schema Objects subfolder or descendant folder only. For example, if you are copying a hierarchy, you should only paste the hierarchy into the Project Name\Schema Objects\Hierarchies folder.

• When copying MDX cubes between projects, make sure that the conflict resolution action for the cubes, cube attributes, and reports that use the cubes is set to Replace.

• If you need to copy objects from multiple folders at once, you can create a new folder, and create shortcuts in the folder to all the objects you want to copy. Then copy that folder. Object Manager copies the folder, its contents (the shortcuts), and their dependencies (the target objects of those shortcuts) to the new project.

• Another way to copy objects from multiple folders at once is to create an update package from the source project, and then import it into the target project. For more information about update packages, including step-by-step instructions, see Copy Objects in a Batch: Update Packages, page 786.

• If you are using update packages to update the objects in your projects, use the Export option to create a list of all the objects in each update package.

• When copying objects that contain location-specific strings (such as metric aliases, custom group names, or text boxes in documents), make sure that you either disable Advanced Conflict Resolution, or use the same option in the translation preferences and in the conflict resolution. Otherwise there may be inconsistencies between the object definition and the translation in the destination project. For an explanation of the advanced conflict resolution options, including how to enable or disable them, see What Happens When You Copy or Move an Object, page 769.

• Regardless of the translation preferences, when copying objects with location-specific strings, you should always verify the results. For example, empty translations in the source or destination may result in incorrect translations being saved with the new object. You can use Integrity Manager to identify reports or documents that have unexpected translations. For information about Integrity Manager, see Chapter 16, Verifying Reports and Documents with Integrity Manager.

To Copy Objects Between Projects

• To log in to a project source using Object Manager, you must have the Use Object Manager privilege for that project.

• If you want to copy application or schema objects between projects, MicroStrategy recommends that the two projects have related schemas (one must be a duplicate of the other, or both must be duplicates of a common project). For details, see Duplicate a Project, page 753.

Log in to the Projects in Object Manager

1. In Windows, go to Start > All Programs > MicroStrategy Products > Object Manager.

2. In the list of project sources, select the check box for the project source
you want to access. You can select more than one project source.

3. Click Open.

4. Use the appropriate sub-procedure below, depending on whether you want to Copy Application and Schema Objects, page 768, or Copy Configuration Objects, page 768.


Copy Application and Schema Objects

1. In the Folder List, expand the project that contains the object you want
to copy, then navigate to the object.

2. Copy the object by right-clicking and selecting Copy.

3. Expand the destination project in which you want to paste the object,
and then select the folder in which you want to paste the object.

4. Paste the application or schema object into the appropriate destination folder by right-clicking and selecting Paste.

For information about additional objects that may be copied with a


given object, see What happens when You Copy or Move an Object,
page 769.

If you are copying objects between two different project sources, two
windows are open within the main Object Manager window. In this
case, instead of right-clicking and selecting Copy and Paste, you can
drag and drop objects between the projects.

5. If you copied any schema objects, you must update the destination
project's schema. Select the destination project, and from the Project
menu, select Update Schema.

Copy Configuration Objects

1. In the Folder Lists for both the source and destination projects, expand
the Administration folder, then select the appropriate manager for the
type of configuration object you want to copy (Database Instance
Manager, Schedule Manager, or User Manager).

2. From the list of objects displayed on the right-hand side in the source
project source, drag the desired object into the destination project
source and drop it.


To display the list of users on the right-hand side, expand User Manager,
then on the left-hand side select a group.

What happens when You Copy or Move an Object


If the object you are copying does not exist in the destination project,
MicroStrategy Object Manager copies the object into the destination project.
This new object has the same name as the source object.

If the object you are copying does exist in the destination project, a conflict
occurs and Object Manager opens the Conflict Resolution dialog box. For
information about how to resolve conflicts, see Resolve Conflicts when
Copying Objects, page 777.

Managing Object Dependencies


When an object uses another object in its definition, the objects are said to
depend on one another. Object Manager recognizes two types of object
dependencies: used dependencies and used-by dependencies.

When you migrate an object to another project, by default any objects used
by that object in its definition (its used dependencies) are also migrated.
You can exclude certain objects and tables from the dependency check and
migration. For instructions, see Excluding Dependent Attributes or Tables
from Object Migration, page 774.

Used Dependencies

A used dependency occurs when an object uses other objects in its


definition. For example, in the MicroStrategy Tutorial project, the metric
named Revenue uses the base formula named Revenue in its definition. The
Revenue metric is said to have a used dependency on the Revenue base
formula. (Additionally, the Revenue base formula has a used-by dependency
of the Revenue metric.)


When you migrate an object to another project, any objects used by that
object in its definition (its used dependencies) are also migrated. The order
of these dependent relationships is maintained.
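The migration rule above — used dependencies are copied before the objects that use them — can be sketched as a depth-first traversal. This is a minimal illustration, not Object Manager's actual implementation; the object names and the `uses` map below are hypothetical.

```python
# Minimal sketch: gather an object's used dependencies transitively,
# emitting dependencies before the objects that use them.

def collect_used_dependencies(obj, uses):
    """Return obj plus everything it uses, dependencies first.

    `uses` maps an object name to the objects it uses directly
    in its definition (its used dependencies).
    """
    order = []
    seen = set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in uses.get(name, []):
            visit(dep)          # migrate each dependency first
        order.append(name)      # then the object itself

    visit(obj)
    return order

# Hypothetical example: a report uses a metric, which uses a base formula.
uses = {
    "Revenue Report": ["Revenue Metric"],
    "Revenue Metric": ["Revenue Base Formula"],
}
migration_order = collect_used_dependencies("Revenue Report", uses)
```

Running this yields the base formula first, then the metric, then the report, preserving the dependency order described above.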

To Manage Used or Used-By Dependencies of an Object

1. After you have opened a project source and a project using Object
Manager, in the Folder List select the object.

2. From the Tools menu, select Object used dependencies. The Used
dependencies dialog box opens and displays a list of objects that the
selected object uses in its definition. For the Revenue metric in the
MicroStrategy Tutorial project, for example, the used dependency is the
Revenue base formula.

3. In the Used dependencies dialog box, you can do any of the following:

l View used dependencies for any object in the list by selecting the
object and clicking the Object used dependencies toolbar icon.

l Open the Used-by dependencies dialog box for any object in the list
by selecting the object and clicking the Object used-by
dependencies icon on the toolbar. For information about used-by
dependencies, see Used-By Dependencies, page 771.

l View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.


Used-By Dependencies

A used-by dependency occurs when an object is used as part of the


definition of other objects. For example, in the MicroStrategy Tutorial
project, the Revenue metric has used-by dependencies of many reports and
even other metrics. The Revenue metric is said to be used by these other
objects.

Used-by dependents are not automatically migrated with their used objects.
However, you cannot delete an object that has used-by dependencies
without first deleting the objects that depend on it.
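The delete restriction above can be modeled as a simple guard on an object's used-by list. This is a sketch only, with hypothetical object names.

```python
def can_delete(obj, used_by):
    """An object cannot be deleted while other objects still use it
    (that is, while its used-by list is non-empty)."""
    return not used_by.get(obj, [])

# Hypothetical example: a report uses the Revenue metric.
used_by = {
    "Revenue Metric": ["Revenue Report"],  # still in use
    "Revenue Report": [],                  # nothing uses the report
}
```

Here the report can be deleted, but the metric cannot until the report that depends on it is deleted first.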

To Manage the Used-By Dependencies of an Object

1. After you have opened a project source and a project using Object
Manager, from the Folder List select the object.

2. From the Tools menu, choose Object used-by dependencies. The


Used-by dependencies dialog box opens and displays a list of objects
that depend on the selected object for part of their definition, such as
the many reports and metrics that use the Revenue metric in the
MicroStrategy Tutorial project.

3. In the Used-by dependencies dialog box, you can do any of the


following:


l View used-by dependencies for any object in the list by selecting the
object and clicking the Object used-by dependencies icon on the
toolbar.

l Open the Used dependencies dialog box for any object in the list by
selecting the object and clicking the Object used dependencies icon
on the toolbar. For information about used dependencies, see Used
Dependencies, page 769.

l View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.

Migrating Dependent Objects


When you copy an object using Object Manager, it checks for any used
dependents of that object and copies them as well. These dependent objects
are copied to the same path as in the source project. If this path does not
already exist in the destination project, Object Manager creates the path.

For example, a user copies a report from the source project to the
destination project. In the source project, all dependents of the report are
stored in the Public Objects\Report Dependents folder. Object
Manager looks in the destination project's Public Objects folder for a
subfolder named Report Dependents (the same path as in the source
project). If the folder exists, the dependent objects are saved in that folder.
If the destination project does not have a folder in Public Objects with the
name Report Dependents, Object Manager creates it and saves all
dependent objects there.
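The folder behavior in the example above — reuse the source path in the destination if it exists, otherwise create it — can be sketched as follows. The folder set stands in for the destination project's folder tree; the paths are hypothetical.

```python
def destination_folder(source_path, dest_folders):
    """Return the folder where a dependent object is saved, creating
    the source path in the destination if it is missing (creation is
    simulated by adding to the set)."""
    if source_path not in dest_folders:
        dest_folders.add(source_path)  # Object Manager creates the path
    return source_path

# The destination project initially lacks the Report Dependents subfolder.
dest_folders = {"Public Objects"}
folder = destination_folder("Public Objects/Report Dependents", dest_folders)
```

After the call, the dependent objects land in the same path as in the source project, and that path now exists in the destination.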

When you create an update package, click Add All Used Dependencies to
make sure all used dependencies are included in the package. If the
dependent objects for a specific object do not exist in either the destination
project source or in the update package, the update package cannot be
applied. If you choose not to add dependent objects to the package, make


sure that all dependent objects are included in the destination project
source.

Object Dependencies

Some objects have dependencies that are not immediately obvious. These
are listed below:

l Folders have a used dependency on each object in the folder. If you copy
a folder using Object Manager, all the objects in that folder are also
copied.

A folder that is copied as part of an update package does not have a used
dependency on its contents.

l Shortcut objects have a used dependency on the object they are a


shortcut to. If you copy a shortcut using Object Manager, the object it is a
shortcut to is also copied.

l Security filters, users, and user groups have a used dependency on the
user groups they belong to. If you copy a security filter, user, or user
group, the groups that it belongs to are also copied.

Groups have a used-by dependency on the users and security filters that
are associated with them. Copying a group does not automatically copy
the users or security filters that belong to that group. To copy the users or
security filters in a group, select the users from a list of that group's
used-by dependents and then copy them.

l Attributes used in fact expressions are listed as dependents of the fact.


When the fact is copied, the attribute is also copied.

Attributes used in fact entry levels are not dependents of the fact.


Excluding Dependent Attributes or Tables from Object Migration

When you copy an object, or add dependent objects to an update package,


Object Manager searches for that object's used dependencies so it can copy
those objects also. Depending on the options you set in the Object Manager
Preferences, you can exclude certain types of dependent objects from this
migration.

The options are:

l Exclude all parent attributes from an attribute and Exclude all child
attributes from an attribute: An attribute has a used dependency on its
parent and child attributes in a hierarchy. Thus, migrating an attribute may
result in migrating its entire hierarchy. To exclude the parent or child
attributes from being migrated, select the corresponding option.

l Exclude non-lookup tables from an attribute and Exclude all tables


from a fact: An attribute or fact has a used dependency on each table that
is referenced by the attribute or fact. Thus, by default, migrating an
attribute or fact results in migrating all its associated tables. You can
choose to exclude the tables from the dependency search if, for example,
you have mapped additional tables to an attribute or fact for testing
purposes but do not need those tables in the production project.

For attributes, the lookup table must always exist in the destination
project, so it is always migrated.
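The exclusion options can be modeled as a filter over an object's dependency list. The `kind` labels and sample objects below are hypothetical; they only mirror the categories described above, and the lookup table is never excluded.

```python
def filter_dependencies(deps, exclude_parents=False,
                        exclude_children=False,
                        exclude_nonlookup_tables=False):
    """deps: list of (name, kind) pairs. Drop the kinds that the
    Object Manager preferences exclude; lookup tables always stay."""
    excluded = set()
    if exclude_parents:
        excluded.add("parent_attribute")
    if exclude_children:
        excluded.add("child_attribute")
    if exclude_nonlookup_tables:
        excluded.add("nonlookup_table")
    return [name for name, kind in deps if kind not in excluded]

# Hypothetical dependencies of a Month attribute.
deps = [
    ("Year", "parent_attribute"),
    ("Day", "child_attribute"),
    ("LU_MONTH", "lookup_table"),      # always migrated
    ("FACT_SALES", "nonlookup_table"),
]
kept = filter_dependencies(deps, exclude_parents=True,
                           exclude_nonlookup_tables=True)
```

With parents and non-lookup tables excluded, only the child attribute and the lookup table remain in the dependency search results.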

To Exclude Types of Dependent Objects

1. From the Tools menu, select Object Manager Preferences.

2. Expand Dependency search, and then select Dependency search.

3. Select the check boxes for the objects you want to exclude from Object
Manager's dependency checking.

4. Click OK.


Timestamps for Migrated Objects


By default, when an object is migrated, the object's modification timestamp
is updated to the destination Intelligence Server's migration process time.
You can change this behavior so that the timestamp remains as the last
modification time the object had in the source project.
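The timestamp rule reduces to a single choice, sketched here with plain strings standing in for timestamps. Illustrative only.

```python
def migrated_timestamp(source_mtime, migration_time, preserve):
    """With 'Preserve object modification timestamp during migration'
    enabled, the source time is kept; otherwise the object is stamped
    with the destination server's migration time."""
    return source_mtime if preserve else migration_time

kept = migrated_timestamp("2024-01-05 10:00", "2024-09-01 16:30", True)
stamped = migrated_timestamp("2024-01-05 10:00", "2024-09-01 16:30", False)
```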

To Set the Migrated Object Modification Timestamp

1. From the Tools menu, select Object Manager Preferences.

2. Expand Migration, and then select Migration.

3. To cause objects to keep the modification timestamp from the source


project, select the Preserve object modification timestamp during
migration check box. If this check box is cleared, objects take the
modification timestamp from the destination Intelligence Server at the
time of migration.

4. Click OK.

Copying Objects Between Projects in Different Languages


Object Manager's internationalization options allow you to specify the locale
settings to be used when copying objects. You can also retain the object's
name, description, and long description from the destination project, when
replacing objects in the destination project using Object Manager.

The ability to retain the name, description, and long description is important
in internationalized environments. When replacing the objects to resolve
conflicts, retaining these properties of the objects in the destination project
facilitates support of internationalized environments. For example, if the
destination project contains objects with French names but the source
project has been developed in English (including English names), you can
retain the French names and descriptions for objects in the destination
project. Alternately, you can update the project with the English names and
not change the object itself.


To Set the Internationalization Options

1. From the Tools menu, select Object Manager Preferences.

2. Expand the International category, and select Language.

3. From the Interface Language drop-down list, select the language to be


used in Object Manager. By default this is the language used in all
MicroStrategy products installed on this system.

4. From the Language for metadata and warehouse data if user and
project level preferences are set to default drop-down list, select
whether copied objects use the locale settings from Developer or from
the machine's regional settings.

For more information on metadata and warehouse data languages, see


About Internationalization. For a table on the prioritization of user- and
project-level language preferences, see Configuring Metadata Object
and Report Data Language Preferences, page 2068.

5. In the International category, select Translation.

6. To resolve translations with a different action than that specified for the
object associated with the translation, select the Enable advanced
conflict resolution check box.

l To always use the translations in the destination project, select Keep


Existing.

l To always use the translations in the source project, select Replace.

7. Select the Merge translations even if object exists identically check


box to update the translations for all copied objects in the destination
project, according to the option specified above (Keep Existing or
Replace (Default)), even if the object exists identically in both projects.

8. Click OK.


Resolve Conflicts when Copying Objects


In the MicroStrategy system, every object has an ID (or GUID) and a
version. The version changes every time the object is updated; the ID is
created when the object is created and remains constant for the life of the
object. To see the ID and version of an object, right-click the object and
select Properties.

When copying objects across projects with Object Manager, if an object with
the same ID as the source object exists anywhere in the destination project,
a conflict occurs and the Conflict Resolution dialog box opens, prompting
you to resolve the conflict.

The table below lists the different kinds of conflict:

Exists identically: The object ID, object version, and path are the same in
the source and destination projects.

Exists differently: The object ID is the same in the source and destination
projects, but the object versions are different. The path may be the same
or different.

Exists identically except for path: The object ID and object version are the
same in the source and destination projects, but the paths are different.
This occurs when one of the objects exists in a different folder.

If your language preferences for the source and destination projects are
different, objects that are identical between the projects may be reported
as Exists Identically Except For Path. This occurs because when different
languages are used for the path names, Object Manager treats them as
different paths. To resolve this, set your language preferences for the
projects to the same language.

If you resolve the conflict with the Replace action, the destination object
is updated to reflect the path of the source object.

Exists identically except for Distribution Services objects: (User only) The
object ID and object version of the user are the same in the source and
destination projects, but at least one associated Distribution Services
contact or contact group is different. This may occur if you modified a
contact or contact group linked to this user in the source project. If you
resolve the conflict with the Replace action, the destination user is
updated to reflect the contacts and contact groups of the source user.

Does not exist: The object exists in the source project but not in the
destination project.

If you clear the Show new objects that exist only in the source check box
in the Migration category of the Object Manager Preferences dialog box,
objects that do not exist in the destination project are copied
automatically with no need for conflict resolution.
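The conflict categories above can be sketched as a decision over an object's version and path once a matching ID is found. This is an illustrative model only; real conflict detection also covers cases such as Distribution Services differences for users, and the field names here are hypothetical.

```python
def classify_conflict(source, destination):
    """source and destination are dicts with 'version' and 'path';
    destination is None when no object with the same ID exists in
    the destination project."""
    if destination is None:
        return "does not exist"
    if source["version"] != destination["version"]:
        return "exists differently"
    if source["path"] != destination["path"]:
        return "exists identically except for path"
    return "exists identically"

src = {"version": 3, "path": "/Public Objects/Metrics"}
```

For example, a matching ID with a different version classifies as "exists differently" regardless of path, while a matching version in a different folder classifies as "exists identically except for path".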

Choosing an Action to Resolve a Conflict


If a conflict occurs you must determine what action Object Manager should
take. The different actions are explained in the table below.

When Object Manager reports a conflict it also suggests a default action to


take for that conflict. For information on changing the default action, see
Setting Default Actions for Conflict Resolutions, page 781.

Use existing: No change is made to the destination object. The source
object is not copied.

Replace: The destination object is replaced with the source object. If the
conflict type is Exists Identically Except For Path or Exists Identically
Except For Distribution Services Objects, the destination object is
updated to reflect the path or Distribution Services addresses and
contacts of the source object.

Replace moves the object into the same parent folder as the source
object. If the parent path is the same between source and destination but
the grandparent path is different, Replace may appear to do nothing
because Replace puts the object into the same parent path.

Non-empty folders in the destination location will never have the same
version ID and modification time as the source, because the folder is
copied first and the objects are added to it, thus changing the version ID
and modification times during the copy process.

Keep both: No change is made to the destination object. The source
object is duplicated in the destination location.

Use newer: If the source object's modification time is more recent than
the destination object's, the Replace action is used. Otherwise, the Use
existing action is used.

Use older: If the source object's modification time is more recent than the
destination object's, the Use existing action is used. Otherwise, the
Replace action is used.

Merge (user/group only): The privileges, security roles, groups, and
Distribution Services addresses and contacts of the source user or group
are added to those of the destination user or group.

Do not move (table only): The selected table is not created in the
destination project. This option is only available if the Allow to override
table creation for non-lookup tables that exist only at source project
check box in the Migration category of the Object Manager Preferences
dialog box is selected.

Force replace (update packages only): Replace the object in the
destination project with the version of the object in the update package,
even if both versions of the object have the same Version ID.

Delete (update packages only): Delete the object from the destination
project. The version of the object in the update package is not imported
into the destination project.

If the object in the destination has any used-by dependencies when
you import the update package, the import will fail.
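The Use newer and Use older actions reduce to a timestamp comparison that picks either Replace or Use existing, as sketched below. Illustrative only; timestamps are plain comparable values.

```python
def resolve_time_based(action, source_mtime, dest_mtime):
    """'use newer' replaces only when the source is more recent;
    'use older' replaces only when it is not."""
    source_is_newer = source_mtime > dest_mtime
    if action == "use newer":
        return "replace" if source_is_newer else "use existing"
    if action == "use older":
        return "use existing" if source_is_newer else "replace"
    raise ValueError(f"not a time-based action: {action}")
```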

Warehouse and other database tables associated with the objects moved
are handled in specific ways, depending on your conflict resolution choices.
For details, see Conflict Resolution and Tables, page 783.

If you choose to replace a schema object, the following message may


appear:

The schema has been modified. In order for the changes to take
effect, you must update the schema.

This message also appears if you choose to replace an application object


that depends on an attribute, and you have made changes to that attribute
by modifying its form properties at the report level or its column definition
through another attribute. For information about modifying the properties of
an attribute, see the Project Design Help.

To update the project schema, from the Object Manager Project menu,
select Update Schema. For details about updating the project schema, see
the Optimizing and Maintaining your Project section in the Project Design
Help.

To Resolve a Conflict

1. Select the object or objects that you want to resolve the conflict for.
You can select multiple objects by holding down SHIFT or CTRL when


selecting.

2. Choose an option from the Action drop-down list (see table above).

3. On the toolbar, click Proceed.

Setting Default Actions for Conflict Resolutions


You can determine the default actions that display in the Conflict Resolution
dialog box when a conflict occurs. This includes setting the default actions
for the following object categories and types:

l Application objects

l Schema objects

l Configuration objects

l Folders

l Users and user groups

For a list of application, configuration, and schema objects, see Copy


Objects, page 764. For an explanation of each object action, see Choosing
an Action to Resolve a Conflict, page 778.

You can set a different default action for objects specifically selected by the
user, and for objects that are included because they are dependents of
selected objects. For example, you can set selected application objects to
default to Use newer to ensure that you always have the most recent
version of any metrics and reports. You can set dependent schema objects
to default to Replace to use the source project's version of attributes, facts,
and hierarchies.

These selections are only the default actions. You can always change the
conflict resolution action for a given object when you copy that object.


To Set the Default Conflict Resolution Actions

1. From the Tools menu, select Object Manager Preferences.

2. Expand the Conflict Resolution category, and select Default Object


Actions.

3. Make any changes to the default actions for each category of objects.

l For an explanation of the differences between application,


configuration, and schema objects, see Copy Objects, page 764.

l For an explanation of each object action, see Choosing an Action to


Resolve a Conflict, page 778.

4. Click OK.

Conflict Resolution and Access Control Lists


When you update or add an object in the destination project, by default the
object keeps its access control list (ACL) from the source project. You can
change this behavior in two ways:

l If you resolve a conflict with the Replace action, and the access control
lists (ACL) of the objects are different between the two projects, you can
choose whether to keep the existing ACL in the destination project or
replace it with the ACL from the source project.

l If you add a new object to the destination project with the Create New or
Keep Both action, you can choose to have the object inherit its ACL from
the destination folder instead of keeping its own ACL. This is helpful when
copying an object into a user's profile folder, so that the user can have full
control over the object.

The Use Older or Use Newer actions always keep the ACL of whichever
object (source or destination) is used.
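The ACL outcomes described above can be sketched as a small decision function. The option labels are simplified stand-ins for the dialog settings, and the ACL values are hypothetical.

```python
def acl_after_copy(case, option, source_acl, dest_acl, folder_acl):
    """case: 'replace' (conflict resolved with Replace) or
    'new object' (added with Create New or Keep Both)."""
    if case == "replace":
        # 'Keep existing ACL when replacing objects' keeps the copied
        # object's ACL from the source; 'Replace existing ACL' uses
        # the replaced destination object's ACL.
        return source_acl if option == "keep existing" else dest_acl
    if case == "new object":
        # New objects keep the source ACL by default, or inherit
        # from the destination folder if that option is selected.
        return folder_acl if option == "inherit from folder" else source_acl
    raise ValueError(case)
```

Inheriting from the destination folder is the choice described above for copying into a user's profile folder, so the user gains full control over the copy.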


To Set the ACL Options

1. From the Tools menu, select Object Manager Preferences.

2. Expand the Conflict Resolution category, and select Access Control


List.

3. Under ACL option on replacing objects, select how to handle the


ACL for conflicts resolved with the Replace action:

l To use the ACL of the source object, select Keep existing ACL when
replacing objects.

l To use the ACL of the replaced destination object, select Replace


existing ACL when replacing objects.

If this option is selected, the ACL is replaced even if the source and
destination objects are identical.

4. Under ACL option on new objects, select how to handle the ACL for
new objects added to the destination project:

l To use the ACL of the source object, select Keep ACL as in the
source objects.

l To inherit the ACL from the destination folder, select Inherit ACL
from the destination folder.

5. Click OK.

Conflict Resolution and Tables


When an attribute or fact is migrated from one project to another using
Object Manager, either specifically or because it is a dependent of another
object, by default all dependent tables are also migrated. This includes
warehouse tables as well as MDX tables and XDA tables.


You can choose not to create a dependent table in the destination project by
changing the Action for the table from Create New to Ignore. You can also
choose not to migrate any dependent tables by specifying that they not be
included in Object Manager's dependency search. For detailed information,
including instructions, see Excluding Dependent Attributes or Tables
from Object Migration, page 774.

The following list and related tables explain how the attribute - table or fact -
table relationship is handled, based on the existing objects and tables and
the conflict resolution action you select.

In the following list and tables, attribute, fact, and table descriptions refer to
the destination project. For example, "new attribute" means the attribute is
new to the destination project: it exists in the source project but not the
destination project.

l New attribute or fact, new table: There is no conflict resolution. By


default the table is moved with the object. You can choose not to create
the dependent table in the destination project by changing the Action for
the table from Create New to Ignore.

l New attribute or fact, existing table: The object in the source project
contains a reference to the table in its definition. The table in the
destination project has no reference to the object because the object is not
present in the destination project. In this case the new object will have the
same references to the table as it did in the source project.

l Existing attribute or fact, new table: The object in the destination


project does not refer to the table because the table does not exist in the
destination project. The object in the source project contains a reference
to the table in its definition.


Use Existing: The object does not reference the table.

Replace: The object has the same references to the table as it does in the
source project.

Keep Both: No change is made to the destination object. The source
object is duplicated in the destination project. The duplicated object will
have the same references to the table as it does in the source project.

l Existing attribute or fact, existing table: The object has a reference to


the table in the source project but has no reference to it in the destination
project.

Use Existing: The object does not reference the table.

Replace: The object has the same references to the table as it does in the
source project.

Keep Both: No change is made to the destination object. The source
object is duplicated in the destination project. The duplicated object will
have the same references to the table as it does in the source project.

l Existing attribute or fact, existing table: The object has no reference to


the table in the source project but has a reference to it in the destination
project.


Use Existing: The object has the same references to the table as it did
before the action.

Replace: The object does not reference the table.

Keep Both: No change is made to the destination object. The source
object is duplicated in the destination project. The duplicated object will
not reference the table.

Copy Objects in a Batch: Update Packages


In some cases, you may need to update the objects in several folders at
once, or at a time when the source project is offline. Object Manager allows
you to save the objects you want to copy in an update package, and import
that package into any number of destination projects at a later date.

For example, you have several developers who are each responsible for a
subset of the objects in the development project. The developers can submit
update packages, with a list of the objects in the packages, to the project
administrator. The administrator can then import those packages into the
test project to apply the changes from each developer. If a change causes a
problem with the test project, the administrator can undo the package import
process.

If your update package includes any schema objects, you may need to
update the project schema after importing the package. For more
information about updating the schema after importing an update package,
see Update Packages and Updating the Project Schema, page 807.

About Update Packages


An update package is a file containing a set of object definitions and conflict
resolution rules. When you create an update package, you first add objects,
and then specify how any conflict involving the objects is resolved. For more


information on resolving conflicts with objects, see Resolve Conflicts when


Copying Objects, page 777.

In addition to the standard Object Manager conflict resolution rules (see


Resolve Conflicts when Copying Objects, page 777), two additional rules are
available for update packages:

l Force Replace: Replace the object in the destination project with the
version of the object in the update package, even if both versions of the
object have the same Version ID.

l Delete: Delete the object from the destination project. The version of the
object in the update package is not imported into the destination project.

If the object in the destination has any used-by dependencies when you
import the update package, the import will fail.
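A rough model of the two package-only rules: Force Replace overwrites regardless of version, and Delete fails if the object still has used-by dependents in the destination. This is a sketch with hypothetical structures, not the actual import logic.

```python
def apply_package(project, package, used_by):
    """project: name -> version; package: list of (name, version, rule)
    entries; used_by: name -> list of dependents in the destination."""
    for name, version, rule in package:
        if rule == "force replace":
            project[name] = version        # even if versions match
        elif rule == "delete":
            if used_by.get(name):
                raise RuntimeError(f"import fails: {name} is still used")
            project.pop(name, None)        # package version not imported
    return project

project = {"Old Metric": 1, "Report": 2}
package = [("Old Metric", 1, "delete"), ("Report", 2, "force replace")]
result = apply_package(project, package, used_by={"Old Metric": []})
```

Deleting an object that still has used-by dependents raises an error, mirroring the failed import described above.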

Object Manager supports the following kinds of update packages:

l Project update packages contain application and schema objects from a


single project.

l Configuration update packages contain configuration objects from a


single project source.

l Project security update packages contain security information about


users and user groups, such as privileges, security roles, and security
filters, for a single project. Since these update packages involve users and
groups, which are configuration objects, they are created at the same time
as configuration update packages.

l Undo packages enable you to reverse the changes made by importing


one of the other types of packages. You create undo packages based on
existing update packages. For more information about undo packages,
including instructions on creating and importing them, see Rolling Back
Changes: Undo Packages, page 808.


Updating Project Access Inform ation for Users and Groups

You can include users and groups in a configuration update package.


However, the project access information, such as privileges, security roles,
and security filters, for those users and groups is not included in the
configuration update package, because this information can be different for
each project.

Specifically, configuration update packages do not include the information found in the Project Access and Security Filter categories of the User Editor or Group Editor. All other user and group information is included in the configuration update package when you add a user or group to the package.

To update your users and groups with the project access information for
each project, you must create a project security update package for each
project. You create these packages at the same time that you create the
configuration update package, by selecting the Create project security
packages check box and specifying which projects you want to create a
project security update package for. For detailed instructions on creating a
configuration update package and project security update packages, see
Creating a Configuration Update Package, page 792.

You must import the configuration update package before importing the
project security update packages.

Creating an Update Package


You create update packages from within Object Manager. From the Create
Package dialog box, you select the objects to copy from the source project,
and the rules that govern the cases when these objects already exist in the
destination project.

You can also create update packages from the command line, using rules
specified in an XML file. In the Create Package dialog box, you specify a
container object, such as a folder, search object, or object prompt, and


specify the conflict resolution rules. Object Manager creates an XML file
based on your specifications. You can then use that XML file to create an
update package that contains all objects included in the container. For more
information and instructions, see Creating an Update Package from the
Command Line, page 794.

Configuration update packages and project security update packages are created slightly differently from project update packages. For instructions on how to create a configuration update package and associated project security update packages, see Creating a Configuration Update Package, page 792.

By default, users cannot create project update packages in read-only mode. This is because objects, and their used dependencies, may be changed between the time they are selected for inclusion in the update package and the time the package is actually generated. For more information, see Project Locking with Object Manager, page 763.


To Create a Project Update Package

1. In Object Manager, log in to a project.

2. From the Tools menu, select Create Package.

You can also open this dialog box from the Conflict Resolution dialog
box by clicking Create Package. In this case, all objects in the Conflict
Resolution dialog box, and all dependents of those objects, are
automatically included in the package.

Adding Objects to the Package

1. To add objects to the package, do one of the following:

l Drag and drop objects from the Object Browser into the Create
Package dialog box.


l Click Add. Select the desired objects and click >. Then click OK.

l Click Add. You can import the results of a previously saved search
object.

2. To add the dependents of all objects to the package, click Add all used
dependencies.

If the dependent objects for a specific object do not exist in either the
destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent
objects to the package, make sure that all dependent objects are
included in the destination project source.

3. To add the dependents of specific objects, select those objects, right-click, and select Add used dependencies.

Configuring the Package

1. To change the conflict resolution action for an object, double-click the Action column for the object and select the new action from the drop-down list. For an explanation of the actions, see Resolve Conflicts when Copying Objects, page 777.

2. Select the schema update options for this package. For more details on
these options, see Update Packages and Updating the Project Schema,
page 807.

3. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.

Saving the Package

1. Enter the name and location of the package file in the Save As field.
The default file extension for update packages is .mmp.


You can set the default location in the Object Manager Preferences
dialog box, in the Object Manager: Browsing category.

2. To save a log file containing information about the package's contents in the Object Manager directory, from the File menu select Save As Text File or Save As Excel File.

3. When you have added all objects to the package, click Proceed.

Creating a Configuration Update Package

A configuration update package contains configuration objects from a project source, instead of application and schema objects from a single project. As such, configuration update packages are created at the project source level.

If you choose to include users or groups in a configuration update package, project access information (such as privileges, security roles, and security filters) is not included in the configuration package. To migrate project access information about the users or groups, you must create a project security update package for each project at the same time you create the configuration update package. For more information about project security packages, see Updating Project Access Information for Users and Groups, page 788.

To Create a Configuration Update Package

1. In Object Manager, log in to a project source.

2. In the folder list, select the top-level project source.

3. From the Tools menu, select Create Configuration Package.

You can also open this dialog box from the Conflict Resolution dialog
box by clicking Create Package. In this case, all objects in the Conflict
Resolution dialog box, and all dependents of those objects, are
automatically included in the package.


Adding Configuration Objects to the Package

1. To add configuration objects to the package, click Add Configuration Objects.

2. Search for the objects you want to add to the package.

3. When the objects are loaded in the search area, click and drag them to
the Create Package dialog box.

4. When you have added all the desired objects to the package, close the
Configuration - Search for Objects dialog box.

5. To add the dependents of all objects to the package, click Add all used
dependencies.

If the dependent objects for a specific object do not exist in either the
destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent
objects to the package, make sure that all dependent objects are
included in the destination project source.

6. To add the dependents of specific objects, select those objects and click Add used dependencies.

Creating Packages for Project-Level User and Group Access

1. If your project includes users or groups, and you want to include project-level information about those users or groups, select the Create project security packages check box. For information about project security packages, see Updating Project Access Information for Users and Groups, page 788.


2. In the Projects area, select the check boxes next to the projects you
want to create project security packages for.

Configuring the Package

1. To change the conflict resolution action for an object, double-click the Action column for the object and select the new action from the drop-down list. For an explanation of the actions, see Resolve Conflicts when Copying Objects, page 777.

If you are creating project security update packages, you must select
Replace as the conflict resolution action for all users and groups.
Otherwise the project-level security information about those users and
groups is not copied into the destination project.

2. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.

Saving the Package

1. Enter the name and location of the package file in the Save As field.
The default file extension for update packages is .mmp.

Project security update packages are named ProjectSource_ProjectName.mmp, and are created in the same location as the configuration update package.

2. To save a log file containing information about the package's contents in the Object Manager directory, from the File menu select Save As Text File or Save As Excel File.

3. When you have added all objects to the package, click Proceed.

Creating an Update Package from the Command Line

You may want to schedule the creation of an update package at a later time,
so that the project is not locked during normal business hours. Or you may


want to create a package containing certain objects on a specific schedule. For example, you may want to create a new package every week that contains all the new metrics from the development project.

You can use Object Manager to create an XML file specifying what objects
are to be included in the update package. That XML file can then be used to
create the package from the command line.

The XML file specifies a container object in the source project, that is, a
folder, search object, or object prompt. When you create the package from
the XML file, all objects included in that container object are included in the
update package, as listed in the table below:

If the XML file specifies a... The update package contains...

Folder All objects in the folder

Search object All objects returned by the search

Object prompt All objects returned by the prompt

To create an XML file for a configuration update package, see Manually Creating an Update Package Creation XML File, page 798. You cannot create a configuration update package XML file from within Object Manager because container objects do not exist at the project source level.

To Create an XML File for Creating an Update Package from the Command Line

1. In Object Manager, log in to a project.

2. From the Tools menu, select Create Package.


Adding a Container Object to the Package

1. Click Add.

2. You need to specify what to use as a container object. You can use a
search object, object prompt, or folder. To specify a search object or
object prompt as the container object:

l Make sure the Import selected objects option is selected.

l In the Available objects area, browse to the search object or object prompt.

l Select the search object or object prompt and click >.

3. Alternatively, to specify a folder as the container object:

l Select the Import folder and children recursively option.

l Type the name of the folder in the field, or click ... (the browse button)
and browse to the folder.

4. Select the Return as a container to create XML checkbox.

5. Click OK.

6. To add the dependents of all objects to the package, select the Add all
used dependencies check box. All dependent objects of all objects
included in the container object will be included in the package when it
is created.

If the dependent objects for a specific object do not exist in either the
destination project or in the update package, the update package
cannot be applied. If you choose not to include dependent objects in
the package, make sure that all dependent objects are included in the
destination project.


Configuring the Package

1. To change the conflict resolution action for an object, double-click the Action column for the object and select the new action from the drop-down list. For an explanation of the actions, see Resolve Conflicts when Copying Objects, page 777.

2. Select the schema update options for this package. For more details on
these options, see Update Packages and Updating the Project Schema,
page 807.

3. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.

Saving the XML File

1. Enter the name and location of the package file to be created by this
XML in the Save As field. The default file extension for update
packages is .mmp.

You can set the default location in the Object Manager Preferences
dialog box, in the Object Manager: Browsing category.

2. Click Create XML. You are prompted to type the name and location of
the XML file. By default, this is the same as the name and location of
the package file, with an .xml extension instead of .mmp.

3. Click Save.

To Create an Update Package from an XML File

Creating a package from the command line locks the project metadata for
the duration of the package creation. Other users cannot make any changes
to the project until it becomes unlocked. For detailed information about the
effects of locking a project, see Lock Projects, page 760.


Call the Project Merge executable, projectmerge.exe, with the following parameters:

Parameter          Effect
-f Filename.xml    Use this XML file to create an update package (required)
-sp Password       Log into the project source with this password (the login ID to be used is stored in the XML file)
-smp Password      Log into the project with this password (the login ID to be used is stored in the XML file)
-sup               Suppress status updates (useful for creating an update package in the background, so that the status window does not appear)

Manually Creating an Update Package Creation XML File

You can also create the XML file to create an update package without
opening Object Manager. To do this, you first copy a sample XML file that
contains the necessary parameters, and then edit that copy to include a list
of the objects to be migrated and conflict resolution rules for those objects.

This is the only way to create an XML file to create a configuration update
package.

Sample package creation XML files for project update packages and
configuration update packages are in the Object Manager folder. By default
this folder is C:\Program Files (x86)\MicroStrategy\Object
Manager\.

The XML file has the same structure as an XML file created using the
Project Merge Wizard. For more information about creating an XML file for
use with Project Merge, see Merge Projects to Synchronize Objects, page
809.


High-Level Steps to Manually Create an Update Package Creation XML File

1. Make a copy of one of the sample XML files:

l To create a project update package, copy the file createProjectPackage.xml.

l To create a configuration update package, copy the file createConfigPackage.xml.

2. Edit your copy of the XML file to include the following information, in the
appropriate XML tags:

l SearchID (project update package only): The GUID of a search object that returns the objects to be added to the project update package.

l TimeStamp (configuration update package only): A timestamp, of the form MM/DD/YYYY hh:mm:ss (am/pm). All configuration objects modified after that timestamp are included in the update package.

l PackageFile: The name and path of the update package. If a package with this name already exists in this path, the creation timestamp is appended to the name of the package created by this file.

l AddDependents:

l Yes for the package to include all dependents of all objects in the
package.

l No for the package to only include the specified objects.

l Location: In a three-tier system, this is the name of the machine that is used to connect to the project source. In a two-tier system, this is the DSN used to connect to the project source.

l Project (project update package only): The project containing the objects to include in the update package.


l ConnectionMode:

l 2-tier for a direct (2-tier) project source connection.

l 3-tier for a server (3-tier) project source connection.

l AuthenticationMode: The authentication mode used to connect to the project source, either Standard or Windows.

l Login: The user name to connect to the project source. You must provide a password for the user name when you run the XML file from the command line.

3. For a project update package, you can specify conflict resolution rules for individual objects. In an Operation block, specify the ID (GUID) and Type of the object, and the action to be taken. For information about the actions that can be taken in conflict resolution, see Resolve Conflicts when Copying Objects, page 777.

4. Save the XML file.

5. When you are ready to create the update package from the XML file, call the Project Merge executable, projectmerge.exe, as described in To Create an Update Package from an XML File, page 797.

Editing an Update Package

You can make changes to an update package after it has been created. You
can remove objects from the package, change the conflict resolution rules
for objects in the package, and set the schema update and ACL options for
the package.

You cannot add objects to an update package once it has been created.
Instead, you can create a new package containing those objects.


To Edit an Update Package

1. In Object Manager, log in to a project or project source.

2. From the Tools menu, select Import Package or Import Configuration Package.

3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.

4. Click Edit. The Editing pane opens at the bottom of the dialog box.

5. To change the conflict resolution action for an object, double-click in the Definition Rule column for that object and, from the drop-down list, select the new conflict resolution rule.

When you edit a package, the Create New action is changed to the
Replace action.

6. To rename an object in the destination project, double-click in the Rename column for that object and type the new name for the object.

7. To remove an object from the update package, select the object and
click Remove.

8. You can also change the schema update options (for a project update
package only) or the access control list conflict resolution options. For


information about the schema update options, see Update Packages and Updating the Project Schema, page 807. For information about the ACL conflict resolution options, see Resolve Conflicts when Copying Objects, page 777.

9. To create a text file containing a list of the objects in the update package and their conflict resolution actions, click Export.

10. When you are done making changes to the update package, click Save
As. The default new name for the update package is the original name
of the package with a date and time stamp appended. Click Save.

Importing an Update Package


An update package is saved in a file, and can be freely copied and moved
between machines.

If you are importing a package that is stored on a machine other than the
Intelligence Server machine, make sure the package can be accessed by the
Intelligence Server machine.

Before importing any project security update packages, you must import the
associated configuration update package.

Importing a package causes the project metadata to become locked for the
duration of the import. Other users cannot make any changes to the project
until it becomes unlocked. For detailed information about the effects of
locking a project, see Lock Projects, page 760.

You can import an update package into a project or project source in the
following ways:

l From within Object Manager: You can use the Object Manager graphical
interface to import an update package.

l From the command line: MicroStrategy provides a command line utility for importing update packages. You can use a scheduler such as Windows

Scheduler to import the package at a later time, such as when the load on
the destination project is light.

The command line Import Package utility only supports Standard and
Windows Authentication. If your project source uses a different form of
authentication, you cannot use the Import Package utility to import an
update package.

You can also create an XML file to import an update package from the
command line, similar to using an XML file to create an update package
as described in Creating an Update Package from the Command Line,
page 794.

l Using a Command Manager script: You can also execute a Command Manager script to import an update package without using Object Manager. Command Manager is an administrative tool that enables you to perform various administrative and project development tasks by using text commands that can be saved as scripts. For more information about Command Manager, see Chapter 15, Automating Administrative Tasks with Command Manager.

To Import an Update Package from Object Manager

1. In Object Manager, log in to the destination project or project source.

2. From the Tools menu, select Import Package (for a project update
package) or Import Configuration Package (for a configuration
update package).


3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.

4. In the Undo Package Options, select whether to import this update package, generate an undo package for this update package, or both. For more information about undo packages, see Rolling Back Changes: Undo Packages, page 808.

5. To create a log file describing the changes that would be made if the
update package were imported, instead of importing the update
package, select the Generate Log Only checkbox.

6. Click Proceed.

Any objects that exist in different folders in the update package and
the destination project are handled according to the Synchronize
folder locations in source and destination for migrated objects
preference in the Migration category in the Object Manager
Preferences dialog box.

7. If the package made any changes to the project schema, you may need
to update the schema for the changes to take effect. To update the
project schema, from the Object Manager Project menu, select Update
Schema.

To Import an Update Package from the Command Line

Call the Import Package executable, MAImportPackage.exe. By default, this file is located in C:\Program Files (x86)\Common Files\MicroStrategy. Use the following parameters:

Only Standard Authentication and Windows Authentication are supported by the Import Package utility.


Parameter                  Effect
-n ProjectSourceName       Import package into this project source (required)
-u UserName -p Password    Log into the project source with this MicroStrategy username and password, using standard authentication (required unless you are using Windows authentication)
-f PackageLocation         Import this package into the specified project source (required). The location must be specified relative to the Intelligence Server machine, not relative to the machine running the Import Package utility.
-j ProjectName             Import the package into this project (required for project update packages)
-l LogLocation             Log information about the import process to this file. The location of the log file must be specified relative to the machine running the Import Package utility.
-forcelocking              Force a configuration or project lock prior to importing the package. This lock is released after the package is imported. For more information about project and configuration locking, see Lock Projects, page 760.

A full list of parameters can be accessed from a command prompt by entering importpackage.exe -h.
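For example, a background import using standard authentication might be invoked as follows; the project source, project, and file names are placeholders. To run the import during off-hours, the command can be saved in a batch file and scheduled with the standard Windows schtasks utility:

```
MAImportPackage.exe -n "Production Source" -u Administrator -p MyPassword -f "C:\Packages\WeeklyMetrics.mmp" -j "Production Project" -l "C:\Logs\import.log" -forcelocking

rem Save the line above in a batch file, then schedule it weekly:
schtasks /Create /TN "Import Weekly Metrics" /SC WEEKLY /D SUN /ST 02:00 /TR "C:\Packages\import_weekly.cmd"
```

Note that -l takes a path relative to the machine running the utility, while -f is relative to the Intelligence Server machine, as described in the parameter table above.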

To Import an Update Package Using an XML File

Create the XML File

1. In Object Manager, log in to the destination project or project source.


2. From the Tools menu, select Import Package (for a project update
package) or Import Configuration Package (for a configuration
update package).

3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.

4. Select the Save import package XML file checkbox.

5. Click Proceed. You are prompted to type the name and location of the
XML file. By default, this is the same as the name and location of the
package file, with an .xml extension instead of .mmp. Click Save.

6. When you are ready to import the update package, call the Project
Merge executable, projectmerge.exe, with the following parameters:

Parameter          Effect
-f Filename.xml    Use this XML file to import an update package (required)
-sp Password       Log into the project source with this password (the login ID to be used is stored in the XML file)
-smp Password      Log into the project with this password (the login ID to be used is stored in the XML file)
-sup               Suppress status updates (useful for importing an update package in the background, so that the status window does not appear)

To Import an Update Package Using Command Manager

Call a Command Manager script that contains the following command:

IMPORT PACKAGE "Filename.mmp" [FOR PROJECT "ProjectName"];

where "Filename" is the name and location of the update package, and
"ProjectName" is the name of the project that the update is to be applied
to.


If the package made any changes to the project schema, you need to update
the schema for the changes to take effect. The syntax for updating the
schema in a Command Manager script is

UPDATE SCHEMA [REFRESHSCHEMA] [RECALTABLEKEYS] [RECALTABLELOGICAL] [RECALOBJECTCACHE] FOR PROJECT "ProjectName";
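Combining the two statements, a minimal Command Manager script that imports a package and then refreshes the schema might read as follows. The file path, project name, and the choice of the REFRESHSCHEMA option are placeholders for this sketch:

```
IMPORT PACKAGE "C:\Packages\WeeklyMetrics.mmp" FOR PROJECT "Production Project";
UPDATE SCHEMA REFRESHSCHEMA FOR PROJECT "Production Project";
```

Saving such a script lets you rerun the import and schema update as a single scheduled task.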

Update Packages and Updating the Project Schema


If a project update package contains new or replacement schema objects,
then when the package is imported the user must update the in-memory
definitions of these objects. This is done by updating the project schema.

When you create an update package, you can configure it to automatically perform the following schema update functions:

l Recalculate table keys and fact entry levels, if you changed the key
structure of a table or if you changed the level at which a fact is stored.

l Recalculate table logical sizes, to override any modifications that you have made to logical table sizes. (Logical table sizes affect how the MicroStrategy SQL Engine determines which tables to use in a query.)

The update package cannot recalculate the object client cache size, and it
cannot update the schema logical information. These tasks must be
performed manually. So, for example, if you import an attribute that has a
new attribute form, you must manually update the project schema before any
objects in the project can use that attribute form.

You can update the project schema in the following ways:

l In Object Manager, select the project and, from the Project menu, select
Update Schema.

l In Developer, log into the project and, from the Schema menu, select Update Schema.

l Call a Command Manager script with the following command:

UPDATE SCHEMA [REFRESHSCHEMA] [RECALTABLEKEYS] [RECALTABLELOGICAL] [RECALOBJECTCACHE] FOR PROJECT "projectname";

Updating the schema can also be accomplished by unloading and reloading the project. For information on loading and unloading projects, see Setting the Status of a Project, page 48.

For more detailed information about updating the project schema, see the
Optimizing and Maintaining your Project section in the Project Design Help.

Rolling Back Changes: Undo Packages


You can use undo packages to roll back the changes made by an update
package. An undo package is an automatically created update package
consisting of all the objects in an update package, as they are currently
configured in the destination project. For example, if you create an undo
package for an update package containing a new version of three metrics,
the undo package contains the version of those three metrics that currently
exists in the destination project.

When you import an update package, you have the option of creating an
undo package at the same time as the import. Alternately, you can choose to
create an undo package without importing the associated update package.

You import an undo package in the same way as you import any update
package. When you import an undo package, the Version ID and
Modification Date of all objects in the undo package are restored to their
values before the original update package was imported.

The Intelligence Server change journal records the importing of both the
original update package and the undo package. Importing an undo package
does not remove the change journal record of the original update package.


For more information about the change journal, see Monitor System Activity:
Change Journaling, page 828.

Merge Projects to Synchronize Objects


You can use MicroStrategy Project Merge to synchronize a large number of
objects between projects. Project Merge streamlines the task of migrating
objects from one project to another. While you can use Object Manager to
copy objects individually, Project Merge can be used as a bulk copy tool. For
differences between Object Manager and Project Merge, see Compare
Project Merge to Object Manager, page 759.

The rules that you use to resolve conflicts between the two projects in
Project Merge can be saved to an XML file and reused. You can then
execute Project Merge repeatedly using this rule file. This allows you to
schedule a project merge on a recurring basis. For more details about
scheduling project merges, see Merge Projects with the Project Merge
Wizard, page 811.

Project Merge migrates an entire project. All objects are copied to the
destination project. Any objects that are present in the source project but not
the destination project are created in the destination project.

l If you want to merge two projects, MicroStrategy recommends that the projects have related schemas. This means that either one project must be a duplicate of the other, or both projects must be duplicates of a third project. For information about duplicating projects, including instructions, see Duplicate a Project, page 753.

l To merge two projects that do not have related schemas, the projects
must either have been created with MicroStrategy 9.0.1 or later, or have
been updated to version 9.0.1 or later using the Perform system object
ID unification option. For information about this upgrade, see the
Upgrade Help.


l Project Merge does not transfer user and group permissions on objects.
To migrate permissions from one project to another, use a project security
update package. For more information, see Copy Objects in a Batch:
Update Packages, page 786.

Projects may need to be merged at various points during their life cycle.
These points may include:

l Migrating objects through development, testing, and production projects
as the objects become ready for use.

l Receiving a new version of a project from a project developer.

In either case, you must move objects from development to testing, and then
to the production projects that your users use every day.

What Happens When You Merge Projects


Project Merge requires a source project, a destination project, and a set of
rules to resolve object conflicts between the two projects. This set of rules is
defined in the Project Merge Wizard or loaded from an XML file.

In the MicroStrategy system, every object has an ID (or GUID) and a
version. (To see the ID and version of an object, right-click the object and
select Properties.) Project Merge checks the destination project for the
existence of every object in the source project, by ID. The resulting
possibilities are described below:

l If an object ID does not exist in the destination project, the object is copied
from the source project to the destination project.

l If an object exists in the destination project and has the same object ID
and version in both projects, the objects are identical and a copy is not
performed.

l If an object exists in the destination project and has the same object ID in
both projects but a different version, there is a conflict that must be
resolved by following the set of rules specified in the Project Merge Wizard
and stored in an XML file. The possible conflict resolutions are discussed in
Resolve Conflicts when Merging Projects, page 817.
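These three outcomes can be captured in a small decision function. This is a conceptual sketch only — the function, field names, and return labels are illustrative and do not reflect Project Merge's internal implementation:

```python
def merge_decision(source_obj, destination_versions):
    """Classify one source object against the destination project.

    source_obj: dict with "id" and "version" keys.
    destination_versions: mapping of object ID -> version in the destination.
    """
    dest_version = destination_versions.get(source_obj["id"])
    if dest_version is None:
        return "copy"              # ID absent: object is created in destination
    if dest_version == source_obj["version"]:
        return "skip"              # identical object: no copy is performed
    return "resolve-conflict"      # same ID, different version: apply the rules

# Object A is new, B is identical in both projects, C has diverged.
destination = {"B": "v1", "C": "v1"}
decisions = {obj["id"]: merge_decision(obj, destination) for obj in (
    {"id": "A", "version": "v1"},
    {"id": "B", "version": "v1"},
    {"id": "C", "version": "v2"},
)}
```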

Merging projects with the Project Merge Wizard does not update the
modification date of the project, as shown in the Project Configuration
Editor. This is because, when copying objects between projects, only the
objects themselves change. The definition of the project itself is not
modified by Project Merge.

Merge Projects with the Project Merge Wizard


The Project Merge Wizard allows you to specify rules and settings for a
project merge. For details about all settings available when running the
wizard, see the Help. For information about the rules for resolving conflicts,
see Resolve Conflicts when Merging Projects, page 817.

After going through the steps in the wizard, you can either execute the
merge right away or save the rules and settings in a Project Merge XML file.
You can use this file to run Project Merge from the Windows command
prompt (see Running Project Merge from the Command Line, page 813) or to
schedule a merge (see Scheduling a Project Merge, page 816).

Before you use Project Merge in a server (three-tier) environment, check
the project source time out setting. In Developer, right-click on the project
source and select Modify Project Source to open the Project Source
Manager. On the Connection tab, either disable the Connection times out
after setting by clearing its check box, or else enter a sufficient number of
minutes for when the connection should time out, considering how long the
merge processes may take based on the size of the projects. If you are
unsure about a setting and have noticed other processes taking a long time,
it is recommended you disable the time out setting.

The following scenario runs through the Project Merge Wizard several times,
each time fine-tuning the rules, and the final time actually performing the
merge.


To Safely Perform a Project Merge

Both the source and the destination project must be loaded for the project
merge to complete. For more information on loading projects, see Setting
the Status of a Project, page 48.

1. Go to Start > All Programs > MicroStrategy Tools > Project Merge
Wizard.

2. Follow the steps in the wizard to set your options and conflict resolution
rules.

For details about all settings available when running the wizard, see
the Help (press F1 from within the Project Merge Wizard). For
information about the rules for resolving conflicts, see Resolve
Conflicts when Merging Projects, page 817.

3. Near the end of the wizard, when you are prompted to perform the
merge or generate a log file only, select Generate log file only. Also,
choose to Save Project Merge XML. At the end of the wizard, click
Finish. Because you selected to generate a log file only, this serves as
a trial merge.

4. After the trial merge is finished, you can read through the log files to
see what would have been copied (or not copied) if the merge had
actually been performed.

5. Based on what you learn from the log files, you may want to change
some of the conflict resolution rules you set when going through the
wizard. To do this, run the wizard again and, at the beginning of the
wizard, choose to Load Project Merge XML that you created in the
previous run. As you proceed through the wizard, you can fine-tune the
settings you specified earlier. At the end of the wizard, choose to
Generate the log file only (thereby performing another trial) and
choose Save Project Merge XML. Repeat this step as many times as
necessary until the log file indicates that objects are copied or skipped
as you desire.

6. When you are satisfied that no more rule changes are needed, run the
wizard a final time. At the beginning of the wizard, load the Project
Merge XML as you did before. At the end of the wizard, when prompted
to perform the merge or generate a log file only, select Perform merge
and generate log file.

Running Project Merge from the Command Line


A Project Merge can be launched from the Windows command line. You can
also run several sessions of the Project Merge Wizard with the same source
project, using the command prompt. For information on running multiple
sessions, see Multiple Project Merges from the Same Project, page 815.

The settings for this routine must be saved in an XML file which can easily
be created using the Project Merge Wizard. Once created, the XML file
serves as the input parameter to the command.

The simplified syntax for the projectmerge.exe command is shown below.

projectmerge -f[ ] -sp[ ] -dp[ ] -smp[ ] -dmp[ ] -sup -MD -SU -lto -h

All command line parameters are described in the table below.

Parameter   Description and use

-f[ ]       Specifies the path and file name (without spaces) of the XML file
            to use. (You must have already created the file using the Project
            Merge Wizard.) Example: -fc:\files\merge.xml

-sp[ ]      Password for the SOURCE project source. (The login ID to be used
            is stored in the XML file.) Example: -sphello

-dp[ ]      Password for the DESTINATION project source. (The login ID to be
            used is stored in the XML file.) Example: -dphello

-smp[ ]     Password for the SOURCE metadata. (The login ID to be used is
            stored in the XML file.) Example: -smphello

-dmp[ ]     Password for the DESTINATION metadata. (The login ID to be used
            is stored in the XML file.) Example: -dmphello

-sup        Suppresses the progress window, so that the window displaying the
            status of the merge does not appear. This is useful for running a
            project merge in the background.

-MD         Forces a metadata update of the DESTINATION metadata if it is
            older than the SOURCE metadata. Project Merge does not execute
            unless the DESTINATION metadata is the same version as, or more
            recent than, the SOURCE metadata.

-SU         Updates the schema of the DESTINATION project after the Project
            Merge is completed. This update is required when you make any
            changes to schema objects (facts, attributes, or hierarchies). Do
            not use this switch if the Project Merge configuration XML
            contains an instruction to update the schema.

-lto        Takes ownership of any metadata locks that exist on the source or
            destination projects. For more information about metadata locking,
            see Lock Projects, page 760.

-h          Displays help and explanations for all of the above parameters.

A sample command using this syntax is provided below. The command
assumes that "hello" is the password for all the project source and
database connections. The login IDs used with these passwords are stored
in the XML file created by the Project Merge Wizard.

projectmerge -fc:\temp\merge.xml -sphello -dphello -smphello -dmphello -lto -MD -SU

If the XML file contains a space in the name or the path, you must enclose
the name in double quotes, such as:

projectmerge -f "c:\program files (x86)\xml\merge.xml" -sphello -dphello -smphello -dmphello -MD -SU

Multiple Project Merges from the Same Project

The Project Merge Wizard can perform multiple simultaneous merges from
the same project source. This can be useful when you want to propagate a
change to several projects simultaneously.

During a multiple merge, the Project Merge Wizard is prevented from
locking the projects. This is so that multiple sessions of the wizard can
access the source projects. You will need to manually lock the source
project before beginning the merge. You will also need to manually lock the
destination projects at the configuration level before beginning the merge.
Failing to do this may result in errors in project creation due to objects being
changed in the middle of a merge. For information on locking and unlocking
projects, see Lock Projects, page 760.

To do this, you must modify the Project Merge XML file, and then make a
copy of it for each session that you want to run.

To Execute Multiple Simultaneous Merges from One Project

1. In a text editor, open the Project Merge Wizard XML file.

2. In the OMOnOffSettings section of the file, add the following node:


<Option><ID>OMOnOffSettings</ID><SkipProjectMergeSourceLocking/><SkipProjectMergeDestConfigLocking/></Option>.

3. Make one copy of the XML file for each session of the Project Merge
Wizard you want to run.


4. In each XML file, make the following changes:

l Correct the name of the destination project.

l Ensure that each file uses a different Project Merge log file name.

5. Manually lock the source project.

6. Manually lock the destination projects at the configuration level.

7. For each XML file, run one instance of the Project Merge Wizard from
the command line.

Scheduling a Project Merge


To schedule a delayed or recurring Project Merge, use the AT command,
which is part of the Microsoft Windows operating system. For instructions on
how to use the AT command, refer to the Microsoft Windows help. The
sample AT command below schedules Project Merge to run at 6:00 PM
(18:00) every Friday (/every:F).

at 18:00 /every:F projectmerge -fc:\temp\merge.xml -sphello -dphello -smphello -dmphello -MD -SU

For a list of the syntax options for this command, see Running Project Merge
from the Command Line, page 813.

To Schedule a Project Merge Using the Windows Command Prompt

1. From the Microsoft Windows machine where Project Merge is installed,
from the Start menu, select Programs, then choose Command
Prompt.

2. Change the drive to the one on which the Project Merge utility is
installed. The default installation location is the C: drive (the prompt
appears as C:\>).


3. Type an AT command that calls the projectmerge command. For a list of
the syntax options for this command, see Running Project Merge from the
Command Line, page 813.

Resolve Conflicts when Merging Projects


Conflicts occur when a destination object's version differs from the source
object's version. This difference usually means that the object has been
modified in one or both of the projects. These conflicts are resolved by
following a set of rules you define as you step through the Project Merge
Wizard.

When you define the rules for Project Merge to use, you first set the default
conflict resolution action for each category of objects (schema, application,
and configuration). (For a list of objects included in each category, see Copy
Objects.) Then you can specify conflict resolution rules at the object type
level (attributes, facts, reports, consolidations, events, schedules, and so
on). Object type rules override object category rules. Next you can specify
rules for specific folders and their contents, which override the object type
and object category rules. Finally you can specify rules for specific objects,
which, in turn, override object type rules, object category rules, and folder
rules.

For example, the Use Newer action replaces the destination object with the
source object if the source object has been modified more recently than the
destination object. If you specified the Use newer action for all metrics, but
the Sales metric has been changed recently and is not yet ready for the
production system, you can specify Use existing (use the object in the
destination project) for that metric only and it will not be replaced.
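The rule precedence just described — object rules over folder rules, folder rules over object type rules, and type rules over category rules — can be sketched as a lookup from most to least specific. The rule structure, object IDs, and folder names below are illustrative, not the actual format of the wizard's XML file:

```python
def resolve_action(obj, rules):
    """Return the conflict action for an object, honoring rule precedence.

    rules: dict with optional "objects", "folders", "types", and "categories"
    mappings; a match at a more specific level overrides the levels below it.
    """
    lookups = (
        ("objects", obj["id"]),
        ("folders", obj["folder"]),
        ("types", obj["type"]),
        ("categories", obj["category"]),
    )
    for level, key in lookups:
        action = rules.get(level, {}).get(key)
        if action is not None:
            return action
    return "use existing"  # illustrative fallback when no rule matches

# "Use newer" for all metrics, but keep the destination's Sales metric as-is.
rules = {
    "types": {"metric": "use newer"},
    "objects": {"SALES-METRIC-ID": "use existing"},  # hypothetical object ID
}
sales = {"id": "SALES-METRIC-ID", "folder": "/Public Objects/Metrics",
         "type": "metric", "category": "application"}
cost = {"id": "COST-METRIC-ID", "folder": "/Public Objects/Metrics",
        "type": "metric", "category": "application"}
```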

Project Merge Conflict Resolution Rules


If the source object has a different version than the destination object, that
is, the objects exist differently, you must determine what action should
occur. The various actions that can be taken to resolve conflicts are
explained in the table below.


Action        Effect

Use existing  No change is made to the destination object. The source object
              is not copied.

Replace       The destination object is replaced with the source object.
              Non-empty folders in the destination location will never have
              the same version ID and modification time as the source, because
              the folder is copied first and the objects are added to it, thus
              changing the version ID and modification times during the copy
              process.

Keep both     No change is made to the destination object. The source object
              is duplicated in the destination location.

Use newer     If the source object's modification time is more recent than the
              destination object's, the Replace action is used. Otherwise, the
              Use existing action is used.

Use older     If the source object's modification time is more recent than the
              destination object's, the Use existing action is used.
              Otherwise, the Replace action is used.
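The two time-based actions in the table reduce to a comparison of modification timestamps. A minimal sketch of that mapping (the function name and timestamp values are illustrative):

```python
def apply_action(action, source_mtime, dest_mtime):
    """Map a conflict-resolution action to the behavior described above."""
    if action == "use newer":
        return "replace" if source_mtime > dest_mtime else "use existing"
    if action == "use older":
        return "use existing" if source_mtime > dest_mtime else "replace"
    return action  # "replace", "use existing", and "keep both" act directly

# Source object modified after the destination object (200 > 100):
newer_result = apply_action("use newer", 200, 100)  # replaces the destination
older_result = apply_action("use older", 200, 100)  # keeps the destination
```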

Compare and Track Projects


Often during the project life cycle, you do not know exactly which objects
need to be moved from one project to another. This is because there are
many developers working on a project and it is difficult for a single person to
know all of the work that has been done. The migration process becomes
much easier if you first compare objects in the source and destination
projects.

You can use the MicroStrategy Project Comparison Wizard to compare
objects in related projects. This wizard tells you which objects are different
between the two projects, and which objects exist in one project but not in
the other. From this list you can decide what objects to move between
projects, using Object Manager. For instructions on moving objects with
Object Manager, see Copy Objects Between Projects: Object Manager, page 762.

You can track changes to your projects with the MicroStrategy Search
feature, or retrieve a list of all unused objects in a project with the Find
Unreferenced Objects feature of Object Manager.

This section covers the following topics:

Compare Objects Between Two Projects


The Project Comparison Wizard compares objects in a source project and a
destination project.

For the source project, you specify whether to compare objects from the
entire project, or just from a single folder and all its subfolders. You also
specify what types of objects (such as reports, attributes, or metrics) to
include in the comparison.

Every object in a MicroStrategy project has a unique ID. Project Comparison
looks at each object ID in the source project, and compares it to the object in
the destination project with the same ID. For each object ID, Project
Comparison indicates whether the object is:

l Identical in both projects

l Identical in both projects except for the folder path

l Only present in the source project

l Different between projects, and newer in the source or destination project

You can print this result list, or save it as a text file or an Excel file.
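The four comparison outcomes amount to checking, for each object ID, the version and then the folder path. A conceptual sketch (the dict layout is illustrative, not the wizard's data model):

```python
def compare_object(src, dst):
    """Classify one object ID across the two projects.

    src/dst: dicts with "version" and "path" keys, or None when the ID is
    absent from that project.
    """
    if dst is None:
        return "only in source project"
    if src["version"] == dst["version"]:
        if src["path"] == dst["path"]:
            return "identical"
        return "identical except folder path"
    return "different versions"

# Same version, but the object lives in a different folder in each project.
moved = compare_object({"version": "v1", "path": "/Reports/Q1"},
                       {"version": "v1", "path": "/Archive/Q1"})
```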

Since the Project Comparison Wizard is a part of Object Manager, you can
also select objects from the result set to immediately migrate from the
source project to the destination project. For more information about
migrating objects using Object Manager, see Copy Objects Between
Projects: Object Manager, page 762.


Using the Project Comparison Wizard


The following high-level procedure provides an overview of what the Project
Comparison Wizard does.

To Compare Two Projects

l To compare two projects with the Project Comparison Wizard, those
projects must have related schemas. This means that either one project
must be a duplicate of the other, or both projects must be duplicates of a
third project. For information about duplicating projects, including
instructions, see Duplicate a Project, page 753.

l The Project Comparison Wizard is a part of Object Manager, and thus
requires the Use Object Manager privilege to run. For an overview of
Object Manager, see Copy Objects Between Projects: Object Manager,
page 762.

1. In Windows, go to Start > All Programs > MicroStrategy Products >
Object Manager.

2. Open a project source in Object Manager.

3. From the Project menu, select Compare Projects.

4. Select the source and destination projects.

5. Specify whether to compare all objects or just objects in a specific
folder, and what types of objects to compare.

6. Review your choices at the summary screen and click Finish.

7. Select Save as Text File or Save as Excel File.

8. To migrate objects from the source project to the destination project
using Object Manager, select those objects in the list and click
Proceed. For more information about Object Manager, see Copy
Objects Between Projects: Object Manager, page 762.


Track Your Projects with the Search Export Feature


Exporting the results of a search object can be a useful way to keep track of
changes to a project. The Search Export feature enables you to perform a
search for either a specific object in a project or for a group of objects that
meet certain criteria. After the search is performed, you can save your
search definition and search results to a text file, and save the search object
itself for later reuse.

For example, you can create a search object in the development project that
returns all objects that have been changed after a certain date. This lets you
know what objects have been updated and need to be migrated to the test
project. For more information about development and test projects, see The
Project Life Cycle, page 746.

The search export file contains the following information:

l The user who was logged in when the search was performed.

l The search type, date and time, and project name.

l Any search criteria entered into the tabs of the Search for Objects dialog
box.

l Any miscellaneous settings in Developer that affected the search (such as
whether hidden and managed objects were included in the search).

l A list of all the objects returned by the search, including any folders. The
list includes object names and paths (object locations in the Developer
interface).

To Search for Objects and Save the Results in a Text File

1. In Developer, from the Tools menu, select Search for Objects.

2. Perform your search.

3. After your search is complete, from the Tools menu in the Search for
Objects dialog box, select Export to Text. The text file is saved by
default to C:\Program Files (x86)\MicroStrategy\Desktop\
SearchResults_<date and timestamp>.txt, where <date and
timestamp> is the day and time when the search was saved. For
example, the text file named SearchResult_022607152554.txt
was saved on February 26, 2007, at 15:25:54, or 3:25 PM.
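The timestamp embedded in the file name follows a month-day-year, hour-minute-second pattern, as the example above shows. Assuming that pattern, the save time can be recovered with Python's datetime module (the helper name is illustrative):

```python
from datetime import datetime

def parse_search_result_name(filename):
    """Extract the save time from a SearchResult_<MMDDYYHHMMSS>.txt name."""
    stamp = filename.split("_", 1)[1].split(".")[0]  # e.g. "022607152554"
    return datetime.strptime(stamp, "%m%d%y%H%M%S")

saved = parse_search_result_name("SearchResult_022607152554.txt")
# saved corresponds to February 26, 2007 at 15:25:54
```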

List Unused Objects in a Project


In Object Manager, you can retrieve a list of all the objects in a project that
are not used by any other objects. For example, you can find which
attributes or metrics are no longer used in any reports, so that you can
delete those objects.

Finding unused objects is a part of Object Manager, and thus requires the
Use Object Manager privilege to run. For an overview of Object Manager,
see Copy Objects Between Projects: Object Manager, page 762.

1. From the Windows Start menu, point to All Programs, then
MicroStrategy Products, and then select Object Manager.

2. Open a project source in Object Manager.

3. From the Tools menu, select Find Unreferenced Objects.

4. In the Look In field, enter the folder you want to start your search in.

5. Make sure the Include Subfolders check box is selected.

6. Click Find Now.

Delete Unused Schema Objects: Managed Objects


MicroStrategy projects contain schemas and related schema objects,
including attributes, tables, hierarchies, and so on. For an introduction to
schema objects, see the Project Design Help.


Certain MicroStrategy features automatically create new schema objects,
referred to as managed objects, which are not directly related to the project
schema. The features that create their own managed objects are:

l Freeform SQL and Query Builder. For information on Freeform SQL and
Query Builder, see the Advanced Reporting Help.

l MDX cube sources such as SAP BW, Hyperion Essbase, Microsoft
Analysis Services, and IBM Cognos TM1. For information on MDX cube
sources, see the MDX Cube Reporting Help.

l Import Data, which lets you use MicroStrategy Web to import data from
different data sources, such as an Excel file, a table in a database, or the
results of a SQL query, with minimum project design requirements.

Managed objects are stored in a special system folder, and can be difficult
to delete individually due to how these objects are created and stored. If you
use one of the features listed above, and then decide to remove some or all
of that feature's related reports and MDX cubes from the project, there may
be unused managed objects included in your project that can be deleted.

This section covers the following topics:

l Delete Managed Objects One-By-One, page 823

l Delete All Unused Managed Objects, page 824

Delete Managed Objects One-By-One


When you delete managed objects one-by-one, you individually select which
managed objects you want to delete and which you want to keep. You can
perform this clean-up for any of the Freeform SQL, Query Builder, or MDX
cube source database instances included for your project.

For example, you decide to delete a single Freeform SQL report that
automatically created a new managed object named Store. When you delete
the report, the managed object Store is not automatically deleted. You do
not plan to use the object again; however, you do plan to create more
Freeform SQL reports and want to keep the database instance included in
the project. Instead of deleting the entire Freeform SQL schema, you can
delete only the managed object Store.

To Delete Managed Objects One-By-One

1. In Developer, delete any Freeform SQL, Query Builder, or MDX cube
reports in the project that depend on the managed objects you want to
delete.

If you are removing MDX cube managed objects, you must also remove
any MDX cubes that these managed objects depend on.

2. Right-click the project and select Search for Objects.

3. From the Tools menu, go to Options.

4. Select the Display managed objects and Display managed objects
only check boxes.

5. Click OK.

6. Enter your search criteria and select Find Now.

7. Manually delete managed objects by right-clicking their name in the
search result and selecting Delete.

Delete All Unused Managed Objects


Managed objects can become unused in a project when you stop using the
feature that created the managed objects. You can delete all unused
managed objects to clean up your project.

For example, you can create a separate database instance for your Freeform
SQL reports in your project. Later on, you may decide to no longer use
Freeform SQL, or any of the reports created with the Freeform SQL feature.
After you delete all the Freeform SQL reports, you can remove the Freeform
SQL database instance from the project. Once you remove the database
instance from the project, any Freeform SQL managed objects that
depended solely on that database instance can be deleted.


You can implement the same process when removing database instances for
Query Builder, SAP BW, Essbase, and Analysis Services.

To Delete All Unused Managed Objects from a Project

1. Remove all reports created with Freeform SQL, Query Builder, or MDX
cubes.

If you are removing MDX cube managed objects, you must also remove
all imported MDX cubes.

2. In Developer, right-click the project and select Project Configuration.

3. Expand the Database instances category.

4. Select either SQL data warehouses or MDX data warehouses,
depending on the database instance you want to remove.

Freeform SQL and Query Builder use relational database instances,
while SAP BW, Essbase, and Analysis Services use MDX cube
database instances. For more information on the difference between
the two, see the Installation and Configuration Help.

5. Clear the check box for the database instance you want to remove from
the project. You can only remove a database instance from a project if
the database instance has no dependent objects in the project.

6. Click OK to accept the changes and close the Project Configuration
Editor.

This procedure removes some preliminary object dependencies.


Attribute and metric managed objects are not automatically deleted by
this procedure, because you can reuse the managed attributes and
metrics at a later time. If you do not plan to use the attribute and metric
managed objects and want to delete them permanently from your
project, continue through the rest of this procedure.


To Delete Unused Attribute and Metric Managed Objects

In Developer, from the Administration menu, select Projects > Delete
unused managed objects.


MicroStrategy System Monitors


You can monitor various aspects of your MicroStrategy system from within
Developer. The Administration category for a project source contains
several system monitors for that project source. These monitors are listed in
the table below, and are described in detail in the relevant section of this
guide.

For information about monitoring...              See...

Projects loaded on Intelligence Server, or on    Managing and Monitoring
all nodes of the cluster                         Projects, page 44

Projects loaded on specific nodes of the         Manage your Projects Across
cluster                                          Nodes of a Cluster, page 1169

Jobs that are currently executing                Monitoring Currently Executing
                                                 Jobs, page 76

Users that are currently connected to            Monitoring Users' Connections
Intelligence Server                              to Projects, page 87

Active and cached database connections           Monitoring Database Instance
                                                 Connections, page 24

Report and document caches                       Monitoring Result Caches,
                                                 page 1217

History List messages                            Managing History Lists,
                                                 page 1254

Intelligent Cubes, whether they are loaded on    Managing Intelligent Cubes:
Intelligence Server                              Intelligent Cube Monitor,
                                                 page 1287

Quick search indices and their status            Monitor Quick Search Indices

Before you can view a system monitor, you must have the appropriate privilege
to access that monitor. For example, to view the Job Monitor, you must have
the Monitor Jobs privilege. For more information about privileges, see
Controlling Access to Functionality: Privileges, page 101.


In addition, you must have Monitoring permission for the server definition that
contains that monitor. You can view and modify the ACL for the server
definition by right-clicking the Administration icon, selecting Properties, and
then selecting the Security tab. For more information about permissions and
ACLs, see Controlling Access to Objects: Permissions, page 89.

To View a System Monitor

1. In Developer, log in to the project source that you want to monitor. You
must log in as a user with the appropriate administrative privilege.

2. Expand the Administration category.

3. To monitor projects or clusters, expand the System Administration
category and select either Project or Cluster Nodes.

4. To view additional system monitors, expand the System Monitors
category and select the desired monitor. For a list of the different
monitors available, and where you can find more information about
each monitor, see the table above.

Monitor System Activity: Change Journaling


Change journaling is the process of logging information about changes to
objects in a project. Change journaling tracks the changes to each object in
the system. This makes it easier for administrators to quickly determine
when and by whom certain changes were made. For example, reports using
a certain metric executed correctly in a test two weeks ago, but no longer
execute correctly in this morning's test. The administrator can search the
change journal to determine who has made changes to that metric within the
last two weeks.
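The metric scenario above amounts to filtering journal entries by object name and a time window. A sketch over hypothetical entry records (the change journal's real storage format is not shown here):

```python
from datetime import datetime, timedelta

def changes_to(entries, object_name, since):
    """Return (user, timestamp) pairs for changes to one object after `since`."""
    return [(e["user"], e["timestamp"]) for e in entries
            if e["object"] == object_name and e["timestamp"] >= since]

now = datetime(2024, 9, 20)
entries = [  # hypothetical journal rows
    {"object": "Revenue", "user": "jsmith",
     "timestamp": now - timedelta(days=3)},
    {"object": "Revenue", "user": "adavis",
     "timestamp": now - timedelta(days=30)},
    {"object": "Cost", "user": "jsmith",
     "timestamp": now - timedelta(days=1)},
]
# Who changed the Revenue metric within the last two weeks?
recent = changes_to(entries, "Revenue", since=now - timedelta(days=14))
```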

The logged information includes items such as the user who made the
change, the date and time of the change, and the type of change (such as
saving, copying, or deleting an object). With change journaling, you can
keep track of all object changes, from simple user actions such as saving or

Copyright © 2024 All Rights Reserved 828


Syst em Ad m in ist r at io n Gu id e

moving objects to project-wide changes such as project duplication or


project merging.

Certain business regulations, such as Sarbanes-Oxley in the United States,
require detailed records of changes made to a BI system. Change journaling
aids in compliance with these regulations.

Change journaling is enabled by default on all projects in your production
environment.

View the Change Journal Entries


When an object is changed, information about the change is entered in the
change journal. To view the change journal for all projects in a project
source, in Developer, expand Administration, then expand System
Monitors, and then select Change Journal Transactions. The change
journal entries are listed in the main window of Developer.

You must have the Audit Change Journal privilege to view the change
journal.

To view the detailed information for a change journal entry, double-click that
entry. Each entry contains the following information:

Entry                  Details

Object name            The name of the object that is changed.

Object type            The type of object changed. For example, Metric, User,
                       or Server Definition.

User name              The name of the MicroStrategy user that made the change.

Transaction timestamp  The date and time of the change, based on the time on
                       the Intelligence Server machine.

Transaction type       The type of change and the target of the change. For
                       example, Delete Objects, Save Objects, or Enable
                       Logging.

Transaction source     The application that made the change. For example,
                       Developer, Command Manager, MicroStrategy Web, or
                       Scheduler.

Project name           The name of the project that contains the object that
                       was changed. If the object is a configuration object,
                       the project name is listed as <Configuration>.

Comments               Any comments entered in the Comments dialog box at the
                       time of the change.

Object ID              The object's GUID, a unique MicroStrategy system
                       identifier.

Machine name           The name of the machine that the object was changed on.

Change type            The type of change that was made. For example, Create,
                       Change, or Delete.

Transaction ID         A unique 32-digit hexadecimal number that identifies
                       this change.

Session ID             A unique 32-digit hexadecimal number that identifies
                       the user session in which the change was made.

Link ID                For MicroStrategy use.

This information can also be viewed in the columns of the change journal. To
change the visible columns, right-click anywhere in the change journal and
select View Options. In the View Options dialog box, select the columns you
want to see.
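As a quick illustration, the Transaction ID and Session ID values described above can be validated as 32-digit hexadecimal strings. This is a minimal sketch in Python, not part of any MicroStrategy API:

```python
import re

def is_valid_journal_id(value):
    """Check that a Transaction ID or Session ID is a 32-digit
    hexadecimal number (a GUID without separators), as described
    in the entry details above."""
    return re.fullmatch(r"[0-9A-Fa-f]{32}", value) is not None
```

For example, a 32-character hex string passes the check, while shorter or non-hexadecimal strings do not.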

Increase the Number of Change Journal Entries to View or Export

By default, the change journal displays and exports the last 1,000 entries.
You can increase this number in the Browsing category of the Developer
Preferences dialog box. Viewing more entries may make browsing and
exporting take longer.


1. In Developer, from the Tools menu, select MicroStrategy Developer
Preferences.

2. In the General category, select Browsing.

3. In the Maximum number of monitoring objects displayed per page
field, specify the maximum number of change journal entries to display.

4. In the Maximum number of transactions retrieved per metadata
change journaling search field, specify the maximum number of change
journal entries to export.

5. Click OK.

Search the Change Journal for Relevant Entries


Because the change journal records every transaction, finding the relevant
records can be daunting. To make searching easier, you can filter the
change journal so that only the relevant entries are shown.

For example:

• To find out when certain users were given certain permissions, you can
view entries related to Users.

• To discover which user made a change that caused a report to stop
executing correctly, you can view the entries related to that report.

You can also quickly filter the entries to show only the changes to a
specific object or the changes made by a specific user. To do this,
right-click one of the entries for that object or user and select either
Filter view by object or Filter view by user. To remove the filter,
right-click in the change journal and select Clear filter view.
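The effect of these quick filters can be sketched in Python. The entry structure (dicts with object_name and user_name keys) is a hypothetical stand-in for the change journal columns, not a MicroStrategy API:

```python
def filter_view(entries, by_object=None, by_user=None):
    """Mimic 'Filter view by object' and 'Filter view by user':
    keep only the entries matching the given object or user."""
    result = entries
    if by_object is not None:
        result = [e for e in result if e["object_name"] == by_object]
    if by_user is not None:
        result = [e for e in result if e["user_name"] == by_user]
    return result
```

Passing neither argument returns the unfiltered list, mirroring Clear filter view.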

To Filter the Change Journal for Relevant Entries

1. In the Change Journal Transactions Monitor, right-click and select
Filter.


2. To filter the change journal by changed object type, project, transaction
type, or source of the change, select from the appropriate drop-down list.

3. To filter the change journal by multiple conditions, click Advanced. The
advanced filtering options panel opens at the bottom of the dialog box.
Enter the columns and conditions.

4. To see changes made in a specific time range, enter the start and end
time and date.

5. To view all transactions, not just those that change the version of an
object, clear the Show version changes only and Hide Empty
Transactions check boxes.

If the Show version changes only check box is cleared, two transactions
named "LinkItem" are listed every time an application object is saved.
These transactions are recorded for MicroStrategy technical support use
and do not indicate that the application object has been changed. Any time
the object has actually been changed, a SaveObjects transaction with the
name of the application object is listed.

6. Click OK to close the dialog box and filter the change journal.

To Quickly Filter the Change Journal by Object or User

1. In the Change Journal Transactions Monitor, right-click an entry for the
object or user you want to filter by, and select the type of filtering:

• To see the changes to this object, select Filter view by object.

• To see the changes made by this user, select Filter view by user.

2. To remove a quick filter, right-click in the change journal and select
Clear filter view.


Export the Change Journal


You can export the contents of the change journal to a text file. This is
useful for saving the journal to an archival location, or for emailing it to
MicroStrategy technical support for assistance with a problem.

The name of this file is AuditLog_MMDDYYhhmmss.txt, where MMDDYY is
the month, date, and last two digits of the year, and hhmmss is the
timestamp, in 24-hour format. This file is saved in the MicroStrategy
Common Files directory. By default, this directory is C:\Program Files
(x86)\Common Files\MicroStrategy\.
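Given the documented naming pattern, the export time can be recovered from a file name. A minimal sketch, assuming Python; the example file name is hypothetical:

```python
import re
from datetime import datetime

def parse_export_name(filename):
    """Parse AuditLog_MMDDYYhhmmss.txt into a datetime.
    Raises ValueError for names that do not match the pattern."""
    m = re.fullmatch(r"AuditLog_(\d{12})\.txt", filename)
    if m is None:
        raise ValueError(f"not a change journal export name: {filename!r}")
    return datetime.strptime(m.group(1), "%m%d%y%H%M%S")

# A file exported on September 15, 2024 at 14:30:05 would be named:
parse_export_name("AuditLog_091524143005.txt")  # datetime(2024, 9, 15, 14, 30, 5)
```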

When you export the change journal, any filters that you have used to view
the results of the change journal are also applied to the export. If you want
to export the entire change journal, make sure that no filters are currently in
use. To do this, right-click in the change journal and select Clear filter
view.

To Export the Change Journal to a File

1. In Developer, go to Administration > System Monitors.

2. Right-click Change Audit and select Export list. The change journal is
exported to a text file.

A prompt informs you that the list was exported, notes the folder and file
name, and asks if you want to view the file. To view the file, click Yes.

Purge the Change Journal


You can keep the change journal at a manageable size by periodically
purging older entries that you no longer need.

When you purge the change journal, you specify a date and time. All entries
in the change journal that were recorded prior to that date and time are
deleted. You can purge the change journal for an individual project, or for
all projects in a project source.

MicroStrategy recommends archiving your change journal entries before
purging. For instructions on how to archive the change journal, see Export
the Change Journal, page 833.
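The archive-then-purge workflow amounts to partitioning entries around a cutoff timestamp. A minimal sketch in Python, using a hypothetical entry structure rather than any MicroStrategy API:

```python
from datetime import datetime

def split_for_purge(entries, cutoff):
    """Partition change journal entries around a cutoff: everything
    recorded before the cutoff should be archived and then purged;
    the rest stays in the journal."""
    to_archive = [e for e in entries if e["timestamp"] < cutoff]
    to_keep = [e for e in entries if e["timestamp"] >= cutoff]
    return to_archive, to_keep
```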

To Purge the Change Journal for All Projects in a Project Source

1. In Developer, expand Administration, and then expand System
Monitors.

2. Right-click Change Journal Transactions and select Manage change
journal.

3. Set the date and time. All data recorded before this date and time is
deleted from the change journal.

4. To purge data for all projects, select the Apply to all projects check
box. To purge data relating to the project source configuration, leave
this check box cleared.

5. Click Purge Now. When the warning dialog box opens, click Yes to
purge the data, or No to cancel the purge. If you click Yes, change
journal information recorded before the specified date is deleted.

If you are logging transactions for this project source, a Purge Log
transaction is logged when you purge the change journal.

6. Click Cancel.

To Purge the Change Journal for a Single Project

1. In Developer, right-click the project and select Project Configuration.

2. Expand Project definition, and then select Change Journaling.


3. Under Purge Change Journal, set the date and time. All change
journal data for this project from before this date and time will be
deleted from the change journal.

4. In the Purge timeout (seconds) field, specify the timeout setting in
seconds.

5. Click Purge Now. When the warning dialog box opens, click Yes to
purge the data, or No to cancel the purge. If you click Yes, change
journal information for this project from before the specified date and
time is deleted.

6. Click OK.

Enable Change Journaling


When change journaling is enabled for a project or project source,
Intelligence Server logs information in the change journal about any change
made to any object in the project or project source. This includes changes
made in Developer or MicroStrategy Web as well as through other
MicroStrategy tools such as Command Manager or Project Merge.

You can enable change journaling for any number of projects in a project
source. For each project, when change journaling is enabled, all changes to
all objects in that project are logged.

You can also enable change journaling at the project source level. In this
case information about all changes to the project configuration objects, such
as users or schedules, is logged in the change journal.

By default, change journaling is enabled in all newly created projects and
project sources.


In MicroStrategy versions 10.8 and higher, change journaling is
automatically enabled and there is no option to disable it. This is because
the Web Quick Search feature will not function without change journaling
enabled.


If your metadata database grows too large due to change journaling, the
best practice is to keep records active only for a certain number of days
and archive older records. You can set the number of days using Developer.

To Enable or Disable Change Journaling for a Project Source

1. In Developer, log in to a project source. You must log in as a user with
the Configure Change Journaling privilege.

2. Expand Administration, and then expand System Monitors.

3. Right-click Change Journal Transactions and select Manage Change
Journal.

4. To enable or disable change journaling for this project source, select or
clear the Enable change journaling check box.


5. In the Comments field, enter any comments that you may have about
the reason for enabling or disabling change journaling.

6. To enable or disable change journaling for all projects in the project
source, select the Apply to all projects check box. To determine which
projects have change journaling on a project-by-project basis, leave this
check box cleared.

7. Click OK.

To Enable or Disable Change Journaling for a Project

1. From Developer, right-click the project and select Project
Configuration.

2. Expand Project definition, and then select Change Journaling.

3. To enable or disable change journaling for this project, select or clear
the Enable Change Journaling check box.

4. Click OK.

Change Journal Comments


When change journaling is enabled, users are prompted for comments every
time they change an object. These comments document the nature of the
changes made to objects.

You can disable the requests for object comments from the Developer
Preferences dialog box.

To Disable the Requests for Change Journaling Comments

1. From Developer, go to Tools > MicroStrategy Developer Preferences.

2. Expand Optional Actions, and then select General.


3. Clear the Display change journal comments input dialog check box.

4. Click OK.

Monitor System Usage: Intelligence Server Statistics


To tune your system for best performance, you need information about how
the system is being used. Intelligence Server can record usage and
performance statistics for each project in your system. You can then analyze
these statistics to determine what changes need to be made.

This section provides the following information about Intelligence Server
statistics:

• Overview of Intelligence Server Statistics, page 838

• Best Practices for Recording Intelligence Server Statistics, page 844

• Configure Intelligence Server to Log Statistics, page 846

MicroStrategy Enterprise Manager can help you analyze the Intelligence
Server statistics data. Enterprise Manager consists of a MicroStrategy
project containing a wide variety of reports and dashboards that present the
statistics data in an easy-to-understand format. For more information about
Enterprise Manager, see the Enterprise Manager Help.

Overview of Intelligence Server Statistics


Intelligence Server can record a wide variety of statistics relating to user
activity, data warehouse activity, report SQL, and system performance.
These statistics are logged in the statistics database (see The Statistics
Database, page 842).

The statistics that are logged for each project are set in the Project
Configuration Editor, in the Statistics: General subcategory. The options
are as follows:


All basic statistics: User session and project session analysis. This option
must be selected for any statistics to be logged.

Report job steps: Detailed statistics on the processing of each report.

Document job steps: Detailed statistics on the processing of each document.

Report job SQL: The generated SQL for all report jobs. This option can
create a very large statistics table; select it when you need the job SQL
data.

Report job tables/columns accessed: Data warehouse tables and columns
accessed by each report.

Mobile Clients: Detailed statistics on reports and documents that are
executed on a mobile device.

Mobile Clients Manipulations (available if Mobile Clients is selected):
Detailed statistics on actions performed by end users on a mobile client.

Only purge statistics logged from the current Intelligence Server: Purge
statistics from the database if they are from the Intelligence Server you
are now using. This is applicable if you are using clustered Intelligence
Servers.

You can log different statistics for each project. For example, you may want
to log the report job SQL for your test project when tracking down an error. If
you logged report job SQL for your production project, and your users are
running many reports, the statistics database would quickly grow to an
unwieldy size.


Recording Performance Counters in the Statistics Tables


Intelligence Server can be configured to collect performance information
from the Diagnostics and Performance Logging Tool and record that
information in the statistics database. For more information about logging
performance counters, see Configure What is Logged, page 858.

Intelligence Server can collect and log information from the MicroStrategy
Server Jobs and MicroStrategy Server Users categories. On UNIX or Linux,
Intelligence Server can also collect and log information from the following
categories:

• Memory

• System

• Process

• Processor

• Network Interface

• Physical Disk

This information is recorded in the STG_IS_PERF_MON_STATS table in the
statistics database.

To Configure the Performance Counters to Record Information in the
Statistics Repository

1. Open the Diagnostics and Performance Logging Tool.

• From Developer: From the Tools menu, select Diagnostics.

If you are running MicroStrategy Developer on Windows for the first
time, run it as an administrator: right-click the program icon and
select Run as Administrator. This is necessary to properly set the
Windows registry keys. For more information, see KB43491.

If the Diagnostics option does not appear on the Tools menu, it has
not been enabled. To enable this option, from the Tools menu,
select MicroStrategy Developer Preferences. In the General
category, in the Advanced subcategory, select the Show
Diagnostics Menu Option check box and click OK.

• In Windows: From the Windows Start menu, point to All Programs,
then MicroStrategy Tools, and then select Diagnostics Configuration.

• In Linux: Navigate to the directory ~/MicroStrategy/bin and enter
mstrdiag.

2. From the Select Configuration drop-down list, select CastorServer
Instance.

3. Select the Performance Configuration tab.

4. Make sure the Use Machine Default Performance Configuration
check box is cleared so that your logging settings are not overridden by
the default settings.

5. In the Statistics column, select the check boxes for the counters that
you want to log to the statistics repository.

6. In the Statistics Properties group, in the Logging Frequency (min)
field, specify how often (in minutes) you want the performance counters to
log information.

7. From the Persist statistics drop-down list, select Yes.

8. From the File menu, select Save. The changes that you have made to
the logging properties are saved.


The Statistics Database


Intelligence Server logs the specified statistics to the staging tables in the
statistics repository. For a detailed examination of the staging tables in the
statistics repository, see the Statistics Data Dictionary in the System
Administration Help.

If you are using Enterprise Manager to monitor your statistics, the database
that hosts the staging tables also contains the Enterprise Manager data
warehouse. The information in the staging tables is processed and loaded
into the data warehouse as part of the data load process. For information
about the structure of the Enterprise Manager data warehouse, see the
Enterprise Manager Data Dictionary. For steps on configuring Enterprise
Manager and scheduling data loads, see the Enterprise Manager Help.

Intelligence Server may open up to one database connection for each
project that is configured to log statistics. For example, in a project source
with four projects, each of which is logging statistics, there may be up to
four database connections opened for logging statistics. However, the
maximum number of database connections is typically seen in
high-concurrency environments.

In a clustered environment, each node of the cluster requires a database
connection for each project loaded onto that node. For example, a two-node
cluster with 10 projects loaded on each node has 20 connections to the
warehouse (10 for each node). Even if the same 10 projects are loaded on
both nodes, 20 database connections exist.
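The connection arithmetic above is simply nodes times projects per node. A one-line sketch (illustrative only):

```python
def statistics_connections(nodes, projects_per_node):
    # Each node opens one statistics connection per project loaded on
    # it, even when the same projects are loaded on every node.
    return nodes * projects_per_node

statistics_connections(2, 10)  # the two-node, 10-project example: 20
```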

Supported Database Platforms

MicroStrategy supports the following database platforms for use with
Intelligence Server statistics:

• SQL Server

• Oracle

• Teradata

• IBM DB2 UDB

• Sybase ASE

For information about the specific versions of each database that are
supported, see the Readme.

Logging All Statistics from a Project Source to the Same Database

By default, all projects for a project source must be configured to log
statistics individually. This configuration is called Complete Session
Logging. It allows some projects to log statistics to one database and other
projects to log to a different database.

The Enterprise Manager data warehouse must be in the same database as
the statistics repository for a project. If you are using Enterprise Manager in
a complete session logging configuration, there are as many Enterprise
Manager data warehouses as there are statistics repositories. A separate
Enterprise Manager project must be configured for each statistics
repository.

MicroStrategy recommends that you configure all projects in your project
source to log statistics to the same database. This is accomplished by
configuring your system to use Single Instance Session Logging. This can
minimize session logging and optimize system performance.

Under single instance session logging, you must still specify which statistics
are logged for each individual project in the project source, as described in
Overview of Intelligence Server Statistics, page 838.

To use single instance session logging successfully, the selected single
instance session logging project must be loaded onto the Intelligence
Server at startup. If clustered Intelligence Servers are being used, the
project must be loaded onto all the clustered Intelligence Servers. Failing to
load this project on all servers at startup results in a loss of session
statistics for any Intelligence Server on which the project is not loaded at
startup. For details on the possible side effects of not loading all projects,
see MicroStrategy Tech Note TN14591.

To Log All Statistics from a Project Source to the Same Database

1. In Developer, right-click the project source and select Configure
MicroStrategy Intelligence Server.

2. On the left, expand Statistics, then select General.

3. Select the Single Instance Session Logging option.

4. Select a project from the drop-down list.

5. Click OK.

Best Practices for Recording Intelligence Server Statistics


MicroStrategy recommends the following best practices for logging
Intelligence Server statistics:

• Configure your system for single instance session logging, so that all
projects for a project source use the same statistics repository. This can
reduce duplication, minimize database write time, and improve
performance. For information about single instance session logging, see
Overview of Intelligence Server Statistics, page 838.

• Use the sizing guidelines (see Sizing Guidelines for the Statistics
Repository, page 844) to plan how much hard disk space you need for the
statistics repository.

• Use Enterprise Manager to monitor and analyze the statistics information.
For more information about Enterprise Manager, see the Enterprise
Manager Help.

Sizing Guidelines for the Statistics Repository

The following guidelines can help you determine how much space you need
for the statistics repository. These guidelines are for planning purposes;
MicroStrategy recommends that you monitor the size of your statistics
repository and adjust your hardware requirements accordingly.

• When the Basic Statistics, Report Job Steps, Document Job Steps, Report
SQL, Report Job Tables/Columns Accessed, and Prompt Answers
statistics are logged, a user executing a report increases the statistics
database size by an average of 70 kilobytes. This value assumes that
large and complex reports are run as often as small reports. In contrast,
in an environment where more than 85 percent of the reports that are
executed return fewer than 1,000 cells, the average report increases the
statistics database size by less than 10 kilobytes.

• When the Subscription Deliveries and Inbox Messages statistics are
logged, each subscription that is delivered increases the statistics
database size by less than 100 kilobytes. This is in addition to the
database increase from logging the report execution.

• When performance counters are logged to the statistics database, each
performance counter value that is logged increases the database size by
an average of 0.4 kilobytes. You can control this table's growth by
specifying which counters to log and how often to log each. For more
information on logging performance counters to the statistics database,
including instructions, see Overview of Intelligence Server Statistics, page
838.

To determine how large a database you need, multiply the space required
for a report by the number of reports that will be run over the amount of time
you are keeping statistics. For example, you may plan to keep the statistics
database current for six months and archive and purge statistics data that
are older than six months. You expect users to run an average of 400 reports
per day, of which 250, or 63 percent, return fewer than 1,000 rows, so you
assume that each report will increase the statistics table by about 25
kilobytes.

25 KB/report * 400 reports/day * 30 days/month * 6 months =
1,800,000 KB, or about 1.8 GB


According to these usage assumptions, you decide to allocate 2 GB of disk
space for the statistics database.
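The worked example above can be reproduced with a small helper. A sketch, assuming Python; the parameter values mirror the example and are not fixed recommendations:

```python
def statistics_disk_estimate_kb(kb_per_report, reports_per_day,
                                days_per_month=30, retention_months=6):
    """Estimate statistics repository growth over the retention window:
    per-report growth times the number of reports retained."""
    return kb_per_report * reports_per_day * days_per_month * retention_months

# 25 KB/report * 400 reports/day * 30 days * 6 months:
statistics_disk_estimate_kb(25, 400)  # 1,800,000 KB, about 1.8 GB
```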

Configure Intelligence Server to Log Statistics


Below is a high-level overview of the steps to configure a project to log
statistics.

Creating the Statistics Database


You can store Intelligence Server statistics in an existing database in your
system, or create a new database.

Do not store the statistics in the same database that you are using for either
your MicroStrategy metadata or your data warehouse.

• To use an existing database, note its Data Source Name (DSN). This DSN
is used when you create the statistics tables.

If you choose to use Enterprise Manager to analyze the statistics, this
DSN is also used to specify the data warehouse location for Enterprise
Manager. For information on Enterprise Manager, see the Enterprise
Manager Help.

• To create a new database, follow the procedure below. For a list of
databases that are certified for use with Intelligence Server statistics, see
Overview of Intelligence Server Statistics, page 838, or see the Readme.

To Create a New Statistics Database

1. Create the empty data warehouse database. (This is generally
performed by your database administrator.) This database must be one
of the databases certified for Intelligence Server statistics, as listed in
the Readme.

2. Use the MicroStrategy Connectivity Wizard to create a Data Source
Name for the data warehouse. Make note of this DSN for later.


To access the Connectivity Wizard, go to Start > All Programs >
MicroStrategy Tools > Connectivity Wizard. For detailed instructions
on using the Connectivity Wizard, see the Installation and Configuration
Help.

To avoid a situation in which some statistics database entries have
incomplete information, synchronize the time of the Intelligence Server
machine with the database time, if possible.

Creating Statistics Tables in the Statistics Database


After the statistics database has been created, or you have noted your
existing database's DSN, you need to create the empty statistics tables for
Intelligence Server to use. The MicroStrategy Configuration Wizard walks
you through this process.

To Create the Empty Statistics Tables

1. Start the MicroStrategy Configuration Wizard.

• Windows: Go to Start > All Programs > MicroStrategy Tools >
Configuration Wizard.

• Linux: Browse to the directory specified as the home directory during
MicroStrategy installation, browse to the bin folder, type
./mstrcfgwiz, and press Enter.

2. On the Welcome page, select Create Metadata, History List and
Enterprise Manager Repositories and click Next.

3. Select the Statistics & Enterprise Manager option and clear the other
options. Click Next.

4. From the DSN drop-down list, select the Data Source Name for the
database that will contain your Enterprise Manager repository (the
same database that you will use to log Intelligence Server statistics).


Any table in this database that has the same name as a MicroStrategy
statistics table is dropped. For a list of the MicroStrategy statistics
tables, see the Intelligence Server Statistics Data Dictionary.

5. In the User Name and Password fields, enter a valid login and
password for the data warehouse database.

The user name you specify must have permission to create and drop
tables in the database, and permission to create views.

6. If you want to use a custom SQL script for creating the repository, click
Advanced.

• In the Script field, the default script file name is displayed. The
selected script depends on the database type that you specified
earlier.

• To select a different script, click ... (the Browse button) to browse to
and select a script that corresponds to the DBMS for the repository.

7. Click Next.

If Enterprise Manager statistics tables already exist in this database,
you are prompted whether to re-create the tables. To re-create them,
click Yes. To leave the existing tables in place, click No.

Clicking Yes deletes the existing tables and all information in them.

8. Click Finish.

Setting the Statistics Database Instance for a Project


Once the statistics repository has been created, you must configure your
project to log statistics to this database.

MicroStrategy recommends that you configure your system to use single
instance session logging. In this configuration, statistics for all projects in a
project source are logged to a single database. To enable single instance
session logging, in the Intelligence Server Configuration Editor, in the
Statistics: General category, select Single Instance Session Logging
and, from the drop-down list, select a project. Then specify that project's
statistics database using the procedure below. For steps on enabling single
instance session logging, see Overview of Intelligence Server Statistics,
page 838.

To Set Up a Project to Log Statistics

1. In Developer, log in to the server (three-tier) project source containing
the projects for which you want to log statistics. You must log in as a
user with the Configure Server Basic privilege.

2. Right-click the project that you want to monitor and select Project
Configuration.

If you are using single instance session logging, the project that you
select to configure must be the project that you selected when you set
up single instance session logging.

3. Expand the Database Instances category, and select the SQL Data
warehouses subcategory.

4. You need to create a new database instance for the statistics repository
database. Click New.


5. In the Database instance name field, type in a name for the statistics
repository database instance.

6. From the Database connection type drop-down list, select the
database type and version that corresponds to the statistics repository
database DBMS.

7. You need to create a new database connection to connect to the
database instance. Click New.

8. In the Database connection name field, type a name for the database
connection.

9. From the ODBC Data Sources list, select the Data Source Name used
to connect to the statistics repository database.

10. Enable parameterized queries in the statistics repository database
connection. To do this, on the Advanced tab, select the Use
parameterized queries check box.

11. You need to create a new database login to log in to the database
instance. On the General tab, click New.


12. Type a name for the new database login in the Database login field.

If this database login is more than 32 characters long, statistics
logging will generate errors in the DSS Errors log.

13. Type a valid database login ID and password in the corresponding
fields.

MicroStrategy does not validate this login ID and password, so be
careful to type them correctly.

14. Click OK three times to return to the Project Configuration Editor. In each case before clicking OK, make sure your new database login and database connection are selected.

15. In the Database Instances category, select the Statistics subcategory.

16. From the Statistics database instance drop-down list, select your new
statistics database instance.

17. Click OK.

Configure an Additional Database Driver Setting

If your statistics and Enterprise Manager repository is in an Oracle, Sybase, or Teradata database, you must configure an additional ODBC driver setting so the information is recorded properly in the statistics repository.

1. Open the ODBC Data Source Administrator tool in Windows.

2. Select the DSN for your statistics and Enterprise Manager repository
and click Modify.

3. Perform the following according to your database:

l Oracle: click the Advanced tab and select the Enable SQLDescribeParam checkbox.


l Sybase: click the Advanced tab and select the Enable Describe
Parameter checkbox.

l Teradata: click Options and select the Enable Extended Statement Information checkbox.

4. Click OK twice.

Specifying Which Statistics to Log


Once you have specified a statistics database instance for a project, you can
select what statistics to log. For detailed information about what statistics
can be logged, see Overview of Intelligence Server Statistics, page 838.

You must specify what statistics to log for all projects that log statistics.
Single instance session logging (see Overview of Intelligence Server
Statistics, page 838) causes all projects on a project source to share the
same statistics database, but not to log the same statistics.

To log information from performance counters, use the Diagnostics and Performance Logging Tool. For steps on how to log performance information, see Overview of Intelligence Server Statistics, page 838.

To Specify Which Statistics to Log

1. In Developer, log in to the project source containing the project for which you want to log statistics. You must log in as a user with the Configure Server Basic privilege.

2. Right-click the project that you want to monitor and select Project
Configuration.

3. Expand the Statistics category, and select the General subcategory.

4. Select the Basic Statistics checkbox.


5. To log advanced statistics, select the checkboxes for the statistics you
want to log. For information about each check box, see Overview of
Intelligence Server Statistics, page 838.

6. Click OK.

7. To begin logging statistics, unload and reload the project for which you
are logging statistics:

1. In Developer, expand Administration, then expand System Administration, then select Project.

2. Right-click the project, point to Administer Project, and select Unload.

3. Right-click the project, point to Administer Project, and select Load.

Platform Analytics Failed Statistics


When the connection between Intelligence Server and Telemetry Server is lost, statistics telemetry messages cannot be delivered for consumption by Platform Analytics. Enabling this feature saves the failed statistics messages to disk when the connection is lost. Intelligence Server can then resend the saved statistics messages to Telemetry Server when the connection is reestablished.

Enable/Disable Automatic Resending


The feature is enabled by default in MicroStrategy ONE. It can be turned off through the feature flag "Automatically Resend Failed Messages to Platform Analytics". The feature flag is only available after you upgrade the metadata to the MicroStrategy ONE version.

The feature flag is controlled via Command Manager with the following
scripts:


l To check the feature flag status run:

LIST ALL FEATURE FLAGS;

l To turn on the feature flag run:

ALTER FEATURE FLAG "Automatically Resend Failed Messages to Platform Analytics" ON;

l To turn off the feature flag run:

ALTER FEATURE FLAG "Automatically Resend Failed Messages to Platform Analytics" OFF;

To verify that failed statistics messages are being captured by this feature:

l If this feature is enabled, the connection between Intelligence Server and Telemetry Server is lost, and statistics messages are produced, the KFKProducerError.log file will contain messages like:

l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Open file /opt/mstr/MicroStrategy/log/FailedSentOutMessages/KafkaFailMessage_1 for storing fail messages.

l When the connection between Intelligence Server and Telemetry Server is reestablished, and Intelligence Server has started to process the failed messages from disk, the KFKProducerError.log file will contain messages like:

l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Start processing fail messages in file /opt/mstr/MicroStrategy/log/FailedSentOutMessages/KafkaFailMessage_1.

l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Finish processing 4 fail messages in the file.


l The FailedSentOutMessages folder has been created and contains the KafkaFailMessage_1 file.
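To check programmatically whether failed messages were captured and later replayed, you can scan the KFKProducerError.log content for the marker strings shown in the excerpts above. The helper below is an illustrative sketch, not a MicroStrategy utility; only the marker text comes from the documented log excerpts.

```python
# Illustrative scan of KFKProducerError.log content for the documented
# marker strings. The function name is hypothetical; only the marker
# text comes from the log excerpts above.

def resend_status(log_text):
    """Classify the fail-message lifecycle visible in the log text."""
    stored = "for storing fail messages" in log_text
    replay_started = "Start processing fail messages" in log_text
    replay_done = "Finish processing" in log_text
    return {"stored": stored,
            "replay_started": replay_started,
            "replay_done": replay_done}
```

If "stored" is true but the replay markers never appear after Telemetry Server comes back, the connection has likely not been reestablished.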

Feature Behavior
Intelligence Server will create a folder named "FailedSentOutMessages" in the same folder as the DSSError.log file. By default this is:

l Windows: C:\Program Files (x86)\Common Files\MicroStrategy\Log

l Linux: /opt/MicroStrategy/log

When the connection between Intelligence Server and Telemetry Server fails, Intelligence Server saves the failed statistics messages in that folder in files named KafkaFailMessage_1, KafkaFailMessage_2, and so on.

When the connection between Intelligence Server and Telemetry Server is reestablished, Intelligence Server starts reading from the log file with the smallest number. Intelligence Server tries to reconnect to Telemetry Server every 10 seconds, so once Telemetry Server is running, Intelligence Server can detect very quickly that the connection can be established.

Once the file has been read by Intelligence Server, the file will be deleted
from the disk.
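The "smallest number first" ordering described above can be sketched in a few lines of Python. The helper below is illustrative, not part of the product; only the KafkaFailMessage_N naming pattern comes from the documentation.

```python
# Illustrative sketch: order saved fail-message files the way the resend
# logic is described above -- by numeric suffix, smallest first. A plain
# lexical sort would put KafkaFailMessage_10 before KafkaFailMessage_2.

def resend_order(filenames):
    """Return fail-message files sorted by their numeric suffix."""
    prefix = "KafkaFailMessage_"
    numbered = [f for f in filenames if f.startswith(prefix)]
    return sorted(numbered, key=lambda f: int(f[len(prefix):]))

files = ["KafkaFailMessage_10", "KafkaFailMessage_1", "KafkaFailMessage_2"]
```

Sorting on the numeric suffix, rather than the raw file name, preserves the documented processing order.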

Changing the File Count Limit

In MicroStrategy ONE the file count limit is set to 2560 log files with a
default file size limit of 4MB. The file count limit can be modified through the
following registry setting:

HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Data Sources\CastorServer\KafkaProducer Fail Message File Count.


The default file size limit of 4MB cannot be changed, only the count limit of
files can be changed.

Example

l Linux: Open [HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Data Sources\CastorServer] and add a new line:

"KafkaProducer Fail Message File Count"=dword:00000010

l Windows: Open [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MicroStrategy\Data Sources\CastorServer] and add a new DWORD value:

"KafkaProducer Fail Message File Count"
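Note that registry DWORD values are hexadecimal, so dword:00000010 in the example above sets the limit to 16 files, not 10. The arithmetic below is an illustrative check of that value and of the resulting disk ceiling, given the fixed 4MB per-file size limit.

```python
# dword values in .reg syntax are hexadecimal.
file_count_limit = int("00000010", 16)  # the value from the example above

# With the fixed 4 MB per-file limit, the example caps fail-message
# storage at 16 * 4 MB; the default limit of 2560 files caps it at
# 2560 * 4 MB. (Illustrative arithmetic only.)
example_ceiling_mb = file_count_limit * 4
default_ceiling_mb = 2560 * 4
```

Size the file count limit against the disk space you can afford to spend on buffered telemetry during an outage.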

Monitor Quick Search Indices


A quick search index is created when a project is loaded for the first time. If
no index folder is found within the project, MicroStrategy will do an initial
crawl and create the search index within that project. Each time a project is
loaded or an object is created within a project, the search index will be
updated based upon the change journal entries that are not part of the index.

Rebuilding a Quick Search Index


If certain system configurations change, it may be necessary to rebuild your
quick search index. Quick search indices are accessible through your
system monitors. To rebuild a search index:

1. Open a project source in Developer.

If you are running MicroStrategy Developer on Windows for the first time, run it as an administrator.

Right-click the program icon and select Run as Administrator.


This is necessary to properly set the Windows registry keys. For more information, see KB43491.

2. Open Administration > System Monitors > Quick Search Indices to view the indices.

3. Right-click and select Rebuild Index.

When to Rebuild an Index


The following scenarios may require you to rebuild a search index:

l If you replace the metadata in your environment with a copy of the currently used metadata, such as a backup of the current metadata, a rebuild is required. Quick Search cannot distinguish between copies of a metadata.

l If you change your index folder, a rebuild is required. If the index folder is changed via MicroStrategy Developer, the index for all loaded projects is rebuilt automatically, and you do not need to rebuild it yourself.

l If you upgrade your environment, a rebuild may be necessary.

l If the index status of a specific project remains in a paused state for an extended time, a rebuild may be necessary. Quick Search typically resumes the crawl process for a paused index.

l If you cannot search your objects normally, a rebuild may be necessary.

Related Articles

KB482892: How to Rebuild the Index of the Quick Search

Additional Monitoring Tools


In addition to the logging tools and system monitors listed above,
MicroStrategy provides several tools that help you track system usage and
changes to the system.


Diagnostics and Performance Logging Tool


In MicroStrategy, the process of logging and analyzing operation and
performance information is known as diagnostics. Information can be logged
for many Intelligence Server and operating system features and functions.
You can configure the log files to record diagnostics information at different
levels to different files. For example, you can log all MicroStrategy errors to
the default log file of DSSErrors.log, and log all information about
Memory Contract Manager (see Governing Intelligence Server Memory Use
with Memory Contract Manager, page 1039) to a new log file called
MemoryLog.log. You can also log performance information, such as the
time taken to perform various operations and the total number of operations
performed.

However, if too much information is logged, it can degrade the system's performance. By default, logging is set to a minimum. At some point you may want to detect problems in the system for which logging is not enabled by default.

This section includes information on the following topics:

l Configure What is Logged, page 858

l View and Analyze Log Files, page 875

l Analyze a Server State Dump, page 878

Configure What is Logged


The MicroStrategy Diagnostics and Performance Logging tool configures
which diagnostic messages are recorded to MicroStrategy log files. You can
customize the logging options to gather information from more or fewer
system components and performance counters, and to save log messages to
different log files.


Configure Logging with the Diagnostics and Performance Logging Tool

If you save any changes to settings in the Diagnostics and Performance Logging tool, you cannot automatically return to the out-of-the-box settings. If you may want to return to the original default settings at some point, record the default setup before making changes.

1. Open the Diagnostics and Performance Logging Tool.

l From Developer: go to Tools > Diagnostics.

If the Diagnostics option does not appear on the Tools menu, it has not been enabled. To enable this option, go to Tools > Preferences > Developer > General > Advanced, select the Show Diagnostics Menu Option check box, and click OK.

l In Windows: go to Start > All Programs > MicroStrategy Tools > Diagnostics Configuration.

l In Linux: Navigate to the directory ~/MicroStrategy/bin and enter mstrdiag.


2. From the Select Configuration drop-down list, select whether to configure logging for this machine only or for the entire server instance:

l To configure logging for this machine only, select Machine Default.

l To configure logging for the server instance, select CastorServer Instance.

To configure the server instance with the logging settings that are used by
this machine, select CastorServer Instance and then select the Use
Machine Default Diagnostics Configuration check box.

Configure Diagnostics Logging

1. Select the Diagnostics Configuration tab. For more information about diagnostics logging, see Diagnostics Configuration, page 862.


2. To log information about a component to the operating system log file, select the System Log check box for that component.

3. To log information about a component to the MicroStrategy Monitor console, select the Console Log check box for that component.

This log destination is intended for interactive testing and troubleshooting purposes, and should not be used in production deployments.

4. To log information about a component to a MicroStrategy log file, in the File Log drop-down list for that component, select the log file.

Logging the Kernel XML API component can cause the log file to grow very large. If you enable this diagnostic, make sure the log file you select in the File Log column has a Max File Size (KB) of at least 2000. For instructions on how to set the maximum size of a log file, see Creating and Managing Log Files, page 874.

Configure Performance Logging

1. Select the Performance Configuration tab. For more information about performance logging, see Configure Performance Logging Settings, page 864.

2. Configure the performance log file and statistics logging properties using the options on the right side of the Diagnostics and Performance Logging Tool.

3. To log information from a performance counter to the performance log file, select the File Log check box for that counter.

4. To log information from a performance counter to the statistics tables, select the Statistics check box for that counter.

5. Click Save.


You may need to restart Intelligence Server for the new logging
settings to take effect.

Once the system begins logging information, you can analyze it by viewing
the appropriate log file. For instructions on how to read a MicroStrategy log
file, see Creating and Managing Log Files, page 874.

Diagnostics Configuration

Each component of the MicroStrategy system can produce log messages.


These messages can help you track down the source of any errors that you
encounter. For example, if your system seems to be running low on memory,
you can view the log files to determine which components and processes are
using more memory than anticipated.

These log messages can be recorded in a MicroStrategy log file. They can
also be recorded in the operating system's log file, such as the Windows
Event Monitor.

The component/dispatcher combinations that you choose to enable logging for depend on your environment, your system, and your users' activities. In general, the most useful dispatchers to select are the following:

l Error: This dispatcher logs the final message before an error occurs,
which can be important information to help detect the system component
and action that caused or preceded the error.

l Fatal: This dispatcher logs the final message before a fatal error occurs,
which can be important information to help detect the system component
and action that caused or preceded the server fatality.

l Info: This dispatcher logs every operation and manipulation that occurs on
the system.

Some of the most common customizations to the default diagnostics setup are shown in the following table. Each component/dispatcher combination in the table is commonly added to provide diagnostic information about that component and its related trace (dispatcher). To add a combination, select its check box.

Component / Dispatcher

l Authentication Server: Trace

l Database Classes: All

l Metadata Server: Content Source Trace, Transaction Trace

l Engine: DFC Engine

l Element Server: Element Source Trace, Object Source Trace

l Object Server: Content Source Trace, Object Source Trace, Scope Trace

l Report Net Server: Scope Trace

l Report Server: Cache Trace, Object Source Trace, Report Source Trace

l Kernel: Scheduler Trace, User Trace

l Kernel XML API: Trace (If you enable this diagnostic, make sure that the log file that you select in the File Log column has its Max File Size (KB) set to at least 2000.)


Performance Configuration

MicroStrategy components can also record various performance measurements. You can use these measurements to help tune your system for better performance, or to identify areas where performance can be improved. For example, you may want to discover exactly how much the CPU is used to perform a given system function.

Some performance counters can be logged to the Intelligence Server statistics tables as well. For more information about Intelligence Server statistics, see Monitor System Usage: Intelligence Server Statistics, page 838.

Configure Performance Logging Settings

When you select the performance counters to be recorded, you can determine how often data is recorded, and whether to persist the counters.

You can enable or disable performance logging without having to clear all
the logging settings. To enable logging to a file, make sure the Log
Counters drop-down list is set to Yes. To enable logging to the statistics
database, make sure the Persist Statistics drop-down list is set to Yes.

To Configure the Performance Logging Settings

1. In the Diagnostics and Performance Logging tool, select the Performance Configuration tab.

2. From the Log Destination drop-down box, select the file to log
performance counter data to.

To create a new performance log file, from the Log Destination drop-down box, select <New>. For instructions on using the Log Destination Editor to create a new log file, see Creating and Managing Log Files, page 874.


3. In the Logging Frequency (sec) field, type how often, in seconds, you want the file log to be updated with the latest performance counter information.

4. To log performance information to a log file, make sure the Log Counters drop-down list is set to Yes.

5. In the Logging Frequency (min) field, type how often, in minutes, you want the statistics database to be updated with the latest performance counter information.

6. To log performance information to the statistics database, make sure the Persist Statistics drop-down list is set to Yes.

7. When you are finished configuring the performance counter log file,
click Save.

Performance Counters for Specific MicroStrategy Features

The table below lists the major MicroStrategy software features and the
corresponding performance counters that you can use to monitor those
features. For example, if the Attribute Creation Wizard seems to be running
slowly, you can track its performance with the DSS AttributeCreationWizard,
DSS ProgressIndicator, and DSS PropertySheetLib performance counters.

MicroStrategy Feature / Components / Trace Level

Attribute Creation Wizard
Components: DSS AttributeCreationWizard, DSS ProgressIndicator, DSS PropertySheetLib
Trace Level: Function Level Tracing

Attribute Editor
Components: DSS AttributeEditor, DSS ColumnEditor, DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS FormCategoriesEditor, DSS PropertySheetLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Client Connection
Components: DSS AuthServer, DSS ClientConnection
Trace Level: DSS AuthServer performs Authentication Tracing. DSS ClientConnection performs Session Tracing, Data Source Tracing, and Data Source Enumerator Tracing.

Consolidation Editor
Components: DSS CommonDialogsLib, DSS Components, DSS ConsolidationEditorLib, DSS EditorContainer, DSS EditorManager, DSS PromptsLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Custom Group Editor
Components: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS ObjectsSelectorLib, DSS PromptEditorsLib, DSS PromptsLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Data Transmitters and Transformers
Components: DSS DataTransmitter, DSS MhtTransformer, DSS MIME, DSS SMTPSender, DSS Network
Trace Level: Function Level Tracing

Element Browsing
Components: DSS DBElementServer, DSS ElementNetClient, DSS ElementNetServer, DSS ElementServer
Trace Level: All components perform Element Source Tracing. DSS DBElementServer also performs Report Source Tracing.

Fact Creation Wizard
Components: DSS FactCreationWizard, DSS ProgressIndicator
Trace Level: Function Level Tracing

Fact Editor
Components: DSS ColumnEditor, DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS ExtensionEditor, DSS FactEditor
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Filter Editor
Components: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS ObjectsSelectorLib, DSS PromptEditorsLib, DSS PromptsLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Hierarchy Editor
Components: DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS HierarchyEditor
Trace Level: Function Level Tracing

HTML Document Editor
Components: DSS CommonDialogsLib, DSS Components, DSS DocumentEditor, DSS EditorContainer, DSS EditorManager
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Metadata SQL
Components: DSS MD4Server, DSS MDServer
Trace Level: DSS MD4Server performs Object Tracing, Access Tracing, and SQL Tracing. DSS MDServer performs Content Source Tracing.

Metric Editor
Components: DSS CommonDialogsLib, DSS Components, DSS DimtyEditorLib, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS MeasureEditorLib, DSS PromptsLib, DSS PropertiesControlsLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Object Browsing
Components: DSS ObjectServer, DSS SourceNetClient, DSS SourceNetServer
Trace Level: All components perform Content Source Tracing. DSS ObjectServer also performs Object Source Tracing.

Partition Editor
Components: DSS CommonDialogsLib, DSS Components, DSS DataSliceEditor, DSS EditorContainer, DSS EditorManager, DSS FilterLib, DSS PartitionEditor
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Print Schema
Components: DSS PrintCore, DSS PrintSchema, DSS ProgressIndicator
Trace Level: Function Level Tracing

Project Creation
Components: DSS AttributeCreationWizard, DSS FactCreationWizard, DSS ProgressIndicator, DSS ProjectCreationLib, DSS WHCatalog
Trace Level: Function Level Tracing

Project Duplication
Components: DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation
Trace Level: Function Level Tracing

Project Upgrade
Components: DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation
Trace Level: Function Level Tracing

Prompt Editor
Components: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS PromptEditorsLib, DSS PromptStyles, DSS SearchEditorLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Report Editor
Components: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS GraphLib, DSS GridLib, DSS ObjectsSelectorLib, DSS PageByLib, DSS PrintGraphInterface, DSS PrintGridInterface, DSS PromptEditorsLib, DSS PromptsLib, DSS PropertySheetLib, DSS RepDrillingLib, DSS RepFormatsLib, DSS RepFormsLib, DSS ReportControl, DSS ReportDataOptionsLib, DSS ReportSortsLib, DSS ReportSubtotalLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Report Execution
Components: DSS ReportNetClient, DSS ReportNetServer, DSS ReportServer
Trace Level: DSS ReportNetClient performs Report Source Tracing and Process Tracing. DSS ReportNetServer performs Process Tracing. DSS ReportServer performs Report Source Tracing.

Server Administration
Components: DSS AdminEditorContainer, DSS DatabaseInstanceWizard, DSS DBConnectionConfiguration, DSS DBRoleConfiguration, DSS DiagnosticsConfiguration, DSS EVentsEditor, DSS PriorityMapEditor, DSS PrivilegesEditor, DSS ProjectConfiguration, DSS SecurityRoleEditor, DSS SecurityRoleViewer, DSS ServerConfiguration, DSS UserEditor, DSS VLDBEditor
Trace Level: Function Level Tracing

Table Editor
Components: DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS TableEditor
Trace Level: Function Level Tracing

Template Editor
Components: DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExportLib, DSS FTRContainerLib, DSS GraphLib, DSS GridLib, DSS PageByLib, DSS PrintGraphInterface, DSS PrintGridInterface, DSS PromptsLib, DSS PropertySheetLib, DSS RepDrillingLib, DSS RepFormatsLib, DSS RepFormsLib, DSS ReportControl, DSS ReportDataOptionsLib, DSS ReportSortsLib, DSS ReportSubtotalLib
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Transformation Editor
Components: DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS TransformationEditor
Trace Level: All components perform Function Level Tracing. DSS Components also performs Explorer and Component Tracing.

Warehouse Catalog Browser
Components: DSS CommonDialogsLib, DSS DatabaseInstanceWizard, DSS DBRoleConfiguration, DSS SchemaManipulation, DSS WHCatalog
Trace Level: Function Level Tracing


Creating and Managing Log Files

Diagnostics information can be logged to multiple log files. For example, in the default configuration, all error messages are logged to DSSErrors.log, license information is logged to License.log, and messages from the Java Virtual Machine in MicroStrategy Web are logged to JVMMessages.log.

Performance information must all be logged to the same log file.

Each log file has a specified maximum size. When a MicroStrategy log file
reaches its maximum size, the file is renamed with a .bak extension, and a
new log file is created using the same file name. For example, if the
DSSErrors.log file reaches its maximum size, it is renamed
DSSErrors.bak, and a new DSSErrors.log file is created.
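The rename-and-recreate behavior can be sketched as follows. The helper below is a hypothetical illustration of the documented policy, not MicroStrategy code.

```python
import os

def rotate_log(path, max_size_bytes):
    """Sketch of the documented rotation policy: when a log file reaches
    its maximum size, rename it with a .bak extension (replacing any
    older .bak) and start a new, empty file under the original name.
    Returns True if a rotation happened."""
    if os.path.exists(path) and os.path.getsize(path) >= max_size_bytes:
        os.replace(path, os.path.splitext(path)[0] + ".bak")
        open(path, "w").close()
        return True
    return False
```

One consequence of this policy is that only one generation of history is kept: a second rotation overwrites the previous .bak file.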

You can create new log files and change the maximum size of log files in the
Log Destination Editor.

To Change the Maximum Size of a Log File

1. In the Diagnostics and Performance Logging Tool, go to Tools > Log Destinations.

2. From the Select Log Destination drop-down list, select the log file.

3. In the Max File Size (KB) field, type the new maximum size of the log
file, in kilobytes.

If the Kernel XML API component is selected in the Diagnostics tab, the Max File Size for that file should be set to no lower than 2000 KB.

4. Click Save and Close.


To Create a New Log File

1. In the Diagnostics and Performance Logging Tool, from the Tools menu, select Log Destinations.

2. From the Select Log Destination drop-down list, select <New>.

3. In the File Name field, type the name of the file. The .log extension is
automatically appended to this file name.

4. In the Max File Size (KB) field, type the maximum size of the new log
file, in kilobytes.

5. Click Save and Close.

View and Analyze Log Files


All MicroStrategy log files are stored in the log file location. This location is
set during installation and cannot be changed.

l On Windows, all log files are stored in C:\Program Files (x86)\Common Files\MicroStrategy\Log\.

l On Linux, all log files are stored in /opt/mstr/MicroStrategy/log.

These log files are plain text files and can be viewed with any text editor.

The MicroStrategy Web server error log files are in the MstrWeb/WEB-INF/log/ directory. These log files can be viewed from the Web Administrator page, by clicking View Error log on the left side of the page. For more information about viewing log files in MicroStrategy Web, see the Web Administrator Help (from the Web Administrator page, click Help).

Anatomy of a Log File

Non-error messages in the log files have the same format. Each entry has
the following parts:


Date Time [HOST:MACHINE_NAME][SERVER:SERVER_DEFINITION_NAME][PID:PROCESS_ID][THR:THREAD_ID][MODULE_NAME][TRACE_TYPE]message

Section / Definition

Date Time: The date and time at which the action happened

HOST: The name of the machine on which the action happened

SERVER: The server definition name

PID: The numeric ID of the process that performed the action

THR: The numeric ID of the thread that performed the action

MODULE_NAME: The name of the MicroStrategy component that performed the action

TRACE_TYPE: The type of the log file entry

message: The message about the action

Error messages in the log files have a similar format, but also include the error type and the error code:

Date Time [HOST:MACHINE_NAME][SERVER:SERVER_DEFINITION_NAME][PID:PROCESS_ID][THR:THREAD_ID][MODULE_NAME][Error][ERROR_CODE]message
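A small parser can make the fields of this layout concrete. The regex and sample line below are illustrative only: the sample is synthetic (not taken from a real server), and the SERVER and MODULE/TRACE_TYPE fields are treated as optional because some excerpts earlier in this guide, such as the KFKProducerError.log lines, omit them.

```python
import re

# Matches the documented entry layout:
# Date Time [HOST:...][SERVER:...][PID:...][THR:...][MODULE][TRACE_TYPE]message
# SERVER and the MODULE/TRACE_TYPE pair are optional here, since some
# MicroStrategy log lines omit them. (Illustrative sketch, not a
# product-supplied parser.)
ENTRY = re.compile(
    r"(?P<date>\S+) (?P<time>\S+?) ?"
    r"\[HOST:(?P<host>[^\]]*)\]"
    r"(?:\[SERVER:(?P<server>[^\]]*)\])?"
    r"\[PID:(?P<pid>\d+)\]\[THR:(?P<thr>\d+)\]"
    r"(?:\[(?P<module>[^\]]*)\]\[(?P<trace>[^\]]*)\])?"
    r"(?P<message>.*)"
)

# Synthetic sample line following the documented layout.
sample = ("2019-10-29 08:52:54.675[HOST:demo-host][SERVER:demo-server]"
          "[PID:8701][THR:139814881539840][DSS ReportServer]"
          "[Report Source Tracing]Creating Report")
fields = ENTRY.match(sample).groupdict()
```

Parsing entries into named fields like this makes it easy to filter a large log by module, trace type, or thread when troubleshooting.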

Sample Log File

The following sample is a simple log file that was generated from
MicroStrategy Web (ASP.NET) after running the report called Length of
Employment in the MicroStrategy Tutorial project. The bulleted line before
each entry explains what the log entry is recording.

• Intelligence Server creates a report definition.

286:[THR:480][02/07/2003::12:24:23:860][DSS ReportServer][Report Source Tracing]Creating Report(Definition) with Flags=0x1000180(OSrcCch UptOSrcCch)


• Intelligence Server loads the report definition object named Length of Employment from
the metadata.

286:[THR:480][02/07/2003::12:24:23:860][DSS ReportServer][Report Source Tracing] where Definition = Object(Name="Length of Employment" Type=3 (Report Definition) ID=D1AE564911D5C4D04C200E8820504F4F Proj=B19DEDCC11D4E0EFC000EB9495D0F44F Ver=493C8E3447909F1FBF75C48E11AB7DEB)

• Intelligence Server creates a report instance named Length of Employment.

286:[THR:480][02/07/2003::12:24:24:931][DSS ReportServer][Report Source Tracing]Created ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch) ExecActn=0x1000180(RslvCB LclCch))

• Intelligence Server begins executing the report instance.

286:[THR:480][02/07/2003::12:24:24:931][DSS ReportServer][Report Source Tracing]Executing ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch) ExecActn=0x1000180(RslvCB LclCch)) with Actions=0x8300003f(Rslv GenSQL ExeSQL Alrt XTab EvalVw LclCch UptLclCch), Flags=0x1000180(OSrcCch UptOSrcCch)

• Intelligence Server checks to see whether the report exists in the report cache.

286:[THR:480][02/07/2003::12:24:25:181][DSS ReportServer][Report Source Tracing]Finding in cache: ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch) ExecActn=0x1000180(RslvCB LclCch))

• Intelligence Server did not find the report in the cache.

286:[THR:480][02/07/2003::12:24:25:342][DSS ReportServer][Report Source Tracing]Not found in cache: ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch) ExecActn=0x1000180(RslvCB LclCch))

• Intelligence Server checks for prompts and finds none in the report.

286:[THR:314][02/07/2003::12:24:25:432][DSS ReportServer][Report Source Tracing]No prompts in ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch) ExecActn=0x1000180(RslvCB LclCch))

• Intelligence Server executes the report and updates the caches.

286:[THR:492][02/07/2003::12:24:26:634][DSS ReportServer][Report Source Tracing]Executing ReportInstance(Job=2 Name="Length of Employment" ExecFlags=0x1000184(OSrcCch UptOSrcCch) ExecActn=0x1000184(ExeSQL RslvCB LclCch)) with Actions=0x300003f(Rslv GenSQL ExeSQL Alrt XTab EvalVw LclCch UptLclCch), Flags=0x1000184(OSrcCch UptOSrcCch)

More detail is logged for report execution if the report is run from Developer.


Working with Exceptions

When Intelligence Server encounters an error, it "throws an exception." Not all exceptions are fatal; in fact, Intelligence Server uses some of them internally. Fatal exceptions cause Intelligence Server to shut down, and they are logged in DSSErrors.log, often as "unknown exceptions."

Fatal exception messages by themselves are not sufficient for an accurate diagnosis. Intelligence Server includes a built-in mechanism to capture structured exceptions and generate a dump file containing more information. You may need to provide this file to a MicroStrategy Technical Support specialist.

Analyze a Server State Dump


A server state dump (SSD) is a collection of information related to the state
of Intelligence Server that is written to the DSSErrors.log file, usually as a
result of an unexpected shutdown of Intelligence Server. It provides insight
into what was going on in Intelligence Server when the shutdown occurred.
This information can be used to help diagnose the cause of the shutdown
and avert subsequent problems.

Problems that trigger an SSD include memory depletion (see Memory Depletion Troubleshooting, page 2908) or exceptions (see View and Analyze Log Files, page 875). Changes to the server definition trigger a subset of the SSD information.

Each SSD records information under the same process ID and thread ID.
This information includes the server and project configuration settings,
memory usage, schedule requests, user sessions, executing jobs and
processing unit states, and so on. The SSD information is broken into 14
sections, summarized below.

Section 1: Triggering Error and Error-Specific Preamble

This section precedes the actual SSD and provides information on what
triggered the SSD, such as memory depletion or an unknown exception
error.


Section 2: Server Executable Version and Build Information

This section provides information on the Intelligence Server executable version and build time so you can accurately identify the version of the MicroStrategy software.

Section 3: Server Definition Basic (Castor Server Configuration 'Project') Information

This section provides a subset of Intelligence Server-level settings as they are defined in the Intelligence Server Configuration Editor (in Developer, right-click the project source and select Configure MicroStrategy Intelligence Server). The settings consist of:

l Server definition name

l Maximum jobs per project

l Maximum connections per project

l Number of projects

l Communication protocol and port

WorkingSet File Directory and Max RAM for WorkingSet Cache values are
not listed in an SSD.

Section 4: Project(s) Basic Information

This section includes basic information related to the state and configuration of projects. It shows settings that are defined in the Project Configuration Editor, such as:

l Project name

l Cache settings

l Governor settings


l DBRole used

l DBConnection settings

Section 5: Server Definition Advanced Information

This section includes additional server definition settings, such as:

l Thread load balancing mode

l Memory throttling

l History List settings

l Idle timeouts

l XML governors

l Memory Contract Manager (MCM) settings

MCM is designed to help you avoid memory depletions. For more information on MCM, see Governing Intelligence Server Memory Use with Memory Contract Manager, page 1039.

Section 6: Callstack, Lockstack, and Loaded Modules

The callstack dump provides information on the functions being used at the
time the SSD was written. Similarly, the lockstack provides a list of active
locks. The Module info dump provides a list of files that are loaded into
memory by Intelligence Server, and their location in memory.

This information can help MicroStrategy Technical Support trace errors to specific areas of functionality.

Section 7: Server Process Memory Snapshot

This section contains the memory profile of the Intelligence Server process
and machine. If any of these values are near their limit, memory may be a
cause of the problem.


Section 8: Project State Summary

This section provides a summary of whether each project is Loaded and Registered, and of the number of users logged in and jobs running at the time of the SSD.

Section 9: Schedule Request Information

This section provides a listing of the schedule requests that Intelligence Server is configured for. This list includes:

l Reports

l Documents

l Administration tasks, such as idling projects and other tasks related to cache management

For additional information about schedules and subscriptions, see Chapter 12, Scheduling Jobs and Administrative Tasks.

Section 10: Database Connection Snapshot

This section displays a snapshot of the state of the database connections between Intelligence Server and the metadata and data warehouse databases. This information is similar to what is shown in the Database Connection Monitor. For more information about database connections, see Communicating with Databases, page 19.

Section 11: User Inbox Snapshot

This section provides information on the size of various user inboxes and information related to the WorkingSet.

Section 12: Jobs Status Snapshot

This section provides a snapshot of the jobs that were executing at the time of the SSD. This information may be useful to see what the load on Intelligence Server was, as well as what was executing at the time of the error. If the error is due to a specific report, the information here can help you reproduce it.

Section 13: User Session Snapshot

This section provides details on the various user sessions in Intelligence Server at the time of the SSD.

Section 14: Processing Unit Threads State Snapshot

This section provides information about the states of the threads in each processing unit in Intelligence Server. It also provides information on the number of threads per processing unit and the priority to which they are assigned.

Automated Crash Reporting and Diagnostics Tool


The Automated Crash Reporting and Diagnostics Tool is used to capture
essential crash reporting and diagnostics information related to
MicroStrategy program reliability, performance, customer environment
configuration, and so on. The data is automatically uploaded to a remote
crash and diagnostics collection server for postmortem analysis.
MicroStrategy uses the data to triage and fix these critical crashes and other
identified issues in a prompt fashion.

How Crash Reporting and Diagnostics Works


MicroStrategy collects essential diagnostics information during program startup, shutdown, normal operation, and crashes, and uploads it to microstrategy.sp.backtrace.io over HTTPS. For this reason, outbound network traffic must be allowed on port 443 to transmit the data. The crash and diagnostics collection server analyzes the incoming crash and diagnostics information and notifies the MicroStrategy technical support and production teams of the incidents.


What Data is Shared with MicroStrategy


The crash reporting and diagnostics information contains: crash reporting data related to MicroStrategy program reliability (contextual data collected if a program crashes, consisting of a crash dump file in the minidump file format, log files, and a set of key-value pairs); diagnostics data related to MicroStrategy program performance (such as how long program startup takes); and the environment configuration (such as processor information, memory, operating system, and so on). Starting in MicroStrategy ONE Update 11, this also includes gateway usage information (such as DBMS name, driver name, and authentication mode).

The Automated Crash Reporting and Diagnostics Tool does not transmit any
personally identifiable information.

DSSErrors.log File

DSSErrors.log is the main log file recorded by Intelligence Server. See Diagnostics and Performance Logging Tools to learn more about its contents and how to configure what is logged to DSSErrors.log. It is one of the first log files to examine when troubleshooting Intelligence Server issues, including crashes.

Minidum p File

A minidump file records the thread stack memory, running context, and
statistics for a crashed process, including:

l A list of the executable and shared libraries that were loaded in the
process.

l A list of threads present in the process. For each thread, the minidump
includes the state of the processor registers, and the contents of the
threads' stack memory. These data are uninterpreted byte streams.


l Processor and operating system versions, the reason for the dump, and so
on.

l Environment variables of the process.
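Minidump files share a fixed header, so a quick signature check can catch truncated or mislabeled .dmp files before they are uploaded. The sketch below is a hypothetical helper, not part of the MicroStrategy tooling; it relies only on the standard 4-byte "MDMP" signature at the start of a minidump:

```python
import os
import tempfile

# A minidump begins with the 4-byte signature "MDMP"; a real validator
# would parse the full header, so treat this as a cheap sanity check only.
MINIDUMP_SIGNATURE = b"MDMP"

def looks_like_minidump(path):
    """Return True if the file starts with the minidump signature."""
    with open(path, "rb") as f:
        return f.read(4) == MINIDUMP_SIGNATURE

# Self-contained demo against a synthetic file:
fd, demo_path = tempfile.mkstemp(suffix=".dmp")
with os.fdopen(fd, "wb") as f:
    f.write(MINIDUMP_SIGNATURE + b"\x00" * 28)  # fake header tail
is_dump = looks_like_minidump(demo_path)
os.unlink(demo_path)
```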

Example of Key-Value Pairs

Example Crash Report

dump_path: /backtrace/var/coronerd//microstrategy/iserver/_
objects/0000/tx.0EC4.dmp outfile: -
Classifier: abort
Crash info:
Application: MSTRSvr
Crashed YES
reason: SIGABRT
address: 0x3e800007c69

"attributes": {
"license_key": "1234567890abcdefghijklmnopqrstuvwxyz",
"object_ids": "(OID,PID,UID)=
(DB8D5B064BBE3C24F541DAA81A507FDC,A279D4564F7E225A5FEF6C9164251029,54F3D26011D
2896560009A8E67019608)",
"production_env": 0,
"reason": "crash",
"server_sid": "CD880B5A4240CFDEF86251902C3E37DA",
"sys_name": "Windows NT",
"sys_version": "6.2.9200",
"total_cpu": "4",
"total_ram": "25769197568",
"upload_status_DSSErrors.log": "uploaded",
"attachment_DSSErrors.log": "DSSErrors.log",
"cpu.count": 2,
"uname.sysname": "Linux",
"uname.version": "3.10.0-957.el7.x86_64",
"uname.machine": "amd64",
"Product": "Intelligence Server",
"hostname": "CentOS-DEBUG-10-244-20-234",
"customer_dsi": "0123",

"purpose": "ABA",
"test_id": "T634",
"test_type": "acceptance",
"version": "11.2.0000.35320",
"upload_file_minidump": "322f119b-8951-4381-870a889e-84830514.dmp",
"error.message": "SIGABRT",
"fault.address": "0x3e800007c69",
"application": "MSTRSvr",
"process.age": 1570209499
},

Memory Regions:
[0] 0x400000 - 0x421000 r-xp
[1] 0x620000 - 0x622000 rw-p


...

[1033] 0xffffffffff600000 - 0xffffffffff601000 r-xp


Sysinfo:
arch: amd64 family 6 model 79 stepping 1
cpu_count: 2
os: Linux linux 0.0.0 Linux 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC
2018 x86_64
Modules:
0x400000 - 0x421000 MSTRSvr (Warning no symbol, MSTRSvr
2647584AC63561E0FB40F3C267B0B6E10)
0x7f2eb4d6f000 - 0x7f2eb5009000 libMHSCHMNP.so
0x7f2eb521e000 - 0x7f2eb528c000 libMHCATNETS.so

...

0x7ffe9fdbc000 - 0x7ffe9fdbe000 linux-gate.so


Threads:
thread-0 tid=31849
[0] /usr/lib64/libpthread-2.17.so!__libc_sigwait + 0xf1
[1] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x11c9b
[2] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x13158
[3] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x15f21
[4] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x14d73
[5] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x18b95
[6] /usr/lib64/libc-2.17.so!__libc_start_main + 0xf5
[7] /iserver-install/BIN/Linux/bin/MSTRSvr + 0x8399
thread-1 tid=31861
[0] /usr/lib64/libpthread-2.17.so!__pthread_cond_wait + 0xc5
[1] /iserver-install/BIN/Linux/lib/libsmartheap_smp64.so!shi_
waitForEventPageCache [/opt/admin/code/MSTR/3rdParty_Source/SmartHeap/sysunix.c
: 2780 + 0xf]
[2] /iserver-install/BIN/Linux/lib/libsmartheap_smp64.so!shi_pageCacheThread
[/opt/admin/code/MSTR/3rdParty_Source/SmartHeap/smp.c : 1269 + 0x13]
[3] /usr/lib64/libpthread-2.17.so!start_thread + 0xc5
[4] /usr/lib64/libc-2.17.so!__clone + 0x6d

...

thread-63 tid=32352
[0] /usr/lib64/libpthread-2.17.so!__pthread_cond_wait + 0xc5
[1] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::InprocessRecursiveMutex::SmartLoc
k::WaitUntilSpuriouslyWokenUp(pthread_cond_t&) const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../ProtectedSource/InprocessRecursiveMutex.h : 198 + 0x12]
[2] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::ManualEvent::WaitForever() const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../ProtectedSource/ManualEvent.h : 180 + 0xf]
[3] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::EventImpl::WaitForever() const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/EventImpl.cpp : 87 + 0x10]
[4] /iserver-install/BIN/Linux/lib/libMJThread.so!MSIThread::Run()
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Kernel/SourceCode/MSIThre
ad/MSIThread.cpp : 603 + 0x5]
[5] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::RunnableProxyImpl::Run()


[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../../Defines/RunnableProxyImpl.h : 93 + 0x5]
[6] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::ThreadImpl::ThreadFunction(void*)
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/ThreadImpl.cpp : 162 + 0x5]
[7] /usr/lib64/libpthread-2.17.so!start_thread + 0xc5
[8] /usr/lib64/libc-2.17.so!__clone + 0x6d

Example Diagnostic Report

Crash info:
"attributes": {
"server_sid": "E5A0BB539945ACC7C8D4C586D267C9F6",
"production_env": "false",
"license_key": "1234567890abcdefghijklmnopqrstuvwxyz",
"total_cpu": "4",
"total_ram": "16775131136",
"sys_version": "4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021",
"sys_name": "Linux",
"hostname": "CentOS-DEBUG-10-244-20-234",
"customer_dsi": "0123",
"purpose": "ABA",
"reason": "server_stop",
"version": "11.3.0560.01287",
}

Automated Crash Report and Diagnostics Configuration


Intelligence Server uses the crash_report.ini file, located under its current working directory, to configure the behavior of the Automated Crash Reporting and Diagnostics Tool. Below are the default locations:

l Windows: C:\Program Files (x86)\MicroStrategy\Intelligence Server

l Linux: /opt/mstr/MicroStrategy/IntelligenceServer/crash_report.ini

The crash_report.ini file

The crash_report.ini configuration file has two sections, as shown below:


[Config]
ServerURL=https://submit.backtrace.io/microstrategy/35d272ef647f8ec00f3560749e6457bb7a131fed4418f9b48adad8e113e0dca8/minidump
ServerProdURL=https://submit.backtrace.io/microstrategy/35d272ef647f8ec00f3560749e6457bb7a131fed4418f9b48adad8e113e0dca8/minidump
dump_path=crash_dumps
enable=true
diagnostics=true
native_dump=true
keep_dump_file=true

[CrashAttachments]
DSSErrors.log=true
cubehealthchecker.log=true

How to Disable the Automated Crash Reporting and Diagnostics Tool

The Automated Crash Reporting and Diagnostics Tool is enabled by default when installing MicroStrategy. To disable it:

1. Open the crash_report.ini file in a text editor.

2. In the [Config] section of the file:

l To disable crash reporting, set the enable parameter to false.

l To disable diagnostics reporting, set the diagnostics parameter to false.

This configuration is available starting in MicroStrategy 2021 Update 5.

3. Save the crash_report.ini file.

4. Restart the Intelligence Server to apply the changes.
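Because crash_report.ini uses plain INI syntax, the same toggle can be scripted. The sketch below is illustrative only: the sample content is abbreviated from the file shown earlier (with a placeholder URL), and in practice you would read and rewrite the real file path instead of an in-memory string:

```python
import configparser
import io

# Abbreviated stand-in for crash_report.ini; the URL is a placeholder.
sample = """[Config]
ServerURL=https://example.invalid/minidump
dump_path=crash_dumps
enable=true
diagnostics=true

[CrashAttachments]
DSSErrors.log=true
"""

config = configparser.ConfigParser()
config.optionxform = str          # keep key case (e.g. ServerURL) intact
config.read_string(sample)

config["Config"]["enable"] = "false"       # disable crash reporting
config["Config"]["diagnostics"] = "false"  # disable diagnostics reporting

out = io.StringIO()
config.write(out, space_around_delimiters=False)  # match key=value style
updated = out.getvalue()
```

After writing the updated file back to disk, restart Intelligence Server as described in the steps above.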

How to Manually Upload the Minidump File to MicroStrategy

The Automated Crash Reporting and Diagnostics Tool automatically uploads results to a remote server for analysis. Should it be necessary to manually upload a crash report to MicroStrategy, perform the following steps:


1. Copy the dump file to a machine with outbound communication enabled on port 443.

2. Open a Command Line/Terminal and enter the following command:

curl -X POST -F "upload_file_minidump=@<dump_file_path>" -F "customer_dsi=<dsi>" -F "attachment_DSSErrors.log=@<dsserrors_log_path>" <ServerProdURL>

l <dump_file_path> is the path to the copied dump file.

l <dsi> is the customer identifier.

l <dsserrors_log_path> is the path to your DSSErrors.log file. This is optional; you can remove the -F "attachment_DSSErrors.log=@<dsserrors_log_path>" argument to exclude DSSErrors.log.

l <ServerProdURL> is the value of the ServerProdURL key in the crash_report.ini configuration file.

Windows Example

curl -X POST -F "upload_file_minidump=@C:\Program Files (x86)\MicroStrategy\Intelligence Server\crash_dumps\ABCDEFG.dmp" -F "customer_dsi=1234" -F "attachment_DSSErrors.log=@C:\Program Files (x86)\Common Files\MicroStrategy\logs\DSSErrors.log" https://submit.backtrace.io/microstrategy/<token_id>/minidump

Linux/MacOS Example

curl -X POST -F "upload_file_minidump=@/var/opt/MicroStrategy/IntelligenceServer/crash_dumps/ABCDEFG.dmp" -F "customer_dsi=1234" -F "attachment_DSSErrors.log=@/var/log/MicroStrategy/DSSErrors.log" https://submit.backtrace.io/microstrategy/<token_id>/minidump

3. If successful, you receive a 200 response:

{"response":"ok","_rxid":"68000000-cde8-6f03-0000-000000000000"}
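When the upload needs to be automated, the curl invocation above can be assembled programmatically. This is a hypothetical convenience wrapper; the argument names mirror the form fields documented above, and the resulting list is intended for subprocess.run on a machine with port 443 open:

```python
def build_upload_command(dump_path, dsi, server_prod_url, errors_log=None):
    """Build the manual-upload curl command as a subprocess argument list."""
    cmd = [
        "curl", "-X", "POST",
        "-F", "upload_file_minidump=@" + dump_path,
        "-F", "customer_dsi=" + dsi,
    ]
    if errors_log:  # the DSSErrors.log attachment is optional
        cmd += ["-F", "attachment_DSSErrors.log=@" + errors_log]
    cmd.append(server_prod_url)  # ServerProdURL from crash_report.ini
    return cmd
```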


Select Environment Type for Crash Reports


The Automated Crash Reporting and Diagnostics Tool determines whether a crash occurred, or diagnostics information was collected, in a production environment, and uploads this designation together with the generated crash report and diagnostics information. This helps MicroStrategy correctly understand the impact on the customer and prioritize the investigation accordingly. The correct environment type should be selected when activating a MicroStrategy installation. To select the correct environment:

1. Execute the product activation workflow according to the steps in Activating Your Installation Using License Manager.

2. When prompted for installation information, select the correct environment type under the Use section. For example, to select a production environment, go to the License Administration tab > Use and select Production.

3. Click Next and finish the product activation.

4. Restart the Intelligence Server to apply the newly selected environment type and activation settings.

Verifying Reports and Documents with Integrity Manager


MicroStrategy Integrity Manager is an automated comparison tool designed
to streamline the testing of MicroStrategy reports and documents in projects.
This tool can determine how specific changes in a project environment, such
as the regular maintenance changes to metadata objects or hardware and
software upgrades, affect the reports and documents in that project.

For instance, you may want to ensure that the changes involved in moving
your project from a development environment into production do not alter
any of your reports. Integrity Manager can compare reports in the
development and the production projects, and highlight any differences. This
can assist you in tracking down discrepancies between the two projects.

You can use Integrity Manager to execute reports or documents from a single MicroStrategy project to confirm that they remain operational after changes to the system. Integrity Manager can execute any or all reports from the project, note whether those reports execute, and show you the results of each report.

Integrity Manager can also test the performance of an Intelligence Server by recording how long it takes to execute a given report or document. You can execute the reports or documents multiple times in the same test and record the time for each execution cycle, to get a better idea of the average Intelligence Server performance time. For more information about performance tests, see Testing Intelligence Server Performance, page 1576.

For reports, you can test and compare the SQL, grid data, graph, Excel, or PDF output. For documents, you can test and compare the Excel or PDF output, or test whether the documents execute properly. If you choose not to test and compare the Excel or PDF output, no output is generated for the documents. Integrity Manager still reports whether the documents executed successfully and how long it took them to execute.

l To execute an integrity test on a project, you must have the Use Integrity
Manager privilege for that project.

l Integrity Manager can only test projects in Server (three-tier) mode. Projects in Direct Connection (two-tier) mode cannot be tested with this tool.

l To test the Excel export of a report or document, you must have Microsoft
Excel installed on the machine running Integrity Manager.

Enterprise Manager
MicroStrategy Enterprise Manager helps you analyze Intelligence Server
statistics. Enterprise Manager provides a prebuilt MicroStrategy project with
more than a hundred reports and dashboards covering all aspects of
Intelligence Server operation. You can also use Enterprise Manager's
prebuilt facts and attributes to create your own reports so you can have
immediate access to the performance and system usage information.

For steps on setting up Enterprise Manager and using the reports in it, see the Enterprise Manager Help.


TUNE YOUR SYSTEM FOR THE BEST PERFORMANCE


Tuning a MicroStrategy system is not an exact science. Because your system resources, application performance, and user requirements and expectations are unique, it is not possible for MicroStrategy to include an exact methodology or set of recommendations for optimization.

One of your most important jobs as a MicroStrategy system administrator is to find the balance that maximizes the use of your system's capacity to provide the best performance possible for the required number of users. This section discusses how to analyze your users' requirements, and the ways you can configure and tune your system to meet those requirements.

Tuning: Overview and Best Practices


To get the best performance out of your MicroStrategy system, you must be
familiar with the characteristics of your system and how it performs under
different conditions. In addition to this, you need a plan for tuning the
system. For example, you should have a base record of certain key
configuration settings and performance measures, such as Enterprise
Manager reports or diagnostics logs, before you begin experimenting with
those settings. Make one change at a time and test the system performance.
Compare the new performance to the base and see if it improved. If it did not
improve, change the setting back to its previous value. This way, when
system performance improves, you know which change is responsible.
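The one-change-at-a-time approach described above amounts to a simple loop: record a baseline, apply a single change, re-measure, and keep the change only if the metric improved. The following Python sketch is purely illustrative; the settings dictionary and the measure() callback stand in for whatever configuration values and benchmark (for example, an Enterprise Manager report metric) you actually record:

```python
def tune(settings, candidates, measure):
    """Try each candidate change alone; keep it only if the metric improves.

    settings   -- dict of configuration values (hypothetical)
    candidates -- list of (key, new_value) changes to try, one at a time
    measure    -- callback returning a score where higher is better
    """
    baseline = measure(settings)
    for key, new_value in candidates:
        old_value = settings[key]
        settings[key] = new_value      # apply exactly one change
        score = measure(settings)
        if score > baseline:           # improved: keep it and rebase
            baseline = score
        else:                          # no improvement: revert the change
            settings[key] = old_value
    return settings, baseline
```

Because each change is applied and measured in isolation, any improvement in the final score can be attributed to a specific setting, which is the point of keeping a base record before experimenting.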

The specifications of the machines that you use to run Intelligence Server,
how you tune those machines, and how they are used depend on the number
of users, number of concurrently active users, their usage patterns, and so
on. MicroStrategy provides up-to-date recommendations for these areas on
the MicroStrategy Knowledge Base.

As a high-level overview of tuning the system, you should first define your
system requirements, and then configure the system's design using those
requirements. The following topics lay the foundation for the specific tuning
guidelines that make up the rest of this section.


l Define the System Requirements, page 894

l Configure the System Design, page 895

l Best Practices for Tuning your System, page 1024

Define the System Requirements


You most likely have certain expectations or requirements that the system
must meet for it to be considered a success. For example, you may have a
set of requirements similar to one of these scenarios:

l Global Web-based deployment for 400 users with 15-second response time for prompted reports and the ability to subscribe to personalized weekly sales reports.

l Internal deployment for 200 market research analysts accessing an enterprise data warehouse on a completely ad hoc basis.

l Web-based deployment for 1,500 remote users with access to pre-defined daily sales and inventory reports with 5-second response time.

These scenarios share common requirements that can help you define your
own expectations for the system, such as the following:

l You may require that the system be able to handle a certain number of
concurrent users logged in, or a certain number of active users running
reports and otherwise interacting with the system.

l You may require a certain level of performance, such as report results returning to the users within a certain time, or that the results of report manipulation happen quickly, or that a certain number of reports can be run within an hour or within a day.

l You may require that users have access to certain features, such as
scheduling a report for later execution, or sending a report to someone
else via email, or that your users will be able to access their reports online
through MicroStrategy Web.


l You may require that certain functionality be available in the system, such as allowing report flexibility so users can run ad hoc, predefined, prompted, page-by, or Intelligent Cube reports.

Configure the System Design


It is important to understand that the MicroStrategy business intelligence
system has a limited capacity. It cannot serve an unlimited number of users
and process an unlimited number of jobs in a short time. This capacity can
be thought of as a box shared by the two important goals of serving the
necessary number of user sessions (through which users submit requests)
and maximizing the number of jobs executed (which return results).

The limits that the system encounters may be Intelligence Server machine
capacity, the data warehouse's throughput capacity, or the network's
capacity.

The main factors that affect the system's capacity are:

l The system resources available (including memory)

l The architecture of the system and network

l The design of the reports that are executed

l The configuration of Intelligence Server and projects to determine how system resources can be used

The diagram below illustrates these factors that influence the system's
capacity.


UNIX and Linux systems allow processes and applications to run in a virtual
environment. Intelligence Server Universal installs on UNIX and Linux
systems with the required environment variables set to ensure that the
server's jobs are processed correctly. However, you can tune these system
settings to fit your system requirements and improve performance. For more
information, see the Planning Your Installation section of the Installation
and Configuration Help.

Configuring Run-Time Capacity Variables


Run-time capacity variables are factors that influence performance and
capacity after Intelligence Server has started. The two run-time capacity
variables are user sessions (see Managing User Sessions, page 1062) and
executing jobs (see Manage Job Execution, page 1080).

These run-time capacity variables are interrelated with system capacity. If you change settings in one, the others are affected. For example, if you place more emphasis on serving more user sessions, job execution may suffer because it does not have as much of the system capacity available to use. Or, if you increase Intelligence Server's capacity, it could execute jobs more quickly or serve more user sessions.

Accessing the System Configuration Editors


Many of the options in the following sections are specified in the Intelligence
Server Configuration Editor or the Project Configuration Editor.

You must have the Configure Governing privilege for the project or project
source.

You must have Configuration permissions for the server object. In addition, to
access the Project Configuration Editor you must have Write permission for the
project object. For more information about server object permissions, see
Permissions for Server Governing and Configuration, page 95.


To Access the Intelligence Server Configuration Editor

1. In Developer, log in to a project source.

2. Go to Administration > Server > Configure MicroStrategy Intelligence Server.

See Intelligence Server Configuration Default Settings for more information about the default settings for Intelligence Server.

To Access the Project Configuration Editor for a Project

1. In Developer, log in to a project source.

2. Expand the project that you want to configure.

3. Go to Administration > Projects > Project Configuration.

See Project Configuration Default Settings for more information about the default settings for projects.

Intelligence Server Configuration Default Settings


The default Intelligence Server configuration settings, defined in the
Intelligence Server Configuration Editor, are provided below.

To access the Intelligence Server Configuration Editor in Developer, right-click the Intelligence Server and select Configure MicroStrategy Intelligence Server.


Server Definition - General

Description (default: Empty)
  Enter a description for the server definition.

Client-Server Communications: Number of network threads (default: 5)
  Enter the number of network threads.

Set object properties (default: Empty)
  Click Modify to open the Properties Configuration dialog box to enter a Server Definition.

Content Server Location: Database Instance (default: <None>)
  Select a database instance.

Server Definition - Security

Account Lock Policy: Lock after (failed attempts) (default: -1)
  Specifies the number of failed login attempts allowed. Once a user has this many failed login attempts in a row, the user is locked out of the MicroStrategy account until an administrator unlocks the account. Setting this value to -1 indicates that users are never locked out of their accounts.

Password Policy: Number of past passwords remembered (default: 0)
  The number of each user's previous passwords that Intelligence Server stores in memory. Intelligence Server prevents users from using a password that is identical to one they have previously used. This option must be greater than 0 to enable the other options.

Password Policy: Do not allow user login and full name in password (default: Unchecked)
  When this option is selected, Intelligence Server ensures that new passwords do not contain the user's login or part of the user's name.

Password Policy: Do not allow rotating characters from last password (default: Unchecked)
  When this option is selected, Intelligence Server prevents users from using a password that is the old password reversed.

Enforce Password Complexity: Minimum number of characters (default: 0)
  The minimum password length. This option must be greater than zero to enable the other options.

Enforce Password Complexity: Minimum upper case characters (default: 0)
  The minimum number of upper case (A-Z) characters that must be present in users' passwords.

Enforce Password Complexity: Minimum lower case characters (default: 0)
  The minimum number of lower case (a-z) characters that must be present in users' passwords.

Enforce Password Complexity: Minimum numeric characters (default: 0)
  The minimum number of numeric (0-9) characters that must be present in users' passwords.

Enforce Password Complexity: Minimum special characters (default: 0)
  The minimum number of non-alphanumeric (symbol) characters that must be present in users' passwords.

Enforce Password Complexity: Minimum number of character changes (default: 0)
  The minimum number of character changes.

Update pass-through credentials when a successful login occurs (default: Checked)
  Select this checkbox to update the user's database or LDAP credentials on a successful MicroStrategy login.

Authentication Policy: Database Authentication (default: Selected)
  Select to use database authentication.

Authentication Policy: LDAP Authentication (default: Unselected)
  Select to use LDAP authentication.

Use Public/Private Key to Sign/Verify Token (default: Unchecked)
  Check this checkbox to use a public or private key to sign or verify a token. This requires the setup of a public or private key.

Token Lifetime (Minutes) (default: 1440)
  The lifetime, in minutes, of the token.

Encryption Level: Hash Iterations (default: 10,000)
  Select the number of hashing iterations.
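Taken together, the Enforce Password Complexity settings describe a straightforward validation routine. The sketch below is illustrative only: the function name and the non-zero rule values are assumptions chosen for the example, not MicroStrategy code or product defaults (the product defaults above are all 0).

```python
def password_meets_policy(password, login,
                          min_length=8, min_upper=1, min_lower=1,
                          min_numeric=1, min_special=1):
    """Illustrative check mirroring the Enforce Password Complexity settings.

    Not a MicroStrategy API; rule values here are example assumptions.
    """
    # "Do not allow user login ... in password"
    if login.lower() in password.lower():
        return False
    # "Minimum number of characters"
    if len(password) < min_length:
        return False
    # Count each character class once, then compare to the minimums.
    counts = {
        "upper": sum(c.isupper() for c in password),
        "lower": sum(c.islower() for c in password),
        "numeric": sum(c.isdigit() for c in password),
        "special": sum(not c.isalnum() for c in password),
    }
    return (counts["upper"] >= min_upper and counts["lower"] >= min_lower
            and counts["numeric"] >= min_numeric
            and counts["special"] >= min_special)
```

A password passes only if it clears every enabled minimum and does not contain the user's login.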

Server Definition - Change Journaling

Configure Change Journaling: Enable Change Journaling (default: Checked)
  Starting with Version 10.8, Change Journaling is permanently enabled. Administrators can manage file size by purging as needed.

Purge Change Journal: Purge all data logged before (default: Current Date and Time)
  In the Purge all data logged before fields, enter a date and time. All change journal data logged before this date and time is purged when you click Purge Now. In the Date field, enter the date. You can also click the drop-down list and select a date from the calendar. To change the month in the calendar, click the < or > arrows. In the Time field, enter the time.

Purge Change Journal: Purge timeout (seconds) (default: 600)
  Enter the timeout setting. This is the length of time Intelligence Server waits for a response when it issues a purge command to the change journal. If there is no response by the time the timeout has elapsed, the purge command is cancelled.

Server Definition - Advanced

Settings: Backup frequency (minutes) (default: 0)
  Controls the frequency (in minutes) at which cache, History List messages, and Intelligent Cubes are backed up to disk. A value of 0 means that cache, History List messages, and Intelligent Cubes are backed up immediately after they are created.

Settings: Balance MicroStrategy Server threads (default: Unchecked)
  Controls whether threads within the Intelligence Server are allocated to the processes (object serving, element serving, SQL generation, and so forth) that need them most, while less loaded processes can return threads to the available pool.

Settings: Cache lookup cleanup frequency (seconds) (default: 0)
  Cleans up the cache lookup table at the specified frequency (in seconds). This reduces the amount of memory it consumes and the time it takes to back up the lookup table to disk.

Settings: Project failover latency (minutes) (default: 30)
  In a clustered system, controls the delay between one server failing and its projects being loaded onto other servers. A high latency period allows more time for the server to come back online before projects are loaded onto other servers; a low latency period can provide less downtime in the event of a server failure, but higher load on the servers while the projects are loading.

Settings: Configuration recovery latency (minutes) (default: -1)
  In a clustered system, controls the amount of time before the system reverts to its original configuration when a server has failed and then come back online. A high latency period allows more time to be certain the server is permanently online; a low latency period relieves the strain on the other machines in the cluster sooner.

Settings: Time to run license check (24 hr format) (default: 23:59)
  Sets the specific time at which Intelligence Server checks licenses daily.

Settings: Include LDAP Users (default: Unchecked)
  Select this option to include LDAP users in the license check.

Settings: User Affinity Cluster (default: Unchecked)
  In a clustered system, causes Intelligence Server to connect all sessions for a given user to the same node of the cluster. This improves performance by reducing the resources necessary for the user's History List. If the User Affinity Cluster checkbox is selected, subscription load balancing causes Intelligence Server to load-balance subscriptions across all nodes of the cluster. One subscription job is created for each user or user group in the subscription. If User A and User Group G are subscribed to a dashboard, the subscription creates one job for User A, and a second job for User Group G.

Settings: Subscription Load Balancing (default: Unchecked)
  Select to enable subscription load balancing. If Subscription Load Balancing is enabled in a two-node cluster, the subscription for User A would execute on one node, and the subscription for User Group G would execute on the other node.

Scheduler: Use MicroStrategy Scheduler (default: Checked)
  Enables the scheduling features in the MicroStrategy environment. If this checkbox is cleared, neither reports nor administration tasks can be scheduled.

Scheduler: Scheduler Session Timeout (sec) (default: 300)
  Controls how long the Scheduler attempts to communicate with Intelligence Server before timing out.

Performance Monitoring: Enable Performance Monitoring (default: Checked)
  Enables Intelligence Server logging to the Microsoft Windows Performance Monitor. Select this checkbox to be able to add counters to the Performance Monitor specifically for MicroStrategy Server Jobs and MicroStrategy Server Users. If you clear the checkbox, you cannot select these options within the Performance Monitor.

Statistics - General

Server Level Statistics: Complete Session Logging (default: Selected)
  Different projects log statistics to different databases. This ensures complete data for all these projects.

Server Level Statistics: Single Instance Session Logging (default: Unselected)
  Statistics for all projects on this Intelligence Server are logged to a single database. From the drop-down list, select the project name whose database you want to log statistics to.

Statistics - Purge

Select dates: From/To (default: Today minus one year / Today)
  Select the date range within which you want the purge operation to be performed.

Purge timeout (sec) (default: 10)
  Specify a timeout setting in seconds; the server uses this setting during the purge operation. The server issues a single SQL statement to purge each statistics table, and the timeout setting applies to each individual SQL statement issued during the purge operation.

Governing Rules - Default - General

Maximum number of jobs (default: 10,000)
  Limits the number of concurrent jobs that may exist on this Intelligence Server. Concurrent jobs include report, element, and autoprompt requests that are executing or waiting to execute. Finished (open) jobs, cached jobs, or jobs that returned errors are not counted. A value of -1 indicates no limit.

Maximum number of interactive jobs (default: -1)
  Limits the number of concurrent interactive (non-scheduled) jobs that may exist on this Intelligence Server. A value of -1 indicates no limit.

Maximum number of scheduled jobs (default: -1)
  Limits the number of concurrent scheduled jobs that may exist on this Intelligence Server. A value of -1 indicates no limit.

Maximum number of user sessions (default: 320)
  Limits the number of user sessions (connections) for an Intelligence Server. A single user account may establish multiple sessions to an Intelligence Server. Each session connects once to the Intelligence Server and once for each project the user accesses. Project sessions are governed separately with a project-level setting. When the maximum number of user sessions is reached, users cannot log in, except for the administrator, who may wish to disconnect current users or increase the governing setting.

User session idle time (sec) (default: 1800)
  Limits the time, in seconds, that Developer users can remain idle before their Developer session is ended. A value of -1 indicates no limit.

Web user session idle time (sec) (default: 600)
  Limits the time, in seconds, that Web users can remain idle before their Web session is ended. A value of -1 indicates no limit.

Mobile APNS and GCM session idle time (sec) (default: 1800)
  Limits the time, in seconds, that mobile client connections remain open to download Newsstand subscriptions. A value of -1 indicates no limit.

For Intelligence Server job and history list... (default: Unchecked)
  If this option is selected, reports executed as part of a Report Services Document are not counted against the server-level job limits above and the project-level job limits set in the Project Configuration Editor.

Enable Background Execution: Enable background execution of documents... (default: Unchecked)
  If this option is selected, when a document cache is hit, Intelligence Server displays the cached document and re-executes the document in the background. If this option is cleared, when a document cache is hit, Intelligence Server displays the cached document and does not re-execute the document until a manipulation is performed.
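The idle-time governors above all share the same shape: a per-client-type limit in seconds, with -1 meaning no limit. A minimal sketch of how such a governor could identify expired sessions (illustrative only; the function and its data layout are assumptions, not a MicroStrategy API):

```python
import time

def expired_sessions(sessions, idle_limit_sec, now=None):
    """Return session IDs whose idle time exceeds the governing limit.

    `sessions` maps a session ID to the timestamp of its last activity.
    A limit of -1 means "no limit", matching the governing settings above.
    (Illustrative sketch only; not a MicroStrategy API.)
    """
    if idle_limit_sec == -1:
        return []
    now = time.time() if now is None else now
    return [sid for sid, last_active in sessions.items()
            if now - last_active > idle_limit_sec]
```

With the default Developer limit of 1800 seconds, a session idle for more than 30 minutes would be flagged, while a -1 limit never flags anything.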


Governing Rules - Default - File Generation

XML Generation: Maximum number of XML cells (default: 500,000)
  Limits the size of reports (rows x columns) when they are executed from MicroStrategy Web. When this limit is met, incremental fetch is used. Note that this setting does not affect reports executed from Developer. However, if the report is part of an HTML document, then when this HTML document is executed from either Web or Developer, the report is cut off, and there is no incremental fetch. The maximum value that you can specify is 9999999.

XML Generation: Maximum number of XML drill paths (default: 100)
  Limits the number of attributes to which users can drill in MicroStrategy Web. Attributes are displayed under the hierarchy to which they belong, and hierarchies are displayed in alphabetical order by the name of the hierarchy. If this setting is set to a low number, the available drill attributes may not all be displayed to the user. However, if it is set too high, performance may be affected because reports will consume more memory. The maximum value that you can specify is 3000.

XML Generation: Maximum memory consumption for XML files (MB) (default: 512)
  Limits the memory consumption for XML files. Set the limit according to the expected size of the XML documents to be generated to avoid memory-related errors. The maximum value is 2047 MB.

PDF Generation: Maximum memory consumption for PDF files (MB) (default: 512)
  Limits the memory consumption for PDF files. Set the limit according to the expected size of the PDF documents to be generated to avoid memory-related errors. The maximum value is 2047 MB.

Excel Generation: Maximum memory consumption for Excel files (MB) (default: 512)
  Limits the memory consumption for Excel files generated from MicroStrategy Web. Set the limit according to the expected size of the Excel documents to be generated to avoid memory-related errors. The maximum value is 2047 MB.

HTML Generation: Maximum memory consumption for HTML files (MB) (default: 512)
  Limits the memory consumption for HTML files. Set the limit according to the expected size of the HTML documents to be generated to avoid memory-related errors. The maximum value is 2047 MB.

Governing Rules - Default - Memory Settings

Enable Web request job throttling (default: Checked)
  Job throttling affects MicroStrategy Web requests only. If either of the following conditions is met, all MicroStrategy Web requests of any nature (login, report execution, search, and folder browsing) are denied until the conditions are resolved.

Enable Web request job throttling: Maximum Intelligence Server use of total memory (%) (default: 97)
  Sets the maximum amount of total system memory (RAM + Page File) that the Intelligence Server process can use compared to the total amount of memory on the machine.

Enable Web request job throttling: Minimum machine free physical memory (%) (default: 0)
  Sets the minimum amount of RAM that must be available compared to the total amount of physical memory on the machine.

Enable single memory allocation governing (default: Unchecked)
  Select to specify how much memory can be reserved for a single Intelligence Server operation at a time.

Enable single memory allocation governing: Maximum single allocation size (MB) (default: 20)
  Each memory request is compared to the value in this field. If the request exceeds this limit, the request is denied. Example: If the allocation limit is set to 100 MB and a request is made for 120 MB, the request is denied, but a later request for 95 MB is allowed. Each request is handled independently of other requests and is resolved in the order it is received. The maximum value allowed for the Maximum single allocation size (MBytes) setting is 2047 MB.

Enable memory contract management (default: Checked)
  Enables the use of the Memory Contract Manager, an Intelligence Server component that is controlled by the following settings.

Enable memory contract management: Minimum reserved memory (MB) (default: 0)
  Specify the amount of memory (in MB) that cannot be used by Intelligence Server. This may be useful if the machine is also used to run software from other parties and is not solely dedicated to Intelligence Server. The maximum value allowed for this setting is 10239 MB.

Enable memory contract management: Minimum reserved memory (%) (default: 10)
  Specify the percentage of memory that cannot be used by Intelligence Server. This may be useful if the machine is also used to run software from other parties and is not solely dedicated to Intelligence Server.

Enable memory contract management: Maximum use of virtual address space (%) (default: 90)
  If the Intelligence Server machine uses a 32-bit operating system, specify the maximum amount of the virtual address space that Intelligence Server can use, as a percentage of the total virtual address space. Memory requests that would cause Intelligence Server to exceed this limit are denied.

Enable memory contract management: Memory request idle time (sec) (default: 300)
  The amount of time Intelligence Server denies requests while operating in memory request idle mode. Memory request idle mode is enabled by the Memory Contract Manager when a memory request would cause Intelligence Server to exceed the Maximum use of virtual address space setting, based on current memory utilization and contracts.

Temporary Storage Settings: Maximum RAM for Working Set cache (MB) (default: 2048)
  Specifies the size of the pool of memory (in megabytes) allocated for creating and initially storing reports in the working set. This is also the size of the largest working set that can be created. For 32-bit operating systems, the maximum value is 2048 megabytes (2 gigabytes).
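The single-allocation governing described above is stateless per request: each memory request is compared to the ceiling on its own, in arrival order. A minimal sketch (illustrative only; the function is an assumption for the example, not MicroStrategy code):

```python
def allow_allocation(request_mb, max_single_allocation_mb):
    """Single memory allocation governing: compare each request to the
    configured ceiling independently, so denying a large request does not
    affect a later, smaller one. (Illustrative sketch, not MicroStrategy code.)
    """
    return request_mb <= max_single_allocation_mb
```

With a 100 MB ceiling this reproduces the table's example: a 120 MB request is denied, while a later 95 MB request is allowed.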


Governing Rules - Default - Temporary Storage Settings

Temporary Storage Settings: Working Set file directory (default: .\TmpPool)
  Specifies the location where the user's active working sets are written to disk if they have been forced out of the pool of memory allocated for the Maximum RAM for Working Set cache.

Session Recovery and Deferred Inbox: storage directory (default: .\Inbox\SERVER_DEFINITION_NAME\)
  Specifies where the session information is written to disk.

Session Recovery and Deferred Inbox: Enable Web User Session Recovery on Logout (default: Checked)
  Select this checkbox to allow users to recover the report, document, or dashboard they were on when their session ended.

Session Recovery and Deferred Inbox: Session Recovery backup expiration (hrs) (default: 168)
  How many hours a session backup can remain on disk before it is considered expired. After it is expired, the user cannot recover the session.


Governing Rules - Default - Import Data

Number of connections by priority: High (default: 1), Medium (default: 1), Low (default: 20)
  If you have an OLAP Services license, you can import data from sources such as Excel spreadsheets into your MicroStrategy system. This data is made available in Intelligent Cubes and can be used in reports and documents. See the Project Design Help for more information about importing data from other file or database sources.
  A connection thread is assigned to each import data job. These connection threads are assigned from a pool of threads of high, medium, and low priority. You can configure the number of threads of each priority that are available for importing data in the Import Data section of the Intelligence Server Configuration Editor. For more information about how Intelligence Server determines what jobs are high, medium, or low priority, see Creating job prioritizations.
  In this category, you can specify the maximum number of high, medium, and low priority connection threads that are used to import data into the data warehouse.

Governing Rules - Default - Catalog Cache

Enable catalog cache (default: Checked) / Maximum use of memory cache (MB) (default: 25)
  The Catalog cache is a cache for the catalog of the data warehouse database. When the Catalog cache is enabled, the client can retrieve the data warehouse catalog directly from Intelligence Server memory, instead of executing catalog SQL. This can significantly speed up retrieval of the data warehouse catalog. The Catalog cache is used during data import, and when using Query Builder in MicroStrategy Web.
  The Catalog cache subcategory has the following settings:
  Enable catalog cache: Select this checkbox to enable the catalog cache, or clear it to disable the catalog cache.
  Maximum use of memory (MB): Limits the maximum amount of memory, in megabytes, used by the catalog cache.
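The catalog cache trades a bounded amount of Intelligence Server memory for avoided catalog SQL. The sketch below shows one way a memory-budgeted cache of this kind can behave, evicting the oldest entries once the budget is exceeded; it illustrates the idea only and is not how Intelligence Server implements its catalog cache.

```python
from collections import OrderedDict

class BoundedCatalogCache:
    """Illustrative size-capped cache in the spirit of the catalog cache:
    entries are served from memory instead of re-running catalog SQL, and
    the oldest entries are evicted once the memory budget is exceeded.
    (A sketch only; not MicroStrategy's implementation.)"""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self._entries = OrderedDict()  # key -> (value, size)

    def put(self, key, value, size):
        # Replace an existing entry, then charge the new size to the budget.
        if key in self._entries:
            self.used -= self._entries.pop(key)[1]
        self._entries[key] = (value, size)
        self.used += size
        # Evict oldest entries (FIFO) until the budget is respected.
        while self.used > self.max_bytes and self._entries:
            _, (_, evicted_size) = self._entries.popitem(last=False)
            self.used -= evicted_size

    def get(self, key):
        entry = self._entries.get(key)
        return entry[0] if entry else None
```

Raising the memory ceiling keeps more of the warehouse catalog resident, at the cost of Intelligence Server memory, which mirrors the Maximum use of memory (MB) trade-off above.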

Projects - General

Project Name (default: Checked)
  A table of projects indicating whether your server and environment are selected.

Show selected projects only (default: Unchecked)
  When selected, this allows you to display only those projects that have been assigned to be loaded on a node. For display purposes, it filters out projects that are not loaded on any server in the cluster.

Apply startup configuration on save (default: Checked)
  When selected, this immediately applies your changes across the cluster. If cleared, any changes you made are saved when you click OK, but are not put into effect until the Intelligence Server is restarted.


Clustering - General

Server Name (default: Unchecked) / Load Balance Factor (default: 1)
  The Clustering category displays the names of all the servers that are available in a clustered environment. You can select multiple servers to form the cluster during a manual restart of the MicroStrategy Intelligence Server. See the Cluster Multiple MicroStrategy Servers topic for more information about clustering.

LDAP - Server

LDAP Server Settings: Host (Server Name or IP Address) (default: Empty)
  The host name or the IP address of the LDAP server. This is a required field.

LDAP Server Settings: Port (default: 389)
  Port number of the LDAP server. Port 389 is the default for clear text LDAP, and port 636 is the default for SSL. However, your LDAP administrator may have set the LDAP port to a different number than the default; always confirm the accurate port number with your LDAP administrator.

LDAP Server Settings: Clear text (not encrypted) (default: Selected)
  Clear text is not encrypted.

LDAP Server Settings: SSL (encrypted) (default: Unselected)
  Select to encrypt the connection with SSL.

LDAP Server Settings: Server certificate file (default: Empty)
  If you select SSL (encrypted), select the server certificate file. The certificate information required for the server certificate file field depends on the LDAP server vendor (product) being used, since the certificate comes from the LDAP server.

LDAP Server Settings: Use connection pooling (default: Unchecked)
  With connection pooling, you can reuse an open connection to the LDAP server for subsequent operations. The connection to the LDAP server remains open even when the connection is not processing any operations. This setting can improve performance by removing the processing time required to open and close a connection to the LDAP server for each operation.

LDAP Server Settings: Binding (default: Selected)
  MicroStrategy uses authentication binding to authenticate users on user name, password, and several other account restrictions. Account restrictions include whether the account is locked, expired, disabled, or identified as an intruder.

LDAP Server Settings: Password comparison (default: Unselected)
  MicroStrategy authenticates users on user name and password only.

Authentication User: Distinguished name (DN) (default: Empty)
  The distinguished name for the trusted LDAP Authentication User who searches the LDAP repository.

Authentication User: Password (default: Empty)
  Password for the LDAP Authentication User.

Copyright © 2024 All Rights Reserved 916


Syst em Ad m in ist r at io n Gu id e

LDAP - Platform

LDAP server vendor name (default: Novell)
  Select the vendor name of the LDAP server software that Intelligence Server is connecting to, from the drop-down list. Options include HP-UX, IBM, Microsoft Active Directory, Novell, Open LDAP, Other providers, and Sun ONE/iPlanet. When you select the vendor name, the default LDAP connectivity file names are populated in the interface.

Vendor LDAP Connectivity Driver (default: Novell)
  Select the vendor of the LDAP Connectivity Driver that Intelligence Server uses, from the drop-down list. Options include HP-UX, IBM, Microsoft Active Directory, Novell, Open LDAP, Other providers, and Sun ONE/iPlanet. When you select the LDAP Connectivity Driver vendor, the default LDAP connectivity file names are populated in the interface.

Intelligence Server platform (default: Windows)
  Select the operating system Intelligence Server is installed on, from the drop-down list. This is the operating system on which the LDAP Connectivity Driver and connectivity files should be installed.

LDAP connectivity file names (default: Ldapsdk.dll;Ldapssl.dll;)
  Enter the LDAP connectivity file name(s) for the LDAP server. The correct DLLs are automatically populated when the LDAP Connectivity Driver and Intelligence Server platform are selected. It should not be necessary to change the default value. You must separate multiple DLL names with semicolons (;). See Identifying Users: Authentication for recommended DLLs and information on LDAP Connectivity Drivers.

LDAP - Filters

Search Settings: Search root distinguished name (DN) (default: Empty)
  Provide the root DN to establish the directory location from where in the LDAP tree to start any user and group searches. If a root DN is not provided, Intelligence Server searches the entire LDAP tree. You can think of the root DN as the highest level in the LDAP tree where the search can reach: distinguished name nodes under the search root are included in the search, while nodes that are not under the search root distinguished name are excluded.

Search Settings: User search filter (default: Empty)
  Enter the user search filter to search for lists of users in the LDAP directory. Default information appears automatically based on the vendor name provided in the Platform Connectivity step of this wizard; this default is only an example. Contact your LDAP administrator for the appropriate values to enter.
  The user search filter is generally in the following form:
    (&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=#LDAP_LOGIN#))
  Where:
  LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For example, you can enter (&(objectclass=person)(cn=#LDAP_LOGIN#)).
  LDAP_LOGIN_ATTR indicates which LDAP attribute to use to store LDAP logins. For example, you can enter (&(objectclass=person)(cn=#LDAP_LOGIN#)).
  #LDAP_LOGIN# can be used in this filter to represent the LDAP user login.
  Note: Depending on your LDAP server vendor and your LDAP tree structure, you may need to try different attributes within the search filter syntax above. For example:
    (&(objectclass=person)(uniqueID=#LDAP_LOGIN#))
  where uniqueID is the LDAP attribute name your company uses for authentication.

Search Settings: Group search filter (default: Empty)
  Enter the group search filter to search for lists of LDAP groups that LDAP users belong to. Default information automatically appears based on the vendor name provided in the Platform Connectivity step of this wizard; this default is only an example. Contact your LDAP administrator for the appropriate values to enter.
  The group search filter is generally in one of the following forms (or the following forms may be combined with a pipe | symbol):
    (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_LOGIN_ATTR=#LDAP_LOGIN#))
    (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_DN_ATTR=#LDAP_DN#))
    (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(gidNumber=#LDAP_GIDNUMBER#))
  The group search filter forms listed above have the following placeholders:
  LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
  LDAP_MEMBER_[LOGIN or DN]_ATTR indicates which LDAP attribute of an LDAP group is used to store LDAP logins/DNs of the LDAP users. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
  #LDAP_DN# can be used in this filter to represent the distinguished name of an LDAP user.
  #LDAP_LOGIN# can be used in this filter to represent an LDAP user's login (for Intelligence Server version 8.0.1 and later).
  #LDAP_GIDNUMBER# can be used in this filter to represent the UNIX group ID number; this corresponds to the LDAP attribute gidNumber (for Intelligence Server version 8.0.1 and later).

Search Settings: Number of nested group levels above to import (default: 1)
  Click the up or down arrows to specify how many LDAP groups to import into MicroStrategy when the user or group is imported.

Connection: Test Connection (default: Unselected)
  Click Test Connection to test the connection to the LDAP server. You will be prompted for a login and password to test the connection.
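At search time, the #LDAP_LOGIN#, #LDAP_DN#, and #LDAP_GIDNUMBER# placeholders in these filter templates are replaced with the values for the user in question before the filter is sent to the LDAP server. A minimal sketch of that substitution step (illustrative only; the function is not a MicroStrategy API, and production code must also escape special characters in the values per RFC 4515 to avoid filter injection):

```python
def expand_ldap_filter(template, **values):
    """Replace #PLACEHOLDER# tokens in an LDAP search filter template.

    Illustrative sketch of the substitution step only; real code must
    escape the values per RFC 4515 before building the filter.
    """
    for name, value in values.items():
        template = template.replace("#%s#" % name, value)
    return template
```

For example, the template (&(objectclass=person)(cn=#LDAP_LOGIN#)) becomes (&(objectclass=person)(cn=jsmith)) for the login jsmith.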

LDAP - Schedules

Run schedules on save (default: Unchecked)
  Along with setting up synchronization schedules, you can synchronize your MicroStrategy users and groups with the latest LDAP users and groups immediately after clicking OK to accept your changes and exit the Intelligence Server Configuration Editor. Users and groups are synchronized using the user and group search filters you defined in the Intelligence Server Configuration Editor: LDAP category, Filters.

LDAP - Import - Import/Synchronize

Import/Synchronize at Login:

Import Users (Default: Checked)
Select this checkbox to indicate that Intelligence Server should import LDAP users into the MicroStrategy metadata as MicroStrategy users when users log in.

Synchronize MicroStrategy User Login/User Name with LDAP (Default: Checked)
Select this checkbox to indicate that Intelligence Server should synchronize the users that are already in the metadata directory each time a new user logs in.

Import Groups (Default: Checked)
Select this checkbox to indicate that Intelligence Server should import the LDAP groups to which each imported LDAP user belongs into the MicroStrategy metadata as MicroStrategy groups when users log in.

Synchronize MicroStrategy Group Name with LDAP (Default: Checked)
Select this checkbox to indicate that Intelligence Server should synchronize the groups that are already in the metadata directory each time a user logs in.

Import/Synchronize in Batch:

Import Users (Default: Checked)
Select this checkbox to indicate that Intelligence Server should import a list of LDAP users into the MicroStrategy metadata as MicroStrategy users in batch.

Synchronize MicroStrategy User Login/User Name with LDAP (Default: Checked)
Select this checkbox to indicate that Intelligence Server should synchronize the users that are already in the metadata directory when users are imported in batch.

Enter search filter for importing list of users (Default: Empty)
Enter a user search filter to return a list of users to import in batch. You should contact your LDAP administrator for the proper user search filter syntax.

A user search filter is generally of the following form:

(&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=SEARCH_STRING))

The user search filter form given above has the following placeholders:

LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For example, you can enter (&(objectclass=person)(cn=h*)).

LDAP_LOGIN_ATTR indicates which LDAP attribute to use to store LDAP logins. For example, you can enter (&(objectclass=person)(cn=h*)).

SEARCH_STRING indicates the search criteria for your user search filter. You must match the correct LDAP attribute for your search filter. For example, you can search for all users with an LDAP user login that begins with the letter h by entering (&(objectclass=person)(cn=h*)).

Note: Depending on your LDAP server vendor and your tree structure created within LDAP, you may need to try different attributes within the search filter syntax above. For example:

(&(objectclass=person)(uniqueID=SEARCH_STRING))

where uniqueID is the LDAP attribute your company uses for authentication.
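The general filter form above can be sketched in Python. The helper below is illustrative only, and the attribute names (cn, uniqueID) are examples rather than values your directory necessarily uses:

```python
def build_user_filter(object_class, login_attr, search_string):
    """Assemble a batch-import user search filter of the general form
    (&(objectclass=<class>)(<login attr>=<search string>)).

    Attribute names are placeholders; your LDAP administrator supplies
    the real ones.
    """
    return f"(&(objectclass={object_class})({login_attr}={search_string}))"

# All users whose cn begins with the letter h:
print(build_user_filter("person", "cn", "h*"))
# A directory that authenticates on a custom attribute:
print(build_user_filter("person", "uniqueID", "h*"))
```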

Import Groups (Default: Checked)
Select this checkbox to indicate that Intelligence Server should import groups into the MicroStrategy metadata.

Synchronize MicroStrategy Group Name with LDAP (Default: Checked)
Select this checkbox to indicate that Intelligence Server should synchronize the groups that are already in the metadata directory when groups are imported in batch.

Enter search filter for importing list of groups (Default: Empty)
Enter a group search filter to return a list of groups to import in batch. You should contact your LDAP administrator for the proper group search filter syntax. A group search filter is generally of the following form:

(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_GROUP_ATTR=SEARCH_STRING))

The group search filter form given above has the following placeholders:

LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For example, you can enter (&(objectclass=groupOfNames)(cn=h*)).

LDAP_GROUP_ATTR indicates which LDAP attribute of an LDAP group is searched to retrieve a list of groups. For example, you can enter (&(objectclass=groupOfNames)(cn=h*)).

SEARCH_STRING indicates the search criteria for your group search filter. You must match the correct LDAP attribute for your search filter. For example, you can search for all groups with an LDAP group name that begins with the letter h by entering (&(objectclass=groupOfNames)(cn=h*)).
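Because hand-written filters are easy to mistype, a quick parenthesis-balance check can save a round trip to your LDAP administrator. The following Python snippet is an illustrative check, not part of MicroStrategy:

```python
def parens_balanced(ldap_filter):
    """Sanity-check a hand-written search filter: every '(' must have
    a matching ')'. This catches typos such as a filter missing its
    leading parenthesis."""
    depth = 0
    for ch in ldap_filter:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # a ')' appeared before its '('
                return False
    return depth == 0

print(parens_balanced("(&(objectclass=groupOfNames)(cn=h*))"))   # True
print(parens_balanced("&(objectclass=groupOfNames)(cn=h*))"))    # False
```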

LDAP - Import - User/Group

Import user login as:

User login (Default: Selected)
Sets the MicroStrategy user login to be the same as the LDAP user login, when the MicroStrategy user is created at import.

Distinguished name (Default: Unselected)
Sets the MicroStrategy user login to be the same as the user's LDAP distinguished name, when the MicroStrategy user is created at import.

Other (Default: Unselected)
You can provide a different LDAP attribute than the two listed above to be imported and used as the MicroStrategy user login, when the MicroStrategy user is created at import. Your LDAP administrator should provide you with the appropriate LDAP attribute to be used as the user login.

Import user name as:

User name (Default: Selected)
Sets the MicroStrategy user name to be the same as the LDAP user name, when the MicroStrategy user is created at import.

Distinguished name (Default: Unselected)
Sets the MicroStrategy user name to be the same as the user's LDAP distinguished name, when the MicroStrategy user is created at import.

Other (Default: Unselected)
You can provide a different LDAP attribute than the two listed above to be imported and used as the MicroStrategy user name, when the MicroStrategy user is created at import. Your LDAP administrator should provide you with the appropriate LDAP attribute to be used as the user name.

Import group name as:

Group name (Default: Selected)
Sets the MicroStrategy group names to be the same as the group names in the LDAP server for the groups imported into MicroStrategy.

Distinguished name (Default: Unselected)
Sets the MicroStrategy group names to be the same as the distinguished names in the LDAP server for the groups imported into MicroStrategy.

Other (Default: Unselected)
You can provide an LDAP attribute to be imported and used as the MicroStrategy group name, when MicroStrategy users are imported and created along with the users' groups. Your LDAP administrator should provide you with the appropriate LDAP attribute to be used as the group name.

LDAP - Import - Options

Synchronize user/group information with LDAP during Windows authentication and import Windows link during Batch Import (Default: Unchecked)
Select this checkbox to use LDAP with Windows authentication. By creating a link between a Windows system login, an LDAP user, and a MicroStrategy user, a single login into the machine authenticates the user for the machine as well as in MicroStrategy.

To support this option, the LDAP Server must be configured as the Microsoft Active Directory Server domain controller, which stores the Windows system login information.

See Identifying Users: Authentication for more information on Windows authentication.

Synchronize user/group information with LDAP during Trusted authentication (Default: Unchecked)
Select this checkbox to use an LDAP-based single sign-on system. See Identifying Users: Authentication for more information on single sign-on.

Batch import Integrated Authentication/Trusted Authentication unique ID (Default: Unchecked)

Use default LDAP attribute ('userPrincipalName') (Default: Selected)
If you are importing users in a batch using an LDAP-based single sign-on system, select this checkbox to specify the unique ID to use in identifying the users. If you select this checkbox, specify whether to use the default LDAP name attribute userPrincipalName (the default selection) or another LDAP attribute.

Other (type in the value) (Default: Unselected)
If you are using another LDAP attribute, enter it here.

Import email address (Default: Unchecked)
If you are importing LDAP users, either in a batch or at login, select this option to import email addresses associated with those users as MicroStrategy Distribution Services contacts.

Use default LDAP attribute ('mail') (Default: Selected)
Specify whether to use the default LDAP email attribute mail (the default selection) or another LDAP attribute.

Other (type in the value) (Default: Unselected)
If you are using another LDAP attribute, enter it here.

Device email (Default: Address Properties - Generic)
If you choose to import email addresses, the imported email address becomes the default email address. This overwrites the existing default email address, if one exists.

LDAP - Import - Attributes

User login fails if LDAP attribute value is not read from the LDAP server (Default: Unselected)
Select the User login fails if LDAP attribute value is not read from the LDAP server checkbox to prevent LDAP users from logging into the MicroStrategy system if they do not have all the attributes that have been imported into the system.

Warning: If your system uses multiple LDAP servers, make sure that all LDAP attributes used by Intelligence Server are defined on all LDAP servers. If a required attribute is defined on LDAP server A and not on LDAP server B, users from LDAP server B will not be able to log in to MicroStrategy if this setting is enabled.

Web Single Sign-On - Configuration

Allow user to log on if Web Single Sign-On - MicroStrategy user link not found (Default: Unchecked)
The Intelligence Server Configuration Editor: Web Single Sign-On category, Configuration window shows details about applications that have established a trust relationship with the Intelligence Server. A trust relationship is required to enable single sign-on authentication to MicroStrategy Web.

The Web Single Sign-On category, Configuration dialog box also allows administrators to define how users are handled when they log in to MicroStrategy Web with an account that is not linked to a MicroStrategy Web user.

See Identifying Users: Authentication for more information about implementing single sign-on authentication in MicroStrategy Web.

The Configuration subcategory contains the following areas:

Trusted Web Application Registration: Any applications that have trust relationships with the Intelligence Server are displayed in these fields.

If user not found: These options allow you to configure how users are handled if they log in to MicroStrategy Web but are not linked to MicroStrategy users.

Allow user to log on if Web Single Sign-On - MicroStrategy user link not found: Selecting this checkbox allows users that are not linked to a MicroStrategy user to log in to MicroStrategy Web as a guest.

Import user at logon (Default: Unchecked)
Select this checkbox to import a single sign-on user into the MicroStrategy metadata when the user logs on to MicroStrategy Web for the first time.

Synch user at logon (Default: Unchecked)
Select this checkbox to synchronize users when they log in.

History Settings - General

History settings:

Maximum number of messages per user (Default: 10,000)
Controls how many messages can exist in a user's History List.

Message lifetime (days) (Default: -1)
Controls how many days messages can exist in a user's History List. The default value of -1 means there is no limit; messages stay in the system until the user deletes them.

The message lifetime can ensure that no History List messages reside in the system indefinitely. When the user logs out of the system, the messages are checked. If they are older than the lifetime configured here, the messages are deleted. This setting complements the other History setting, Maximum number of messages, which can be set concurrently.
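The lifetime rule described above can be sketched as follows. This Python snippet is an illustration of the rule (-1 means no limit; messages older than the lifetime are deleted at logout), not Intelligence Server's actual implementation:

```python
from datetime import datetime, timedelta

def expired_messages(messages, lifetime_days, now):
    """Return the History List messages older than the configured
    lifetime. A lifetime of -1 means no limit, matching the default."""
    if lifetime_days == -1:
        return []
    cutoff = now - timedelta(days=lifetime_days)
    return [m for m in messages if m["created"] < cutoff]

now = datetime(2024, 9, 1)
msgs = [{"id": 1, "created": datetime(2024, 1, 1)},
        {"id": 2, "created": datetime(2024, 8, 31)}]
# With a 30-day lifetime, only the January message is past the cutoff.
print([m["id"] for m in expired_messages(msgs, 30, now)])  # [1]
```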

Repository Type:

File Based (Default: Unselected)
Selecting this option stores History List messages in the location specified in the History Directory field.

History Directory (Default: .\Inbox\SERVER_DEFINITION_NAME\)
The location where History List messages are saved.

Database Based (Default: Selected)
Selecting this option stores History List messages in the database associated with the selected Database Instance. To specify a database instance, in the Intelligence Server Configuration Editor, select the General subcategory of the Server Definition category, and, in the Database Instance drop-down list, select the database instance to use for the History List.

Database Instance (Default: <None>)
Select the database instance.

Backup report history caches to database (Default: Checked)
If this checkbox is selected, History caches are stored in the database without needing to be preserved in the file system. If this checkbox is cleared, History caches are stored in the Intelligence Server file system. MicroStrategy recommends leaving this checkbox selected for performance and reliability reasons.

External central storage directory for Database-based History List (Default: Empty)
Specify where file-based History List messages are stored if you are using a hybrid History List repository.

History Settings - Replacement Policy

Message Replacement Policy:

Number of messages deleted to reclaim History List space (Default: 0)
When the History List is full and another message is added, Intelligence Server automatically deletes the specified number of messages, beginning with the oldest messages. If this number is set to zero, new messages are not added to the History List until messages are manually removed.

Delete error messages first (Default: Checked)
If this checkbox is selected, error messages are deleted (oldest first) before regular History List messages.

Delete oldest messages by (Default: Creation Time)
This drop-down list controls what timestamp Intelligence Server uses to determine which History List messages are the oldest. You can select from:

Creation time: The time at which the message was created.

Finish time: The time at which the report finished executing.

Modification time: The time at which the message was last modified.

Start time: The time at which the report started executing.
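The replacement policy can be sketched as follows. This Python snippet is illustrative only, with the timestamp key standing in for the Delete oldest messages by selection:

```python
def reclaim_space(messages, batch_size, key="created"):
    """When the History List is full, delete `batch_size` messages,
    oldest first by the chosen timestamp. A batch_size of 0 means
    nothing is deleted; new messages are rejected instead.
    Illustrative sketch, not Intelligence Server code."""
    if batch_size == 0:
        return list(messages)  # additions are blocked elsewhere
    return sorted(messages, key=lambda m: m[key])[batch_size:]

msgs = [{"id": 1, "created": 10},
        {"id": 2, "created": 5},
        {"id": 3, "created": 20}]
# Message 2 has the oldest creation time, so it is deleted first.
print([m["id"] for m in reclaim_space(msgs, 1)])  # [1, 3]
```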

SAP User Management - General

Import users (Default: Unchecked)
If this option is selected, a user with the same SAP user name is created in MicroStrategy. This user is created as a member of the Warehouse Users group.

Search for groups (Default: Unchecked)
If this option is selected, the MicroStrategy groups assigned to a MicroStrategy user are synchronized with the SAP roles assigned to the SAP user. This means that if SAP roles have been added or removed for an SAP user, the associated MicroStrategy user is added to or removed from the MicroStrategy groups that represent the SAP roles.

Import groups (Default: Unchecked)
If this option is selected, all SAP roles that the SAP user is a member of are imported as groups in MicroStrategy. These groups are created within the Warehouse Users group and only have the inherited privileges of the Warehouse Users group. Once these groups are created in MicroStrategy, you can assign privileges to these groups, which are applied to all users that are members of the groups.

Web Quick Search - General

Enable Search Engine (Default: Checked)
The Enable Search Engine checkbox must be selected for the rest of the options on this interface to become available.

Projects:

Enable/disable Quick Search and manage index for projects (Default: On)
To enable quick search for a project, from the Quick Search drop-down list next to the project name, select On. You are prompted whether to create the search index.

Index directory (Default: .\SearchData)
The Index directory is the folder where the quick search index is stored.

Stop words ('Contains' search type only):

Specify the list of words (separated by space) that should be excluded from the search (Default: Empty)
Stop words are those words that are not included in the quick search index.

Language settings (Default: Machine language)
Select the language that Stop words should apply to.
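The effect of stop words on a 'Contains'-style index can be sketched as follows; the case-insensitive matching in this Python snippet is an assumption for illustration:

```python
def index_terms(text, stop_words):
    """Tokenize text for an index, skipping configured stop words.
    Illustrative sketch; comparison is case-insensitive here by
    assumption, not by documented MicroStrategy behavior."""
    stops = {w.lower() for w in stop_words}
    return [w for w in text.lower().split() if w not in stops]

print(index_terms("The Revenue by Region report", ["the", "by"]))
# ['revenue', 'region', 'report']
```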

Project Configuration Default Settings

The default project configuration settings, defined in the Project Configuration Editor, are provided below.

To access the Project Configuration Editor in Developer, right-click the project source and choose Project Configuration.

Project Definition - General

Description (Default: Empty)
Create or edit a description of the project.

Properties (Default: Empty)
Click the Modify button to alter three different properties:

Security - Change user permissions.

International - Change the default language.

Change journal - Enable the change journal category to list the changes that have been made to this object.

Project Definition - Security

Security model:

Use 7.1x security model (Default: Unselected) / Use 7.2x security model (Default: Selected)
Set the security model at report execution time: whether security is checked only on the report object itself (Use 7.1.x security model) or on all objects that make up a report (Use 7.2.x security model).

Access control:

Set project definition security (Default: Full Control)
Click Modify to open the Properties dialog box: Security tab. The ACL you define here is applied as the default ACL for the project and all of the objects within the project, excluding the objects mentioned in the following option.

Set Freeform SQL and MDX objects default security (Default: Full Control)
Click Modify to open the Properties dialog box: Security tab. The ACL you define here is applied as the default ACL for all objects in the project related to Freeform SQL, Query Builder, and MDX cube sources (SAP BW, Microsoft Analysis Services, Hyperion Essbase, and IBM Cognos TM1). This includes objects such as reports, metrics, attributes, MDX cubes, tables, and other such objects that are created with these MicroStrategy features. You can modify this default ACL for any object in a project individually.

Set project root folder security (Default: Full Control)
Click Modify to open the Properties dialog box: Security tab. The ACL you define here is applied as the ACL for the project root folder.

Project Definition - Drilling

Default project drill map (Default: Empty)
Select a default project drill map or click Clear to remove the default drill map from the field.

All projects must have a default drill map; you cannot remove the existing default drill map from this field until you specify a new default drill map. If you try to remove the only default drill map for a project, a message indicates that other objects depend on it. When you search for dependent objects, none are found, because the dependent object is the project itself.

You can disable drilling for the project by selecting an empty drill map, and then clearing the Drill to immediate children/parents checkbox, described below.

Advanced:

Drill to immediate children/parents (Default: Unchecked)
This controls drill-down behavior from objects on a report; when it is enabled, only the immediate children or parents of an object, rather than all children or parents, are displayed to be drilled to.

Enable Web personalized drill paths (Default: Unchecked)
When this is selected, Web users can see only personalized drill paths rather than all drill paths. Personalized drill paths are based on each object's access control list (ACL), specified in the Security category of the Properties dialog box. If you set up ACLs, all drill paths are still displayed in Web until you enable Web personalized drill paths.

Note: Selecting this checkbox disables XML caching, which can adversely impact MicroStrategy Web performance.

Sort drilling options in ascending alphabetical order (Default: Unchecked)
When this is enabled, all drilling options are automatically sorted alphabetically in the display when a user right-clicks a drillable object. Sorting occurs within a hierarchy and between hierarchies, in ascending alphabetical order.

Note: Sorting is by drill type, then by set name, then by path (attribute) name. However, for most custom drill paths, the drill type is "drilltounit" and the set name is generally empty, so the most likely default sorting order is ascending order of path (attribute) name.

Project Definition - Object Templates

Report:

Default Template (Default: (None))
The object template used to create a new report for any users who have disabled object templates for reports.

Note: The same default object template is used for both reports and Intelligent Cubes.

Show empty template (Default: Checked)
Determines whether to show or hide the empty object template.

Template:

Default Template (Default: (None))
The object template used to create a new template for any users who have disabled object templates for templates.

Show empty template (Default: Checked)
Determines whether to show or hide the empty object template.

Metric:

Default Template (Default: (None))
The object template used to create a new metric for any users who have disabled object templates for metrics.

Show empty template (Default: Checked)
Determines whether to show or hide the empty object template.

Document:

Use Document Wizard (Default: Unselected)
As an alternative to providing a default document template to document designers, if you select this option the designer is presented with a wizard that steps them through document creation. This can be helpful for newer or infrequent designers of documents.

Use Template (Default: Selected)
The object template used to create a new document for any users who have disabled object templates for documents.

Show empty template (Default: Checked)
Determines whether to show or hide the empty object template.

Project Definition - PDF Settings

Edit (Default: Empty)
From the Edit drop-down list, select whether you want to add a header or a footer to PDFs created when a report is exported from this project.

Insert Auto-Text (Default: Empty)
From the Insert Auto-Text drop-down list, select the auto-text to display. Options include Date, Number of Pages, Page By, and more.

Project Definition - Export Settings

Edit (Default: Empty)
You can define static text that will appear on all reports within a project. This is particularly useful for adding text such as "Confidential," "Proprietary," your company's name, and so on. The text appears on every report that is exported from the project. The text can appear as a header or as a footer.

Run Export to Excel/Word in 8.1.x compatibility mode (Default: Empty)
If you have upgraded your MicroStrategy system from 8.1.x and are experiencing problems with exporting reports or documents to Excel or Word, MicroStrategy Technical Support may instruct you to select the Run Export to Excel/Word in 8.1.x compatibility mode checkbox. MicroStrategy recommends that you only select this checkbox if instructed to do so by MicroStrategy Technical Support.

Do not merge or duplicate headers when exporting to Excel/Word (Default: Empty)
Select the Do not merge or duplicate headers when exporting to Excel/Word checkbox to repeat the table headers when exporting a report or a document to an Excel sheet or a Word document, as in MicroStrategy 8.1.x. Again, MicroStrategy recommends that you only select this checkbox if instructed to do so by MicroStrategy Technical Support.

Export to Flash file format (Default: PDF)
The Export to Flash using this file format setting allows you to select the Flash file format for documents and dashboards. You can choose to export all the Flash files in a project in either MHT or PDF format.

Project Definition - Communications

Please type the text to display in the window title (Default: Empty)
The window title is displayed after the name of the object (report, document, metric, and so on) on the title bar of each interface for Developer users. The window title allows a user to confirm which project definition they are working with. Type the text to display in this field.

Project Status (Default: Unchecked)
Click Modify to open the Project Status Editor. The Project Status is a message that is displayed on the project's home page. Enter a message and select the Display project status checkbox. Select whether to display the message at the top or bottom of the home page.

Project Definition - History List

History settings:

Maximum number of messages per user (Default: -1)

Save Report Services document dataset messages to History List (Default: Checked)

Save exported results for interactive executions sent to History List (Default: Unchecked)

Maximum Inbox message size (MB) (Default: -1)

Project Definition - User Profiles

Create user profiles at login (Default: Checked)
Enable or disable the automatic creation of these folders. If you plan to duplicate a project, to minimize the time involved in the project duplication process it can be useful to disable this setting (by clearing the checkbox) so that user profile folders are not automatically created for new users.

Project Definition - Documents and Reports

Report Details Properties
Click to specify report details properties. See Project Definition - Documents and Reports - Report Details Properties - General for more information.

Watermark:

Watermark
Click to specify the default watermark for documents and reports. See Project Definition - Documents and Reports - Watermark for more information.

Allow documents to overwrite this watermark (Default: Checked)
Select this checkbox if you want individual documents to be able to overwrite the watermark.

Web Server:

Specify the web server that will be used to replace the WEBSERVER macro in documents (Default: http://localhost:8080/MicroStrategy/servlet/mstrWeb)
When a document containing the WEBSERVER macro is executed from MicroStrategy Web, the macro is replaced with the web server used to execute the document. If the document is executed from Developer, the WEBSERVER macro is replaced with the web server specified in this field.

Specify the web server that will be used in link to History List for email subscriptions and notification of History List subscriptions (Default: Empty)
If the document is executed through a subscription, you can use this field to specify which web server to use in the link to History List messages in email subscriptions and notifications.

Flash documents:

Enable links in exported Flash documents (.mht files) (Default: Unchecked)
Select this option to enable links in stand-alone Flash documents.

Mobile documents:

Enable smart client (Default: Unchecked)
Select this option to enable smart client for mobile documents.

Project Definition - Documents and Reports - Report Details Properties - General

Report Details:

Report Description (Default: Unchecked)
Select this checkbox to display a short description of the report.

Prompt Details (Default: Unchecked)
Select this checkbox to display the prompt details on the report.

Filter Details (Default: Checked)
Select this checkbox to include the filter details, such as the definition of the report filter, view filter, and report limits.

Template Details (Default: Unchecked)
Select this checkbox to include details about the objects on the report, such as attributes and metrics, as well as the metric definitions.

Prompt Details:

Include Prompt Titles (Default: Title and Index)
The prompt title is defined when the prompt is created, and the index is a number indicating the order of the prompts in the dataset reports. To enable them to be displayed in the Report Details pane above a report, select Title and Index. Select No Title or Index to exclude the title and index.

Replacement string for unanswered prompts (Default: Default)
Select an option to be displayed from the drop-down list if a prompt is unanswered. The options are:

Default: The prompt is answered by the default prompt answer for the prompt in the target report.

Blank: There is no prompt in the target report.

Prompt Not Answered: The prompt in the target is ignored, which means that the prompt is not answered. No prompt answer is provided from the source and the user is not prompted to provide answers.

No Selection: The prompt is answered using none of the objects selected in the source.

All/None: The prompt is answered using all the objects or none of the objects selected in the source.

Show attribute name for Attribute Element Prompts (Default: No)
You can specify whether and how you want to display the attribute name for the attribute element list prompts in the document. The options are:

Yes to show the attribute names.

No to exclude the attribute names.

Repeated to repeat the attribute name for each prompt answer.

Include unused prompts (Default: Unchecked)
Select the checkbox to include unused prompts, which occur when you drill on a report that contains a prompt.

Delimiters are characters that can


Use delimiters
appear around object names to set
Miscellaneous around report Automatic
them off from other text. Braces { }
object names
are used as delimiters. You can

Copyright © 2024 All Rights Reserved 953


Syst em Ad m in ist r at io n Gu id e

Default
Setting Description
Value

select the following options from the


drop-down list:

Yes to display delimiters for all


metadata object names

No to exclude delimiters for all


metadata object names

Automatic to display delimiters only


for those object names that contain a
special character. Special characters
are characters other than a - z, A - Z,
0 - 9, #, _, and . (period).

Select this checkbox to display


aliases (object names) in the display
of filter details. An alias is a
secondary name for an object on a
report, which is created when a user
renames the object, to display a
meaningful description in the context
Use aliases in of that particular report. An alias
Unchecked
Filter Details does not change the name of the
object as it is stored in the system, it
only changes the name displayed on
the report. A filter uses the actual
name of the object, not the alias. You
can determine whether aliases
replace object names in the filter
details.
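The Automatic delimiter rule above can be sketched as a simple character check. This is an illustrative Python snippet, not MicroStrategy code; the helper name `display_name` is an assumption for the example.

```python
import re

# Per the Automatic delimiter rule, a name needs braces only if it contains
# a character outside a-z, A-Z, 0-9, #, _, and . (period).
SPECIAL = re.compile(r"[^a-zA-Z0-9#_.]")

def display_name(name: str) -> str:
    """Wrap the object name in braces { } if it contains a special character."""
    return "{" + name + "}" if SPECIAL.search(name) else name

display_name("Revenue_2024")   # no special characters, shown as-is
display_name("Net Revenue")    # the space is special, so braces are added
```

Under this rule, `Revenue_2024` displays unchanged while `Net Revenue` becomes `{Net Revenue}`.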

Project Definition - Documents and Reports - Report Details
Properties - Filter Details - Contents

General:

Report Filter (Default: Checked)
  Select this checkbox to enable the display of report filters.

Report Filter Name (Default: Automatic)
  Choose whether to include the filter name in the Report Details pane or exclude it. If you select Automatic, the report filter name is displayed for a stand-alone filter and not displayed for an embedded filter.

Report Filter Description (Default: Checked)
  Select this checkbox to include a short description of the report filter in the Report Details pane.

Report Limits (Default: Checked)
  Select whether to enable the display of report limits in the Report Details pane. A report limit specifies a set of criteria used to restrict the data returned in a result set after the report's metrics are calculated. A report limit can make a report more efficient to run, because less information is returned from the data source.

View Filter (Default: Checked)
  Select whether to enable the display of view filter details. A view filter is a quick qualification applied in memory to the report results.

Metric Qualification in View Filter (Default: Checked)
  Select this checkbox to display the view filter's metric qualification in the Report Details pane.

Drill Filter (Default: Unchecked)
  Select this checkbox to display the drill filter.

Security Filter (Default: Unchecked)
  Select this checkbox to display the security filter that is applied to the report and to the objects that make up the report.

Include filter type name (Default: Checked)
  Select this checkbox to display the filter type name, such as Report Filter, View Filter, and so on.

Show Empty Expressions (Default: Unchecked)
  Select this checkbox to display empty expressions in a filter.

Additional Options:

New line after filter type name (Default: Checked)
  Select this to add a line after each filter type name and before the actual definition of the filter. This provides spacing, making complex filter definitions easier to read.

New line between filter types (Default: Checked)
  Select this to add a line after each sub-expression, to help differentiate between the various filters.

Show Report Limits (Default: Before View Filter)
  Select this to display report limit details either above or below the view filter details, if any.

Expand shortcut filters (Default: Show Filter Definition)
  Select to expand the details displayed for shortcut filters. You can enable display of the shortcut filter's name, definition, or both name and definition.

Project Definition - Documents and Reports - Report Details
Properties - Filter Details - Other

In List Conditions:

Show attribute name for In List conditions (Default: Yes)
  Determines the display of the attribute's name in the filter's attribute element list. Select from the following options:
  - Yes to display the attribute name. This is the default setting.
  - No to exclude the attribute name.
  - Repeated to repeat the attribute name for each attribute element. Example: Region = Northeast, Region = Mid-Atlantic.

Separator after attribute name (Default: =)
  If you want a character to separate the attribute name from the attribute element name, type the character. The common characters used are an equals sign (=) or a colon.

New line after attribute name (Default: Unchecked)
  Select this checkbox to display the attribute name and its element on separate lines.

Separator between last two elements (Default: comma)
  You can select the text that separates the last two attribute elements in the list. The options are:
  - custom: The Custom separator field, described below, is enabled when you select this option.
  - or: The word "or" is displayed between the last two attribute elements in the list.
  - and: The word "and" is displayed between the last two attribute elements in the list.
  - comma: The comma character (,) is displayed between the last two attribute elements in the list.

Custom separator (Default: ,)
  Type the character or text to be used as a separator. To enable this field, select custom in the option described above.

New line between elements (Default: Unchecked)
  Select this checkbox to display each attribute element on a separate line in the Report Details pane.

Trim elements (Default: Unchecked)
  Select this checkbox to trim any extra spaces in the attribute element names. For example, an element of an account attribute is PSI2415 : 10 : COMMERCIAL. If Trim elements is selected, the element is displayed as PSI2415:10:COMMERCIAL, with the extra spaces excluded.

Qualification Conditions:

Use names or symbols for operators (Default: Symbols)
  Select the desired option to display operator names (such as Equals or Greater Than) or operator symbols (such as = or >).

Include attribute form names in qualification conditions (Default: Checked)
  Select this checkbox to display attribute form names (such as DESC or ID).

Dynamic Dates (Default: Default)
  Select this option to determine whether dates display the expression used to generate the date, the dates themselves, or a default display.

Logical Operators:

New line between conditions (Default: No)
  Select the desired option to specify whether each condition is displayed on a separate line in the Report Details pane. You can also select Automatic, which inserts a line only when filter conditions are joined by different logical operators. For more information, see Defining how logical operators are displayed.

Parentheses around conditions (Default: Automatic)
  Select this option to display parentheses around each condition, such as (Region = Northeast). The options are:
  - Yes to display parentheses.
  - No to exclude parentheses.
  - Automatic to display parentheses only when doing so clarifies any ambiguity in the expression.

Logical operator between conditions (Default: Yes)
  Select this option to display the logical operator, for example AND or OR, that appears between filter conditions. The options are:
  - Yes to display all operators. This is the default setting.
  - No to exclude all operators.
  - AND only to display only AND operators.
  - OR only to display only OR operators.
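The In List display options above (attribute-name mode, separator after the attribute name, and the separator between the last two elements) combine roughly as follows. This is an illustrative sketch, not MicroStrategy code; the function name, argument names, and defaults are assumptions.

```python
def format_in_list(attribute, elements, name_mode="Yes",
                   separator=" = ", last_sep=", "):
    """Render an In List condition per the display options described above.

    name_mode: "Yes" shows the attribute name once, "No" hides it,
    "Repeated" repeats it for every element (as in the doc's example).
    """
    if name_mode == "Repeated":
        items = [f"{attribute}{separator}{e}" for e in elements]
    else:
        items = list(elements)
    if len(items) > 1:
        # last_sep corresponds to "Separator between last two elements"
        text = ", ".join(items[:-1]) + last_sep + items[-1]
    else:
        text = items[0]
    if name_mode == "Yes":
        text = f"{attribute}{separator}{text}"
    return text

format_in_list("Region", ["Northeast", "Mid-Atlantic"], "Repeated")
# 'Region = Northeast, Region = Mid-Atlantic'
```

With `name_mode="Repeated"` and the default comma separator, this reproduces the documentation's example output.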

Project Definition - Documents and Reports - Report Details
Properties - Template Details

Units from View or Base (Default: View)
  Use these options to display the details for report objects from the base report or the view report. Select one of the following from the Units from View or Base drop-down list:
  - Base to display details for all the objects on the report, that is, all the objects in Report Objects.
  - View to show details for only those objects displayed on the report grid.

Base Template name (Default: Automatic)
  Use these options to select whether the template name of the report is displayed in the Report Details pane. Select one of the following from the Base Template name drop-down list:
  - Yes: Display the template name for a stand-alone template and the Local Template name for an embedded template.
  - Automatic: Display the template name for a stand-alone template, but do not display the Local Template name for an embedded template. This is the default setting.
  - No: Omit display of the template name, whether the template is stand-alone or embedded.

Template Description (Default: Checked)
  Select this checkbox to display the short description of the template. If the template is embedded or does not have a description, the Template Description field does not appear.

Non_Metric template units (Default: Checked)
  Select this checkbox to display definitions and expressions for objects on the report other than metrics, such as attributes and consolidations.

Metrics (Default: Checked)
  Select this checkbox to display definitions and expressions for the metrics on the report.

Conditional Metrics only (Default: Unchecked)
  Determines whether a metric has a condition applied to it.

Formula (Default: Checked)
  Determines the metric formula.

Dimensionality (Default: Unchecked)
  Determines the level at which the metric is calculated.

Conditionality (Default: Checked)
  Determines the conditionality of a metric, created when you define a filter and add the filter to the metric's definition so that only data that meets the filter conditions is included in that metric's calculation. See the Basic Reporting Help for more information on conditionality.

Transformation (Default: Unchecked)
  Determines whether a metric has a transformation applied to it.

Project Definition - Documents and Reports - Watermark

No watermark (Default: Selected)
  If you select No watermarks while creating a project watermark, and the Allow documents to overwrite this watermark checkbox in the Project Configuration Editor is selected, a document displays its document watermark, if one is defined for the document. Reports do not display any watermarks when exported to PDF. See Creating document watermarks for more details.
  If you select No watermarks while creating a project watermark, and the Allow documents to overwrite this watermark checkbox in the Project Configuration Editor is cleared, all watermarks are disabled. Neither documents nor reports display any watermarks. See Disabling all watermarks for more details.
  If you select No watermarks while creating a document watermark, the document does not display any watermark, even if a project watermark has been defined. See Hiding a project watermark for a specific document for more details.

Text watermark (Default: Unselected)
  Select this option to enable a text watermark.
  - Text (Default: Empty): Enter the watermark text, up to 255 characters.
  - Size font automatically (Default: Checked): Select to automatically adjust the font size to fill the layout.
  - Washout (Default: Checked): Select to fade the watermark text, so you can view the report text through the watermark.
  - Diagonal (Default: Selected): Select to display the watermark text diagonally.
  - Horizontal (Default: Unselected): Select to display the watermark text horizontally.

Image watermark (Default: Unselected)
  - Source (Default: Empty): Select an image to use as a watermark.
  - Scale (Default: Auto): Select Auto to scale the watermark automatically. Select a percentage to scale the image manually.

Project Definition - Change Journaling

Configure Change Journaling:

Enable Change Journaling (Default: Checked)
  Select or clear this checkbox to enable or disable change journaling for this project source. When change journaling is enabled for a project, Intelligence Server records information in the change journal about changes made to the project configuration objects, such as users or schedules.

Purge Change Journal:

Purge all data logged before (date) (Default: Today)
Purge all data logged before (time) (Default: Now)
  Select an appropriate date and time. All change journal data logged before this date and time is deleted when you click Purge Now.

Purge timeout (seconds) (Default: 600)
  Enter the purge timeout in seconds. This is the length of time Intelligence Server waits for a response when it issues a purge command to the change journal. If there is no response by the time the timeout has elapsed, the purge command is canceled.

Project Definition - Advanced

HTML document directory (Default: Empty)
  Define the HTML document directory.

Prompt custom styles (Default: Empty)
  Click Register Styles to associate prompt types with prompt styles, which determine how prompts display in MicroStrategy Web.

Attribute browsing:

Maximum number of elements to display (Default: 1,000)
  This is the maximum number of elements to display in a single request from Developer. Element requests that exceed this limit but are under the project-level setting for maximum element rows are stored on the Intelligence Server. This setting impacts how many elements are displayed at one time in Developer's Data Explorer and in prompt answer windows. Values of 0 and -1 indicate no limit.
  Note: If a limit has been specified in the Hierarchy Editor or Attribute Editor that is lower than the limit specified here, that limit is used instead. If the limit specified here is lower than the limit in the Hierarchy Editor or Attribute Editor, this limit is used. If you are not seeing as many elements as you expect, check the attribute's Limit setting in the Attribute Editor or Hierarchy Editor.
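The Note above implies that the effective element limit is the lower of the project-level and editor-level limits, with 0 and -1 meaning no limit. A minimal sketch of that resolution, under the assumption that "lower wins" applies after no-limit values are filtered out:

```python
def effective_limit(project_limit, editor_limit=None):
    """Return the effective element limit, or None for unlimited.

    0, -1, and an unset editor limit all mean "no limit"; otherwise the
    lower of the two limits wins, per the Note above.
    """
    limits = [l for l in (project_limit, editor_limit)
              if l is not None and l > 0]
    return min(limits) if limits else None

effective_limit(1000, 200)   # editor limit is lower, so it applies
effective_limit(1000, None)  # only the project limit applies
effective_limit(-1, None)    # no limit at all
```

So a Hierarchy Editor limit of 200 overrides the default project limit of 1,000, which matches the behavior the Note describes.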

Apply security filters to element browsing (Default: Checked)
  If this checkbox is selected, security filters are applied to attribute element browsing. For example, a user has a security filter defined as Category=Electronics. If this checkbox is selected, when the user browses the Product hierarchy, the user can see only the Electronics category. If this checkbox is cleared, the user can see all elements when browsing the Product hierarchy. Regardless of whether this checkbox is selected or cleared, the user can see only elements in the Electronics category when viewing a report.

Project-Level VLDB settings
  Click to configure the analytical engine settings. See Details for All VLDB Properties for more information.

Dependent Object Options:

Enable Find and Replace object dependencies (Default: Unchecked)
  If the checkbox is selected, a user is allowed to find all dependent objects and replace them. If the checkbox is cleared, a user is not allowed to find any dependent objects in the project.

Enable deleting of object dependencies (Default: Unchecked)
  If the checkbox is selected, users are allowed to view and delete all dependent objects that may be preventing the object in question from being deleted. A user must have the appropriate ACL access on all dependent objects for the deletion to occur (for details and links to steps, see About controlling access and ACLs). If the checkbox is cleared, users cannot delete any dependent objects in the project.

Advanced Prompt Properties:

Display Attribute alphabetically in hierarchy prompt (Default: Checked)
  If the checkbox is selected, users see the attributes in a hierarchy prompt displayed alphabetically. This setting overrides any specific sorting order defined in the Hierarchy Editor. If the checkbox is cleared, users see the attributes in a hierarchy prompt displayed according to the sort order defined in the Hierarchy Editor.

Enable personal answers (Default: Checked)
  Personal answers allow a user to save prompt answers for a specific prompt, and then reuse the answers on any report that uses that prompt. If the checkbox is selected, prompt designers can allow users to select personal answers while creating a prompt. If the checkbox is cleared, selecting personal answers for prompts is not allowed.

Populate Mobile ID system prompt for non-mobile users (Default: Unchecked)
  Enter a value to populate as a mobile ID system prompt for non-mobile users.

Project Definition - Right to Left

Enable export to PDF in Hebrew (Default: Unchecked)
  Select this checkbox to enable export to PDF in Hebrew. This functionality is currently for internal MicroStrategy use only.

Database Instances - SQL Data Warehouses

Select the Primary Database Instance for the Project (Default: Empty)
  Select the primary database instance for the project from the Select the Primary Database Instance for the Project drop-down list.
  Note: The primary database instance acts as the main source of data for a project and is used as the default database instance for tables added to the project. If you have a license for the MultiSource Option, see the Project Design Help for information on the Warehouse Catalog and on accessing multiple data sources with the MultiSource Option.

VLDB Properties
  Click to configure VLDB properties. See Details for All VLDB Properties for more information.

Database Instances

SQL Data warehouses (Default: Empty)
  The SQL Data warehouses subcategory is where you create and modify your relational database instances. You can configure the following settings for relational database instances:
  - Primary database instance: Select the primary database instance for the project from the Select the Primary Database Instance for the Project drop-down list.
  - New: Creates a new relational database instance with the Database Instance Editor.
  - Modify: Lets you modify database instances included for the project (including warehouse, data mart, Freeform SQL, and Query Builder database instances).
  - Set as default: Sets the selected database instance as the default database instance for the project. The default database instance is the database instance selected by default for Freeform SQL and Query Builder.
  - VLDB Properties: Modify the warehouse database instance VLDB properties using the VLDB Properties Editor.
  Selecting a database instance checkbox makes that database instance available in the project for standard MicroStrategy reporting, data marts, Query Builder, and Freeform SQL. If you have a license for the MultiSource Option, selecting a checkbox for a database instance also makes the database instance available from the Warehouse Catalog to be part of the project's relational schema.

MDX Data warehouse (Default: Empty)
  The MDX Data warehouses subcategory provides settings to create and modify the MDX cube source database instances used to connect to MDX cube sources. You can configure the following settings for MDX cube sources:
  - New: Creates a new MDX cube source database instance with the Database Instance Editor.
  - Modify: Lets you modify an MDX cube database instance's connection information.
  - VLDB Properties: Modify the database instance VLDB properties using the VLDB Properties Editor.
  - Schema Maintenance: Opens the Schema Maintenance dialog box, which lets you remove MDX database instances from a project, determine when MDX cube schemas are loaded into Intelligence Server, and move MDX cube schemas between different database instances.

Connection mapping (Default: Empty)
  The Connection mapping category lists all the connection mappings for the project. It provides the following details:
  - Database Instance: The database instance that is being mapped to.
  - User: The user or group associated with the connection mapping.
  - Language: The default language used by the connection mapping.
  - Database Connection: The database connection used by the mapped database login.
  - Database Login: The database login ID that the users are mapped to.

Statistics (Default: <None>)
  The Statistics subcategory is where you select the database instance for the database where statistics tables are stored. From the Statistics database instance drop-down list, select the database instance that represents the database in which the statistics tables are stored. Statistics can be recorded for user sessions, caches, basic or detailed report jobs, etc.

Governing Rules - Default - Result Sets

Intelligence Server Elapsed Time (sec):

Interactive Reports (Default: 600)
  Reports that are executed directly by a user. A value of -1 indicates no limit.

Scheduled Reports (Default: 600)
  Reports that are executed from a subscription. A value of -1 indicates no limit.

Wait time for prompt answers (sec) (Default: 600)
  Specify the maximum time, in seconds, to wait for a prompt to be answered by the user. If the user fails to answer the prompt in the specified time limit, the job is cancelled.

Warehouse execution time (sec) (Default: 3600)
  Specify the maximum time for warehouse jobs to be executed by Intelligence Server. Jobs lasting longer than this setting are cancelled. A value of 0 or -1 indicates infinite time.

Final Result Rows:

Intelligent Cubes (Default: 32,000)
  Specify the maximum number of rows that can be returned to Intelligence Server for a report request, which includes Intelligent Cubes. When retrieving the results from the database, the Query Engine applies this setting. If the number of rows in a report exceeds the specified limit, the report generates an error and the report is not displayed. A value of 0 or -1 indicates no limit.

Data marts (Default: 32,000)
  Specify the maximum number of rows that can be returned to Intelligence Server for a data mart report request. When retrieving the results from the database, the Query Engine applies this setting. If the number of rows in a report exceeds the specified limit, the report generates an error and the report is not displayed. A value of -1 indicates no limit.

Document/Dashboard views (Default: 50,000,000)
  Specify the maximum number of rows that can be returned to Intelligence Server for a document or dashboard request. When retrieving the results from the database, the Query Engine applies this setting. If the number of rows in a document or dashboard exceeds the specified limit, an error is displayed and no results are shown for the document or dashboard. A value of 0 or -1 indicates no limit.

All other reports (Default: 32,000)
  Specify the maximum number of rows that can be returned to Intelligence Server for all other report requests. When retrieving the results from the database, the Query Engine applies this setting. If the number of rows in a report exceeds the specified limit, the report generates an error and the report is not displayed. A value of 0 or -1 indicates no limit.

All intermediate result rows (Default: 32,000)
  Limits the number of rows that can be in an intermediate result set used for analytical processing. This is not the number of rows that may be created in an intermediate or temporary table on the database. Intelligence Server uses intermediate tables for all analytic calculations that cannot be done on the database. Values of 0 and -1 indicate no limit.

All intermediate result rows: Document/Dashboard views (Default: 50,000,000)
  Specify the maximum number of rows that can be part of the intermediate result set when combining datasets to create the view of data for a document or dashboard. If the number of rows in a document or dashboard exceeds the specified limit, an error is displayed and no results are shown for the document or dashboard. A value of 0 or -1 indicates no limit.

All element browsing rows (Default: -1)
  Limits the number of rows that can be retrieved from the database for an attribute element request. Values of 0 and -1 indicate no limit. Both MicroStrategy Developer and MicroStrategy Web have the ability to incrementally fetch element rows from Intelligence Server. This setting for Developer is in the Project Configuration Editor (Project definition category: Advanced subcategory); Web uses a General project default setting for incremental fetch called Maximum number of attribute elements per block.

Memory consumption during SQL generation (MB) (Default: 2,000)
  Limits the memory consumption during SQL generation. This setting can be useful if you have jobs that may be extremely large, for example, when several highly complex custom groups exist on a report. When you use this setting, if the job is too big, the job is prevented from executing rather than the server becoming unavailable. Set the limit according to the expected number of SQL queries generated to avoid memory-related errors. A value of 0 or -1 indicates no limit.

Memory consumption during data fetching (MB) (Default: 2048)
  Limits the memory consumption during the importing of data from data sources such as web services or Excel spreadsheets. When you use this setting, if a data source is too large, the data is not imported. A value of 0 or -1 indicates no limit.

MicroStrategy (.mstr) file size (MB) (Default: 100)
  Limits the file size, in megabytes, when downloading a dashboard from MicroStrategy Web. If a dashboard is larger than the specified file size, an error is displayed that provides the current limit, and the dashboard is not downloaded. Additionally, this setting applies to dashboards sent through Distribution Services. If a dashboard is larger than the specified size, the dashboard is not sent. A value of -1 indicates no limit. A value of 0 prevents the ability to download a dashboard from Web and to distribute a dashboard through Distribution Services. The maximum .mstr file size is 2047 MB.
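The .mstr file-size setting above follows a three-way convention: -1 means no limit, 0 disables downloads entirely, and any other value is a cap with a 2047 MB ceiling. A sketch of that check, with an assumed helper name (not a MicroStrategy API):

```python
def check_mstr_download(file_size_mb, limit_mb=100):
    """Return True if a dashboard of this size may be downloaded.

    Semantics from the setting description: -1 = no limit, 0 = downloads
    and Distribution Services delivery disabled, otherwise a cap in MB,
    never exceeding the 2047 MB .mstr maximum.
    """
    if limit_mb == 0:
        return False                       # downloading disabled entirely
    if limit_mb == -1:
        return True                        # no limit
    return file_size_mb <= min(limit_mb, 2047)

check_mstr_download(150)       # exceeds the 100 MB default, rejected
check_mstr_download(150, -1)   # no limit, allowed
```

The same pattern (a positive cap, with 0 and/or -1 as sentinel values) recurs throughout the governing settings in this section.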

Governing Rules - Default - Jobs

Jobs per account (Default: 100)
  Limits the number of concurrent jobs for a given user account and project. Concurrent jobs include report, element, and autoprompt requests that are executing or waiting to execute. Finished (open) jobs, cached jobs, or jobs that returned errors are not counted. A value of -1 indicates no limit.

Jobs per user session (Default: 100)
  Limits the number of concurrent jobs a user may have during a given session. Concurrent jobs include report, element, and autoprompt requests that are executing or waiting to execute. Finished (open) jobs, cached jobs, or jobs that returned errors are not counted. A value of -1 indicates no limit.

Executing jobs per user (Default: -1)
  Limits the number of concurrent jobs a single user account may have executing in the project at one time. If this limit is met, additional jobs are placed in the waiting queue until executing jobs finish. All requests are processed in the order they are received. A value of -1 indicates no limit.

Jobs per project (Default: 1,000)
  Limits the number of concurrent jobs that the project can process at a time. Concurrent jobs include report, element, and autoprompt requests that are executing or waiting to execute. Finished (open) jobs, cached jobs, or jobs that returned errors are not counted. A value of -1 indicates no limit.

Interactive jobs per project (Default: 600)
  Specify the maximum number of interactive jobs that the selected project processes at a time. A value of -1 indicates no limit.

Scheduled jobs per project (Default: 400)
  Specify the maximum number of scheduled jobs that the selected project processes at a time. A value of -1 indicates no limit.
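The "Executing jobs per user" description above says that jobs beyond the limit wait in a queue and are processed in arrival order. A minimal sketch of that admission behavior; the class and method names are assumptions for illustration, not Intelligence Server internals:

```python
from collections import deque

class JobGovernor:
    """Toy model of the executing-jobs limit: excess jobs queue, FIFO."""

    def __init__(self, executing_limit=-1):
        self.limit = executing_limit       # -1 means no limit
        self.executing = 0
        self.waiting = deque()

    def submit(self, job):
        if self.limit != -1 and self.executing >= self.limit:
            self.waiting.append(job)       # queued in the order received
            return "waiting"
        self.executing += 1
        return "executing"

    def finish(self):
        self.executing -= 1
        if self.waiting:                   # promote the oldest waiting job
            self.waiting.popleft()
            self.executing += 1

gov = JobGovernor(executing_limit=2)
[gov.submit(j) for j in ("a", "b", "c")]
# ['executing', 'executing', 'waiting']
```

When one of the first two jobs finishes, the queued third job is promoted, which is the behavior the setting describes.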

Governing Rules - Default - User Sessions

User sessions per project (Default: 500)
  Limits the number of user sessions (connections) that are allowed in the project. When the limit is reached, new users cannot log in, except for the administrator, who may wish to disconnect current users or increase the governing setting. A value of -1 indicates no limit. See Governing Concurrent Users for steps to configure this setting.

Concurrent interactive project sessions per user (Default: 20)
  Limits the number of concurrent interactive project sessions for a given user account. When the limit is reached, users cannot access new project sessions.

Governing Rules - Default - Subscriptions

History List (Default: -1)
  Limits the number of report or document execution requests that a user can subscribe to for delivery to the History List folder. A value of -1 indicates no limit. For steps to subscribe reports or documents for delivery to a History List, see History List Subscription Editor.

Cache Update (Default: -1)
  Limits the number of cache updates that a user can process at a time. A value of -1 indicates no limit.

Email (Default: -1)
  Limits the number of reports or documents that a user can send to an email address at a time. A value of -1 indicates no limit.

File (Default: -1)
  Limits the number of files that a user can subscribe to at a time. A value of -1 indicates no limit.

FTP (Default: -1)
  Limits the number of reports or documents that the user can subscribe to for delivery to an FTP location at a time. A value of -1 indicates no limit.

Print (Default: -1)
  Limits the number of reports or documents that the user can subscribe to for delivery to a printer at a time. A value of -1 indicates no limit.

Mobile (Default: -1)
  Limits the number of reports or documents the user can subscribe to for delivery to a mobile device at a time.

Personal View (Default: -1)
  Limits the number of personal views that can be created by URL sharing. A value of -1 indicates no limit.

Governing Rules - Default - Import Data

Maximum file size (MB) (Default: 30)
  The maximum size for a file to be imported for use as a data source. Files larger than this value cannot be opened during data import. The default value is 100 MB, the minimum value is 1 MB, and the maximum value is as follows:
  - Files from disk: 4 GB
  - Other sources: dependent on Intelligence Server

Maximum quota per user (MB) (Default: 100)
  Defines the maximum size of all data import cubes for each individual user, regardless of whether they are published to memory or on disk. You can set the maximum size quota by entering one of the following values:
  - -1: Unlimited. No limit is placed on the size of data import cubes for each user.
  - 0: Default. The default size limit of 100 MB is applied to each user.
  - 1+: Specific limit. Entering a value of 1 or greater applies a limit of that many MB to each user.
  In a clustered environment this setting applies to all nodes in the cluster.

Enable this checkbox to allow users to


Unchecked
import files from the Internet using a URL.

Allow users to import data from the Internet


HTTP/HTTPS Unchecked
by connecting to an HTTP or HTTPS URL.

Enable Allow users to import data from the Internet


URL file FTP by connecting to an FTP (File Transfer Unchecked
upload Protocol) Server.
via
Allow users to import data from files on your
Intelligence Server machine. Warning:
Enabling this option can provide users the
File Unchecked
ability to import any critical or system files
that are stored on your Intelligence Server
machine.
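The three-way encoding of Maximum quota per user (MB) can be summarized as a small mapping. This sketch only models the rule stated above and is not MicroStrategy code:

```python
DEFAULT_QUOTA_MB = 100  # per the description, applied when the setting is 0

def effective_quota_mb(setting: int):
    """Resolve the 'Maximum quota per user (MB)' setting to an effective
    per-user limit: -1 means unlimited (returned as None), 0 means the
    100 MB default, and any value of 1 or greater is used as-is."""
    if setting == -1:
        return None
    if setting == 0:
        return DEFAULT_QUOTA_MB
    return setting
```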

Caching - Result Caches - Creation

Project Default Behavior

Enable report server caching (default: Checked): If this option is enabled, you can modify the following: Enable prompted report caching and Enable non-prompted report caching.

Enable Document Output Caching in Selected Formats (default: Checked): Select this checkbox to enable document output caching in various formats.

PDF (default: Checked): Select to enable document caching for PDF documents.

Excel (default: Checked): Select to enable document caching for Excel documents.

HTML (default: Checked): Select to enable document caching for HTML documents.

XML/Flash/HTML5 (default: Checked): Select to enable document caching for XML/Flash/HTML5 documents.

All (default: Checked): Select to enable document caching for all document formats.

Enable caching for prompted reports and documents (default: Checked): Select this checkbox to enable caching for reports and documents that contain prompts. If your users commonly answer prompted reports with different answers each time the report is run, caching these reports and documents may not provide significant benefits. In this case, you may want to disable this setting.

Record prompt answers for cache monitoring (default: Checked): Select this checkbox to display the answers to prompts in cached reports in the Cache Monitor.

Enable XML caching for reports (default: Checked): Select this checkbox to enable XML caching for reports. XML caching stores the attributes to which all users in Web can drill in the report's XML cache. Note: If you select the Enable Web personalized drill paths checkbox from the Project definition Drilling category, XML caching is disabled, which may adversely impact MicroStrategy Web performance.

Cache Creation Options

Create caches per user (default: Unchecked): Determine what cache creation is based on: by user, by database login, and/or by database connection.

Create caches per database login (default: Unchecked): Determine what cache creation is based on: by user, by database login, and/or by database connection. Important: If you use database authentication, for security reasons MicroStrategy recommends selecting the Create caches per database login checkbox. Doing this ensures that users who execute their reports using different database login IDs cannot use the same cache.

Create caches per database connection (default: Unchecked): Determine what cache creation is based on: by user, by database login, and/or by database connection.

Caching - Result Caches - Storage

Disk Storage

Cache file directory (default: .\Caches\SERVER_DEFINITION_NAME\): Navigate to the directory you want to use for cache files.

Cache encryption level on disk (default: None): If you need to encrypt the cache files, set the level of encryption using this drop-down list.

Memory Storage

Datasets - Maximum RAM usage (MB) (default: 256): Define the maximum RAM usage (in megabytes) for report caching. This setting needs to be at least the size of the largest cache file, or the largest report caches will not be used. The minimum value for this setting is 20 megabytes, and the maximum value is 65536 megabytes, or 64 gigabytes.

Datasets - Maximum number of caches (default: 10,000): Define the maximum number of caches allowed in the project at one time. Note: The default value for the maximum number of caches is 10,000. The maximum value that you can set is 999,999. If you enter any positive value greater than 999,999, the value sets itself to 1,000,000. If you enter ?, Intelligence Server uses the default value.

Formatted Documents - Maximum RAM usage (MB) (default: 4096): Define the maximum RAM usage (in megabytes) for document caching. This setting needs to be at least the size of the largest cache file, or the largest document caches will not be used. The minimum value for this setting is 20 megabytes, and the maximum value is 65536 megabytes, or 64 gigabytes.

Formatted Documents - Maximum number of caches (default: 100,000): Define the maximum number of caches allowed in the project at one time. Note: The default value for the maximum number of caches is 100,000. The maximum value that you can set is 999,999. If you enter any positive value greater than 999,999, the value sets itself to 1,000,000. If you enter ?, Intelligence Server uses the default value.

RAM swap multiplier (default: 2): This setting controls how much memory is swapped to disk, relative to the size of the cache being swapped into memory. For example, if the RAM swap multiplier setting is 2 and the requested cache is 80 kilobytes, 160 kilobytes are swapped from memory to disk. Increasing this setting can increase caching efficiency in cases where the cache memory is full and several concurrent reports are trying to swap from disk.

Maximum RAM for report caches index (%) (default: 100): This setting determines what percentage of the amount of memory specified in the Maximum RAM usage limits can be used for result cache lookup tables. If your reports and documents contain many prompt answers, the cache lookup table may reach this limit. At this point, Intelligence Server no longer creates new caches. To continue creating new caches, you must either remove existing caches to free up memory for the cache lookup table, or increase this limit. The default value for this setting is 100%, and the values can range from 10% to 100%.

Load caches on startup (default: Checked): When this setting is enabled, Intelligence Server loads report caches from disk at startup until the maximum RAM usage for caches has been reached.
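The RAM swap multiplier's worked example (multiplier 2, an 80 KB cache, 160 KB swapped out) is simple proportional arithmetic. The sketch below models only the rule as described; the function name is illustrative:

```python
def kb_swapped_to_disk(requested_cache_kb: int, ram_swap_multiplier: int) -> int:
    """KB of cached data swapped from memory to disk to make room for a
    requested cache, per the 'RAM swap multiplier' description:
    multiplier times the requested cache size."""
    return requested_cache_kb * ram_swap_multiplier

# The table's example: multiplier 2 and an 80 KB cache swap out 160 KB.
print(kb_swapped_to_disk(80, 2))  # 160
```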

Caching - Result Caches - Maintenance

Never expire caches (default: Checked): Causes caches to never automatically expire. MicroStrategy recommends selecting the Never expire caches checkbox instead of using time-based result cache expiration. A cache is dependent on the results in your data source and should only be invalidated when events occur that result in the cache no longer being valid. For example, for a daily report, the cache may need to be deleted when the data warehouse is loaded. For weekly reports, you may want to delete the cache and recreate it at the end of each week. In production systems, cache invalidation should be driven by events such as Warehouse Load or End Of Week, not short-term time-based occurrences such as 24 hours.

Cache duration (hours) (default: 24): Select the number of hours that a report cache should exist before it expires. Note: When a cache is updated, the current cache lifetime is used to determine the cache expiration date based on the last update time of the cache. This means that changing the Cache duration (hours) setting or the Never expire caches setting does not affect the expiration date of already existing caches. It only affects new caches that are being or will be processed.

Do not apply automatic expiration logic for reports containing dynamic dates (default: Unchecked): By default, the caches for reports based on filters that use dynamic dates always expire at midnight of the last day in the dynamic date filter. For example, a report has a filter based on the dynamic date Today. If this report is executed on Monday, the cache for this report expires at midnight on Monday. This is because a user who runs the report on Tuesday expects to view data from Tuesday, not the cached data from Monday. To change the default behavior, select the Do not apply automatic expiration logic for reports containing dynamic dates checkbox. When this setting is enabled, report caches with dynamic dates expire in the same way as other report caches, according to the other cache duration settings.

Purge Caches (default: Unclicked): Click Purge Now to purge result caches. To ensure that a re-run report displays the most recent data stored in your data source, you should purge caches regularly.
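The default expiration rule for dynamic-date caches (expire at midnight at the end of the last day covered by the filter) can be modeled as follows. This is an illustrative sketch of the described behavior, not product code:

```python
from datetime import date, datetime, timedelta

def dynamic_date_cache_expiry(last_day_in_filter: date) -> datetime:
    """Midnight at the end of the given day, i.e. the first instant of the
    following day: a cache for a 'Today' filter executed on Monday expires
    as Monday turns into Tuesday."""
    next_day = last_day_in_filter + timedelta(days=1)
    return datetime(next_day.year, next_day.month, next_day.day)

# A report filtered on the dynamic date Today, executed on Monday 2024-09-02:
print(dynamic_date_cache_expiry(date(2024, 9, 2)))  # 2024-09-03 00:00:00
```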


Caching - Auxiliary Caches - Objects

Server

Maximum RAM usage (MB) (default: 1024): Set the maximum RAM usage (MB) for object caching for the server. The object cache is non-schema metadata information used by Intelligence Server to speed the retrieval of objects from the metadata. A value of -1 indicates the default value of 50 megabytes. A value of 0 resets to the minimum value of 1 megabyte.

Purge Object Cache (default: Unclicked): Click Purge Now to delete all object caches.

Client

Maximum RAM usage (MB) (default: 10): Set the maximum RAM usage (MB) for object caching for the client. The object cache is non-schema metadata information used by Intelligence Server to speed the retrieval of objects from the metadata. A value of -1 indicates the default value of 50 megabytes. A value of 0 resets to the minimum value of 1 megabyte.
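The object-cache RAM setting uses two sentinel values (-1 and 0). A sketch of the resolution rule described above, with illustrative names, not actual product code:

```python
OBJECT_CACHE_DEFAULT_MB = 50  # applied when the setting is -1
OBJECT_CACHE_MINIMUM_MB = 1   # applied when the setting is 0

def object_cache_ram_mb(setting: int) -> int:
    """Resolve the object cache 'Maximum RAM usage (MB)' setting: -1 falls
    back to the 50 MB default, 0 resets to the 1 MB minimum, and any other
    value is used directly."""
    if setting == -1:
        return OBJECT_CACHE_DEFAULT_MB
    if setting == 0:
        return OBJECT_CACHE_MINIMUM_MB
    return setting
```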

Caching - Auxiliary Caches - Elements

Server

Maximum RAM usage (MB) (default: 512): Set the maximum RAM usage (MB) for element caching for the server. The element cache is stored in memory or in a cache file located on the Intelligence Server. If the element cache memory setting is not large enough to hold a newly created cache, that cache is not created. Even though the cache may have been purged on the Intelligence Server machine, it may still be created on the Developer client machine.

Create element caches per passthrough login (default: Checked): Select this checkbox to create element caches for each passthrough login. If you use database authentication, for security reasons MicroStrategy recommends selecting this checkbox. This ensures that users who execute their reports using different pass-through login IDs do not use the same cache.

Create caches per connection map (default: Checked): Select this checkbox to create caches per connection map. Use this setting if connection mapping is used. See Controlling Access to the Database: Connection Mappings for more information.

Purge element caches (default: Unclicked): Click Purge Now to delete all element caches.

Client

Maximum RAM usage (MB) (default: 1): Set the maximum RAM usage (MB) for element caching for the client. The element cache is stored in memory or in a cache file located on the Intelligence Server. If the element cache memory setting is not large enough to hold a newly created cache, that cache is not created. Even though the cache may have been purged on the Intelligence Server machine, it may still be created on the Developer client machine.

Caching - Subscription Execution

Re-run History List and mobile subscriptions against the warehouse (default: Unchecked): Select this checkbox to create caches or update existing caches when a report or document is executed and that report/document is subscribed to the History List folder or a mobile device.

Re-run file, email, print, or FTP subscriptions against the warehouse (default: Unchecked): Select this checkbox to create caches or update existing caches when a report or document is executed and that report/document is subscribed to a file, email, print, or FTP device.

Do not create or update matching caches (default: Unchecked): Select this checkbox to prevent the subscription from creating or updating matching caches. If this checkbox is cleared, matching caches are created or updated as per the standard report or document caching rules.

Keep document available for manipulation for History List subscriptions only (default: Checked): Select this checkbox to retain a document or report that was delivered to the History List folder for later manipulation.

Intelligent Cubes - General

Intelligent Cube file directory (default: .\Cube\SERVER_DEFINITION_NAME\): Specifies the file location in which Intelligent Cubes are stored when you select to save an Intelligent Cube to secondary storage. Along with storing Intelligent Cubes in Intelligence Server memory, you can store them in secondary storage, such as a hard disk. These Intelligent Cubes can then be loaded from this secondary storage into Intelligence Server memory when reports require access to the Intelligent Cube data.

Maximum RAM usage (MB) (default: 256): Limits the amount of Intelligent Cube data stored in Intelligence Server memory at one time for a project. The default is 256 megabytes. The total amount of memory used on Intelligence Server by Intelligent Cubes for a project is calculated and compared to the limit you define. If an attempt to load an Intelligent Cube would exceed this limit, an Intelligent Cube is removed from Intelligence Server memory before the new Intelligent Cube is loaded into memory.

Maximum number of cubes (default: 1,000): Limits how many Intelligent Cubes are stored in Intelligence Server memory at one time for a project. The total number of Intelligent Cubes for a project that are stored in Intelligence Server memory is compared to the limit you define. If an attempt to load an Intelligent Cube would exceed the limit, an Intelligent Cube is removed from Intelligence Server memory before the new Intelligent Cube is loaded into memory.

Maximum cube size allowed for download (MB) (default: 100): Defines the maximum cube size, in megabytes, that can be downloaded from Intelligence Server. Additionally, this value is used by Distribution Services when sending an .mstr file by email. If the cube size is greater than the specified value, the .mstr file will not be sent by email.

Maximum % growth of an Intelligent Cube due to indexes (default: 500): Defines the maximum that indexes are allowed to add to the Intelligent Cube's size, as a percentage of the original size. For example, a setting of 50 percent defines that a 100 MB Intelligent Cube can grow to 150 MB due to its indexes. If the Intelligent Cube's size exceeds this limit, the least-used indexes are dropped from the Intelligent Cube.

Cube growth check frequency (minutes) (default: 30): Defines, in minutes, how often the Intelligent Cube's size is checked and, if necessary, how often the least-used indexes are dropped.

Create Intelligent Cubes by database connection (default: Unchecked): Select this checkbox to define your Intelligent Cubes to use and support connection mapping. If you do not define Intelligent Cubes to support connection mapping when connection mapping is used in a project, users may be able to access data they are not intended to have access to. When an Intelligent Cube that supports connection mapping is published, it uses the connection mapping of the user account that published the Intelligent Cube. Only users that have this connection mapping can create and view reports that access this Intelligent Cube. This maintains the data access security and control defined by your connection mappings. If an Intelligent Cube needs to be available for multiple connection mappings, you must publish a separate version of the Intelligent Cube for each of the required connection mappings.

Load Intelligent Cubes on startup (default: Checked): Select this checkbox to include the process of loading all published Intelligent Cubes as one of the tasks completed when Intelligence Server is started. Report runtime performance for reports accessing Intelligent Cubes is optimized because the Intelligent Cube for the report has already been loaded. However, the overhead experienced during Intelligence Server startup is increased because of the processing of loading Intelligent Cubes. You can clear this checkbox to exclude the process of loading all published Intelligent Cubes as one of the tasks completed when Intelligence Server is started. The overhead experienced during Intelligence Server startup is decreased as compared to including loading Intelligent Cubes as part of the startup tasks. However, report runtime performance for reports accessing Intelligent Cubes can be negatively affected because the Intelligent Cube must first be loaded into Intelligence Server. To avoid these report performance issues, you can load Intelligent Cubes manually or with subscriptions after Intelligence Server is started.

Allow reports to drill outside the Intelligent Cube (default: Unchecked): All reports that access Intelligent Cubes allow you to drill within the data included in an Intelligent Cube. This provides ROLAP-type analysis without having to re-execute against the data warehouse. For example, an Intelligent Cube includes Year and Quarter. A report accessing the Intelligent Cube includes only Year on the report. On the report, you can drill down from Year to Quarter, which returns the results without any extra load on the data warehouse or Intelligence Server. The decision to enable or disable drilling outside an Intelligent Cube depends on several factors. You should consider the size and complexity of your Intelligent Cubes when deciding whether to enable drilling outside an Intelligent Cube. Enabling drilling outside relatively small Intelligent Cubes can give the benefit of ROLAP analysis through drilling, but enabling this analysis on relatively large Intelligent Cubes can put increased load on your data warehouse and Intelligence Server. See the In-memory Analytics Help for steps to define the drilling behavior of Intelligent Cubes.

Load Intelligent Cubes into Intelligence Server memory upon publication (default: Checked): Select this checkbox to load Intelligent Cubes into Intelligence Server memory when the Intelligent Cube is published. Intelligent Cubes must be loaded into Intelligence Server memory to allow reports to access and analyze their data. To conserve Intelligence Server memory, clear this checkbox to define Intelligent Cubes to be stored in secondary storage only when published. The Intelligent Cube can then be loaded into Intelligence Server memory manually, using schedules, or whenever a report attempts to access the Intelligent Cube.

Dynamic Sourcing

Enable Dynamic Sourcing (default: Checked): Select this checkbox to enable dynamic sourcing for the entire project, or clear this checkbox to disable dynamic sourcing for the entire project.

Make Intelligent Cubes available for Dynamic Sourcing by default (default: Unchecked): Select this checkbox to enable dynamic sourcing for all Intelligent Cubes in a project. To disable dynamic sourcing as the default behavior for all Intelligent Cubes in a project, clear this checkbox.

Allow Dynamic Sourcing even if outer join properties are not set (default: Unchecked): Select this checkbox to make Intelligent Cubes available for dynamic sourcing even if some outer join properties are not set. However, this may cause incorrect data to be shown in reports that use dynamic sourcing.
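The index-growth ceiling is a percentage of the cube's original size. The worked example above (a 50 percent setting lets a 100 MB cube reach 150 MB) corresponds to the following arithmetic; the helper names are illustrative only:

```python
def max_cube_size_mb(base_size_mb: float, max_growth_pct: float) -> float:
    """Largest size an Intelligent Cube may reach once index growth is
    included, per 'Maximum % growth of an Intelligent Cube due to indexes'."""
    return base_size_mb * (1 + max_growth_pct / 100.0)

def should_drop_indexes(current_size_mb: float, base_size_mb: float,
                        max_growth_pct: float) -> bool:
    """True when the cube has outgrown its ceiling, i.e. when the least-used
    indexes would be dropped at the next growth check."""
    return current_size_mb > max_cube_size_mb(base_size_mb, max_growth_pct)

# The table's example: a 50% setting caps a 100 MB cube at 150 MB.
print(max_cube_size_mb(100, 50))  # 150.0
```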

Statistics - General

Statistics Connection (default: <None>): The statistics database is listed in the Statistics Connection field. You set the statistics database instance in the Project Configuration Editor: Database Instances category, Statistics subcategory.

Basic statistics (default: Unchecked): User session and project session analysis. This option must be selected for any statistics to be logged.

Advanced Statistics Collection Options

Report job steps (default: Unchecked): Detailed statistics on the processing of each report.

Document job steps (default: Unchecked): Detailed statistics on the processing of each document.

Report job SQL (default: Unchecked): The generated SQL for all report jobs. Warning: This option can create a very large statistics table. Select this option only when you need the job SQL data.

Report job tables/columns accessed (default: Unchecked): Data warehouse tables and columns accessed by each report.

Mobile Clients (default: Unchecked): Detailed statistics on reports and documents that are executed on a mobile device.

Mobile Clients Manipulations (default: Unchecked): Statistics on reports and documents executed on mobile clients as a result of a manipulation of the report or document.

Mobile Clients Location (default: Unchecked): Location information logged from a mobile client.

Statistics - Purge

Select dates

From (default: Today minus one year): The beginning date of the date range for which to purge statistics.

To (default: Today): The end date of the date range for which to purge statistics.

Purge timeout (seconds) (default: 10): The number of seconds to wait for the purge process to finish. If the process does not respond by the end of this time, a timeout for the process occurs, and the system does not continue to take up system resources trying to start the process.

Advanced >> (default: Unselected): You can select the categories of staging statistics to be purged for the selected period.

Purge Now (default: Unclicked): Starts the purge process.

Project Access - General

Select a security role from the following list (default: Empty): Use this drop-down list to view existing security roles and to assign a security role to a group or to individual users.

Available members (default: Empty): Use this list to select the groups or users or both you want to assign a security role to. To search for a group or user name, type a name into the Find field and click the Filter button next to the Find field.

Show users (default: Unchecked): Select this checkbox to display user names in a selected group.

Selected members (default: Empty): This box displays any user or group that has the selected security role assigned to them. You can assign security roles by using the right arrow to move users and groups from the Available members box on the left to the Selected members box on the right.

Security Filter - General

Security Filter Manager (default: Empty): Click Modify to open the Security Filter Manager. From this manager you can assign security filters to groups or individual users and modify a security filter's definition.

Security Filter Merge Options

Union (OR) Security Filters on related attributes, intersect (AND) Security Filters on unrelated attributes (default: Selected): By default, MicroStrategy merges related security filters with OR and unrelated security filters with AND. That is, if two security filters are related, the user can see all data available from either security filter. However, if the security filters are not related, the user can see only the data available in both security filters. Two security filters are considered related if the attributes that they derive from belong in the same hierarchy, such as Country and Region, or Year and Month.

Intersect (AND) all Security Filters (default: Unselected): You can also configure Intelligence Server to always merge security filters with an AND, regardless of whether they are related. This setting may cause problems if a user is included in two mutually exclusive security filters. For example, a user who is a member of both the Northeast and Southeast regions cannot see any data from the Geography hierarchy.
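The default merge rule (OR within a hierarchy, AND across hierarchies) can be sketched as follows. The grouping key, filter strings, and SQL-like output are purely illustrative; Intelligence Server's actual merge operates on filter objects, not on text:

```python
def merge_security_filters(filters_by_hierarchy: dict) -> str:
    """Combine security filters per the default rule: filters whose
    attributes share a hierarchy are OR-ed together, and the resulting
    per-hierarchy groups are AND-ed."""
    groups = []
    for related_filters in filters_by_hierarchy.values():
        groups.append("(" + " OR ".join(related_filters) + ")")
    return " AND ".join(groups)

# Two related Geography filters and one Time filter:
print(merge_security_filters({
    "Geography": ["Region = 'Northeast'", "Region = 'Southeast'"],
    "Time": ["Year = 2024"],
}))
# (Region = 'Northeast' OR Region = 'Southeast') AND (Year = 2024)
```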

Report Definition - SQL Generation

Attribute weights (default: Empty): Click Modify to open the Attribute Weights dialog box. Here you can define attribute weights to be used when creating a temporary table index.

Warehouse catalog (default: Empty): Click Catalog Options to open the Warehouse Catalog dialog box. Configure options for the Warehouse Catalog.

Attribute creation (default: Empty): Click Attribute Options to open the Attribute Creation Rules dialog box. Set the default behavior for attribute creation dialogs.

Fact creation (default: Empty): Click Fact Options to open the Fact Creation Rules dialog box. Set the default behavior for fact creation dialogs.

Custom column creation (default: Checked): Determine whether the system should check for invalid characters and name length in custom column names.

Database Instance Ordering

Use MultiSource Option default ordering (default: Selected): If data is available in multiple data sources through MultiSource Option, the primary database instance is used if it has the necessary data. If the data is only available in other secondary data sources, one of the secondary data sources that includes the data is used to retrieve the necessary data using some basic internal logic. Any data source priority you defined using Database Instance Ordering is ignored. By selecting this option, this MultiSource Option default ordering is used for all reports in a project. You can enable or disable the use of this ordering for individual reports.

Use project level database instance ordering (default: Unselected): If data is available in multiple data sources through MultiSource Option, the data source used to retrieve the data is based on the priority that you defined using Database Instance Ordering. If data is only available in a data source that is not included in the priority list, then an applicable data source is chosen using the standard MultiSource Option logic. By selecting this option, the data source priority list you defined for the project is used for all reports in a project. You can enable or disable the use of this ordering for individual reports.

Report Definition - Null Values

Null display settings: Set the values to be displayed in reports in the following cases:

When there is an empty value in the data retrieved from the warehouse (default: Empty).

When there is an empty value in the data as a result of the cross-tabulation process (default: Empty).

The value to be used in place of empty values when the report data is sorted (default: Unchecked).

Aggregation null values (default: --): Set the value to be displayed in the reports when the metric value cannot be calculated at the desired level.

Missing Object Display (default: Empty): Enter the value to display when an object on a report or document references an object that is not available. For example, a document may use a metric that is not included in a dataset report, or an MDX report may use an attribute that is no longer mapped to an MDX cube. The types of objects that may encounter this problem include MDX reports, Freeform SQL reports, views within a working set report, and documents.

Report Definition - Graph

Use graph default font

Override the default font in the graph template when creating a new chart (default: Unchecked): Select this checkbox to define a character set and font to override the defaults. Select the character set to be used for the new graph: select a character set from the drop-down list; this character set overrides the default character set when creating a new graph. Select the font to be used for the new graph: select a font from the drop-down list; this font overrides the default font when creating a new graph.

Null values

Use zero instead of null values (default: Unchecked): Select this checkbox to display a zero value (0) in place of null values when a report is displayed as a graph. This acts as the default behavior for all graph reports in a project. You can also define this support for each individual graph report using the Graph Preferences dialog box.

Rounded Effects (default: Standard): Select the manner in which rounded effects are applied to graph reports in the project. You can then define whether rounded effects are applied for each individual graph report using the Graph Preferences dialog box. You have the following options for how rounded effects are applied to graph reports:

<None>: Select this option to disable rounded effects from being applied to graph reports in the project. Even if the individual graph reports are defined to use rounded effects, no rounded effects are used for the graph reports. Instead of using these rounded effects, you can define bevel and fill formatting for each individual series marker in graph reports. Disabling the rounded effects can also allow thresholds to be displayed on certain graph types.

Standard: Select this option to enable the standard rounded effects to be applied to graph reports in the project, but disable threshold formatting for Horizontal Bar graphs. This is the default option to support backward compatibility. However, you should select Optimized to support both rounded effects and threshold formatting for most graph types.

Optimized: Select this option to enable the optimized rounded effects to be applied to graph reports in the project. This applies the rounded effects to the graph series, while also keeping any formatting that was applied through thresholds for graph types such as Bar graphs. However, some graph types cannot support both rounded effects and thresholds. You can support the display of thresholds for those graph types by selecting <None> to disable rounded effects for all graphs in a project, or by using the Graph Preferences dialog box to disable rounded effects for individual graph reports.

Report Definition - Advanced

l No data returned (Default: Empty): This is the message that will be
displayed when the report execution has no data as a result.

l Display message in grids (Default: Selected): If you select this option,
empty Grid/Graphs display a message as described below. If no text is
entered in the No data returned field, empty Grid/Graphs display the default
message (No data returned) in the localized document language. If text has
been entered in the No data returned field, empty Grid/Graphs display that
text.

l Hide document grid (Default: Unselected): If you select this option, empty
Grid/Graphs are displayed as blank Grid/Graphs. Any text entered in the No
data returned field is not displayed.

l Retain page-by selections when you save a report (Default: Checked):
Select this checkbox if you want to retain page-by selections when saving a
report in this project.

l Overwriting reports with MicroStrategy 9 OLAP Service Reports (Default:
Allow with a warning message): (OLAP Services only) Determine how differing
report versions will be handled for OLAP Services reports. Previous versions
of MicroStrategy clients, such as Web or Developer, may not display correct
formatting for OLAP Services reports created or edited and saved in version
9.0 and later. If your environment uses a mixture of 9.0 clients and older
clients, select one of the following settings to determine how the system
should handle OLAP Services reports. If a user attempts to save a 9.0 OLAP
Services report with the same name as an older report, select one of the
settings below to determine the outcome:

    l Allow without a warning message: The 9.0 report automatically replaces
    the older report; the user is not warned.

    l Allow with a warning message: A warning message opens. The user can
    overwrite the older report, cancel the save, or change the name of the
    9.0 report so that the older report is not overwritten.

    l Prevent: The user is not allowed to overwrite the older report, and is
    prompted to save the 9.0 report with a different name.

l Move sort keys with pivoting unit (Default: Checked): When you pivot (move
an object between the rows and the columns of a report), this option
determines whether the pivoted object retains its sorting.

Language - Metadata

l Selected Languages (Default: Based on machine settings): Select the
languages that will be available for translations of text strings of
objects, such as report names, descriptions, and custom group elements in
the metadata.


Language - Data

l Enable data internationalization (SQL based or Connection mapping based)
(Default: Checked, with SQL based selected): The Enable data
internationalization checkbox allows users to translate the objects in the
selected project.

l Selected Languages (Default: Based on machine settings): Select the
languages that will be available for translations of text strings of
objects, such as report names, descriptions, and custom group elements in
the metadata.

Language - User Preferences

l User Language Preferences Manager (Default: Empty): Click Modify to
specify the metadata and data language for this project by individual user.

l Metadata language preference for all users in this project (Default:
Default): Select the metadata language to be used in this project.

l Data language preference for all users in this project (Default: Default):
Select the data language to be used in this project.


Deliveries - Email Delivery - Email Notification

l Enable email notification to administrator for failed email delivery
(Default: Checked): Select this checkbox to send a notification email to the
recipient when the subscribed report or document is delivered to the file
location. If this checkbox is cleared, all other options in this category
are disabled.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): Email, file, FTP, print, History List,
Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): Status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l Email address (Default: Checked): The address to which the system
attempted to send the failed delivery.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Enter the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - Email Delivery - Compression

l Enable compression (Default: Checked): Select this checkbox to compress
the report or document that is subscribed and delivered to a file location.

l Level of compression (Default: Medium): Select the level of compression
for the subscribed report or document as High, Medium, or Low from the
drop-down list.

l Extension of compressed file (Default: zip): In this field, specify the
file extension for the compressed files. The default extension is ZIP.
Reports or documents are compressed using the ZIP algorithm. Changing the
extension does not change the algorithm; it changes only the file extension.
For example, some network environments do not allow files with the ZIP
extension to be delivered as email attachments. In these environments,
changing the file extension is necessary for the attachments to be
delivered.

Deliveries - Email Delivery - Email Footer

l Append the following footer (Default: Unchecked): Select this checkbox and
type the text that you want to add as a footer in the email that is sent to
email subscription recipients.

Deliveries - File Delivery - Email Notification

l Enable email notification for file delivery (Default: Checked): Select
this checkbox to send a notification email to the recipient when the
subscribed report or document is delivered to the file location. If this
checkbox is cleared, all other options in this category are disabled.

l Send notification to recipient when delivery fails (Default: Checked):
Select this checkbox to send a notification email to the recipient when the
subscribed report or document fails to be delivered on schedule.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): The delivery method of email, file,
FTP, print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): The status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l File location (Default: Checked): Location of the file.

l Link to file (Default: Checked): Hyperlink to the file.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Type the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - File Delivery - Compression

l Enable compression (Default: Checked): Select this checkbox to compress
the report or document that is subscribed and delivered to a file location.

l Level of compression (Default: Medium): Select the level of compression
for the subscribed report or document as High, Medium, or Low from the
drop-down list.

l Extension of compressed file (Default: zip): In this field, specify the
file extension for the compressed files. The default extension is ZIP.
Reports or documents are compressed using the ZIP algorithm. Changing the
extension does not change the algorithm; it changes only the file extension.
For example, some network environments do not allow files with the ZIP
extension to be delivered as email attachments. In these environments,
changing the file extension is necessary for the attachments to be
delivered.

Deliveries - FTP Delivery - Email Notification

l Enable email notification for FTP delivery (Default: Checked): Select this
checkbox to send a notification email to the recipient when the subscribed
report or document is delivered to the file location. If this checkbox is
cleared, all other options in this category are disabled.

l Send notification to recipient when delivery fails (Default: Checked):
Select this checkbox to send a notification email to the recipient when the
subscribed report or document fails to be delivered on schedule.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): The delivery method of email, file,
FTP, print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): The status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l File location (Default: Checked): Location of the file.

l Link to file (Default: Checked): Hyperlink to the file.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Type the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - FTP Delivery - Compression

l Enable compression (Default: Checked): Select this checkbox to compress
the report or document that is subscribed and delivered to a file location.

l Level of compression (Default: Medium): Select the level of compression
for the subscribed report or document as High, Medium, or Low from the
drop-down list.

l Extension of compressed file (Default: zip): In this field, specify the
file extension for the compressed files. The default extension is ZIP.
Reports or documents are compressed using the ZIP algorithm. Changing the
extension does not change the algorithm; it changes only the file extension.
For example, some network environments do not allow files with the ZIP
extension to be delivered as email attachments. In these environments,
changing the file extension is necessary for the attachments to be
delivered.


Deliveries - Printing - Email Notification

l Enable email notification for printing (Default: Checked): Select this
checkbox to send a notification email to the recipient when the subscribed
report or document is delivered to the file location. If this checkbox is
cleared, all other options in this category are disabled.

l Send notification to recipient when delivery fails (Default: Checked):
Select this checkbox to send a notification email to the recipient when the
subscribed report or document fails to be delivered on schedule.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): Delivery method of email, file, FTP,
print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): Status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l Printer name (Default: Checked): The name of the printer.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Enter the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - Printing - PDF Properties

l Enable print range (Default: Checked)

Deliveries - History List - Email Notification

l Enable email notification for History List (Default: Checked): Select this
checkbox to send a notification email to the administrator when a subscribed
report or document fails to be delivered to the cache. If this checkbox is
cleared, all other options in this category are disabled.

l Send notification to recipient when delivery fails (Default: Checked):
Select this checkbox to send a notification email to the recipient when the
subscribed report or document fails to be delivered on schedule.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): Delivery method of email, file, FTP,
print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): Status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l Link to History List (Default: Checked): A link to the report or
document's History List message.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Type the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - Mobile Delivery - Email Notification

l Enable email notification to administrator for failed mobile delivery
(Default: Checked): Select this checkbox to send a notification email to the
administrator when a subscribed report or document fails to be delivered to
the cache. If this checkbox is cleared, all other options in this category
are disabled.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): Delivery method of email, file, FTP,
print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): Status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each delivery notification, select this checkbox and type the message in the
field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Type the email address of a system administrator to
receive a notification email for the failed delivery.

Deliveries - Mobile Delivery - Real Time Updates

l Enable real time updates for mobile delivery (Default: Checked): Select
this checkbox to enable updated report and document data to be automatically
sent to Mobile users that are subscribed to the report or document.


Deliveries - Cache - Email Notification

l Enable email notification to administrator for failed cache creation
(Default: Checked): Select this checkbox to send a notification email to the
administrator when a subscribed report or document fails to be delivered to
the cache. If this checkbox is cleared, all other options in this category
are disabled.

l Recipient name (Default: Checked): The MicroStrategy user or contact that
subscribed to the delivery.

l Owner name (Default: Checked): The owner of the subscription.

l Report or Document name (Default: Checked): Name of the subscribed report
or document.

l Project name (Default: Checked): Project containing the report or
document.

l Delivery method (Default: Checked): Delivery method of email, file, FTP,
print, History List, Cache, or Mobile.

l Schedule (Default: Checked): The schedule associated with the
subscription.

l Subscription name (Default: Checked): The name of the subscription.

l Delivery status (Default: Checked): Status of the delivery, such as
Complete, Timed Out, or Error.

l Date (Default: Checked): Date of the delivery.

l Time (Default: Checked): Time of the delivery.

l Error message (Default: Checked): The specific error message for a failed
delivery.

l Append the following text (Default: Checked): To include a message with
each cache delivery notification, select this checkbox and type the message
in the field.

l Send notification to this administrator email address when delivery fails
(Default: Empty): Type the email address of a system administrator to
receive a notification email for the failed cache delivery.

Deliveries - Error Handling

No data returned:

l Deliver (Default: Do not deliver): Select whether to Deliver or Do not
deliver the subscription if the report or document execution returns no
data.

l Notify (Default: Do not notify): Select whether to Notify or Do not notify
the subscription if the report or document execution returns no data.

Partial results:

l Deliver (Default: Do not deliver): Select whether to Deliver or Do not
deliver the subscription if the report or document execution returns only
some of the data. Partial results are delivered when the size of the report
or document exceeds the memory governing setting for Maximum memory
consumption for PDF files and Excel files.

l Notify (Default: Do not notify): Select whether to Notify or Do not notify
the subscription if the report or document execution returns only some of
the data. Partial results are delivered when the size of the report or
document exceeds the memory governing setting for Maximum memory
consumption for PDF files and Excel files.

Best Practices for Tuning your System


MicroStrategy recommends the following best practices for designing,
configuring, and tuning your MicroStrategy system. For detailed information
about increasing system performance by tuning the governing settings, see
the remainder of this section.

l When designing your projects and data warehouse, follow the


MicroStrategy best practices as outlined in Prerequisites.

l When configuring your network, follow the MicroStrategy best practices as


outlined in How the Network can Affect Performance, page 1028.

l Use Intelligence Server's Memory Contract Manager to manage memory


usage, as described in Governing Intelligence Server Memory Use with
Memory Contract Manager, page 1039.

l Use MicroStrategy system privileges to restrict users' access to certain


features, as described in Governing User Profiles, page 1069.

l Assign a high priority to more time-sensitive jobs, and a low priority to jobs
that may use a great deal of system resources, as described in Prioritize
Jobs, page 1086.

l Enable Intelligence Server thread balancing, as described in Results


Processing, page 1090.


l Ensure that report and document designers are aware of the features that
can place an exceptionally heavy load on the system. These features are
listed in detail in Design Reports, page 1104.

l Enable automatic memory tuning and dynamic buffer sizing.

Configure Automatic Memory Tuning


MicroStrategy Intelligence Server uses a memory management component
called SmartHeap. This component is designed to maximize memory
allocation performance, especially in a multi-threaded and multi-processor
environment.

The table below details the environment variable settings you can use to
adjust automatic memory tuning.

l MSTR_MEM_CACHE_AUTO_CONFIG=0: Enables light memory tuning and disables
dynamic memory tuning. The memory usage is consistent with 10.10 and
previous versions.

l MSTR_MEM_CACHE_AUTO_CONFIG=1: Enables high-level, aggressive memory
tuning. This can provide performance improvements under concurrency for some
cases. The memory usage is also expected to have a considerable increase.

l MSTR_MEM_CACHE_AUTO_CONFIG=2: Enables middle-level, conservative memory
tuning. This is the default behavior in 11.0. It can provide some
performance improvements under concurrency for machines larger than 256 GB
with more than 64 cores. The memory usage is expected to slightly increase,
but it is lower than MSTR_MEM_CACHE_AUTO_CONFIG=1. This setting does not
apply to versions prior to 11.0.

l MSTR_MEM_CACHE_AUTO_CONFIG is not set: If you do not set this variable,
the default behavior is used. In MicroStrategy 10.11 and previous versions,
the default behavior is the same as MSTR_MEM_CACHE_AUTO_CONFIG=0. In
MicroStrategy 11.0, the default behavior is the same as MSTR_MEM_CACHE_
AUTO_CONFIG=2.
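For example, on Linux you might export the variable in the shell or service
script that launches Intelligence Server, so the server process inherits it.
This is a minimal sketch; the start command itself is environment-specific
and omitted here:

```shell
# Sketch only: select middle-level (conservative) memory tuning before
# starting Intelligence Server. The variable must be present in the
# environment of the Intelligence Server process itself.
export MSTR_MEM_CACHE_AUTO_CONFIG=2

# Confirm the value that the server process will inherit.
echo "MSTR_MEM_CACHE_AUTO_CONFIG=${MSTR_MEM_CACHE_AUTO_CONFIG}"

# Start Intelligence Server here with your normal start command.
```

Unsetting the variable (unset MSTR_MEM_CACHE_AUTO_CONFIG) restores the
version-dependent default behavior described in the last row of the table.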

Enabling middle level or high level memory tuning can potentially increase
the memory footprint of Intelligence Server. However, Intelligence Server
has the ability to release the cached memory in Smartheap when
Intelligence Server is about to hit Memory Contract Manager denial.

For details about how Intelligence Server releases the cached memory in
Smartheap, please refer to the knowledge base in MicroStrategy
Community.

Design System Architecture


The choices that you make when designing the architecture of your
MicroStrategy system have a significant impact on system performance and
capacity.


Choices that you must make when designing your system architecture
include:

l How the data warehouse is configured (see How the Data Warehouse can
Affect Performance, page 1027)

l The physical location of machines relative to each other and the amount of
bandwidth between them (see How the Network can Affect Performance,
page 1028)

l Whether you cluster several Intelligence Servers together and what


benefits you can get from clustering (see How Clustering can Affect
Performance, page 1032)

How the Data Warehouse can Affect Performance


The data warehouse is a crucial component of the business intelligence
system. If it does not perform well, the entire system's performance suffers.
The data warehouse platform or RDBMS and the data warehouse's design
and tuning are factors that can affect your system's performance.

Platform Considerations
The size and speed of the machines hosting your data warehouse and the
database platform (RDBMS) running your data warehouse both affect the
system's performance. A list of supported RDBMSs can be found in the
Readme. You should have an idea of the amount of data and the number of
users that your system serves, and research which RDBMS can handle that
type of load.

Design and Tuning Considerations


Your data warehouse's design (also called the physical warehouse schema)
and tuning are important and unique to your organization. They also affect
the performance of your business intelligence system. The discussion of the
set of trade-offs that you must make when designing and tuning the data
warehouse is out of the scope of this guide. Examples of the types of
decisions that you must make include:

l Will you use a normalized, moderately normalized, or fully denormalized


schema?

l What kind of lookup, relate, and fact tables will you need?

l What aggregate tables will you need?

l What tables do you need to partition and how?

l What tables will you index?

For more information about data warehouse design and data modeling, see
the Advanced Reporting Help and Project Design Help.

How the Network can Affect Performance


The various components of the MicroStrategy system need to be installed on
different machines for performance reasons. The network plays an important
role in connecting these components. In the diagram below, the separate
components of the MicroStrategy system are linked by lines representing the
network. The steps that occur over each connection are described in the
table below the diagram.


l Step 1 (HTTP/HTTPS): HTML sent from the Web server to the client. Data
size is small compared to other points because results have been
incrementally fetched from Intelligence Server and HTML results do not
contain any unnecessary information.

l Step 2 (TCP/IP or TLS/SSL): XML requests are sent to Intelligence Server.
XML report results are incrementally fetched from Intelligence Server.

l Step 3 (TCP/IP or TLS/SSL): Requests are sent to Intelligence Server. (No
incremental fetch is used.)

l Step 4 (TCP/IP or TLS/SSL): Broadcasts between all nodes of the cluster
(if implemented): metadata changes, Inbox, report caches. Files containing
cache and Inbox messages are exchanged between Intelligence Server nodes.

l Step 5 (TCP/IP or TLS/SSL): Files containing cache and Inbox messages may
also be exchanged between Intelligence Server nodes and a shared cache file
server if implemented (see Sharing Result Caches and Intelligent Cubes in a
Cluster, page 1138).

l Step 6 (ODBC): Object requests and transactions to metadata. Request
results are stored locally in the Intelligence Server object cache.

l Step 7 (ODBC): Complete result set is retrieved from the database and
stored in Intelligence Server memory and/or caches.

The maximum number of threads used in steps 2 and 3 can be controlled in


the Intelligence Server Configuration Editor, in the Server Definition:
General category, in the Number of Network Threads field. Depending on
how your network is configured, one network thread may be sufficient to
serve anywhere from 64 to 1028 user connections.

Network Configuration Best Practices


The network configuration, that is, where the components are installed in
relation to each other, can have a large effect on performance. For example,
if the physical distance between Intelligence Server and the data warehouse
is great, you may see poor performance due to network delays between the
two machines.

MicroStrategy recommends the following best practices for network design:

l Place the Web server machines close to the Intelligence Server machines.

l Place Intelligence Server close to both the data warehouse and the
metadata repository.

l Dedicate a machine for the metadata repository.

l If you use Enterprise Manager, dedicate a machine for the Enterprise


Manager database (statistics tables and data warehouse).

l If you have a clustered environment with a shared cache file server, place
the shared cache file server close to the Intelligence Server machines.


Network Bandwidth and How its Capacity is Used


Your network design depends on the type of reports that your users typically
run. These reports, in turn, determine the load they place on the system and
how much network traffic occurs between the system components.

The ability of the network to quickly transport data between the components
of the system greatly affects its performance. For large result sets, the
highest load or the most traffic typically occurs between the data warehouse
and the Intelligence Servers (indicated by C in the diagram below). The load
between Intelligence Server and Web server is somewhat less (B), followed
by the least load between the Web server and the Web browser (A).

This is illustrated in the diagram and explained below.

l Incremental fetch size directly influences the amount of traffic at A.

l Graphics increase network bandwidth at B.

l The load at C is determined primarily by the number of rows retrieved from the data warehouse. Actions such as sending SQL or retrieving objects from the metadata result in minimal traffic.

l Cached reports do not cause any network traffic at C.

l Report manipulations that do not cause SQL to be generated and sent to the data warehouse (such as pivot, sort, and page-by) are similar to running cached reports.

l Report manipulations that cause SQL to be generated and sent to the data warehouse are similar to running non-cached reports of the same size.


After noting where the highest load is on your network, you can adjust your
network bandwidth or change the placement of system components to
improve the network's performance.

You can tell whether your network configuration has a negative effect on
your system's performance by monitoring how much of your network's
capacity is being used. Use the Windows Performance Monitor for the object
Network Interface, and watch the counter Total bytes/sec as a percent
of your network's bandwidth. If it is consistently greater than 60 percent (for
example), it may indicate that the network is negatively affecting the
system's performance. You may want to use a figure different from 60
percent for your system.

To calculate the network capacity utilization percent, multiply the Total
Bytes per second by 8 (because 1 byte = 8 bits), then divide the result by
the total capacity in bits per second.
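As a hedged illustration of this arithmetic, the following Python sketch converts a Total Bytes/sec reading into a utilization percentage. The function name and the sample figures are illustrative, not taken from the product:

```python
def network_utilization_percent(total_bytes_per_sec, capacity_bits_per_sec):
    """Convert a Performance Monitor Total Bytes/sec reading into a
    percentage of the link's rated capacity (1 byte = 8 bits)."""
    return (total_bytes_per_sec * 8) / capacity_bits_per_sec * 100

# Example: 9 MB/s of traffic on a 1 Gbps link
usage = network_utilization_percent(9_000_000, 1_000_000_000)
print(f"{usage:.1f}%")  # prints 7.2%
```

A reading consistently above your chosen threshold (60 percent in the example above) would suggest investigating the network.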

The Current Bandwidth counter in Performance Monitor gives only an approximate value of the total capacity. You may want to use another network monitoring utility, such as NetPerf, to get the actual bandwidth figure.

How Clustering can Affect Performance


Clustering several Intelligence Server machines provides substantial gains
in memory and CPU capacity because multiple machines are sharing the
work. Clustering has additional benefits for your system as well. The
clustering feature is built into Intelligence Server and is available out of the
box if you have the proper license. For more information on clustering
Intelligence Servers, including instructions, see Chapter 9, Cluster Multiple
MicroStrategy Servers.

Manage System Resources


If you had unlimited money, you could create a system that would impose
few limits on system capacity. While system resources are not the place to
save money when building a business intelligence system, you may not have
all the resources that you would like.

You must make certain choices about how to maximize the use of your
system's resources. Because Intelligence Server is the main component of
the MicroStrategy system, it is important that the machines running it have
sufficient resources for your needs. These resources include:

l The processors (Processor Type, Speed, and Number of Processors, page 1033)

l Physical disk characteristics (Physical Disk, page 1034)

l The amount of memory (Memory, page 1035)

The Installation and Configuration Help contains detailed information about small, medium, and large configurations.

Processor Type, Speed, and Number of Processors


Intelligence Server recognizes the type and speed of the machine's CPUs,
and performs faster on a machine with multiple CPUs. If Intelligence Server
is consistently using a great deal of processor capacity (greater than 80
percent, for example), it may be a sign that a faster processor would improve
the system's capacity. In Windows, you can monitor the processor usage
with the Windows Performance Monitor.

If you upgrade a machine's CPU, make sure you have the appropriate
license to run Intelligence Server on the faster CPU. For example, if you
upgrade the processor on the Intelligence Server machine from a 2 GHz to a
2.5 GHz processor, you should obtain a new license key from MicroStrategy.

Intelligence Server is also aware of the number of processors it is allowed to
use according to the license key that you have purchased. For example, if a
machine running Intelligence Server has two processors and you upgrade it
to four, Intelligence Server uses only the two processors and ignores the
additional two until you purchase a new license key from MicroStrategy.
Also, if several Intelligence Server machines are clustered, the application
ensures that the total number of processors being used does not exceed the
number licensed.

For detailed information about CPU licensing, see CPU Licenses, page 726.

Physical Disk
If the physical disk is used too much on a machine hosting Intelligence
Server, it can indicate a bottleneck in the system's performance. To monitor
physical disk usage in Windows, use the Windows Performance Monitor
counters for the object Physical Disk and the counter % Disk Time. If the
counter is greater than 80 percent on average, it may indicate that the
machine does not have enough memory. This is because when the
machine's physical RAM is full, the operating system starts swapping
memory in and out of the page file on disk. This is not as efficient as using
RAM. Therefore, Intelligence Server's performance may suffer.

By monitoring the disk utilization, you can see if the machine is consistently
swapping at a high level. Defragmenting the physical disk may help lessen
the amount of swapping. If that does not sufficiently lessen the utilization,
consider increasing the amount of physical RAM in the machine. For
information on how Intelligence Server uses memory, see Memory, page
1035.


MicroStrategy recommends that you establish a benchmark or baseline of a
machine's normal disk utilization, perhaps even before Intelligence Server
is installed. This way you can determine whether Intelligence Server is
responsible for excessive swapping because of limited RAM.

Another performance counter that you can use to gauge the disk's utilization
is the Current disk queue length, which indicates how many requests are
waiting at a time. MicroStrategy recommends using the % Disk Time and
Current Disk Queue Length counters to monitor the disk utilization.

Memory
If the machine hosting Intelligence Server has too little memory, it may run
slowly, or even shut down during memory-intensive operations. You can use
the Windows Performance Monitor to monitor the available memory, and you
can govern Intelligence Server's memory use with the Memory Contract
Manager.

Memory Limitations: Virtual Memory


The memory used by Intelligence Server is limited by the machine's virtual
memory.

Virtual memory is the amount of physical memory (RAM) plus the Disk Page
file (swap file). It is shared by all processes running on the machine,
including the operating system.

When a machine runs out of virtual memory, processes on the machine are
no longer able to process instructions and eventually the operating system
may shut down. More virtual memory can be obtained by making sure that as
few programs or services as possible are executing on the machine, or by
increasing the amount of physical memory or the size of the page file.

Increasing the amount of virtual memory, and therefore the available private
bytes, by increasing the page file size may have adverse effects on
Intelligence Server performance because of increased swapping.


Private bytes are the bytes of virtual memory that are allocated to a process.
Private bytes are so named because they cannot be shared with other
processes: when a process such as Intelligence Server needs memory, it
allocates an amount of virtual memory for its own use. The private bytes
used by a process can be measured with the Private Bytes counter in the
Windows Performance Monitor.

The governing settings built into Intelligence Server control its demand for
private bytes by limiting the number and scale of operations which it may
perform simultaneously. In most production environments, depletion of
virtual memory through private bytes is not an issue with Intelligence Server.

How Much Memory does Intelligence Server Use When it Starts Up?
The amount of memory consumed during startup is affected by a number of
factors such as metadata size, the number of projects, schema size, number
of processing units, number of database connection threads required, and
whether Intelligence Server is in a clustered configuration. Because these
factors are generally static, the amount of memory consumed at startup is
fairly constant. This lets you accurately estimate how much memory is
available to users at runtime.

When Intelligence Server starts up, it uses memory in the following ways:

l It initializes all internal components and loads the static DLLs necessary
for operation. This consumes 25 MB of private bytes and 110 MB of virtual
bytes. You cannot control this memory usage.

l It loads all server definition settings and all configuration objects. This
consumes an additional 10 MB of private bytes and an additional 40 MB of
virtual bytes. This brings the total memory consumption at this point to 35
MB of private bytes and 150 MB of virtual bytes. You cannot control this
memory usage.


l It loads the project schema (needed by the SQL engine component) into
memory. The number and size of projects greatly impacts the amount of
memory used. This consumes an amount of private bytes equal to three
times the schema size and an amount of virtual bytes equal to four times
the schema size. For example, with a schema size of 5 MB, the private
bytes consumption would increase by 15 MB (3 * 5 MB). The virtual bytes
consumption would increase by 20 MB (4 * 5 MB). You can control this
memory usage by limiting the number of projects that load at startup time.

l It creates the database connection threads. This primarily affects virtual bytes consumption, with an increase of 1 MB per thread regardless of whether that thread is actually connected to the database. You cannot control this memory usage.

To Calculate the Amount of Memory that Intelligence Server Uses When it Starts

If you are not performing this procedure in a production environment, make
sure that you set all the configuration options as they exist in your
production environment. Otherwise, the measurements will not reflect the
actual production memory consumption.

1. Start Intelligence Server.

2. Once Intelligence Server has started, use Windows Performance Monitor to create and start a performance log that measures Private and Virtual bytes of the MSTRSVR process.

3. While logging with Performance Monitor, stop Intelligence Server. Performance Monitor continues to log information for the Intelligence Server process. You can confirm this by logging the counter information to the current activity window as well as the performance log.

4. Start Intelligence Server again. The amount of memory consumed should be easily measured.


How does Intelligence Server Use Memory After it is Running?


Intelligence Server increases its memory use as needed during its
operation. The following factors determine when memory use increases:

l Additional configuration objects: caching of user, connection map, and schedule and subscription information created or used after Intelligence Server has been started.

l Caches: result (report and document) caches, object caches, and element
caches created after Intelligence Server has been started. The maximum
amount of memory that Intelligence Server uses for result caches is
configured at the project level. For more information about caches, see
Chapter 10, Improving Response Time: Caching.

l Intelligent Cubes: any Intelligent Cubes that have been loaded after
Intelligence Server has been started. The maximum amount of memory
used for Intelligent Cubes is configured at the project level. For details,
see Chapter 11, Managing Intelligent Cubes.

l User session-related resources: History List and Working set memory, which are greatly influenced by governing settings, report size, and report design. For details, see Managing User Sessions, page 1062 and Saving Report Results: History List, page 1240.

l Request and results processing: memory needed by Intelligence Server components to process requests and report results. This is primarily influenced by report size and report design with respect to analytical complexity. For details, see Governing Requests, page 1072 and Results Processing, page 1090.

l Clustering: memory used by Intelligence Server to communicate with other cluster nodes and maintain synchronized report cache and History List information. For more information about clustering, see Chapter 9, Cluster Multiple MicroStrategy Servers.


l Scheduling: memory used by the scheduler while executing reports for users when they are not logged in to the system. For more information about scheduling, see Chapter 12, Scheduling Jobs and Administrative Tasks.

Governing Intelligence Server Memory Use with Memory Contract Manager
Memory Contract Manager (MCM) is designed to protect Intelligence Server
in cases where a memory request would cause the system to approach a
state of memory depletion. When enabled, MCM grants or denies requests
for memory from tasks in Intelligence Server. The requests are granted or
denied according to user-configured limits on the amount of memory
Intelligence Server is allowed to use. Because MCM is a component in
Intelligence Server, it does not manage the actual memory used by
Intelligence Server itself.

MCM governs the following types of requests:

l Database requests from either the MicroStrategy metadata or the data warehouse

l SQL generation

l Analytical Engine processing (subtotals, cross tabulation, analytic functions)

l Cache creation and updating

l Report parsing and serialization for network transfer

l XML generation

The memory load of the requests governed by MCM depends on the amount
of data that is returned from the data warehouse. Therefore, this memory
load cannot be predicted.

Requests such as graphing, cache lookup, or document generation use a
predictable amount of memory and, thus, are not governed by MCM. For
example, a request for a report returns an acceptable amount of data. A
graph of the report's results would be based on the same data and, thus,
would be allowed. Therefore, MCM is not involved in graphing requests. If
the report was not returned because it exceeded memory limits, the graphing
request would never be issued.

Using the Memory Contract Manager


The MCM settings are in the Intelligence Server Configuration Editor, in the
Governing Rules: Default: Memory Settings category.

The Enable single memory allocation governing option lets you specify
how much memory can be reserved for a single Intelligence Server operation
at a time. When this option is enabled, each memory request is compared to
the Maximum single allocation size (MBytes) setting. If the request
exceeds this limit, the request is denied. For example, if the allocation limit
is set to 100 MB and a request is made for 120 MB, the request is denied,
but a request for 90 MB is allowed.

If the Intelligence Server machine has additional software running on it, you
may want to set aside some memory for those processes to use. To reserve
this memory, you can specify the Minimum reserved memory in terms of
either the number of MB or the percent of total system memory. In this case,
the total available memory is calculated as the initial size of the page file
plus the RAM. It is possible that a machine has more virtual memory than
MCM knows about if the maximum page file size is greater than the initial
size.

Intelligence Server always reserves up to 500 MB for its own operation. If
the machine does not have this much memory, or if the Minimum reserved
memory would leave less than 500 MB available for Intelligence Server, no
memory is reserved for other processes.

When MCM receives a request that would cause Intelligence Server's
memory usage to exceed the Minimum reserved memory settings, it denies
the request and goes into memory request idle mode. In this mode, MCM
denies any requests that would deplete memory. MCM remains in memory
request idle mode until the memory used by Intelligence Server falls below a
certain limit, known as the low water mark. For information on how the low
water mark is calculated, see Memory Water Marks, page 1043. For
information about how MCM handles memory request idle mode, see
Memory Request Idle Mode, page 1046.

The Maximum use of virtual address space is applicable in 32-bit
Windows operating systems. For 64-bit operating systems, use the
Minimum reserved memory setting to control the amount of memory
available for Intelligence Server.

The Memory request idle time is the longest time MCM remains in memory
request idle mode. If the memory usage has not fallen below the low water
mark by the end of the Memory request idle time, MCM restarts
Intelligence Server. Setting the idle time to -1 causes Intelligence Server to
remain idle until the memory usage falls below the low water mark.


How does MCM Grant or Deny a Request?


When a task requests memory, it provides MCM with an estimate of how
much memory it requires. If the request is granted, MCM decreases the
amount of available memory and the task allocates memory from the memory
subsystem. When the task is completed or canceled, the memory is released
and the amount of available memory increases.

MCM does not submit memory allocations to the memory subsystem (such
as a memory manager) on behalf of a task. Rather, it keeps a record of how
much memory is available and how much memory has been contracted out to
the tasks.

A memory request is granted if it meets the following criteria:

l It is smaller than the Maximum single allocation size setting.

l It is smaller than the high water mark, or the low water mark if Intelligence
Server is in memory request idle mode. These water marks are derived
from the Intelligence Server memory usage and the Maximum use of
virtual address space and Minimum reserved memory settings. For
detailed explanations of the memory water marks, see Memory Water
Marks, page 1043.

l It is smaller than 80 percent of the largest contiguous block of free memory, to account for memory fragmentation.

To determine whether a memory request is granted or denied, MCM follows the logic in the flowchart below.


Memory Water Marks

The high water mark (HWM) is the highest value that the sum of private
bytes and outstanding memory contracts can reach before triggering
memory request idle mode. The low water mark (LWM) is the value that
Intelligence Server's private byte usage must drop to before MCM exits
memory request idle mode. MCM recalculates the high and low water marks
after every 10 MB of memory requests. The 10 MB value is a built-in
benchmark and cannot be changed.

Two possible values are calculated for the high water mark: one based on
virtual memory, and one based on virtual bytes. For an explanation of the
different types of memory, such as virtual bytes and private bytes, see
Memory, page 1035.

l The high water mark for virtual memory (HWM1 in the diagram above) is
calculated as (Intelligence Server private bytes +
available system memory). It is recalculated for each potential
memory depletion.

The available system memory is calculated using the Minimum reserved memory limit if the actual memory used by other processes is less than this limit.

l The high water mark for virtual bytes (HWM2 in the diagram above) is
calculated as (Intelligence Server private bytes). It is
calculated the first time the virtual byte usage exceeds the amount
specified in the Maximum use of virtual address space or Minimum
Reserved Memory settings. Because MCM ensures that Intelligence
Server private byte usage cannot increase beyond the initial calculation, it
is not recalculated until after Intelligence Server returns from the memory
request idle state.

The high water mark used by MCM is the lower of these two values. This
accounts for the scenario in which, after the virtual bytes HWM is calculated,
Intelligence Server releases memory but other processes consume more
available memory. This can cause a later calculation of the virtual memory
HWM to be lower than the virtual bytes HWM.

The low water mark is calculated as 95 percent of the HWM. It is recalculated every time the HWM changes.
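A minimal sketch of the water-mark arithmetic described above, assuming values expressed in MB; the function and parameter names are illustrative, not product settings:

```python
def high_low_water_marks(is_private_bytes_mb, available_system_memory_mb,
                         hwm_virtual_bytes_mb):
    """HWM is the lower of the virtual-memory value (Intelligence Server
    private bytes + available system memory, recalculated as needed) and
    the virtual-bytes value (private bytes frozen when depletion was first
    detected); the LWM is 95 percent of the HWM."""
    hwm_virtual_memory = is_private_bytes_mb + available_system_memory_mb
    hwm = min(hwm_virtual_memory, hwm_virtual_bytes_mb)
    lwm = 0.95 * hwm
    return hwm, lwm

# 1000 MB of private bytes, 600 MB of available system memory,
# and a 2000 MB virtual-bytes HWM:
print(high_low_water_marks(1000, 600, 2000))  # prints (1600, 1520.0)
```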

Memory Contract Management

Once the high and low water marks have been established, MCM checks to
see if single memory allocation governing is enabled. If it is, and the request
is for an amount of memory larger than the Maximum single allocation
size setting, the request is denied.


If single memory allocation governing is not enabled, or if the request is for a
block smaller than the Maximum single allocation size limit, MCM checks
whether it is in memory request idle mode, and calculates the maximum
contract request size accordingly:

l For normal Intelligence Server operation, the maximum request size is based on the high water mark. The formula is [HWM - (1.05 * (Intelligence Server Private Bytes) + Outstanding Contracts)].

l In memory request idle mode, the maximum request size is based on the low water mark. The formula is [LWM - (1.05 * (Intelligence Server Private Bytes) + Outstanding Contracts)].

The value of 1.05 is a built-in safety factor.

For normal Intelligence Server operation, if the request is larger than the
maximum request size, MCM denies the request. It then enters memory
request idle mode.

If MCM is already in memory request idle mode and the request is larger
than the maximum request size, MCM denies the request. It then checks
whether the memory request idle time has been exceeded, and if so, it
restarts Intelligence Server. For a detailed explanation of memory request
idle mode, see Memory Request Idle Mode, page 1046.

If the request is smaller than the maximum request size, MCM performs a
final check to account for potential fragmentation of virtual address space.
MCM checks whether its record of the largest free block of memory has been
updated in the last 100 requests, and if not, updates the record with the size
of the current largest free block. It then compares the request against the
largest free block. If the request is more than 80 percent of the largest free
block, the request is denied. Otherwise, the request is granted.

After granting a request, if MCM has been in memory request idle mode, it
returns to normal operation.


Memory Request Idle Mode

When MCM first denies a request, it enters memory request idle mode. In
this mode, MCM denies all requests that would keep Intelligence Server's
private byte usage above the low water mark. MCM remains in memory
request idle mode until one of the following situations occurs:

l Intelligence Server's memory usage drops below the low water mark. In
this case, MCM exits memory request idle mode and resumes normal
operation.

l MCM has been in memory request idle mode for longer than the Memory
request idle time. In this case, MCM restarts Intelligence Server. This
frees up the memory that had been allocated to Intelligence Server tasks,
and avoids memory depletion.

The Memory request idle time limit is not enforced via an internal clock or
scheduler. Instead, after every denied request MCM checks how much time
has passed since the memory request idle mode was triggered. If this time is
more than the memory request idle time limit, Intelligence Server restarts.

This eliminates a potentially unnecessary Intelligence Server restart. For
example, a memory request causes the request idle mode to be triggered,
but then no more requests are submitted for some time. A scheduled check
at the end of the Memory request idle time would restart Intelligence
Server even though no new jobs are being submitted. However, because
Intelligence Server is completing its existing contracts and releasing
memory, it is possible that the next contract request submitted will be below
the low water mark. In this case, MCM accepts the request and resumes
normal operation, without having to restart Intelligence Server.
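A sketch of this event-driven check, under the assumption that it runs only when a request is denied; the function and parameter names are illustrative:

```python
import time

def on_request_denied(idle_since, idle_limit_secs, restart_server):
    """Called on each denied request while in memory request idle mode.
    There is no scheduler: the idle-time limit is evaluated only here, so
    if no further requests arrive, no restart occurs. A limit of -1 means
    wait indefinitely for memory usage to fall below the low water mark."""
    if idle_limit_secs != -1 and time.monotonic() - idle_since > idle_limit_secs:
        restart_server()
```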

When MCM forces Intelligence Server to restart because of the Memory
request idle time being exceeded, it also writes the contents of
Intelligence Server's memory use to disk. This memory dump is saved in the
file MCMServerStallDump.dmp in the Intelligence Server folder. By
default, this folder is located at C:\Program Files
(x86)\MicroStrategy\Intelligence Server\.

MicroStrategy recommends setting the Memory request idle time to
slightly longer than the time it takes most large reports in your system to run.
This way, Intelligence Server does not shut down needlessly while waiting
for a task to complete. To help you determine the time limit, use Enterprise
Manager to find out the average and maximum report execution times for
your system. For instructions on using Enterprise Manager, see the
Enterprise Manager Help.

System Memory Depletion


The diagram below shows an example of a potential depletion of system
memory.

In this example, MCM grants memory request A. Once granted, a new
memory contract is accounted for in the available system memory. Request
B is then denied because it exceeds the high water mark, as derived from
the Maximum use of virtual address space setting.


Once request B has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, it denies all requests that
would push the total memory used above the low water mark.

In the example above, request C falls above the low water mark. Because
Intelligence Server is in memory request idle mode, this request is denied
unless Intelligence Server releases memory from elsewhere, such as other
completed contracts.

Request D is below the low water mark, so it is granted. Once it has been
granted, Intelligence Server switches out of request idle mode and resumes
normal operation.

If Intelligence Server continues receiving requests for memory above the low
water mark before the Memory request idle time is exceeded, MCM shuts
down and restarts Intelligence Server.

Virtual Byte Depletion


Below is a diagram of potential memory depletion due to available bytes in
the Intelligence Server virtual address space.

In this example, Intelligence Server has increased its private byte usage to
the point that existing contracts are pushed above the high water mark.


Request A is denied because the requested memory would further deplete Intelligence Server's virtual address space.

Once request A has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, all requests that would push
the total memory used above the low water mark are denied.

The low water mark is 95 percent of the high water mark. In this scenario,
the high water mark is the amount of Intelligence Server private bytes at the
time when the memory depletion was first detected. Once the virtual byte
high water mark has been set, it is not recalculated. Thus, for Intelligence
Server to exit memory request idle mode, it must release some of the private
bytes.

Although the virtual bytes high water mark is not recalculated, the virtual
memory high water mark is recalculated after each request. MCM calculates
the low water mark based on the lower of the virtual memory high water
mark and the virtual bytes high water mark. This accounts for the scenario
in which, after the virtual bytes high water mark is calculated, Intelligence
Server releases memory but other processes consume more available
memory. This can cause a later calculation of the virtual memory high water
mark to be lower than the virtual bytes high water mark.

Intelligence Server remains in memory request idle mode until the memory
usage looks like it does at the time of request B. The Intelligence Server
private byte usage has dropped to the point where a request can be made
that is below the low water mark. This request is granted, and MCM exits
memory request idle mode.

If Intelligence Server does not free up enough memory to process request B
before the Memory request idle time is exceeded, MCM restarts
Intelligence Server.


Avoid Server Shutdown when the Memory Contract Manager Limit is Exceeded
Hitting the Memory Request Idle Time limit set by Memory Contract Manager
can cause Intelligence Server to shut down, leaving the MicroStrategy
environment unavailable. To avoid these shutdowns, administrators can
allow Intelligence Server to unload uncertified cubes from memory so that it
can try to recover before it crashes. For more information about certifying
cubes, see the Workstation Help.

Starting in MicroStrategy ONE (September 2024), you can swap server messages to disk to improve server stability and prevent memory outages.

Starting in MicroStrategy ONE (June 2024), there is more granular control over the cube unload process.

If you are using a MicroStrategy version prior to MicroStrategy ONE (June 2024), see Enable or Disable Cube Unloading Prior to MicroStrategy ONE (June 2024).

Starting in MicroStrategy ONE (September 2024), when system memory is
low (for example, available total system memory is less than 20% of the
machine or container's physical memory), the Intelligence Server will
automatically swap the least recently used (LRU) server messages to disk,
which releases up to 20% of total working set memory usage. For more
information, see Swap Server Messages to Disk.

For more information about server messages and working set, see Governing User Resources.

Unload Cubes in Low Memory

When system memory is low, such as when the available total system
memory is under pressure (less than 20% of the machine or container's
physical memory), the Intelligence Server will automatically start unloading
cubes, releasing up to 10% of the total physical memory, using the following
steps:


1. Release the indexes of cubes not used in the past two days. Administrators can customize this interval via the registry. This step is enabled by default, and administrators can disable cube index governing.

2. If you Enable Cube Governing and the system is still under pressure, Intelligence Server will start unloading cubes based on the Least Recently Used (LRU) algorithm. This setting is disabled by default.

If the system does not recover from low memory and enters Memory
Depletion status, the Intelligence Server will follow the same steps above to
unload a larger number of cubes. For more information, see Governing
Intelligence Server Memory Use with Memory Contract Manager.

The unload process is skipped if cube memory usage is less than 30% of
the total physical memory. Certified cubes are never unloaded.
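As an illustration only, the two-step unload process above can be sketched in Python. The cube fields, function shape, and bookkeeping are assumptions based on the description here, not the Intelligence Server's actual implementation:

```python
import time

# Thresholds described above.
UNLOAD_TARGET = 0.10   # unload up to 10% of total physical memory
SKIP_THRESHOLD = 0.30  # skip the process if cube memory < 30% of physical
INDEX_IDLE_SECONDS = 2 * 24 * 3600  # indexes of cubes unused for 2 days

def unload_cubes(cubes, total_physical, cube_governing_enabled, now=None):
    """Return the list of actions taken when memory is low.

    Each cube is a dict with: name, size, index_size, last_used,
    certified, loaded (all assumed fields for this sketch).
    """
    now = now or time.time()
    cube_memory = sum(c["size"] for c in cubes if c["loaded"])
    # The whole process is skipped if cube memory usage is under 30%.
    if cube_memory < SKIP_THRESHOLD * total_physical:
        return []
    actions = []
    freed = 0
    # Step 1: release indexes of cubes not used in the past two days.
    for c in cubes:
        if c["loaded"] and now - c["last_used"] > INDEX_IDLE_SECONDS:
            freed += c["index_size"]
            actions.append(("release_index", c["name"]))
    # Step 2: if Cube Governing is enabled, unload whole cubes in
    # least-recently-used order until the target is reached.
    # Certified cubes are never unloaded.
    if cube_governing_enabled:
        for c in sorted(cubes, key=lambda c: c["last_used"]):
            if freed >= UNLOAD_TARGET * total_physical:
                break
            if c["loaded"] and not c["certified"]:
                freed += c["size"]
                actions.append(("unload", c["name"]))
    return actions
```

The sketch treats "still under pressure" as not yet having freed the 10% target, which is a simplification of the real pressure check.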

Enable Cube Governing

To enable the behavior in MicroStrategy Web:

1. On the upper right of any page, click your user name, and then select
Preferences from the drop-down list.

2. From the left, select General.

3. Enable Cube Governing.

4. Save the change.

If enough memory is released before the full Memory Request Idle Time
limit is reached, Intelligence Server exits Memory Contract Manager. If not,
Intelligence Server is still shut down by Memory Contract Manager as
usual. Even if Intelligence Server avoids the shutdown, the unloading
process, once triggered, is performed for all cubes.


Enable or Disable Cube Unloading Prior to MicroStrategy ONE (June 2024)

How to Enable or Disable Cube Unloading

The feature is off by default in MicroStrategy ONE. It can be enabled
through an environment variable named
DisableUnloadingCachingForMCM.

Available settings:

l If DisableUnloadingCachingForMCM is unset, the feature is off by
default.

l If DisableUnloadingCachingForMCM=0, the feature is on.

l If DisableUnloadingCachingForMCM=1, the feature is off.

To change the value of this setting on:

l Windows:

1. Go to Environment Variables > System Variables.

2. Add a variable named DisableUnloadingCachingForMCM with a
value of 0.

l Linux:

1. Export DisableUnloadingCachingForMCM=0 in the same terminal
before starting Intelligence Server.

If the variable is set correctly, you can see the following log entry in the
DSSErrors.log file:

[Kernel][Info][UID:0][SID:0][OID:0] The environment
variable "DisableUnloadingCachingForMCM" is set to 0. Enable
unloading cube For MCM.
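In Python terms, the interpretation of this variable can be sketched as follows. This is a minimal illustration of the mapping in the settings list above, not MicroStrategy code:

```python
import os

def cube_unloading_for_mcm_enabled():
    """Interpret DisableUnloadingCachingForMCM per the settings above:
    unset -> feature off (the default), "0" -> feature on,
    "1" (or any other value) -> feature off."""
    return os.environ.get("DisableUnloadingCachingForMCM") == "0"
```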


Feature Behavior

When the Intelligence Server exceeds half of the Memory Request Idle Time
setting, cube unloading is triggered. Messages such as the following
appear in the DSSErrors.log file:

l [Kernel][Info][UID:0][SID:0][OID:0] Intelligence Server has
entered Request Idle Mode for 301 seconds, the event of purging cache is
triggered.

All uncertified cubes for each project are then unloaded one by one to
release memory. Messages such as the following appear in the
DSSErrors.log file:

l [Kernel][Info][UID:0][SID:0][OID:0] Start to unload cube for
project "ART - GCSBB AML Reporting" to release memory for Intelligence
Server, current memory for loaded cubes in this project's cube manager is
1177845.

l [Kernel][Info][UID:0][SID:0][OID:0] Finish attempting to unload
cubes for project "ART - GCSBB AML Reporting", current memory for
loaded cubes in this project's cube manager is 0. Some cubes may be
loaded again by other users during the processing. Please check cube
monitor to find out which cube is not unloaded.

If enough memory is released before the full Memory Request Idle Time
limit is reached, Intelligence Server exits Memory Contract Manager. If not,
Intelligence Server is still shut down by Memory Contract Manager as
usual. Even if Intelligence Server avoids the shutdown, the unloading
process, once triggered, is performed for all cubes.

Swap Server Messages to Disk

Key considerations when swapping server messages to disk:


l The latest active server message of each session will not be swapped.

l Correct configuration of the Working Set file directory is required. For
more information, see Edit Server-Level Governing Settings.

l The swap is skipped if either of the following scenarios occurs:

o The current usage of working set memory is less than 20% of the
Maximum memory for working set cache governing setting.

o The Intelligence Server has already entered MCM denial status.

The swap is skipped to avoid the additional memory request that is
required by disk operations. For more information on MCM denial
status, see Governing Intelligence Server Memory Use with Memory
Contract Manager.

This feature is enabled by default. You can Disable Working Set Memory
Governing via the MicroStrategy REST API, if necessary.
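The swap rules above can be illustrated with a small sketch. The data model (sessions mapped to message/timestamp pairs) and the function itself are assumptions made for illustration; the actual swap logic is internal to Intelligence Server:

```python
def messages_to_swap(sessions, ws_usage, ws_max, in_mcm_denial):
    """Pick server messages to swap to disk, per the rules above.

    sessions maps a session id to a list of (message_id, last_used)
    tuples. Returns message ids in least-recently-used order, never
    including the latest active message of each session.
    """
    # Skip the swap entirely in the scenarios described above.
    if ws_usage < 0.20 * ws_max or in_mcm_denial:
        return []
    candidates = []
    for sid, msgs in sessions.items():
        # The latest active server message of each session is protected.
        newest = max(msgs, key=lambda m: m[1])
        candidates.extend(m for m in msgs if m is not newest)
    # Swap the least recently used messages first.
    return [mid for mid, _ in sorted(candidates, key=lambda m: m[1])]
```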

Disable Working Set Memory Governing via the MicroStrategy REST API

1. Open the MicroStrategy REST API Explorer by appending /api-
docs/index.html?visibility=all to your /MicroStrategyLibrary
URL in your browser.

2. Create a session and authenticate it. In the Authentication section, use
POST /api/auth/admin/login.

3. Click Try Out and modify the request body by providing your username
and password.

4. Click Execute.

5. In the response, find X-MSTR-AuthToken.

6. To get the current setting status:


a. Under the Configurations section, look up GET
/api/v2/configurations/featureFlags.

b. Click Try Out.

c. Set the proper X-MSTR-AuthToken from step 5. You can also get
this via inspecting the browser network XHR requests.

d. Click Execute.

e. Search for WorkingSetGoverning in the response body to find
its status details.

7. Under the Configurations section, look up PUT
/api/configurations/featureFlags/{id}.

8. Click Try Out.

9. Set the proper X-MSTR-AuthToken from step 5. You can get this by
inspecting the browser network XHR requests.

10. Set id to 2858F54E4B456DFD52AC90BA740DF4C8.

11. To disable this setting, set the status value to 2.

12. Click Execute.

13. Repeat step 6 to verify that the setting is disabled.
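The same sequence can be sketched in Python. The helper functions below only build the HTTP requests corresponding to the steps above; the request-body shapes (login payload, status payload) are assumptions, and the resulting requests would be sent with any HTTP client against your Library server:

```python
import json

FEATURE_FLAG_ID = "2858F54E4B456DFD52AC90BA740DF4C8"  # WorkingSetGoverning
STATUS_DISABLED = 2  # status value that disables the setting (step 11)

def login_request(base_url, username, password):
    """Build the admin login call (step 2). Body shape is an assumption."""
    return ("POST", base_url + "/api/auth/admin/login",
            {"Content-Type": "application/json"},
            json.dumps({"username": username, "password": password}))

def get_flags_request(base_url, token):
    """Build the call that reads current feature-flag status (step 6)."""
    return ("GET", base_url + "/api/v2/configurations/featureFlags",
            {"X-MSTR-AuthToken": token}, None)

def disable_flag_request(base_url, token, flag_id=FEATURE_FLAG_ID):
    """Build the call that sets the flag status to 2 (steps 7-12).
    The {"status": 2} body shape is an assumption based on step 11."""
    return ("PUT", base_url + "/api/configurations/featureFlags/" + flag_id,
            {"X-MSTR-AuthToken": token, "Content-Type": "application/json"},
            json.dumps({"status": STATUS_DISABLED}))
```

Each tuple is (method, url, headers, body) and can be dispatched with, for example, urllib.request; the X-MSTR-AuthToken comes from the login response as described in step 5.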

Memory Usage Breakdown


When the Intelligence server enters Memory Contract Manager (MCM)
denial state or shuts down due to an extended timeout limit of the MCM
denial state, it may be difficult to understand and troubleshoot the issue.
Starting in MicroStrategy 2021 Update 1, the overall memory usage
breakdown information is logged by default in DSSErrors.log to help
troubleshoot such issues.


When is it Logged?

A breakdown of memory usage is logged when:

l The Intelligence server enters MCM denial state for the first time. There
are two cases of this:

1. The Intelligence server has never entered MCM denial state and this
is the first time doing so.

2. The Intelligence server has entered MCM denial state, but recovers
and then works as expected. After some time, it enters MCM denial
state again.

However, if the Intelligence server is already in MCM denial state, no
memory usage breakdown is logged for the subsequent continuous
request rejections.

l The Intelligence server shuts down due to being in MCM denial state for an
extended time.

If the Intelligence server stalls for too long due to MCM denial, it shuts
down. For this shutdown, a memory usage breakdown is logged as well.

What Information is Collected?

The Intelligence server does two things when outputting memory usage
breakdown information in DSSErrors.log.

l Collect system memory usage counters and format them in a string. Key
system memory counters are collected and divided into two parts:

1. Virtual memory usage counters:

l Total System Virtual Memory (GB)

l Total In Use Virtual Memory for Other Processes (MB)


l Total In Use Virtual Memory for Intelligence Server (MB) Including
MMF Virtual Size

l Total SmartHeap Cached Memory Utilization (MB)

l Max Memory Available to the Intelligence Server Based on the
Memory Contract Manager (MB)

2. Physical memory usage counters:

l Total System Physical Memory (GB)

l Total In Use Physical Memory for Intelligence Server (MB)

l Total In Use Physical Memory for Other Processes (MB)

l Total Physical Memory for File Caches (MB) (this counter is specific
to Linux)

l Collect object cache related memory usage counters and format them in a
string. Key object cache related memory counters are collected from the
Performance Monitor. These counters include:

l Report Caches In Memory (MB)

l Document Caches In Memory (MB)

l Cube Caches In Memory (MB)

l Object Server Caches In Memory (MB)

l Element Server Caches In Memory (MB)

l Total Working Set Memory Utilization (MB)

l Total Size Of Physical Memory Used For Memory Mapped Files (MB)

l MMF Virtual Memory Size (MB)


l Cube Size Growth In Memory Including Indexes (MB)

l Memory Used by Cube Element Blocks (KB)

l Memory Used by Cube Index Keys (KB)

l Memory Used by Cube Rowmaps (KB)

The following list maps each item name in the memory usage breakdown
information in DSSErrors.log to its name in the MicroStrategy Diagnostics
and Performance Logging Tool:

l Report Caches In Memory (MB): Memory Used by Report Caches (MB)

l Document Caches In Memory (MB): Total Size (in MB) of Document
Caches Loaded in Memory

l Cube Caches In Memory (MB): Total Size (in MB) of Cubes Loaded in
Memory

l Object Server Caches In Memory (MB): Object Server Cache (KB)

l Element Server Caches In Memory (MB): Element Server Cache (KB)

l Cube Components Using Memory-mapped Files (MB): Total Size of
Physical Memory (Workingset/RSS) Used for Memory Mapped Files (MB)

l Total Working Set Memory Utilization (MB): Working Set Cache RAM
Usage (MB)

l MMF Virtual Memory Size (MB): Total Memory Mapped Files Size (MB)

l Cube Size Growth In Memory Including Indexes (MB): Memory Used by
Cube Element Blocks (KB), Memory Used by Cube Index Keys (KB), and
Memory Used by Cube Rowmaps (KB)


All collected memory counters are formatted into a message. In the
message, the counters are hierarchically ordered.

The following are complete output examples of the memory breakdown
information on Linux and Windows.

All values in the MCM memory breakdown information are rounded to the
integer part. For example, the total system physical memory of 15.999 GB
appears as 15 GB in the memory breakdown information.

When the Intelligence server enters MCM denial state

2021-02-20 03:18:00.008-05:00 [HOST:tec-l-002717][SERVER:CastorServer]
[PID:27649][THR:139735680997120][Kernel][Info][UID:0][SID:0][OID:0] IServer
enters MCM denial state. The breakdown of memory usage is:
Total System Physical Memory(GB): 11
Total In Use Physical Memory For Intelligence Server(MB): 1075
Total Size Of Physical Memory Used For Memory Mapped Files(MB): 38
Total In Use Physical Memory For Other Processes(MB): 1706
Total Physical Memory For File Cache(MB): 1579
Total System Virtual Memory(GB): 21
Total In Use Virtual Memory For Other Processes(MB): 3646
Max memory Available to the Intelligence Server Based On Memory Contract
Manager(MB): 0
Total In Use Virtual Memory For Intelligence Server(MB): 3323
Report Caches In Memory(MB): 0
Document Caches In Memory(MB): 0
Cube Caches In Memory(MB): 255
Cube Size Growth In Memory Including Indexes(MB): 0
MMF Virtual Memory Size(MB): 53
Object Server Caches In Memory(MB): 27
Element Server Caches In Memory(MB): 0
Total Working Set Memory Utilization(MB): 0
Total SmartHeap Cached Memory Utilization(MB):95
Other Memory In Intelligence Server(MB): 2946
Note: Other memory In Intelligence Server may include part of the memory for
runtime jobs that is not included in the above counters
Working set includes runtime memory necessary for Document execution While the
Document instance is in use
SmartHeap cache memory is used for performance improvements

When the Intelligence server shuts down due to being in MCM denial state
for an extended time

2021-02-20 03:19:00.009-05:00 [HOST:tec-l-002717][SERVER:CastorServer]
[PID:27649][THR:139735680997120][Kernel][Info][UID:0][SID:0][OID:0] IServer
will shut down soon as the time of MCM denial exceeds the limit(timeout setting
is 30). The breakdown of memory usage is:
will shut down soon as the time of MCM denial exceeds the limit(timeout setting
is 30). The breakdown of memory usage is:
Total System Physical Memory(GB): 11
Total In Use Physical Memory For Intelligence Server(MB): 1075
Total Size Of Physical Memory Used For Memory Mapped Files(MB): 38
Total In Use Physical Memory For Other Processes(MB): 1706
Total Physical Memory For File Cache(MB): 1579
Total System Virtual Memory(GB): 21
Total In Use Virtual Memory For Other Processes(MB): 3646
Max memory Available to the Intelligence Server Based On Memory Contract
Manager(MB): 0
Total In Use Virtual Memory For Intelligence Server(MB): 3323
Report Caches In Memory(MB): 0
Document Caches In Memory(MB): 0

When the Intelligence server enters MCM denial state


2021-02-20 04:14:08.452-05:00 [HOST:tec-w-012230]
[SERVER:CastorServer][PID:12596][THR:8832][Kernel][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:FFD6204FF8703C3F5AD905A6D7ECA083]
[OID:57EC44DA4B2497EA71574A9ED1577689] IServer enters MCM denial state. The
breakdown of memory usage is:
Total System Physical Memory(GB): 15
Total In Use Physical Memory For Intelligence Server(MB): 900
Total Size Of Physical Memory Used For Memory Mapped Files(MB): 33
Total In Use Physical Memory For Other Processes(MB): 11174
Total System Virtual Memory(GB): 21
Total In Use Virtual Memory For Other Processes(MB): 14995
Max memory Available to the Intelligence Server Based On Memory
Contract Manager(MB): 0
Total In Use Virtual Memory For Intelligence Server(MB): 1088
Report Caches In Memory(MB): 0
Document Caches In Memory(MB): 0
Cube Caches In Memory(MB): 246
Cube Size Growth In Memory Including Indexes(MB): 0
MMF Virtual Memory Size(MB): 43
Object Server Caches In Memory(MB): 24
Element Server Caches In Memory(MB): 0
Total Working Set Memory Utilization(MB): 0
Total SmartHeap Cached Memory Utilization(MB):10
Other Memory In Intelligence Server(MB): 808
Note: Other memory In Intelligence Server may include part of the memory
for runtime jobs that is not included in the above counters
Working set includes runtime memory necessary for Document execution While
the Document instance is in use
SmartHeap cache memory is used for performance improvements

When the Intelligence server shuts down due to being in MCM denial
state for an extended time

2021-02-20 04:15:46.664-05:00 [HOST:tec-w-012230][SERVER:CastorServer]
[PID:12596][THR:7376][Kernel][Info][UID:54F3D26011D2896560009A8E67019608]
[SID:FFD6204FF8703C3F5AD905A6D7ECA083][OID:0] IServer will shut down soon
as the time of MCM denial exceeds the limit(timeout setting is 60). The
breakdown of memory usage is:
Total System Physical Memory(GB): 15
Total In Use Physical Memory For Intelligence Server(MB): 900
Total Size Of Physical Memory Used For Memory Mapped Files(MB): 33
Total In Use Physical Memory For Other Processes(MB): 10797
Total System Virtual Memory(GB): 21
Total In Use Virtual Memory For Other Processes(MB): 14596
Max memory Available to the Intelligence Server Based On Memory
Contract Manager(MB): 0
Total In Use Virtual Memory For Intelligence Server(MB): 1088
Report Caches In Memory(MB): 0
Document Caches In Memory(MB): 0
Cube Caches In Memory(MB): 246
Cube Size Growth In Memory Including Indexes(MB): 0
MMF Virtual Memory Size(MB): 43
Object Server Caches In Memory(MB): 24
Element Server Caches In Memory(MB): 0
Total Working Set Memory Utilization(MB): 0
Total SmartHeap Cached Memory Utilization(MB):12
Other Memory In Intelligence Server(MB): 806
Note: Other memory In Intelligence Server may include part of the memory
for runtime jobs that is not included in the above counters
Working set includes runtime memory necessary for Document execution While
the Document instance is in use
SmartHeap cache memory is used for performance improvements
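To work with these breakdown messages programmatically, the counter lines can be extracted with a short sketch. It assumes only the "Name(Unit): value" line format shown in the examples above (in a real log each counter is on one line, without the page-width wrapping seen here):

```python
import re

# Matches lines like "Total System Physical Memory(GB): 11" and
# "Total SmartHeap Cached Memory Utilization(MB):95" (no space variant).
COUNTER_RE = re.compile(r"^\s*(.+?)\((GB|MB|KB)\):\s*(\d+)\s*$")

def parse_breakdown(log_text):
    """Extract 'Name(Unit): value' counters from a breakdown message
    into a dict of name -> (value, unit)."""
    counters = {}
    for line in log_text.splitlines():
        m = COUNTER_RE.match(line)
        if m:
            name, unit, value = m.groups()
            counters[name.strip()] = (int(value), unit)
    return counters
```

Recall that all values in the breakdown are rounded down to the integer part, so the parsed numbers are floors of the true values.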

Governing Memory for Requests from MicroStrategy Web Products

You can limit the total amount of memory that Intelligence Server can use for
serving requests from MicroStrategy Web, and you can set the amount of
memory that must be kept free for requests from MicroStrategy Web. These
limits are enabled when the Web Request job throttling check box is
selected. If either condition is met, all requests from MicroStrategy Web of
any nature (log in, report execution, search, folder browsing) are denied
until the conditions are resolved. For more details about each setting, see
below.

l Maximum Intelligence Server use of total memory sets the maximum
amount of total system memory (RAM plus Page File) that can be used by
the Intelligence Server process compared to the total amount of memory
on the machine.

This setting is useful to prevent the system from servicing a Web request if
memory is depleted. If the condition is met, Intelligence Server denies all
requests from a MicroStrategy Web product or a client built with the
MicroStrategy Web API.

l Minimum machine free physical memory sets the minimum amount of
RAM that must remain available for Web requests. This value is a
percentage of the total amount of physical memory on the machine, not
including the Page File memory.


This can be useful if the machine is running applications other than
Intelligence Server and you want to increase the chances that requests
from MicroStrategy Web products are serviced using RAM and not the
Page File, which does not work as efficiently.
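The combined effect of the two settings can be sketched as a single check. The function shape and the percentage conventions (0-100) are assumptions for illustration, not the server's internal code:

```python
def deny_web_request(iserver_mem_used, total_system_mem,
                     free_physical_mem, total_physical_mem,
                     max_iserver_pct, min_free_pct):
    """Return True if a Web request should be denied, per the two
    throttling settings above. Memory amounts share one unit;
    percentages are 0-100."""
    # Maximum Intelligence Server use of total memory (RAM + Page File).
    over_limit = iserver_mem_used > max_iserver_pct / 100 * total_system_mem
    # Minimum machine free physical memory (RAM only).
    too_little_free = free_physical_mem < min_free_pct / 100 * total_physical_mem
    # If either condition is met, all Web requests are denied.
    return over_limit or too_little_free
```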

Managing User Sessions


Each user connection from a MicroStrategy client (MicroStrategy Web,
Developer, Narrowcast Server, and others) establishes a user session on
Intelligence Server. Each user session consumes a set amount of resources
on the Intelligence Server machine and can consume additional resources
depending on the actions that the user takes while they are connected.

The number of active users in a system (those actually executing reports
and using the system) is considered a different category of user from
concurrent users (those simply logged in).

This section covers:

l How the concurrent users and user sessions on your system use system
resources just by logging in to the system (see Governing Concurrent
Users, page 1063)


l How memory and CPU are used by active users when they execute jobs,
run reports, and make requests, and how you can govern those requests
(see Governing User Resources, page 1066)

l How user profiles can determine what users are able to do when they are
logged in to the system, and how you can govern those profiles (see
Governing User Profiles, page 1069)

With the User Connection Monitor, you can track the users who are
connected to the system. For details about how to use this system monitor,
see Monitoring Users' Connections to Projects, page 87.

Governing Concurrent Users


When a user logs in to a MicroStrategy system, a user session is
established. This user session remains open until the user logs out of the
system or the system logs the user out. Users that are logged in but are not
doing anything still consume some resources on Intelligence Server. The
more user sessions that are allowed on Intelligence Server, the more load
those users can put on the system because each session can run multiple
jobs.

To help control the load that user sessions can put on the system, you can
limit the number of concurrent user sessions allowed for each project and for
Intelligence Server. Also, both Developer and MicroStrategy Web have
session timeouts so that when users forget to log out, the system logs them
out and their sessions do not unnecessarily use up Intelligence Server
resources.

For example, a user logs in, runs a report, then leaves for lunch without
logging out of the system. If Intelligence Server is serving the maximum
number of user sessions and another user attempts to log in to the system,
that user is not allowed to log in. You can set a time limit for the total
duration of a user session, and you can limit how long a session remains
open if it is inactive or not being used. In this case, if you set the inactive
time limit to 15 minutes, the person who left for lunch has their session
ended by Intelligence Server. After that, another user can log in.


Intelligence Server does not end a user session until all the jobs submitted
by that user have completed or timed out. This includes reports that are
waiting for autoprompt answers. For example, if a MicroStrategy Web user
runs a report with an autoprompt and, instead of answering the prompt,
clicks the browser's Back button, an open job is created. If the user then
closes their browser or logs out without canceling the job, the user session
remains open until the open job "Waiting for Autoprompt" times out.

These user session limits are discussed below as they relate to software
features and products.

Limit the Number of User Sessions on Intelligence Server


This setting limits the number of user sessions that can be connected to an
Intelligence Server. This includes connections made from MicroStrategy
Web products, Developer, Distribution Services, Scheduler, or other
applications that you may have created with the SDK. A single user account
can establish multiple sessions on an Intelligence Server. Each session
connects once to Intelligence Server and once to each project that the user
accesses. In the User Connection Monitor, the connections made to
Intelligence Server display as <Server> in the Project column. Project
sessions are governed separately with a project level setting, User
sessions per project, which is discussed below. When the maximum
number of user sessions on Intelligence Server is reached, users cannot log
in, except for the administrator, who can disconnect current users by means
of the User Connection Monitor or increase this governing setting.

To specify this setting, in the Intelligence Server Configuration Editor, select
the Governing Rules: Default: General category and type the number in
the Maximum number of user sessions field.

Limit User Sessions Per Project


When a user accesses a project, a connection (called a user session) is
established for the project and Intelligence Server. In the User Connection
Monitor, the connections made to the project display the project name in the
Project column. If you sort the list of connections by the Project column, you
can see the total number of user sessions for each project.

You can limit the number of sessions that are allowed for each project. When
the maximum number of user sessions for a project is reached, users cannot
log in to the system. An exception is made for the system administrator, who
can log in to disconnect current users by means of the User Connection
Monitor or increase this governing setting.

To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Default: User sessions category and type the
number in the User sessions per project field.

You can also limit the number of concurrent sessions per user. This can be
useful if one user account, such as "Guest," is used for multiple connections.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Default: User sessions category and type the
number in the Concurrent interactive project sessions per user field.
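Taken together, the server-level and project-level limits act like a layered admission check. The sketch below is an illustration only; the state and limits shapes are assumptions, and the administrator exception follows the description above:

```python
def can_log_in(user, project, state, limits, is_admin=False):
    """Layered session-admission check: server-wide limit, then the
    per-project limit, then the per-user concurrent-session limit.
    The administrator is always admitted, so they can disconnect
    current users or raise the governing settings."""
    if is_admin:
        return True
    if state["server_sessions"] >= limits["max_server_sessions"]:
        return False
    if state["project_sessions"].get(project, 0) >= limits["max_project_sessions"]:
        return False
    key = (user, project)
    if state["user_project_sessions"].get(key, 0) >= limits["max_sessions_per_user"]:
        return False
    return True
```

The per-user check is what makes a shared account such as "Guest" governable: its concurrent sessions are counted together.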

Limit User Session Idle Times


When a user logs in to Developer (in a three-tier configuration) or
MicroStrategy Web, a user session is established. As long as the user
logged into that session is using the project, creating or executing reports,
and so on, the session is considered active. When the user stops actively
using the session, this is considered idle time. You can specify the maximum
amount of time a session can remain idle before Intelligence Server
disconnects that session. This frees up the system resources that the idle
session was using and allows other users to log in to the system if the
maximum number of user sessions has been reached.
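The idle-time disconnect described above amounts to a simple sweep over last-activity timestamps, sketched here (the session model is an assumption for illustration):

```python
import time

def expired_sessions(sessions, idle_limit_sec, now=None):
    """Return the session ids whose idle time exceeds the limit, i.e.
    the sessions Intelligence Server would disconnect to free their
    resources for other users."""
    now = now or time.time()
    return [sid for sid, last_activity in sessions.items()
            if now - last_activity > idle_limit_sec]
```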

To specify this setting for Developer, in the Intelligence Server
Configuration Editor, select the Governing Rules: Default: General
category and, in the User session idle time (sec) field, type the number of
seconds of idle time that you want to allow.


To specify this setting for MicroStrategy Web, in the Intelligence Server
Configuration Editor, select the Governing Rules: Default: General
category and, in the Web user session idle time (sec) field, type the
number of seconds of idle time that you want to allow.

If designers are building Report Services documents and dashboards in
MicroStrategy Web, set the Web user session idle time (sec) to 3600 to
avoid a project source timeout.

Governing User Resources


User sessions consume system resources when users log in to the system,
especially when they use the History List and, in MicroStrategy Web, the
Working Set. If a Web user's session expires and the system is configured to
allow users to recover their session information, the stored session
information uses resources. This section discusses these features and how
you can govern them.

Like all requests, user resources are also governed by the Memory Contract
Manager settings. For more information about Memory Contract Manager,
see Governing Intelligence Server Memory Use with Memory Contract
Manager, page 1039.

History List
The History List is an in-memory message list that references reports that a
user has executed or scheduled. The results are stored as History or
Matching-History caches on Intelligence Server.

The History List can consume much of the system's resources. You can
govern the resources used by old History List messages in the following
ways:

l You can delete messages from the History List with a scheduled
administrative task. For more information and instructions on scheduling
this task, see Scheduling Administrative Tasks, page 1328.

l In the Intelligence Server Configuration Editor, in the History settings:
General category, you can limit the Maximum number of messages per
user. If a user has hit this maximum and tries to add another message to
the History List, the oldest message is automatically purged.

l In the Intelligence Server Configuration Editor, in the History settings:
General category, you can set the Message lifetime (days). Intelligence
Server automatically deletes any History List messages that are older than
the specified message lifetime.

For more information about the History List, including details on History List
governing settings, see Saving Report Results: History List, page 1240.
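These two governors, the per-user message cap and the message lifetime, can be sketched together. The message shape (id, creation day) and the function are assumptions for illustration:

```python
def purge_history_list(messages, max_per_user, lifetime_days, now_day):
    """Apply the two History List governors described above for one
    user: drop messages older than the lifetime, then keep only the
    newest max_per_user messages (oldest purged first). Messages are
    (message_id, created_day) tuples."""
    # Lifetime governor: delete messages older than the lifetime.
    alive = [m for m in messages if now_day - m[1] <= lifetime_days]
    # Per-user cap: keep the most recent max_per_user messages.
    alive.sort(key=lambda m: m[1])
    return alive[-max_per_user:]
```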

Working Set
When a user runs a report from MicroStrategy Web or MicroStrategy Library,
the results from the report are added to the working set for that user's
session and stored in memory on Intelligence Server. The working set is a
collection of messages that reference in-memory report instances. A
message is added to the working set when a user executes a report or
retrieves a message from the History List. The purpose of the working set is
to:

l Improve MicroStrategy Web performance for report manipulations, without
having to run SQL against the data warehouse for each change

l Allow the efficient use of the web browser's Back button

l Allow users to manually add messages to the History List

Each message in the working set can store two versions of the report
instance in memory: the original version and the result version. The
original version of the report instance is created the first time the report is
executed and is held in memory the entire time a message is part of the
working set. The result version of the report instance is added to the working
set only after the user manipulates the report. Each report manipulation
adds what is called a delta XML to the report message. On each successive
manipulation, a new delta XML is applied to the result version. When the
user clicks the browser's Back button, previous delta XMLs are applied to
the original report instance up to the state that the user is requesting. For
example, if a user has made four manipulations, the report has four delta
XMLs; when the user clicks the Back button, the three previous XMLs are
applied to the original version.
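The original/result mechanics can be modeled in a few lines, with deltas represented as plain functions instead of delta XMLs (an illustrative simplification, not the working set's actual representation):

```python
def report_state(original, deltas, steps_back=0):
    """Rebuild a report view from the original report instance plus
    the recorded delta manipulations, as the working set does. Each
    delta transforms the report state; steps_back models the browser's
    Back button, where the last `steps_back` deltas are not applied."""
    state = dict(original)  # the original version is never mutated
    applied = deltas[:len(deltas) - steps_back] if steps_back else deltas
    for delta in applied:
        state = delta(state)
    return state
```

As in the text, four manipulations mean four deltas; one press of Back replays only the first three onto the original version.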

Governing History List and Working Set Memory Use in MicroStrategy Web

You can control the amount of the memory that is used by the History List
and Working set in these ways:

l Limit the number of reports that a user can keep available for manipulation
in a MicroStrategy Web product. This number is defined in the
MicroStrategy Web products' interface in Project defaults: History List
settings. You must select the Manually option for adding messages to the
History List, then specify the number in the field labeled If manually, how
many of the most recently run reports and documents do you want to
keep available for manipulation? The default is 10 and the minimum is
1. The higher the number, the more memory the reports may consume.

l Limit the maximum amount of RAM that all users can use for the working
set. When the limit is reached and new report instances are created, the
least recently used report instance is swapped to disk. To set this, in the
Intelligence Server Configuration Editor, under the Governing Rules:
Default: Working Set category, type the limit in the Maximum RAM for
Working Set cache (MB) field.

l If you set this limit to more memory than the operating system can make
available, Intelligence Server uses a value of 100 MB.

l If you set this limit too low and you do not have enough hard disk space
to handle the amount of disk swapping, reports may fail to execute in
peak usage periods because the reports cannot write to memory or to
disk.


If a user session has an open job, the user session remains open and that
job's report instance is removed from the Working set when the job has
finished or timed out. In this way, jobs can continue executing even after
the user has logged out. This may cause excessive memory usage on
Intelligence Server because the session's working set is held in memory
until the session is closed. For instructions on how to set the timeout
period for jobs, see Limit the Maximum Report Execution Time, page
1077.

Governing Saved User Session Information (MicroStrategy Web only)
You can allow Web users to recover their document, report, or dashboard
after their user session has been ended. If this feature is enabled and, for
example, the user runs a report and walks away from their desk and the
session times out, the user session information is saved. The next time the
Web user logs in, if the recoverable session has not expired, the user can
click a link to return to their recovered report. Enabling this feature uses disk
space for storing the information. You can govern how long the sessions are
stored before expiring. Long expiration times allow more information to be
stored, thus using more system disk space. A shorter expiration time more
quickly frees up the system resources that saved sessions were using.

To configure these settings, access the Intelligence Server Configuration Editor, select the Governing Rules: Default: Temporary Storage Settings
category. To enable the feature, select the Enable Web User Session
Recovery on Logout check box, and in the Session Recovery backup
expiration (hrs) field, type the number of hours you want to allow a session
to be stored. In Session Recovery and Deferred Inbox storage directory,
specify the folder where the user session information is stored.
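The expiration governing can be pictured as a periodic purge of saved sessions. The function and data layout below are illustrative only, not the actual storage format used by Intelligence Server.

```python
from datetime import datetime, timedelta

def purge_expired_sessions(saved, now, expiration_hrs):
    """Drop saved sessions older than the backup expiration window.
    `saved` maps a session id to the time it was saved (illustrative)."""
    cutoff = now - timedelta(hours=expiration_hrs)
    return {sid: t for sid, t in saved.items() if t >= cutoff}

now = datetime(2024, 9, 1, 12, 0)
saved = {"s1": now - timedelta(hours=30),   # older than a 24-hour window
         "s2": now - timedelta(hours=2)}
print(sorted(purge_expired_sessions(saved, now, expiration_hrs=24)))  # ['s2']
```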

Governing User Profiles


The user profile can be defined as what the user can do when logged in to
the system. If you allow users to use certain features in the system, they can affect the system's performance. For example, when users schedule report
executions, this creates user sessions on Intelligence Server, thus placing a
load on it even when the users are not actively logged in.

You can limit these types of activities by restricting various privileges, as discussed below. For general information about privileges and the
MicroStrategy security model, including instructions on how to grant and
revoke privileges, see Controlling Access to Functionality: Privileges, page
101.

Subscription-Related Privileges
Allowing users to subscribe to reports to be run later can affect system
performance. You can limit the use of subscriptions by using the Web
Scheduled Reports and Schedule Request privileges.

If you have Distribution Services or Narrowcast Server implemented in your system and users have the Web Scheduled Email or Web Send Now
privileges, they can have a report emailed either at a set time or
immediately. This causes the system to create a user session on
Intelligence Server when the report is emailed.

For detailed information about subscribing to reports and documents, see Scheduling Reports and Documents: Subscriptions, page 1333.

History List Privileges


Allowing users to use the History List can consume extra system resources.
Governing History List usage is discussed more fully in the previous section
(see Governing User Resources, page 1066). The non-administrative
privileges relating to the History List are:

l Web Subscribe To History List

l Web View History List

l Web Add To History List


l Use Link To History List in Email (Distribution Services)

l Use History List

Report Manipulation Privileges


The more manipulations that you allow users to do, the greater the potential
for using more system resources. Manipulations that can use extra system
resources include pivoting, page-by, and sorting. You can limit these
manipulations with the following privileges:

l To limit the use of pivoting, use the Web Pivot Report and Pivot Report
privileges.

l To limit the use of page-by, use the Web Switch Page-by Elements
privilege.

l To limit the use of sorting, use the Web Sort and Modify Sorting privilege.

Exporting Privileges
Exporting reports can consume large amounts of memory, especially when
reports are exported to Excel with formatting. For more information on how
to limit this memory usage, see Limit the Number of XML Cells, page 1099.
The privileges related to exporting reports are found in the Common
privilege group, and are as follows:

l Export to Excel

l Export to Flash

l Export to HTML

l Export to MicroStrategy File

l Export to PDF

l Export to Text


To restrict users from exporting any reports from MicroStrategy Web, use
the Web Export privilege in the Web Reporter privilege group.

OLAP Services Privileges


If you have purchased OLAP Services licenses for your users, they could
use a great deal of the available system resources. For example, if your
users are creating large Intelligent Cubes and doing many manipulations on
them, the system will be loaded much more than if they are running
occasional, small reports and not performing many manipulations.

The OLAP Services privileges are marked with a * in the list of all privileges (see the List of Privileges section). For more details about how OLAP Services uses system resources, see Intelligent Cubes, page 1107.

Governing Requests
Each user session can execute multiple concurrent jobs or requests. This
happens when users run documents that submit multiple child reports at a
time or when they send a report to the History List, then execute another
while the first one is still executing. Users can also log in to the system
multiple times and run reports simultaneously. Again, this may use up a
great deal of the available system resources.


To control the number of jobs that can be running at the same time, you can
set limits on the requests that can be executed. You can limit the requests
per user and per project. You can also choose to exclude reports submitted
as part of a Report Services document from the job limits (see Exclude
Document Datasets from the Job Limits, page 1073).

Specifically, you can limit:

l The total number of jobs (Limit the Total Number of Jobs, page 1074)

l The number of jobs per project (Limit the Number of Jobs Per Project,
page 1074)

l The number of jobs per user account and per user session (Limit the
Number of Jobs Per User Session and Per User Account, page 1075)

l The number of executing reports or data marts per user account (not
counting element requests, metadata requests, and report manipulations)
(Limit the Number of Executing Jobs Per User and Project, page 1076)

l The amount of time reports can execute (Limit the Maximum Report
Execution Time, page 1077)

l A report's SQL (per pass) including both its size and the time it executes
(Limit a Report's SQL Per Pass, page 1078)

l The amount of memory used for Intelligent Cubes (Governing Intelligent Cube Memory Usage, page 1297)

Exclude Document Datasets from the Job Limits


Multiple jobs may be submitted when documents and reports are executed.
For example, if you execute a document that has a prompt and three reports
embedded in it, Intelligence Server processes five jobs: one for the
document, one for the prompt, and three for the embedded dataset reports.

To avoid unexpectedly preventing documents from executing, you can exclude report jobs submitted as part of document execution from the job limits. In this case, if you execute a document that has a prompt and three reports embedded in it, Intelligence Server would only count two jobs, the document and the prompt, towards the job limits described below.

To exclude document dataset jobs from the job limits, in the Intelligence
Server Configuration Editor, select the Governing Rules: Default: General
category, and select the For Intelligence Server job and history list
governing, exclude reports embedded in Report Services documents
from the counts check box. This selection applies to the project-level job
limits as well as to the server-level limits.
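Using the five-job example above, the effect of the exclusion setting on job counting can be sketched as follows. The function and job labels are illustrative, not a MicroStrategy API.

```python
def count_governed_jobs(jobs, exclude_dataset_jobs):
    """Count jobs toward the job limits; with the exclusion enabled,
    dataset reports embedded in a document are not counted."""
    if exclude_dataset_jobs:
        jobs = [j for j in jobs if j != "dataset"]
    return len(jobs)

# A document with one prompt and three embedded dataset reports:
jobs = ["document", "prompt", "dataset", "dataset", "dataset"]
print(count_governed_jobs(jobs, exclude_dataset_jobs=True))   # 2
print(count_governed_jobs(jobs, exclude_dataset_jobs=False))  # 5
```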

Limit the Total Number of Jobs


You can limit the total number of concurrent jobs being processed by
Intelligence Server. Concurrent jobs include report requests, element
requests, and autoprompt requests that are executing or waiting to execute.
Completed (open) jobs, cached jobs, or jobs that have returned an error are
not counted. If the job limit is reached, a user sees an error message stating
that the maximum number of jobs has been reached. The user needs to
submit the job again.

To set this limit, in the Intelligence Server Configuration Editor, select the
Governing Rules: Default: General category, and specify the value in the
Maximum number of jobs field. You can also specify a maximum number of
interactive jobs (jobs executed by a direct user request) and scheduled jobs
(jobs executed by a scheduled request). A value of -1 indicates that there is
no limit on the number of jobs that can be executed.
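The admission check implied by this limit, including the -1 no-limit convention, can be sketched as:

```python
def admit_job(current_jobs, max_jobs):
    """Return True if a new job may start. A limit of -1 means
    no limit, matching the governing-setting convention."""
    return max_jobs == -1 or current_jobs < max_jobs

print(admit_job(current_jobs=99, max_jobs=100))     # True
print(admit_job(current_jobs=100, max_jobs=100))    # False
print(admit_job(current_jobs=100000, max_jobs=-1))  # True
```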

Limit the Number of Jobs Per Project


You can limit the number of concurrent jobs that are being processed by
Intelligence Server for a project. If you have multiple projects on an
Intelligence Server, each can have its own job limit setting. Limiting the
number of concurrent jobs per project helps reduce unnecessary strain on
the system by limiting the amount of resources that concurrently executing
jobs can take up.


Concurrent jobs include report requests, element requests, and autoprompt requests that are executing or waiting to execute. Finished jobs that are still
open, cached jobs, and jobs that returned an error are not counted. If the
limit is reached, a user sees an error message stating that the number of
jobs per project is too high. The user then needs to submit the job again.

In a clustered system, these settings limit the number of concurrent jobs per
project on each node of the cluster.

To specify this job limit setting, in the Project Configuration Editor for the
project, select the Governing Rules: Default: Jobs category, and specify
the number of concurrent jobs that you want to allow for the project in each
Jobs per project field. You can also specify a maximum number of
interactive jobs (jobs executed by a direct user request) and scheduled jobs
(jobs executed by a scheduled request). A value of -1 indicates that the
number of jobs that can be executed has no limit.

Limit the Number of Jobs Per User Session and Per User
Account
If your users' job requests place a heavy burden on the system, you can limit
the number of open jobs within Intelligence Server, including element
requests, autoprompts, and reports for a user.

l To help control the number of jobs that can run in a project and thus
reduce their impact on system resources, you can limit the number of
concurrent jobs that a user can execute in a user session. For example, if
the Jobs per user session limit is set to four and a user has one session
open for the project, that user can only execute four jobs at a time.
However, the user can bypass this limit by logging in to the project
multiple times. (To prevent this, see the next setting, Jobs per user
account limit.)

To specify this setting, in the Project Configuration Editor for the project, select the Governing Rules: Jobs category, and type the number in the Jobs per user session field. A value of -1 indicates that the number of jobs that can be executed has no limit.

l You can set a limit on the number of concurrent jobs that a user can
execute for each project regardless of the number of user sessions that
user has at the time. For example, if the user has two user sessions and
the Jobs per user session limit is set to four, the user can run eight jobs.
But if this Jobs per user account limit is set to five, that user can execute
only five jobs, regardless of the number of times the user logs in to the
system. Therefore, this limit can prevent users from circumventing the
Jobs per user session limit by logging in multiple times.

To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Jobs category, and type the number of jobs
per user account that you want to allow in the Jobs per user account
field. A value of -1 indicates that the number of jobs that can be executed
has no limit.

These two limits count the number of report, element, and autoprompt job requests that are executing or waiting to execute. Jobs that have finished, cached jobs, or jobs that returned an error are not counted toward these limits. If either limit is reached, any jobs the user submits do not execute and the user sees an error message.
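The interplay of the two limits in the example above can be sketched as follows; the helper function is illustrative, not a MicroStrategy API.

```python
def max_concurrent_jobs(sessions, jobs_per_session, jobs_per_account):
    """Effective ceiling on a user's concurrent jobs across all open
    sessions; -1 disables either limit."""
    per_session = float("inf") if jobs_per_session == -1 \
        else sessions * jobs_per_session
    per_account = float("inf") if jobs_per_account == -1 else jobs_per_account
    return min(per_session, per_account)

# Two sessions at four jobs each would allow eight jobs,
# but the account-level limit of five wins:
print(max_concurrent_jobs(sessions=2, jobs_per_session=4,
                          jobs_per_account=5))   # 5
print(max_concurrent_jobs(sessions=2, jobs_per_session=4,
                          jobs_per_account=-1))  # 8
```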

Limit the Number of Executing Jobs Per User and Project


If your users tend to request jobs that do not place much burden on the
system, you may want to limit only executing reports and data marts, and
still allow users to answer autoprompts and issue element requests. You can
limit the number of concurrent reports (both regular reports and dataset
reports in a document) in a project per user account.

This limit is called Executing jobs per user. If the limit is reached for the
project, new report requests are placed in the Intelligence Server queue
until other jobs finish. They are then processed in the order in which they
were placed in the queue, which is controlled by the priority map (see
Prioritize Jobs, page 1086).


To specify this limit setting, in the Project Configuration Editor for the
project, select the Governing Rules: Default: Jobs category, and type the
number of concurrent report jobs per user you want to allow in the
Executing jobs per user field. A value of -1 indicates that the number of
jobs that can be executed has no limit.

Limit the Maximum Report Execution Time


You can limit a job in Intelligence Server by specifying the maximum amount
of time that a job can execute within a project. Intelligence Server cancels
any jobs that exceed the limit.

To set this limit, in the Project Configuration Editor, select the Governing
Rules: Default: Result Sets category, and specify the number of seconds
in the Intelligence Server Elapsed Time (sec) fields. You can set different
limits for ad-hoc reports and scheduled reports.

This limit applies to most operations that are entailed in a job from the time it
is submitted to the time the results are returned to the user. If the job
exceeds the limit, the user sees an error message and cannot view the
report.

The figure below illustrates how job tasks make up the entire report
execution time. In this instance, the time limit includes the time waiting for
the user to complete report prompts. Each step is explained in the table
below.


Step   Status                   Comment
1      Waiting for Autoprompt   Resolving prompts
2*     Waiting (in queue)       Element request is waiting in job queue for execution
3*     Executing                Element request is executing on the database
4      Waiting for Autoprompt   Waiting for user to make prompt selections
5      Waiting (in queue)       Waiting in job queue for execution
6      Executing                Query engine executes SQL on database (can be multiple passes)
7      Executing                Analytical engine processes results

*Steps 2 and 3 are for an element request. They are executed as separate
jobs. During steps 2 and 3, the original report job has the status "Waiting for
Autoprompt."

The following tasks are not shown in the example above because they
consume very little time. However, they also count toward the report
execution time.

l Element request SQL generation

l Report SQL generation

l Returning results from the database

For more information about the job processing steps, see Processing Jobs,
page 55.

Limit a Report's SQL Per Pass


You can limit a report's SQL per pass. This includes limits on the amount of time that each pass can take and the maximum size (in bytes) that the SQL statement can be. These limits are set in the VLDB properties, as described below. For more information about VLDB properties in general, see SQL Generation and Data Processing: VLDB Properties.

You can also limit the amount of memory that Intelligence Server uses
during report SQL generation. This limit is set for all reports generated on
the server. To set this limit, in the Project Configuration Editor, open the
Governing Rules: Default: Result Sets category, and specify the Memory
consumption during SQL generation. A value of -1 indicates no limit.

SQL Time Out (Per Pass) (Database Instance and Report)


You can limit the amount of time that each pass of SQL can take within the
data warehouse. If the time for a SQL pass reaches the maximum,
Intelligence Server cancels the job and the user sees an error message. You
can specify this setting at either the database instance level or at the report
level.

To specify this setting, edit the VLDB properties for the database instance or
for a report, expand Governing settings, then select the SQL Time Out
(Per Pass) option.

Maximum SQL Size (Database Instance)


You can limit the size (in bytes) of the SQL statement per pass before it is
submitted to the data warehouse. If the size for a SQL pass reaches the
maximum, Intelligence Server cancels the job and the user sees an error
message. You can specify this setting at the database instance level.

To specify this, edit the VLDB properties for the database instance, expand
Governing settings, then select the Maximum SQL Size option.
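The byte-size check described above can be sketched as follows; the function name and error handling are illustrative, not Intelligence Server code.

```python
def check_sql_pass(sql, max_bytes):
    """Reject a SQL pass whose statement exceeds the configured
    byte limit; -1 means no limit. Returns the statement size."""
    size = len(sql.encode("utf-8"))
    if max_bytes != -1 and size > max_bytes:
        raise RuntimeError(
            f"SQL pass is {size} bytes, over the {max_bytes}-byte limit")
    return size

print(check_sql_pass("SELECT 1", max_bytes=1024))  # 8
```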

Limit the Size of Messages Logged to Kafka


MicroStrategy can log messages to Kafka, where they are stored as text files. Limiting the size of these files allows you to quickly diagnose problems if they occur. The default setting for these log files is 20 MB and can be adjusted in the LogConsumer.properties file or the Kafka Consumer Console.

When the log files reach the size limit, they automatically roll over to a backup file.

Adjust the Setting in the Properties File


1. Open LogConsumer.properties from one of the following locations:

l Windows: C:\Program Files (x86)\MicroStrategy\Intelligence Server\KafkaConsumer\LogConsumer.properties

l Linux: [InstallPath]/IntelligenceServer/KafkaConsumer/LogConsumer.properties

2. Modify the setting max_file_size_M=20 by replacing the default value of 20 with the size in MB you want the log files to be.

3. Click Save.
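A minimal sketch of making this edit programmatically, assuming a simple key=value properties layout; the neighboring topic property shown is hypothetical.

```python
import re

def set_max_file_size(text, new_size_mb):
    """Rewrite max_file_size_M in a properties-file body.
    Assumes a simple key=value layout, one setting per line."""
    return re.sub(r"^max_file_size_M=\d+",
                  f"max_file_size_M={new_size_mb}",
                  text, flags=re.MULTILINE)

# "topic" is a made-up neighboring property for illustration:
props = "topic=mstr_logs\nmax_file_size_M=20\n"
print(set_max_file_size(props, 50))
```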

Adjust the Setting via Kafka Consumer Console


1. Delete the LogConsumer.properties file.

2. Open the Kafka Consumer Console by executing the following command:

java -jar KafkaConsumer.jar

3. Follow the command line prompts to enter the Kafka consumer settings.

Manage Job Execution


The system's ability to execute jobs is limited by the available system
resources and by how those resources are used by Intelligence Server.


This section discusses the different ways you have of managing job
execution. These include:

l Manage Database Connection Threads, page 1081

l Prioritize Jobs, page 1086

l Results Processing, page 1090 (the processing that Intelligence Server performs on results returned from the data warehouse)

Manage Database Connection Threads


The main factor that determines job execution performance is the number of
database connections that are made to the data warehouse. Report and
element requests are submitted from Intelligence Server to the data
warehouse through a database connection thread. Results of these requests
are also returned to Intelligence Server through the database connection
thread.

You must determine the number of threads that strikes a good balance between quickly serving each user request and not overloading the system. The overall goal is to prioritize jobs and provide enough threads so that jobs that must be processed immediately are processed immediately, and the remaining jobs are processed as promptly as possible. If your system has hundreds of concurrent users submitting requests, you must determine at what point to limit the number of database connection threads by placing user requests in a queue.

The number of available database connection threads falls in the range depicted as the Optimal use of resources in the illustration below.

To monitor whether the number of database connection threads in your system is effective, use the Database Connection Monitor. For more
information about this tool, see Monitoring Database Instance Connections,
page 24. If all threads are "Busy" a high percentage of the time, consider
increasing the number of connection threads as long as your data
warehouse can handle the load and as long as Intelligence Server does not
become overloaded.

Once you have the number of threads calculated, you can then set job
priorities and control how many threads are dedicated to serving jobs
meeting certain criteria.

Limiting and Prioritizing the Number of Database Connections


To set the number of database connection threads allowed at a time, modify the database instance used to connect to the data warehouse. Use the Job Prioritization tab in the Database Instance Editor and specify the number of high, medium, and low connections. The sum of these numbers is the total
number of concurrent connection threads allowed between Intelligence
Server and the data warehouse. These settings apply to all projects that use
the selected database instance.

You should have at least one low-priority connection available, because low
priority is the default job priority, and low-priority jobs can use only low-
priority database connection threads. Medium-priority connection threads
are reserved for medium- and high-priority jobs, and high-priority
connection threads are reserved for high-priority jobs only. For more
information about job priority, including instructions on how to set job
priority, see Prioritize Jobs, page 1086.

If you set all connections to zero, jobs are not submitted to the data
warehouse. This may be a useful way for you to test whether scheduled
reports are processed by Intelligence Server properly. Jobs wait in the
queue and are not submitted to the data warehouse until you increase the
connection number, at which point they are then submitted to the data
warehouse. Once the testing is over, you can delete those jobs so they are
never submitted to the data warehouse.

Optimizing Database Connection Threads Using ODBC Settings


In addition to limiting the number of database connection threads created
between Intelligence Server and the data warehouse, it is a good practice to
efficiently use those connection threads once they are established. You
want to ensure that the threads are being used and are not tied up by
processes that are running too long. To optimize how those threads are
used, you can limit the length of time they can be used by certain jobs.
These limits are described below.

To set these limits, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on the
Database Connections dialog box, select the Advanced tab. A value of 0 or
-1 indicates no limit.


Maximum Cancel Attempt Time

When a user runs a report that executes for a long time on the data
warehouse, the user can cancel the job execution. This may be due to an
error in the report's design, especially if it is in a project in a development
environment, or the user may simply not want to wait any longer. If the
cancel is not successful after 30 seconds, Intelligence Server deletes that
job's database connection thread. The Maximum cancel attempt time
(sec) field controls how long you want Intelligence Server to wait in addition
to the 30 seconds before deleting the thread.

Maximum Query Execution Time

This is the maximum amount of time that a single pass of SQL can execute
on the data warehouse. When the SQL statement or fetch operation begins,
a timer starts counting. If the Maximum query execution time (sec) limit is
reached before the SQL operation is concluded, Intelligence Server cancels
the operation.

This setting is very similar to the SQL time out (per pass) VLDB setting (see Limit a Report's SQL Per Pass, page 1078). This setting is made on the database connection and governs the maximum query execution time across all projects that use that connection; the VLDB setting can override it for a specific report.

Maximum Connection Attempt Time

This is the maximum amount of time that Intelligence Server waits while
attempting to connect to the data warehouse. When the connection is
initiated, a timer starts counting. If the Maximum connection attempt time
(sec) limit is reached before the connection is successful, the connection is
canceled and an error message is displayed.


Limiting Database Connection Caches


Establishing a database connection thread is expensive in terms of time and
resources. Because of this, Intelligence Server caches the threads so that
every SQL pass and job execution it performs does not need to create a new
connection. Rather, those processes use an existing cached thread.
However, the RDBMS may, after a certain time limit, delete the connection
threads without notifying Intelligence Server. If this happens and an
Intelligence Server job tries to use a cached connection thread, the user
sees an error message. To avoid this, you can limit the length of time that a
database connection cache can exist. You can limit the maximum lifetime of
a database connection (see Connection Lifetime, page 1085), and you can
limit the amount of time an inactive database connection remains open (see
Connection Idle Timeout, page 1086).

To set these limits, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on the
Database Connections dialog box, select the Advanced tab. For these
settings, a value of -1 indicates no limit, and a value of 0 indicates that the
connection is not cached and is deleted immediately when execution is
complete.

Connection Lifetime

The Connection lifetime (sec) limit is the maximum amount of time that a
database connection thread remains cached. The Connection lifetime
should be shorter than the data warehouse RDBMS connection time limit.
Otherwise the RDBMS may delete the connection in the middle of a job.

When the Connection lifetime is reached, one of the following occurs:

l If the database connection has a status of Cached (it is idle, but available)
when the limit is reached, the connection is deleted.

l If the database connection has a status of Busy (it is executing a job) when the limit is reached, the connection is deleted as soon as the job completes. The database connection does not go into a Cached state.


Connection Idle Timeout

The Connection idle timeout (sec) limit is the amount of time that an
inactive connection thread remains cached in Intelligence Server until it is
terminated. When a database connection finishes a job and no job is waiting
to use it, the connection becomes cached. If the connection remains cached
for longer than this timeout limit, the database connection thread is then
deleted. This prevents connections from tying up data warehouse and
Intelligence Server resources if they are not needed.
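The two cache limits can be combined into one expiry check, sketched here with epoch-style timestamps in seconds. The function is illustrative, not Intelligence Server code.

```python
def connection_expired(now, created_at, last_used_at, lifetime, idle_timeout):
    """A cached connection is dropped when it outlives the connection
    lifetime or sits idle past the idle timeout; -1 disables a limit.
    All times are in seconds (illustrative)."""
    if lifetime != -1 and now - created_at >= lifetime:
        return True
    if idle_timeout != -1 and now - last_used_at >= idle_timeout:
        return True
    return False

# Created 700 s ago with a 600 s lifetime -> expired:
print(connection_expired(now=1000, created_at=300, last_used_at=970,
                         lifetime=600, idle_timeout=120))   # True
# Fresh connection, recently used -> still cached:
print(connection_expired(now=1000, created_at=950, last_used_at=990,
                         lifetime=600, idle_timeout=120))   # False
```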

Prioritize Jobs
Job priority defines the order in which jobs are processed. Jobs are usually executed on a first-come, first-served basis. However, your system probably has certain jobs that need to be processed before others.

Job priority does not affect the amount of resources a job gets once it is
submitted to the data warehouse. Rather, it determines whether certain jobs
are submitted to the data warehouse before other jobs in the queue.

For example, an executive in your company runs reports at unplanned times and you want to ensure that these reports are immediately processed. If no
priority is set for the executive's reports, they are processed with the other
jobs in the system. Depending on data warehouse activity, this may require
some wait time. If you assign a high priority to all jobs from the executive's
user group, Intelligence Server processes and submits those jobs to the
data warehouse first, rather than waiting for other jobs to finish.

Intelligence Server processes a job on a database connection that corresponds to the job's priority. If no priority is specified for a job,
Intelligence Server processes the job on a low-priority connection. For
example, jobs with high priority are processed by high-priority connections,
and jobs with low or no priority are processed by a low-priority connection.
For information about setting database connection thread priority, see
Manage Database Connection Threads, page 1081.


Intelligence Server also engages in connection borrowing when processing jobs. Connection borrowing occurs when Intelligence Server executes a job
on a lower priority connection because no connections that correspond to
the job's priority are available at execution time. High-priority jobs can run
on high-, medium-, and low-priority connections. Likewise, medium-priority
jobs can run on medium- and low-priority connections.
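The borrowing rules can be sketched as a pool lookup; the data structures here are illustrative, not a MicroStrategy API.

```python
def pick_connection(job_priority, free):
    """Connection borrowing: a job may use a connection of its own
    priority or a lower one, never a higher one. `free` maps a
    priority level to the number of idle connections (illustrative)."""
    allowed = {"high": ["high", "medium", "low"],
               "medium": ["medium", "low"],
               "low": ["low"]}
    for level in allowed[job_priority]:
        if free.get(level, 0) > 0:
            return level
    return None  # no usable connection: the job waits in the queue

print(pick_connection("high", {"high": 0, "medium": 1, "low": 2}))  # medium
print(pick_connection("low", {"high": 3, "medium": 1, "low": 0}))   # None
```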

When a job is submitted and no connections are available to process it, either with the same priority or with a lower priority, Intelligence Server
places the job in queue and then processes it when a connection becomes
available.

You can set jobs to be high, medium, or low priority by one or more of the following variables:

l Request type: Report requests and element requests can have different
priority (Prioritizing Jobs by Request Type, page 1088).

l Application type: Jobs submitted from different MicroStrategy applications, such as Developer, Scheduler, MicroStrategy Web, Library, MicroStrategy Workstation, or Narrowcast Server, are processed according to the priority that you specify (Prioritizing Jobs by MicroStrategy Application Type, page 1088).

l User group: Jobs submitted by users in the groups you select are
processed according to the priority that you specify (Prioritizing Jobs by
User Group, page 1089).

l Cost: Jobs with a higher resource cost are processed according to the
priority that you specify (Prioritizing Jobs by Report Cost, page 1089). Job
cost is an arbitrary value you can assign to a report that represents the
resources used to process that job.

l Project: Jobs submitted from different projects are processed according to the priority that you specify (Prioritizing Jobs by Project, page 1090).

These variables allow you to create sophisticated rules for which job requests are processed first. For example, you could specify that any element requests are high priority, any requests from your test project are low priority, and any requests from users in the Developers group are medium priority.

A job is processed at the highest priority assigned to it by any rule. For example, if you set all jobs from your test project at low priority, and all jobs
from users in the Developers group at medium priority, jobs in the test
project that are requested by users in the Developers group are processed
at medium priority.
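The highest-priority-wins rule can be sketched as follows; the rule representation is illustrative.

```python
RANK = {"low": 0, "medium": 1, "high": 2}

def job_priority(matched_rules):
    """A job runs at the highest priority assigned by any matching
    rule; with no matching rule it defaults to low priority."""
    if not matched_rules:
        return "low"
    return max(matched_rules, key=RANK.__getitem__)

# Test project rule -> low, Developers group rule -> medium:
print(job_priority(["low", "medium"]))  # medium
```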

To Set Job Prioritization Rules

1. On the Intelligence Server machine, in Developer, log in to a project source. You must log in as a user with administrative privileges.

2. Expand the Administration folder, then expand Configuration Managers, and then select Database Instances.

3. Right-click the database instance used to connect to the data warehouse and select Prioritization.

4. To add new job prioritization rules, click New.

Prioritizing Jobs by Request Type


You can select whether element requests or report requests are processed
first. For example, you may want element requests to be submitted to the
data warehouse before report requests, because element requests are
generally used in prompts and you do not want users to have to wait long
while prompt values load. In this case you might specify all element requests
to be processed at a high priority by default, and all report requests to be
processed at a low priority by default.

Prioritizing Jobs by MicroStrategy Application Type


You can assign a different priority to jobs submitted from Developer, MicroStrategy Web, Scheduler, and Narrowcast Server. All jobs submitted from the specified application use the specified priority. For example, you may want report designers to be able to quickly test their reports, so you may specify that all jobs that are submitted from Developer are processed at a high priority.

Prioritizing Jobs by User Group


You can assign a different priority to jobs submitted from different
MicroStrategy user groups. For example, you can assign all jobs from users
in the Executive user group to be processed at a high priority.

Prioritizing Jobs by Report Cost


Report cost is an arbitrary value that you can assign to a report to help
determine its priority in relation to other requests. If you choose to use
report cost as a priority variable, you must define a set of priority groups
based on report cost. The default priority groups are:

• Light: reports with costs between 0 and 334

• Medium: reports with costs between 335 and 666

• Heavy: reports with costs between 667 and 999

The set of cost groupings must cover all values from 0 to 999. You can then
assign a priority level to each priority group. For example, you can set heavy
reports to low priority, because they are likely to take a long time to process,
and set light reports to high priority, because they do not place much strain
on the system resources.
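The cost groupings behave like a simple range lookup. The following Python sketch (hypothetical; the group names and boundaries are the defaults listed above) shows how a report cost maps to a priority group:

```python
# Hypothetical sketch of the default report-cost priority groups.
# The groupings must cover every cost value from 0 to 999.
COST_GROUPS = [
    (0, 334, "Light"),
    (335, 666, "Medium"),
    (667, 999, "Heavy"),
]

def cost_group(report_cost):
    """Map a report cost (0-999) to its priority group name."""
    for low, high, name in COST_GROUPS:
        if low <= report_cost <= high:
            return name
    raise ValueError("report cost must be between 0 and 999")

print(cost_group(900))  # Heavy
```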

Once you determine the cost groupings, you can set the report cost value on individual reports. For example, you notice that a report requires significantly more processing time than most other reports. You can assign it a report cost of 900 (heavy). In this sample configuration, the report has a low priority. For factors that may help you determine the cost of a report, see Results Processing, page 1090.

You set the cost of a report in the report's Properties dialog box, in the Priority category. You must have system administrator privileges to set the cost of a report.

To Set the Cost for a Report

1. In Developer, right-click the report and select Properties.

2. Select the Priority category.

3. In the Report Cost field, type the cost of the report. Higher numbers
indicate a report that uses a great deal of system resources. Lower
numbers indicate a less resource-intensive report.

4. Click OK.

Prioritizing Jobs by Project


You can assign a different priority to reports from different projects. For
example, you may want all jobs submitted from your production project to
have a medium priority, so that they take precedence over reports from your
test project.

Results Processing
When Intelligence Server processes results that are returned from the data
warehouse, several factors determine how much of the machine's resources
are used. These factors include:

• Whether Intelligence Server is using thread balancing (see Intelligence Server Thread Balancing, page 1091)

• The size of the report (see Limiting the Maximum Report Size, page 1091)

• Whether the report is an Intelligent Cube (see Limiting the Size and Number of Intelligent Cubes, page 1095)

• Whether the report is imported from an external data source (see Limiting the Memory Used During Data Fetching, page 1096)

Intelligence Server Thread Balancing


By default, threads within Intelligence Server process tasks in the order that they are received. You can configure Intelligence Server to allocate threads to processes, such as object serving, element serving, SQL generation, and so forth, that need them most, while less loaded processes can return threads to the available pool.

To enable thread balancing for Intelligence Server, in the Intelligence Server Configuration Editor, in the Server Definition: Advanced category, select the Balance MicroStrategy Server threads check box.

Limiting the Maximum Report Size


A report instance is the version of the report results that Intelligence Server
holds in memory for cache and working set results. The size of the report
instance is proportional to the size of the report results, that is, the row size
multiplied by the number of rows.

The row size depends on the data types of the attributes and metrics on the
report. Dates are the largest data type. Text strings, such as descriptions
and names, are next in size, unless the description is unusually long, in
which case they may be larger than dates. Numbers, such as IDs, totals, and
metric values, are the smallest.

The easiest way to estimate the amount of memory that a report uses is to
view the size of the cache files using the Cache Monitor in Developer. The
Cache Monitor shows the size of the report results in binary format, which
from testing has proven to be 30 to 50 percent of the actual size of the report
instance in memory. For instructions on how to use the Cache Monitor to
view the size of a cache, see Monitoring Result Caches, page 1217.
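Because the cache file is roughly 30 to 50 percent of the in-memory instance, you can back out a rough memory range from the cache size shown in the Cache Monitor. A hypothetical Python sketch of that estimate:

```python
def estimate_instance_memory(cache_size_mb):
    """Rough in-memory size of a report instance from its cache file size.

    The binary cache file is roughly 30-50% of the in-memory instance,
    so the instance is roughly cache_size / 0.5 to cache_size / 0.3.
    """
    return (cache_size_mb / 0.5, cache_size_mb / 0.3)

low, high = estimate_instance_memory(15)  # a 15 MB cache file
print(f"roughly {low:.0f} to {high:.0f} MB in memory")
```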

Intelligence Server allows you to govern the size of a report or request in the
following ways:

• Limiting the Number of Report Result Rows, page 1092

• Limiting the Number of Element Rows, page 1093

• Limiting the Number of Intermediate Rows, page 1094

Like all requests, large report instances are also governed by the Memory
Contract Manager settings. For more information about Memory Contract
Manager, see Governing Intelligence Server Memory Use with Memory
Contract Manager, page 1039.

Limiting the Number of Report Result Rows

Reports with a large number of result rows can take up a great deal of
memory at run time. For example, your data warehouse may contain daily
sales data for thousands of items over several years. If a user attempts to
build a report that lists the revenue from every item for every day in the data
warehouse, the report may use all available Intelligence Server memory.

You can limit a report's size in Intelligence Server by setting a maximum limit on the number of rows that a report can contain. This setting is applied by the Query Engine when retrieving the results from the database. If the report exceeds this limit, the report is not executed and an error message is displayed.

To set the maximum number of result rows for all reports, data marts, and
Intelligent Cubes in a project, in the Project Configuration Editor, expand the
Governing Rules: Default: Result Sets category, and type the maximum
number in the appropriate Final Result Rows field. You can set different
limits for standard reports, Intelligent Cubes, and data marts.

You can also set the result row limit for a specific report in that report's
VLDB properties. The VLDB properties limit for a report overrides the project
limit. For example, if you set the project limit at 10,000 rows, but set the limit
to 20,000 rows for a specific report that usually returns more than 10,000
rows, users are able to see that report without any errors.
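The override behavior can be sketched as follows (hypothetical Python; the actual resolution happens inside Intelligence Server):

```python
# Hypothetical sketch: a report-level VLDB row limit overrides the
# project-level limit when "Use default inherited value" is cleared.
def effective_row_limit(project_limit, report_vldb_limit=None):
    """Return the result-row limit that applies to a report."""
    if report_vldb_limit is not None:
        return report_vldb_limit
    return project_limit

print(effective_row_limit(10_000))          # 10000
print(effective_row_limit(10_000, 20_000))  # 20000
```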


To Set the Result Set Limit for a Specific Report

1. In Developer, right-click the report to set the limit for and select Edit.

2. From the Data menu, select VLDB properties.

3. Expand the Governing settings, then select Results Set Row Limit.

4. Make sure the Use default inherited value check box is cleared.

5. In the Results Set Row Limit field, type the limit.

6. Click Save and Close.

Limiting the Number of Element Rows

Another way that you can limit the size of a request is to limit the number of
element rows returned at a time. Element rows are returned when a user
accesses a report prompt, and when using the Data Explorer feature in
Developer.

Element rows are incrementally fetched, that is, returned in small batches,
from the data warehouse to Intelligence Server. The size of the increment
depends on the maximum number of element rows specified in the client.
Intelligence Server incrementally fetches four times the number for each
element request.
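The increment calculation is simple; a hypothetical Python sketch:

```python
# Hypothetical sketch: the server-side fetch increment is four times
# the client's element-rows-per-block setting.
FETCH_MULTIPLIER = 4

def fetch_increment(client_block_size):
    """Element rows retrieved from the warehouse per element request."""
    return FETCH_MULTIPLIER * client_block_size

# With a client block size of 50 elements, each incremental fetch
# returns 200 element rows.
print(fetch_increment(50))  # 200
```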

For more information about element requests, such as how they are created,
how incremental fetch works, and the caches that store the results, see
Element Caches, page 1261.

MicroStrategy recommends that you set the element row limit to be larger than the maximum number of attribute element rows that you expect users to browse. For example, if the Product table in the data warehouse has 10,000 rows that users want to browse and the Order table has 200,000 rows that you do not expect users to browse, you should set this limit to 11,000. Intelligence Server incrementally fetches the element rows. If the element rows limit is reached, the user sees an error message and cannot view the prompt or the data.

To set the maximum number of element rows returned for all element
requests in a project in Developer, in the Project Configuration Editor for
that project, expand the Governing Rules: Default: Result Sets category
and type the number in the All element browsing result rows field.

To Set the Number of Objects Returned for Requests in MicroStrategy Web

1. In MicroStrategy Web, log in to a project as a user with the Web Administration privilege.

2. Click the MicroStrategy icon, then select Preferences.

3. Select Project defaults, and then select the General category.

4. In the Incremental Fetch section, specify the values in the Maximum number of attribute elements per block and Maximum number of report objects per block fields.

5. Click OK.

Limiting the Number of Intermediate Rows

You can limit a report's size on Intelligence Server by setting a maximum number of intermediate result rows that are allowed in Intelligence Server. This limit does not apply to the rows in intermediate or temporary tables created in the data warehouse. Rather, it controls the number of rows held in memory in the Analytical Engine processing unit of Intelligence Server for analytic calculations that cannot be done on the database. Lowering this setting reduces the amount of memory consumed for large reports. If the limit is reached, the user sees an error message and cannot view the report. For example, this may happen when you add a complex subtotal to a large report or when you pivot a large report.


To specify this limit for all reports in a project, in the Project Configuration
Editor, select the Governing Rules: Default: Result Sets category and
type the number in the All intermediate result rows box.

You can also set the intermediate row limit for a specific report in that
report's VLDB properties. The VLDB properties limit for the report overrides
the project limit. For example, if you set the project limit at 10,000 rows but
set the limit to 20,000 rows for a specific report that usually returns more
than 10,000 rows, users are able to see that report without any errors.

To Set the Intermediate Row Limit for a Specific Report

1. In Developer, right-click the report to set the limit for and select Edit.

2. From the Data menu, select VLDB properties.

3. Expand the Governing settings, then select Intermediate Row Limit.

4. Make sure the Use default inherited value check box is cleared.

5. In the Intermediate Row Limit field, type the limit.

6. Click Save and Close.

Limiting the Size and Number of Intelligent Cubes


If you have purchased OLAP Services licenses from MicroStrategy, your
report designers can create Intelligent Cube reports. These Intelligent
Cubes must be stored in Intelligence Server memory for reports to access
their data. This may cause a shortage of memory for other processes on the
Intelligence Server machine.

You can govern the amount of resources used by Intelligent Cubes by limiting the amount of memory used by Intelligent Cubes and by limiting the number of Intelligent Cubes that can be loaded into memory.

To specify these settings, in the Project Configuration Editor for the project, select the Cubes: General category and type the new values in the Maximum RAM usage (MBytes) and Maximum number of cubes fields.


For detailed information on governing Intelligent Cube memory usage, see
Defining Memory Limits for Intelligent Cubes, page 1302.

Limiting the Memory Used During Data Fetching


Certain MicroStrategy features enable you to fetch data from external data
sources, such as web services, MDX cubes, or Excel spreadsheets. When
data is fetched from one of these data sources, it is temporarily stored in
Intelligence Server memory while being converted to a report. This can
cause a shortage of memory for other processes on the Intelligence Server
machine.

You can govern the amount of memory used for an individual data fetch in
the Project Configuration Editor. Select the Governing Rules: Default:
Result Sets category, and type the new value in the Memory consumption
during data fetching (MB) field. The default value is -1, indicating no limit.

Governing Results Delivery


After Intelligence Server processes the results of a job (see Manage Job
Execution, page 1080), it then delivers the results to the user. In a three-tier
system, results delivery uses very little of the system resources. Most of the
tuning options for results delivery are focused on a four-tier system involving
MicroStrategy Web.

To deliver results, when a report is first run or when it is manipulated, Intelligence Server generates XML and sends it to the MicroStrategy Web server. The Web server then translates the XML into HTML for display in the user's web browser.

You can set limits in two areas to control how much information is sent at a
time. The lower of these two settings determines the maximum size of
results that Intelligence Server delivers at a time:

• How many rows and columns can be displayed simultaneously in MicroStrategy Web (see Limit the Information Displayed at One Time, page 1098)

• How many XML cells in a result set can be delivered simultaneously (see Limit the Number of XML Cells, page 1099)

The following settings also govern results delivery:

• The maximum size of a report that can be exported (see Limit Export Sizes, page 1100 and Limit the Memory Consumption for File Generation, page 1101)

• The number of XML drill paths in a report (see Limit the Total Number of XML Drill Paths, page 1102)

Like all requests, displayed and exported reports are also governed by the
Memory Contract Manager settings. For more information about Memory
Contract Manager, see Governing Intelligence Server Memory Use with
Memory Contract Manager, page 1039.

Limit the Information Displayed at One Time


In MicroStrategy Web, if a report contains a large amount of data, it can use
a great deal of the system resources and take a significant amount of time
before it is displayed to the user. You can lessen the impact of these large
reports by limiting the maximum number of rows and columns that are
displayed. If a report's result set is larger than these limits, the report is
broken into pages (increments) that are fetched from the server one at a
time.

The size of these increments can be set as project defaults by the MicroStrategy Web administrator. Users with the Web Change User Preferences privilege can also customize these sizes.

To Limit the Number of Rows and Columns for All Users

1. In MicroStrategy Web, log in to a project as a user with the Web Administration privilege.

2. Click the MicroStrategy icon, then click Preferences.

3. Select Project defaults, and then select the Grid display category.

4. Specify the values in the Maximum rows in grid and Maximum columns in grid fields.

5. Click OK.


To Limit the Number of Rows and Columns for One User

1. In MicroStrategy Web, log in to a project as a user with the Web Change User Preferences privilege.

2. Click the MicroStrategy icon, then click Preferences.

3. Select the Grid display category.

4. Specify the values in the Maximum rows in grid and Maximum columns in grid fields.

If the user sets the number of rows and columns too high, the number
of XML cells limit that is set in Intelligence Server (see Limit the
Number of XML Cells, page 1099) governs the size of the result set.

5. Click OK.

Limit the Number of XML Cells


When large report result sets are generated into XML, they can require a
significant amount of Intelligence Server memory. MicroStrategy Web
handles this by implementing the incremental fetch feature (see Limit the
Information Displayed at One Time, page 1098). You can also govern the
result set's size by setting the Maximum number of XML cells at the
Intelligence Server level. This determines the maximum number of cells that
can be returned from Intelligence Server to the Web server at a time. For
this limit, the number of cells is the number of rows multiplied by the number
of metric columns. Attribute cells are not considered.

For example, if the XML limit is set at 10,000 and a report has 100,000
metric cells, the report is split into 10 pages. The user clicks the page
number to view the corresponding page.
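The paging arithmetic can be sketched in Python (hypothetical; only metric cells count toward the limit):

```python
import math

# Hypothetical sketch of the paging arithmetic: only metric cells
# (rows x metric columns) count toward the XML cell limit.
def xml_pages(result_rows, metric_columns, xml_cell_limit):
    """Number of pages a result set is split into for delivery."""
    metric_cells = result_rows * metric_columns
    return math.ceil(metric_cells / xml_cell_limit)

# 20,000 rows x 5 metric columns = 100,000 metric cells; with a
# 10,000-cell limit the report is split into 10 pages.
print(xml_pages(20_000, 5, 10_000))  # 10
```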

Additionally, when users export large reports from MicroStrategy Web as formatted data, the XML is generated in batches. This XML limit determines how large the batches are. Depending on this XML limit, Intelligence Server behaves differently:

• If the limit is smaller, it takes a longer time to generate the XML because it is generated in small batches, which use less memory and system resources.

• If the limit is larger, it takes a shorter time to generate the XML because it is generated in fewer, but larger, batches, which use more memory and system resources.

To set the XML limit, in the Intelligence Server Configuration Editor, select
the Governing Rules: Default: File Generation category, then specify the
Maximum number of XML cells. You must restart Intelligence Server for
the new limit to take effect.

Limit Export Sizes


When users export a report from MicroStrategy Web, the results are not
constrained by the incremental fetch limit or the XML limit. To govern the
size of reports that can be exported, you can set limits on the number of
cells for various export formats.

To Limit the Export Sizes for All Users

1. In MicroStrategy Web, log in to a project as a user with the Web Administration privilege.

2. Click the MicroStrategy icon, then click Preferences.

3. Select Project defaults, and then select the Export Reports category.

4. Specify the values in the Maximum number of cells to export to plain text and Maximum number of cells to export to HTML and Excel with formatting fields.

5. Click OK.


Limit the Memory Consumption for File Generation


Exporting a report to a different format can consume a great deal of memory.
The amount of memory available for use by exporting files from
MicroStrategy Web is governed by the maximum memory consumption limits
in the Intelligence Server Configuration Editor. If an export attempts to use
more memory than these settings allow, the export fails with the error
message "MicroStrategy Intelligence Server cannot handle your request
because a memory request has exceeded the configured limit. Please
contact the server administrator."

The more formatting an exported report has, the more memory it consumes.
When exporting large reports the best options are plain text or CSV file
formats because formatting information is not included with the report data.
In contrast, exporting reports as Excel with formatting uses a significant
amount of memory because the exported Excel file contains both the report
data and all the formatting data. For more information about exporting
reports, see Client-Specific Job Processing, page 72.

Because Excel export uses significantly more memory than other export
formats, you can limit the size of reports exported to Excel from Developer
as well as from Web. The default memory consumption limit is 100 MB.

To set the maximum memory consumption limits for exporting reports from
Web, in the Intelligence Server Configuration Editor, select the Governing
Rules: Default: File Generation category, and specify the Maximum
memory consumption for the XML, PDF, Excel, and HTML files.

Depending on your Memory Contract Manager settings, an export can use less memory than specified by these settings and still be denied because of a lack of memory. For more information about Memory Contract Manager, see Governing Intelligence Server Memory Use with Memory Contract Manager, page 1039.

To Set the Maximum Memory Consumption for Excel File Generation

1. In Developer, log in to a project source using an account with the Configure Server Basic privilege.

2. From the Tools menu, select Project Source Manager.

3. Select the project source and click Modify.

4. On the Memory tab, in the Export to Excel section, select Use custom
value. In the Maximum RAM Usage (MB) field, specify the maximum
memory consumption.

5. Click OK.

Limit the Total Number of XML Drill Paths


Another way that you can prevent reports from consuming too much memory
is to limit the number of XML drill paths allowed on reports in MicroStrategy
Web products. The default drill map for reports uses all attributes included in
hierarchies marked as drill hierarchies. Report designers can significantly
reduce the size of an attribute's drill path by modifying a report's drill map to
include fewer drill options. You can also impose a limit for all reports coming
from MicroStrategy Web products by setting the Maximum number of XML
drill paths.

For more information about customizing drill maps, see the Advanced
Reporting Help.

To set this limit, in the Intelligence Server Configuration Editor, select the
Governing Rules: Default: File Generation category, then specify the
Maximum number of XML drill paths. You must restart Intelligence Server
for the new limit to take effect.

Disabling XML caching for a project may have a negative effect on performance, especially for large reports. For more information, see Types of Result Caches, page 1209 and Controlling Access to Objects: Permissions, page 89.

Tune Your System for In-Memory Datasets


You can import large datasets into your Intelligence Server's memory as
Intelligent Cubes, and divide the Intelligent Cubes into multiple segments.
These segments, called partitions, are processed simultaneously,
distributed across the processor cores of your Intelligence Server.

By storing your data in your Intelligence Server's memory and processing the data using all the server's processor cores, you can analyze large and complex datasets with very fast response times.

The following sections cover the settings you can configure to improve the
performance of your in-memory datasets:

• Configure Intelligence Server for In-Memory Datasets, page 1103

• Configure your Projects for In-Memory Datasets, page 1104

Configure Intelligence Server for In-Memory Datasets


To ensure the best performance for your partitioned in-memory datasets,
you can configure the following settings for your Intelligence Server:

• Consider increasing the number of database connections that Intelligence Server uses to connect to data sources. When users import data into Intelligence Server's memory, the job to connect to the data source is given a low priority. To allow Intelligence Server to retrieve large datasets, you can increase the number of low-priority database connections that Intelligence Server can make.

  For background information on prioritizing jobs, see Prioritize Jobs. For background information on changing the number of database connections, see Manage Database Connection Threads.

• Consider increasing the maximum time that a database query is allowed to run, to ensure that the Intelligence Server has more time to retrieve large datasets from the data source. For background information on increasing the execution time for database queries, see Manage Database Connection Threads.

Configure your Projects for In-Memory Datasets


For each project that uses in-memory datasets, make the following changes to improve the performance of the in-memory datasets:

• Increase the maximum size of the datasets that users can import. If users need to import large datasets into a project, increase the limit on the size of the dataset that they can import. For steps to increase this limit, see Governing Intelligent Cube Memory Usage.

• Enable parallel queries for the reports in your project, so that Intelligence Server can execute database queries in parallel and retrieve more data from your database. For steps to enable parallel queries, and to define the maximum number of parallel queries that can be run for every report, see the Optimizing Queries section.

Design Reports
In addition to a report's size, a report's design can also affect system performance. Some features consume more of the system's capacity than others when they are used.

Some report design features that can use a great deal of system resources include:

• Complex analytic calculations (Analytic Complexity, page 1105)

• Subtotals (Subtotals, page 1105)

• Page-by (Page-By Feature, page 1106)

• Prompt complexity (Prompt Complexity, page 1106)

• Report Services documents (Report Services Documents, page 1106)

• Intelligent Cubes (Intelligent Cubes, page 1107)

Analytic Complexity
Calculations that cannot be done with SQL in the data warehouse are
performed by the Analytical Engine in Intelligence Server. These may result
in significant memory use during report execution. Some analytic
calculations (such as AvgDev) require the entire column of the fact table as
input to the calculation. The amount of memory used depends on the type of
calculation and the size of the report that is used. Make sure your report
designers are aware of the potential effects of these calculations.

Subtotals
The amount of memory required to calculate and store subtotals can be
significant. In some cases, the size of the subtotals can surpass the size of
the report result itself.

The size of the subtotals depends on the subtotaling option chosen, along
with the order and the number of unique attributes. The easiest way to
determine the number of subtotals being calculated is to examine the
number of result rows added with the different options selected in the
Advanced Subtotals Options dialog box. To access this dialog box, view the
report in Developer, then point to Data, then Subtotals, and then choose
Advanced. For more detailed information about the different subtotal
options, see the Reports section in the Advanced Reporting Help.


Subtotals can use a great deal of memory if you select the All Subtotals
option in the Pages drop-down list. This option calculates all possible
subtotal calculations at runtime and stores the results in the report instance.
MicroStrategy recommends that you encourage users and report designers
to use less taxing options for calculating subtotals across pages, such as
Selected Subtotals and Grand Total.

Page-By Feature
If designers or users create reports that use the page-by feature, they may
use significant system resources. This is because the entire report is held in
memory even though the user is seeing only a portion of it at a time. To
lessen the potential effect of using page-by with large reports, consider
splitting those reports into multiple reports and eliminating the use of page-
by. For more information about page-by, see the Advanced Reporting Help.

Prompt Complexity
Each attribute element or hierarchy prompt requires an element request to
be executed by Intelligence Server. The number of prompts used and the
number of elements returned from the prompts determine how much load is
placed on Intelligence Server. Report designers should take this into
account when designing prompted reports.

In addition to limiting the number of elements returned from element requests (as described in Results Processing, page 1090), you should make sure your element caches are being used effectively. For information on managing element caches, including instructions, see Element Caches, page 1261.

Report Services Documents


Report Services documents may contain multiple reports. Executing a document can result in several report requests being submitted simultaneously.

To limit the effect of Report Services documents on the system, consider enabling document caching. If the documents are cached on Intelligence Server, less load is placed on the data warehouse and on the Intelligence Server machine. For information about document caching, including instructions, see Result Caches, page 1203.

Intelligent Cubes
With OLAP Services features, your report designers can create Intelligent
Cube reports. These reports allow data to be returned from the data
warehouse, stored in Intelligence Server memory, and then shared among
multiple reports.

Because Intelligent Cubes must be loaded into memory to be used in reports, they can use a great deal of system resources. Make sure your report designers are familiar with the Intelligent Cube design best practices found in Governing Intelligent Cube Memory Usage, page 1297.

You can also restrict the number and size of Intelligent Cubes that can be loaded at once. For instructions, see Results Processing, page 1090.

Configure Intelligence Server and Projects


At times you may need to adjust settings in the MicroStrategy system, either as a result of changes to the system or to improve an aspect of system efficiency. This section provides an overview of the governing settings throughout the system.

These governors are arranged by where in the interface you can find them.

Intelligence Server Configuration Editor


To set the following governors in the Intelligence Server Configuration Editor, right-click the project source, select Configure MicroStrategy Intelligence Server, then select the category as described below.

Only the categories and settings in the Intelligence Server Configuration Editor that affect system scalability are described below. Other categories and settings that appear in the Intelligence Server Configuration Editor are described elsewhere in this guide, and in the Help for the editor.

Server definition: General category in Intelligence Server configuration

• Number of network threads — Controls the number of network connections available for communication between Intelligence Server and the client, such as Developer or MicroStrategy Web. (See: How the Network can Affect Performance)

Server definition: Advanced category in Intelligence Server configuration

• Backup frequency (minutes) — Controls the frequency (in minutes) at which cache and History List messages are backed up to disk. A value of 0 means that cache and history messages are backed up immediately after they are created. (See: Configuring Result Cache Settings)

• Balance MicroStrategy Server threads — Controls whether threads in Intelligence Server are allocated to processes such as object serving, element serving, SQL generation, and so on that need them most, while processes with lighter loads can return threads to the available pool. (See: Results Processing)

• Cache lookup cleanup frequency (sec) — Cleans up the cache lookup table at the specified frequency (in seconds). This reduces the amount of memory the cache lookup table consumes and the time Intelligence Server takes to back up the lookup table to disk. (See: Configuring Result Cache Settings)

• Project failover latency (min.) — The amount of time (the delay) before the project is loaded on another server to maintain minimum level availability. (See: Project Failover and Latency)

• Configuration recovery latency (min.) — When the conditions that caused a project failover disappear, the failover configuration reverts automatically to the original configuration. This setting is the amount of time (the delay) before the failover configuration reverts to the original configuration. (See: Project Failover and Latency)

• Enable performance monitoring — Configures additional MicroStrategy-specific monitors in Windows Performance Monitor. (See: Memory)

• Scheduler session time out (sec) — Controls how long the Scheduler attempts to communicate with Intelligence Server before timing out. By default, this is set to 300 seconds.

Governing Rules: Default: General category in Intelligence Server configuration

• Maximum number of jobs — The maximum concurrent number of jobs that can exist on an Intelligence Server. (See: Limit the Total Number of Jobs)

• Maximum number of interactive jobs — Limits the number of concurrent interactive (nonscheduled) jobs that can exist on this Intelligence Server. A value of -1 indicates no limit. (See: Limit the Total Number of Jobs)

• Maximum number of scheduled jobs — Limits the number of concurrent scheduled jobs that can exist on this Intelligence Server. A value of -1 indicates no limit. (See: Limit the Total Number of Jobs)

• Maximum number of user sessions — The maximum number of user sessions (connections) for an Intelligence Server. A single user account may establish multiple sessions to an Intelligence Server. (See: Governing Concurrent Users)

• User session idle time (sec) — The time allowed for a Developer user to remain idle before their session is ended. A user session is considered idle when it submits no requests to Intelligence Server. (See: Governing Concurrent Users)

• Web user session idle time (sec) — The time allowed for a Web user to remain idle before their session is ended. If designers will be building Report Services documents and dashboards in MicroStrategy Web, set the Web user session idle time (sec) to 3600 to avoid a project source timeout. (See: Governing Concurrent Users)

• For Intelligence Server and history list governing, exclude reports embedded in Report Services documents from the counts — Do not include reports submitted as part of a document in the count of jobs for the job limits. (See: Exclude Document Datasets from the Job Limits)

• Background Execution: Enable background execution of documents after their caches are hit — If selected, when a document cache is hit, Intelligence Server displays the cached document and re-executes the document in the background. If this option is cleared, when a document cache is hit, Intelligence Server displays the cached document and does not re-execute the document until a manipulation is performed. By default this option is cleared.

• Mobile APNS and GMC session idle time (sec) — Limits the time, in seconds, that mobile client connections remain open to download Newsstand subscriptions. A value of -1 indicates no limit. By default, this is set to 1800.
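The three job-count governors above combine into a simple admission check: a new job must fit under the overall limit and under the limit for its own type. The following sketch is illustrative only — the function name and logic are assumptions, not the Intelligence Server implementation:

```python
# Illustrative sketch (not MicroStrategy code): combining the total,
# interactive, and scheduled job-count governors. -1 means "no limit".
def job_admitted(current_total, current_interactive, current_scheduled,
                 is_scheduled, max_jobs, max_interactive, max_scheduled):
    def within(count, limit):
        # One more job must still fit under the limit, unless limit is -1.
        return limit == -1 or count + 1 <= limit

    if not within(current_total, max_jobs):
        return False
    if is_scheduled:
        return within(current_scheduled, max_scheduled)
    return within(current_interactive, max_interactive)

# 90 interactive + 5 scheduled jobs running; one more interactive job
# still fits under limits of 100 total / 95 interactive / 20 scheduled.
print(job_admitted(95, 90, 5, False, 100, 95, 20))  # True
```

Note that the per-type limits only matter once the total limit is satisfied, so setting Maximum number of jobs lower than the sum of the two per-type limits effectively caps both.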

Governing Rules: Default: File Generation category in Intelligence Server configuration

• XML Generation: Maximum number of XML cells — The maximum number of XML cells in a report result set that Intelligence Server can send to the MicroStrategy Web products at a time. When this limit is reached, the user sees an error message along with the partial result set. The user can incrementally fetch the remaining cells. (See: Limit the Number of XML Cells)

• XML Generation: Maximum number of XML drill paths — The maximum number of attribute elements that users can see in the drill across menu in MicroStrategy Web products. If this setting is set too low, the user does not see all the available drill attributes. (See: Limit the Total Number of XML Drill Paths)

• XML Generation: Maximum memory consumption for XML (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in XML. If this limit is reached, the XML document is not generated and the user sees an error message. (See: Limit the Memory Consumption for File Generation)

• PDF Generation: Maximum memory consumption for PDF files (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in PDF. If this limit is reached, the PDF document is not generated and the user sees an error message. (See: Limit the Memory Consumption for File Generation)

• Excel Generation: Maximum memory consumption for Excel files (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in Excel. If this limit is reached, the Excel document is not generated and the user sees an error message. (See: Limit the Memory Consumption for File Generation)

• HTML Generation: Maximum memory consumption for HTML files (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in HTML. If this limit is reached, the HTML document is not generated and the user sees an error message. (See: Limit the Memory Consumption for File Generation)

Governing Rules: Default: Memory Settings category in Intelligence Server configuration

• Enable Web request job throttling — A check box that enables the following governors: Maximum Intelligence Server use of total memory, and Minimum machine free physical memory. (See: Governing Memory for Requests from MicroStrategy Web Products)

• Maximum Intelligence Server use of total memory (%) — The maximum amount of total system memory (RAM + Page File) that can be used by the Intelligence Server process ( MSTRSVR.exe ) compared to the total amount of memory on the machine. If the limit is met, all requests from MicroStrategy Web products of any nature (log in, report execution, search, folder browsing) are denied until the conditions are resolved. (See: Governing Memory for Requests from MicroStrategy Web Products)

• Minimum machine free physical memory (%) — The minimum amount of physical memory (RAM) that needs to be available, as a percentage of the total amount of physical memory on the machine. If the limit is met, all requests from MicroStrategy Web products (for example, log in, report execution, search, folder browsing) are denied until the conditions are resolved. (See: Governing Memory for Requests from MicroStrategy Web Products)

• Enable single memory allocation governing — A check box that enables the Maximum single allocation size governor. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Maximum single allocation size (MBytes) — Prevents Intelligence Server from granting a request that would exceed this limit. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Enable memory contract management — A check box that enables the following governors: Minimum reserved memory (MB or %), Maximum use of virtual address space (%), and Memory request idle time. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Minimum reserved memory (MBytes or %) — The amount of system memory, in either MB or a percent, that must be reserved for processes external to Intelligence Server. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Maximum use of virtual address space (%) — The maximum percent of the process' virtual address space that Intelligence Server can use before entering memory request idle mode. This setting is used in 32-bit operating systems and is no longer applicable. In 64-bit operating systems, to control the amount of memory available for Intelligence Server, use the Minimum reserved memory governor. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Memory request idle time (sec) — The amount of time Intelligence Server denies requests that may result in memory depletion. If Intelligence Server does not return to acceptable memory conditions before the idle time is reached, Intelligence Server shuts down and restarts. (See: Governing Intelligence Server Memory Use with Memory Contract Manager)

• Temporary Storage Setting: Maximum RAM for Working Set cache (MB) — The maximum amount of memory that can be used for report instances referenced by messages in the Working Set. (See: Governing User Resources)
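When Web request job throttling is enabled, its two governors form one combined deny condition: too large a server memory footprint, or too little free RAM. The sketch below is a purely illustrative model of that decision — the names and percentages are assumptions for exposition, not MicroStrategy code:

```python
# Illustrative sketch (not MicroStrategy code): the Web request throttling
# decision. Requests are denied while Intelligence Server's share of total
# memory is at or above the ceiling, or free physical RAM is at or below
# the floor.
def deny_web_requests(server_mem_mb, total_mem_mb,
                      free_ram_mb, total_ram_mb,
                      max_server_pct, min_free_pct):
    server_pct = 100.0 * server_mem_mb / total_mem_mb
    free_pct = 100.0 * free_ram_mb / total_ram_mb
    return server_pct >= max_server_pct or free_pct <= min_free_pct

# Server uses 60 GB of an 80 GB total-memory budget (75%): the 70% ceiling
# alone trips the throttle, even though 10% of RAM is still free.
print(deny_web_requests(60_000, 80_000, 6_400, 64_000, 70, 5))  # True
```

Because either condition alone denies requests, both thresholds should be tuned together when you size the machine.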

Governing Rules: Default: Temporary Storage Settings category in Intelligence Server configuration

• Working Set file directory — The location where the user's active working sets are written to disk if they have been forced out of the pool of memory allocated for the Maximum RAM for Working Set cache. The default is .\TmpPool. (See: Governing User Resources)

• Session Recovery and Deferred Inbox storage directory — Specifies where the session information is written to disk. The default is .\TmpPool. (See: Governing User Resources)

• Enable Web User Session Recovery on Logout — If selected, allows Web users to recover their sessions. (See: Governing User Resources)

• Session Recovery backup expiration (hrs) — How many hours a session backup can remain on disk before it is considered expired. After it is expired, the user cannot recover the session. (See: Governing User Resources)

Governing Rules: Default: Import Data category in Intelligence Server configuration

• Number of connections by priority — The number of connection threads to create for Import Data jobs, depending on whether the priority of the job is high, medium, or low. You must determine the number of threads that quickly serves users without overloading the system. (See: Manage Database Connection Threads)

Governing Rules: Default: Catalog cache category in Intelligence Server configuration

• Enable catalog cache — A check box that enables the Maximum use of memory (MB) governor.

• Maximum use of memory (MB) — Limits the maximum amount of memory, in megabytes, used by the catalog cache. The default value is 25 MB.

History Settings: General category in Intelligence Server configuration

• Maximum number of messages per user — The maximum number of History List messages that can exist in a user's History List at any time. When the limit is reached, the oldest message is removed. (See: Saving Report Results: History List)

• Message lifetime (days) — The length of time before a History List message expires and is automatically deleted. A value of -1 indicates that messages do not expire. (See: Saving Report Results: History List)

• Repository type — Select File Based for History List messages to be stored on disk in a file system, or Database Based for History List messages to be stored in a database (recommended). (See: Saving Report Results: History List)

Project Configuration Editor


These governors can be set per project. To access them, right-click the project, select Project Configuration, then select the category as noted below.

Project definition: Advanced category

• Maximum number of elements to display — The maximum number of attribute elements that can be retrieved from the data warehouse at one time. (See: Limiting the Number of Elements Displayed and Cached at a Time)

Governing Rules: Default: Result sets category in Project Configuration

• Intelligence Server Elapsed Time - Interactive reports (sec) — The amount of time that an ad-hoc report request can take before it is canceled. This includes time spent resolving prompts, waiting for autoprompts, waiting in the job queue, executing SQL, analytical calculation, and preparing report results. (See: Limit the Maximum Report Execution Time)

• Intelligence Server Elapsed Time - Scheduled reports (sec) — The amount of time that a scheduled report request can take before it is canceled. This includes time spent resolving prompts, waiting for autoprompts, waiting in the job queue, executing SQL, analytical calculation, and preparing report results. (See: Limit the Maximum Report Execution Time)

• Wait time for prompt answers (sec) — The maximum time, in seconds, to wait for a prompt to be answered by the user. If the user fails to answer the prompt in the specified time limit, the job is cancelled. By default, this is set to -1. (See: Limit the Maximum Report Execution Time)

• Warehouse execution time (sec) — The maximum time for warehouse jobs to be executed by Intelligence Server. Jobs lasting longer than this setting are cancelled. A value of 0 or -1 indicates infinite time. By default, this is set to -1. (See: Limit the Maximum Report Execution Time)

• Final Result Rows - Intelligent Cubes — The maximum number of rows that can be returned to Intelligence Server for an Intelligent Cube request. This setting is applied by the Query Engine when retrieving the results from the database. This is the default for all reports in a project and can be overridden for individual reports by using the VLDB settings. (See: Results Processing)

• Final Result Rows - Data marts — The maximum number of rows that can be returned to Intelligence Server for a data mart report request. This setting is applied by the Query Engine when retrieving the results from the database. This is the default for all reports in a project and can be overridden for individual reports by using the VLDB settings. (See: Results Processing)

• Final Result Rows - Document/Dashboard views — The maximum number of rows that can be returned to Intelligence Server for a document or dashboard request. When retrieving the results from the database, the Query Engine applies this setting. If the number of rows in a document or dashboard exceeds the specified limit, an error is displayed and no results are shown for the document or dashboard. A value of 0 or -1 indicates no limit. By default, this is set to 50000000. (See: Results Processing)

• Final Result Rows - All other reports — The maximum number of rows that can be returned to Intelligence Server for a standard report request. This setting is applied by the Query Engine when retrieving the results from the database. This is the default for all reports in a project and can be overridden for individual reports by using the VLDB settings. (See: Results Processing)

• All intermediate result rows — The maximum number of rows that can be in an intermediate result set used for analytical processing in Intelligence Server. This is the default for all reports in a project and can be overridden by using the VLDB settings for individual reports. (See: Results Processing)

• All intermediate rows - Document/Dashboard views — The maximum number of rows for intermediate results. The default value is 32,000. (See: Results Processing)

• All element browsing result rows — The maximum number of rows that can be retrieved from the data warehouse for an element request. (See: Results Processing)

• Memory consumption during SQL generation (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use for SQL generation. The default is -1, which indicates no limit. (See: Limit a Report's SQL Per Pass)

• Memory consumption during data fetching (MB) — The maximum amount of memory (in megabytes) that Intelligence Server can use for importing data. The default is 2048 MB (2 GB). (See: Results Processing)

• MicroStrategy (.mstr) file size (MB) — Limits the file size, in megabytes, when downloading a dashboard from MicroStrategy Web. If a dashboard is larger than the specified file size, an error is displayed that provides the current limit, and the dashboard is not downloaded. Additionally, this setting applies to dashboards sent through Distribution Services. If a dashboard is larger than the specified size, the dashboard is not sent. A value of 0 prevents the ability to download a dashboard from Web and to distribute a dashboard through Distribution Services. By default, this is set to 25. The maximum .mstr file size is 2047 MB. (See: Best Practices for Using Distribution Services)
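Several of the row-limit governors above share the same semantics: 0 or -1 means no limit, the project default applies to all reports, and a report-level VLDB setting overrides it. That behavior can be modeled with a short sketch — purely illustrative, with assumed names, not MicroStrategy code:

```python
# Illustrative sketch (not MicroStrategy code): resolving and applying a
# final-result-row governor. 0 or -1 means "no limit"; a report-level
# VLDB setting, if present, overrides the project default.
def effective_row_limit(project_limit, report_vldb_limit=None):
    limit = report_vldb_limit if report_vldb_limit is not None else project_limit
    return None if limit in (0, -1) else limit

def fetch_rows(rows, project_limit, report_vldb_limit=None):
    limit = effective_row_limit(project_limit, report_vldb_limit)
    if limit is not None and len(rows) > limit:
        raise RuntimeError(f"Result exceeds the {limit}-row limit")
    return rows

# No project limit: all 100 rows are returned.
print(len(fetch_rows(list(range(100)), project_limit=-1)))  # 100
# Project limit of 50 is overridden by a report-level VLDB limit of 200.
print(len(fetch_rows(list(range(100)), 50, report_vldb_limit=200)))  # 100
```

A report-level override can therefore be either more or less restrictive than the project default; the project setting is only a fallback.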

Governing Rules: Default: Jobs category in Project Configuration

• Jobs per user account — The maximum number of concurrent jobs per user account and project. (See: Limit the Number of Jobs Per User Session and Per User Account)

• Jobs per user session — The maximum number of concurrent jobs a user can have during a session. (See: Limit the Number of Jobs Per User Session and Per User Account)

• Executing jobs per user — The maximum number of concurrent jobs a single user account can have executing in the project. If this condition is met, additional jobs are placed in the queue until executing jobs finish. (See: Limit the Number of Executing Jobs Per User and Project)

• Jobs per project - interactive — The maximum number of concurrent ad-hoc jobs that the project can process at a time. (See: Limit the Number of Jobs Per Project)

• Jobs per project - scheduled — The maximum number of concurrent scheduled jobs that the project can process at a time. (See: Limit the Number of Jobs Per Project)

• Jobs per project — The maximum number of concurrent jobs that the project can process at a time. (See: Limit the Number of Jobs Per Project)

Governing Rules: Default: User sessions category in Project Configuration

• User sessions per project — The maximum number of user sessions that are allowed in the project. When the limit is reached, users other than the Administrator cannot log in. (See: Governing Concurrent Users)

• Concurrent interactive project sessions per user — The maximum number of concurrent sessions per user. (See: Governing Concurrent Users)

Governing Rules: Default: Subscriptions category in Project Configuration

• Maximum History List subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for delivery to the History List. (See: Managing Subscriptions)

• Maximum Cache Update subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for updating caches. (See: Managing Subscriptions)

• Maximum email subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for delivery to an email address (Distribution Services only). (See: Managing Subscriptions)

• Maximum file subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for delivery to a file location (Distribution Services only). (See: Managing Subscriptions)

• Maximum print subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for delivery to a printer (Distribution Services only). (See: Managing Subscriptions)

• Maximum Mobile subscriptions per user — The maximum number of reports or documents to which a user can be subscribed for delivery to a Mobile device (MicroStrategy Mobile only). (See: Managing Subscriptions)

• Maximum FTP subscriptions per user — The maximum number of reports or documents that the user can subscribe to for delivery to an FTP location at a time. A value of -1 indicates no limit. By default, this is set to -1. (See: Managing Subscriptions)

• Maximum Personal View subscriptions per user — The maximum number of personal views that can be created by URL sharing. A value of -1 indicates no limit. By default, this is set to -1. (See: Managing Subscriptions)

Governing Rules: Default: Import Data category in Project Configuration

• Maximum file size (MB) — The maximum size for a file to be imported for use as a data source. Files larger than this value cannot be opened during data import. (See: Defining Limits for Intelligent Cubes Created using the Import Data Feature)

• Maximum quota per user (MB) — The maximum size of all data import cubes for each individual user. (See: Defining Limits for Intelligent Cubes Created using the Import Data Feature)

Caching: Result Caches: Storage category in Project Configuration

• Datasets - Maximum RAM usage (MBytes) — The maximum amount of memory reserved for the creation and storage of report and dataset caches. This setting should be configured to at least the size of the largest cache file, or that report will not be cached. (See: Configuring Result Cache Settings)

• Datasets - Maximum number of caches — The maximum number of report and dataset caches that the project can have at a time. Beginning with MicroStrategy 2020 Update 1, this governing setting is being retired. It will remain available, but the setting will not be enforced if set below the default value of 10000. (See: Managing Result Caches)

• Formatted Documents - Maximum RAM usage (MBytes) — The maximum amount of memory reserved for the creation and storage of document caches. This setting should be configured to be at least the size of the largest cache file, or that report will not be cached. (See: Configuring Result Cache Settings)

• Formatted Documents - Maximum number of caches — The maximum number of document caches that the project can have at a time. Beginning with MicroStrategy 2020 Update 1, this governing setting is being retired. It will remain available, but the setting will not be enforced if set below the default value of 100000. (See: Managing Result Caches)

• RAM swap multiplier — The amount of memory that is swapped to disk, relative to the size of the cache being swapped into memory. For example, if the RAM swap multiplier setting is 2 and the requested cache is 80 Kbytes, 160 Kbytes are swapped from memory to disk. (See: Configuring Result Cache Settings)

• Maximum RAM for report cache index (%) — The percentage of the memory specified in the Maximum RAM usage limits that can be used for result cache lookup tables.
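The RAM swap multiplier is simple arithmetic: the amount swapped out to disk is the multiplier times the size of the cache being swapped in. As a one-line sketch (illustrative only; the function name is an assumption):

```python
# Illustrative sketch (not MicroStrategy code) of the RAM swap multiplier
# arithmetic: amount swapped to disk = multiplier * size of the cache
# being swapped into memory.
def kb_swapped_to_disk(requested_cache_kb, ram_swap_multiplier):
    return requested_cache_kb * ram_swap_multiplier

# The example from the table above: multiplier 2, 80 KB cache.
print(kb_swapped_to_disk(80, 2))  # 160
```

A higher multiplier frees memory in larger chunks, reducing how often swapping occurs at the cost of evicting more caches per swap.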

Caching: Result caches: Maintenance category in Project Configuration

• Never expire caches — Determines whether caches automatically expire. (See: Configuring Result Cache Settings)

• Cache duration (Hours) — The amount of time that a result cache remains valid. The default value is 24 hours. Beginning with MicroStrategy 2020 Update 1, this governing setting is being retired. It will no longer affect the document cache's lifetime, but will still apply to the report cache lifetime. (See: Configuring Result Cache Settings)

• Do not Apply Automatic Expiration Logic for reports containing dynamic dates — Select this check box for report caches with dynamic dates to expire in the same way as other report caches. (See: Configuring Result Cache Settings)

Caching: Auxiliary Caches: Objects category in Project Configuration

• Server - Maximum RAM usage (MBytes) — The amount of memory that Intelligence Server allocates for object caching. (See: Summary Table of Object Caching Settings)

• Client - Maximum RAM usage (MBytes) — The amount of memory that Developer allocates for object caching. (See: Summary Table of Object Caching Settings)

Caching: Auxiliary Caches: Elements category in Project Configuration

• Server - Maximum RAM usage (MBytes) — The amount of memory that Intelligence Server allocates for element caching. (See: Summary Table of Element Cache Settings)

• Client - Maximum RAM usage (MBytes) — The amount of memory that Developer allocates for element caching. (See: Summary Table of Element Cache Settings)

Caching: Subscription Execution category in Project Configuration

• Re-run history list and mobile subscriptions against the warehouse — Causes new subscriptions to create caches or update existing caches by default when a report or document is executed and that report/document is subscribed to the History List or a Mobile device. (See: Managing Scheduled Administration Tasks)

• Re-run file, email, print, or FTP subscriptions against the warehouse — Causes new subscriptions to create caches or update existing caches by default when a report or document is executed and that report/document is subscribed to a file, email, or print device. (See: Managing Scheduled Administration Tasks)

• Do not create or update matching caches — Prevents subscriptions from creating or updating caches by default. (See: Managing Scheduled Administration Tasks)

• Keep document available for manipulation for History List subscriptions only — Retains a document or report that was delivered to the History List for later manipulation. (See: Managing Scheduled Administration Tasks)

Intelligent Cubes: General category in Project Configuration

• Maximum RAM Usage (MBytes) — The maximum amount of memory used on Intelligence Server by Intelligent Cubes for this project. (See: Defining Memory Limits for Intelligent Cubes)

• Maximum number of cubes — The maximum number of Intelligent Cubes that can be loaded onto Intelligence Server for this project. Beginning with MicroStrategy 2020 Update 1, this governing setting is being retired. It will remain available, but the setting will not be enforced if set below the default value of 100000. (See: Defining Memory Limits for Intelligent Cubes)

• Maximum cube size allowed for download (MB) — Defines the maximum cube size, in megabytes, that can be downloaded from Intelligence Server. Additionally, this value is used by Distribution Services when sending a .MSTR file by email.

• Maximum % growth of an Intelligent Cube due to indexes — Defines the maximum that indexes are allowed to add to the Intelligent Cube's size, as a percentage of the original size.

• Cube growth check frequency (in mins) — Defines, in minutes, how often the Intelligent Cube's size is checked and, if necessary, how often the least-used indexes are dropped.

Database Connection
This set of governors can be set by modifying a project source's database instance, either on the Job Prioritization tab (number of connections) or on the database connection itself. For more details on each governor, see the page references in the table below.

ODBC Settings

• Number of database connection threads — The total number of High, Medium, and Low database connections that are allowed at a time between Intelligence Server and the data warehouse (set on the database instance's Job Prioritization tab). (See: Manage Database Connection Threads)

• Maximum cancel attempt time (sec) — The maximum amount of time that the Query Engine waits for a successful attempt to cancel a query. (See: Manage Database Connection Threads)

• Maximum query execution time (sec) — The maximum amount of time that a single pass of SQL may execute on the data warehouse. (See: Manage Database Connection Threads)

• Maximum connection attempt time (sec) — The maximum amount of time that Intelligence Server waits to connect to the data warehouse. (See: Manage Database Connection Threads)


Database Connection Caching

• Connection lifetime (sec) — The amount of time that an active database connection thread remains open and cached on Intelligence Server. (See: Manage Database Connection Threads)

• Connection idle timeout (sec) — The amount of time that an inactive database connection thread remains cached until it is terminated. (See: Manage Database Connection Threads)
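The two caching timeouts work as independent eviction rules: a cached connection is closed when it outlives its total lifetime or sits idle past the idle timeout. This sketch is illustrative only — the names and logic are assumptions, not Intelligence Server code:

```python
# Illustrative sketch (not MicroStrategy code): deciding whether a cached
# database connection thread should be closed, based on the two timeouts.
def connection_expired(age_sec, idle_sec, lifetime_sec, idle_timeout_sec):
    """A connection is closed when it exceeds its total lifetime or has
    been idle past the idle timeout, whichever happens first."""
    return age_sec >= lifetime_sec or idle_sec >= idle_timeout_sec

# Open 10 minutes, idle 1 minute, with a 1-hour lifetime / 5-minute idle
# timeout: the connection stays cached.
print(connection_expired(600, 60, 3600, 300))   # False
# Same connection after 6 idle minutes: evicted for idleness.
print(connection_expired(600, 360, 3600, 300))  # True
```

Shorter values free warehouse resources sooner; longer values avoid the cost of re-establishing connections under steady load.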

VLDB Settings
These settings can be changed in the VLDB Properties dialog box for either
reports or the database instance. For information about accessing these
properties, see the page reference for each property in the table below. For
complete details about all VLDB properties, see SQL Generation and Data
Processing: VLDB Properties.

• Intermediate row limit — The maximum number of rows that can be in an intermediate table used by Intelligence Server. This setting overrides the project's default Number of intermediate result rows setting. (See: Results Processing)

• Results Set Row Limit — The maximum number of rows that can be in a report result set. This setting overrides the project's default Number of report result rows setting. (See: Results Processing)

• SQL time out (per pass) — The amount of time, in seconds, that any SQL pass can execute on the data warehouse. This can be set at the database instance and report levels. (See: Limit a Report's SQL Per Pass)

• Maximum SQL size — The maximum size (in bytes) that the SQL statement can be. This can be set at the database instance level. (See: Limit a Report's SQL Per Pass)


Tuning Narrowcast Server and Intelligence Server


If you are using Narrowcast Server as part of your system to deliver reports
to users, you should be aware of its impact on Intelligence Server system
resources. This section includes relevant discussions about:

• How you design Narrowcast Server applications (Application Design Considerations, page 1129)

• How Narrowcast Server connects to Intelligence Server (How Narrowcast Server Connects to Intelligence Server, page 1130)

For more information, refer to the Narrowcast Server Getting Started Guide.

Application Design Considerations


Depending on how you design applications in Narrowcast Server, you can
place more or less load on Intelligence Server. Two main options to consider
are personal report execution and personal page execution.

Personal report execution (PRE) executes a separate report for each set of users with unique personalization. Users can have reports executed under the context of the corresponding Intelligence Server user if desired. Using this option, security profiles defined in Developer are maintained. However, if the system contains many users who all have unique personalization, this option can place a large load on Intelligence Server.

Personal page execution (PPE) executes one multi-page report for all users
in a segment and then uses this single report to provide personalized
content (pages) for different users. All users have their reports executed
under the context of the same Intelligence Server user, so individual
security profiles are not maintained. However, the load on Intelligence
Server may be significantly lower than for PRE in some cases.

For more detailed information about these options, refer to the Narrowcast
Server Application Designer Guide, specifically the section on Page
Personalization and Dynamic Subscriptions.


Two additional points to consider in designing your Narrowcast Server applications are:

l Timing of Narrowcast Server jobs: You can schedule reports to run at off-peak hours when Intelligence Server's load from MicroStrategy Web products and Developer users is lowest.

l Intelligence Server selection: You can send Narrowcast Server jobs to a specific Intelligence Server to ensure that some Intelligence Servers are used solely for MicroStrategy Web products or Developer.

How Narrowcast Server Connects to Intelligence Server


Narrowcast Server can connect to a specific Intelligence Server. Narrowcast
Server does this by using one or more information sources to point to and
connect to the desired Intelligence Servers.

l Intelligence Server provides automatic load balancing for Narrowcast Server requests. Once an information source is configured, jobs using that information source go to the appropriate Intelligence Server for the most efficient response.

l Narrowcast Server can connect to any Intelligence Server in a cluster; this does not need to be the primary node.

l You can balance the load manually by creating multiple information sources or by using a single information source pointing to one Intelligence Server, thereby designating it to handle all Narrowcast Server requests.


CLUSTER MULTIPLE MICROSTRATEGY SERVERS


A clustered set of machines provides a related set of functionality or services to a common set of users. MicroStrategy recommends clustering Intelligence Servers in environments where access to the data warehouse is mission-critical and system performance is of utmost importance. Intelligence Server provides you the functionality to cluster a group of Intelligence Server machines to take advantage of the many benefits available in a clustered environment.

This section provides the following information:

Overview of Clustering
A cluster is a group of two or more servers connected to each other in such a
way that they behave like a single server. Each machine in the cluster is
called a node. Because each machine in the cluster runs the same services
as other machines in the cluster, any machine can stand in for any other
machine in the cluster. This becomes important when one machine goes
down or must be taken out of service for a time. The remaining machines in
the cluster can seamlessly take over the work of the downed machine,
providing users with uninterrupted access to services and data.

You can cluster MicroStrategy components at two levels:

l You can cluster Intelligence Servers using the built-in Clustering feature.
A Clustering license allows you to cluster up to eight Intelligence Server
machines. For instructions on how to cluster Intelligence Servers, see
Cluster Intelligence Servers, page 1146.

l You can cluster MicroStrategy Web servers using third-party clustering software, such as Cisco Local Router, Microsoft Windows Load Balancing Service, or Microsoft Network Load Balancing. Most clustering tools work by using IP distribution based on the incoming IP addresses. For details on implementing this clustering method, see the documentation for your third-party clustering software.


The built-in clustering feature allows you to connect MicroStrategy Web to a cluster of Intelligence Servers. The Intelligence Server does not support generic load balancers. For instructions, see Connect MicroStrategy Web to a Cluster, page 1194.

Benefits of Clustering
Clustering Intelligence Servers provides the following benefits:

l Increased resource availability: If one Intelligence Server in a cluster fails, the other Intelligence Servers in the cluster can pick up the workload. This prevents the loss of valuable time and information if a server fails.

l Strategic resource usage: You can distribute projects across nodes in whatever configuration you prefer. This reduces overhead because not all machines need to be running all projects, and allows you to use your resources flexibly.

l Increased performance: Multiple machines provide greater processing power.

l Greater scalability: As your user base grows and report complexity increases, your resources can grow.

l Simplified management: Clustering simplifies the management of large or rapidly growing systems.

Failover Support
Failover support ensures that a business intelligence system remains
available for use if an application or hardware failure occurs. Clustering
provides failover support in two ways:

l Load redistribution: When a node fails, the work for which it is responsible
is directed to another node or set of nodes.

l Request recovery: When a node fails, the system attempts to reconnect MicroStrategy Web users with queued or processing requests to another node. Users must log in again to be authenticated on the new node. The user is prompted to resubmit job requests.

Load Balancing
Load balancing is a strategy aimed at achieving even distribution of user sessions across Intelligence Servers, so that no single machine is overwhelmed. This strategy is especially valuable when it is difficult to predict the number of requests a server will receive. MicroStrategy achieves four-tier load balancing by incorporating load balancers into the MicroStrategy Web products.

Load is calculated as the number of user sessions connected to a node. The load balancers collect information on the number of user sessions each node is carrying. Using this information at the time a user logs in to a project, MicroStrategy Web connects the user to the Intelligence Server node that is carrying the lightest session load. All requests by that user are routed to the node to which they are connected until the user disconnects from the MicroStrategy Web product.
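The selection logic described above can be sketched in a few lines. The node names, session counts, and project names below are hypothetical; a real load balancer tracks sessions through Intelligence Server itself rather than plain dictionaries.

```python
# Minimal sketch of session-count load balancing: connect a new user to
# the node that runs the requested project and carries the fewest sessions.
# Node names, session counts, and project names are hypothetical.
def pick_node(nodes, project):
    """Return the node running `project` with the lightest session load."""
    candidates = [n for n in nodes if project in n["projects"]]
    if not candidates:
        raise ValueError(f"No node is running project {project!r}")
    return min(candidates, key=lambda n: n["sessions"])

nodes = [
    {"name": "node1", "sessions": 42, "projects": {"Sales", "Finance"}},
    {"name": "node2", "sessions": 17, "projects": {"Sales"}},
    {"name": "node3", "sessions": 5,  "projects": {"Finance"}},
]
print(pick_node(nodes, "Sales")["name"])  # node2: lightest load among Sales nodes
```

Once connected, all of that user's requests stay on the chosen node until the session ends, matching the routing rule described above.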

Project Distribution and Project Failover


When you set up several server machines in a cluster, you can distribute projects across those clustered machines or nodes in any configuration, in both Windows and Linux environments. Not all servers in a cluster need to run all projects. Each node in the cluster can host a different set of projects, which means only a subset of projects needs to be loaded on a specific Intelligence Server machine. This feature provides you with flexibility in using your resources, and it provides better scalability and performance because of less overhead on each Intelligence Server machine.

Distributing projects across nodes also provides project failover support. For
example, one server is hosting project A and another server is hosting
projects B and C. If the first server fails, the other server can host all three
projects to ensure project availability.
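As a rough illustration of this failover behavior (the server and project names are hypothetical, and real project loading is governed by each node's configuration):

```python
# Rough sketch of project failover: when a node fails, the surviving
# node(s) load its projects so every project stays available.
# Server and project names are hypothetical.
def fail_over(cluster, failed):
    """Drop `failed` from the cluster and move its projects to the survivors."""
    orphaned = cluster.pop(failed)
    if not cluster:
        raise RuntimeError("No surviving nodes to host the projects")
    for projects in cluster.values():
        projects |= orphaned          # each survivor loads the orphaned projects
    return cluster

cluster = {"server1": {"A"}, "server2": {"B", "C"}}
fail_over(cluster, "server1")
print(sorted(cluster["server2"]))  # ['A', 'B', 'C']: all projects stay available
```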


Project creation, duplication, and deletion in a three-tier, or server, connection are automatically broadcast to all nodes during runtime to ensure synchronization across the cluster.

Work Fencing
User fences and workload fences allow you to reserve nodes of a cluster for either users or project subscriptions. For more information, see Reserve Nodes with Work Fences, page 1165.

The Clustered Architecture


The diagram below shows report distribution in a four-tier clustered
environment. The clustered Intelligence Servers are shown in gray.

The node of the cluster that performs all job executions is the node that the
client application, such as Developer, connects to. This is also the node that
can be monitored by an administrator using the monitoring tools.


The following steps describe a typical job process in a clustered, four-tier environment. They correspond to the numbers in the report distribution flow diagram above.

1. MicroStrategy Web users log into a project and request reports from
their Web browsers.

2. A third-party IP distribution tool such as Cisco Local Router, Microsoft Network Load Balancing, or Microsoft Windows Load Balancing Service distributes the user connections from the MicroStrategy Web clients among web servers.

3. The MicroStrategy Web product load balancers on each server collect load information from each cluster node and then connect the users to the nodes that carry the lightest loads and that run the project the user requested. All report requests are then processed by the nodes to which the users are connected.

4. The Intelligence Server nodes receive the requests and process them.
In addition, the nodes communicate with each other to maintain
metadata synchronization and cache accessibility across nodes.

5. The nodes send the requests to the warehouse as queries.

Query flow in a clustered environment is identical to a standard query flow in an unclustered environment (see Processing Jobs, page 55), with two exceptions:

l Result (report and document) caches and Intelligent Cubes: When a query
is submitted by a user, if an Intelligent Cube or a cached report or
document is not available locally, the server will retrieve the cache (if it
exists) from another node in the cluster. For an introduction to report and
document caching, see Result Caches, page 1203. For an introduction to
Intelligent Cubes, see Chapter 11, Managing Intelligent Cubes.

l History Lists: Each user's History List, which is held in memory by each node in the cluster, contains direct references to the relevant cache files. Accessing a report through the History List bypasses many of the report execution steps, for greater efficiency. For an introduction to History Lists, see Saving Report Results: History List, page 1240.

Synchronizing Cached Information Across Nodes in a Cluster


In a clustered environment, each node shares cached information with the
other nodes so that the information users see is consistent regardless of the
node to which they are connected when running reports. All nodes in the
cluster synchronize the following cached information:

l Metadata information and object caches (for details, see Synchronizing Metadata, page 1137)

l Result caches and Intelligent Cubes (for details, see Sharing Result
Caches and Intelligent Cubes in a Cluster, page 1138)

l History Lists (for details, see Synchronizing History Lists, page 1142)

To view clustered cache information, such as cache hit counts, use the
Cache Monitor.

Result cache settings are configured per project, and different projects may
use different methods of result cache storage. Different projects may also
use different locations for their cache repositories. However, History List
settings are configured per project source. Therefore, different projects
cannot use different locations for their History List backups.

For result caches and History Lists, you must configure either multiple local caches or a centralized cache for your cluster. The following sections describe the caches that are affected by clustering and present the procedures to configure caches across cluster nodes.

Synchronizing Metadata
Metadata synchronization refers to the process of synchronizing object
caches across all nodes in the cluster.

For example, when a user connected to a node in a cluster modifies a metadata object, the cache for that object on other nodes is no longer valid. The node that processed the change automatically notifies all other nodes in the cluster that the object has changed. The other nodes then delete the old object cache from memory. The next request for that object that is processed by another node in the cluster is executed against the metadata, creating a new object cache on that node.

In addition to server object caches, client object caches are also invalidated
when a change occurs. When a user requests a changed object, the invalid
client cache is not used and the request is processed against the server
object cache. If the server object cache has not been refreshed with the
changed object, the request is executed against the metadata.
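The invalidation flow described above can be sketched as follows. The Node class, object IDs, and values are hypothetical stand-ins for Intelligence Server's actual metadata and object-cache machinery.

```python
# Sketch of cross-node object-cache invalidation: each node keeps a local
# object cache backed by a shared metadata repository, and a change on one
# node drops the stale copies on its peers.
class Node:
    def __init__(self, metadata):
        self.metadata = metadata  # shared metadata repository
        self.cache = {}           # this node's object cache
        self.peers = []           # other nodes in the cluster

    def get_object(self, obj_id):
        if obj_id not in self.cache:                    # cache miss:
            self.cache[obj_id] = self.metadata[obj_id]  # read from metadata
        return self.cache[obj_id]

    def modify_object(self, obj_id, value):
        self.metadata[obj_id] = value     # persist the change
        self.cache[obj_id] = value
        for peer in self.peers:           # notify every other node...
            peer.cache.pop(obj_id, None)  # ...which drops its stale copy

metadata = {"report_A": "v1"}
node1, node2 = Node(metadata), Node(metadata)
node1.peers, node2.peers = [node2], [node1]
node2.get_object("report_A")           # node2 caches v1
node1.modify_object("report_A", "v2")  # node1's change invalidates node2's copy
print(node2.get_object("report_A"))    # v2: re-read from the metadata
```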

Sharing Result Caches and Intelligent Cubes in a Cluster


In a non-clustered environment, Intelligent Cubes and report and document
caches (result caches) are typically stored on the Intelligence Server
machine. For an overview of Intelligent Cubes, see Chapter 11, Managing
Intelligent Cubes, or see the In-memory Analytics Help. For an overview of
result caches, see Result Caches, page 1203.

In a clustered environment, each node in a cluster must share its result caches and Intelligent Cubes with the other nodes, so all clustered machines have the latest cache information. For example, for a project, result caches on each node that has loaded the project are shared among other nodes in the cluster that have also loaded the project. Configuring caches to be shared among appropriate nodes eliminates the overhead associated with executing the same report or document on multiple nodes.

l Both memory and disk caches are shared among nodes.

l When an Intelligent Cube is updated, either through Incremental Refresh or by republishing the Intelligent Cube, the updated Intelligent Cube is available on all nodes of the cluster as soon as it is loaded into memory.

Intelligent Cube and result cache sharing among nodes can be configured in
one of the following ways:


l Local caching: Each node hosts its own cache file directory and
Intelligent Cube directory. These directories need to be shared so that
other nodes can access them. For more information, see Local Caching,
page 1140.

If you are using local caching, the cache directory must be shared as
"ClusterCaches" and the Intelligent Cube directory must be shared as
"ClusterCube". These are the share names Intelligence Server looks for
on other nodes to retrieve caches and Intelligent Cubes.

l Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations, \\<machine
name>\<shared cache folder name> and \\<machine
name>\<shared Intelligent Cube folder name>. For more
information, see Centralized Caching, page 1141.

For caches on Windows machines, and on Linux machines using Samba, set the path to \\<machine name>\<shared cache folder name>. For caches on Linux machines, set the path to //<SharedLocation>/<CacheFolder>.

The following summarizes the pros and cons of the result cache configurations:

Local caching

Pros:

l Allows faster read and write operations for cache files created by the local server.

l Faster backup of cache lookup table.

l Allows most caches to remain accessible even if one node in a cluster goes offline.

Cons:

l The local cache files may be temporarily unavailable if an Intelligence Server is taken off the network or powered down.

l A document cache on one node may depend on a dataset that is cached on another node, creating a multi-node cluster dependency.

Centralized caching

Pros:

l Allows for easier backup process.

l Allows all cache files to be accessible even if one node in a cluster goes offline.

l May better suit some security plans because nodes using a network account are accessing only one machine for files.

Cons:

l All cache operations are required to go over the network if the shared location is not on one of the Intelligence Server machines.

l Requires additional hardware if the shared location is not on an Intelligence Server.

l All caches become inaccessible if the machine hosting the centralized caches goes offline.

MicroStrategy recommends storing the result caches locally if your users mostly do ad hoc reporting. In ad hoc reporting the caches are not used very much, and the overhead incurred by creating the caches on a remote file server outweighs the low probability that a cache may be used. On the other hand, if the caches are to be heavily used, centralized caching may suit your system better.

For steps to configure cache files with either method, see Configure Caches
in a Cluster, page 1147.

Local Caching

In this cache configuration, each node maintains its own local Intelligent Cubes and local cache files and, thus, maintains its own cache index file. Each node's caches are accessible by other nodes in the cluster through the cache index file. This is illustrated in the diagram below.


For example, User A, who is connected to node 1, executes a report and thus creates report cache A on node 1. User B, who is connected to node 2, executes the report. Node 2 checks its own cache index file first. When it does not locate report cache A in its own cache index file, it checks the index file of other nodes in the cluster. Locating report cache A on node 1, it uses that cache to service the request, rather than executing the report against the warehouse.
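A minimal sketch of this lookup order, with hypothetical report names and a stub standing in for warehouse execution:

```python
# Sketch of the local-caching lookup order: check this node's cache index,
# then the other nodes' indexes, and only hit the warehouse on a
# cluster-wide miss. Report names and the warehouse stub are hypothetical.
def fetch_report(report, local_node, cluster, run_against_warehouse):
    if report in local_node["index"]:        # 1. this node's cache index
        return local_node["index"][report]
    for node in cluster:                     # 2. the other nodes' indexes
        if node is not local_node and report in node["index"]:
            return node["index"][report]
    result = run_against_warehouse(report)   # 3. cluster-wide miss
    local_node["index"][report] = result     # cache the result locally
    return result

node1 = {"index": {"report_A": "cached rows"}}
node2 = {"index": {}}
cluster = [node1, node2]
# User B on node 2 requests report A: node 2 misses locally, finds it on node 1.
print(fetch_report("report_A", node2, cluster, lambda r: "warehouse rows"))  # cached rows
```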

Centralized Caching

In this cache configuration, all nodes in the cluster use one shared,
centralized location for Intelligent Cubes and one shared, centralized cache
file location. These can be stored on one of the Intelligence Server machines
or on a separate machine dedicated to serving the caches. The Intelligent
Cubes, History List messages, and result caches for all the Intelligence
Server machines in the cluster are written to the same location. In this
option, only one cache index file is maintained. This is illustrated in the
diagram below.


For example, User A, who is connected to node 1, executes report A and thus creates report cache A, which is stored in a centralized file folder. User B, who is connected to node 2, executes report A. Node 2 checks the centralized cache index file for report cache A. Locating report cache A in the centralized file folder, it uses that cache to service the request, regardless of the fact that node 1 originally created the cache.

Synchronizing History Lists


A History List is a set of pointers to cache files. Each user has their own
History List, and each node in a cluster stores the pointers created for each
user who is connected to that node. Each node's History List is synchronized
with the rest of the cluster. Even if report caching is disabled, History List
functionality is not affected.

If you are using a database-based History List, History List messages and
their associated caches are stored in the database and automatically
synchronized across all nodes in the cluster.

If you are using a file-based History List, the Intelligence Server Inbox folder
contains the collection of History List messages for all users, which appear
in the History folder in Developer. Inbox synchronization refers to the
process of synchronizing History Lists across all nodes in the cluster, so that
all nodes contain the same History List messages. Inbox synchronization
enables users to view the same set of personal History List messages,
regardless of the cluster node to which they are connected.


For more background information on History Lists, see Saving Report Results: History List, page 1240. For steps to set up History List sharing in a file-based system, see Configure Caches in a Cluster, page 1147.

MicroStrategy recommends that you enable user affinity clustering to minimize History List resource usage. User affinity clustering causes Intelligence Server to connect all sessions for a user to the same node of the cluster. This enables Intelligence Server to keep the user's History List on one node of the cluster. Resource use is minimized because the pointers to the History List are not stored on multiple machines. In addition, if you are using a file-based History List, the History List is never out of sync across multiple nodes of the cluster. For instructions on how to enable user affinity clustering, see Configure Caches in a Cluster, page 1147.

Prerequisites for Clustering Intelligence Servers


Before you can cluster Intelligence Servers in your system, you must fulfill these prerequisites.

MicroStrategy Prerequisites
l You must have purchased an Intelligence Server license that allows
clustering. To determine the license information, use the License Manager
tool and verify that the Clustering feature is available for Intelligence
Server. For more information on using License Manager, see Chapter 5,
Manage Your Licenses.

l The computers to be clustered must all have the same version of Intelligence Server installed.

l All MicroStrategy projects on the clustered machines must be based on the same metadata.

l At least one project must be defined in the metadata.


l No more than one Intelligence Server can be configured for a single machine. Multiple instances of Intelligence Server should not run on the same machine for clustering purposes.

l The user account under which the Intelligence Server service is running
must have full control of cache and History List folders on all nodes.
Otherwise, Intelligence Server will not be able to create and access cache
and History List files.

l Server definitions store Intelligence Server configuration information. MicroStrategy strongly recommends that all servers in the cluster use the same server definition. This ensures that all nodes have the same governing settings.

l Server definitions can be modified from Developer through the Intelligence Server Configuration Editor and the Project Configuration Editor. For instructions, see the MicroStrategy Web Help.

l Developer must be installed on a Windows machine to administer the cluster. This version of Developer must be the same as the version of the Intelligence Servers. For example, if the Intelligence Servers are running MicroStrategy Intelligent Enterprise, Developer must also be Intelligent Enterprise.

l You must have access to the Cluster view of the System Administration
monitor in Developer. Therefore, you must have the Administration
privilege to create a cluster. For details about the Cluster view of the
System Administration monitor, see Manage Your Clustered System, page
1168.

l The computers that will be clustered must have the same intra-cluster
communication settings. To configure these settings, on each Intelligence
Server machine, in Developer, right-click the project source and select
Configure MicroStrategy Intelligence Server. The Intelligence Server
Configuration Editor opens. Under the Server definition category, select
General.


l The same caching method (localized or centralized caching) should be used for both result caches and file-based History Lists. For information about localized and centralized caching, see Synchronizing Cached Information Across Nodes in a Cluster, page 1137.

Server Prerequisites
l The machines to be clustered must be running the same version of the
same operating system.

l Load balancing and system configuration are simpler if identical hardware is used for each of the clustered nodes.

l If you are using time-based schedules in a clustered environment, all the nodes in the cluster must have their clocks synchronized.

l The RDBMS containing the metadata and warehouse instances must already be set up on machines separate from the Intelligence Server nodes.

l Information on the clustered configuration is stored in the metadata, so the machines to be clustered must use the same metadata repository. The metadata may be created from any of the nodes, and it needs to be set up only once. When you create or modify the server definition in the MicroStrategy Configuration Wizard, you can specify either a new or an existing metadata repository for Intelligence Server to use.

l The required data source names (DSNs) must be created and configured
for Intelligence Server on each machine. MicroStrategy strongly
recommends that you configure both servers to use the same metadata
database, warehouse, port number, and server definition.

l All nodes must join the cluster before you make any changes to any
governing settings, such as in the Intelligence Server Configuration Editor.


Prerequisites for Windows Clustering


l When Intelligence Server is installed, the last step is to choose a user
identity under which the service will run. To run a clustered configuration,
the user must be a domain account that has a trust relationship with each
of the computers in the cluster. This allows resources to be shared across
the network.

l The service user's Regional Options settings must be the same as the
clustered system's Regional Options settings.

Prerequisites for Linux Clustering


l MicroStrategy strongly recommends that all servers in a cluster use the same server definition. Therefore, in some cases you cannot specify the cache location with an absolute path such as /<machine_name>, because the location would have to be different for each server machine. To solve this problem, use relative paths and soft links. A soft link is a special type of UNIX file that refers to another file by its path name. A soft link is created with the ln (link) command:

ln -s OLDNAME NEWNAME

where OLDNAME is the target of the link, usually a path name, and NEWNAME is the path name of the link itself. For example, ln -s /data/mstr_caches ./Caches creates a link named Caches that resolves to /data/mstr_caches (an illustrative path, which can differ on each machine). Most operations (open, read, write) on the soft link automatically de-reference it and operate on its target (OLDNAME). Some operations (for example, removing) work on the link itself (NEWNAME).

l Confirm that each server machine works properly, and then shut down
each machine.

Cluster Intelligence Servers


Below is a high-level overview of the steps to cluster Intelligence Servers:


1. Confirm that you have fulfilled the prerequisites for clustering Intelligence Servers.

2. Configure the caches to synchronize information across nodes.

3. Join nodes.

4. Test the clustered system.

5. (Optional) Distribute projects across nodes.

6. (Optional) Reserve nodes with work fences.

Configure Caches in a Cluster


You can configure caches in one of two ways:

l Local caching: Each node hosts its own cache file directory and
Intelligent Cube directory. These directories need to be shared so that
other nodes can access them. For more information, see Synchronizing
Cached Information Across Nodes in a Cluster, page 1137.

l Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations. For more
information, see Synchronizing Cached Information Across Nodes in a
Cluster, page 1137. MicroStrategy recommends this method since it’s
simpler in both configuration and maintenance.

Configure Caches in a Cluster on Windows


Use one of the procedures below to share cache files among the nodes in
your cluster. For a detailed explanation of the two methods of cache sharing,
see Synchronizing Cached Information Across Nodes in a Cluster, page
1137.


Configure Cache Sharing Using Multiple Local Cache Files

1. Open the Project Configuration Editor for the project.

2. Select Caching > Result Caches > Storage.

3. In the Cache file directory box, type:

.\Caches\ServerDefinition

where ServerDefinition is the name of the server definition.

This tells the other clustered nodes to search for caches in the
following path on all machines in the cluster:

<Intelligence Server Application Folder>\Caches\ServerDefinition

4. Click OK.

5. On each machine in the cluster, open Windows Explorer and navigate to the cache file folder. The default location is:

C:\Program Files (x86)\MicroStrategy\Intelligence Server\Caches\ServerDefinition

where ServerDefinition is the name of the server definition.

6. Right-click the cache file folder, and select Sharing.

7. On the Sharing tab, select the Shared as option. In Share Name, delete the existing text and enter ClusterCaches.


8. Click OK.

9. Select Intelligent Cubes > General.

10. In Intelligent Cube File directory, enter:

.\Cube\ServerDefinition

Where ServerDefinition is the name of the server definition.

This tells the other clustered nodes to search for caches in the
following path on all machines in the cluster:

<Intelligence Server Application Folder>\Cube\ServerDefinition

11. Click OK.


12. On each machine in the cluster, open Windows Explorer and navigate
to the cache file folder. The default location is:

C:\Program Files (x86)\MicroStrategy\Intelligence Server\Cube\ServerDefinition

where ServerDefinition is the name of the server definition.

13. Right-click the cache file folder and select Sharing.

14. Select the Shared as option and in Share name, delete the existing
text and enter ClusterCube.

15. Click OK.

16. Restart the server. If the other cluster servers are running during the
configuration, restart them as well.


Configure Cache Sharing Using a Centralized Cache File

1. Open the Project Configuration Editor for the project.

2. Select Caching > Result Caches > Storage.

3. In Cache file directory, enter:

\\<Machine Name>\<Shared Folder Name>\Caches

or

\\<IP Address>\<Shared Folder Name>\Caches

For example, \\My_File_Server\My_Cache_Directory\Caches.

4. Click OK.

5. Select Intelligent Cubes > General.

6. In Intelligent Cube File directory, enter:

\\<Machine Name>\<Shared Folder Name>\Cube

or

\\<IP Address>\<Shared Folder Name>\Cube

For example, \\My_File_Server\My_Cache_Directory\Cube.

7. On the machine that is storing the centralized cache, create the file
folder that will be used as the shared folder. The file folder name must
be identical to the name you specified earlier in Cache file directory.
This is shown as the Shared Folder Name above.


8. Restart the server. If the other cluster servers are running during the
configuration, restart them as well.

Make sure this cache directory is writable by the network account under which Intelligence Server is running. Each Intelligence Server creates its own subdirectory.
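A quick way to confirm that the service account can write to the shared directory is to create and delete a probe file while running as that account. This is a generic sketch, not a MicroStrategy utility; CACHE_DIR is a placeholder for your mounted cache share.

```shell
# Probe whether the cache directory is writable by the current account.
# CACHE_DIR is a placeholder; point it at your shared cache directory
# (for example, the \\My_File_Server\My_Cache_Directory\Caches share
# mounted locally). mktemp -d is used here only as a safe default.
CACHE_DIR="${CACHE_DIR:-$(mktemp -d)}"
probe="$CACHE_DIR/.write_probe.$$"
if touch "$probe" 2>/dev/null; then
    status="writable"
    rm -f "$probe"
else
    status="not writable"
fi
echo "$CACHE_DIR is $status"
```

Run this as the Intelligence Server service account; "not writable" indicates a share permission problem to fix before clustering.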

Configure History List Sharing Using Multiple Local Cache Files

If you are using a file-based history list, you can set up history lists to use
multiple local disk backups on each node in the cluster, using a procedure
similar to the procedure above, Configure Cache Sharing Using Multiple
Local Cache Files, page 1148. The history list messages are stored in the
History folder. To locate this folder, in the Intelligence Server Configuration
Editor, expand History settings and select General.

The History List location is .\Inbox\ServerDefinition, where ServerDefinition is the name of the folder containing the history lists. This folder must be shared with the share name "ClusterInBox" because this is the share name used by Intelligence Server to look for history lists on other nodes.

Configure Caches in a Cluster on Linux


To configure a cluster of Intelligence servers in a Linux environment, all
servers must have access to each others' caches and inbox (history list)
files. Both cache and history list files are referred to generally as cache files
throughout this section. An Intelligence server looks for cache files from
other nodes in the cluster by machine name. For an explanation and
diagrams of general cache synchronization setup, see Synchronizing
Cached Information Across Nodes in a Cluster, page 1137.

The cache and Inbox folders must be named as follows:

/<machine_name>/ClusterCaches

/<machine_name>/ClusterInBox


For example, consider a two-node cluster with Intelligence Servers running on the UNIX1 and UNIX2 machines. The Intelligence Server running on UNIX1 looks for the other server's caches only in /UNIX2/ClusterCaches.
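The lookup convention above can be expressed as a small loop: each node scans every other node's share directories by hostname. UNIX1 and UNIX2 are the example hostnames from this section; the loop is illustrative only.

```shell
# Build the list of remote cache/inbox paths this node will check,
# following the /<machine_name>/ClusterCaches convention above.
SELF="UNIX1"                 # this node's hostname (example)
NODES="UNIX1 UNIX2"          # all cluster node hostnames (example)
paths=""
for n in $NODES; do
    [ "$n" = "$SELF" ] && continue      # a node never checks itself
    paths="$paths /$n/ClusterCaches /$n/ClusterInBox"
done
echo "$paths"
```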

The procedures below demonstrate how to configure the caches on two servers, named UNIX1 and UNIX2. Use these steps as a guideline for configuring your own system.

You can choose to use either procedure below, depending on whether you
want to use centralized or local caching. For a detailed description and
diagrams of cache synchronization setup, see Synchronizing Cached
Information Across Nodes in a Cluster, page 1137.

Configure a Cluster with Multiple Local Cache Files

This procedure makes the following assumptions:

l The Linux machines are called UNIX1 and UNIX2. Note that UNIX1 and UNIX2 are hostnames, not IP addresses.

l Intelligence server is installed in MSTR_HOME_PATH on each machine.

Set Up the UNIX1 Machine

Mount the folders from UNIX2 on UNIX1.

mkdir /UNIX2
mount UNIX2:/<MSTR_HOME_PATH>/IntelligenceServer /UNIX2

Set Up the UNIX2 Machine

Mount the folders from UNIX1 on UNIX2.

mkdir /UNIX1
mount UNIX1:/<MSTR_HOME_PATH>/IntelligenceServer /UNIX1


Configure the Server Definition and Project

1. Start Intelligence server on UNIX1.

2. In Developer, create project sources pointing to UNIX1.

3. Connect to UNIX1 using Developer.

4. Right-click the project source of UNIX1 and select Configure Server.

5. Select History Settings and General.

6. Set the path to ./ClusterInBox and click OK.

7. Right-click the project name and select Project Configuration.

8. Select Caching > Result Caches > Storage.

9. Set the path for the cache file directory to ./ClusterCaches.

10. Select Intelligent Cubes > General > Intelligent Cube File directory.

11. Set the path for the cube cache file directory to ./ClusterCube.

12. Disconnect from the project source and restart both Intelligence
servers.

Configure a Cluster with a Centralized Cache

This procedure assumes that the Linux machines are called UNIX1 and
UNIX2.

Create the Cache Folder on the Shared Device

1. Create the folders for caches on the shared device called UNIX3, as described in Prerequisites for Clustering Intelligence Servers, page 1143:

mkdir /sandbox


2. On UNIX1, mount the folders from the shared device on UNIX1.

mkdir /sandbox
mount UNIX3:/sandbox /sandbox

3. On UNIX2, mount the folders from the shared device on UNIX2.

mkdir /sandbox
mount UNIX3:/sandbox /sandbox
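These mounts do not persist across reboots. A hedged sketch of an /etc/fstab entry that would remount the share automatically is shown below; UNIX3 and /sandbox are the example names from this procedure, and the NFS options should be verified against your NFS server before use.

```shell
# Illustrative /etc/fstab line for remounting the shared cache export at
# boot. UNIX3 and /sandbox are the example names used in this procedure.
fstab_line="UNIX3:/sandbox  /sandbox  nfs  rw,hard  0  0"
# To apply (as root), append it and remount everything in fstab:
#   echo "$fstab_line" >> /etc/fstab && mount -a
echo "$fstab_line"
```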

Configure the Server Definition and Project

1. Start Intelligence server on UNIX1.

2. In Developer, create project sources pointing to UNIX1.

3. Connect to UNIX1 using Developer.

4. Right-click the project source of UNIX1 and select Configure Server.

5. Select History Settings and General.

6. Set the path using the following convention:

//<SharedLocation>/<InboxFolder>

In this example, set it as //sandbox/Inbox.

7. Right-click the project name and select Project Configuration.

8. Select Caching > Result Caches > Storage.

9. Following the convention, //<SharedLocation>/<CacheFolder>,


set the path to //sandbox/Caches.

For caches stored on Linux machines using Samba, set the path to
\\<machine name>\<shared folder name>.

10. Select Intelligent Cubes > General > Intelligent Cube File directory.


11. Following the convention, //<SharedLocation>/<CubeFolder>, set


the path to //sandbox/Cube.

For caches stored on Linux machines using Samba, set the path to
\\<machine name>\<shared folder name>.

12. Disconnect from the project source and restart both Intelligence
servers.

Configure History Lists in a Clustered Environment


MicroStrategy recommends that you enable user affinity clustering to reduce
history list resource usage. User affinity clustering causes Intelligence
server to connect all sessions for a user to the same node of the cluster. For
background information about user affinity clustering, see Synchronizing
Cached Information Across Nodes in a Cluster, page 1137.

If you are not using user affinity clustering, MicroStrategy recommends that
you set the cache backup frequency to 0 (zero) to ensure that history list
messages are synchronized correctly between nodes. For more information
about this setting, see Configuring Result Cache Settings, page 1228.

Configure the History List Governing Settings for a Clustered


Environment

1. In Developer, log in to a project source. You must log in as a user that


has administrative privileges.

2. From the Administration menu, go to Server > Configure


MicroStrategy Intelligence Server.

3. Expand the Server Definition category and select Advanced.

4. Do one of the following:

l To enable user affinity clustering, select the User Affinity Cluster


check box.


l OR, if you do not want to enable user affinity clustering, in the


Backup frequency (minutes) field, type 0 (zero).

5. Click OK.

6. Restart the Intelligence server.

Configure Session Recovery Message Sharing in a Cluster


In a clustered Intelligence Server environment, additional configuration is required for the cluster nodes to share Session Recovery functionality for sessions that existed on other cluster nodes. This configuration is similar to the configuration required to share caches, cubes, and history list messages. The repository can be located either locally or in a shared network location. With localized storage, each cluster node retains the recovery files for sessions hosted by that node, and the file location is shared so that each node can access the other nodes' recovery files. With centralized storage, all cluster nodes store their repository files in a central shared network location.

Configure Session Recovery Message Repository Sharing in a


Cluster on Windows

The domain user running the remote Intelligence Servers must have full read
and write access to this shared location.

Shared network locations should be set up before configuring the Intelligence Servers for centralized storage.

Shared network locations should be accessible via a Universal Naming Convention (UNC) path, in the format of \\machinename\path.

Configure Session Recovery Messages for Localized Storage

1. Open the Intelligence Server Configuration Editor.

2. Select Governing Rules > Default > Temporary Storage Settings.


3. In the Session Recovery and Deferred Inbox storage directory box,


enter .\inbox\ServerDefinition where ServerDefinition is
the name of the server definition.

4. Click OK.

5. Right-click the configured path file folder, and select Sharing.

6. On the Sharing tab, select the Shared as option. In the Share Name
box, delete the existing text and type ClusterWSRM.

This folder must be shared with the name "ClusterWSRM". This name
is used by Intelligence Server to look for Session Recovery messages
on other nodes.

7. Click OK.

8. Restart Intelligence Server.

Configure Session Recovery Messages for a Centralized Storage


Location

1. Open the Intelligence Server Configuration Editor.

2. Select Governing Rules > Default > Temporary Storage Settings.

3. In the Session Recovery and Deferred Inbox storage directory box,


enter:

\\<Machine Name>\<Shared Folder Name>

or

\\<IP Address>\<Shared Folder Name>

4. Click OK.


Configure Session Recovery Message Repository Sharing in a


Cluster on UNIX/Linux

The domain user running the remote Intelligence Servers must have full read
and write access to this shared location.

Shared network locations should be set up and mounted to the local file system
on each Intelligence Server before configuring for centralized storage.

Configure Session Recovery Messages for a Centralized Storage


Location

Create the Session Recovery Folder on the Shared Device

1. Create the folders for Session Recovery messages on the shared


device:

mkdir /sandbox/WSRMshare

2. Restart your Intelligence Servers.

Configure the Server Definition and Project

1. Right-click the project source and select Configure Server.

2. Select Governing Rules > Default > Temporary Storage Settings.

3. In the Session Recovery and Deferred Inbox storage directory box,


set the path using the following convention:

//<machine_name>/sandbox/WSRMshare

4. Click OK.

5. Repeat for each Intelligence Server.


Configure Session Recovery Messages for Localized Storage

This procedure makes the following assumptions:

l The Linux machines are called UNIX1 and UNIX2.

l Intelligence Server is installed in MSTR_HOME_PATH on each machine.

l The MSTR_HOME_PATH for each machine is /Build/BIN/SunOS/.

Configure the Server Definition and Project

1. Start Intelligence Server on UNIX1.

2. In Developer, create project sources pointing to UNIX1 and UNIX2.

3. Connect to UNIX1 using Developer.

4. Right-click the project source of UNIX1 and select Configure Server.

5. Select Governing Rules > Default > Temporary Storage Settings.

6. In the Session Recovery and Deferred Inbox storage directory box, set the path to ./ClusterInBox.

7. Click OK.

Set Up the UNIX1 Machine

1. Create a top level folder /UNIX2.

mkdir /UNIX2

2. In this folder, create a subfolder ClusterInBox.

3. Mount the folders from UNIX2 on the UNIX1 machine using the following command:

mount UNIX2:/Build/BIN/SunOS/ClusterInBox /UNIX2/ClusterInBox


Set Up the UNIX2 Machine

1. Create a top level folder /UNIX1.

mkdir /UNIX1

2. In this folder, create a subfolder ClusterInBox.

3. Mount the folders from UNIX1 on the UNIX2 machine using the following command:

mount UNIX1:/Build/BIN/SunOS/ClusterInBox /UNIX1/ClusterInBox
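Before starting the Intelligence Servers, it is worth confirming that each cross-mount is actually in place. A minimal Linux-specific check using /proc/mounts is sketched below; run it against /UNIX1 or /UNIX2 after the mount commands above.

```shell
# Return yes/no depending on whether a path is currently a mount point
# (Linux-specific: reads /proc/mounts). After the procedures above, check
# /UNIX1 or /UNIX2 with this helper.
is_mounted() { grep -qs " $1 " /proc/mounts && echo yes || echo no; }
m=$(is_mounted /)            # "/" is always mounted; shown as a self-test
echo "$m"
```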

Join Nodes in a Cluster


You join one node (or machine) to another node to form a cluster using the
Cluster Monitor.

To Join a Node to a Cluster

1. In Developer, log in to a project source. You must log in as a user with


the Administer Cluster privilege.

2. Expand Administration, then expand System Administration, and then select Cluster. Information about each node in the cluster displays on the right-hand side.

3. From the Administration menu, point to Server, then select Join


cluster.

4. Type the name of the machine running Intelligence Server to which you
are adding this node, or click ... to browse for and select it.

5. Click OK.


Verify the Clustered System is Working


Once all nodes have been synchronized and added to the cluster, you can
verify that the cluster is working properly.

Verify from Developer


1. Connect to one Intelligence Server in the cluster and ensure that the
Cluster view in Developer (under Administration, under System
Administration) is showing all the proper nodes as members of the
cluster.

2. Connect to any node and run a large report.

3. Use the Cache Manager and view the report details to make sure the
cache is created.

4. Connect to a different node and run the same report. Verify that the
report used the cache created by the first node.

5. Connect to any node and run a report.

6. Add the report to the History List.

7. Without logging out that user, log on to a different node with the same
user name.

8. Verify that the History List contains the report added in the first node.

Verify from MicroStrategy Web


1. Open the MicroStrategy Web Administrator page.

2. Connect to any node in the cluster. MicroStrategy Web Universal


should automatically recognize all nodes in the cluster and show them
as connected.

If MicroStrategy Web does not recognize all nodes in the cluster, it is


possible that the machine itself cannot resolve the name of that node.


MicroStrategy cluster implementation uses the names of the machines for internal communication. Therefore, the Web machine must be able to resolve machine names to IP addresses. You can edit the lmhosts file to map IP addresses to machine names.
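On Linux, a quick way to confirm that a node name resolves is getent, which consults the same resolver order (hosts file, then DNS) the system itself uses. The node names below are placeholders.

```shell
# Check that a hostname resolves on this machine. A name that fails here
# will also be unreachable for MicroStrategy Web's cluster view.
resolves() { getent hosts "$1" >/dev/null 2>&1 && echo yes || echo no; }
r=$(resolves localhost)      # self-test; repeat for each cluster node name
echo "$r"
```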

You can also perform the same cache and History List tests described above
in Verify from Developer, page 1162.

Distribute Projects Across Nodes in a Cluster


You can distribute projects across nodes of a cluster in any clustered
configuration. Each node can host a different set of projects, which means
only a subset of projects needs to be loaded on an Intelligence Server. This
provides you with flexibility in using your resources and better scalability
and performance.

To distribute projects across the cluster, you manually assign the projects to
specific nodes in the cluster. Once a project has been assigned to a node, it
is available for use.

If you do not assign a project to a node, the project remains unloaded and
users cannot use it. You must then manually load the project for it to be
available. To manually load a project, right-click the project in the Project
Monitor and select Load.

If you are using single instance session logging in Enterprise Manager with
clustered Intelligence Servers, the single instance session logging project
must be loaded onto all the clustered Intelligence Servers. Failure to load
this project on all servers at startup results in a loss of session statistics for
any Intelligence Server onto which the project is not loaded at startup. For
more information, see MicroStrategy Community Knowledge Base article
KB14591. For detailed information about session logging in Enterprise
Manager, see the Enterprise Manager Help .

1. In Developer, from the Administration menu, point to Projects, then select Select Projects. The Intelligence Server Configuration Editor opens at the Projects: General category.

2. One column is displayed for each node in the cluster that is detected at
the time the Intelligence Server Configuration Editor opens. Select the
corresponding check box to configure the system to load a project on a
node. A selected box at the intersection of a project row and a node
column signifies that the project is to be loaded at startup on that node.

If no check boxes are selected for a project, the project is not loaded on
any node at startup. Likewise, if no check boxes are selected for a
node, no projects are loaded on that node at startup.

If you are using single instance session logging with Enterprise


Manager, the single instance session logging project must be loaded
onto all the clustered Intelligence Servers at startup. Failure to load
this project on all servers at startup results in a loss of session
statistics for any Intelligence Server onto which the project is not
loaded at startup. For steps on implementing single instance session
logging, see the Enterprise Manager Help. For more information about
this issue, see MicroStrategy Tech Note TN14591.

or

If the All Servers checkbox is selected for a project, all nodes in the
cluster load this project at startup. All individual node check boxes are
also selected automatically. When you add a new node to the cluster,
any projects set to load on All Servers automatically load on the new
node.

If you select a check mark for a project to be loaded on every node but
you do not select the All Servers check box, the system loads the
project on the selected nodes. When a new node is added to the
cluster, this project is not automatically loaded on that new node.


3. Select Show selected projects only to display only those projects that
have been assigned to be loaded on a node. For display purposes it
filters out projects that are not loaded on any node in the cluster.

4. Select Apply startup configuration on save to allow your changes to


be reflected immediately across the cluster. If this check box is cleared,
any changes are saved when you click OK, but they do not take effect
until Intelligence Server is restarted.

5. Click OK.

If you do not see the projects you want to load displayed in the Intelligence
Server Configuration Editor, you must configure Intelligence Server to use a
server definition that points to the metadata containing the project. Use the
MicroStrategy Configuration Wizard to configure this. For details, see the
Installation and Configuration Help.

It is possible that not all projects in the metadata are registered and listed in
the server definition when the Intelligence Server Configuration Editor
opens. This can occur if a project is created or duplicated in a two-tier
(direct connection) project source that points to the same metadata as that
being used by Intelligence Server while it is running. Creating, duplicating,
or deleting a project in two-tier while a server is started against the same
metadata is not recommended.

Reserve Nodes with Work Fences


Within a cluster, work fences allow an administrator to reserve specific
nodes for use by certain users or workloads during normal operation. There
are two types of fences:

l User Fence: used to process requests from a list of specified users or


user groups. User fences can be further limited by specifying applicable
projects.

l Workload Fence: used to run subscriptions triggered by an event or time-based schedule for specified projects. Note that on-demand event subscriptions such as run immediately, preview, or personal view are not included. For more information on subscriptions, see Scheduling Reports and Documents: Subscriptions, page 1333.

For example, a user fence could be configured for users who require more processing power or high availability. Conversely, a workload fence could be configured to limit the resources for lower priority subscriptions.

Typically, the majority of the nodes in a cluster will not be part of a fence,
making them available for general use. All configured fences are defined in a
single list ordered by precedence. When a request is received, the ordered
list of all fences and their configurations are assessed to determine if the
request matches any fence configuration. A request will be processed by the
first fence found with an available node in the ordered list where the request
matches the fence criteria.

When all nodes in the cluster are part of the fence list, the request will be
sent to a node in the last fence in the ordered list.
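The first-match rule above can be sketched as a loop over the ordered fence list. The fence entries and criteria here are toy stand-ins for illustration, not the actual Intelligence Server data structures.

```shell
# Toy model of fence selection: walk the ordered list and return the first
# fence whose criterion matches the request and that has an available node.
# Entries are "name:criterion:availability" strings (illustrative only).
pick_fence() {
    want="$1"; shift
    for f in "$@"; do
        name=${f%%:*}
        rest=${f#*:}
        crit=${rest%%:*}
        avail=${rest#*:}
        if [ "$crit" = "$want" ] && [ "$avail" = "up" ]; then
            echo "$name"
            return 0
        fi
    done
    echo "general"           # no fence matched: route to an unfenced node
}
chosen=$(pick_fence cxo "DistSvcs:sched:up" "CXO:cxo:up")
echo "$chosen"
```

A request tagged "cxo" lands on the CXO fence; any request matching no fence falls through to the general (unfenced) nodes.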

Fencing is not supported with legacy clients, which include MicroStrategy Developer and administration tools such as Command Manager.

Consider the following example of a clustered implementation with eight nodes:

l Nodes 7 and 8 are defined in the "CXO" user fence, meaning that these nodes are reserved for requests from users in the CXO group.

l Nodes 5 and 6 are defined in the "DistSvcs" workload fence, meaning that these nodes are reserved for processing subscriptions that are not on-demand events.

l Nodes 1, 2, 3, and 4 are not defined in a fence, meaning that they are available to process requests that do not meet the criteria of either fence.

Use Fences with Asymmetric Project Clustering


When user fences are configured with a cluster that has projects that are
only loaded on specific nodes, users are always sent to a node that supports
the project. The first fence found in the priority list that includes a node
where the requested project is loaded will be used. For more information
about asymmetric project clustering, see Distribute Projects Across Nodes
in a Cluster, page 1163.

Configure Fences
Using Command Manager, you can create, modify, list, and delete fences
without restarting the clustered Intelligence Servers. For more information
about Command Manager, see Chapter 15, Automating Administrative Tasks
with Command Manager.

l You have properly configured an Intelligence Server cluster, and all nodes in the cluster use the same server definition.

l You can log in to Command Manager as a user that has the DssPrivilegesConfigureServerBasic privilege, which is a default privilege for the Server Resource Settings Administrators group.

Configure Fences

Script outlines to assist with configuring fences are provided in Command Manager, in the Fence_Outlines folder. For more information about these commands, see the Command Manager Help.

Enable User Fencing in MicroStrategy Web

After your fences have been configured, you will need to enable
MicroStrategy Web to use user fences. The setting is off by default.


1. On the Web Administration page, open Other Configuration.

2. Under Fencing, select the Enable Fencing checkbox.

3. Click Save.

Enable User Fencing in MicroStrategy Library

1. Locate <MicroStrategy Library Root>/WEB-INF/xml/sys_defaults.xml.

2. Modify the following entry to a value of "1" and save.

<pr des="whether to enable fencing" n="enableFencing" scp="system" v="1" dt="boolean"/>

3. Restart Library.
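The edit in step 2 can also be scripted, which is convenient when configuring several Library servers. The sketch below flips v="0" to v="1" with sed on a throwaway copy of the entry; in practice, run the sed command against your <MicroStrategy Library Root>/WEB-INF/xml/sys_defaults.xml after backing it up.

```shell
# Demonstrate flipping enableFencing with sed on a temporary copy of the
# entry shown in step 2. The temporary file stands in for sys_defaults.xml.
f="$(mktemp)"
printf '%s\n' '<pr des="whether to enable fencing" n="enableFencing" scp="system" v="0" dt="boolean"/>' > "$f"
# -i.bak edits in place and keeps a .bak backup of the original file.
sed -i.bak 's/n="enableFencing" scp="system" v="0"/n="enableFencing" scp="system" v="1"/' "$f"
grep 'enableFencing' "$f"
```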

Manage Your Clustered System


Once your clustered system is up and running, you can monitor and
configure the projects that are running on each node of the cluster.

l Manage your Projects Across Nodes of a Cluster, page 1169

l Project Failover and Latency, page 1171

l Shut Down a Node, page 1174

l Maintain Result Caches and History Lists in a Clustered Environment,


page 1176


Manage your Projects Across Nodes of a Cluster


Managing a project across all nodes of a cluster can be done through the Project view of the System Administration monitor. From this view, you can unload or idle a project on all nodes of the cluster at once. However, sometimes you need to perform maintenance on only one node of the cluster. In this case, you can use the Cluster view to idle or unload a project from that node, while leaving the project running on the other nodes of the cluster.

For detailed information about the effects of the various idle states on a
project, see Setting the Status of a Project, page 48.

Manage the Projects and Nodes in a Cluster

1. In Developer, log in to a project source. You must log in as a user with


the Administer Cluster privilege.

2. Expand Administration, then expand System Administration, and


then select Cluster.

3. To see a list of all the projects on a node, click the + sign next to that
node.

You can perform an action on multiple servers or projects at the same time.
To do this, select several projects (CTRL+click), then right-click and select
one of the options.

Idle or Resume a Project on a Node

1. In the Cluster view, right-click the project whose status you want to
change, point to Administer project on node, and select
Idle/Resume.


2. Select the options for the idle mode that you want to set the project to:

l Request Idle (Request Idle): all executing and queued jobs finish
executing, and any newly submitted jobs are rejected.

l Execution Idle (Execution Idle for All Jobs): all executing, queued,
and newly submitted jobs are placed in the queue, to be executed
when the project resumes.

l Warehouse Execution Idle (Execution Idle for Warehouse jobs): all


executing, queued, and newly submitted jobs that require SQL to be
submitted to the data warehouse are placed in the queue, to be
executed when the project resumes. Any jobs that do not require SQL
to be executed against the data warehouse are executed.

l Full Idle (Request Idle and Execution Idle for All jobs): all
executing and queued jobs are canceled, and any newly submitted
jobs are rejected.

l Partial Idle (Request Idle and Execution Idle for Warehouse jobs): all executing and queued jobs that submit SQL against the data warehouse are canceled, and any newly submitted jobs are rejected. Any executing and queued jobs that do not require SQL to be executed against the data warehouse are executed.


To resume the project from a previously idled state, clear the Request
Idle and Execution Idle check boxes.

3. Click OK.

Load or Unload a Project from a Specific Node

In the Cluster view, right-click the project whose status you want to change,
point to Administer project on node, and select Load or Unload.

Project Failover and Latency


Project failover support in a cluster is similar to system failover support. For
example, one server in a cluster is hosting project A and another server in
the cluster is running projects B and C. If the first server becomes
unavailable, the other can begin running all three projects. Project failover
support ensures that projects remain available even if hardware or an
application fails.

Project failover is triggered when the number of nodes running a project


reaches zero due to node failure. At that point, the system automatically
loads any projects that were on the failed system onto another server in the
cluster to maintain the availability of those projects. Once the failed server
recovers, the system reloads the original project onto the recovered server.
It also removes the project from the server that had temporarily taken over.

Failover and latency take effect only when a server fails. If a server is
manually shut down, its projects are not automatically transferred to another
server, and are not automatically transferred back to that server when it
restarts.

You can determine several settings that control the time delay, or latency
period, in the following instances:

l After a machine fails, but before its projects are loaded onto a different machine


l After the failed machine is recovered, but before its original projects are
reloaded

To Set Project Failover Latency

1. In Developer, from the Administration menu, select Server, then


select Configure MicroStrategy Intelligence Server.

2. Expand the Server Definition category, then select Advanced.

3. Enter the Project Failover Latency and Configuration Recovery


Latency, and click OK.

When deciding on these latency period settings, consider how long it takes
an average project in your environment to load on a machine. If your
projects are large, they may take some time to load, which presents a strain
on your system resources. With this consideration in mind, use the following
information to decide on a latency period.

Project Failover Latency


You can control the time delay (latency) before the project on a failed
machine is loaded on another node to maintain a minimum level of
availability.

Latency takes effect only when a server fails. If a server is manually shut
down, its projects are not automatically transferred to another machine.

Consider the following information when setting a latency period:

l Setting a higher latency period prevents projects on the failed server from
being loaded onto other servers quickly. This can be a good idea if your
projects are large and you trust that your failed server will recover quickly.
A high latency period provides the failed server more time to come back
online before its projects need to be loaded on another server.


l Setting a lower latency period causes projects from the failed machine to
be loaded relatively quickly onto another server. This is good if it is crucial
that your projects are available to users at all times.

l Disabling the latency period or the failover process:

l If you enter 0 (zero), there is no latency period and thus there is no


delay; the project failover process begins immediately.

l If you enter -1, the failover process is disabled and projects are not
transferred to another node if there is a machine failure.
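The special values can be summarized in one place; the function below is only a restatement of the rules above, not a MicroStrategy API.

```shell
# Restate the latency semantics: 0 = immediate failover, -1 = failover
# disabled, any positive value = delay (in minutes) before failover begins.
describe_latency() {
    case "$1" in
        0)  echo "no delay: failover begins immediately" ;;
        -1) echo "failover disabled" ;;
        *)  echo "failover after a delay of $1 minute(s)" ;;
    esac
}
msg=$(describe_latency -1)
echo "$msg"
```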

Configuration Recovery Latency


When the conditions that caused the project failover disappear, the system
automatically reverts to the original project distribution configuration by
removing the project from the surrogate server and loading the project back
onto the recovered server (the project's original server).

Consider the following information when setting a latency period:

l Setting a higher latency period leaves projects on the surrogate server longer. This is a good idea if your projects are large and you want to be sure your recovered server stays online for a specific period before the project load process begins. A high latency period provides the recovered server more time after it comes back online before its projects are reloaded.

l Setting a lower latency period causes projects on the surrogate machine to be removed and loaded relatively quickly onto the recovered server. This is desirable if you want to reduce the strain on the surrogate server as soon as possible.

You can also disable the latency period:

l If you enter a 0 (zero), there is no latency period and thus there is no delay. The configuration recovery process begins immediately.

l If you enter a -1, the configuration recovery process is disabled and projects are never automatically reloaded onto the recovered server.


Shut Down a Node


MicroStrategy ONE (June 2024) introduces a preview feature; see Preview Feature: Access Cubes Regardless of Node Status.

A node can be shut down in two ways:

l Administrative shutdown: This includes instances when a node is removed from a cluster or the Intelligence Server service is stopped.

l Node failure: This includes instances such as a power failure or a software error; this is sometimes called a forceful shutdown. Nodes that were forcefully shut down retain their valid caches if they are available. However,
while the node is shut down, there is no way to monitor the caches, change
their status, or invalidate them. They can be deleted by manually deleting
the cache files on the local node or by deleting the appropriate cache files
on a shared network location. Be aware that cache files are named with
object IDs.

The results of each of these types of shutdown are discussed below.

Resource Availability
If a node is rendered unavailable because of a forceful shutdown, its cache
resources are still valid to other nodes in the cluster and are accessed if
they are available. If they are not available, new caches are created on other
nodes.

In an administrative shutdown, caches associated with the shut-down node are no longer valid for other nodes, even if they are physically available, such as on a file server.


Client Connection Status

Developer

Client connections that are not cluster-aware, such as Developer, do not experience any change if a node is removed from a cluster. However, the
local node must regenerate its own caches rather than accessing the
resources of other nodes. If Intelligence Server is shut down, any Developer
clients connected to that Intelligence Server receive an error message
notifying them of the lost connection, regardless of whether that Intelligence
Server was in a cluster.

MicroStrategy Web

If a cluster node shuts down while MicroStrategy Web users are connected,
those jobs return an error message by default. The error message offers the
option to resubmit the job, in which case MicroStrategy Web automatically
reconnects the user to another node.

Customizations to MicroStrategy Web can alter this default behavior in several ways.

If a node is removed from the cluster, all existing connections continue to function and remain connected to that machine, although the machine no
longer has access to the clustered nodes' resources. Future connections
from MicroStrategy Web will be to valid cluster nodes.

Status After Reboot


If a node goes down for any reason, all jobs on that node are terminated. When the node restarts, its job queue is empty.

If a node is forcefully shut down in a Windows environment, it automatically rejoins the cluster when it comes back up.


If multiple nodes in the cluster are restarted at the same time, they may not
all correctly rejoin the cluster. To prevent this, separate the restart times by
several minutes.

The nodes that are still in the cluster but not available are listed in the
Cluster Monitor with a status of Stopped.

Maintain Result Caches and History Lists in a Clustered Environment

Proper maintenance of result caches and History Lists is important in any
MicroStrategy system. For detailed information on caches and cache
management, including recommended best practices, see Result Caches,
page 1203. For detailed information on History Lists, including best
practices, see Saving Report Results: History List, page 1240.

When maintaining result caches and History Lists in a clustered environment, be aware of the following:

l You can manage the caches on a node only if that node is active and
joined to the cluster and if the project containing the caches is loaded on
that node.

l Whenever a cache on one node of the cluster is created or updated, any copies of the old cache for that report, on the same node or on other
nodes, are automatically invalidated. This means that only one valid copy
of a cache exists at any time for a report on all nodes in the cluster. For
more information about invalidating caches, see Managing Result Caches,
page 1221.

l The Cache Monitor's hit count number on a machine reflects only the
number of cache hits that machine initiated on any cache in the cluster. If
a different machine in the cluster hits a cache on the local machine, that
hit is not counted on the local machine's hit count. For more information
about the Cache Monitor, see Monitoring Result Caches, page 1217.


For example, ServerA and ServerB are clustered, and the cluster is
configured to use local caching (see Synchronizing Cached Information
Across Nodes in a Cluster, page 1137). A report is executed on ServerA,
creating a cache there. When the report is executed on ServerB, it hits the
report cache on ServerA. The cache monitor on ServerA does not record
this cache hit, because ServerA's cache monitor displays activity initiated
by ServerA only.

l To ensure that History List messages are synchronized correctly between nodes and to reduce system overhead, either enable user affinity
clustering or set the cache backup frequency to 0 (zero). For a discussion
of these settings, including instructions, see Configure Caches in a
Cluster, page 1147.

Maintaining History Lists in a Clustered Environment


User affinity clustering causes Intelligence Server to connect all sessions for
a user to the same node of the cluster. This enables Intelligence Server to
keep the user's History List on one node of the cluster. Resource use is
minimized because the History List is not stored on multiple machines, and
the History List is never out of sync across multiple nodes of the cluster.

MicroStrategy recommends that you enable user affinity clustering in any clustered system. If you are not using user affinity clustering, MicroStrategy
recommends that you set the cache backup frequency to 0 (zero) to ensure
that History List messages are synchronized correctly among nodes. For
more information about this setting, see Configuring Result Cache Settings,
page 1228.

To Configure the History List Governing Settings for a Clustered Environment

1. In Developer, log in to a project source. You must log in as a user with administrative privileges.


2. From the Administration menu, point to Server and then select Configure MicroStrategy Intelligence Server.

3. Expand the Server Definition category, and then select Advanced.

4. Do one of the following:

l To enable user affinity clustering, select the User Affinity Cluster check box.

l If you do not want to enable user affinity clustering, in the Backup frequency (minutes) field, enter 0 (zero).

5. Click OK.

6. Restart Intelligence Server.

MicroStrategy Messaging Services


Messaging Services is a component that is coupled with the Intelligence
Server during installations and upgrades. Messaging Services is configured
out-of-the-box and runs automatically after the installation is completed.

After installation, you can see the following services are automatically
started:

l Apache Kafka (C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-0.10.1.0)

l Apache ZooKeeper (C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-0.10.1.0)

By default, MicroStrategy still sends Intelligence Server diagnostic logs to local disk. Diagnostic logs are sent to the Messaging Services Server after you perform the following:

l Enable MicroStrategy Messaging Services

l Turn On the Sending Log to Messaging Services Feature

Afterwards, you will see Kafka log files created in the Kafka installation folder:

C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp\kafka-logs

Different Kafka topics will be created to store data for different MicroStrategy components.

Configuring Messaging Services after upgrading

By default, MicroStrategy Messaging Services are installed along with the Intelligence server upgrade.

Once you have completed the upgrade process, you need to enable
MicroStrategy Messaging Services. If not, the Intelligence Server continues
to write to the original log.

Messaging Services Workflow for Intelligence Server


l Intelligence Server is the Kafka Producer and can be deployed as a single node or as a cluster.

l Kafka Server can be deployed as a single node or cluster.

Enable MicroStrategy Messaging Services


Messaging Services configuration is saved in the MicroStrategy Intelligence
Server configuration. It can be enabled or disabled on the fly, without
restarting your Intelligence Server.

Command Manager Scripts for Messaging Services

To check if Messaging Services is enabled, execute:

LIST ALL PROPERTIES FOR SERVER CONFIGURATION;

To enable Messaging Services through Command Manager, execute:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:10.15.208.236:9092/batch.num.messages:5000/queue.buffering.max.ms:2000";

In the example above set:

l bootstrap.servers: to your Kafka Server IP address and port number.

l batch.num.messages: to the number of messages to send in one batch when using asynchronous mode.

l queue.buffering.max.ms: to the maximum time to buffer data when using asynchronous mode.

You can specify more Kafka Producer configuration settings in this command
following the same format.
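As the example shows, CONFIGUREMESSAGINGSERVICES takes its settings as a single slash-delimited string of key:value pairs. A small helper like the following (hypothetical; not part of Command Manager or any MicroStrategy API) can build that string from a dictionary of producer settings:

```python
def build_messaging_config(settings):
    """Join Kafka producer settings into the key:value/key:value format
    inferred from the CONFIGUREMESSAGINGSERVICES example above."""
    return "/".join(f"{key}:{value}" for key, value in settings.items())

config = build_messaging_config({
    "bootstrap.servers": "10.15.208.236:9092",
    "batch.num.messages": 5000,
    "queue.buffering.max.ms": 2000,
})
```

The resulting string can then be pasted into the quoted argument of the ALTER SERVER CONFIGURATION statement.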

Turn On the Sending Log to Messaging Services Feature


You can turn on the Sending Log to Messaging Services feature using either
MicroStrategy Web or Command Manager.

From MicroStrategy Web

1. Log in using an Administrator account.

2. Open User Preferences > Project Defaults.

3. Locate Sending Log to Messaging Services in the Features for Customer Feedback section.

4. Select On from the drop-down menu.

5. Click Apply.

From Command Manager

1. Connect to your project source.

2. Execute the following:

ALTER FEATURE FLAG "SENDING LOG TO MESSAGING SERVICES" ON;


Modifying Messaging Services Configuration

Apache Kafka Server

The Kafka Server can be configured by modifying the server.properties file found in:

C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-0.10.1.0\config

Both Apache Kafka Server and ZooKeeper should be restarted after modifying the above configuration file.

MicroStrategy Messaging Services Configuration for Clustered Environments

If you have clustered your Intelligence Servers and want to use a separate
machine to run MicroStrategy Messaging Services after upgrading, complete
the following steps for each node in the cluster.

The minimum number of nodes for a cluster is 3.

Each node must have the following installed:

l MicroStrategy Messaging Services

l Apache Kafka

l Apache Zookeeper


Configure Zookeeper

1. Browse to folder C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-0.10.1.0\config.

2. Edit file zookeeper.properties by adding the following lines:

clientPort=2181
dataDir=C:\\Program Files (x86)\\MicroStrategy\\Messaging Services\\tmp\\zookeeper
maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=10.27.20.16:2888:3888
server.2=10.27.20.60:2888:3888
server.3=10.15.208.236:2888:3888

Each server parameter must contain a unique integer identifier as shown above. You assign the server ID to each machine by creating a text file named myid, one for each server, which resides in that server's data directory, as specified by the configuration file parameter dataDir = C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp\zookeeper.

3. Go to folder C:\Program Files (x86)\MicroStrategy\MessagingServices\Kafka\kafka_2.11-0.9.0.1\config\zookeeper.

4. Create a text file named myid containing the identifying value from the
server parameter name in the zookeeper.properties file.


Configure Kafka

1. Browse to folder C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-0.10.1.0\config.

2. Edit file server.properties, adding zookeeper.connect=10.27.20.16:2181,10.27.20.60:2181,10.15.208.236:2181 to the Zookeeper section.

############################# Zookeeper #############################


# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# zookeeper.connect=localhost:2181
zookeeper.connect=10.27.20.16:2181,10.27.20.60:2181,10.15.208.236:2181

3. Modify the broker.id value to an integer that is unique among the Kafka servers (the default value is 0). For example, for node 10.27.20.60, use 2.

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
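Because every broker.id must be unique across the cluster, it can be worth sanity-checking the server.properties contents from all nodes before starting Kafka. The sketch below is a hypothetical helper, not a MicroStrategy or Kafka tool; it extracts broker.id from each file's text (falling back to Kafka's default of 0 when the line is absent) and flags duplicates:

```python
import re

def broker_ids(properties_texts):
    """Extract the broker.id value from each server.properties text."""
    ids = []
    for text in properties_texts:
        match = re.search(r"^broker\.id=(\d+)", text, re.MULTILINE)
        ids.append(int(match.group(1)) if match else 0)  # Kafka defaults to 0
    return ids

def has_unique_broker_ids(properties_texts):
    """True when no two nodes share the same broker.id."""
    ids = broker_ids(properties_texts)
    return len(ids) == len(set(ids))
```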

Start, Stop, Restart, and Check Status of Messaging Services


On Windows installations, open Task Manager > Services to start, stop,
restart, and check the status of Messaging Services components.

Messaging Services is a component that is coupled with the Intelligence Server during installations and upgrades. Messaging Services is
configured out-of-the-box and runs automatically after the installation is
completed.


After installation, you can see the following services are automatically
started:

l Apache Kafka (/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0)

l Apache ZooKeeper (/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0)

By default, MicroStrategy still sends Intelligence Server diagnostic logs to local disk. Diagnostic logs are sent to the Messaging Services Server after you perform the following:

l Enable MicroStrategy Messaging Services

l Turn On the Sending Log to Messaging Services Feature

Afterwards, you will see Kafka log files created in the Kafka installation folder:

/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/tmp/kafka-logs

Different Kafka topics will be created to store data for different MicroStrategy components.

Configuring Messaging Services after upgrading

By default, MicroStrategy Messaging Services are installed along with the Intelligence server upgrade.

Once you have completed the upgrade process, you need to enable
MicroStrategy Messaging Services. If not, the Intelligence Server
continues to write to the original log.


Messaging Services Workflow for Intelligence Server


l Intelligence Server is the Kafka Producer and can be deployed as a single node or as a cluster.

l Kafka Server can be deployed as a single node or cluster.

Enable MicroStrategy Messaging Services


Messaging Services configuration is saved in the MicroStrategy
Intelligence Server configuration. It can be enabled or disabled on the
fly, without restarting your Intelligence Server.

Command Manager Scripts for Messaging Services

To check if Messaging Services is enabled, execute:

LIST ALL PROPERTIES FOR SERVER CONFIGURATION;

To enable Messaging Services through Command Manager, execute:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:10.15.208.236:9092/batch.num.messages:5000/queue.buffering.max.ms:2000";

In the example above set:

l bootstrap.servers: to your Kafka Server IP address and port number.

l batch.num.messages: to the number of messages to send in one batch when using asynchronous mode.

l queue.buffering.max.ms: to the maximum time to buffer data when using asynchronous mode.


You can specify more Kafka Producer configuration settings in this command following the same format.

Turn On the Sending Log to Messaging Services Feature


You can turn on the Sending Log to Messaging Services feature using
either MicroStrategy Web or Command Manager.

From MicroStrategy Web

1. Log in using an Administrator account.

2. Open User Preferences > Project Defaults.

3. Locate Sending Log to Messaging Services in the Features for Customer Feedback section.

4. Select On from the drop-down menu.

5. Click Apply.

From Command Manager

1. Connect to your project source.

2. Execute the following:

ALTER FEATURE FLAG "SENDING LOG TO MESSAGING SERVICES" ON;

Modifying Messaging Services Configuration

Apache Kafka Server

The Kafka Server can be configured by modifying the server.properties file found in:


/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0

Both Apache Kafka Server and ZooKeeper should be restarted after modifying the above configuration file.

MicroStrategy Messaging Services Configuration for Clustered Environments

If you have clustered your Intelligence Servers and want to use a
separate machine to run MicroStrategy Messaging Services after
upgrading, complete the following steps for each node in the cluster.

The minimum number of nodes for a cluster is 3.

Each node must have the following installed:

l MicroStrategy Messaging Services

l Apache Kafka

l Apache Zookeeper

Configure Zookeeper

1. Browse to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/config.

2. Edit zookeeper.properties by adding the following lines:

maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=10.27.20.16:2888:3888
server.2=10.27.20.60:2888:3888
server.3=10.15.208.236:2888:3888


Each server parameter must contain a unique integer identifier as shown above.

3. Go to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/tmp/zookeeper.

4. Create a file named myid containing the identifying value from the
server parameter name in the zookeeper.properties file.
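Step 4 amounts to writing a single integer into a myid file in each node's ZooKeeper data directory: the machine listed as server.2 gets a file containing 2, and so on. A minimal sketch of that step follows; the helper function is hypothetical, and you would run it once per node with that node's own ID and data directory:

```python
import os

def write_myid(data_dir, server_id):
    """Write this node's integer ID into <data_dir>/myid,
    creating the data directory first if it does not exist."""
    os.makedirs(data_dir, exist_ok=True)
    path = os.path.join(data_dir, "myid")
    with open(path, "w") as f:
        f.write(str(server_id))
    return path
```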

Configure Kafka

1. Browse to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/config.

2. Edit server.properties, adding zookeeper.connect=10.27.20.16:2181,10.27.20.60:2181,10.15.208.236:2181 to the Zookeeper section.

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# zookeeper.connect=localhost:2181
zookeeper.connect=10.27.20.16:2181,10.27.20.60:2181,10.15.208.236:2181

3. Modify the broker.id value to an integer that is unique among the Kafka servers (the default value is 0). For example, for node 10.27.20.60, use 2.

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

Start, Stop, Restart, and Check Status of Messaging Services


Kafka Server and ZooKeeper are registered as services on Linux, so you can use the service command to start, stop, and check their status. The restart command is not supported.

To execute a service command for Kafka Server and Zookeeper, enter:

/etc/init.d/kafka-zookeeper {start|stop|status}

Preview Feature: Access Cubes Regardless of Node Status


MicroStrategy ONE (June 2024) adds support for additional scenarios for cube accessibility, regardless of the status of the node where the cubes were initially published.

Preview features are early versions of features and are not to be used in a production environment, as the core behavior remains subject to change between preview and GA. By selecting to expose preview features, you can access and use them as you would any other functionality. The official versions of preview features are included in subsequent releases.

This functionality is enabled by default in MicroStrategy Cloud Environment (MCE) non-containerized environments. If you are using an on-premises environment, you must Manually Enable Cube Accessibility Regardless of Node Status.

You can use this new functionality for the following scenarios:


1. In a cluster of two Intelligence server nodes, node A and node B are both running. You publish a cube on node A, and node A goes down due to an unplanned incident or maintenance. The cubes that were published on node A are accessible on node B after a restart.

2. In a cluster of two Intelligence server nodes, node A is running and node B is offline for maintenance. You publish a cube on node A and node A stops for maintenance. You can start node B and the cubes published on node A are accessible on node B.

Manually Enable Cube Accessibility Regardless of Node Status

If you incorrectly modify registry values, serious system-wide problems could occur that may require you to reinstall Microsoft Windows. Any edit you perform to the registry is at your own risk. These are user-initiated changes and are not covered by any MicroStrategy warranty. If you are using Microsoft Windows, you should back up the registry and/or update an Emergency Repair Disk prior to making changes.

Windows

1. Open Registry Editor.

2. In the Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\MicroStrategy\DSS Server key, add a new DeploymentFeatureFlags key, if it does not already exist.

3. In the DeploymentFeatureFlags key, add a new string value. The value name is CA/AdvancedCubeAvailability and the value data is true.


4. Restart the Intelligence server.

Linux

1. Edit the MSIReg.reg file under the install directory.

2. Find the [HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\DSS Server\DeploymentFeatureFlags] line or add it to the file if it does not exist.

3. Add "CA/AdvancedCubeAvailability"="true" as a new line under the feature flags line.

4. Restart the Intelligence server.
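Steps 2 and 3 above can be scripted. The sketch below is a hypothetical helper, not a MicroStrategy tool; it returns the MSIReg.reg text with the feature-flag line present directly under the DeploymentFeatureFlags section, appending the section header itself if it is missing:

```python
def ensure_flag(reg_text, section, flag_line):
    """Return reg_text with flag_line present directly under [section];
    the section header is appended first if it does not exist."""
    header = "[" + section + "]"
    lines = reg_text.splitlines()
    if header not in lines:
        lines += [header, flag_line]
    elif flag_line not in lines:
        lines.insert(lines.index(header) + 1, flag_line)
    return "\n".join(lines)

# Section and flag names taken from the steps above.
section = r"HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\DSS Server\DeploymentFeatureFlags"
flag = '"CA/AdvancedCubeAvailability"="true"'
```

Running the helper twice leaves the file unchanged the second time, so it is safe to re-run.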


Manually Disable Cube Accessibility Regardless of Node Status

If you incorrectly modify registry values, serious system-wide problems could occur that may require you to reinstall Microsoft Windows. Any edit you perform to the registry is at your own risk. These are user-initiated changes and are not covered by any MicroStrategy warranty. If you are using Microsoft Windows, you should back up the registry and/or update an Emergency Repair Disk prior to making changes.

Windows

1. Open Registry Editor.

2. In the Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\MicroStrategy\DSS Server key, locate the DeploymentFeatureFlags key.

3. In the DeploymentFeatureFlags key, delete the CA/AdvancedCubeAvailability value.

4. Restart the Intelligence server.

Linux

1. Edit the MSIReg.reg file under the install directory.

2. Find the [HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\DSS Server\DeploymentFeatureFlags] line.

3. Delete "CA/AdvancedCubeAvailability"="true".

4. Restart the Intelligence server.


Additional Notes
This functionality only takes effect when the Intelligence server node(s) that
will stop are in a cluster, meaning one of the following conditions must be
satisfied:

l The node is shut down normally in Maintenance Mode.

l The node is added to the cluster startup list. For more information on
monitoring clusters, see Server Clustering.
o You can check the Cluster Startup column to confirm if the node is in
the cluster startup list.

o To add a node to the cluster startup list:

1. Locate the node.

2. Right-click the red triangle icon in the Cluster Startup column.

3. Choose Add to Cluster Startup.

The cube file directory should be correctly configured and accessible by all Intelligence server nodes in a cluster. For more information, see Configure Caches in a Cluster.

Stopped nodes will appear in the clustering monitor with a Stopped status.


Duplicate cubes may display in the cube monitor when you republish a cube that was originally published by a stopped node. In these cases, the new cube is used, while the old cube remains visible until the original node starts. This is a limitation of the current cube architecture.

Connect MicroStrategy Web to a Cluster


You connect MicroStrategy Web to a cluster using MicroStrategy Web's
Administration page. If the Intelligence Servers are on the same subnet as
MicroStrategy Web and are accessible by User Datagram Protocol (UDP),
the MicroStrategy Web Administration page can dynamically list the servers
by looking for the listener service running on the machines. If the server that you want to connect to is listed, you can connect from this page.
Alternatively, you can type the server name.

If the machine selected is part of a cluster, the entire cluster appears on the
Administration page and is labeled as a single cluster. Once MicroStrategy
Web is connected to a cluster, all nodes reference the same project. Load
balancing directs new Web connections to the least loaded node, as
measured by user connections. Once connected to a node, the Web user
runs all MicroStrategy activity on the same node.

If nodes are manually removed from the cluster, projects are treated as
separate in MicroStrategy Web, and the node connected to depends on
which project is selected. However, all projects are still accessing the same
metadata.

Clustering and Firewalls


Connecting to Intelligence Server from MicroStrategy Web through a firewall
is the same process regardless of the cluster state. The only difference is
that the allowable ports, sources, and destinations must be available between
MicroStrategy Web and each of the nodes in the cluster.


Exporting to PDF or Excel in a Clustered Environment


In MicroStrategy Web, users can export reports to PDF or to Excel for later
viewing. Users must have the Write privilege for the Inbox folder on the
Intelligence Server machine to be able to export reports.

To export to PDF or Excel in a clustered environment, users must have the Write privilege for the ClusterInBox folder on all Intelligence Servers in the cluster. For instructions on how to set up the ClusterInBox folder, see Configure Caches in a Cluster, page 1147.

Node Failure
MicroStrategy Web users can be automatically connected to another node
when a node fails. To implement automatic load redistribution for these
users, on the Web Administrator page, under Web Server select Security,
and in the Login area select Allow Automatic Login if Session is Lost.


IMPROVING RESPONSE TIME: CACHING


A cache is a result set that is stored on a system to improve response time in future requests. With caching, users can retrieve results from Intelligence Server rather than re-executing queries against a database.

Intelligence Server supports the following types of caches:

l Page caches: When a user views a published dashboard, the Intelligence Server generates one cache per page, so that the cache can be hit when the user switches between pages.

l Result caches: Report and document results that have already been
calculated and processed, that are stored on the Intelligence Server
machine so they can be retrieved more quickly than re-executing the
request against the data warehouse. For more information on these, see
Result Caches, page 1203.

Intelligent Cubes can function in a similar fashion to result caches: they allow you to store data from the data warehouse in Intelligence Server
memory, rather than in the database. Intelligent Cubes are part of the
OLAP Services add-on to Intelligence Server. For detailed information
about Intelligent Cubes, see the In-memory Analytics Help.

l The History List is a way of saving report results on a per-user basis. For
more information, see Saving Report Results: History List, page 1240.

l Element caches: Most-recently used lookup table elements that are stored in memory on the Intelligence Server or Developer machines so they can be retrieved more quickly. For more information on these, see Element Caches, page 1261.

l Object caches: Most-recently used metadata objects that are stored in memory on the Intelligence Server and Developer machines so they can be retrieved more quickly. For more information on these, see Object Caches, page 1276.

You specify settings for all cache types except History List under Caching in
the Project Configuration Editor. History List settings are specified in the
Intelligence Server Configuration Editor.


Result, element, and object caches are created and stored for individual
projects; they are not shared across projects. History Lists are created and
stored for individual users.

To make changes to cache settings, you must have the Administer Caches
privilege. In addition, changes to cache settings do not take effect until you
stop and restart Intelligence Server.

For additional ways to improve your MicroStrategy system's response time, see Chapter 8, Tune Your System for the Best Performance.

Introduction to Page Caches


Because users may switch dashboard pages frequently in Library, page caches have been introduced to improve dashboard performance. Intelligence Server generates one cache for each dashboard page, so that the cache can be hit when the user switches between pages. This differs from result caches, which record only a partial result of the dashboard. Result caches are generated when running a dashboard from the MicroStrategy Web interface or MicroStrategy Mobile App.

For example, you switch to page 1 of a multi-page dashboard, apply a chapter-level filter, and then save the dashboard. The result cache would only record the filtered results on page 1. It doesn't record the results of
only record the filtered results on page 1. It doesn't record the results of
other pages, so if you execute the dashboard again, the result cache is hit.
However, if you switch to page 2, no cache is hit because the result cache
won't include the page 2 results.

For Library Web and the Library Mobile App on Android, the page cache is in
HTML5 format, and one page corresponds to one page cache. For the
Library Mobile App on iOS, the page cache is in Flash format, and one page
corresponds to one page cache.

Types of Page Caches

A base dashboard page cache is a page cache for a dashboard shortcut that contains:


l No manipulations

l Only page-switching manipulations

l A reset dashboard shortcut

A dashboard shortcut page cache is a page cache for a dashboard shortcut that contains any manipulations other than the ones mentioned above.

A bookmark page cache is a page cache generated for a bookmark.

Page Cache Generation


A page cache is generated in the following cases:

On-the-fly:

l When you open a dashboard from MicroStrategy Library, if there is no valid cache, a page cache will be generated on-the-fly.

l When you switch pages on a dashboard, a page cache for the corresponding page will be generated on-the-fly.

l If you perform any analytical manipulations in a dashboard, such as filtering, sorting, or drilling, the Intelligence Server will stop generating page caches on-the-fly for the affected pages.

Affected pages include pages that contain targets affected by the manipulations. For example, if you exclude an element from a visualization, the page holding that visualization is the only affected page. However, if you change a selection from the chapter level, all pages under that chapter are affected. In this case, the Intelligence Server will not generate new server caches when you switch to the affected pages.


The following manipulations have the listed affected pages:

l Switch pages: None

l Link to other pages from images or text: None

l Keep only or exclude on a visualization: Current page

l Filter visualizations in the same page: Current page

l Go to targets from a visualization: Current and target pages

l Change chapter-level filter panel: All pages under the same chapter

l Drill on the page: Current page

l Sort on a grid or visualization: Current page

l If you click the Reset button from the dashboard title bar or from the dashboard cover in Library, a page cache will be generated on-the-fly.
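The affected-page rules in the list above can be summarized as a small lookup function. This is an illustrative sketch only; the manipulation names and the helper itself are hypothetical, not MicroStrategy identifiers:

```python
def affected_pages(manipulation, current_page, chapter_pages, target_page=None):
    # Hypothetical manipulation names mirroring the affected-page rules above;
    # returns the set of page numbers affected by the manipulation.
    if manipulation in ("switch_pages", "link_to_other_page"):
        return set()                        # no pages affected
    if manipulation == "change_chapter_filter":
        return set(chapter_pages)           # all pages under the same chapter
    if manipulation == "go_to_target":
        return {current_page, target_page}  # current and target pages
    # keep only/exclude, filter in the same page, drill, sort
    return {current_page}
```

For example, a sort affects only the current page, while changing a chapter-level filter affects every page under that chapter.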

When closing a dashboard

When you close a dashboard and return to Library, the Intelligence Server
will generate page caches for the current page and several pages before
and after the current page, if there are no valid page caches. By default, at
most 10 page caches will be generated.
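The "current page and several pages before and after" behavior can be sketched as follows. The alternating outward expansion is an assumption for illustration; the documentation does not specify the exact selection order:

```python
def pages_to_cache(current, total, limit=10):
    # Collect the current page, then alternate between the pages before
    # and after it, until `limit` pages (or every page) have been collected.
    pages = [current]
    offset = 1
    target = min(limit, total)
    while len(pages) < target:
        if current - offset >= 1:
            pages.append(current - offset)
        if len(pages) < target and current + offset <= total:
            pages.append(current + offset)
        offset += 1
    return sorted(pages)
```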

When adding or updating a bookmark

When a bookmark is created or an existing bookmark is updated, the Intelligence Server will generate page caches for the current page and several pages before and after the current page, if there are no valid page caches. By default, at most 10 page caches will be generated.

When logging out of Library or if the user session times out

When you log out of Library or the user session times out, the Intelligence Server will generate page caches for the dashboards that are active in the server message. For each dashboard, the Intelligence Server will generate page caches for the last viewed page and several pages before and after the last viewed page, if there are no valid page caches. By default, at most 10 page caches will be generated.

Server message refers to the last several dashboards that were run from Library before logging out or the session timing out. The number of server messages is defined per user session and is restricted by the working set limit. The working set limit can be configured via MicroStrategy Web Preferences.

By cache update subscriptions

Cache scheduling is supported by Distribution Services. You can create a cache subscription for a base dashboard and specify the Users and/or User Groups.

On triggering the cache subscription, different caches are generated:

l Case 1: The base dashboard is not published to the specified User or User Group. Cache type generated: None.

l Case 2: The base dashboard is published to the specified User, but the User hasn't logged in to Library yet. Cache type generated: Base dashboard page caches.

l Case 3: The base dashboard is published to the specified User, and the User only switches pages or resets the dashboard. Cache type generated: Base dashboard page caches.

l Case 4: The base dashboard is published to the specified User, and the User changes the base dashboard. Cache type generated: Dashboard shortcut page caches.

l Case 5: The base dashboard is published to the specified User Group, and the User Group contains User1 and User2. After the base dashboard is published to the User Group, User1 has logged in to their Library, but User2 hasn't logged in to their Library. The cache generation for User1 follows Case 3, and the cache generation for User2 follows Case 2. Cache type generated: for User1, Base dashboard page caches or Dashboard shortcut page caches, depending on the changes made; for User2, Base dashboard page caches.

Page Cache Matching


Page caches follow the same cache matching algorithm as other caches to determine whether a cache can be used to satisfy a request.

When a page cache is generated, the Intelligence Server saves manipulations as part of the cache key. When deciding whether an existing page cache can be hit, the Intelligence Server compares a user's saved manipulations with the manipulations saved in the page cache key. The Intelligence Server only hits the cache if the manipulations are the same. When generating and hitting page caches, the Intelligence Server only counts the manipulations that can affect the page. To see when a page is affected by manipulations, see Page Cache Generation.
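The comparison can be sketched as follows, assuming each manipulation records the set of pages it can affect (a hypothetical structure, not the actual cache key format):

```python
def page_cache_hit(request_manipulations, cache_key_manipulations, page):
    # Only manipulations that can affect the requested page are compared;
    # the cache is hit when those relevant manipulations are identical.
    def relevant(manipulations):
        return [m for m in manipulations if page in m["affected_pages"]]
    return relevant(request_manipulations) == relevant(cache_key_manipulations)
```

For example, a sort that affects only page 2 prevents a hit for page 2 but does not prevent a hit for page 1.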

Cache Priority Queues


The cache manager maintains two least recently used (LRU) queues, one for high-priority caches and one for low-priority caches.

For MicroStrategy versions prior to 11.0, the cache manager maintained only one LRU queue for caches. If the cache pool became full, the least recently used cache was swapped out to free memory.

The following are the pre-defined priorities for caches generated under different circumstances:


l All traditional document caches: High

l Page cache that has no manipulation saved as a cache key: High

l Page cache generated by a cache update subscription: High

l Page cache generated when a bookmark is opened: High

l Page cache generated on the fly that has manipulations saved as a cache key: Low

If a low-priority cache is hit by a bookmark, the priority will be updated from Low to High.

There is a soft limit of 20% of the cache pool for low-priority caches. This prevents low-priority caches from never being generated when high-priority caches fill up the cache pool.

When a new cache is going to be generated, if the cache pool is not full, the
cache can be generated successfully. If the cache pool is full, the cache-
swapping logic is triggered. If the low-priority caches already occupy more
than 20% of the cache pool, then they will be deleted until the total low-
priority cache size is equal to or below the limit. If the new cache still needs
more memory, the high-priority caches will be swapped out to free up more
memory, until the new cache can be generated.
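The swapping logic above can be sketched with two ordered maps. The class below is an illustrative model, not actual Intelligence Server internals; sizes are in abstract units:

```python
from collections import OrderedDict

class PriorityCachePool:
    """Sketch of the two-queue cache pool described above."""

    def __init__(self, capacity, low_share=0.20):
        self.capacity = capacity
        self.low_limit = capacity * low_share  # 20% soft limit for low priority
        self.high = OrderedDict()  # cache_id -> size, least recently used first
        self.low = OrderedDict()

    def _used(self):
        return sum(self.high.values()) + sum(self.low.values())

    def add(self, cache_id, size, priority):
        if self._used() + size > self.capacity:
            # Delete low-priority caches until they are at or below the soft limit
            while self.low and sum(self.low.values()) > self.low_limit:
                self.low.popitem(last=False)
            # If more memory is still needed, swap out high-priority caches
            while self.high and self._used() + size > self.capacity:
                self.high.popitem(last=False)
        target = self.high if priority == "high" else self.low
        target[cache_id] = size

    def hit_by_bookmark(self, cache_id):
        # A bookmark hit promotes a low-priority cache to high priority
        if cache_id in self.low:
            self.high[cache_id] = self.low.pop(cache_id)
```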

Result Caches
A result cache is a cache of an executed report or document that is stored on
Intelligence Server. Result caches are either report caches or document
caches.

You cannot create or use result caches in a direct (two-tier) environment.


Caches are stored in Intelligence Server, not retained on Developer.


Report caches can be created or used for a project only if the Enable report
server caching check box is selected in the Project Configuration Editor
under the Caching: Result Caches: Creation category.

Document caches can be created or used for a project only if the Enable
Document Output Caching in Selected Formats check box is selected in
the Project Configuration Editor under the Caching: Result Caches:
Creation category, and one or more formats are selected.

Document caches are created or used only when a document is executed in


MicroStrategy Web. Document caches are not created or used when a
document is executed from Developer.

By default, result caching is enabled at the project level. It can also be set
per report and per document. For example, you can disable caching at the
project level, and enable caching only for specific, frequently used reports.
For more information, see Configuring Result Cache Settings, page 1228.

A result cache is created when you do any of the following:

l In MicroStrategy Web or Developer, execute a saved report or document containing only static objects.

l In MicroStrategy Web or Developer, execute a saved report or document containing one or more prompts. Each unique set of prompt selections corresponds to a distinct cache.

l In MicroStrategy Web, execute a template and filter combination.

l Execute a report or document based on a schedule. The schedule may be associated with MicroStrategy Web, Developer, Mobile, Distribution Services, or Narrowcast Server. For more information about scheduling reports, see Scheduling Reports and Documents: Subscriptions, page 1333.

Caching does not apply to a drill report request because the report is
constructed on the fly.


When a user runs a report (or, from MicroStrategy Web, a document), a job
is submitted to Intelligence Server for processing. If a cache for that request
is not found on the server, a query is submitted to the data warehouse for
processing, and then the results of the report are cached. The next time
someone runs the report or document, the results are returned immediately
without having to wait for the database to process the query.

The Cache Monitor displays detailed information about caches on a machine; for more information, see Monitoring Result Caches, page 1217.

You can easily check whether an individual report hit a cache by viewing the
report in SQL View. The image below shows the SQL View of a
MicroStrategy Tutorial report, Sales by Region. The fifth line of the SQL
View of this report shows "Cache Used: Yes."

Client-side analytical processing, such as ad hoc data sorting, pivoting, view filters, derived metrics, and so on, does not cause Intelligence Server to create a new cache.

This section discusses the following topics concerning result caching:


l Cache Management Best Practices, page 1207

l Types of Result Caches, page 1209

l Location of Result Caches, page 1211

l Cache Matching Algorithm, page 1213

l Disabling Result Caching, page 1216

l Monitoring Result Caches, page 1217

l Managing Result Caches, page 1221

l Configuring Result Cache Settings, page 1228

Server-Side Behavior Changes for Cache Settings


Intelligence Server does not enforce the Formatted Documents - Maximum Number of Caches setting if the value is set below the default value of 1000000 (1M).

Intelligence Server does not enforce the following three cache count limit settings if the values are set below the default value of 100000 (100K):

l Dataset - Maximum Number of Caches

l Formatted Documents - Maximum Number of Caches

l Maximum number of cubes

When the registry setting for the reserved disk space is set to 0, Intelligence Server falls back to using the cache count limit settings as the maximum number of document, report, or cube cache entries.

Make Document Cache Expiration dependent on the lifetime of its datasets (standalone or embedded)

Document Cache Expiration based directly on Cache Duration (hrs) has been retired for newly created document caches. The DssProjectReportCacheLifeTime setting remains in effect for existing document caches that remain unchanged.


Monitor the disk and stop generating caches for document, report, or cube if the free disk space is less than a specific value

Intelligence Server uses a single absolute value representing the remaining free disk space as a threshold to stop generating caches; the default value is 10 GB.

This setting can be configured through the registry. If the registry setting is 0, MicroStrategy falls back to using the three cache count limit settings.

The registry setting for the disk size:

l Key: SOFTWARE\\MicroStrategy\\Data Sources\\CastorServer

l Value: Reserved Disk Space(GB)
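The threshold check can be sketched as follows. This is a hypothetical helper for illustration; the real check runs inside Intelligence Server, not as a script:

```python
import shutil

def can_generate_cache(cache_dir, reserved_gb=10):
    # Stop generating document, report, or cube caches when free disk
    # space drops below the reserved threshold (default 10 GB). A value
    # of 0 means the cache count limit settings apply instead.
    if reserved_gb == 0:
        return None  # caller falls back to the cache count limits
    free_gb = shutil.disk_usage(cache_dir).free / (1024 ** 3)
    return free_gb >= reserved_gb
```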

Update the page cache number limit from 10 to 20 for a certified dashboard

If a value larger than 20 is already set for a certified dashboard, the value is
kept.

At least 20 page caches are generated for any shortcuts or bookmarks for a
certified dashboard.

For example, suppose this setting was customized on an installation, with the customized value set to X.

Intelligence Server will:

1. Generate at most X pages of caches for the dashboard shortcut or bookmark for an uncertified dashboard.

2. Generate at most Max(X, 20) pages of caches for the dashboard shortcut or bookmark for a certified dashboard.
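The effective limit can be expressed as a one-line rule (an illustrative helper, not a MicroStrategy API):

```python
def page_cache_limit(configured_limit, certified):
    # Certified dashboards always get at least 20 page caches;
    # uncertified dashboards use the configured value as-is.
    return max(configured_limit, 20) if certified else configured_limit
```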

Cache Management Best Practices


Good result cache management practices depend on a number of factors, such as the number of reports and documents in the project, the available disk space for caches, the amount of personalization in reports and documents, and whether you are using clustered Intelligence Servers.

MicroStrategy recommends the following best practices for cache management:

l The drive that holds the result caches should always have at least 10% of
its capacity available.

l In a project with many reports, consider enabling caching on a report-by-report basis. Use MicroStrategy Enterprise Manager to determine which reports are used often and thus are good candidates for caching. For information about Enterprise Manager, see the Enterprise Manager Help. For information about enabling caching per report, see Configuring Result Cache Settings, page 1228.

l Disable caching for reports and documents with a high amount of personalization, such as prompt answers or security filters.

l To reuse results for reports and documents with a high amount of personalization, use MicroStrategy OLAP Services to create Intelligent Cubes. For more information about OLAP Services, see the In-memory Analytics Help.

l If results are cached by user ID (see Configuring Result Cache Settings, page 1228), it may be better to disable caching and instead use the History List. For information about the History List, see Saving Report Results: History List, page 1240.

l Be aware of the various ways in which you can tune the caching properties
to improve your system's performance. For a list of these properties, and
an explanation of each, see Configuring Result Cache Settings, page
1228.

l If you are using clustered Intelligence Servers, caching presents additional maintenance requirements. For information on maintaining caches in a clustered system, see Maintain Result Caches and History Lists in a Clustered Environment, page 1176.


Types of Result Caches


The following types of result caches are created by Intelligence Server:

l Matching Caches, page 1209

l History Caches, page 1209

l Matching-History Caches, page 1210

l XML Caches, page 1210

All document caches are Matching caches; documents do not generate History caches or XML caches. Intelligent Cube reports do not create Matching caches.

Matching Caches
Matching caches are the results of reports and documents that are retained for later use by the same requests. In general, Matching caches are the type of result caches that are used most often by Intelligence Server.

When result caching is enabled, Intelligence Server determines for each request whether it can be served by an already existing Matching cache. If
there is no match, it then runs the report or document on the database and
creates a new Matching cache that can be reused if the same request is
submitted again. This caching process is managed by the system
administrator and is transparent to general users who benefit from faster
response times.

History Caches
History caches are report results saved for future reference in the History
List by a specific user. When a report is executed, an option is available to
the user to send the report to the History List. Selecting this option creates a
History cache to hold the results of that report and a message in the user's
History List pointing to that History cache. The user can later reuse that
report result set by accessing the corresponding message in the History List.


It is possible for multiple History List messages, created by different users, to refer to the same History cache.

The main difference between Matching and History caches is that a Matching cache holds the results of a report or document and is accessed during execution; a History cache holds the data for a History List message and is accessed only when that History List message is retrieved.

For more information about History Lists, see Saving Report Results: History
List, page 1240.

Matching-History Caches
A Matching-History cache is a Matching cache that is referenced by at least
one History List message. It is a single cache composed of a Matching cache
and a History cache. Properties associated with the Matching caches and
History caches discussed above correspond to the two parts of the
Matching-History caches.

XML Caches
An XML cache is a report cache in XML format that is used for personalized
drill paths. It is created when a report is executed from MicroStrategy Web,
and is available for reuse in Web. It is possible for an XML cache to be
created at the same time as its corresponding Matching cache. XML caches
are automatically removed when the associated report or History cache is
removed.

To disable XML caching, select the Enable Web personalized drill paths
option in the Project definition: Drilling category in the Project
Configuration Editor. Note that this may adversely affect Web performance.
For more information about XML caching, see Controlling Access to Objects:
Permissions, page 89.


Location of Result Caches


Separate result caches are created for each project on an Intelligence
Server. They are kept in memory and on disk. The server manages the
swapping of these caches between memory and disk automatically. Caches
are automatically unloaded, beginning with the least recently used cache,
until the maximum memory governing limits are reached.

The amount of memory available to store result caches is limited by the Memory Storage settings. For information, see Configuring Result Cache Settings, page 1228.

Result Cache Files


By default, result cache files are stored in the directory where Intelligence Server is installed, under \Caches\ServerDefinition\Machine Name\. Report caches are stored in this folder; document caches are stored in the \RWDCache\ subfolder of this folder.

Report Cache File Format

Report caches are stored on the disk in a binary file format. Each report
cache has two parts:

l Cache<cache ID>_Info.che contains information about the cache, such as the user and prompt answers.

l Cache<cache ID>.che contains the actual data for the cache.

Report Cache Index Files

Intelligence Server creates two types of index files to identify and locate
report caches:

l CachePool.idx is an index file that contains a list of all Matching and History caches and pointers to the caches' locations.


l CacheLkUp.idx is a lookup table that contains the list of all Matching caches and their corresponding cache keys. Incoming report requests are matched to report cache keys in this table to determine whether a Matching cache can be used. This process is called cache matching (see Cache Matching Algorithm, page 1213). This lookup table is always backed up to disk when Intelligence Server shuts down. Additional backups are based on the Backup frequency and the Lookup Cleanup Frequency settings (see Configuring Result Cache Settings, page 1228).

Document Cache File Format

Document caches are stored on the disk in a binary file format. Each
document cache has two parts:

l <cache ID>_info.rwdc contains information about the cache, such as the user and prompt answers.

l <cache ID>.rwdc contains the actual data for the cache.

Document Cache Index Files

Intelligence Server creates two types of index files to identify and locate
document caches:

l RWDPool.idx is an index file that contains a list of all Matching caches and pointers to the caches' locations.

l RWDLkUp.idx is a lookup table that contains the list of all Matching caches and their corresponding cache keys. Incoming document requests from Web are matched to document cache keys in this table to determine whether a Matching cache can be used. This process is called cache matching (see Cache Matching Algorithm, page 1213). The lookup table is always backed up to disk when Intelligence Server shuts down. Additional backups are based on the Backup frequency and the Lookup Cleanup Frequency settings (see Configuring Result Cache Settings, page 1228).


Cache Matching Algorithm


When a user requests a report, or a document from Web, cache keys are
used to determine whether a cache can be used to satisfy the request. If the
cache keys in the request match the ones in the result cache, the cached
report or document results are used. The matching process takes several
steps that involve a number of cache keys, and each step is explained in
detail below. If at any step, the matching is not successful, then the cache is
not used and the request executes against the data warehouse.

Step 1: Check the IDs


To check whether the requested report/document and the cached
report/document are the same, Intelligence Server compares the ID and
Version ID of the two. If they match, the process continues to Step 2.

Alternatively, Intelligence Server checks the Template ID, Template Version ID, Filter ID, and Filter Version ID in the requested report/document against the ones in the cache. If all of them match, the process continues to Step 2.

If you are not using MicroStrategy OLAP Services, any modification to a report, even a simple formatting change or an Access Control List (ACL) modification, changes the Template Version ID and invalidates the report cache. With MicroStrategy OLAP Services, the cache is invalidated only if the contents of the Report Objects pane change. For more information about OLAP Services, see Intelligent Cubes, page 1107.

Step 2: Check the Personalization Impact


If the report or document contains prompts, Intelligence Server checks the
prompt answers selected for the report. Different prompt answers change
the content of the report; therefore, the cache is not used if the prompt
answers in the report request are not the same as the ones in the report
cache. Each set of distinct prompt answers creates a distinct cache.


Step 3: Check the Security Impact


Intelligence Server makes sure that users with different security filters
cannot access the same cache. Intelligence Server compares the Security
ID and Security Version ID of all the security filters applied to the user in
the request, including those inherited from the groups to which they belong,
with the security profile of the user who originated the cache.

Step 4: Check the Modification Impact


Intelligence Server does not use a cache if an object in the report/document
changes. To check this, Intelligence Server compares the IDs and Version
IDs of all application objects used in the requested report/document with the
ones used in the cached report/document. If any of these IDs are different,
the existing cache is automatically invalidated.

Step 5: Check the Data Language


Intelligence Server makes sure a cache is not used if the user running the
report is using a different language than the user who created the cache.
Each different language creates a different cache.

Step 6: Check the Database Security Impact (Optional)


You may find it necessary to add optional criteria, listed below, to the cache matching process. These criteria are useful if database security views and connection mapping are used to ensure that users with different security profiles, who see different data from the data warehouse, cannot access the same cache. For information about connection mapping, see Controlling Access to the Database: Connection Mappings, page 113.

l User ID: To match caches by the global unique identifier (GUID) of the
user requesting the cache, in the Caching: Result Caches: Creation
category in the Project Configuration Editor, select the Create caches per
user check box.


l Database login: To match caches by the GUID of the database login assigned to the user via a connection mapping, in the Caching: Result Caches: Creation category in the Project Configuration Editor, select the Create caches per database login check box.

This option is especially useful if database warehouse authentication is used. For more information, see Implement Database Warehouse Authentication, page 614.

l Database connection: To match caches by the GUID of the database connection assigned to the user via a connection mapping, in the Caching: Result Caches: Creation category in the Project Configuration Editor, select the Create caches per database connection check box.

Step 7: Check Additional Criteria for Documents


Document caches have additional criteria that must match before a cache
can be used:

l The Export Option (All or Current Page) and Locale of the document
must match the cache.

l The selector and group-by options used in the document must match those
used in the cache.

l The format of the document (PDF, Excel, HTML, or XML/Flash) must match the format of the cache.

l In Excel, the document and cache must both be either enabled or disabled
for use in MicroStrategy Office.

This information applies to the legacy MicroStrategy Office add-in, the add-in for Microsoft Office applications which is no longer actively developed.


It was substituted with a new add-in, MicroStrategy for Office, which supports Office 365 applications. The initial version does not yet have all the functionalities of the previous add-in.

If you are using MicroStrategy 2021 Update 2 or a later version, the legacy MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the
Readme and the MicroStrategy for Office Help.

l In XML/Flash, the mode of the document (View, Interactive, Editable, Flash) must match the mode of the cache.

l In XML/Flash, the Web preferences of the user executing the document must match the Web preferences of the user who created the cache.
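Steps 1 through 5 can be condensed into a sketch like the following. The dictionary keys are illustrative field names only, not actual Intelligence Server APIs, and the optional database security criteria and document-specific criteria are omitted:

```python
def result_cache_matches(request, cache):
    # Step 1: same definition, matched by report ID or by template/filter IDs
    same_ids = (request["id"], request["version"]) == (cache["id"], cache["version"])
    part_keys = ("template_id", "template_version", "filter_id", "filter_version")
    same_parts = all(request[k] == cache[k] for k in part_keys)
    if not (same_ids or same_parts):
        return False
    # Step 2: personalization - each distinct set of prompt answers is a distinct cache
    if request["prompt_answers"] != cache["prompt_answers"]:
        return False
    # Step 3: security - the user's security filter IDs/versions must match
    if request["security_filters"] != cache["security_filters"]:
        return False
    # Step 4: modification - version IDs of all objects used must be unchanged
    if request["object_versions"] != cache["object_versions"]:
        return False
    # Step 5: data language
    return request["language"] == cache["language"]
```

If any step fails, the request executes against the data warehouse instead of hitting the cache.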

Disabling Result Caching


By default, result caching is enabled in Intelligence Server. If the
performance gain is marginal compared to the added overhead, you can
disable report caching. You may want to disable caching in the following
situations:

l The data warehouse is updated more than once a day.

l Most reporting is ad hoc so caching provides little value.

l Reports are heavily prompted, and the answer selections to the prompts
are different each time the reports are run.

l Few users share the same security filters when accessing the reports.

If you disable result caching for a project, you can set exceptions by
enabling caching for specific reports or documents. For more information,
see Configuring Result Cache Settings, page 1228.


To Disable Result Caching

1. Open the Project Configuration Editor for the project.

2. Expand Caching, expand Result Caches, then select Creation.

3. To disable report and document caching, clear the Enable report server caching check box.

4. To disable document caching but not report caching, leave the Enable
report server caching check box selected and clear the Enable
document output caching in selected formats check box.

5. Click OK.

Monitoring Result Caches


You use the Cache Monitor in Developer to monitor result caches. When
result caching is enabled and a user executes a report or document, a cache
entry is listed in the Cache Monitor.

You can also use the Diagnostics Configuration Tool for diagnostic tracing of
result caches (see Diagnostics and Performance Logging Tool, page 1220),
and Command Manager to automatically update information about result
caches (see Command Manager, page 1221).

A cache's hit count is the number of times the cache is used. When a report
is executed (which creates a job) and the results of that report are retrieved
from a cache instead of from the data warehouse, Intelligence Server
increments the cache's hit count. This can happen when a user runs a report
or when the report is run on a schedule for the user. This does not include
the case of a user retrieving a report from the History List (which does not
create a job). Even if that report is cached, it does not increase its hit count.


To View All Report or Document Caches for a Project in the Cache Monitor

1. In Developer, log in to a project source. You must log in as a user with the Monitor Caches privilege.

2. Expand Administration, then expand System Monitors, then expand Caches, and then select Reports or Documents.

3. Select the project for which you want to view the caches and click OK.

4. To view additional details about a cache, double-click that cache.

5. To view additional details about all caches, from the View menu select
Details.

6. To change the columns shown in the Details view, right-click in the Cache Monitor and select View Options. Select the columns you want to see and click OK.

7. To view caches from a different project, right-click in the Cache Monitor and select Filter.

8. Select the project for which you want to view caches and click OK.

9. To display History and XML caches in the Report Cache Monitor, right-
click in the Cache Monitor and select Filter. Select Show caches for
History List messages or Show XML caches and click OK.

You can perform any of the following options after you select one or more
caches and right-click:

l Delete: Removes the cache from both memory and disk

l Invalidate: Marks the cache as unusable, but leaves a reference to it in users' History Lists (if any)

l Load from disk: Loads into memory a cache that was previously unloaded to disk

l Unload to disk: Removes the cache from memory and stores it on disk

For detailed information about these actions, see Managing Result Caches,
page 1221.

Cache Statuses
A result cache's status is displayed in the Report Cache Monitor using one or more of the following letters:

R: The cache is valid and ready to be used.

P: The cache is currently being updated.

I: The cache has been invalidated, either manually or by a change to one of the objects used in the cache. It is no longer used, and will be deleted by Intelligence Server. For information about invalid caches, see Managing Result Caches, page 1221.

E: The cache has been invalidated because its lifetime has elapsed. For information about expired caches, see Managing Result Caches, page 1221.

L: The cache is loaded into Intelligence Server memory.

U: The cache file has been updated.

D: The cache has been updated in Intelligence Server memory since the last time it was saved to disk.

F: The cache has been unloaded, and exists as a file on disk instead of in Intelligence Server memory. For information about loading and unloading caches, see Managing Result Caches, page 1221.
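A cache can display several of these letters at once (for example, a valid cache that is loaded in memory shows both R and L). The following Python sketch is illustrative only, not part of any MicroStrategy API; it simply restates the status letters above as a small decoder:

```python
# Illustrative only: decodes a Cache Monitor status string such as "RL"
# into the meanings listed above. Not a MicroStrategy API.
STATUS_LETTERS = {
    "R": "valid and ready to be used",
    "P": "currently being updated",
    "I": "invalidated; will be deleted by Intelligence Server",
    "E": "invalidated because its lifetime has elapsed",
    "L": "loaded into Intelligence Server memory",
    "U": "cache file has been updated",
    "D": "updated in memory since last saved to disk",
    "F": "unloaded; exists only as a file on disk",
}

def decode_status(status: str) -> list[str]:
    """Map each letter of a cache status string to its description."""
    return [STATUS_LETTERS[letter] for letter in status.upper()]

print(decode_status("RL"))
# ['valid and ready to be used', 'loaded into Intelligence Server memory']
```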

Cache Types
Result caches can be of the following types:

Matching: The cache is valid and available for use. All document caches are Matching caches.

History: The cache is referenced in at least one History List message.

Matching-History: The cache is valid and available for use, and is also referenced in at least one History List message.

XML: (Web only) The cache exists as an XML file and is referenced by the Matching cache. When the corresponding Matching cache is deleted, the XML cache is deleted.

For more information about each type of cache, see Types of Result Caches,
page 1209.

Diagnostics and Performance Logging Tool


The Intelligence Server logs are often useful when troubleshooting issues
with report caching in a MicroStrategy system. You can view these logs and
configure what information is logged using the Diagnostics and Performance
Logging Tool. For more information, see Configure What is Logged, page
858.

To Enable Diagnostic Tracing of Result Caches

1. Open the MicroStrategy Diagnostics and Performance Logging Tool. (From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then select Diagnostics Configuration.)

2. In the Select Configuration drop-down list, select CastorServer Instance.

3. Clear the Use Machine Default Diagnostics Configuration check box.

4. In the Report Server component, in the Cache Trace dispatcher, click the File Log (currently set to <None>) and select <New>.

5. Enter the following information in the editor:

   • Select Log Destination: <New>
   • File Name: cacheTrace
   • Max File Size: 5000
   • File Type: Diagnostics

6. Click Save, and then click Close.

7. In the Report Server component, in the Cache Trace dispatcher, click the File Log (currently set to <None>) and select cacheTrace.

Command Manager
You can also use the following Command Manager scripts to monitor result caches:

• LIST [ALL] REPORT CACHES [FOR PROJECT "<project_name>"] lists all report caches on Intelligence Server for a project.

• LIST [ALL] PROPERTIES FOR REPORT CACHE "<cache_name>" IN PROJECT "<project_name>" lists information about a report cache.

By default, these scripts are at C:\Program Files (x86)\MicroStrategy\Command Manager\Outlines\Cache_Outlines.

For more information about Command Manager, see Chapter 15, Automating
Administrative Tasks with Command Manager, or the Command Manager
Help (from within Command Manager, press F1).
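For example, you could generate such a listing script and pass it to the cmdmgr command-line executable. The Python sketch below is a hypothetical wrapper, not MicroStrategy code: the project source name, credentials, output file names, and the cmdmgr flags (-n, -u, -p, -f, -o) are assumptions about a typical installation, so verify them against your own Command Manager documentation before use.

```python
from pathlib import Path

# Hypothetical install path and connection details; adjust for your environment.
CMDMGR = r"C:\Program Files (x86)\MicroStrategy\Command Manager\cmdmgr.exe"

def build_listing_script(project_name: str) -> str:
    """Build a one-statement Command Manager script that lists all
    report caches for a project (statement from the outlines above)."""
    return f'LIST ALL REPORT CACHES FOR PROJECT "{project_name}";\n'

def list_caches(project_name: str, project_source: str) -> list[str]:
    """Write the script to disk and return the cmdmgr command line.
    On a machine with Command Manager you would pass the returned
    list to subprocess.run(..., check=True)."""
    script = Path("list_caches.scp")
    script.write_text(build_listing_script(project_name))
    return [CMDMGR,
            "-n", project_source,      # project source name (assumed flag)
            "-u", "Administrator",     # user name (assumed)
            "-p", "",                  # password (assumed)
            "-f", str(script),         # input script file
            "-o", "list_caches.log"]   # results output file

cmd = list_caches("MicroStrategy Tutorial", "MyProjectSource")
```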

Managing Result Caches


As a system administrator, your greatest concerns about caching are consistency and availability of the cached data. You have the important responsibility of synchronizing the caches with the data in the data warehouse. Therefore, as data changes in the data warehouse, you must ensure that the outdated cached data is either updated or discarded. You can do this in two main ways: invalidating and scheduling. These methods, along with other maintenance operations that you can use when managing result caches, are discussed below. They include:

• Scheduling Updates of Result Caches, page 1222

• Unloading and Loading Result Caches to Disk, page 1222

• Invalidating Result Caches, page 1223

• Deleting Result Caches, page 1225

• Purging all Result Caches in a Project, page 1226

• Expiring Result Caches, page 1227

Scheduling Updates of Result Caches


You can schedule a report or document to be executed regularly, to ensure
that the result cache is up-to-date. Scheduling is a proactive measure aimed
at making sure result caches are readily available when needed.

Typically, reports and documents that are frequently used best qualify for
scheduling. Reports and documents that are not frequently used do not
necessarily need to be scheduled because the resource cost associated with
creating a cache on a schedule might not be worth it. For more information
on scheduling a result cache update, see Scheduling Reports and
Documents: Subscriptions, page 1333.

Unloading and Loading Result Caches to Disk


You may need to unload caches from memory to disk to create free memory
for other operations on the Intelligence Server machine.


If a report cache is unloaded to disk and a user requests that report, the
report is then loaded back into memory automatically. You can also manually
load a report cache from the disk into memory.

Caches are saved to disk according to the Backup frequency setting (see
Configuring Result Cache Settings, page 1228). Caches are always saved to
disk regardless of whether they are loaded or unloaded; unloading or
loading a cache affects only the cache's status in Intelligence Server
memory.

Invalidating Result Caches


Invalidating a result cache indicates to Intelligence Server that this cache
should not be used. Invalidation is a preventive measure that you can take to
ensure that users do not run reports that are based on outdated cached
data. Examples of when the data may be outdated include:

• When the data warehouse changes, the existing caches are no longer valid because the data may be out of date. In this case, future report/document requests should no longer use the caches.

• When the definition of an application object (such as a report definition, template, filter, and so on) changes, the related result cache is automatically marked as invalid.

• When the cache for any of the datasets for a document becomes invalidated or deleted, the document cache is automatically invalidated.

Caches need to be invalidated when new data is loaded from the data
warehouse so that the outdated cache is not used to fulfill a request. You
can invalidate all caches that rely on a specific table in the data warehouse.
For example, you could invalidate all report/document caches that use the
Sales_Trans table in your data warehouse.

Only Matching and Matching-History caches can be invalidated. Invalidating a cache has the following effects:


• An invalid Matching cache is automatically deleted.

• An invalid Matching-History cache is converted to a History cache. If all History List messages relating to this cache are deleted, the converted History cache is also deleted.

MicroStrategy strongly recommends that you invalidate Matching and Matching-History caches instead of deleting them directly.

Invalid caches are deleted automatically based on the Cache lookup cleanup frequency setting. For more information about this setting, see Configuring Result Cache Settings, page 1228.

You can invalidate caches manually or by scheduling the invalidation process.

Invalidating a Cache with a Scheduled Administration Task

You can schedule a MicroStrategy administration task to invalidate caches on a recurring schedule. In the Project Configuration Editor, in the Caches: Result Caches (Maintenance) category, you can select a schedule to be used to invalidate caches. For more information about scheduling tasks, see Scheduling Administrative Tasks, page 1328.

Invalidating a Cache with a Command Manager Script

You can update the data warehouse load routine to invoke a MicroStrategy
Command Manager script to invalidate the appropriate caches. This script is
at C:\Program Files (x86)\MicroStrategy\Command
Manager\Outlines\Cache_Outlines\Invalidate_Report_Cache_
Outline. For more information about Command Manager, see Chapter 15,
Automating Administrative Tasks with Command Manager.

To invoke Command Manager from the database server, use one of the
following commands:

• SQL Server: exec xp_cmdshell cmdmgr

• Oracle: host cmdmgr

• DB2: ! cmdmgr

• Teradata: os cmdmgr
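As an alternative to calling cmdmgr from the database shell, the warehouse load routine itself can generate and run an invalidation script once a table finishes loading. The Python sketch below is a hypothetical illustration: the INVALIDATE REPORT CACHE statement is modeled on the outline named above, but the cache names, project name, and file locations are placeholders you would adapt to your environment.

```python
from pathlib import Path

def build_invalidation_script(cache_names: list[str], project: str) -> str:
    """One INVALIDATE statement per cache, modeled on
    Invalidate_Report_Cache_Outline (verify the exact syntax in the
    outline shipped with your Command Manager version)."""
    lines = [f'INVALIDATE REPORT CACHE "{name}" IN PROJECT "{project}";'
             for name in cache_names]
    return "\n".join(lines) + "\n"

def after_table_load(loaded_table: str, affected_caches: list[str],
                     project: str) -> Path:
    """Called by the load routine after loaded_table is refreshed;
    writes the script that cmdmgr would then execute."""
    script = Path(f"invalidate_{loaded_table}.scp")
    script.write_text(build_invalidation_script(affected_caches, project))
    return script

# Example: after reloading Sales_Trans, invalidate the caches built on it.
path = after_table_load("Sales_Trans", ["Daily Sales", "Regional Sales"],
                        "MicroStrategy Tutorial")
print(path.read_text())
```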

Invalidating a Cache Manually

From the Cache Monitor, you can manually invalidate one or more caches.

To Manually Invalidate a Cache

1. In Developer, log in to a project source. You must log in as a user with the Monitor Caches privilege.

2. Expand Administration, then expand System Monitors, then expand Caches, and then select Reports or Documents.

3. Select the project for which you want to invalidate a cache and click OK.

4. Right-click the cache to invalidate and select Invalidate Cache.

Deleting Result Caches


Typically, you do not need to manually delete result caches if you are
invalidating caches and managing History List messages. Result caches are
automatically deleted by Intelligence Server if cache invalidation and History
Lists are performed and maintained properly, as follows:

• A Matching cache is deleted automatically when it is invalidated.

• A History cache is deleted automatically when all History List messages that reference it are deleted. MicroStrategy recommends that you actively maintain History List messages, as History caches are deleted automatically.


• A Matching-History cache is handled in the following way:

  • When all the History List messages that reference a Matching-History cache are deleted, the cache is converted to a Matching cache.

  • When a Matching-History cache is invalidated, it is converted to a History cache.

• An XML cache is deleted automatically when its associated Matching or History cache is deleted.

In all cases, cache deletion occurs based on the Cache lookup cleanup
frequency setting. For more information about this setting, see Configuring
Result Cache Settings, page 1228.

You can manually delete caches via the Cache Monitor or Command Manager, or schedule deletions as administrative tasks, in the same way that you manually invalidate caches. For details, see Invalidating Result Caches, page 1223.

Purging all Result Caches in a Project


You can delete all the result caches in a project at once by selecting the
Purge Caches option in the Project Configuration Editor. This forces reports
executed after the purge to retrieve and display the latest data from the data
warehouse.

Purging deletes all result caches in a project, including caches that are still referenced by the History List. Therefore, purge caches only when you are sure that you no longer need to maintain any of the caches in the project; otherwise, delete individual caches instead.

Even after purging caches, reports and documents may continue to display cached data. This can occur because results may be cached at the object and element levels, in addition to the report/document level. To ensure that a re-executed report or document displays the most recent data, purge all three caches. For instructions on purging element and object caches, see Deleting All Element Caches, page 1274 and Deleting Object Caches, page 1283.

To Purge all Result Caches in a Project

1. In Developer, right-click the project and select Project Configuration Editor.

2. Expand Caching, then Result Caches, and then select Maintenance.

3. Click Purge Now.

Expiring Result Caches


Cache expiration is the process of marking a cache out of date. Expiring a
cache has the same result as invalidating a cache, and applies to Matching
caches and Matching-History caches. The only difference between
expiration and invalidation is that expiration happens after a set period of
time. For information on how invalidation works, see Invalidating Result
Caches, page 1223.

MicroStrategy strongly recommends that you invalidate a cache when changes in the data from the data warehouse affect the cache, rather than relying on a time interval to expire caches. To disable cache expiration, in the Caching: Result Caches: Maintenance subcategory of the Project Configuration Editor, select the Never expire caches check box.

Cache expiration occurs automatically according to the Cache duration (Hours) setting in the Caching: Result Caches (Maintenance) subcategory in the Project Configuration Editor.

When a cache is updated, the current cache lifetime is used to determine the cache expiration date based on the last update time of the cache. This means that changing the Cache duration (Hours) setting or the Never Expire Caches setting does not affect the expiration date of existing caches. It affects only the new caches that are being or will be created.
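The expiration rule above amounts to simple date arithmetic: the lifetime in effect when a cache is last updated is stamped onto that cache. The following is a minimal illustration in Python, not MicroStrategy code; the 24-hour default mirrors the Cache Duration (Hours) setting described later in this section.

```python
from datetime import datetime, timedelta

def cache_expiration(last_update: datetime, duration_hours: int = 24,
                     never_expire: bool = False):
    """Compute a cache's expiration time from its last update time.

    Changing duration_hours afterward does not move the expiration of
    a cache that was already stamped; the value in effect at update
    time is what counts, as described above."""
    if never_expire:
        return None
    return last_update + timedelta(hours=duration_hours)

updated = datetime(2024, 9, 1, 8, 0)
print(cache_expiration(updated))  # 2024-09-02 08:00:00
```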


Configuring Result Cache Settings


Result cache settings can be configured at three levels:

• At the server level

• At the project level

• At the individual report/document level

Changes to any of the caching settings are in effect only after Intelligence
Server restarts.

Result Cache Settings at the Server Level


You can configure the following caching settings in the Intelligence Server
Configuration Editor, in the Server Definition (Advanced) category. Each is
described below.

You can also configure these settings using the Command Manager script,
Alter_Server_Config_Outline.otl, located at C:\Program Files
(x86)\MicroStrategy\Command Manager\Outlines\Cache_
Outlines.

Backup Frequency (Minutes)

When a result cache is created, the cache is initially stored in memory on Intelligence Server. Caches are backed up to disk as specified by the backup frequency setting.

You can specify the cache backup frequency in the Backup frequency
(minutes) box under the Server Definition: Advanced subcategory in the
Intelligence Server Configuration Editor.

If you specify a backup frequency of 0 (zero), result caches are saved to disk as soon as they are created. If you specify a backup frequency of 10 (minutes), the result caches are backed up from memory to disk ten minutes after they are created.

In a clustered environment, MicroStrategy recommends that you set the backup frequency to 0 (zero) to ensure that History List messages are synchronized correctly.

Backing up caches from memory to disk more frequently than necessary can
drain resources.

This setting also defines when Intelligent Cubes are saved to secondary
storage, as described in Storing Intelligent Cubes in Secondary Storage,
page 1313.

Cache Lookup Cleanup Frequency (Sec)

The Cache lookup cleanup frequency (sec) setting determines how frequently the CacheLkUp.idx file is cleaned up. This file stores cache matching information and can become significant in size, especially when a large number of caches include a large number of prompts. The cleanup process reduces the amount of memory that the file consumes and the time that it takes to back up the lookup table to disk.

The default value for this setting is 0 (zero), which means that the cleanup takes place only at server shutdown. You can change this value based on your needs, but make sure that it does not negatively affect your system performance. MicroStrategy recommends cleaning the cache lookup at least daily, but not more frequently than every half hour.

Result Cache Settings at the Project Level


You can configure caching settings in the Project Configuration Editor, in the
Result Caches category. Each is described below.

To locate these settings, right-click the project and select Project Configuration. Then, in the Project Configuration Editor, expand Caching, and then select Result Caches.


You can also configure these settings using Command Manager scripts
located at C:\Program Files (x86)\MicroStrategy\Command
Manager\Outlines\Cache_Outlines.

Enable Report Server Caching

Result caches can be created or used for a project only if the Enable report
server caching check box is selected in the Project Configuration Editor in
the Caching: Result Caches: Creation category.

If this option is disabled, all the other options in the Result Caches: Creation
and Result Caches: Maintenance categories are grayed out, except for
Purge Now. By default, report server caching is enabled. For more
information on when report caching is used, see Result Caches, page 1203.

Enable Document Output Caching in Selected Formats

Document caches can be created or used for a project only if the Enable
document output caching in selected formats check box is selected in
the Project Configuration Editor in the Caching: Result Caches: Creation
category. Document caches are created for documents that are executed in
the selected output formats. You can select all or any of the following: PDF,
Excel, HTML, and XML/Flash/HTML5.

Document caches are created or used only when a document is executed from MicroStrategy Web. They are not created or used in Developer.

Enable Prompted Report and Document Caching

Enabled by default, the Enable caching for prompted reports and documents setting controls whether prompted reports and documents are cached. In an environment where the majority of reports are prompted and each prompt is likely to receive a different answer each time it is used, the probability of matching an existing cache is low. In this case, caching these report datasets does not provide significant benefits; therefore, you may want to disable this setting.

To disable this setting, clear its check box in the Project Configuration Editor
under the Caching: Result Caches: Creation category.

Record Prompt Answers for Cache Monitoring

If you Enable caching for prompted reports and documents (see above),
you can also Record prompt answers for cache monitoring. This causes
all prompt answers to be listed in the Cache Monitor when browsing the
result caches. You can then invalidate specific caches based on prompt
answers, either from the Cache Monitor or with a custom Command Manager
script.

This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Enable Non-Prompted Report and Document Caching

If you Enable caching for non-prompted reports and documents, reports and documents without any prompts are cached.

This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Enable XML Caching for Reports

If you Enable XML caching for reports, reports executed from MicroStrategy Web create XML caches in addition to any Matching or History caches they may create. For information about XML caches, see Types of Result Caches, page 1209.


This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Create Caches per User

If the Create caches per user setting is enabled, different users cannot
share the same result cache. Enable this setting only in situations where
security issues (such as database-level Security Views) require users to
have their own cache files. For more information, see Cache Matching
Algorithm, page 1213.

Instead of enabling this setting, it may be more efficient to disable caching and instead use the History List. For information about the History List, see Saving Report Results: History List, page 1240.

This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Create Caches Per Database Login

Select the Create caches per database login option if database authentication is used. This means that users who execute their reports using different database login IDs cannot use the same cache. For more information, see Cache Matching Algorithm, page 1213.

This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Create Caches Per Database Connection

Select the Create caches per database connection option if connection mapping is used. For more information, see Cache Matching Algorithm, page 1213.

This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.

Cache File Directory

The Cache file directory, in the Project Configuration Editor under the
Caching: Result Caches: Storage category, specifies where all the cache-
related files are stored. By default these files are stored in the Intelligence
Server installation directory, in the \Caches\<Server definition
name> subfolder.

In a non-clustered environment, report caches are typically stored on the same machine that is running Intelligence Server.

In a clustered environment, there are two options:

• Local caching: Each node hosts its own cache file directory that needs to be shared as "ClusterCache" so that other nodes can access it. ClusterCache is the share name Intelligence Server looks for on other nodes to retrieve caches.

• Centralized caching: All nodes have the cache file directory set to the same network location, \\<machine name>\<shared directory name>. For example, \\My_File_Server\My_Cache_Directory.

  • For caches located on Windows machines, and on Linux machines using Samba, set the path to \\<machine name>\<shared directory name>. For caches stored on Linux machines, set the path to //<SharedLocation>/<CacheFolder>.

  • On UNIX systems, it is recommended that you mount the shared location as a network drive. You must create a folder in your machine's Volumes directory before mounting the location. For example: mount -t afp afp://my_file_server/my_inbox_directory /Volumes/my_network_mount

Make sure this cache directory is writable from the network account under
which Intelligence Server is running. Each Intelligence Server creates its
own subdirectory.

For more information about which configuration may be best in clustered environments, see Configure Caches in a Cluster, page 1147.

Cache Encryption Level on Disk

The Cache encryption level on disk drop-down list controls the strength of
the encryption on result caches. Encrypting caches increases security, but
may slow down the system.

By default the caches that are saved to disk are not encrypted. You can
change the encryption level in the Project Configuration Editor under the
Caching: Result Caches: Storage category.

Maximum RAM Usage

The Maximum RAM usage settings, in the Project Configuration Editor under the Caching: Result Caches: Storage category, control the amount of memory that result caches consume on Intelligence Server. When this setting is about to be exceeded, the least recently used caches are automatically unloaded to disk.

If the machine experiences problems because of high memory use, you may
want to reduce the Maximum RAM usage for the result caches. You need to
find a good balance between allowing sufficient memory for report caches
and freeing up memory for other uses on the machine. The default value is
250 megabytes for reports and datasets, and 256 megabytes for formatted
documents. The maximum value for each of these is 65536 megabytes, or 64
gigabytes.


MicroStrategy recommends that you initially set this value to 10% of the
system RAM if it is a dedicated Intelligence Server machine, that is, if no
other processes are running on it. This setting depends on the following
factors:

• The size of the largest report cache.

  This setting should be at least as large as the largest report in the project that you want to cache. If the amount of RAM available is not large enough for the largest report cache, that cache will not be used and the report will always execute against the warehouse. For example, if the largest report you want to be cached in memory is 20 MB, the maximum RAM usage needs to be at least 20 MB.

• The average size and number of cache files.

• The amount of memory on the Intelligence Server machine.

• The amount of memory used while the system is at maximum capacity.

You should monitor the system's performance when you change the
Maximum RAM usage setting. In general, it should not be more than 30% of
the machine's total memory.

For more information about when report caches are moved in and out of
memory, see Location of Result Caches, page 1211.
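The sizing guidance above (start at roughly 10% of system RAM on a dedicated machine, never go below the largest cache you want held in memory, and stay under about 30% of total memory) can be restated as simple arithmetic. The following Python sketch is purely illustrative of that guidance, not a MicroStrategy formula:

```python
def suggest_max_ram_mb(system_ram_mb: int, largest_cache_mb: int) -> int:
    """Illustrative starting point for the Maximum RAM usage setting,
    per the guidance above: begin at 10% of system RAM, raise it to at
    least the largest cache, and cap at 30% of total memory."""
    suggestion = max(system_ram_mb // 10, largest_cache_mb)
    ceiling = (system_ram_mb * 30) // 100
    return min(suggestion, ceiling)

# 32 GB dedicated Intelligence Server machine, largest cache 20 MB:
print(suggest_max_ram_mb(32768, 20))  # 3276 (10% of system RAM)
```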

Maximum Number of Caches

The Maximum number of caches settings, in the Project Configuration Editor under the Caching: Result Caches: Storage category, limit the number of result caches, including Matching caches, History caches, Matching-History caches, and XML caches, allowed in the project at one time. The default values are 10,000 datasets and 100,000 formatted documents.

This setting depends on the following factors:


• The number of users and the number of History List messages they keep.

• The number of report caches and their average size.

• The amount of hard disk space available in the cache directory.

RAM Swap Multiplier

If the Intelligence Server memory that has been allocated for caches
becomes full, it must swap caches from memory to disk. The RAM swap
multiplier setting, in the Project Configuration Editor under the Caching:
Result Caches: Storage category, controls how much memory is swapped
to disk, relative to the size of the cache being swapped into memory. For
example, if the RAM swap multiplier setting is 2 and the requested cache is
80 kilobytes, 160 kilobytes are swapped from memory to disk.

If the cache memory is full and several concurrent reports are trying to swap caches from disk, the swap attempts can fail, causing those reports to re-execute. This counteracts any gain in efficiency due to caching. In this case, increasing the RAM swap multiplier setting provides additional free memory into which those caches can be swapped.

The default value for this setting is 2.
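The swap rule above is a straight multiplication, shown here as a one-line illustrative sketch (not MicroStrategy code):

```python
def ram_to_swap_kb(requested_cache_kb: int, swap_multiplier: int = 2) -> int:
    """Memory unloaded to disk to make room for a cache being swapped
    in: the multiplier times the size of the requested cache."""
    return swap_multiplier * requested_cache_kb

print(ram_to_swap_kb(80))  # 160, matching the example above
```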

Maximum RAM for Cache Index %

This setting determines what percentage of the amount of memory specified in the Maximum RAM usage limits (see Maximum RAM Usage, page 1234) can be used for result cache lookup tables. If your reports and documents contain many prompt answers, the cache lookup table may reach this limit. At this point, Intelligence Server no longer creates new caches. To continue creating new caches, you must either remove existing caches to free up memory for the cache lookup table, or increase this limit.

The default value for this parameter is 100%, and the values can range from
10% to 100%.


You can change this setting in the Project Configuration Editor under the
Caching: Result Caches: Storage category.

Load Caches on Startup

If report caching is enabled and the Load caches on startup setting is enabled, when Intelligence Server starts up, it loads report caches from disk into memory until it reaches the Maximum RAM usage limit (see Maximum RAM Usage, page 1234). If the Load caches on startup setting is disabled, it loads report caches only when requested by users.

Load caches on startup is enabled by default. To disable it, in the Project Configuration Editor under the Caching: Result Caches: Storage category, clear the Load caches on startup check box.

For large projects, loading caches on startup can take a long time, so you have the option to load caches on demand only. However, if caches are not loaded in advance, there is a small additional delay in response time when they are hit. Therefore, decide which behavior best fits your user and system requirements.

Never Expire Caches

The Never expire caches setting, in the Project Configuration Editor under
the Caching: Result Caches: Maintenance category, causes caches to
never automatically expire. MicroStrategy recommends selecting this check
box, instead of using time-based result cache expiration. For more
information, see Managing Result Caches, page 1221.

Cache Duration (Hours)

All caches that have existed for longer than the Cache duration (Hours) are
automatically expired. This duration is set to 24 hours by default. You can
change the duration in the Project Configuration Editor under the Caching:
Result Caches: Maintenance category.


As mentioned earlier, MicroStrategy recommends against using time-based result cache expiration. For more information, see Managing Result Caches, page 1221.

Cache Expiration and Dynamic Dates

By default, caches for reports based on filters that use dynamic dates
always expire at midnight of the last day in the dynamic date filter. This
behavior occurs even if the Cache Duration (see above) is set to zero.

For example, a report has a filter based on the dynamic date "Today." If this
report is executed on Monday, the cache for this report expires at midnight
on Monday. This is because a user who executes the report on Tuesday
expects to see data from Tuesday, not the cached data from Monday. For
more information on dynamic date filters, see the Filters section in the
Advanced Reporting Help.

To change this behavior, in the Project Configuration Editor under the Caching: Result Caches: Maintenance category, select the Do Not Apply Automatic Expiration Logic for reports containing dynamic dates check box. When this setting is enabled, report caches with dynamic dates expire in the same way as other report caches do, according to the Cache duration setting.

Cache Usage Defaults for Subscriptions

By default, if a cache is present for a subscribed report or document, the report or document uses the cache instead of re-executing. If no cache is present, one is created when the report or document is executed. For more information about subscriptions, see Scheduling Reports and Documents: Subscriptions, page 1333.

When you create a subscription, you can force the report or document to re-
execute against the warehouse even if a cache is present. You can also
prevent the subscription from creating a new cache.


To change the default behavior for new subscriptions, use the following
check boxes in the Project Configuration Editor, in the Caching: Subscription
Execution category.

l To cause new History List and Mobile subscriptions to execute against the
warehouse by default, select the Re-run History List and Mobile
subscriptions against the warehouse check box.

l To cause new email, file, and print subscriptions to execute against the
warehouse by default, select the Re-run file, email, and print
subscriptions against the warehouse check box.

l To prevent new subscriptions of all types from creating or updating caches by default, select the Do not create or update matching caches check box.

Result Cache Settings at the Report Level


You can enable caching for a specific report, subset report, or document.
Result caching settings made at the report level will apply regardless of the
project level caching settings.

You must have the Use Design Mode privilege to configure report/document-level cache settings.

Result Caching Options


To set the caching options for a report or subset report:

1. In the Report Editor select Data > Report caching options.

2. Select Enabled in the Report Caching Options dialog box.

To set the caching options for a document:

1. In the Document Editor select Format > Document Properties.

2. Under Document Properties > Caching, select Enable document caching.


For a document, you can choose which formats, such as HTML or PDF, are
cached. You can also choose to create a new cache for every page-by,
incremental fetch block, and selector setting.

To disable caching for a report or document even if caching is enabled at the project level, select the Disable Caching option.

To use the project-level setting for caching, select the Use default project-
level behavior option. This indicates that the caching settings configured at
the project level in the Project Configuration Editor apply to this specific
report or document as well.

Saving Report Results: History List


The History List is a folder where Intelligence Server places report and
document results for future reference. Each user has a unique History List.

With the History List, users can:

l Keep shortcuts to previously run reports, like the Favorites list when
browsing the Internet.

l Perform asynchronous report execution. For example, multiple reports can be run at the same time within one browser, or pending reports can remain displayed even after logging out of a project.

l View the results of scheduled reports.

The History List is displayed at the user level, but is maintained at the
project source level. The History List folder contains messages for all the
projects in which the user is working. The number of messages in this folder
is controlled by the setting Maximum number of messages per user. For
example, if you set this number at 40, and you have 10 messages for Project
A and 15 for Project B, you can have no more than 15 for Project C. When
the maximum number is reached, the oldest message in the current project
is purged automatically to leave room for the new one.


If the current project has no messages but the message limit has been
reached in other projects in the project source, the user may be unable to
run any reports in the current project. In this case the user must log in to
one of the other projects and delete messages from the History list in that
project.
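The message-limit behavior described above can be modeled as follows. This is an illustrative sketch only; the real accounting is internal to Intelligence Server and the names are invented:

```python
def add_message(messages, new_message, max_per_user):
    """Model of History List message accounting. `messages` is one user's
    list of (timestamp, project, payload) tuples across all projects in
    the project source, oldest first. `max_per_user` of -1 means no limit."""
    if 0 <= max_per_user <= len(messages):
        project = new_message[1]
        for i, msg in enumerate(messages):  # scan from oldest
            if msg[1] == project:
                del messages[i]  # purge the oldest message in this project
                break
        else:
            # No messages in the current project to purge: the user must
            # delete messages from another project first.
            raise RuntimeError(
                "History List full; no messages in this project to purge")
    messages.append(new_message)
    return messages
```

With a limit of 40 and 10 messages in Project A plus 15 in Project B, the model leaves room for at most 15 in Project C, matching the example above.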

This section provides the following information about History Lists:

l Understanding History Lists, page 1241

l Configuring History List Data Storage, page 1245

l Accessing History Lists, page 1250

l Archiving History List Messages, page 1252

l Managing History Lists, page 1254

Understanding History Lists


A History List is a collection of pre-executed reports and documents that
have been sent to a user's personal History folder. These pre-executed
reports and documents are called History List messages.

The data contained in these History List messages is stored in the History
List repository, which can be located on Intelligence Server, or in the
database. For more information about the differences between these storage
options, see Configuring History List Data Storage, page 1245.

A History List message provides a snapshot of data at the time the message
is created. Using a different report filter on a History List message does not
cause the message to return different data. To view a report in the History
List with a different report filter, you must re-execute the report.

Each report that is sent to the History List creates a single History List
message. Each document creates a History List message for that document,
plus a message for each dataset report in the document.

You can send report results to the History List manually or automatically.


Sending a Message to the History List Manually


Report results can be manually sent to the History List any time you plan to
execute a report, during report execution, or even after a report is executed:

l Before report execution:

l From Developer: Right-click the report or document name and select Send to History from the shortcut menu. The report or document is executed, and a message is generated in the History List.

This option is not available from a shortcut to a report or document.

l From Web: This option is not available.

l In the middle of report execution:

l From Developer: While the report is being executed, select Send to History List from the File menu.

This operation creates two jobs, one for executing the report (against
the data warehouse) and another for sending the report to History List.
If caching is enabled, the second job remains in the waiting list for the
first job to finish; if caching is not enabled, the second job runs against
the data warehouse again. Therefore, to avoid wasting resources,
MicroStrategy recommends that if caching is not enabled, users not
send the report to History List in the middle of a report execution.

l From Web: While the report is being executed, click Add to History
List on the wait page.

This operation creates only one job because the first one is modified for
the Send to History List request.

l After report execution:

l From Developer: After the report is executed, select Send to History from the File menu.


l From Web: After the report is executed, select Add to History List from
the Home menu.

Two jobs are created for Developer, and only one is created for Web.

Sending a Message to the History List Automatically


Report results can be automatically sent to the History List. There are two
different ways to automatically send messages to the History list. You can
either have every report or document that you execute sent to your History
List, or you can subscribe to specific reports or documents:

l To automatically send every report and document that is executed to your History List:

l From MicroStrategy Web: Select History List from the Project Preferences, and then select Automatically for Add reports and documents to my History List.

l From Developer: Select MicroStrategy Developer Preferences from the Tools menu, then select History Options, and then select Automatically send reports to History List during execution.

l To schedule delivery of specific reports or documents:

l From MicroStrategy Web: On the reports page, under the name of the
report that you want to send to History List, select Subscriptions, and
then click Add History List subscription on the My Subscriptions
page. Choose a schedule for the report execution. A History List
message is generated automatically whenever the report is executed
based on the schedule.

l From Developer: Right-click a report or document and select Schedule Delivery to and select History List. Define the subscription details. For specific information about using the Subscription Editor, click Help.


Filtering and Purging Your History List Messages in Developer


The History List Monitor filter can be used either to filter which messages are displayed in the History List or to define the History List messages that you want to purge. The filter allows you to specify various parameters for filtering or purging your History List messages.

To use the History List Monitor Filter to filter your History List messages, right-click the History List folder and select Filter. After you have specified the filter parameters, click OK. The History List Monitor Filter closes, and your History List messages are filtered accordingly.

To use the History List Monitor Filter to purge items from your History List folder, right-click the History List folder and select Purge. After you have specified the filter parameters, click Purge. The History List Monitor Filter closes, and the History List messages that match the criteria defined in the filter are deleted.

For more details about the History List Monitor Filter, click Help.

History Lists and Caching


The History List is closely related to caching functionality. History Lists
consist of messages that point to report results, which are stored as History
caches. Therefore, when a History List message is deleted, the History
cache that the message points to is deleted as well.

Multiple messages can point to the same History cache. In this case, the
History cache is deleted after all messages pointing to it have been deleted.
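This is reference-counted cleanup: a History cache survives as long as at least one message still points to it. A minimal illustrative sketch (the data structures and names are invented, not product internals):

```python
def delete_message(message_id, message_to_cache, cache_store):
    """Remove a History List message; drop the underlying History cache
    only once no remaining message points to it.

    `message_to_cache` maps message IDs to cache IDs; `cache_store` maps
    cache IDs to cached results."""
    cache_id = message_to_cache.pop(message_id)
    if cache_id not in message_to_cache.values():
        # Last reference gone: the History cache is deleted as well.
        cache_store.pop(cache_id, None)
```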

If you are using a database-based History List repository, by default, duplicates of the report caches that are associated with the History List messages are stored in the database, as well as being stored locally. This way, if a user deletes the local report cache, the cache that is stored in the database can still be accessed. This behavior applies to both History caches and History-Matching caches. For more information about types of caches, see Types of Result Caches, page 1209. For more information about storing History List data, see Configuring History List Data Storage, page 1245.

If you are exporting a report or document to a database-based History List, only the most recent export is stored in the History List. For example, if you export a document as an Excel file, and then export it as a PDF, only the PDF is saved in the History List.

You can use the History List messages to retrieve report results, even when
report caching is disabled.

Configuring History List Data Storage


Starting in MicroStrategy ONE (June 2024), file and database-based
options for history list storage are deprecated.

The History List repository is the location where all History List data is
stored.

The History List repository can be configured to store History List data in several ways. The data can be stored in a database, or in a file on the Intelligence Server machine. Alternatively, you can use a hybrid approach that stores the message information in a database, for improved search results and scalability, and the message results in a file, for performance reasons.

MicroStrategy recommends a hybrid configuration since it improves History List performance.

Configuring Intelligence Server to Use a Database-Based or Hybrid History List Repository

The caches associated with History Lists can be stored in a database. Storing the History List messages in a database reduces the load on the machine that hosts Intelligence Server.


If you are using a database-based History List repository, the caches that
are associated with a History List message are also stored in the History List
database.

You can also configure Intelligence Server to use a hybrid History List
repository. In this configuration the History List message information is
stored in a database, and the cached results are stored in a file. This
approach preserves the scalability of the database-based History List, while
maintaining the improved performance of the file-based History List.

l Once Intelligence Server has been configured to store the History List
cached data in the database, this setting will apply to the entire server
definition.

l MicroStrategy does not recommend reverting to a file-based History List repository. If you want to revert to a file-based repository, you must replace the existing server definition with a new one.

The storage location for the History List data (the History List repository) must
have been created in the database. For information about creating the History
List repository in the database, see the Installation and Configuration Help.

If you are using a hybrid History List repository, the storage location for the
History List results must have been created and shared on the Intelligence
Server machine. For information about how to configure this location, see
Configuring Intelligence Server to Use a File-Based History List Repository,
page 1248.

To Configure Intelligence Server to Use a Database-Based or Hybrid History List Repository

1. In Developer, log in to the project source as a user with administrative privileges.

2. Go to Administration > Server > Configure MicroStrategy Intelligence Server.


3. On the left, go to History Settings > General.

4. Select Database based. The following warning message is displayed:

Once Intelligence Server has been configured to store the History List
cached data in the database, this setting will apply to the entire server
definition.

5. Click Yes.

6. By default, History List caches are backed up to the database. To store only History List caches on the server, clear the Backup report history caches to the database checkbox.

7. To use a hybrid History List repository, in the External central storage directory for Database-based History List field, type the location for the file-based History List message storage. For information about how the cached results are stored, see Configuring Intelligence Server to Use a File-Based History List Repository, page 1248.

You can browse to the file location by clicking the . . . (browse) button.

8. Expand Server Definition, and then select General.

9. Under Content Server Location, from the Database Instance menu, select the database instance that points to the History List repository in the database.

10. Click OK.


11. Restart Intelligence Server for the changes to take effect.

To Confirm that the History List Repository has been Configured Correctly

1. Log in to the project source as a user with administrative privileges.

2. Go to Administration > Server > Configure MicroStrategy Intelligence Server.

3. On the left, expand History Settings and select General. If you have configured Intelligence Server properly, a confirmation message is displayed in the Repository Type area of the Intelligence Server Configuration Editor.

Configuring Intelligence Server to Use a File-Based History List Repository

When you initially set up your History List, you can store the History List in a file folder on the machine that hosts Intelligence Server. The default location of this folder is relative to the installation path of Intelligence Server:

.\Inbox\<Server definition name>

For example, C:\Program Files (x86)\MicroStrategy\Intelligence Server\Inbox\MicroStrategy Tutorial Server.

In a non-clustered environment, History List cached data is typically stored on the same machine that is running Intelligence Server.

In a clustered environment, there are two storage options:


l Local caching: Each node hosts its own cache file directory that needs to
be shared as "ClusterCache" so that other nodes can access it.

l Centralized caching: All nodes have the cache file directory set to the
same network location, \\<machine name>\<shared directory
name>. For example, \\My_File_Server\My_Inbox_Directory.

For caches stored on Windows machines, and on Linux machines using Samba, set the path to \\<machine name>\<shared directory name>. For caches stored on Linux machines, set the path to //<SharedLocation>/<CacheFolder>.

On UNIX systems, it is recommended that you mount the shared location as a network drive. You must create a folder in your machine's Volumes directory before mounting the location. For example, mount -t afp afp://my_file_server/my_inbox_directory /Volumes/my_network_mount.

Make sure that the network directory is writable from the network account
under which Intelligence Server is running. Each Intelligence Server
creates its own subdirectory.

For steps to configure Intelligence Server to store cached History List data
in a file-based repository, see the procedure below.

To Configure Intelligence Server to Use a File-Based History List Repository

1. Log in to the project source as a user with administrative privileges.

2. Go to Administration > Server > Configure MicroStrategy Intelligence Server.

3. On the left, expand History Settings and select General.

4. Select File based, and type the file location in the History directory
field.


You can browse to the file location by clicking the …(browse) button.

5. Click OK.

Accessing History Lists


History Lists can be accessed from both MicroStrategy Web and Developer.
You cannot see the History Lists for all users unless you have access to the
History List Messages Monitor. For more information about the History List
Messages Monitor, see Managing History Lists, page 1254.

Accessing the History List Folder in MicroStrategy Web


In MicroStrategy Web, log in to the desired project and click the History List
link in the top navigation bar. This displays all history list messages for the
user that is currently logged in. The following information is available:

l Name: Name (or alias) of the report.

l Status: Status of a report job, for example, executing, processing on another node, ready, and so on.

If you are working in a clustered environment, only Ready and Error statuses are synchronized across nodes. While a job on one node is reported as Executing, it is reported as Processing On Another Node on all the other nodes.

l Message Creation Time: The time the message was created, in the
currently selected time zone.

l Details: More information about the report, including total number of rows,
total number of columns, server name, report path, message ID, report ID,
status, message created, message last updated, start time, finish time,
owner, report description, template, report filter, view filter, template
details, prompt details, and SQL statements.

Each time a user submits a report that contains a prompt, the dialog requires that they answer the prompt. As a result, multiple listings of the same report may occur. The differences among these reports can be found by checking the timestamp and the data contents.

Accessing the History List Folder in Developer


In Developer, History List messages are located in the History folder under
the project name. The number next to the History List folder indicates how
many unread History List messages are contained in the folder. Click the
History folder to view all the messages. Each message is listed with the
following information:

l Name: Name of the report

l Finish Time: The time the report execution is finished

l Folder name: Name of the folder where the original report is saved

l Last update time: The time when the original report was last updated

l Message text: The status message for the History List message

l Start time: The time the report execution was started

l Status: Status of a report job, for example, has been executed successfully and is ready, is not executed successfully, is currently executing, or is waiting to execute

You can see more details of any message by right-clicking it and selecting
Quick View. This opens a new window with the following information:

l Report definition: Expand this category to see information about the report definition, including the description, owner, time and date it was last modified, the project it resides in, the report ID, the path to the report's location, and report details.

l Job execution statistics: Expand this category to see information about the report execution, including the start and end time, the total number of rows and columns in the report, the total number of rows and columns that contain raw data, whether a cache was used, the job ID, and the SQL produced.

l Message status: Expand this category to see information about the message itself, including the language, user creation time, last update time, read status, format, request type, application, message ID, and message text.

Archiving History List Messages


Generally, you archive History List messages if you want to see the report
results as they were when the messages were originally created. This
feature is useful when you need to track changes in the report results for a
scheduled report.

Intelligence Server automatically marks History List messages as archived when, in the Subscription Editor, the The new scheduled report will overwrite older versions of itself check box is cleared. Archived messages can also be created in a MicroStrategy Web subscription if, on the Project Defaults - History List Preferences page, the The new scheduled report will overwrite older versions of itself check box is cleared.

To Archive All History List Messages in a Project in Web


1. In the Preferences Levels category, select Project defaults.

2. In the Preferences category, select History List.

3. Clear the check box for The new scheduled report will overwrite
older versions of itself.

To Archive History List Messages in Developer


1. Go to Administration > Scheduling > Subscription Creation Wizard.

2. Click Next.

3. Specify the following characteristics of the schedule:


l Choose the schedule that you want to use.

l Choose the project that contains the object that you want to archive.

l Choose History List from the Delivery Type drop-down menu.

4. Click Next.

5. Choose the reports or documents that you want to archive:

l Browse to the report or document that you want to archive. You can
select multiple reports or documents by holding the Ctrl key while
clicking them.

l Click the right arrow to add the report or document.

l Click Next when all of the reports or documents that you want to
archive have been added.

6. Select a user group to receive the message for the archived report or
document:

l Browse to the user group that you want to send the archived report to. You can select multiple user groups by holding the Ctrl key while clicking them.

l Click the right arrow to add the group.

l Click Next when all of the user groups that you want to receive the
archived report or document have been added.

All members in the user group receive the History List message.

7. Specify the subscription properties:

l Run the schedule immediately

l Set the expiration date for the subscription

l Send a delivery notification to all users included in the subscription.


8. Clear the The new scheduled report will overwrite older versions
of itself check box, and click Next.

9. Click Finish.

Managing History Lists


Administrators manage History Lists and the History caches at the same
time. For information on the relationship between the History caches and
History Lists, see Types of Result Caches, page 1209.

An administrator can control the size of the History List and thus control
resource usage through the following settings:

l The maximum size of the History List is governed at the project level. Each
user can have a maximum number of History List messages, set by an
administrator. For more details, including instructions, see Controlling the
Maximum Size of the History List, page 1255.

l Message lifetime is the length of time before a History List message is automatically deleted. For more details about message lifetime, see Controlling the Lifetime of History List Messages, page 1256.

l You can also delete History List messages according to a schedule. For
more details, including instructions, see Scheduling History List Message
Deletion, page 1257.

l If you are using a database-based History List, you can reduce the size of
the database by disabling the History List backup caches. For more
details, including instructions, see Backing up History Caches to the
History List Database, page 1259.

If you are using a database-based History List repository and you have the
proper permissions, you have access to the History List Messages Monitor.
This powerful tool allows you to view and manage History List messages for
all users. For more information, see Monitoring History List Messages, page
1260.


History List Backup Frequency


The backup frequency for History List messages is the same as for caching.
History List messages are backed up to disk as frequently as the server
backup frequency setting specifies. For more information, see Configuring
Result Cache Settings, page 1228.

History Lists in a Clustered Environment


In a clustered environment, each server maintains its own History List file.
However, the same messages are retrieved and presented to the user
regardless of the machine from which the History List is accessed. For
complete details on History Lists in a clustered environment, see
Synchronizing Cached Information Across Nodes in a Cluster, page 1137.

Controlling the Maximum Size of the History List


The maximum size of the History List is governed at the project level. The
project administrator can set a maximum number of History List messages
for each user. Additionally, the project administrator can set a maximum size
for messages. For both settings, the default value is -1, which means that
there is no maximum.

The administrator can also specify whether to create separate messages for
each dataset report that is included in a Report Services document or to
create only a message for the document itself, and whether to create
messages for documents that have been exported in other formats, such as
Excel or PDF. Not creating these additional History List messages can
improve History List performance, at the cost of excluding some data from
the History List. By default, all reports and documents create History List
messages.


To Configure the Messages that are Stored in the History List

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand the Project Definition category and select the History list
subcategory.

4. In the Maximum number of messages per user field, type the maximum number of History List messages to store for each user, or type -1 for no limit.

5. To create a History List message for each dataset report included in a Report Services document, select the Save Report Services document dataset messages to History List check box. To create only a message for the document, and not for the dataset reports, clear this check box.

6. To create History List messages for Report Services documents that are exported to other formats, select the Save exported results for interactive executions sent to History List check box. To not create messages for documents when they are exported, clear this check box.

7. In the Maximum Inbox message size (MB) field, type the maximum
message size, in megabytes, for inboxes. Type -1 for no limit.

8. Click OK.

9. Restart Intelligence Server for your changes to take effect.

Controlling the Lifetime of History List Messages


Message lifetime controls how long (in days) messages can exist in a user's History List. This setting allows administrators to ensure that no History List messages reside in the system indefinitely. Messages are tested against this setting at user logout and deleted if found to be older than the established lifetime.

When a message is deleted for this reason, any associated History caches
are also deleted. For more information about History caches, see Types of
Result Caches, page 1209.

The default value is -1, which means that messages can stay in the system
indefinitely until the user manually deletes them.
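The logout-time lifetime check can be modeled as follows. This is an illustrative sketch only; the names are invented, not product internals:

```python
from datetime import datetime, timedelta

def purge_expired_on_logout(messages, now, lifetime_days):
    """At user logout, drop History List messages older than the
    configured lifetime. A lifetime of -1 (the default) keeps messages
    indefinitely. `messages` maps message IDs to creation timestamps."""
    if lifetime_days < 0:
        return messages
    cutoff = now - timedelta(days=lifetime_days)
    return {mid: ts for mid, ts in messages.items() if ts >= cutoff}
```

For example, with a 30-day lifetime, a message created on January 1 is deleted at the first logout after January 31, while a message created within the last 30 days survives.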

To Set Message Lifetime

1. In Developer, log in to a project source.

2. From the Administration menu, point to Server and then select Configure MicroStrategy Intelligence Server.

3. Expand History Settings on the left, then select General.

4. Type a number in the Message lifetime (days) field.

5. Click OK.

Scheduling History List Message Deletion


You can delete History List messages using the Schedule Administration
Tasks feature, which is accessed by selecting Scheduling from the
Administration menu. This allows you to periodically and selectively purge
History List messages of certain users and groups. You can choose to target
only certain messages, including:

l Messages for a certain project or for all projects

l Messages in the History Lists of all users in a certain group

l Messages that are read or unread

l Messages that were created more than x number of days ago


The Delete History List messages feature can also be used for one-time
maintenance by using a non-recurring schedule.

To Schedule History List Message Deletion

1. In Developer, log in to a project source.

2. From the Administration menu, select Scheduling, then select Schedule Administration Tasks.

3. Select a project from the Available Projects list.

4. Select Delete History List messages as the action.

5. Select a schedule from the preconfigured options, for example, at close of business (weekday), first of month, on database load, and so on.

6. Type a number in the Lifetime (days) box.

7. Select an option for the message status:

l Read

l Unread

l All

8. Click … (the browse button) to select a user/group for which the History List messages will be deleted.

9. Click OK.

Cleaning up the History List Database


You can clean up the History List database using the Schedule Administration Tasks feature, which is accessed by selecting Scheduling from the Administration menu. This allows you to periodically remove orphaned entries from the database and to remove History List messages for deleted users.


The Clean History List database feature can also be used for one-time
maintenance by using a non-recurring schedule.

To Schedule History List Database Cleanup

1. In Developer, log in to a project source.

2. From the Administration menu, select Scheduling, then select


Schedule Administration Tasks.

3. Select a project from the Available Projects list.

4. Select Clean History List database as the action.

5. Click OK.

Backing up History Caches to the History List Database


By default, in a database-based History List, the History caches are backed
up to the database. This provides increased scalability in large systems, and
increases availability to the History caches if a node fails. It also allows you
to set longer message lifetimes for History List messages, because older
History caches can be deleted from the Intelligence Server machine's hard
disk and can be served by the database instead.

If you are concerned about the size of the database used for a database-
based History List, you can disable the use of the database as a long-term
backup for History caches.

To Disable the Database Backup for History Caches

1. In Developer, log in to a project source.

2. From the Administration menu, point to Server and then select


Configure MicroStrategy Intelligence Server.

3. Expand the History Settings category, and select General.


4. Clear the Backup report history caches to database check box.

5. Click OK.

Monitoring History List Messages


The History List Messages Monitor allows you to view all History List
messages for all users, view detailed information about each message, and
purge the messages based on certain conditions.

To use the History List Messages Monitor, your History List repository must
be stored in a database. For more information about configuring the History
List repository, see Configuring History List Data Storage, page 1245.

To Monitor the History List Messages

1. In Developer, log in to a project source. You must log in as a user with


the Administer History List Monitor and the Monitor History List
privileges.

2. Expand Administration, then expand System Monitors, and then


select History List Messages. All History List messages are
displayed.

3. To view the details of a History List message, double-click that


message. A Quick View window opens, with detailed information about
the message.


4. To filter the messages displayed based on criteria that you define,


right-click a message and select Filter.

To Purge a History List Message

1. Select the message in the History List Monitor.

2. Right-click the message and select Purge.

Element Caches
When a user runs a prompted report containing an attribute element prompt
or a hierarchy prompt, an element request is created. (Additional ways to
create an element request are listed below.) An element request is actually a
SQL statement that is submitted to the data warehouse. Once the element
request is completed, the prompt can be resolved and sent back to the user.
Element caching, set by default, allows for this element to be stored in
memory so it can be retrieved rapidly for subsequent element requests
without triggering new SQL statements against the data warehouse.

For example, if ten users run a report with a prompt to select a region from a
list, when the first user runs the report, a SQL statement executes and
retrieves the region elements from the data warehouse to store in an
element cache. The next nine users see the list of elements return much
faster than the first user because the results are retrieved from the element
cache in memory. If element caching is not enabled, when the next nine
users run the report, nine additional SQL statements will be submitted to the
data warehouse, which puts unnecessary load on the data warehouse.
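The behavior in this example can be sketched as a simple memoization pattern. The function and variable names below are illustrative only, not MicroStrategy APIs:

```python
# Illustrative sketch only: element caching behaves like memoization of
# the element request, so repeated identical requests skip the warehouse SQL.
warehouse_queries = 0

def run_element_sql(attribute):
    # Stand-in for the SQL round trip to the data warehouse.
    global warehouse_queries
    warehouse_queries += 1
    return [f"{attribute}_{i}" for i in range(3)]

element_cache = {}

def get_elements(attribute):
    if attribute not in element_cache:       # miss: query the warehouse
        element_cache[attribute] = run_element_sql(attribute)
    return element_cache[attribute]          # hit: served from memory

# Ten users request the same region list; only one SQL statement runs.
for _ in range(10):
    get_elements("Region")
print(warehouse_queries)  # 1
```

With caching disabled, each of the ten requests would have issued its own SQL statement against the warehouse.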

Element caches are the most-recently used lookup table elements that are
stored in memory on the Intelligence Server or Developer machines so they
can be retrieved more quickly. They are created when users:

l Browse attribute elements in Developer using the Data Explorer, either in


the Folder List or the Report Editor


l Browse attribute elements in the Filter Editor

l Execute a report containing a prompt exposing an attribute list (which


includes hierarchies and element list types). The element list is displayed
when the report executes and creates an element cache.

This section discusses the following topics concerning element caching:

l Element Caching Terminology, page 1262

l Location of Element Caches, page 1263

l Cache Matching Algorithm, page 1264

l Enabling or Disabling Element Caching, page 1264

l Limiting the Number of Elements Displayed and Cached at a Time, page


1265

l Caching Algorithm, page 1269

l Limiting the Amount of Memory Available for Element Caches, page 1270

l Limiting Which Attribute Elements a User can See, page 1272

l Limiting Element Caches by Database Connection, page 1273

l Limiting Element Caches by Database Login, page 1273

l Deleting All Element Caches, page 1274

l Summary Table of Element Cache Settings, page 1275

Element Caching Terminology


The following terminology is helpful in understanding the concept of element
caching:

l Element Request/Browse Query: A SQL request issued to the data


warehouse to retrieve a list of attribute elements. This request accesses
the attribute's lookup table, which is defined when the attribute is created
in Architect. If the key to the lookup table is the attribute itself, a SELECT


is issued for the element request. If the attribute is stored in a


lower-level lookup table (for example, month in the lookup date table), a
SELECT DISTINCT is used for the element request. Element requests
may also contain a WHERE clause if they result from a search, a filtered
hierarchy prompt, a drill request on a hierarchy prompt, or a security filter.

l Element Cache Pool: The amount of memory Intelligence Server


allocates for element caching. In the interface, this value is called
Maximum RAM usage, set in the Project Configuration Editor in the
Caching: Auxiliary Caches: Elements category. The default value for this
setting is 1 MB. Intelligence Server estimates that each object uses 512
bytes; therefore, by default, Intelligence Server caches about 2,048
element objects. If an element request results in more objects needing to
be cached than what the maximum size of the element cache pool allows,
the request is not cached.

l Element Incremental Fetch Size: The maximum number of elements for


display in the interface per element request. On Developer, the default for
the Element Incremental Fetch setting is 1,000 elements; on Web, the
default is 15 elements.
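The figures in the Element Cache Pool definition can be checked with a little arithmetic. The values below come from the text; the 512-byte figure is Intelligence Server's estimate, not a measurement:

```python
# Default element cache pool capacity, using the estimates quoted above.
pool_bytes = 1 * 1024 * 1024   # default Maximum RAM usage: 1 MB
object_bytes = 512             # estimated memory per cached object
capacity = pool_bytes // object_bytes
print(capacity)  # 2048 element objects, matching the text
```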

Location of Element Caches


Element caches are stored only in memory and are not saved to disk. They
can exist on both Intelligence Server and Developer machines.

When a Developer user triggers an element request, the cache within the
Developer machine's memory is checked first. If it is not there, the
Intelligence Server memory is checked. If it is not there, the results are
retrieved from the data warehouse. Each option is successively slower than
the previous one, for example, the response time could be 1 second for
Developer, 2 seconds for Intelligence Server, and 20 seconds for the data
warehouse.
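This lookup order can be sketched as a tiered cache. The sketch is illustrative only; the tier contents and the fallback function are hypothetical:

```python
# Three-tier lookup: Developer memory, then Intelligence Server memory,
# then the data warehouse. Each miss falls through to a slower tier.
def lookup(key, developer_cache, server_cache, fetch_from_warehouse):
    if key in developer_cache:
        return developer_cache[key], "developer"   # fastest
    if key in server_cache:
        developer_cache[key] = server_cache[key]   # populate the local tier
        return developer_cache[key], "server"
    result = fetch_from_warehouse(key)             # slowest
    server_cache[key] = result
    developer_cache[key] = result
    return result, "warehouse"

developer, server = {}, {"Region": ["Northeast", "South"]}
elements, tier = lookup("Region", developer, server, lambda k: [])
print(tier)  # server -- found in Intelligence Server memory
```

A second lookup of the same key would be served from the Developer tier, the fastest of the three.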


Cache Matching Algorithm


For an element cache to be used, the cache must be valid, and it must match
the job being executed. The following cache keys are used in the matching
process:

l Attribute ID

l Attribute version ID

l Element ID

l Search criteria

l Database connection (if the project is configured to check for the cache
key)

l Database login (if the project is configured to check for the cache key)

l Security filter (if the project and attributes are configured to use the cache
key)
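A sketch of how these keys combine follows. It is illustrative only: the optional components are part of the key only when the corresponding settings are enabled, and None stands for "not part of the key" here:

```python
# Element cache matching: two requests share a cache entry only if every
# component of the key matches.
def element_cache_key(attr_id, attr_version, element_id, search,
                      connection=None, login=None, security_filter=None):
    return (attr_id, attr_version, element_id, search,
            connection, login, security_filter)

# Same attribute, version, and security filter: the cache is shared.
k1 = element_cache_key("A1", "v2", "E1", None, security_filter="Northeast")
k2 = element_cache_key("A1", "v2", "E1", None, security_filter="Northeast")
# A different security filter yields a different key, so no sharing.
k3 = element_cache_key("A1", "v2", "E1", None, security_filter="Southwest")
print(k1 == k2, k1 == k3)  # True False
```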

Enabling or Disabling Element Caching


When the MicroStrategy system is installed for the first time, the element
caching is enabled by default. You can disable it for an entire project, for a
Developer client, or for a specific attribute in the project's schema. The data
source cache setting
DssCacheSettingElementMaxMemoryConsumption controls the total
amount of memory used by the element server cache. Setting this value to
zero completely disables the element cache.

In situations where the data warehouse is loaded more than once a day, it
may be desirable to disable element caching.

To Disable Element Caching for a Project

In the Project Configuration Editor, in the Caching: Auxiliary Caches


(Elements) category, under Server, set the Maximum RAM usage (KBytes)


to 0 (zero).

To Disable Element Caching for Developer

In the Project Source Manager, select the Memory tab, then set the Maximum
RAM usage (KBytes) to 0 (zero).

You might want to perform this operation if you always want to use the
caches on Intelligence Server. This is because when element caches are
purged, only the ones on Intelligence Server are eliminated automatically
while the ones in Developer remain intact. Caches are generally purged
because there are frequent changes in the data warehouse that make the
caches invalid.

To Disable Element Caching for an Attribute

1. In Developer, right-click the attribute and select Edit.

2. On the Display tab, clear the Enable element caching check box.

Limiting the Number of Elements Displayed and Cached at a


Time
Incremental element fetching reduces the amount of memory Intelligence
Server uses to retrieve elements from the data warehouse and improves the
efficiency of Intelligence Server's element caching. You can set the
maximum number of elements to display in the interface per element request
in the Project Configuration Editor, by using the Maximum number of
elements to display setting in the Project definition: Advanced category.
The default value is 1,000 for Developer and 15 for Web.

Attribute element requests can be quite large (sometimes exceeding


100,000 elements). Requests of this size take a large amount of memory and
time to pull into Intelligence Server and typically force many of the smaller
element caches out of the element cache pool. Caching such large element


lists is often unnecessary because users rarely page through extremely


large element lists; they do a search instead.

When the incremental element fetching is used, an additional pass of SQL is


added to each element request. This pass of SQL determines the total
number of elements that exist for a given request. This number helps users
decide how to browse a given attribute's element list. This additional pass of
SQL generates a SELECT COUNT DISTINCT on the lookup table of the
attribute followed by a second SELECT statement (using an ORDER BY) on
the same table. From the result of the first query, Intelligence Server
determines if it should cache all of the elements or only an incremental set.

The incremental retrieval limit is four times the incremental fetch size. For
example, if your MicroStrategy Web product is configured to retrieve 50
elements at a time, 200 elements along with the distinct count value are
placed in the element cache. The user must click the next option four times
to introduce another SELECT pass, which retrieves another 200 records in
this example. Because the SELECT COUNT DISTINCT value was cached,
that query is not issued again when the subsequent SELECT statement is issued.

To optimize the incremental element caching feature (if you have large
element fetch limits or small element cache pool sizes), Intelligence Server
uses only 10 percent of the element cache on any single cache request. For
example, if 200 elements use 20 percent of the cache pool, Intelligence
Server caches only 100 elements, which is 10 percent of the available
memory for element caches.
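The two sizing rules above (the four-times retrieval limit and the 10 percent cap) can be combined in a small sketch. This is illustrative only; capacities are counted in objects, mirroring the examples in the text:

```python
# How many elements a single request places in the cache: four times the
# incremental fetch size, capped at 10 percent of the cache pool.
def elements_to_cache(fetch_size, pool_capacity):
    requested = 4 * fetch_size        # incremental retrieval limit
    cap = pool_capacity // 10         # at most 10% of the pool per request
    return min(requested, cap)

# Web fetch size of 50 with a roomy pool: 200 elements are cached,
# as in the text's first example.
print(elements_to_cache(50, 10_000))  # 200
# The same fetch size against a 1,000-object pool is capped at 100,
# matching the 10 percent example above.
print(elements_to_cache(50, 1_000))   # 100
```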

The number of elements retrieved per element cache can be set for
Developer users at the project level, MicroStrategy Web product users, a
hierarchy, or an attribute. Each is discussed below.


To Limit the Number of Elements Displayed for a Project (Affects Only


Developer Users)

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand Project definition, then select Advanced.

4. Type the limit in the Maximum number of elements to display box.

To Limit the Number of Elements Displayed for MicroStrategy Web


Product Users

1. In MicroStrategy Web, log in to a project as a user with the Web


Administration privilege.

2. Click the MicroStrategy icon, then click Preferences.

3. Select Project Defaults in the Preferences Level category.

4. Select General in the Preferences category.

5. Type the limit for the Maximum number of attribute elements per
block setting in the Incremental Fetch subcategory.

To Limit the Number of Elements Displayed on a Hierarchy

1. Open the Hierarchy editor, right-click the attribute and select Element
Display from the shortcut menu, and then select Limit.

2. Type a number in the Limit box.


To Limit the Number of Elements Displayed for an Attribute

1. Open the Attribute Editor.

2. Select the Display tab.

3. In the Element Display category, select the Limit option and type a
number in the box.

The element display limit set for hierarchies and attributes may further
limit the number of elements set in the project properties or Web
preferences. For example, if you set 1,000 for the project, 500 for the
attribute, and 100 for the hierarchy, Intelligence Server retrieves only
100 elements.
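The precedence described in the note amounts to taking the smallest configured limit, as this sketch shows (levels left unset are simply ignored):

```python
# The effective element display limit is the minimum of the limits
# configured at the project, attribute, and hierarchy levels.
def effective_limit(project, attribute=None, hierarchy=None):
    limits = [v for v in (project, attribute, hierarchy) if v is not None]
    return min(limits)

# Project 1,000 / attribute 500 / hierarchy 100: only 100 are retrieved.
print(effective_limit(1000, attribute=500, hierarchy=100))  # 100
```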

Optimizing Element Requests


You may find the incremental element fetching feature's additional SELECT
COUNT DISTINCT query to be costly on your data warehouse. In some
cases, this additional query adds minutes to the element browse time,
making this performance unacceptable for production environments.

To make this more efficient, you can set a VLDB option to control how the
total rows are calculated. The default is to use the SELECT COUNT
DISTINCT. The other option is to have Intelligence Server loop through the
table after the initial SELECT pass, eventually getting to the end of the table
and determining the total number of records. You must decide whether to
have the database or Intelligence Server determine the number of element
records. MicroStrategy recommends that you use Intelligence Server if your
data warehouse is heavily used, or if the SELECT COUNT DISTINCT query
itself adds minutes to the element browsing time.

Using Intelligence Server to determine the total number of element records


results in more traffic between Intelligence Server and the data warehouse.


Either option uses significantly less memory than what is used without
incremental element fetching enabled. Using the count distinct option,
Intelligence Server retrieves four times the incremental element size. Using
the Intelligence Server option retrieves four times the incremental element
size, plus additional resources needed to loop through the table. Compare
this to returning the complete result table (which may be as large as 100,000
elements) and you will see that the memory use is much less.
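The memory comparison above works out as follows, counting elements only and using the figures quoted in the text:

```python
# Either counting option holds about four times the incremental fetch
# size in memory, versus the complete element list without incremental
# fetching enabled.
fetch_size = 50
incremental = 4 * fetch_size   # ~200 elements per request
full_table = 100_000           # a very large attribute's complete list
print(full_table // incremental)  # 500, i.e. 500x fewer elements in memory
```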

The setting is called Attribute Element Number Count Method.

To Configure Attribute Element Number Count Method

1. In the Database Instance manager, select the database instance.

2. From the Administration menu, select VLDB Properties.

3. Under Query Optimizations, select Attribute Element Number Count


Method and on the right-hand side, select one of the options:

l To have the data warehouse calculate the count, select Use Count
(Attribute@ID) to calculate total element number (will use count
distinct if necessary) - Default.

l To have Intelligence Server calculate the count, select Use ODBC


cursor to calculate total element number.

4. Click Save and Close.

Caching Algorithm
The cache behaves as though it contains a collection of blocks of elements.
Each cached element is counted as one object and each cached block of
elements is also counted as an object. As a result, a block of four elements
is counted as five objects: one object for each element and a fifth object for
the block. However, if the same element occurs in several blocks, it is
counted only once. This is because the element cache shares elements
between blocks.


The cache uses the "least recently used" algorithm on blocks of elements.
That is, when the cache is full, it discards the blocks of elements that have
been in the cache for the longest time without any requests for the blocks.
Individual elements, which are shared between blocks, are discarded when
all the blocks that contain the elements have been discarded. Finding the
blocks to discard is a relatively expensive operation. Hence, the cache
discards one quarter of its contents each time it reaches the maximum
number of allowed objects.
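The object-counting rule above can be checked with a short sketch: a block is one object, each distinct element is one object, and elements shared between blocks are counted once:

```python
# Count cache objects for a set of element blocks.
def count_objects(blocks):
    distinct_elements = set()
    for block in blocks:
        distinct_elements.update(block)
    return len(blocks) + len(distinct_elements)

# One block of four elements: 4 element objects + 1 block object = 5.
print(count_objects([["a", "b", "c", "d"]]))              # 5
# Two blocks sharing element "b": 2 blocks + 5 distinct elements = 7.
print(count_objects([["a", "b", "c"], ["b", "d", "e"]]))  # 7
```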

Limiting the Amount of Memory Available for Element Caches


You can control the amount of memory that element caches use on both
Intelligence Server (set at the project level) and the Developer machines.
This memory is referred to as the cache pool. If Intelligence Server attempts
to cache a new element request, but there is not enough available cache
pool space to store all of the new elements, existing elements must be
removed from memory before the new ones can be cached. When this
happens, the least recently used 25% of element caches are removed from
the cache.

You can configure the memory setting for both the project and the client
machine in the Caching: Auxiliary Caches (Elements) subcategory in the Project Configuration
Editor. You should consider these factors before configuring it:

l The number of attributes that users browse elements on, for example, in
element prompts, hierarchy prompts, and so on

l The number of unique elements

For example, attribute "Year" (10 years = 10 elements), attribute "city"


(500 cities = 500 elements)

l Time and cost associated with running element requests on the data
warehouse

For example, if the element request for cities runs quickly (say in 2
seconds), it may not have to exist in the element cache.


l The amount of RAM on the Intelligence Server machine

To Set the RAM Available for Element Caches for a Project

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand Caching, expand Auxiliary Caches, then click Elements.

4. Specify the amount of RAM (in megabytes) in the Server: Maximum


RAM usage (MBytes) box.

l The default value is 1 megabyte.

l If you set the value to 0, element caching is disabled.

l If you set it to -1, Intelligence Server uses the default value of 1 MB.

5. Specify the amount of RAM (in megabytes) in the Client: Maximum


RAM usage (MBytes) box.

The new settings take effect only after Intelligence Server is restarted.

To Set the RAM Available for Element Caches on Developer

1. In the Project Source Manager, click the Caching tab and within the
Element Cache group of controls, select the Use custom value option.

If you select the Use project default option, the amount of RAM will
be the same as specified in the Client section in the Project
Configuration Editor described above.

2. Specify the RAM (in megabytes) in the Client section in the Maximum
RAM usage (MBytes) field.


Limiting Which Attribute Elements a User can See


You can limit the attribute elements that a user can see to only the elements
allowed by their security filter. For example, if a user's security filter allows
them to see only the Northeast Region and they run a report that prompts for
cities, only those cities in the Northeast are displayed.

This functionality can be enabled for a project and limits the element cache
sharing to only those users with the same security filter. This can also be set
for attributes. That is, if you do not limit attribute elements with security
filters for a project, you can enable it for certain attributes. For example, if
you have Item information in the data warehouse available to external
suppliers, you could limit the attributes in the Product hierarchy with a
security filter. This is done by editing each attribute. This way, suppliers can
see their products, but not other suppliers' products. Element caches not
related to the Product hierarchy, such as Time and Geography, are still
shared among users.

For more information on security filters, see Restricting Access to Data:


Security Filters, page 121.

To Limit Which Attribute Elements Users can See Per Project

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand Project definition, then select Advanced.

4. Select the Apply security filters to element browsing check box.

To Limit Which Attribute Elements Users can See Per Attribute

1. Edit the attribute, and click the Display tab.

2. Select the Apply security filters to element browsing check box.


You must update the schema before changes to this setting take effect
(from the Schema menu, select Update Schema).

Limiting Element Caches by Database Connection


In most cases, users connect to the data warehouse based on their
connection maps. By default, all users have the same connection map,
unless you map them to different ones with the Connection Mapping editor.
When using connection mapping, you can also ensure that users with
different database connections cannot share element caches. This causes
the element cache matching key to contain the user's database connection.

To Limit Element Caches by Database Connection

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand Caching, expand Auxiliary Caches, then click Elements.

4. Select the Create element caches per connection map check box.

The new setting takes effect only after the project is reloaded or after
Intelligence Server is restarted.

For more information about connection mapping, see Controlling Access to


the Database: Connection Mappings, page 113.

Users may connect to the data warehouse using their linked warehouse
logins, as described below.

Limiting Element Caches by Database Login


This setting allows you to ensure that users with different data warehouse
logins cannot share element caches. When this feature is used, the element
cache matching key contains the user's database login. Only users with the


same database login are able to share the element caches. Before you
enable this feature, you must configure two items.

1. The user must have a Warehouse Login and Password specified


(selecting the Authentication tab in the User Editor).

2. The project must be configured to Use linked warehouse login for


execution (in the Project Configuration Editor, select the Project
definition: Advanced category).

If these properties are not both set, users connect to the database
using their connection maps instead.

To Limit Element Caches by Database Login

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, point to Projects, and then select


Project Configuration.

3. Expand Caching, expand Auxiliary Caches, then select Elements.

4. Select the Create element caches per passthrough login check box.

The new setting takes effect only after the project is reloaded or after
Intelligence Server is restarted.

Deleting All Element Caches


You can purge (delete) all of the element caches for a project on both the
Developer and Intelligence Server machines. This does not delete element
caches on other Developer machines. You cannot delete only certain
caches; they are all deleted at the same time.

If you are using a clustered Intelligence Server setup, to purge the element
cache for a project, you must purge the cache from each node of the cluster
individually.


Even after purging element caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and object levels in addition to at the element level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and object caches,
see Managing Result Caches, page 1221 and Deleting Object Caches, page
1283.

To Delete All Element Caches for a Project

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, go to Projects > Project


Configuration > Caching > Auxiliary Caches > Elements.

3. Click Purge Now.

All element caches are automatically purged whenever schema is updated.

Summary Table of Element Cache Settings


Many of the settings that help make element caching an efficient use of
system resources are explained in the sections above.

The following table lists all of MicroStrategy's element caching settings.

Setting                                    For information...

Maximum number of elements to display      see Limiting the Number of Elements
                                           Displayed and Cached at a Time, page 1265

Attribute element number count method      see Limiting the Number of Elements
                                           Displayed and Cached at a Time, page 1265

Element cache - Max RAM usage              see Limiting the Amount of Memory
(MBytes) Project                           Available for Element Caches, page 1270

Element cache - Max RAM usage              see Limiting the Amount of Memory
(MBytes) Developer                         Available for Element Caches, page 1270

Apply security filter to element           see Limiting Which Attribute Elements a
browsing                                   User can See, page 1272

Create caches per connection map           see Limiting Element Caches by Database
                                           Connection, page 1273

Create caches per passthrough login        see Limiting Element Caches by Database
                                           Login, page 1273

Purge element caches                       see Deleting All Element Caches, page 1274

Object Caches
When you or any users browse an object definition (attribute, metric, and so
on), you create what is called an object cache. An object cache is a recently
used object definition stored in memory on Developer and Intelligence
Server. You browse an object definition when you open the editor for that
object. Object caches are created for application objects.

For example, when a user opens the Report Editor for a report, the collection
of attributes, metrics, and other user objects displayed in the Report Editor
compose the report's definition. If no object cache for the report exists in
memory on Developer or Intelligence Server, the object request is sent to
the metadata for processing.

The report object definition retrieved from the metadata and displayed to the
user in the Report Editor is deposited into an object cache in memory on
Intelligence Server and also on the Developer of the user who submitted the
request. As with element caching, any time the object definition can be
returned from memory in either the Developer or Intelligence Server
machine, it is faster than retrieving it from the metadata database.


So when a Developer user triggers an object request, the cache within the
Developer machine's memory is checked first. If it is not there, the
Intelligence Server memory is checked. If the cache is not there either, the
results are retrieved from the metadata database. Each option is
successively slower than the previous one. If a MicroStrategy Web product user
triggers an object request, only the Intelligence Server cache is checked
before getting the results from the metadata database.

Cache Matching Algorithm


For an object cache to be used, the cache must be valid, and it must match
the job being executed. The following cache keys are used in the matching
process:

l Object ID

l Object version ID

l Project ID
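A sketch of the object cache key follows (illustrative; compare it with the longer element cache key described earlier in this chapter):

```python
# Object cache matching uses only three components.
def object_cache_key(object_id, version_id, project_id):
    return (object_id, version_id, project_id)

# Saving a change to an object produces a new version ID, so the stale
# cache entry no longer matches and the definition is re-read from metadata.
before = object_cache_key("R1", "v1", "P1")
after = object_cache_key("R1", "v2", "P1")
print(before == after)  # False
```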

Enabling or Disabling Object Caching


Object caching is enabled by default when the MicroStrategy system is first
installed. Object caching cannot be disabled. Intelligence Server must
maintain a minimum amount of memory (1 MB) available for the object
caches to operate efficiently.

Limiting the Amount of Memory Available for Object Caches


You can control the amount of memory that object caches can use on both
Intelligence Server (set at the project level) and the Developer machines.
This memory is referred to as the cache pool. If a new object request size is
small enough to fit into the object cache pool, but there is not enough
available cache pool space to store all of the new objects, existing objects
must be removed from memory before the new ones can be cached. When
this happens, the least recently used 25% of object caches are removed
from the cache.

This setting depends on the following factors:


l Size of the project in terms of application objects

l The amount of RAM on the Intelligence Server machine

In MicroStrategy, object cache can exist at either the:

l Environment-level

The Environment-level object cache is used for environment-level objects


like users, user groups, database instances, security roles, etc.

For the Environment-level setting, the value for Server:Maximum RAM


Usage should vary between Minimum (50 MB) and Maximum (64 GB), and
the default value is 512 MB. To determine the appropriate cache size
value, you can consider the commonly used configuration object count,
such as users and groups, security roles, database instances, etc. Each
object consumes approximately 5-20 KB. You should adjust the Server:
Maximum RAM Usage setting accordingly based on the available server
memory. This helps ensure optimal performance and efficient memory
utilization. This setting is only available in Workstation, it is not exposed in
Developer.

l Project-level

The Project-level object cache is used for project-level objects like


schema objects, reports, cubes, dashboards, etc.

For the Project-level setting, the value for Server:Maximum RAM Usage
should vary between Minimum (100 MB) and Maximum (64 GB), and the
default value is 1024 MB.

For a project that has a large schema object, the project loading speed
suffers if the maximum memory for object cache setting is not large enough.
This issue is recorded in the DSSErrors.log file. See KB13390 for more
information.
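As a rough aid for the sizing guidance above, you can multiply a configuration object count by the 5-20 KB per-object estimate. The object counts below are hypothetical, not defaults:

```python
# Rough environment-level object cache sizing, using the 5-20 KB
# per-object estimate from this guide. All object counts are hypothetical.
object_counts = {
    "users": 5000,
    "user_groups": 200,
    "security_roles": 50,
    "database_instances": 20,
}

total_objects = sum(object_counts.values())       # 5270 objects
low_mb = total_objects * 5 / 1024                 # footprint at 5 KB each
high_mb = total_objects * 20 / 1024               # footprint at 20 KB each

print(f"Estimated cache footprint: {low_mb:.0f}-{high_mb:.0f} MB")
```

For this hypothetical environment, the estimate lands between roughly 26 MB and 103 MB, so the 512 MB default would be more than sufficient; adjust the Server: Maximum RAM Usage setting only if your counts are much larger.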


Set the Server Environment-Level Maximum RAM Usage for the Object Cache in Workstation

1. In Workstation, connect to an environment.

2. Once the connection is established, right-click the environment and choose Properties.

3. In the left pane, click All Settings and search for cache.


4. Update the Maximum memory consumption for object cache (MB).

To Set the Server Project-Level Maximum RAM Usage for the Object Cache in Workstation

1. In Workstation, connect to an environment.

2. In the left navigation pane, click Projects.

3. Right-click the project you want to apply the settings to and choose
Properties.


4. In the left pane, click All Settings and search for cache.

5. Update the Maximum memory usage for Server Object Cache (MB).


Set the Server Project-Level Maximum RAM Available for a Project's Object Caches

1. In Developer, log into a project. You must log in with a user account that has administrative privileges.

2. From the Administration menu, point to Projects, and then select Project Configuration.

3. Expand Caching, expand Auxiliary Caches, and select Objects.

4. Specify the RAM (in megabytes) in the Server section in the Maximum
RAM usage (MBytes) box.

5. Specify the RAM (in megabytes) in the Client section in the Maximum
RAM usage (MBytes) box.

The new settings take effect only after Intelligence Server is restarted.

On the Developer client machine, you maintain object caching by using the
Client: Maximum RAM usage (MBytes) setting in the Caching: Auxiliary
Caches (Objects) subcategory in the Project Configuration Editor. This is a
Developer client-specific setting.

To Set the Client Maximum RAM Available for Object Caches for a
Developer Machine

1. In the Project Source Manager, click the Caching tab and in the Object
Cache group of controls, select the Use custom value option.

If you select the Use project default option, the amount of RAM is the
same as specified in the Client section in the Project Configuration
Editor described above.

2. Specify the RAM (in megabytes) in the Maximum RAM usage (MBytes) box.


Deleting Object Caches


You can purge (delete) all of the object caches for a project on both the
Developer and Intelligence Server machines. However, this does not delete
object caches on other Developer machines. You cannot select to delete
only certain object caches; they are all deleted at the same time.

Even after purging object caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and element levels, in addition to at the object level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and element
caches, see Managing Result Caches, page 1221 and Deleting All Element
Caches, page 1274.

To Delete All Object Caches for a Project

1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.

2. From the Administration menu, go to Projects > Project Configuration > Caching > Auxiliary Caches > Objects.

3. Click Purge Now.

Object caches are automatically purged whenever your schema is updated.

Configuration objects are cached at the server level. You can choose to
delete these object caches as well.

To Delete All Configuration Object Caches for a Server

1. Log in to the project source.

2. From the Administration menu in Developer, go to Server > Purge Server Object Caches.


You cannot automatically schedule the purging of server object caches from
within Developer. However, you can compose a Command Manager script to
purge server object caches and schedule that script to execute at certain
times. For a description of this process, see MicroStrategy Tech Note
TN12270. For more information about Command Manager, see Chapter 15,
Automating Administrative Tasks with Command Manager.

Summary Table of Object Caching Settings


Many of the settings that help make object caching an efficient use of
system resources are explained in the sections above. The table below lists
all MicroStrategy object caching settings.

Setting: Object cache - Max RAM usage (MBytes), project level
For information: See Limiting the Amount of Memory Available for Object Caches, page 1277

Setting: Object cache - Max RAM usage (MBytes), Developer
For information: See Limiting the Amount of Memory Available for Developer Object Caches, page 1277

Setting: Purge object caches
For information: See Deleting Object Caches, page 1283

Viewing Document Cache Hits


You can view document cache hit data using the Diagnostics and
Performance Logging Tool. There are four performance counters available
for retrieving this information:

l Document Cache Utilization Ratio: The number of document cache files that have been hit, divided by the total number of document cache files.

l Document Cache Hit Ratio: The number of times a document cache is found successfully, divided by the total number of document cache lookups.


l Total size (in MB) of high priority document caches loaded in memory: The total high-priority document cache size in Intelligence Server memory.

l Total size (in MB) of low priority document caches loaded in memory: The total low-priority document cache size in Intelligence Server memory.
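The two ratio counters are straightforward to reproduce from raw counts; the sample values below are invented purely to illustrate the arithmetic:

```python
# Illustrative arithmetic for the two ratio counters described above.
# All counts are hypothetical sample values.
cache_files_total = 200        # document cache files on the server
cache_files_hit = 150          # files that have been hit at least once
lookups_total = 1000           # attempts to find a document cache
lookups_successful = 800       # attempts that found a cache

utilization_ratio = cache_files_hit / cache_files_total   # 0.75
hit_ratio = lookups_successful / lookups_total            # 0.8

print(f"Document Cache Utilization Ratio: {utilization_ratio:.2f}")
print(f"Document Cache Hit Ratio: {hit_ratio:.2f}")
```

A low utilization ratio suggests many cache files are never reused, while a low hit ratio suggests requests are frequently missing the cache entirely.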

To enable these performance counters:

1. Launch the Diagnostics and Performance Logging Tool.

2. Select the Performance Configuration tab.

3. Scroll to the MicroStrategy Server Jobs category.

4. Select the File Log check box for the counters.

The log counters will be saved in the performance monitor log file:
DSSPerformanceMonitor<IServer Process ID>.csv


MANAGING INTELLIGENT CUBES


You can return data from your data warehouse and save it to Intelligence
Server memory, rather than directly displaying the results in a report. This
data can then be shared as a single in-memory copy, among many different
reports created by multiple users. The reports created from the shared sets
of data are executed against the in-memory copy, also known as an
Intelligent Cube, rather than having to be executed against a data
warehouse.

Intelligent Cubes are part of the OLAP Services feature in Intelligence Server. For detailed information about Intelligent Cubes, see the In-memory Analytics Help.

Managing Intelligent Cubes: Intelligent Cube Monitor


You must create Intelligent Cubes before they can be published. For
information on creating Intelligent Cubes, see the In-memory Analytics Help.

Once an Intelligent Cube has been published, you can manage it from the
Intelligent Cube Monitor. You can view details about your Intelligent Cubes
such as last update time, hit count, memory size, and so on.

To View the Available Intelligent Cubes

1. In Developer, log in to a project source. You must log in as a user with the Monitor Cubes privilege.

2. Expand Administration, then expand System Monitors, Caches, and then select Intelligent Cubes. Information about the existing Intelligent Cubes displays on the right-hand side.

3. To view the details of an Intelligent Cube, double-click that Intelligent Cube. A Quick View window opens, with detailed information about the Intelligent Cube.

4. To change the status of an Intelligent Cube, right-click that Intelligent Cube and select the desired action from the list. For a description of all Intelligent Cube statuses, see Monitoring and Modifying Intelligent Cube Status, page 1290.

Viewing Intelligent Cube Information and Usage Statistics


The Intelligent Cube Monitor provides information about published
Intelligent Cubes, as well as Intelligent Cube usage statistics. The Intelligent
Cube Monitor is shown in the image below.

You can view the following information in the Intelligent Cube Monitor:

l Cube Report Name: The name of the Intelligent Cube.

l Project Name: The project the Intelligent Cube belongs to.

l Status: The current status of the Intelligent Cube. For information on reviewing and modifying the status of an Intelligent Cube, see Monitoring and Modifying Intelligent Cube Status, page 1290.

l Last Update Time: The time when the Intelligent Cube was last updated
against the data warehouse.

l Last Update Job: The job number that most recently updated the
Intelligent Cube against the data warehouse. You can use the Job Monitor
to view information on a given job.

l Creation Time: The time when the Intelligent Cube was first published to
Intelligence Server.

l Size (KB): The size of the Intelligent Cube, in kilobytes.


l Hit Count: The number of times the Intelligent Cube has been used by
reports since it was last loaded into Intelligence Server's memory. You can
reset the Hit Count to zero by unloading the Intelligent Cube from
Intelligence Server's memory.

l Historic Hit Count: The total number of times the Intelligent Cube has
been used by reports. You can reset the Historic Hit Count to zero by
deleting the Intelligent Cube's cache, and then republishing the Intelligent
Cube.

l Open View Count: The number of reports currently accessing the Intelligent Cube.

l Owner: The user who published the Intelligent Cube.

l Database connection: The database connection account used for the Intelligent Cube to run against the data warehouse.

l File Name: The file location where the Intelligent Cube is saved to the
machine's secondary storage.

l Cube Instance ID: The ID for the current published version of the
Intelligent Cube.

l Cube Definition ID: The ID for the Intelligent Cube object.

l Data Language: The language used for the Intelligent Cube. This is
helpful if the Intelligent Cube is used in an internationalized environment
that supports multiple languages.

l Total number of rows: The number of rows of data that the Intelligent
Cube contains. To view this field, the Intelligent Cube must be published
at least once.

l Total number of columns: The number of columns of data that the Intelligent Cube contains. To view this field, the Intelligent Cube must be published at least once.

You can also view Intelligent Cube information for a specific Intelligent Cube by double-clicking that Intelligent Cube in the Intelligent Cube Monitor. This opens a Quick View of the Intelligent Cube information and usage statistics.

Monitoring and Modifying Intelligent Cube Status


The status of an Intelligent Cube tells you how the Intelligent Cube is
currently being used and whether reports can access the Intelligent Cube.
To modify the status of an Intelligent Cube, right-click the Intelligent Cube in
the Intelligent Cube Monitor, and select one of the actions listed below:

l Activate (required status: Filed, but not Active): Loads a previously deactivated Intelligent Cube as an accessible set of data for multiple reports.

l Deactivate (required status: Active): Removes an Intelligent Cube instance from Intelligence Server memory, but saves it to secondary storage, such as a hard disk.

l Update (required status: Active): Re-executes and publishes an Intelligent Cube. When the data for an Intelligent Cube is modified and saved, the Update action updates the Intelligent Cube with the latest data.

l Save to disk (required status: Loaded): Saves an Intelligent Cube to secondary storage, and keeps the Intelligent Cube in Intelligence Server memory. If you have defined the backup frequency as zero minutes, Intelligent Cubes are automatically saved to secondary storage, as described in Storing Intelligent Cubes in Secondary Storage, page 1313.

l Delete (required status: Always available): Removes a published Intelligent Cube as an accessible set of data for multiple reports. This action does not delete the Intelligent Cube object saved in a MicroStrategy project; to delete an Intelligent Cube object, you must log into the project containing the Intelligent Cube and delete it there. For information on whether you should deactivate or unpublish an Intelligent Cube, see Deactivating or Unpublishing an Intelligent Cube, page 1292.

l Load in memory (required status: Active, but not Loaded): Moves an Intelligent Cube from your machine's secondary storage to Intelligence Server memory. If the memory limit is reached, this action unloads a previously loaded Intelligent Cube from Intelligence Server memory. For information on when to load and unload Intelligent Cubes, see Loading and Unloading Intelligent Cubes, page 1293.

l Unload from memory (required status: Loaded): Moves an Intelligent Cube from Intelligence Server memory to your machine's secondary storage, such as a hard disk. For information on when to load and unload Intelligent Cubes, see Loading and Unloading Intelligent Cubes, page 1293.

Additional statuses such as Processing and Load Pending are also used by
the Intelligent Cube Monitor. These statuses denote that certain tasks are
currently being completed.

Additionally, if you have defined the backup frequency as greater than zero
minutes (as described in Storing Intelligent Cubes in Secondary Storage,
page 1313), the following additional statuses can be encountered:

l Dirty: This status occurs if the copy of an Intelligent Cube's data in secondary storage is not up to date with the data in Intelligence Server memory. This can occur if an Intelligent Cube is updated in Intelligence Server memory but the new data is not saved to secondary storage.


l Monitoring information dirty: This status occurs if Intelligent Cube monitoring information changes, and this information is not updated in secondary storage. Monitoring information includes details such as the number of reports that have accessed the Intelligent Cube.

In both scenarios listed above, the data and monitoring information saved in
secondary storage for an Intelligent Cube is updated based on the backup
frequency. You can also manually save an Intelligent Cube to secondary
storage using the Save to disk action listed in the table above, or by using
the steps described in Storing Intelligent Cubes in Secondary Storage, page
1313.

Deactivating or Unpublishing an Intelligent Cube


Both deactivating and unpublishing an Intelligent Cube prevent reports that access the Intelligent Cube from being able to load it into Intelligence Server memory. This gives you more administrative control over when to make an Intelligent Cube available to reports.

However, each of these actions provides this administrative control in slightly different ways that fit different scenarios.

Deactivating an Intelligent Cube saves the Intelligent Cube to secondary storage, such as a hard disk. When you reactivate the Intelligent Cube, the copy in secondary storage is loaded back into Intelligence Server memory. This option is ideal when an Intelligent Cube should not be used for some length of time, but after that should be available again in its current form.

Unpublishing an Intelligent Cube deletes the copy of data from Intelligence Server memory without making a copy of the data. To make the Intelligent Cube accessible to reports, the Intelligent Cube must be re-executed against the data warehouse and published to the Intelligent Cube Monitor. This option is ideal if the current Intelligent Cube should not be reported on until it is re-executed against the data warehouse.


Loading and Unloading Intelligent Cubes


When an Intelligent Cube is published, by default, it is automatically loaded into Intelligence Server memory.

Intelligent Cubes must be loaded in Intelligence Server memory to allow reports to access the data in the Intelligent Cube. If an Intelligent Cube is constantly in use, it should be loaded in Intelligence Server memory.

Using the Intelligent Cube Monitor you can load an Intelligent Cube into
Intelligence Server memory, or unload it to secondary storage, such as a
disk drive.

By default, Intelligent Cubes are loaded when Intelligent Cubes are published and when Intelligence Server starts. To change these behaviors, see:

l Publishing Intelligent Cubes Without Loading Them into Intelligence Server Memory, page 1294

l Loading Intelligent Cubes when Intelligence Server Starts, page 1311

l If loading an Intelligent Cube into Intelligence Server memory causes the memory limit to be exceeded, a different Intelligent Cube is unloaded from Intelligence Server memory.

l The act of loading an Intelligent Cube can require memory resources up to twice the size of the Intelligent Cube. This can affect performance of your Intelligence Server as well as the ability to load the Intelligent Cube. For information on how to plan for these memory requirements, see Governing Intelligent Cube Memory Usage, page 1297.

One way to free memory on Intelligence Server, which can improve Intelligence Server performance, is to temporarily unload an Intelligent Cube from memory. This can be a good option for Intelligent Cubes that are not constantly in use, because when a report accessing an active but unloaded Intelligent Cube is executed, that Intelligent Cube is automatically loaded into Intelligence Server memory. Be aware that if the Intelligent Cube is very large, there may be some delay in displaying report results while the Intelligent Cube is being loaded into memory. For more suggestions on how to manage Intelligence Server's memory usage, see Chapter 8, Tune Your System for the Best Performance.

Publishing Intelligent Cubes Without Loading Them into Intelligence Server Memory

By default, Intelligent Cubes are automatically loaded into Intelligence Server memory so that reports can access and analyze their data.

To conserve Intelligence Server memory, you can define Intelligent Cubes to only be stored in secondary storage when the Intelligent Cube is published. The Intelligent Cube can then be loaded into Intelligence Server memory manually, using a schedule, or whenever a report attempts to access the Intelligent Cube.

The steps below show you how to define whether publishing Intelligent
Cubes loads them into Intelligence Server memory. You can enable this
setting at the project level, or for individual Intelligent Cubes.

To Define Whether Publishing Intelligent Cubes Loads Them into Intelligence Server Memory, at the Project Level

1. In Developer, log in to a project using an account with administrative privileges.

2. Right-click the project and select Project Configuration.

3. Expand Intelligent Cubes, then select General.

4. You can select or clear the Load Intelligent Cubes into Intelligence
Server memory upon publication check box:

l Select this check box to load Intelligent Cubes into Intelligence Server memory when the Intelligent Cube is published. Intelligent Cubes must be loaded into Intelligence Server memory to allow reports to access and analyze their data.

l To conserve Intelligence Server memory, you can clear this check box to define Intelligent Cubes to only be stored in secondary storage upon being published. The Intelligent Cube can then be loaded into Intelligence Server memory manually, using schedules, or whenever a report attempts to access the Intelligent Cube.

If you are using multiple Intelligence Servers in a clustered environment, this setting applies to all nodes.

5. Click OK.

6. For any changes to take effect, you must restart Intelligence Server.
For clustered environments, separate the restart times for each
Intelligence Server by a few minutes.

To Define Whether Publishing Intelligent Cubes Loads Them into Intelligence Server Memory, at the Intelligent Cube Level

1. In Developer, log in to a project using an account with administrative privileges.

2. In the Folder List, browse to the folder that contains the Intelligent
Cube you want to configure.

3. Right-click the Intelligent Cube, and choose Edit.

4. From the Data menu, select Configure Intelligent Cube.

5. Under the Options category, select General.

6. Clear Use default project-level settings.

7. Select or clear the Load Intelligent Cubes into Intelligence Server memory upon publication check box:


l Select this check box to load the Intelligent Cube into Intelligence
Server memory when the Intelligent Cube is published. Intelligent
Cubes must be loaded into Intelligence Server memory to allow
reports to access and analyze their data.

l To conserve Intelligence Server memory, clear this check box to define Intelligent Cubes to only be stored in secondary storage upon being published. The Intelligent Cube can then be loaded into Intelligence Server memory manually, using schedules, or whenever a report attempts to access the Intelligent Cube.

If you are using multiple Intelligence Servers in a clustered environment, this setting applies to all nodes.

8. Click OK.

9. In the Intelligent Cube Editor, click Save and Close.

10. Restart Intelligence Server. For clustered environments, separate the restart times for each Intelligence Server by a few minutes.

Governing Intelligent Cube Memory Usage, Loading, and Storage

Intelligent Cubes must be stored in Intelligence Server memory for reports to access their data. While this can improve the performance of these reports, loading too much data into Intelligence Server memory can have a negative impact on Intelligence Server's ability to process jobs. For this reason, it is important to govern how much Intelligent Cube data can be stored on Intelligence Server.

Intelligent Cube data can also be stored in secondary storage, such as a hard disk, on the machine hosting Intelligence Server. These Intelligent Cubes can be loaded into memory when they are needed. For more information, see Monitoring and Modifying Intelligent Cube Status, page 1290.


l Governing Intelligent Cube Memory Usage, page 1297

l Loading Intelligent Cubes when Intelligence Server Starts, page 1311

l Storing Intelligent Cubes in Secondary Storage, page 1313

Governing Intelligent Cube Memory Usage


Starting in MicroStrategy ONE (June 2024), you can use the amount of data
required for Intelligent Cubes to limit the amount of Intelligent Cube data
stored in Intelligence Server memory at one time for all projects when
MicroStrategy is deployed in container environments.

Intelligent Cubes must be stored in Intelligence Server memory for reports to access the data. While this can improve the performance of these reports, loading too much data into Intelligence Server memory can negatively affect Intelligence Server's ability to process jobs. For this reason, it is important to limit how much Intelligent Cube data can be stored on Intelligence Server.

Determining Memory Limits for Intelligent Cubes

Storing Intelligent Cubes can greatly improve performance by allowing reports to execute against an in-memory copy of data. However, storing too much Intelligent Cube data in memory can cause other Intelligence Server processes to compete for system resources and may degrade performance. This makes defining a memory limit for Intelligent Cubes an important step in maintaining Intelligence Server response time.

An Intelligent Cube memory limit defines the maximum amount of RAM on the Intelligence Server machine that can be used to store loaded Intelligent Cubes. This memory is allocated separately from the memory used for other Intelligence Server processes.

For example, you define a memory limit for Intelligent Cubes of 512 MB. You have 300 MB of Intelligent Cube data loaded into Intelligence Server memory, and normal processing of other Intelligence Server tasks uses 100 MB of memory. In this scenario, Intelligence Server uses 400 MB of the RAM available on the Intelligence Server machine. This scenario demonstrates that to determine a memory limit for Intelligent Cubes, you must consider the following factors:

l The amount of RAM available on the Intelligence Server machine, and what percentage of that RAM can be used by Intelligence Server without negatively affecting the performance and successful operation of the host machine.

l The average and peak usage of RAM by Intelligence Server processes other than Intelligent Cube storage. For information on setting governing limits on other Intelligence Server processes and monitoring system usage, see Governing Intelligent Cube Memory Usage, Loading, and Storage.

l The amount of memory required to load all Intelligent Cubes necessary to meet reporting requirements. To help save space, Intelligent Cubes that are not used often can be unloaded until they are required by reports (see Monitoring and Modifying Intelligent Cube Status, page 1290).

l The Maximum RAM usage (Mbytes) memory limit can be defined per
project. If you have multiple projects that are hosted from the same
Intelligence Server, each project may store Intelligent Cube data up to its
memory limit.

l For example, you have three projects and you set their Maximum RAM
usage (Mbytes) limits to 1 GB, 1 GB, and 2 GB. This means that 4 GB of
Intelligent Cube data could be stored in RAM on the Intelligence Server
machine if all projects reach their memory limits.
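Because the limit is per project, the worst-case Intelligent Cube footprint on one machine is the sum of the per-project limits, as in this sketch (project names, limits, and machine sizes are hypothetical):

```python
# Worst-case Intelligent Cube RAM on one Intelligence Server machine is
# the sum of the per-project Maximum RAM usage (Mbytes) limits.
# Project names, limits, and machine figures below are hypothetical.
project_cube_limits_mb = {
    "Sales": 1024,
    "Finance": 1024,
    "Operations": 2048,
}
machine_ram_mb = 8192
other_server_usage_mb = 2048   # estimated non-cube Intelligence Server usage

worst_case_mb = sum(project_cube_limits_mb.values())   # 4096 MB
headroom_mb = machine_ram_mb - other_server_usage_mb - worst_case_mb

print(f"Worst-case cube memory: {worst_case_mb} MB")
print(f"Remaining headroom: {headroom_mb} MB")
```

If the headroom figure goes negative, the per-project limits together exceed what the machine can safely provide, and the limits should be lowered.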

l The size of the Intelligent Cubes that are being published and loaded into
memory. The process of publishing an Intelligent Cube can require
memory resources in the area of two to four times greater than the
Intelligent Cube's size. This can affect performance of your Intelligence
Server and the ability to publish the Intelligent Cube. For information on
how to plan for these memory requirements, see the next section.

l If your project and Intelligent Cubes support multiple languages, each supported language may require additional memory.


l To help reduce Intelligent Cube memory size, review the best practices
described in Best Practices for Reducing Intelligent Cube Memory Size,
page 1299.

Best Practices for Reducing Intelligent Cube Memory Size


MicroStrategy recommends the following best practices to reduce the
memory size of your Intelligent Cubes:

l Attributes commonly use numeric values for their ID forms. Using attributes defined in this way can save space compared to attributes that use character strings for their ID forms.

l Attribute forms should be included only as required, because including additional attribute forms in Intelligent Cubes requires additional memory.

l You should avoid including compound metrics and smart metrics in Intelligent Cubes. The same results provided by compound metrics and smart metrics can often be provided by creating derived metrics in reports that connect to Intelligent Cubes.

l You can define Intelligent Cube normalization to reduce the amount of memory required for an Intelligent Cube. Intelligent Cube normalization can be configured using the Data population for Intelligent Cubes VLDB property. For instructions on setting VLDB properties, see SQL Generation and Data Processing: VLDB Properties.

Planning for Intelligent Cube Publishing and Loading


Publishing an Intelligent Cube can require memory resources in the area of
two to four times greater than the size of an Intelligent Cube. Once the
Intelligent Cube is published, the extra resources are returned to the system
and only the space required for the Intelligent Cube and some indexing
information is taken up in RAM. However, you should consider this peak in
memory usage when planning for the publication of Intelligent Cubes.


By default, publishing an Intelligent Cube includes the step of loading the Intelligent Cube into memory. You can modify this default behavior, and load and unload Intelligent Cubes, as described in Monitoring and Modifying Intelligent Cube Status, page 1290.

If publishing an Intelligent Cube is processed in RAM alone, without using swap space, this greatly reduces the effect that publishing an Intelligent Cube has on the performance of your Intelligence Server host machine. Swap space is controlled by the operating system of a computer, and using this space for the transfer of data into RAM can negatively affect the performance of a computer.

You can help to keep the processes of publishing Intelligent Cubes within
RAM alone by defining memory limits for Intelligent Cubes that reflect your
Intelligence Server host's available RAM as well as schedule the publishing
of Intelligent Cubes at a time when RAM usage is low. For information on
scheduling Intelligent Cube publishing, see the In-memory Analytics Help.

To determine memory limits for Intelligent Cubes, you should review the
considerations listed in Determining Memory Limits for Intelligent Cubes,
page 1297. You must also account for the potential peak in memory usage
when publishing an Intelligent Cube, which can be two to four times the size
of an Intelligent Cube.

For example, your Intelligence Server machine has 2 GB of RAM and 2 GB of swap space. Assume that normal usage of RAM by the operating system uses 0.4 GB of RAM. This leaves a possible 1.6 GB of RAM for Intelligent Cube storage and other tasks.

With this configuration, consider the act of publishing a 1 GB Intelligent Cube. Assuming the peak in memory usage for publishing this Intelligent Cube is 2.5 times the size of the Intelligent Cube, the publishing process requires 2.5 GB. This can take up 1.6 GB of RAM, but it also requires 0.9 GB of swap space, as shown in the diagram below.


Once the Intelligent Cube is published, only the 1 GB for the Intelligent Cube (plus some space for indexing information) is used in RAM, and the remaining 0.6 GB of RAM and 0.9 GB of swap space used during the publishing of the Intelligent Cube is returned to the system, as shown in the image below.

While the Intelligent Cube can be published successfully, using the swap space could have an effect on the performance of the Intelligence Server machine.

With the same configuration, consider the act of publishing a 0.5 GB
Intelligent Cube rather than a 1 GB Intelligent Cube. Assuming the peak in
memory usage for publishing this Intelligent Cube is 2.5 times the size of the
Intelligent Cube, the publishing process requires 1.25 GB. This process can
be handled completely within RAM.
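The arithmetic in these two examples can be condensed into a quick feasibility check. The sketch below is illustrative only; the 2.5 peak factor and the memory figures are the assumed values from the examples, not fixed product behavior:

```python
def publish_fit(cube_gb, free_ram_gb, swap_gb, peak_factor=2.5):
    """Estimate whether publishing a cube fits in RAM alone,
    spills into swap, or cannot complete at all."""
    peak_gb = cube_gb * peak_factor      # transient memory needed while publishing
    if peak_gb <= free_ram_gb:
        return "fits in RAM"
    if peak_gb <= free_ram_gb + swap_gb:
        return "needs swap"              # publishes, but may slow the host
    return "insufficient memory"

# Values from the examples: 1.6 GB of free RAM, 2 GB of swap.
print(publish_fit(1.0, 1.6, 2.0))   # 1 GB cube -> 2.5 GB peak -> needs swap
print(publish_fit(0.5, 1.6, 2.0))   # 0.5 GB cube -> 1.25 GB peak -> fits in RAM
```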


Once the Intelligent Cube is published, only the 0.5 GB for the Intelligent
Cube (plus some space for indexing information) is used in RAM, and the
remaining RAM used during the publishing of the Intelligent Cube is returned
to the system.

Be aware that as more Intelligent Cube data is stored in RAM, less RAM is
available for publishing an Intelligent Cube. This, along with the peak
memory usage of publishing an Intelligent Cube and the hardware resources
of your Intelligence Server host machine, should all be considered when
defining memory limits for Intelligent Cube storage per project.

Defining Memory Limits for Intelligent Cubes


You can define limits for the amount of memory used by Intelligent Cubes in
Intelligence Server at a given time in the ways described below:


l You can use the amount of data required for all Intelligent Cubes to limit
the amount of Intelligent Cube data that is marked as "Loaded" at one time
for a project. The default is 256 megabytes.

The total size of the loaded Intelligent Cubes for a project is calculated
and compared to the limit you have defined. If loading an Intelligent Cube
would exceed this limit, one or more Intelligent Cubes are offloaded from
Intelligence Server memory before the new Intelligent Cube is loaded into
memory.

When Memory Mapped Files for Intelligent Cubes are enabled, only the
portion of the cube in memory will be governed by this counter (excluding
the portion of the cube on disk).

l You can use the number of Intelligent Cubes to limit the number of
Intelligent Cubes stored in Intelligence Server memory at one time for a
project. The default is 1000 Intelligent Cubes.

The total number of Intelligent Cubes for a project that are stored in
Intelligence Server memory is compared to the limit you define. If loading
an Intelligent Cube would exceed this numerical limit, an Intelligent Cube
is removed from Intelligence Server memory before the new Intelligent
Cube is loaded into memory.

l Starting in MicroStrategy ONE (June 2024), you can use the amount of
data required for Intelligent Cubes to limit the amount of Intelligent Cube
data stored in the Intelligence Server memory at one time for all projects
when the MicroStrategy products are deployed in container environments.
The default is 50% of the host machine memory, if there is no limit set on
the container.

When Memory Mapped Files for Intelligent Cubes are enabled, only the
portion of the cube in memory will be governed by this counter (excluding
the portion of the cube on disk).
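The offloading behavior described in the first bullet can be sketched as an eviction loop. The least-recently-loaded-first order below is an assumption for illustration; Intelligence Server's actual selection of which cube to offload may differ:

```python
from collections import OrderedDict

def load_cube(loaded, name, size_mb, limit_mb=256):
    """Offload cubes (oldest first, as an assumption) until the new
    cube fits under the project-level data-size limit."""
    if size_mb > limit_mb:
        raise ValueError("cube exceeds the project limit on its own")
    while sum(loaded.values()) + size_mb > limit_mb:
        loaded.popitem(last=False)     # offload one cube from memory
    loaded[name] = size_mb             # load the new cube
    return loaded

cubes = OrderedDict([("Sales", 120), ("Inventory", 100)])
load_cube(cubes, "Finance", 90)        # 310 MB would exceed the 256 MB limit
print(list(cubes))                     # ['Inventory', 'Finance']
```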


To Define Limits on Intelligence Server Memory Usage by Intelligent Cubes

1. In Developer, log in to a project that uses Intelligent Cubes. You must
log in using an account with the Administer Cubes privilege.

2. Right-click the project and select Project Configuration.

3. Expand Intelligent Cubes, and then select General.

4. Define the values for the following project governing options:

l Maximum RAM usage (MBytes): Defines the data size, in megabytes, to
limit the amount of Intelligent Cube data that can be loaded in
Intelligence Server memory for a project at one time. The default value
is 256 megabytes.

l Maximum number of cubes: Defines the maximum total number of
Intelligent Cubes that can be created for a project, including Intelligent
Cubes that are not loaded into Intelligence Server's memory. The default
value is 1000 Intelligent Cubes.

Beginning with MicroStrategy 2020 Update 1, this governing setting is
being retired. It will remain available, but the setting will not be
enforced if set below the default value of 1000.

l Maximum cube size allowed for download (MB): Defines the maximum
size of a cube, in megabytes. The default is 200.

5. Click OK.

To Define Limits on Intelligence Server Memory Usage by Intelligent Cubes in Container Deployments

Starting in MicroStrategy ONE (June 2024), you can use the following steps
to define limits on Intelligence Server memory usage by Intelligent Cubes in
container deployments. The Maximum memory consumption for Intelligent
Cubes (%) setting specifies the maximum percentage of system RAM that
Intelligent Cubes are allowed to consume.

1. In Workstation, log in to an environment that uses Intelligent Cubes.
You must log in using an account with the Administer Cubes privilege.

2. Right-click the environment and select Properties.

3. Click All Settings in the left navigation.

4. Search for Maximum memory consumption for Intelligent Cubes (%).

5. Edit the value, as needed.

The default value is 50% of the container memory or 50% of the host
machine memory, if there is no limit set on the container.

6. Click OK.

Defining Limits for Intelligent Cube Indexes


Intelligence Server generates indexes to speed up access to data in
Intelligent Cubes. In very large Intelligent Cubes, these indexes may
significantly increase the size of the Intelligent Cube. You can define limits
for how much the indexes can add to the size of the Intelligent Cube at the
project level, using the Project Configuration Editor.

To Define Limits for the Intelligent Cube Indexes

1. In Developer, log in to the project source for your project.

2. In the Folder List, right-click the project and choose Project
Configuration.

3. Expand Intelligent Cubes, and then select General.


4. Edit the following values, as appropriate:

l Maximum % growth of an Intelligent Cube due to indexes: Defines the
maximum that indexes are allowed to add to the Intelligent Cube's size,
as a percentage of the original size. For example, a setting of 50 percent
defines that a 100 MB Intelligent Cube can grow to 150 MB because of its
indexes. If the Intelligent Cube's size exceeds this limit, the least-used
indexes are saved to disk and then deleted from memory sequentially
until the cube size is below the upper limit.

l Cube growth check frequency (in mins): Defines, in minutes, how often
the Intelligent Cube's size is checked and, if necessary, how often the
least-used indexes are dropped.

5. Click OK.
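The 50 percent example above reduces to simple arithmetic. A minimal sketch of the bound (the function name is illustrative):

```python
def max_cube_size_mb(base_mb, max_index_growth_pct):
    """Upper bound on an Intelligent Cube's size once indexes are included."""
    return base_mb * (1 + max_index_growth_pct / 100)

# A 100 MB cube with a 50 percent growth limit may reach 150 MB.
print(max_cube_size_mb(100, 50))   # 150.0
```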

Defining Limits for Intelligent Cubes Created using the Import Data Feature
The Import Data feature allows users to create Intelligent Cubes using
external data sources, such as Excel files, tables in a database, or Freeform
SQL queries. You can define limits on the size of files that users can upload,
and on the total memory available for the Intelligent Cubes they can create.

To Define Limits on Intelligent Cubes Created Using the Import Data Feature

1. In Developer, log in to a project using an account with the Administer
Cubes privilege.

2. In the Folder List, right-click on the project and select Project
Configuration.

3. Expand Governing Rules, expand Default, and then select Import


Data.


4. Define values for the following options:

l Maximum file size (MB): Defines the maximum size for files that
users can upload and import data from. The default value is 30 MB.
The minimum value is 1 MB, and the maximum value is 9999999 MB.

l Maximum quota per user (MB): Defines the maximum size of all
data import cubes for each individual user. This quota includes the
file size of all data import cubes, regardless of whether they are
published to memory or on disk. You can set the maximum size quota
by entering the following values:

l -1: Unlimited - No limit is placed on the size of data import cubes for
each user.

l 0: Default - The default size limit of 100 MB is applied to each user.

l [1, ∞): Specific limit - Entering a value of 1 or greater applies a limit
of that many MB to each user.

In a clustered environment, this setting applies to all nodes in the
cluster.

5. Click OK.
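The three possible values of Maximum quota per user (MB) amount to a small decision rule. This sketch is an illustration of the semantics described above; the 100 MB default comes from the text, and the function name is an assumption:

```python
DEFAULT_QUOTA_MB = 100   # applied when the setting is 0

def effective_quota_mb(setting):
    """Interpret the 'Maximum quota per user (MB)' governing value."""
    if setting == -1:
        return None                 # unlimited
    if setting == 0:
        return DEFAULT_QUOTA_MB     # default limit
    if setting >= 1:
        return setting              # explicit per-user limit in MB
    raise ValueError("values below -1 are not meaningful")

print(effective_quota_mb(-1), effective_quota_mb(0), effective_quota_mb(500))
# None 100 500
```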

Memory Mapped Files for Intelligent Cubes


Memory mapped files (MMF) can be used for effective RAM utilization. In
prior releases, a cube was either loaded, residing completely in RAM, or
unloaded. MMF leverages operating system capabilities to enable more
cubes to be 'loaded', with their data split between RAM and disk.


Without MMF, only one cube cache in this example can be loaded in RAM at
any given time. With MMF, multiple cube caches can be loaded in RAM.

Requirements
To use MMF, more disk space than system memory is required. The disk
space is checked at server startup and logged in DSSErrors.log with a
message similar to one of the examples below. If MMF is enabled, the disk
space is also checked on subsequent publishings, with additional logging
only if the amount of available disk space drops too low.

l Enable MMF as the available disk size (25338925056) on
/iserver-install/BIN/Linux is not less than the total
physical memory size (7890911232)

l Disable MMF as the available disk size (5338925056) on
/iserver-install/BIN/Linux is less than the total
physical memory size (7890911232)

The number of descriptors for Linux, using the nofiles limit, should be set
to at least 65535 as mentioned in Recommended System Settings for Linux.
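The startup check described above compares available disk on the install volume with total physical memory. The decision function below mirrors the two log messages; gathering the real inputs is platform-specific and is shown only as a hedged illustration:

```python
import os
import shutil

def mmf_decision(available_disk, total_phys_mem):
    """Mirror the startup log: enable MMF only when available disk
    is not less than total physical memory."""
    return "enable" if available_disk >= total_phys_mem else "disable"

# The values from the two sample log lines above.
print(mmf_decision(25338925056, 7890911232))   # enable
print(mmf_decision(5338925056, 7890911232))    # disable

# On a live Linux host the inputs could be gathered roughly like this:
disk_free = shutil.disk_usage("/").free
if hasattr(os, "sysconf") and "SC_PHYS_PAGES" in os.sysconf_names:
    phys_mem = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
```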

File and Folder Structure


When enabled, a file or set of files is created for each managed cube under
a CubeGovern folder, with the structure
\CubeGovern\{ServerDefinition}\Server_{machine}_P{project}\MMF\{CubeInstanceID}.
This folder may contain a single file for non-partitioned OLAP cubes, or
multiple files for partitioned cubes (based on the number of partitions and
attributes used) and MTDI cubes (based on the number of fact tables, along
with partitioning). The location of the CubeGovern folder may vary, but can
be found from the DSSErrors.log entry referenced in the Requirements
section. These files are deleted if the cube is completely unloaded, and
created when the cube is published or loaded again.


Enable or Disable MMF


The usage of memory mapped files is configurable through MicroStrategy
Workstation at the environment, project, and cube (excluding live connect)
levels.

You can hover over the longer options, such as Apply best strategy to
maximize performance with given resources and Turn-off the
capability without exceptions, to view their full text.

The behavior for each of these settings is as follows:

l Use inherited value (not available at the environment level): Uses the
setting determined at the next higher level, such as the project for a
cube or the environment for a project.

l Apply best strategy to maximize performance with given resources:
Memory mapped files are only created after fetching data, and only for
cubes smaller than approximately 1 GB.

l Turn-off the capability without exceptions (not available at the cube
level): Memory mapped files are not used, regardless of the settings at
lower levels.

l Disable the capability: Memory mapped files are not used, unless they
are enabled at a lower level.

l Enable the capability: Memory mapped files are used, unless disabled
at a lower level or the disk requirement shown above is not met.
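The inheritance rules amount to a short resolution walk: the most specific level wins, except that Turn-off the capability without exceptions at a higher level overrides everything below it. This sketch of that logic uses assumed level names and is an illustration, not product code:

```python
INHERIT = "Use inherited value"
TURN_OFF = "Turn-off the capability without exceptions"

def resolve_mmf(environment, project=INHERIT, cube=INHERIT):
    """Resolve the effective MMF setting across the three levels."""
    if TURN_OFF in (environment, project):
        return TURN_OFF                  # overrides all lower levels
    for setting in (cube, project, environment):
        if setting != INHERIT:
            return setting               # most specific concrete value wins
    return environment

print(resolve_mmf("Enable the capability", cube="Disable the capability"))
# Disable the capability
```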

Troubleshooting

File descriptor on Linux

When using memory mapped files (MMFs) on systems with large numbers of
loaded cubes, particularly partitioned cubes, it is possible to exceed the
OS-configurable limit of open files per process. The following examples show
the errors you may encounter if this limit is reached through usage of this
feature on a Linux Intelligence Server.

The following error is from publishing a cube in MicroStrategy Web.

(QueryEngine encounter error: MFileSystem:OpenStream:
::fopen failed for '/iserver-install/BIN/Linux/MSIReg.reg_lock'.
System Error (EMFILE) --- Too many open files. Error in
Process method of Component: QueryEngineServer, Project
Nico's MD Population Opt for Cubes, Job 908, Error Code=
-214721544.)

The same error is encountered when republishing a cube within the
dashboard editor in MicroStrategy Workstation.
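The nofiles recommendation from the Requirements section can be verified from inside a process with Python's standard resource module (Unix only); the 65535 threshold is the one recommended above:

```python
import resource

RECOMMENDED_NOFILE = 65535

def nofile_ok(soft_limit, recommended=RECOMMENDED_NOFILE):
    """True when the per-process open-file limit meets the recommendation."""
    return soft_limit == resource.RLIM_INFINITY or soft_limit >= recommended

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile soft limit: {soft} -> {'OK' if nofile_ok(soft) else 'too low'}")
```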


MMFs are not generated on disk

Please check the available disk space and refer to the previous
Requirements section.

Loading Intelligent Cubes when Intelligence Server Starts


When Intelligence Server starts, various tasks are processed to prepare a
reporting environment. You can include loading all published Intelligent
Cubes as one of the tasks completed when Intelligence Server starts. This
affects when the load time required for Intelligent Cubes occurs.

The considerations for whether to load Intelligent Cubes at Intelligence
Server startup or when a report is executed that accesses a published
Intelligent Cube are described below.

Loading Intelligent Cubes when Intelligence Server starts

Pros:

l Report runtime performance for reports accessing Intelligent Cubes is
optimized since the Intelligent Cube for the report has already been
loaded.

l This practice is a good option if Intelligent Cubes are commonly used in
a project.

Cons:

l The overhead experienced during Intelligence Server startup is
increased due to the processing of loading Intelligent Cubes.

l All Intelligent Cubes for a project are loaded into Intelligence Server
memory, regardless of whether they are used by reports or not.

Loading Intelligent Cubes when a report is executed that accesses a
published Intelligent Cube

Pros:

l The overhead experienced during Intelligence Server startup is
decreased as compared to including loading Intelligent Cubes as part of
the startup tasks.

l If Intelligent Cubes are not required by any reports, then they do not
need to be loaded into Intelligence Server and no overhead is
experienced.

l This practice is a good option if Intelligent Cubes are supported for a
project, but some of the Intelligent Cubes are rarely used in the project.

Cons:

l Report runtime performance for reports accessing Intelligent Cubes can
be negatively affected, as the Intelligent Cube must first be loaded into
Intelligence Server. You can also load Intelligent Cubes manually or with
subscriptions after Intelligence Server is started.

The procedure below describes how to enable or disable loading Intelligent
Cubes when Intelligence Server starts.

The act of loading an Intelligent Cube can require memory resources of
roughly two times the size of the Intelligent Cube. This can affect the
performance of your Intelligence Server as well as the ability to load the
Intelligent Cube. For information on how to plan for these memory
requirements, see Governing Intelligent Cube Memory Usage, page 1297.


To Enable or Disable Loading Intelligent Cubes when Intelligence Server Starts

1. In Developer, log in to a project with a user account with administrative
privileges.

2. Right-click the project and select Project Configuration.

3. Expand Intelligent Cubes, and then select General.

4. Select or clear the Load Intelligent cubes on startup check box to
enable or disable loading Intelligent Cubes when Intelligence Server
starts.

5. Click OK.

Storing Intelligent Cubes in Secondary Storage


Along with storing Intelligent Cubes in Intelligence Server memory, you can
also store them in secondary storage, such as a hard disk. These Intelligent
Cubes can then be loaded from secondary storage into Intelligence Server
memory when reports require access to the Intelligent Cube data.

To Store an Intelligent Cube in Secondary Storage

1. In Developer, log in to a project source with administrative privileges.

2. Right-click the project and select Project Configuration.

3. Expand Intelligent Cubes, and then select General.

4. In the Intelligent Cube file directory area, click ... (the Browse
button).

5. Browse to the folder location to store Intelligent Cubes, and then click
OK.

6. Click OK.


7. From the Folder List, expand Administration, then expand System
Monitors, then expand Caches, and then select Intelligent Cubes.

8. Right-click the Intelligent Cube to store in secondary storage and select
Save to Disk.

You can also define when Intelligent Cubes are automatically saved to
secondary storage, as described in Defining when Intelligent Cubes are
Automatically Saved to Secondary Storage, page 1314 below.

Defining when Intelligent Cubes are Automatically Saved to Secondary Storage
In addition to manually saving Intelligent Cubes to secondary storage, you
can also define when Intelligent Cubes are automatically saved to secondary
storage.

To Define when Intelligent Cubes are Automatically Saved to Secondary Storage

1. In Developer, log in to a project source with administrative privileges.

2. From the Administration menu, point to Server, and then select
Configure MicroStrategy Intelligence Server.

3. Expand the Server Definition category, and select Advanced.

4. In the Backup frequency (minutes) field, type the interval (in minutes)
between when Intelligent Cubes are automatically saved to secondary
storage. A value of 0 means that Intelligent Cubes are backed up
immediately after they are created or updated.

Be aware that this option also controls the frequency at which cache
and History List messages are backed up to disk, as described in
Configuring Result Cache Settings, page 1228.


5. Click OK.

6. Restart Intelligence Server for your changes to take effect.

Supporting Connection Mappings in Intelligent Cubes


Connection mappings allow you to assign a user or group in the
MicroStrategy system to a specific login ID on the data warehouse.
Connection mappings are typically used for one of the following reasons:

l To take advantage of one of several RDBMS data security techniques
(security views, split fact tables by rows, split fact tables by columns)
that you may have already created

l To allow users to connect to multiple data warehouses using the same
project

For detailed information about connection mapping, see the Installation and
Configuration Help.

If you use connection mapping in a project that includes Intelligent Cubes,
you should define your Intelligent Cubes to use and support connection
mapping. If you do not, users may be able to access data they are not
intended to have access to.

When an Intelligent Cube that supports connection mapping is published, it
uses the connection mapping of the user account that published the
Intelligent Cube. Only users that have this connection mapping can create
and view reports that access this Intelligent Cube. This maintains the data
access security and control defined by your connection mappings.

If an Intelligent Cube needs to be available for multiple connection
mappings, you must publish a separate version of the Intelligent Cube for
each of the required connection mappings.

For example, Intelligent Cube X is created in a project and defined to
support connection mapping. User JDoe, who is assigned to connection
mapping A, publishes Intelligent Cube X. The Intelligent Cube is published
using connection mapping A. User FJohnson, who is assigned connection
mapping B, cannot create and execute a report connected to Intelligent Cube
X. To allow FJohnson to create and execute a report connected to Intelligent
Cube X, a user account assigned to connection mapping B must publish the
Intelligent Cube.
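The example above reduces to a single rule: a published cube is usable only under the connection mapping it was published with. A minimal sketch of that rule, using the names from the example:

```python
def can_use_cube(cube_mapping, user_mapping):
    """A cube published under one connection mapping is usable only
    by users assigned that same mapping."""
    return cube_mapping == user_mapping

published = {"Cube X": "A"}    # JDoe (mapping A) published Cube X
print(can_use_cube(published["Cube X"], "A"))   # JDoe: True
print(can_use_cube(published["Cube X"], "B"))   # FJohnson: False
```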

To Support Connection Mapping for All Intelligent Cubes in a Project

1. In Developer, log in to a project with a user account with administrative
privileges.

2. Right-click a project and select Project Configuration.

3. Expand Intelligent Cubes, and then select General.

4. Select the Create Intelligent Cubes by database connection check
box.

If you do not use connection mapping, leave this check box cleared.

5. Click OK.


SCHEDULING JOBS AND ADMINISTRATIVE TASKS

Scheduling is a feature of Intelligence Server that you can use to automate
various tasks. Time-sensitive, time-consuming, repetitive, and bulk tasks
are ideal candidates for scheduling. Running a report or document is the
most commonly scheduled task because scheduling reports, in conjunction
with other features such as caching and clustering, can improve the overall
system performance. Certain administration tasks can also be scheduled.

Intelligence Server executes a task in exactly the same manner whether or
not it is scheduled. All governing parameters and error conditions apply to
scheduled tasks just as they apply to other requests.

The scheduling feature is turned on by default. However, you can disable
scheduling in the Intelligence Server Configuration Editor. In the Server
Definition category, in the Advanced subcategory, clear the Use
MicroStrategy Scheduler check box.

Best Practices for Scheduling Jobs and Administrative Tasks
MicroStrategy recommends the following best practices when scheduling
jobs and administrative tasks:

l Executing simultaneous reports can strain system resources. If you have
many reports or tasks that need to be executed on the same time-based
schedule, consider creating several similar schedules that trigger 15
minutes apart. For example, one schedule triggers at 8 AM every Monday,
and another triggers at 8:15 AM.
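The staggering suggestion above can be sketched by generating trigger times a fixed step apart. The 8 AM Monday start and the 15-minute step are the values from the text; the function name is illustrative:

```python
from datetime import datetime, timedelta

def staggered_times(start, count, step_minutes=15):
    """Trigger times spaced step_minutes apart, starting at start."""
    return [start + timedelta(minutes=step_minutes * i) for i in range(count)]

monday_8am = datetime(2024, 1, 8, 8, 0)        # a Monday
for t in staggered_times(monday_8am, 3):
    print(t.strftime("%H:%M"))                 # 08:00, 08:15, 08:30
```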

l To prevent users from scheduling many simultaneous reports, you can
restrict who is allowed to schedule jobs with a schedule by editing the
schedule's Access Control List (ACL). To do this, in the Schedule
Manager, right-click the schedule and select Properties, then select the
Security tab in the Properties dialog box, and make sure that only users
who can use the schedule have Modify or Full Control access to the
schedule. For more information about ACLs, see Controlling Access to
Objects: Permissions, page 89.

l Establish reasonable limits on how many scheduled jobs are allowed. For
details on this setting, see Limit the Total Number of Jobs, page 1074.

l If you need to create multiple similar subscriptions, you can create them
all at once with the Subscription Wizard. For example, you can subscribe
users to several reports at the same time.

l If you need to temporarily disable a schedule, you can set its start date for
some time in the future. The schedule does not trigger any deliveries until
its scheduled start date.

l In a clustered system, if it is important which node an administrative task
is executed on, use an event-triggered schedule and trigger the event on
that node.

l If many subscriptions are listed in the Subscription Manager, you can filter
the list of subscriptions so that you see the relevant subscriptions.

l When selecting reports to be subscribed to, make sure all the reports with
prompts that require an answer actually have a default answer. If a report
has a prompt that requires an answer but has no default answer, the
subscription cannot run the report successfully because the prompt cannot
be resolved, and the subscription is automatically invalidated and removed
from the system.

l When a scheduled report or document finishes executing, a message can
display in the subscribed user's History List alerting them that the report is
ready to be viewed. The user then opens the message to retrieve the
results. If the request was not completed successfully, the user can view
details of the error message. These messages are available in the History
List folder. For more information about History Lists, see Saving Report
Results: History List, page 1240.


l You can track the successful delivery of a subscribed report or document.
In the Subscription Editor or Subscription Wizard, select the Send
notification to email address check box and specify the email address. A
notification email is sent to the selected address when the subscribed
report or document is successfully delivered to the recipients.

l You can track the failed delivery of subscribed reports or documents. In
the Project Configuration Editor, in the Deliveries: Email notification
category, enable the administrator notification settings for failed
deliveries.

l For best performance, consider configuring the following settings to suit
your subscription needs:

l Tune the Number of scheduled jobs governing setting according to the
size of your hardware. Larger hardware can handle higher settings.

l Enable caching.

l If your database and database machine allow a larger number of
warehouse connections, increasing this number can improve
performance by allowing more jobs to execute against the warehouse.

l Increase the Scheduler session timeout setting.

Exercise caution when changing settings from the default. For details
on each setting, see the appropriate section of this manual.

l To control memory usage, consider configuring the following settings:

l Limit the number of scheduled jobs per project and per Intelligence
Server.

l Increase the User session idle time.

l Enable caching.

l If you are using Distribution Services, see Best Practices for Using
Distribution Services, page 1351.


Creating and Managing Schedules


A schedule is a MicroStrategy object that contains information specifying
when a task is to be executed. One schedule can control several tasks.
Schedules are stored at the project source level and are thus available to all
projects in the project source.

Intelligence Server supports two types of schedules:

l Time-triggered schedules execute at a date and time or on a recurring
date and time. For details, see Time-Triggered Schedules, page 1321.

l Event-triggered schedules execute when the event associated with them is
triggered. For details, see Event-Triggered Schedules, page 1321.

Time-Triggered Schedules
With a time-triggered schedule, you define a date and time at which the
scheduled task is to be run. For example, you can execute a task every
Sunday night at midnight. Time-triggered schedules are useful to allow
large, resource-intensive tasks to run at off-peak times, such as overnight or
over a weekend.

l Time-triggered schedules execute according to the time on the machine
where they were created. For example, a schedule is created using
Developer on a machine that is in the Pacific time zone (GMT -8:00). The
schedule is set to be triggered at 9:00 AM. The machine is connected to
an Intelligence Server in the Eastern time zone (GMT -5:00). The
schedule executes at 12:00 PM Eastern time, which is 9:00 AM Pacific
time.
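The time-zone arithmetic in this example can be reproduced with Python's standard zoneinfo module (an illustration of the conversion, not of Intelligence Server internals):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A schedule created for 9:00 AM Pacific time (a Monday in January)...
created = datetime(2024, 1, 8, 9, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# ...fires at the same instant, expressed in the server's Eastern zone.
server_local = created.astimezone(ZoneInfo("America/New_York"))
print(server_local.strftime("%I:%M %p"))   # 12:00 PM
```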

l In a clustered environment, administrative tasks associated with
time-triggered schedules are executed on only the primary node of the
cluster.

Event-Triggered Schedules
An event-triggered schedule causes tasks to occur when an event occurs.
For example, an event may trigger when the database is loaded, or when the
books are closed at the end of a cycle.

When an event is triggered, all tasks tied to that event through an event-
triggered schedule begin processing. For more information about events,
including how to create them, see About Events and Event-Triggered
Schedules, page 1326.

In a clustered environment, administrative tasks associated with
event-triggered schedules are executed on only the node of the cluster that
triggered the event.

Creating Schedules
To create schedules, you must have the privileges Create Configuration
Object and Create and Edit Schedules and Events. In addition, you need to
have Write access to the Schedule folder. For information about privileges
and permissions, see Controlling Access to Application Functionality, page
88.

To create effective and useful schedules, you must have a clear
understanding of your users' needs and the usage patterns of the overall
system. Schedules must be created before they are linked to any tasks.

To Create a Schedule

1. In Developer, log in to a project source.

2. Expand Administration, then expand Configuration Managers, and
then select Schedules. The list of schedules for the project source
displays on the right-hand side.

3. From the File menu, point to New, and then select Schedule.

4. Step through the wizard, entering the required information:

l To create a time-triggered schedule, when prompted for the schedule
type, select Time-triggered. Then select the frequency and time the
schedule is triggered.


l To create an event-triggered schedule, when prompted for the
schedule type, select Event-triggered. Then select the event that
triggers the schedule.

5. When you reach the Summary page of the Wizard, review your choices
and click Finish.

You can also create a schedule with the Create Schedule script for
Command Manager. For detailed syntax, see the Create Schedule script
outline in Command Manager Help.

Managing Schedules
You can add, remove, or modify schedules through the Schedule Manager.
You can modify the events that trigger event-triggered schedules through
the Event Manager. For instructions on using the Event Manager, see About
Events and Event-Triggered Schedules, page 1326.

You can also specify that certain schedules can execute subscriptions
relating only to certain projects. For instructions, see Restricting Schedules,
page 1324.

To Manage Your Schedules in the Schedule Manager

1. In Developer, log in to a project source.

2. Expand Administration, then expand Configuration Managers, and
then select Schedules. The list of schedules for the project source
displays on the right-hand side.

3. To manage your schedules, select from the tasks below:

l To create a new schedule, see Creating Schedules, page 1322.

l To modify a schedule, right-click the schedule and select Edit. The
Schedule Wizard opens with that schedule's information. Step through
the wizard and make any changes.

l To delete a schedule, right-click the schedule and select Delete.


l To find all subscriptions that use one of the schedules, right-click the
schedule and select Search for dependent subscriptions.

Restricting Schedules

You may want to restrict some schedules so that they can be used only by
subscriptions in specific projects. For example, your On Sales Database
Load schedule may not be relevant to your Human Resources project. You
can configure the Human Resources project so that the On Sales Database
Load schedule is not listed as an option for subscriptions in that project.

You may also want to restrict schedules so that they cannot be used to
subscribe to certain reports. For example, your very large All Worldwide
Sales Data document should not be subscribed to using the Every Morning
schedule. You can configure the All Worldwide Sales Data document so that
the Every Morning schedule is not listed as an option for subscriptions to
that document.

To Restrict Schedules for a Project

1. In MicroStrategy Web, log in to the project you are restricting schedules for. You must log in as a user with administrative access to the MicroStrategy Web preferences.

2. Click the MicroStrategy icon, then click Preferences.

3. In the left column, click Project Defaults, and then click Schedule.

4. Select Only allow users to subscribe to the schedules below.

5. The left column lists schedules that users are not allowed to subscribe
to. The right column lists schedules that users are allowed to subscribe
to.


When you first select this option, no schedules are allowed. All
schedules are listed by default in the left column, and the right column
is empty.

6. To allow users to subscribe to a schedule, select the schedule and click the right arrow. The schedule moves to the right column.

7. When you are finished selecting the schedules that users are allowed to
subscribe to in this project, click Save.

To Restrict Schedules for a Report or Document

1. In MicroStrategy Web, log in to the project you are restricting schedules for. You must log in as a user with administrative access to the MicroStrategy Web preferences.

2. Execute the report or document.

3. From the Tools menu, select Report Options.

4. Select the Advanced tab.

5. Select Only allow users to subscribe to the schedules below.

6. The left column lists schedules that users are not allowed to subscribe
to. The right column lists schedules that users are allowed to subscribe
to.

When you first select this option, no schedules are allowed. All
schedules are listed by default in the left column, and the right column
is empty.

7. To allow users to subscribe to a schedule, select the schedule and click the right arrow. The schedule moves to the right column.

8. When you are finished selecting the schedules that users are allowed to
subscribe to in this project, click OK.


About Events and Event-Triggered Schedules


Subscriptions and tasks that are based on event-triggered schedules (see
Event-Triggered Schedules, page 1321) execute when a MicroStrategy
event is triggered. These triggers do not need to be defined in advance. A
system external to Intelligence Server is responsible for determining
whether the conditions for triggering an event are met. For more information
on how to trigger events, see Triggering Events, page 1327.

Once Intelligence Server has been notified that the event has taken place,
Intelligence Server performs the tasks associated with the corresponding
schedule.

In a clustered environment, administrative tasks associated with event-triggered schedules are executed only by the node on which the event is triggered. MicroStrategy recommends that you use event-triggered schedules in situations where it is important to control which node performs certain tasks.

If projects are distributed asymmetrically across the cluster, when you assign an event-triggered schedule to a project, make sure you trigger the event on all nodes on which that project is loaded. See Managing Scheduled Administration Tasks, page 1331.
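The "trigger on every hosting node" rule above can be sketched as a small helper that derives the target nodes from a project-to-node map before firing the event. Everything here is illustrative: the node names, the cluster map shape, and the trigger_on_node callable are hypothetical placeholders, not a MicroStrategy API.

```python
# Sketch: given a map of cluster nodes to their loaded projects, find every
# node on which a project is loaded so an event can be triggered on each one.
# Node names and the trigger_on_node callable are hypothetical placeholders.

def nodes_hosting(project: str, cluster_map: dict[str, list[str]]) -> list[str]:
    """Return the nodes on which the given project is loaded."""
    return sorted(node for node, projects in cluster_map.items()
                  if project in projects)

def trigger_event_on_all_nodes(event: str, project: str,
                               cluster_map, trigger_on_node) -> list[str]:
    """Trigger the event once per node hosting the project; return the nodes."""
    targets = nodes_hosting(project, cluster_map)
    for node in targets:
        trigger_on_node(node, event)  # e.g. run a Command Manager script there
    return targets

cluster = {"node1": ["Sales", "HR"], "node2": ["Sales"], "node3": ["HR"]}
fired = trigger_event_on_all_nodes("OnDBLoad", "Sales", cluster,
                                   lambda node, event: None)
# fired == ["node1", "node2"]
```

In practice trigger_on_node would wrap whatever mechanism you use to reach each node, such as running a Command Manager script against that node's project source.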

Creating Events
You can create events in Developer using the Event Manager.

To Create an Event in Developer

1. In Developer, log in to a project source. You must log in as a user with the Create And Edit Schedules And Events privilege.

2. Go to Administration > Configuration Managers > Events. The list of events for the project source displays on the right-hand side.

3. Select File > New > Event.

4. Name the new event.

To Create an Event Using Command Manager

You can create events with the following Command Manager script:

CREATE EVENT event_name [DESCRIPTION description];

By default, this script is in the folder C:\Program Files\MicroStrategy\Command Manager\Outlines\.

Triggering Events
MicroStrategy Command Manager can trigger events from the Windows
command line. By executing Command Manager scripts, external systems
can trigger events and cause the associated tasks to be run. For more
information about Command Manager, see Chapter 15, Automating
Administrative Tasks with Command Manager.

For example, you want to execute several reports immediately after a database load occurs so that these reports always have a valid cache available. You create an event called OnDBLoad and associate it with an event-triggered schedule. You then subscribe those reports to that schedule.

At the end of the database load routine, you include a statement to add a
line to a database table, DB_LOAD_COMPLETE, that indicates that the
database load is complete. You then create a database trigger that checks
to see when the DB_LOAD_COMPLETE table is updated, and then executes
a Command Manager script. That script contains the following line:

TRIGGER EVENT "OnDBLoad";

When the script is executed, the OnDBLoad event is triggered, and the
schedule is executed.
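The database-load handshake described above can be sketched in a few lines of glue code. Everything here is illustrative: the table name comes from the example, the row-count check stands in for a real database trigger, and the cmdmgr flags are typical but should be verified for your installation.

```python
import subprocess

# Sketch of the database-load handshake described above: when a new row
# appears in DB_LOAD_COMPLETE, run a Command Manager script that triggers
# the OnDBLoad event. Table name and cmdmgr flags are illustrative only.

TRIGGER_SCRIPT = 'TRIGGER EVENT "OnDBLoad";'

def load_completed(previous_count: int, current_count: int) -> bool:
    """A new row in DB_LOAD_COMPLETE signals that the load finished."""
    return current_count > previous_count

def run_trigger(script_path: str, project_source: str, user: str) -> list[str]:
    """Build (and optionally run) the cmdmgr call for the trigger script."""
    cmd = ["cmdmgr", "-n", project_source, "-u", user, "-f", script_path]
    # subprocess.run(cmd, check=True)  # uncomment on a machine with cmdmgr
    return cmd
```

In a real deployment the database trigger itself executes the Command Manager script, so this polling helper is only one of several ways to wire the pieces together.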


You can also use the MicroStrategy SDK to develop an application that
triggers an event. You can then cause the database trigger to execute this
application.

Triggering Events Manually

You can manually trigger events using the Event Manager. This is primarily
useful in a testing environment. In a production system, it may not be
practical for the administrator to be present to trigger event-based
schedules.

To Trigger an Event Manually

1. In Developer, log in to a project source. You must log in as a user with the Trigger Event privilege.

2. Go to Administration > Configuration Managers > Events.

3. Right-click an event and select Trigger.

Scheduling Administrative Tasks


In addition to scheduling report and document execution, you can instruct Intelligence Server to perform certain administrative tasks according to a schedule. For example, you can delete all History List messages every month, or idle a project once a week for maintenance and then resume it an hour later.

To schedule an administrative task, you must have the Administer Subscriptions privilege and any privileges required for that task.


To Schedule an Administrative Task

1. In Developer, from the Administration menu, point to Scheduling and then select Schedule Administration Tasks.

2. To schedule tasks for a project, select that project. To schedule tasks for the project source, select All Projects.

3. Choose a task from the action list. For descriptions of the tasks, see the
table below.

4. Select one or more schedules for the task.

5. Set any additional options required for the task.

6. Click OK.

The table below lists the tasks that can be scheduled for a project. Some of
the tasks can also be scheduled at the project source level, affecting all
projects in that project source.

Cache or History List management tasks

• Delete caches: Delete all report caches for the project. For more information, see Managing Result Caches, page 1221. Typically the Invalidate Caches task is sufficient to clear the report caches.

• Clean History List database: Delete orphaned entries and ownerless inbox messages from the History List database. For more information, see Managing History Lists, page 1254.

• Delete History List messages (project or project source): Delete all History List messages for the project or project source. For more information, see Managing History Lists, page 1254. This maintenance request can be large. Schedule History List deletions for times when Intelligence Server is not busy, such as when users are not sending requests to the system. Alternatively, delete History Lists in increments; for example, delete the History Lists of groups of users at different times, such as at 1 AM, 2 AM, and so on.

• Invalidate caches: Invalidate the report caches in a project. The invalid caches are automatically deleted once all references to them have been deleted. For more information, see Managing Result Caches, page 1221.

• Purge element caches: Delete the element caches for a project. For more information, see Deleting All Element Caches, page 1274.

Intelligent Cube management tasks

• Activate Intelligent Cubes: Publish an Intelligent Cube to Intelligence Server, making it available for use in reports. For more information, see Chapter 11, Managing Intelligent Cubes.

• Deactivate Intelligent Cubes: Unpublish an Intelligent Cube from Intelligence Server. For more information, see Chapter 11, Managing Intelligent Cubes.

• Delete Intelligent Cube: Delete an Intelligent Cube from the server. For more information, see Chapter 11, Managing Intelligent Cubes.

• Update Intelligent Cubes: Update a published Intelligent Cube. For more information, see Chapter 11, Managing Intelligent Cubes.

Project management tasks

• Idle project: Cause the project to stop accepting certain types of requests. For more information, see Setting the Status of a Project, page 48.

• Load project: Bring the project back into normal operation from an unloaded state. For more information, see Setting the Status of a Project, page 48.

• Resume project: Bring the project back into normal operation from an idle state. For more information, see Setting the Status of a Project, page 48.

• Unload project: Take a project offline to users and remove the project from Intelligence Server memory. For more information, see Setting the Status of a Project, page 48.

Miscellaneous management tasks

• Batch LDAP import (project source only): Import LDAP users into the MicroStrategy system. For more information, see Manage LDAP Authentication, page 189.

• Delete unused managed objects (project or project source): Remove the unused managed objects created for Freeform SQL, Query Builder, and MDX cube reports. For more information, see Delete Unused Schema Objects: Managed Objects, page 822.

• Deliver APNS Push Notification: Deliver a push notification for a Newsstand subscription to a mobile device. For more information, see the MicroStrategy Mobile Design and Administration Guide.
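For the incremental History List cleanup suggested above (deleting different groups of users at 1 AM, 2 AM, and so on), the batching arithmetic might look like the sketch below, which simply chunks a user list and assigns each chunk an hour. This is illustrative scheduling logic only, not a MicroStrategy API.

```python
# Sketch: split users into fixed-size batches and assign each batch a
# deletion hour (1 AM, 2 AM, ...), as suggested for large History List
# cleanups. Purely illustrative; user IDs are made up.

def staggered_batches(users: list[str], batch_size: int,
                      start_hour: int = 1) -> list[tuple[int, list[str]]]:
    """Return (hour, users) pairs, one batch per hour starting at start_hour."""
    batches = [users[i:i + batch_size] for i in range(0, len(users), batch_size)]
    return [(start_hour + i, batch) for i, batch in enumerate(batches)]

plan = staggered_batches(["u1", "u2", "u3", "u4", "u5"], batch_size=2)
# plan == [(1, ["u1", "u2"]), (2, ["u3", "u4"]), (3, ["u5"])]
```

Each (hour, batch) pair would then map to its own time-triggered schedule and a Delete History List messages task scoped to that group of users.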

Managing Scheduled Administration Tasks


The Scheduled Maintenance view of the System Administration monitor lists
all the scheduled administrative tasks for a project source. From this view
you can see information about all the scheduled tasks or delete one or more
tasks. For more information about using the System Administration monitor,
see Managing and Monitoring Projects, page 44.

To Manage Scheduled Administration Tasks

1. In Developer, log in to a project source. You must log in as a user with the Administer Subscriptions privilege.

2. Expand Administration, and then expand System Administration.

3. Select Scheduled Maintenance.

4. To view detailed information about a scheduled task, right-click the task and select Quick View.

5. To delete a scheduled task, right-click the task and select Expire.

Users are not notified when a task they have scheduled is deleted.

Scheduling Administrative Tasks in a Clustered System


When you set up several Intelligence Server machines in a cluster, you can
distribute projects across those clustered machines (or nodes) in any
configuration. Each node can host a different subset of projects. For more
information about clustering Intelligence Servers, see Chapter 9, Cluster
Multiple MicroStrategy Servers.

To determine which server handles each scheduled administrative task, use the following guidelines:

• Tasks that are based on time-based schedules are executed on the primary node for each project. You can find a project's primary node using the Cluster view of the System Administration monitor.

• Tasks that are based on event-triggered schedules are executed on the node on which the event is triggered. The administrator must be sure to trigger the event on all nodes (that is, all machines) that are running the project for which the event-based schedule is assigned.

You can see which nodes are running which projects using the Cluster view
of the System Administration monitor. For details on using the Cluster view
of the System Administration monitor, see Manage Your Clustered System,
page 1168.


Scheduling Reports and Documents: Subscriptions


Normally, Intelligence Server executes report or document requests
immediately after they are made. A subscription allows these requests to be
executed according to a schedule specified by the administrator. Users can
create subscriptions for themselves, or system administrators can subscribe
users to reports. In addition, if you have a Distribution Services license, you
can deliver subscribed reports or documents to other users by email, file,
FTP, or print.

Scheduling report and document execution reduces the load on the system in the following ways:

• You can create caches for frequently accessed reports and documents, which provides fast response times to users without generating additional load on the database system.

• Large, long-running reports and documents can be postponed to times when the system load is lighter.

A subscription for a document creates or updates only that document's cache for the default mode of the document (HTML, PDF, Excel, or XML/Flash). If the document is viewed in other modes, it does not use this cache. For more information about how Intelligence Server determines whether to use a cache, see Cache Matching Algorithm, page 1213.

When a user subscribes or is subscribed to a report or document, that user's personalization selections apply to the subscription. Personalization selections can include language choice, delivery method, delivery location, delivery format, and so on. Personalization options vary depending on what a report or document supports, whether the user's MicroStrategy environment is internationalized in the appropriate language for the user, and so on.

This section provides the following information about subscriptions:

• Types of Subscriptions, page 1334

• Creating Subscriptions, page 1336

• Managing Subscriptions, page 1346

Types of Subscriptions
You can create the following types of subscriptions for a report or document:

• Cache update subscriptions refresh the cache for the specified report or document. For example, your system contains a set of standard weekly and monthly reports. These reports should be kept in cache because they are frequently accessed. Certain tables in the database are refreshed weekly, and other tables are refreshed monthly. Whenever these tables are updated, the appropriate caches should be refreshed.

Cache update subscriptions often use event-triggered schedules because caches generally do not need to be refreshed unless the underlying data changes from an event like a data warehouse load. For additional suggestions for scheduling strategies, see Managing Result Caches, page 1221. For detailed information about caches, see Result Caches, page 1203.

WebDAV subscriptions are a special type of cache update subscription. These subscriptions update an Intelligence Server folder whose contents are hosted by a WebDAV server so that the information in the folder can be served to mobile devices. For information about WebDAV folders, see the Advanced Reporting Help.

• History List subscriptions create a History List message for the specified report or document. Users can then retrieve the report or document from the History List message in their History List folder. For detailed information about the History List, see Saving Report Results: History List, page 1240.

• Mobile subscriptions deliver the report or document to a mobile device, such as an iPhone or an Android device, via MicroStrategy Mobile. These subscriptions are available if you own the MicroStrategy Mobile product. For detailed information about mobile subscriptions and MicroStrategy Mobile, see the MicroStrategy Mobile Administration Help.

• Intelligent Cube update subscriptions retrieve the most recent information for an Intelligent Cube from the data warehouse and then publish that Intelligent Cube. Like cache update subscriptions, Intelligent Cube update subscriptions are good candidates for event-triggered schedules. For detailed information about Intelligent Cubes, see the In-memory Analytics Help.

• Email subscriptions deliver a report or document to one or more email addresses.

• File subscriptions save the report or document as an Excel or PDF file to a disk location on the network.

• Print subscriptions automatically print a report or document from a specified printer.

• FTP subscriptions automatically save the report or document to a location on an FTP server in the file format the user chooses: comma-separated values (CSV), PDF, HTML, MS Excel, and plain text.

Email, file, print, and FTP subscriptions are available if you have purchased a Distribution Services license. For information on purchasing Distribution Services, contact your MicroStrategy account representative.

Distribution Services Subscriptions


If you have a Distribution Services license, you can set up information flows
for yourself and other users by subscribing to report and document
deliveries. Users can freely personalize these deliveries by selecting
delivery formats and locations, such as:

• Format: HTML, PDF, Excel, ZIP file, plain text, CSV, bulk export, or .mstr (dashboard) file

• Delivery location: Email, network printer, FTP location, file server (including portals and PCs), or the user's MicroStrategy History List, which serves as a report archive and immediately informs the user of the delivery by email

Reports or documents that are subscribed to for delivery through Distribution Services can be compressed and password protected. Standard MicroStrategy security credentials are applied for each user subscribed to receive a report or document.

Before you can use Distribution Services to deliver reports and documents, you must create the appropriate devices, transmitters, and contacts. For detailed information on these objects and instructions on setting up a Distribution Services system, see Configuring and Administering Distribution Services, page 1351.

Creating Subscriptions
This section provides detailed instructions for subscribing to a report or
document.

You can create subscriptions in the following ways:

• You can subscribe to an individual report or document from the Report Editor or Document Editor in Developer or through MicroStrategy Web (see To Subscribe to a Report or Document in Developer, page 1338 or To Create a Subscription in MicroStrategy Web, page 1338).

Use this method to create WebDAV folder update subscriptions.

A History List message is generated when a report or document is executed in Web by a schedule.

• If you have a Distribution Services license, you can subscribe multiple users to an individual report or document through MicroStrategy Web (see To Create a Subscription in MicroStrategy Web, page 1338).

• You can create multiple cache, History List, Intelligent Cube, or Mobile subscriptions at one time for a user or user group using the Subscription Wizard in Developer (see To Create Multiple Subscriptions at One Time in Developer, page 1338).

• If you have purchased a license for Command Manager, you can use Command Manager scripts to create and manage your schedules and subscriptions. For instructions on creating these scripts with Command Manager, see Chapter 15, Automating Administrative Tasks with Command Manager, or see the Command Manager Help. (From within Command Manager, select Help.)

To create any subscriptions, you must have the Schedule Request privilege.

To create email, file, FTP, or print subscriptions, you must have a MicroStrategy Web license, a Distribution Services license, and the appropriate privileges in the Distribution Services privilege group. For example, to create an email subscription you must have the Use Distribution Services and Subscribe to Email privileges.

To create an alert-based subscription, you must also have the Web Create
Alert privilege (under the Web Analyst privilege group).

To create mobile subscriptions, you must have a MicroStrategy Mobile license.

To subscribe other users to a report or document, you must have the Web
Subscribe Others privilege (under the Web Professional group). In addition, to
subscribe others in Developer, you must have the Administer Subscriptions,
Configure Subscription Settings, and Monitor Subscriptions privileges (under
the Administration group).

To subscribe a dynamic address list to a report or document, you must have the
Subscribe Dynamic Address List privilege. For information about dynamic
address lists, see Using a Report to Specify the Recipients of a Subscription,
page 1340.

To Subscribe to a Report or Document in Developer

Only History List, cache, Intelligent Cube, and Mobile subscriptions can be created in Developer.

1. In Developer, select the report, document, Intelligent Cube, or WebDAV folder to be delivered according to a schedule.

2. From the File menu, point to Schedule Delivery To, and select the
type of subscription to create. For a list of the types of subscriptions,
see Types of Subscriptions, page 1334.

3. Type a name and description for the subscription.

4. From the Schedule drop-down list, select a schedule for the subscription.

5. Click OK.

To Create Multiple Subscriptions at One Time in Developer

Only History List, cache, Intelligent Cube, and Mobile subscriptions can be
created in Developer.

1. In Developer, from the Administration menu, point to Scheduling, and then select Subscription Creation Wizard.

2. Step through the Wizard, specifying a schedule and type for the
subscriptions, and the reports and documents that are subscribed to.

3. Click Finish.

To Create a Subscription in MicroStrategy Web

1. In MicroStrategy Web, on the reports page, under the name of the report/document that you want to create a subscription for, click the Subscriptions icon.

This icon becomes visible when you point to the name of the report or document.

2. Select Add Subscription for the type of subscription you want to create. For a list of the types of subscriptions, see Types of Subscriptions, page 1334.

3. Type a name and description for the subscription.

4. From the Schedule drop-down list, select a schedule for the subscription.

5. To add additional users to the subscription, click To. Select the users or groups and click OK.

6. Click OK.

Prompted Reports and Subscriptions


A subscribed report can contain prompts. How and whether the report is
executed depends on the prompt definition. For additional information about
how prompts are defined, see the Prompts section in the Advanced
Reporting Help.

To ensure that a prompted report in a subscription is executed properly, the prompt must be required and must have either a default answer or a personalized answer. The following rules explain how Intelligence Server resolves the different possible scenarios that can occur for each prompt in a subscribed report.

• Prompt not required, no default or personal answer: The prompt is ignored because it is not required; the report is executed, but it is not filtered by the prompt.

• Prompt not required, default or personal answer present: The prompt and its default or personal answer are ignored because the prompt is not required; the report is executed, but it is not filtered by the prompt.

• Prompt required, no default or personal answer: The report is not executed. No answer is provided to the required prompt, so MicroStrategy cannot complete the report without user interaction.

• Prompt required, default or personal answer present: The report is executed. The prompt is answered with a personal answer if one is available, or with the default answer if a personal answer is not provided.
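These resolution rules reduce to a small decision function. The sketch below encodes them directly; the returned labels are shorthand for this example only, not product terminology.

```python
# Sketch of the prompt-resolution rules described above. The string labels
# returned here are shorthand for this example only.

def resolve_prompt(required: bool, answer_present: bool) -> str:
    if not required:
        # Optional prompts are ignored either way; the report runs unfiltered.
        return "execute-unfiltered"
    if answer_present:
        # A personal answer is used if available, otherwise the default answer.
        return "execute-with-answer"
    # Required prompt with no answer: the subscription cannot run.
    return "not-executed"

outcome = resolve_prompt(required=True, answer_present=False)
# outcome == "not-executed"
```

Only the required-with-answer case actually filters the report by the prompt; the two not-required cases execute unfiltered.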

Using a Report to Specify the Recipients of a Subscription


If you have a Distribution Services license, you can use a report to
dynamically specify the recipients for a subscription.

To create a dynamic recipient list, you first create a special source report
that contains all the necessary information about the recipients of the
subscription. You then use the source report to define the dynamic list in
MicroStrategy Web. The new dynamic recipient list appears in the list of
Available Recipients when defining a new subscription to a standard report
or document. When the subscription is executed, only the addresses
returned by the source report are included in the delivery.

The information in the source report includes email addresses, user IDs, and
chosen devices to which to deliver standard MicroStrategy reports and
documents. Each address in the source report must be linked to a
MicroStrategy user. Any security filters and access control lists (ACLs) that
are applied to the address's linked user are also applied to any reports and
documents that are sent to the address.

If you have existing Narrowcast Server subscriptions, this feature contains an option in the Select Reports dialog box that allows you to use Narrowcast Server source reports. Narrowcast Server source reports contained subscription information in the page-by elements. When you create a source report to support a dynamic recipient list, you can designate the page-by elements as the location where the system should locate subscription information, thus enabling you to reuse your existing Narrowcast Server source reports. Steps to choose this option when creating a dynamic recipient list are in the MicroStrategy Web Help.

The procedure below describes how to create a source report that provides
the physical addresses, linked MicroStrategy user IDs, and device type
information necessary to create a dynamic recipient list. For steps to create
a dynamic recipient list using this source report, see the MicroStrategy Web
Help.

You must have a Distribution Services license.

To create a dynamic recipient list, you must have the Create Dynamic Address
List privilege.

To subscribe a dynamic address list to a report or document, you must have the
Subscribe Dynamic Address List privilege.

To Create a Source Report to Support a Dynamic Recipient List

1. In MicroStrategy Web, create a grid report containing at least three columns. The columns correspond with each of the three required subscription properties:

• Physical address. For example, this might be provided by a customer email attribute form of the Customer attribute.

• A MicroStrategy user ID to be linked to the address. For example, this might be provided by a customer ID attribute form of the Customer attribute.

• Device. This attribute form uses a 32-character hexadecimal string. For example, this may be provided by a preferred format/device attribute form of the Customer attribute.

The data type for the user ID and device columns must be VARCHAR, not
CHAR.

2. Save the report with a name and description that makes the report's
purpose as a source report for a dynamic recipient list clear.

3. You can now use this source report to create a new dynamic recipient
list in MicroStrategy Web. For steps to create a dynamic recipient list
using this source report, see the MicroStrategy Web Help.
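Before wiring a source report into a dynamic recipient list, it can help to sanity-check its three columns, particularly that the device column really is a 32-character hexadecimal string. The validator below is a hypothetical sketch; the field names and sample values are made up.

```python
import re

# Sketch: validate rows of a dynamic-recipient source report. Each row must
# carry a physical address, a linked user ID, and a device ID that is a
# 32-character hexadecimal string. Field names and values are illustrative.

HEX32 = re.compile(r"^[0-9A-Fa-f]{32}$")

def valid_row(address: str, user_id: str, device_id: str) -> bool:
    """True if all three required subscription properties look plausible."""
    return bool(address.strip()) and bool(user_id.strip()) \
        and HEX32.match(device_id) is not None

ok = valid_row("jane@example.com", "C1001",
               "0123456789ABCDEF0123456789ABCDEF")
# ok == True
```

A check like this could run against an export of the grid report before the dynamic recipient list is defined, catching CHAR-padded or malformed device IDs early.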

Personalizing Email and File Subscriptions


You can personalize your email and file subscriptions with macros in the File
Name, Subject, Message, ZIP File Name, and Subfolder fields. These
macros are automatically replaced with the appropriate text when the
subscription is delivered.

For example, you create an email subscription to a report named Daily Revenue. You want the subject of the email to include the name of the report. In the Subscription Editor, in the Subject field, you type Subscription To {&ContentName}. When the report is delivered, the subject of the email is Subscription to Daily Revenue. Later, the report is changed to include profit, and the name of the report is changed to Daily Revenue and Profit. The subscription is now delivered with the subject Subscription to Daily Revenue and Profit, without any change to the subscription.

You can also use macros to personalize the delivery location and backup
delivery location for a file device. For details, including a list of the macros
available for file devices, see Creating and Managing Devices, page 1363.


The following table lists the macros that can be used in email and file
subscriptions, and the fields in which they can be used:

• {&Date}: Date the subscription is sent. Fields: Subject, File Name

• {&Time}: Time the subscription is sent. Fields: Subject, File Name

• {&RecipientName}: Name of the recipient. Fields: Subject, File Name

• {&UserLogin}: User login. Fields: all fields

• {&Subscription}: Name of the subscription. Fields: all fields

• {&Project}: Project that contains the subscribed report/document. Fields: all fields

• {&PromptNumber&} (where Number is the number of the prompt): Name of a prompt in the subscribed report/document. Fields: all fields

• {&ContentName}: Name of the subscribed report/document. Fields: all fields

• {&ContentDetails}: Report or document details for the subscribed report/document. Fields: Subject, Message

• {[Attribute Name]@[Attribute Form]}: Name of the attribute used for bursting (file subscriptions). Fields: File Name

• {[Attribute Name]@[Attribute Form]}: Name of the attribute used for creating subfolders when bursting (file subscriptions). Fields: Sub-folder (bursting)
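Mechanically, the substitution behaves like straightforward token replacement. The sketch below mimics it for the simple {&...} macros listed above; it is a simplified imitation, and the real engine also handles prompt and attribute macros, which are omitted here.

```python
# Sketch: replace subscription macros such as {&ContentName} and {&Date}
# with their values. A simplified imitation of the macro expansion
# described above, covering only simple {&...} tokens.

def expand_macros(template: str, values: dict[str, str]) -> str:
    out = template
    for name, value in values.items():
        out = out.replace("{&" + name + "}", value)
    return out

subject = expand_macros("Subscription To {&ContentName}",
                        {"ContentName": "Daily Revenue",
                         "Date": "2024-09-01"})
# subject == "Subscription To Daily Revenue"
```

Unknown macros are simply left in place by this sketch, which mirrors the fact that the template text itself is never edited when the report is renamed.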

Delivering Parts of Reports Across Multiple Files: Bursting File Subscriptions

Large MicroStrategy reports and documents are often broken up into separate pages by attributes. In a similar way, with Distribution Services, you can split up, or burst, a report or document into multiple files. When the subscription is executed, a separate file is created for each element of each attribute selected for bursting. Each file has a portion of data according to the attributes used to group data in the report (page-by axis) or document (group-by axis).

For example, you may have a report with information for all regions. You
could place Region in the page-by axis and burst the file subscription into
the separate regions. This creates one report file for each region.

As a second example, if you choose to burst your report using the Region
and Category attributes, a separate file is created for each combination of
Region and Category, such as Central and Books as a report, Central and
Electronics as another, and so on.

When creating the subscription for PDF, Excel, plain text, and CSV file
formats, you can use macros to ensure that each file has a unique name. For
example, if you choose to burst your document using the Region and
Category attributes, you can provide {[Region]@[DESC]},
{[Category]@[DESC]} as the file name. When the subscription is
executed, each file name begins with the names of the attribute elements
used to generate the file, such as Central, Books or Central, Electronics.
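The combination logic described above can be sketched in a few lines of Python. This is an illustration only, not Distribution Services code: the Region and Category element lists are hypothetical, and only the {[Attribute]@[Form]} macro syntax and the example file names come from this section.

```python
from itertools import product
import re

# Hypothetical element lists for the bursting attributes in the example.
elements = {
    ("Region", "DESC"): ["Central", "South"],
    ("Category", "DESC"): ["Books", "Electronics"],
}

def expand(template, combo):
    """Replace each {[Attribute]@[Form]} macro with its element value."""
    return re.sub(
        r"\{\[(.+?)\]@\[(.+?)\]\}",
        lambda m: combo[(m.group(1), m.group(2))],
        template,
    )

# File-name template from the example in the text.
template = "{[Region]@[DESC]}, {[Category]@[DESC]}"

# One file is generated per combination of attribute elements.
keys = list(elements)
for values in product(*elements.values()):
    print(expand(template, dict(zip(keys, values))))
# Central, Books
# Central, Electronics
# South, Books
# South, Electronics
```

Because every element combination produces a distinct macro expansion, each burst file gets a unique name without any manual naming.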

You must execute a prompted document to make it available for bursting.

To Burst a File Subscription Across Multiple Files

1. Create a file subscription in MicroStrategy Web by following the steps
   in To Create a Subscription in MicroStrategy Web, page 1338, or edit
   an existing file subscription in MicroStrategy Web.

2. In the Subscription Editor, click Burst… The Select Bursting Criteria
   options are displayed. All attributes used to group data in the report or
   document are shown in the Available Attributes list.


3. From the Available Attributes list, select the attributes to use to break
up the data, then click the right arrow to move those attributes to the
Selected Attributes list.

4. To change the order of attributes for bursting, select an attribute in the
   Selected Attributes list, then click the up or down arrow.

5. In the File Name field, type a name for the burst files. You can use
macros to ensure that each file has a unique name.

6. Click OK.

Delivering Parts of Reports Across Multiple Files: Bursting File Subscriptions to Subfolders
Large MicroStrategy reports and documents can be divided into separate
pages by attributes. In a similar way, with Distribution Services, you can
break, or burst, a report or document into multiple subfolders, with each
subfolder containing a report or document that holds a portion of the data,
divided by the attributes in the report's page-by or the document's group-by
axis. When the subscription is executed, subfolders named for the attribute
elements are created dynamically if they do not already exist. To do this,
you provide macro text as part of the bursting subfolder name when creating
the file subscription. Each attribute in the macro uses the syntax
{[Attribute Name]@[Attribute Form]}.

For example, if your report has Manager in the page-by axis, you may burst
the report into subfolders using the Manager's last name. In this case, you
provide macro text {[Manager]@[Last Name]} as the bursting subfolder
name.

You can create multiple levels of subfolders if your report or document is
grouped by multiple attributes. As a second example, you could have
Manager folders with Category subfolders in each. This macro text may be
entered in the subfolder name as {[Manager]@[Last Name]}-
{[Manager]@[First Name]}\{[Category]@[DESC]}. The result of this
bursting example is shown in the image below. One of the subscribed
reports, with books data, is in the Books subfolder in the manager's
subfolder named Abram-Crisby.

In the example above, the Reports\FileDev1 path was defined as part of the
file device used for the subscription. The file name has the date and time
appended to the report name because the file device definition has the
Append timestamp to file name check box selected.
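The subfolder example above can be sketched the same way: expand the macro for one attribute-element combination and join the result to the file device's delivery path. The base path and element values are taken from the example; the expansion code itself is a simplified sketch, not the actual Distribution Services implementation.

```python
import re
from pathlib import PureWindowsPath

def expand(template, values):
    """Replace each {[Attribute]@[Form]} macro with its element value."""
    return re.sub(
        r"\{\[(.+?)\]@\[(.+?)\]\}",
        lambda m: values[(m.group(1), m.group(2))],
        template,
    )

# Subfolder macro and one element combination from the example above.
subfolder = r"{[Manager]@[Last Name]}-{[Manager]@[First Name]}\{[Category]@[DESC]}"
element = {
    ("Manager", "Last Name"): "Abram",
    ("Manager", "First Name"): "Crisby",
    ("Category", "DESC"): "Books",
}

# Delivery location defined by the file device, joined with the
# dynamically created subfolders.
path = PureWindowsPath(r"Reports\FileDev1") / expand(subfolder, element)
print(path)  # Reports\FileDev1\Abram-Crisby\Books
```

At delivery time, each distinct expansion yields its own subfolder path, and folders are created only when they do not already exist.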

To Burst a File Subscription Across Multiple Subfolders

1. Create a file subscription in MicroStrategy Web by following the steps
   in To Create a Subscription in MicroStrategy Web, page 1338, or edit an
   existing file subscription in MicroStrategy Web.

2. In the Subscription Editor, click Burst… The Select Bursting Criteria
   options are displayed. All attributes used to group data in the report or
   document are shown in the Available Attributes list.

3. From the Available Attributes list, select any attribute to use to create
the subfolders, then click the right arrow to move the attribute to the
Selected Attributes list. The Sub-folder field displays below or to the
right of the File Name field.

4. To change the order of attributes for bursting, select an attribute in the
   Selected Attributes list, then click the up or down arrow.

5. In the File Name field, type a name for the files to be created. You can
use macros to ensure that each file has a unique name.

6. In the Sub-folder field (the one below or to the right of the File Name
field), type a macro to dynamically create the subfolders.

7. Click OK.

Managing Subscriptions
This section contains the following information:


• Tasks for Managing Subscriptions, page 1347

• Administering Subscriptions, page 1349

• Result Caches and Subscriptions, page 1350

• Subscriptions in a Clustered System, page 1350

Tasks for Managing Subscriptions


The table below lists common subscription and delivery-related tasks that
users or administrators can perform, and where to perform those tasks, in
both MicroStrategy Web and Developer. Note that some tasks can be
performed only in MicroStrategy Web.

The steps in the table take you to the main interface to complete the task.
For detailed steps, click Help once you are in the main interface.

User task: Define preferences for a report/document to be delivered to a user's History List folder, mobile device, or system cache.

• In MicroStrategy Web: In a report or document, from the Report Home or Document Home menu, select Subscribe to, then select History List or Mobile.
• In Developer: In a report or document, from the File menu, select Schedule delivery to, then select History List, Update cache, or Mobile.

User task: Define preferences for a report/document to be delivered to an email address, network storage location, FTP location, or printer (Distribution Services only).

• In MicroStrategy Web: In a report or document, from the Report Home or Document Home menu, select Subscribe to, then select Email, File, Printer, or FTP.

User task: Define personal subscription preferences for all reports or documents, in one location.

• In MicroStrategy Web: Click Preferences on the left of any page. For History List delivery, select Project Defaults on the left, then select History List. For Email, File, Printer, or FTP delivery, select User Preferences on the left, then select Email Addresses, File Locations, Printer Location, or FTP Locations.
• In Developer: From the Tools menu, select My Subscriptions.

User task: Set up alert-based subscriptions.

• In MicroStrategy Web: Add an alert to a report. To do this, run a report, right-click a metric on the report, and select Alerts. In the Alerts Editor, after you set up the alert, set up the subscription by selecting Delivery Settings.

User task: Schedule a report/document to be sent to another user.

• In MicroStrategy Web, if you own Distribution Services: In a report or document, from the Report Home or Document Home menu, select Subscribe to, then select History List, Mobile, Email, File, Printer, or FTP.
• In MicroStrategy Web, if you do not own Distribution Services: In a report or document, from the Report Home or Document Home menu, select Add to History List or Add to Mobile.
• In Developer: In a report or document, from the File menu, select Schedule delivery to, then select History List, Update cache, or Mobile.

User task: Unsubscribe from a report or document.

• In MicroStrategy Web: Click My Subscriptions on the left of any page. In the Unsubscribe column on the right, select a check box and click Unsubscribe.
• In Developer: From the Tools menu, select My Subscriptions. Right-click a subscription, then select Unsubscribe.

User task: Change subscription details for a report or document.

• In MicroStrategy Web: Click My Subscriptions on the left of any page. In the Action column, click the Edit icon for the report/document whose subscription you want to edit.
• In Developer: From the Tools menu, select My Subscriptions. Right-click a subscription, then select Edit.

User task: Configure who can subscribe to each report.

• In MicroStrategy Web: From a report, from the Tools menu, select Report Options, then select the Delivery tab. Choose to allow all users, specified users, or no users to subscribe to the report. For steps, see the help.

User task: Configure who can subscribe to each document.

• In MicroStrategy Web: From a document in Design mode, from the Tools menu, select Document Options. On the left, under Document Properties, click Delivery, then choose to allow all users, specified users, or no users to subscribe to the document. For steps, see the help.

Administering Subscriptions
You can create, remove, or modify subscriptions through the Subscription
Manager.

You can set the maximum number of subscriptions of each type that each
user can have for each project. This can prevent excessive load on the
system when subscriptions are executed. By default, there is no limit to the
number of subscriptions. You set these limits in the Project Configuration
Editor, in the Governing Rules: Default: Subscriptions category.

To Manage Your Subscriptions in the Subscription Manager

1. In Developer, log in to a project source.

2. Expand Administration, then expand Configuration Managers, and
   then select Subscriptions.

3. To manage your subscriptions, select from the tasks below:

• To create a subscription, right-click in the Subscription Manager and
  select Subscription Creation Wizard. Follow the instructions in
  Creating Subscriptions, page 1336.


• To modify a subscription, right-click the subscription and select Edit.
  Make any changes and click OK.

• To delete a subscription, right-click the subscription and select
  Delete.

• To filter the subscriptions that are listed, right-click in the
  Subscription Manager and select Filter. Specify the filtering criteria
  and click OK.

Result Caches and Subscriptions


By default, if a cache is present for a subscribed report or document, the
report or document uses the cache instead of re-executing the report or
document. If no cache is present, one is created when the report or
document is executed. For information about how result (report or
document) caches are used in MicroStrategy, see Result Caches, page
1203.

When you create a subscription, you can force the report or document to re-
execute against the warehouse even if a cache is present, by selecting the
Re-run against the warehouse check box in the Subscription Wizard. You
can also prevent the subscription from creating a new cache by selecting the
Do not create or update matching caches check box.

You can change the default values for these check boxes in the Project
Configuration Editor, in the Caching: Subscription Execution category.
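The interaction between the result cache and these two check boxes can be summarized as a small decision function. This is a paraphrase of the behavior described above, not Intelligence Server code; the function and parameter names are invented for illustration.

```python
def resolve_subscription(cache, rerun_against_warehouse, no_cache_update, execute):
    """Decide whether a subscription is served from cache or re-executed.

    cache: previously stored result, or None if no cache exists.
    rerun_against_warehouse: 'Re-run against the warehouse' check box.
    no_cache_update: 'Do not create or update matching caches' check box.
    execute: callable that runs the report/document against the warehouse.
    Returns (delivered result, cache state after execution).
    """
    if cache is not None and not rerun_against_warehouse:
        return cache, cache            # serve the existing cache unchanged
    result = execute()                 # no usable cache: hit the warehouse
    new_cache = cache if no_cache_update else result
    return result, new_cache

# Default behavior (both check boxes cleared): an existing cache is
# reused instead of re-executing the report or document.
result, cache = resolve_subscription("cached", False, False, lambda: "fresh")
print(result)  # cached
```

With both check boxes selected, the report re-executes but the stale cache is left untouched; with neither selected, execution only happens when no cache exists.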

Subscriptions in a Clustered System


When you set up several Intelligence Server machines in a cluster, you can
distribute projects across those clustered machines (or nodes) in any
configuration. Each node can host a different subset of projects. For more
information, including instructions, on clustering Intelligence Servers, see
Chapter 9, Cluster Multiple MicroStrategy Servers.


Subscriptions in a clustered system are load-balanced across all nodes of
the cluster that host the project containing the subscribed report or
document. Subscriptions are load-balanced by the number of subscription
jobs created. One subscription job is created for each user or user group in
the subscription. For example, if User A and User Group B are subscribed to
a dashboard, the subscription creates one job for User A, and a second job
for User Group B. In a two-node cluster, the subscription for User A would
execute on one node, and the subscription for User Group B would execute
on the other node.
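The job-based balancing described above can be sketched as follows: one job per subscribed user or user group, spread across the cluster nodes that host the project. The round-robin assignment below is an illustrative simplification; the actual balancing policy Intelligence Server applies is not specified here.

```python
from itertools import cycle

def assign_jobs(recipients, nodes):
    """One subscription job per user/user group, spread across nodes."""
    assignment = {}
    node_cycle = cycle(nodes)
    for recipient in recipients:      # each recipient yields one job
        assignment[recipient] = next(node_cycle)
    return assignment

# Two-node cluster, dashboard subscribed to by a user and a user group.
jobs = assign_jobs(["User A", "User Group B"], ["node1", "node2"])
print(jobs)  # {'User A': 'node1', 'User Group B': 'node2'}
```

The key point the sketch captures is that the unit of distribution is the per-recipient job, not the subscription as a whole.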

Configuring and Administering Distribution Services


MicroStrategy Distribution Services provides high-volume and high-
efficiency distribution of reports, documents, and dashboards to email
addresses, file servers, networked printers, FTP locations, and devices such
as mobile phones. Distribution Services also supports various MicroStrategy
Mobile-related features.

Distribution Services functionality is set up and enabled by an administrator
in Developer, and is used by all types of users through subscribing to
deliveries in MicroStrategy Web. Administrators can also subscribe one or
more users to a delivery.

This section explains the Distribution Services functionality and steps to set
it up in your MicroStrategy system.

For details about statistics logging for email, file, print, and FTP deliveries,
see Statistics on Subscriptions and Deliveries, page 2952.


Best Practices for Using Distribution Services


MicroStrategy recommends the following best practices when scheduling
Distribution Services subscriptions, in addition to the best practices given
above:


• For best results, follow the steps listed in High-Level Checklist to Set Up a
  Report Delivery System, page 1353.

• PDF, plain text, and CSV file formats generally offer the fastest delivery
  performance. Performance can vary depending on your hardware,
  operating system, network connectivity, and so on.

• The performance of the print delivery method depends on the speed of the
  printer.

• When sending very large reports or documents:

  • Enable the zipping feature for the subscription so that files are smaller.

  • Use bulk export instead of the CSV file format. Details on bulk exporting
    are in the Reports section of the Advanced Reporting Help.

  • Schedule subscription deliveries to occur when your Intelligence Server
    is experiencing low traffic.

• If your organization is processing a smaller number of subscriptions, such
  as 100 or fewer, better performance may be achieved by sending each
  subscription to the largest number of recipients possible. This can be
  achieved by designing reports or documents that answer business
  questions for the widest variety of analysts, and by adding prompts to the
  report or document. For an introduction to creating and adding prompts to
  a report, see the Basic Reporting Help. For information about how prompts
  are answered in subscribed reports, see Creating Subscriptions, page 1336.

• If your organization is processing many subscriptions, such as 1,000 or
  more, better performance may be achieved by sending the largest number
  of subscriptions possible to the fewest recipients. For example, it may be
  possible to send all of a team's subscriptions to a project manager, who
  can then present and distribute the subscribed-to reports in team
  meetings. If you are processing many subscriptions, also consider using
  the bulk export feature. Details on bulk exporting are in the Reports
  section of the Advanced Reporting Help.

• When creating contacts, make sure that each contact has at least one
  address for each delivery type. Otherwise, the contact does not appear in
  the list of contacts for subscriptions of a delivery type for which the
  contact has no address. For example, if a contact does not have an email
  address, that contact does not appear in the list of contacts when an email
  subscription is being created.

• When selecting reports to be subscribed to, make sure none of the reports
  have prompts that require an answer but have no default answer. If a
  report has such a prompt, the subscription cannot run the report
  successfully, and the subscription is automatically removed from the
  system.

• Use macros to dynamically specify the delivery location and backup
  delivery location for a file device (see Creating and Managing Devices,
  page 1363).

• The maximum file size of dashboard (.mstr) files that can be sent through
  Distribution Services is defined by the MicroStrategy (.mstr) file size
  (MB) setting. To access the setting, in MicroStrategy Developer, right-click
  the project and select Project Configuration… Then, in the Project
  Configuration dialog box, choose Project Definition > Governing Rules
  > Default > Result sets. The maximum .mstr file size is 2047 MB.

High-Level Checklist to Set Up a Report Delivery System


The following high-level checklist describes what you need to do to set up a
report delivery system in MicroStrategy using Distribution Services.

Understand your users' requirements for subscribing to reports and where they
want them delivered.

Have administrator privileges.


Have a license to use Distribution Services.

If you use MicroStrategy Narrowcast Services, during the upgrade to
MicroStrategy Distribution Services be sure to use the Migrate Subscriptions
for Web Deliveries wizard. This wizard is available from the MicroStrategy
Developer Tools menu. For details on each option in the wizard, click Help.

For complete steps to perform a MicroStrategy upgrade, see the Upgrade
Help.

1. Modify existing transmitters or create new transmitters according to
   your requirements. Distribution Services comes with default email, file,
   print, mobile, and FTP transmitters, but if you use these you should
   modify their settings to suit your environment.

   • For best practices for working with transmitters, see Creating and
     Managing Transmitters, page 1355.

   • For steps to modify a transmitter, see Creating and Managing
     Transmitters, page 1355.

   • For steps to create a new transmitter, see Creating and Managing
     Transmitters, page 1355.

2. Modify existing devices or create new devices according to your
   requirements. Distribution Services comes with default devices, but if
   you use these you should modify their settings to suit the systems in
   your environment.

   • For best practices for working with devices, see Creating and
     Managing Devices, page 1363.

   • For steps to modify a device, see Creating and Managing Devices,
     page 1363.

   • For steps to create a new device, see Creating and Managing
     Devices, page 1363.

3. Create contacts so users can subscribe to reports and documents.


Creating and Managing Transmitters


A transmitter is a MicroStrategy software component that Distribution
Services uses to package subscribed reports and documents into files or
emails, and send those files or emails to recipients.

Distribution Services comes with multiple types of transmitters: email
(SMTP), file, print, FTP, and mobile. For example, a file transmitter
packages and delivers reports in the form of files (PDF, HTML, MS Excel,
plain text, and CSV formats) to file storage locations on network computers.
A print transmitter sends reports to network printers for printing.

When a user subscribes to a MicroStrategy report, the report is sent to the
appropriate transmitter for packaging and delivery. For example, if the
report is to be delivered to a file location on a network computer, the report
is sent to a file transmitter for conversion to the appropriate file format for
delivery. Similarly, if the report is to be delivered in the form of an email to a
user's email address, the report is sent to an email transmitter for
appropriate packaging and delivery.

A transmitter uses the settings specified in devices to determine how reports
are packaged and delivered to the required delivery location. For example,
some devices may indicate that reports should be packaged using MIME
encoding, but others might specify UUEncoding. For information on devices
and their settings, see Creating and Managing Devices, page 1363.

Notification for transmission failures can be configured for email
transmitters as described below. Notification for file and print transmission
failures can be configured at the project level, using the Project
Configuration Editor.

You create and configure transmitters using the Transmitter Editor.

Recommended Maintenance Tasks for Transmitters


Periodically verify all email addresses where delivery success or failure
notification emails are being sent. You can see these email addresses in the
Transmitter Editor, on the Notification tab.

Best Practices for Working with Transmitters


• Configure a device to use each type of transmitter and test a delivery
  using the devices to make sure the transmitters are effective and the
  devices are working.

• You can easily test an email transmitter by using the Save to File check
  box on the Email Transmitter Editor's Message Output tab.

• To quickly create a new transmitter, duplicate an existing transmitter (such
  as an out-of-the-box transmitter provided by MicroStrategy), and then
  change its settings as required.

Viewing and Modifying a Transmitter and Accessing the Transmitter Editor

Using the Transmitter Editor, you can view and modify the definition of a
transmitter, rename the transmitter, duplicate the transmitter, and so on.

To View a Transmitter or Modify its Settings

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.

2. In the Transmitter List area on the right, right-click the transmitter that
you want to view or change settings for.

3. Select Edit.

4. Change the transmitter settings as desired.

5. Click OK.

Creating a Transmitter
In Developer, you can create the following types of transmitters:


• Email: An email transmitter transforms a subscribed report or document,
  attaches it to an email, and sends the email to the inbox of the recipient.

• File: A file transmitter transforms a subscribed report or document into a
  file (PDF, HTML, Microsoft Excel, plain text, or CSV format) and sends the
  file to a file storage location such as a folder on a network computer.

• Print: A print transmitter sends the subscribed report or document to a
  network printer.

• FTP: An FTP transmitter sends the subscribed report or document to an
  FTP server.

• Mobile: An iPad or iPhone transmitter sends the subscribed report or
  document to a user's iPad or iPhone.

When a user subscribes to a MicroStrategy report, the report is sent to the
appropriate transmitter for packaging and delivery. For example, if the
report is to be delivered to a file location on a computer, the report is sent to
a file transmitter for packaging and delivery. Similarly, if the report is to be
delivered in the form of an email to an email recipient, the report is sent to
an email transmitter for packaging and delivery.

You create new transmitters when you need a specific combination of
properties and settings for a file, email, print, FTP, or mobile transmitter to
package files.

A quick way to create a new transmitter is to duplicate an existing
transmitter and then edit its settings to meet the needs for the new
transmitter. This is a time-saving method if a similar transmitter already
exists, or you want to duplicate the default MicroStrategy transmitter. To
duplicate a transmitter, right-click the transmitter that you want to duplicate
and select Duplicate.

You create and configure transmitters using the Transmitter Editor.


Creating an Email Transmitter

An email transmitter creates an email and transforms the subscribed report
or document into an attachment to the email, then sends the email to the
inbox of the recipients who subscribed to the file.

Once an email transmitter is created, you can create email devices that are
based on that transmitter. When you create a device, the transmitter
appears in the list of existing transmitters in the Select Device Type dialog
box. The settings you specified above for the email transmitter apply to all
email devices that will be based on the transmitter.

To Create an Email Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.

2. Right-click in the Transmitter List area on the right, select New, and
select Transmitter.

3. Select Email and click OK.

4. Change the transmitter settings as desired.

5. Click OK.

Creating a File Transmitter

A file transmitter transforms a subscribed report or document into a file
format that the user chooses while subscribing to the report or document.
The file transmitter then sends the file to a file storage location on a network
computer.

Once a file transmitter is created, you can create file devices that are based
on this transmitter. When you create a device, the transmitter appears in the
list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the file transmitter apply to all file devices
that will be based on the transmitter.


For information on creating a file device, see Creating and Managing
Devices, page 1363.

To Create a File Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.

2. Right-click in the Transmitter List area on the right, select New, then
select Transmitter.

3. Select File and click OK.

4. Change the transmitter settings as desired.

5. Click OK.

Creating a Print Transmitter

A print transmitter sends the subscribed report or document to a network
printer.

Once a print transmitter is created, you can create print devices that are
based on the transmitter. When you create a device, the transmitter appears
in the list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the print transmitter apply to all print
devices that are based on the transmitter.

For information on creating a print device, see Creating and Managing
Devices, page 1363.

To Create a Print Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.


2. Right-click in the Transmitter List area on the right, select New, and
select Transmitter.

3. Select Print and click OK.

4. Change the transmitter settings as desired.

5. Click OK.

Creating an FTP Transmitter

An FTP transmitter transforms a subscribed report or document into a file
format that the user chooses while subscribing to the report or document.
The FTP transmitter then sends the file to a location on an FTP server.

Once an FTP transmitter is created, you can create FTP devices that are
based on the transmitter. When you create a device, the transmitter appears
in the list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the FTP transmitter apply to all FTP devices
that will be based on the transmitter.

For information on creating an FTP device, see Creating and Managing
Devices, page 1363.

To Create an FTP Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.

2. Right-click in the Transmitter List area on the right, select New, then
select Transmitter.

3. Select FTP and click OK.

4. Change the transmitter settings as desired.

5. Click OK.


Creating an iPad Transmitter

An iPad transmitter is used to push real-time updates of reports or
documents to a user's iPad. The transmitter transforms the subscribed
report or document into a form that can be displayed on the iPad, then it
sends the report or document to the subscriber's iPad.

After an iPad subscription transmitter is created, you can create iPad
delivery devices that are based on the transmitter. When you create a
device, the transmitter appears in the list of existing transmitters in the
Select Device Type dialog box.

For information on creating an iPad device, see Creating and Managing
Devices, page 1363.

To Create an iPad Subscription Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.

2. Right-click in the Transmitter List area on the right, select New, and
then Transmitter.

3. Select iPad Push Notifications and click OK.

4. Specify a name and description for the transmitter. The description
   should include information about settings for this transmitter to help
   users distinguish it from other transmitters, so they know when to
   choose this transmitter when associating devices with it.

5. Click OK.

Creating an iPhone Transmitter

An iPhone transmitter is used to push real-time updates of reports or
documents to a user's iPhone. The transmitter transforms the subscribed
report or document into a form that can be displayed on the iPhone, then
sends the report or document to the subscriber's iPhone.


After an iPhone transmitter is created, you can create iPhone delivery
devices that are based on the transmitter. When you create a device, the
transmitter appears in the list of existing transmitters in the Select Device
Type dialog box.

For information on creating an iPhone device, see Creating and Managing
Devices, page 1363.

To Create an iPhone Subscription Transmitter

1. From the Developer Folder List, go to Administration > Delivery
   Managers > Transmitters.

2. Right-click in the Transmitter List area on the right, select New, and
then Transmitter.

3. Select iPhone Push Notifications and click OK.

4. Specify a name and description for the transmitter. The description
   should include information about settings for this transmitter to help
   users distinguish it from other transmitters, so they know when to
   choose this transmitter when associating devices with it.

5. Click OK.

Deleting a Transmitter
You can delete a transmitter if you no longer need to use it.

You cannot delete a transmitter if devices depend on the transmitter. You must
first delete any devices that depend on the transmitter.

To Delete a Transmitter

1. From the Developer Folder List, expand Administration, expand
   Delivery Managers, and select Transmitters.


2. In the Transmitter List area on the right, right-click the transmitter that
you want to delete.

3. Select Delete. The Confirm Delete Object message is displayed. See
   the prerequisite above to be sure you have properly prepared the
   system to allow the transmitter to be deleted.

4. Click Yes.

Creating and Managing Devices


A device specifies the format of a MicroStrategy report or document and the
transmission process to send the report or document to users who subscribe
to that report or document.

For example, if you want to send reports via email, and your recipients use
an email client such as Microsoft Outlook, you can create a Microsoft
Outlook email device that has settings appropriate for working with Outlook.
If you need to send reports to a file location on a computer on your network,
you can create a file device specifying the network file location. If you want
to send reports to a printer on your network, you can create a printer device
specifying the network printer location and printer properties.

In Developer, you can create the following types of devices:

l Email: An email device automatically sends a report or document in the form of an email to an email address. It can also send the report as an attachment to the email, in a file format that the user selects.

l File: A file device automatically sends a MicroStrategy report or document, in a file format that the user chooses when subscribing to the report, to a file delivery location on a computer on your network. Users can choose from the following file formats: CSV (comma-separated values), PDF, HTML, MS Excel, and plain text. When a user subscribes to a report or document, the file device sends the report or document to the specified location when the subscription requires it to be sent. You can specify your network file location and file properties for the file device to deliver the file to. For steps to create a file device, see Creating a File Device.

l Print: A print device automatically sends a report or document to a specified printer on your network. You can define the printer properties for the default print device or you can use the standard printer defaults. For steps to create a print device, see Creating a Print Device, page 1372.

l FTP: An FTP device automatically sends a MicroStrategy report or document, in a file format the user chooses, to a delivery location on an FTP server. Users can choose from the following file formats: CSV (comma-separated values), PDF, HTML, MS Excel, and plain text. Users subscribe to a report or document, which triggers the FTP device to send the report or document to the specified location when the subscription requires it to be sent. For steps to create an FTP device, see Creating an FTP Device.

l Mobile: iPad or iPhone devices automatically send a report or document to a user's iPad or iPhone. For steps to create a mobile device, see Creating an iPhone/iPad Device. These subscriptions are available if you have MicroStrategy Mobile or MicroStrategy Library. To support mobile subscriptions, see Upgrading the API for the Apple Push Notification Service to Support Mobile Subscriptions.

You create new devices when you need a specific combination of properties
and settings for a device to deliver files. You can create a new device in two
ways. You can either create a completely new device and enter all the
supporting information for the device manually, or you can duplicate an
existing device and edit the supporting information so it suits your new
device. You create and configure devices using the Device Editor.

Devices can be created in a direct connection (two-tier) mode, but print and file locations for those devices are not validated by the system. Print and file locations for devices created in server connection (three-tier) mode are automatically validated by MicroStrategy.


Recommended Maintenance Tasks for Devices


l Periodically verify all delivery locations to be sure they are active and
available.

l For file delivery locations, use the Device Editor's File: General tab and
File: Advanced Properties tab.

l For printer locations, use the Device Editor's Print: General tab and
Print: Advanced Properties tab.

l For FTP locations, use the Device Editor's FTP: General tab.

l Test a delivery using each device to make sure that the device settings are
still effective and any system changes that have occurred do not require
changes to any device settings.

l If you experience printing or delivery timeouts, use the Device Editor's File: Advanced Properties tab and Print: Advanced Properties tab to change timeout, retry, and other delivery settings.

Best Practices for Working with Devices


l You can allow users to select their own file delivery or print locations. Use
the Device Editor's File: General tab and Print: General tab to allow user-
defined file delivery and print locations. Any user-defined location
overrides the primary file delivery or print location specified in the File
Location or Printer Location field, which, in turn, overrides any backup
file delivery or print location specified in the File: Advanced Properties tab
or Print: Advanced Properties tab.

l If you have a new email client that you want to use with Distribution
Services functionality, create a new email device and apply settings
specific to your new email application. To create a new device quickly, use
the Duplicate option and then change the device settings so they suit your
new email application.


l If you rename a device or change any settings of a device, test the device
to make sure that the changes allow the device to deliver reports or
documents successfully for users.

Viewing and Modifying a Device and Accessing the Device Editors
Use the Device Editor to view and modify the definition of a device, rename the device, and so on.

To View a Device or Change its Settings

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. In the Device List area on the right, right-click the device that you want
to view or change settings for, and select Edit.

3. Change the device settings as desired.

4. Click OK.

To rename a device, right-click the device and select Rename. Type a new
name, and then press Enter. When you rename a device, the contacts and
subscriptions using the device are updated automatically.

Creating a File Device


A file device can automatically send a report or document in the form of a file
to a storage location such as a folder on a computer on your network. Users
subscribe to a report or document that triggers the file device to send the
subscribed report or document to the specified location when the
subscription requires it to be sent.

You create a new device when you need a specific combination of properties
and settings for a file device to deliver files.


You must specify the file properties and the network file location for the file
device to deliver files to. You can include properties for the delivered files
such as having the system set the file to Read-only, label it as Archive, and
so on.

A quick way to create a new file device is to duplicate an existing device and
then edit its settings to meet the needs for this new device. This is a time-
saving method if a similar device already exists, or you want to duplicate the
default MicroStrategy file device. To duplicate a device, right-click the
device that you want to duplicate and select Duplicate.

To Create a New File Device

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. Right-click in the Device List area on the right, select New, and then
Device.

3. Select File and click OK.

4. Change the device settings as desired.

5. Click OK.

Once the file device is created, it appears in the list of existing file devices
when you create an address (in this case, a path to a file storage location
such as a folder) for a MicroStrategy user or a contact. You select a file
device and assign it to the address you are creating. When a user
subscribes to a report to be delivered to this address, the report is delivered
to the file delivery location specified in that address, using the delivery
settings specified in the associated file device.

Default File Locations and Permissions

When a new device is created, the following default values are applied to the file. They can be accessed from the Device Editor: File window:


The ACL of a file is largely determined by its parent folder (and recursively up to the root drive), and is determined before delivery. The administrator is responsible for setting the ACL of the parent folder to meet specific security needs.

General Tab

l File Location: <MicroStrategy Installation Path>\FileSubscription

l File System Options: The Create required folders and Append timestamp to file name options are enabled.

Advanced Properties Tab

l Windows: The Read only checkbox will be enabled.

l Unix/Linux: Access rights for the file will be set to rw-r--r--
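The Unix access mode rw-r--r-- is octal 644 (owner read/write; group and others read-only). As an illustrative aside, not part of MicroStrategy itself, the same permissions can be reproduced and inspected with Python's standard library; the file name below is hypothetical:

```python
import os
import stat

# Hypothetical delivered file, used only to demonstrate the rw-r--r-- mode.
delivered = "report.pdf"

with open(delivered, "w") as f:
    f.write("placeholder contents")

# rw-r--r-- corresponds to octal 644.
os.chmod(delivered, 0o644)

mode = stat.S_IMODE(os.stat(delivered).st_mode)
print(oct(mode))                                   # 0o644
print(stat.filemode(os.stat(delivered).st_mode))   # -rw-r--r--
```

This mirrors what the file device does on Unix/Linux by default: delivered files are readable by everyone but writable only by the owning account.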

Personalizing File Locations

You can dynamically specify the File Location and Backup File Location
in a file device using macros. For example, if you specify the File Location
as C:\Reports\{&RecipientName}\, all subscriptions using that file
device are delivered to subfolders of C:\Reports\. Subscribed reports or
documents for each recipient are delivered to a subfolder with that
recipient's name, such as C:\Reports\Jane Smith\ or
C:\Reports\Hiro Protagonist\.

The table below lists the macros that can be used in the File Location and Backup File Location fields in a file device:

Description                                                            Macro

Date on which the subscription is sent                                 {&Date}

Time at which the subscription is sent                                 {&Time}

Name of the recipient                                                  {&RecipientName}

User ID (32-character GUID) of the recipient                           {&RecipientID}

Distribution Services address that the subscription is delivered to    {&AddressName}

File path that a dynamic recipient list subscription is delivered to   {&RecipientListAddress}

You can also have a subscription dynamically create subfolders according to attributes in a report's page-by axis or a document's group-by area and place the report or document there. For steps, see Creating Subscriptions, page 1336.
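The substitution behavior of these macros can be sketched in a few lines of Python. This is an illustrative model only, not MicroStrategy's implementation; the macro names come from the table above, while the recipient values and date/time formats are assumptions:

```python
from datetime import datetime

def expand_file_location(template: str, recipient_name: str,
                         recipient_id: str, address_name: str) -> str:
    """Replace Distribution Services-style macros in a file location template.

    Illustrative sketch only: models the documented substitution, not the
    product's actual code. Date/time formats here are arbitrary choices.
    """
    now = datetime.now()
    values = {
        "{&Date}": now.strftime("%Y-%m-%d"),
        "{&Time}": now.strftime("%H-%M-%S"),
        "{&RecipientName}": recipient_name,
        "{&RecipientID}": recipient_id,
        "{&AddressName}": address_name,
    }
    for macro, value in values.items():
        template = template.replace(macro, value)
    return template

# Hypothetical recipient: each subscriber gets a subfolder under C:\Reports\.
path = expand_file_location(r"C:\Reports\{&RecipientName}" + "\\",
                            "Jane Smith", "A" * 32, "Jane's file address")
print(path)  # C:\Reports\Jane Smith\
```

Run per subscription, this yields one delivery subfolder per recipient, which is the behavior described for {&RecipientName} above.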

Delivering Files from a UNIX Intelligence Server to a Windows File Location

If your Intelligence Server machine is using UNIX or Linux, you can configure your system to deliver files to locations on Windows machines.

This process uses Sharity software to resolve the Windows file location as a mount on the UNIX machine. Intelligence Server can then treat the Windows file location as though it were a UNIX file location.

You must have a license for MicroStrategy Distribution Services before you can
use file subscriptions.

Sharity must be installed on the Intelligence Server machine. For information about Sharity, see the Sharity website at: http://www.obdev.at/products/sharity/index.html.


To Set Up File Delivery from a UNIX Intelligence Server to a Windows Location

1. Make sure Sharity is configured on the Intelligence Server machine.

2. Create a new file device or edit an existing file device (see Creating a
File Device, page 1366).

3. In the File Device Editor, on the Cross-Platform Delivery with Sharity™ tab, select the Enable delivery from Intelligence Server running on UNIX to Windows check box.

4. In the User Name field, type the Windows network login that is used to
access the Windows file location for mounting on the Intelligence
Server.

5. In the Password field, type the password for that user name.

6. In the Mount Root field, type the location on the Intelligence Server
machine where the mount is stored. Make sure this is a properly formed
UNIX path, using forward slashes / to separate directories. For
example:
/bin/Sharity/Mount1

7. Click OK.
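Before saving the device, it can be worth confirming that the mount root actually exists and is writable on the Intelligence Server machine. A minimal pre-flight check is sketched below; the path is the example from step 6 and should be replaced with your actual Sharity mount point:

```python
import os

# Example mount root from step 6; replace with your actual Sharity mount point.
mount_root = "/bin/Sharity/Mount1"

def check_mount_root(path: str) -> list[str]:
    """Return a list of problems found with a delivery mount point."""
    problems = []
    if not os.path.isdir(path):
        problems.append(f"{path} does not exist or is not a directory")
    elif not os.access(path, os.W_OK):
        problems.append(f"{path} exists but is not writable")
    if "\\" in path:
        problems.append("path uses backslashes; UNIX paths need forward slashes")
    return problems

for problem in check_mount_root(mount_root):
    print("WARNING:", problem)
```

An empty result means the mount point is a writable directory with a properly formed UNIX path; any warning indicates the Sharity mount (or the path typed in step 6) needs attention before deliveries will succeed.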

Creating an Email Device


An email device automatically sends emails, which contain reports or
documents that have been subscribed to by users, or for users by other
users or administrators. You create a new email device whenever you need
a specific combination of properties and settings to deliver files via email.
For example, an email sent through Microsoft Outlook requires a device with
settings that are different from an email sent through a web-based email
account.


You can specify various MIME options for the emails sent by an email device, such as the type of encoding for the emails, the type of attachments the emails can support, and so on.

A quick way to create a new email device is to duplicate an existing device and then edit its settings to meet the needs for this new device. This is a time-saving method if you have a similar device already created, or you want to make use of the default MicroStrategy email device. To duplicate a device, right-click the device that you want to duplicate and select Duplicate.

You need an understanding of your organization's email server or other email delivery systems.

To Create a New Email Device

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. Right-click in any open space in the Device List area on the right, select
New, and then Device.

3. Select Email and click OK.

4. Change the device settings as desired.

5. Click OK.

Once an email device is created, it appears in the list of existing email devices when you create an address for a MicroStrategy user or a contact. You select an email device and assign it to the address you are creating. When a user subscribes to a report to be sent to this address, the report is sent to the email recipient specified in that address, using the delivery settings specified in the associated email device.


Creating a Print Device


A print device sends a report or document to a specified network printer,
where the report or document is automatically printed. You create a new
print device whenever you need a specific combination of properties and
settings to deliver files to a printer. You can create a new print device,
define new printer properties for the default print device that comes with
MicroStrategy, or use the default device with its default printer settings.

If you are creating a print device to deliver a report or document to a dynamic recipient list, you can dynamically specify the printer location using the {&RecipientListAddress} macro. When the subscription is run, the macro is replaced with the Physical Address specified in its dynamic recipient list. For more information on dynamic recipient lists, see Creating Subscriptions, page 1336.

The selected printer must be added to the list of printers on the machine on
which Intelligence Server is running.

To Create a New Print Device

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. Right-click in the Device List area on the right, select New, and then
Device.

3. Select Print and click OK.

4. Change the device settings as desired.

5. Click OK.

Once a print device is created, it appears in the list of existing print devices when you create an address (in this case, a path to the printer) for a MicroStrategy user or a contact. You select a print device and assign it to the address you are creating. When a user subscribes to a report to be sent to this address, the report is sent to the printer specified in that address, using the delivery settings specified in the associated print device. For details on creating an address for a user or on creating a contact and adding addresses to the contact, click Help.

Creating an FTP Device


An FTP device automatically sends a report or document in the form of a file
to a location on an FTP server. Users subscribe to a report or document,
which triggers the FTP device to send the subscribed report or document to
the specified location when the subscription requires it to be sent.

You create a new device whenever you need a specific combination of properties and settings for an FTP device to deliver files.

A quick way to create a new FTP device is to duplicate an existing device and then edit its settings to meet the needs for this new device. This is a time-saving method if you have a similar device already created, or you want to make use of the default MicroStrategy FTP device. To duplicate a device, right-click the device that you want to duplicate and select Duplicate.

To Create a New FTP Device

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. Right-click in the Device List area on the right, select New, and then
Device.

3. Select FTP and click OK.

4. Change the device settings as desired.

5. Click OK.

Once the FTP device is created, it appears in the list of existing FTP devices. When you create an address for a MicroStrategy user or a contact, you can select an FTP device and assign it to the address you are creating. When a user subscribes to a report to be delivered to this address, the report is delivered to the delivery location specified in that address, using the delivery settings specified in the associated FTP device. For details on creating an address for a user or on creating a contact and adding addresses to the contact, click Help.

Creating an iPhone/iPad Device


An iPhone/iPad delivery device is used to automatically push real-time
updates of reports or documents to a user's iPhone/iPad, when the report or
document is updated. Users subscribe to a report or document, which
triggers the iPhone/iPad device to send the subscribed report or document
to the user's iPhone/iPad.

You create a new device whenever you need a specific combination of properties and settings for an iPhone/iPad device to deliver reports or documents.

A quick way to create a new iPhone/iPad device is to duplicate an existing device and then edit its settings to meet the needs for the new device. This is a time-saving method if you have a similar device already created, or you want to duplicate the default MicroStrategy iPhone/iPad device. To duplicate a device, right-click the device that you want to duplicate and select Duplicate.

To Create an iPhone/iPad Device

1. In Developer, from the Folder List on the left, go to Administration > Delivery Managers > Devices.

2. Right-click in the Device List area on the right, select New, and then
Device.

3. Select Mobile Client iPhone or Mobile Client iPad and click OK.

4. The default APNS server address and port number are api.push.apple.com and 443.
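Because push delivery requires Intelligence Server to reach Apple's APNS endpoint over the network, a quick TLS reachability check from the server machine can confirm that outbound traffic on port 443 is not blocked. This sketch uses only the Python standard library and tests reachability, not APNS authentication:

```python
import socket
import ssl

APNS_HOST = "api.push.apple.com"   # default address from the device editor
APNS_PORT = 443                    # default port from the device editor

def can_reach_apns(host: str = APNS_HOST, port: int = APNS_PORT,
                   timeout: float = 5.0) -> bool:
    """Open a TLS connection to the APNS host and report success/failure."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                # A completed TLS handshake is enough to prove reachability.
                return tls.version() is not None
    except OSError:
        return False

if __name__ == "__main__":
    print("APNS reachable:", can_reach_apns())
```

A False result usually points at a firewall or proxy blocking outbound 443 from the Intelligence Server machine rather than at the device configuration itself.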

After an iPhone/iPad delivery device is created, you see it in the list of existing iPhone/iPad devices when you create an address for a MicroStrategy user or a contact. You select an iPhone/iPad device and assign it to the address you are creating. When a user subscribes to a report to be delivered to this address, the report is delivered to the iPhone/iPad specified in that address, using the delivery settings specified in the associated iPhone/iPad device.


Deleting a Device
You can delete a device if it is no longer needed.

Before you delete a device, update the contacts and subscriptions that use it by replacing the device with a different one. To do this, check whether the device you want to delete is used by any existing addresses:

l To find contacts, from the Developer Folder List, go to Administration > Delivery Managers > Contacts. In View Options, select the device name.

l To find subscriptions that are dependent on the device, right-click each contact and select Search for dependent subscriptions.

To Delete a Device

1. From the Developer Folder List, go to Administration > Delivery Managers > Devices.

2. In the Device List area on the right, right-click the device you want to
delete.

3. Select Delete.

Upgrading the API for the Apple Push Notification Service to Support Mobile Subscriptions
Starting in MicroStrategy 2020 Update 2, Intelligence Server supports the HTTP/2-based APNS API to send remote notification requests to Apple devices. The legacy protocol is no longer supported as of November 2020.

If you are upgrading from 11.2.x to 11.3, you must upgrade the metadata. If
the mobile device already exists in the metadata, the server name and port
number are changed dynamically to adapt to the new API. No manual work is
necessary.


You can also update the default value for the server name and port number
by editing the mobile device through Developer, using the procedure below.

To Modify the iPhone/iPad Device

1. In Developer, from the Folder List on the left, go to Administration > Delivery Managers > Devices.

2. Right-click a device and select Edit.

3. Enter your device settings.


4. Click OK.

After upgrading to the new API, the Apple feedback server is no longer used, because the APNS server sends per-notification feedback to Intelligence Server; Developer therefore hides the APNS feedback service. The default push notification server address has also been upgraded for servers using the new API.


ADMINISTERING MICROSTRATEGY WEB AND MOBILE


As a MicroStrategy system administrator, you may be responsible for managing MicroStrategy Web and Mobile environments. Some of these tasks are performed in the Developer interface, such as managing user and group privileges for Web users, or registering a project in server (3-tier) mode so it can be available in Web. However, other administrative parameters are set using the MicroStrategy Web or Mobile Server administrative interface. In addition, configuring your mobile devices can be done through the Mobile Server Administrator.

In addition to the information in this section, each option in the MicroStrategy Web or Mobile Server administration interface is documented in the relevant Help system.

Assigning Privileges for MicroStrategy Web


MicroStrategy Web products are available in three different editions, each
having an associated set of privileges. The number of users assignable to
any one edition is based on your license agreement.

MicroStrategy provides these editions for MicroStrategy Web products:

l Web Professional

l Web Analyst

l Web Reporter

The privileges available in each edition are listed in the List of Privileges
section. You can also print a report of all privileges assigned to each user
based on license type; to do this, see Audit Your System for the Proper
Licenses, page 734.

All MicroStrategy Web users that are licensed for MicroStrategy Report
Services may view and interact with a document in Flash Mode. Certain
interactions in Flash Mode have additional licensing requirements:


l Users are required to license MicroStrategy Web Analyst to pivot row or column position in a grid or cross-tabular grid of data in Flash Mode.

l Users are required to license MicroStrategy Web Professional to modify the properties of Widgets used in a document in Flash Mode.

A user assigned to an edition is entitled to a complete set, or identified subset, of the privileges listed for that edition.

If a user is assigned to multiple user groups, the privileges of those groups are additive, and determine the edition usage of that particular user. For example, if a user is a member of both the Finance and the Accounting user groups, privileges for that user are equivalent to the cumulative set of privileges assigned to those two groups.
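The additive behavior described above is effectively a set union. A small illustrative model follows; the group names and privilege strings here are hypothetical, not the product's actual privilege list:

```python
# Hypothetical group-to-privilege mapping, for illustration only.
group_privileges = {
    "Finance":    {"Web user", "Export to Excel", "Create dossier"},
    "Accounting": {"Web user", "Schedule subscriptions"},
}

def effective_privileges(user_groups: list[str]) -> set[str]:
    """A user's privileges are the union of all of their groups' privileges."""
    result: set[str] = set()
    for group in user_groups:
        result |= group_privileges.get(group, set())
    return result

# A member of both Finance and Accounting gets the cumulative set.
privs = effective_privileges(["Finance", "Accounting"])
print(sorted(privs))
# ['Create dossier', 'Export to Excel', 'Schedule subscriptions', 'Web user']
```

Because the result is a union, adding a user to another group can only grant privileges, never remove them, which is why a single out-of-edition privilege is enough to change the user's counted edition.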

One privilege, Web Administration, can be assigned to any edition of Web user. This privilege allows the user to access the Web Administrator page to manage server connections, and to access the Project defaults link on the Preferences page to set defaults for all users.

The MicroStrategy security model enables you to set up user groups that can
have subgroups within them, thus creating a hierarchy. The following applies
to the creation of user subgroups:

l A child subgroup automatically inherits privileges assigned to its parent group.

l A child subgroup can be assigned other privileges in addition to inherited privileges.

User groups corresponding to the three editions of MicroStrategy Web products are predefined with the appropriate privilege sets. These user groups are available under the User Group folder in the Administration folder for your project.

You need project administration privileges to view and modify user group
definitions.


See your license agreement as you determine how each user is assigned to
a given privilege set. MicroStrategy Web products provide three Web
editions (Professional, Analyst, Reporter), defined by the privilege set
assigned to each.

Assigning privileges outside those designated for each edition changes the
user's edition. For example, if you assign to a user in a Web Reporter group
a privilege available only to a Web Analyst, MicroStrategy considers the
user to be a Web Analyst user.

Within any edition, privileges can be removed for specific users or user
groups. For more information about security and privileges, see Chapter 2,
Setting Up User Security.

License Manager enables you to perform a self-audit of your user base and,
therefore, helps you understand how your licenses are being used. For more
information, see Audit and Update Licenses, page 728.

Defining Project Defaults


If you have the Web Administration privilege, you can set the default options
for one or more projects in the Preferences section. The Project defaults link
is displayed only if you have the Web Administration privilege.

Any changes you make to the project defaults become the default settings
for the current project or for all Web projects if you select the Apply to all
projects on the current MicroStrategy Intelligence Server (server
name) option from the drop-down list.

The project defaults include user preference options, which each user can
override, and other project default settings accessible only to the
administrator.

For information on the History List settings, see Saving Report Results:
History List, page 1240.


Loading and Applying Default Values


The Load Default Values option works differently on the Project defaults and
the User preferences pages:

l When the administrator who is setting the Project defaults clicks Load
Default Values, the original values shipped with the MicroStrategy Web
products are loaded on the page.

l When users who are setting User preferences click Load Default Values,
the project default values that the administrator set on the Project defaults
pages are loaded.

The settings are not saved until you click Apply. If you select Apply to all
projects on the current Intelligence Server (server name) from the drop-
down menu, the settings are applied to all projects, not just the one you are
currently configuring.

Setting User Preferences and Project Defaults


Users can change the individual settings for their user preference options by
accessing them via the Preferences link at the top of the Web page.
However, you can set what default values the users see for these options.
To do this, click the Preferences link, then click the Project defaults link on
the left-hand side of the page (under the "Preferences Level" heading).

You can then set the defaults for several categories, including the following:

l General

l Folder Browsing

l Grid display

l Graph display

l History List

l Export Reports


l Print Reports (PDF)

l Drill mode

l Prompts

l Report Services

l Security (see note below for Web)

l Project display (see note below for Web)

l Office

l Color Palette

l Email Addresses

l File Locations

l Printer Locations

l FTP Locations

l Dynamic Address Lists

Some of these categories are displayed only in certain circumstances. For example, the Report Services link appears only if you have a license to use Report Services.

Each category has its own page and includes related settings that are
accessible only to users with the Web Administration privilege. For details
on each setting, see the MicroStrategy Web Help for the Web Administrator.

Using Additional Security Features for MicroStrategy Web and Mobile
This section describes how MicroStrategy Web and Mobile products can be
made more secure by using standard Internet security technologies such as
firewalls, digital certificates, and encryption.


For information on enabling secure, encrypted communications for Web, see Configuring Web, Mobile Server, and Web Services to Require SSL Access.

Using Firewalls
A firewall enforces an access control policy between two systems. A firewall
can be thought of as something that exists to block certain network traffic
while permitting other network traffic. Though the actual means by which this
is accomplished varies widely, firewalls can be implemented using both
hardware and software, or a combination of both.

Firewalls are most frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. If you use MicroStrategy Web or Mobile products over the Internet to access projects on an Intelligence Server that is most likely on an intranet, there is the possibility that a malicious user can exploit the security hole created by the connection between the two systems.

Therefore, in many environments and for a variety of reasons you may want to put a firewall between your Web servers and the Intelligence Server or cluster. This does not pose any problems for the MicroStrategy system, but there are some things you need to know to ensure that the system functions as expected. Another common place for a firewall is between the Web clients and the Web or Mobile server. The following diagram shows how a MicroStrategy system might look with firewalls in both of these locations:


Regardless of how you choose to implement your firewalls, you must make
sure that the clients can communicate with MicroStrategy Web and Mobile
Servers, that MicroStrategy Web and Mobile can communicate with
Intelligence Server, and vice versa. To do this, certain communication ports
must be open on the server machines and the firewalls must allow Web
server and Intelligence Server communications to go through on those ports.
Most firewalls have some way to specify this. Consult the documentation
that came with your firewall solution for details.

To Enable Communication through a Firewall

1. Client Web browsers communicate with MicroStrategy Web on port 80
   (HTTP). So, if you have a firewall between your clients and
   MicroStrategy Web servers, you must make sure port 80 is allowed to
   send and receive requests through the firewall.

Depending on how you deployed Web Universal, it may communicate on a
different port number.

2. MicroStrategy Web products can communicate with Intelligence Server
   using any port that is greater than 1024. By default, the ports are
   34952 and 34962. If you have a firewall between your Web servers and
   Intelligence Server, you must make sure ports 34952 and 34962 are
   allowed to send and receive TCP/IP requests through the firewall.

You can change this port number. See the steps in the next procedure
To Change the Port through which MicroStrategy Web and Intelligence
Server Communicate, page 1388 to learn how.

3. You must configure your firewall to allow MicroStrategy Web products
   to communicate with Intelligence Server using port 3333. This is in
   addition to the port configured in the previous step of this procedure.

4. The MicroStrategy Listener Service communicates with MicroStrategy
   Web products and Intelligence Server on port 30172. So, if you are
   using the Listener Service, you must make sure port 30172 is allowed
   to send and receive TCP/IP and UDP requests through the firewall. You
   cannot change this port number.

5. The MicroStrategy Intelligence Server REST Listener listens on port
   34962 for REST requests. So, if you have a firewall, you must make
   sure port 34962 is allowed to receive TCP requests through the
   firewall. If you change this port (34962) to a different one through
   the Configuration Wizard, you must modify the firewall's inbound rules
   accordingly.

6. MicroStrategy Messaging Services uses ports 2181, 9092, 2888, and
   3888 to communicate with other MicroStrategy Services, such as the
   Intelligence Server, New Export Engine, MicroStrategy Identity Server,
   and Platform Analytics. If you have a firewall between MicroStrategy
   Services, you must make sure these four ports are allowed to send and
   receive TCP requests through the firewall.

7. MicroStrategy Topology uses ports 8300 and 8301 to communicate
   between agents. If you have a firewall between MicroStrategy Services,
   you must make sure these two ports are allowed to send and receive
   TCP/UDP requests through the firewall.

The MicroStrategy Services are as follows:

l MicroStrategy Intelligence Server

l MicroStrategy Web Universal

l MicroStrategy Library

l MicroStrategy Mobile

l MicroStrategy Messaging Services

l MicroStrategy Platform Analytics

l MicroStrategy Certificate Store

l Usher Security Services
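The port requirements above can be spot-checked with a short script run from the client or Web server side of each firewall. This is an illustrative sketch, not a MicroStrategy tool; the host name in the commented example is a placeholder for your own environment.

```python
import socket

# Ports named in the steps above; adjust for your deployment.
REQUIRED_PORTS = (80, 34952, 34962, 3333, 30172)

def port_is_open(host, port, timeout=3.0):
    """Attempt a TCP connection; True means something accepted the
    connection, i.e. the firewall let the request through."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name):
# for port in REQUIRED_PORTS:
#     state = "open" if port_is_open("aps-server.example.com", port) else "blocked/closed"
#     print(f"port {port}: {state}")
```

Note that a "blocked/closed" result cannot distinguish a firewall rule from a service that simply is not running; check the service first.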


To Change the Port through which MicroStrategy Web and Intelligence
Server Communicate

By default, MicroStrategy Web and Intelligence Server communicate with
each other using port 34952 (Web Universal may use a different port
depending on how you deployed it). If you want to change this, you must
change it for both the Web servers and the Intelligence Servers. The port
numbers on both sides must match.

If you are using clusters, you must make sure that all machines in the Web
server cluster can communicate with all machines in the Intelligence Server
cluster.

To Change the Port Number for Intelligence Server

1. In Developer, log in to the project source that connects to the server
   whose port you want to change.

2. In the Service Manager, click Options.

3. On the Intelligence Server Options tab, type the port number you want
to use in the Port Number box. Save your changes.

4. A message appears telling you to restart Intelligence Server. Click OK.

5. Restart Intelligence Server.

6. In Developer, right-click the project source that connects to the
   Intelligence Server whose port number you changed and choose
   Modify Project Source.

7. On the Connection tab, enter the new port number and click OK.

You must update this port number for all project sources in your
system that connect to this Intelligence Server.


To Change the Port Number for MicroStrategy Web

1. Open the Administrator page in MicroStrategy Web.

2. If your MicroStrategy Web product is connected to the Intelligence
   Server whose port number you changed, click Disconnect to disconnect
   it. You cannot change the port while connected to an Intelligence
   Server.

It probably is not connected because the MicroStrategy Web product does
not yet know the new port number you assigned to Intelligence Server.

3. In the entry that corresponds to the appropriate Intelligence Server,
   click Modify (in the Properties column, all the way to the right).

4. In the Port box, type the port number you want to use. This port number
must match the port number you set for Intelligence Server. An entry of
0 means use port 34952 (the default).

5. Click Save.

If the port numbers for your MicroStrategy Web product and Intelligence
Server do not match, you get an error when the MicroStrategy Web product
tries to connect to Intelligence Server.

Using Cookies
A cookie is a piece of information that is sent to your Web browser—along
with an HTML page—when you access a Web site or page. When a cookie
arrives, your browser saves this information to a file on your hard drive.
When you return to the site or page, some of the stored information is sent
back to the Web server, along with your new request. This information is
usually used to remember details about what a user did on a particular site
or page for the purpose of providing a more personal experience for the
user. For example, you have probably visited a site such as Amazon.com
and found that the site recognizes you. It may know that you have been
there before, when you last visited, and maybe even what you were looking
at the last time you visited.

MicroStrategy Web products use cookies for a wide variety of things. In fact,
they use them for so many things that the application cannot work without
them. Cookies are used to hold information about user sessions,
preferences, available projects, language settings, window sizes, and so on.
For a complete and detailed reference of all cookies used in MicroStrategy
Web and MicroStrategy Web Universal, see the MicroStrategy Web Cookies
section.
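As a concrete illustration of the mechanism, the standard-library sketch below shows how a cookie travels as a response header and is parsed back from the next request. The cookie name used here is a hypothetical example, not one of the actual MicroStrategy Web cookies.

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header to send along with the HTML page.
outgoing = SimpleCookie()
outgoing["lastProject"] = "Tutorial"               # hypothetical preference cookie
outgoing["lastProject"]["path"] = "/MicroStrategy"
header = outgoing.output(header="Set-Cookie:")
print(header)

# Browser side: the stored value comes back on the next request in a
# Cookie header, which the server parses to recognize the user's state.
incoming = SimpleCookie()
incoming.load("lastProject=Tutorial")
print(incoming["lastProject"].value)
```

The real cookies listed in the MicroStrategy Web Cookies section follow this same set-and-return cycle, just with many more names and attributes.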

Using Encryption
Encryption is the encoding of data so that it cannot be read without the
corresponding key. The most common use of encryption is to protect
information sent across a network, so that a malicious user gains nothing
by intercepting a network communication. Information stored in or written
to a file is also sometimes encrypted. The SSL technology described
earlier is one example of an encryption technology.

MicroStrategy Web products can use encryption in many places, but most of
these options are disabled by default until you enable them.

Encryption in MicroStrategy Web Products


You can encrypt all communication between the Web server and Intelligence
Server. Additional overhead is involved in encrypting and decrypting all this
network traffic so you may see a noticeable performance degradation if
encryption is enabled. However, if you are working with sensitive or
confidential information, this may be an acceptable trade-off.


To Encrypt All Communication Between MicroStrategy Web Products and
Intelligence Server

1. Go to the Administrator Page.

2. At the top of the page or in the column on the left, click Security to see
the security settings.

3. Within the Encryption area, select one of the following encryption
   options:

l No encryption (default): Data between Web and Intelligence Server is
  not encrypted.

l SSL: Uses Secure Socket Layer (SSL) encryption to secure data between
  Web and Intelligence Server. This is the recommended option for secure
  communications. For instructions to set up SSL encryption for Web, see
  Configuring Web, Mobile Server, and Web Services to Require SSL Access.

4. Click Save. Now all communication between the Web server and
Intelligence Server is encrypted.
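For the SSL option, the client side of an encrypted connection generally looks like the following standard-library sketch. This is generic Python `ssl` usage to illustrate what SSL encryption involves, not MicroStrategy's internal implementation; the host name and port in the commented example are placeholders.

```python
import socket
import ssl

# A default context verifies the server's certificate chain and host name,
# which is what makes the encrypted channel trustworthy, not just private.
context = ssl.create_default_context()

def open_encrypted_channel(host, port):
    """Wrap a plain TCP socket in TLS; every byte sent afterward is
    encrypted on the wire and decrypted only at the other end."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Example (placeholder host and standard HTTPS port):
# tls_sock = open_encrypted_channel("web-server.example.com", 443)
```

The overhead the text mentions comes from the handshake and the per-byte encrypt/decrypt work this wrapping adds.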

Applying File-Level Security


It is important to remember that no matter what kind of security you set up,
there is always the possibility that a malicious user can bypass it all by
gaining access to the physical machine that hosts the Web application. For
this reason you should make sure that the machine is in a secure location
and that you restrict access to the files stored on it using the standard file-
level security offered by the operating system.

In typical production environments, only a small number of administrative
users are allowed to log on to server machines. All other users either
have very limited access to the files and applications on the machine or,
better yet, no access at all.


For example, with Microsoft IIS, by default only the "Internet guest user"
needs access to the virtual directory. This is the account under which all file
access occurs for Web applications. In this case, the Internet guest user
needs the following privileges to the virtual directory: read, write, read and
execute, list folder contents, and modify.

However, only the administrator of the Web server should have these
privileges to the Admin folder in which the Web Administrator pages are
located. When secured in this way, if users attempt to access the
Administrator page, the application prompts them for the machine's
administrator login ID and password.

In addition to the file-level security for the virtual directory and its contents,
the Internet guest user also needs full control privileges to the Log folder in
the MicroStrategy Common Files, located by default in C:\Program
Files (x86)\Common Files\MicroStrategy. This ensures that any
application errors that occur while a user is logged in can be written to the
log files.

The file-level security described above is all taken care of for you when you
install the ASP.NET version of MicroStrategy Web using Microsoft IIS.
These details are just provided for your information.

If you are using the J2EE version of MicroStrategy Web you may be using a
different Web server, but most Web servers have similar security
requirements. Consult the documentation for your particular Web server for
information about file-level security requirements.
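On UNIX-like systems, the same principle of restricting file access can be audited with the `stat` module; the sketch below flags files that any user on the machine could modify. This is a generic illustration (Windows/IIS uses ACLs instead of mode bits), and the directory path in the commented example is a placeholder.

```python
import os
import stat

def world_writable(path):
    """Return True if 'other' users have write permission on the file."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH)

def audit(directory):
    """List files under a directory tree that any user can modify."""
    flagged = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            full = os.path.join(root, name)
            if world_writable(full):
                flagged.append(full)
    return flagged

# Example (placeholder path):
# print(audit("/var/www/microstrategy"))
```

A clean audit does not replace physical security of the machine, which the text notes is the ultimate backstop.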

Sample MicroStrategy System


The following diagram summarizes what a typical MicroStrategy system
might look like if you take into account firewalls, digital certificates, and
encryption:


Integrating Narrowcast Server with MicroStrategy Web products
It is possible to enable Scheduled Delivery and Send Now features in
MicroStrategy Web products. The Scheduled delivery option allows users to
have a report sent to an e-mail address that they specify on a certain
schedule, to a printer, or to a file location. These schedules are defined
in MicroStrategy Narrowcast Server and are separate from the schedules
maintained in Intelligence Server. The Send Now option allows users to
send a report immediately to an e-mail address that they specify.

You must have MicroStrategy Narrowcast Server installed and configured
before the Scheduled Delivery and Send Now options work. See that
product's documentation for more information.

For more detailed information about this, see the Installation and
Configuration Help.

To configure the Subscription Portal delivery option for MicroStrategy Web
products, either the folder or the drive where the Subscription Engine is
installed must be shared while the system is being configured. That is,
the service running the Subscription Administrator must have read and
write access to either:

l The folder where the Subscription Engine is installed

l The entire drive where the Subscription Engine is installed

MicroStrategy Narrowcast Server and MicroStrategy Web products can
automatically share this drive for the local Administrators group. The
Subscription Administrator service should run under an account that is a
member of the local Administrators group. You can unshare the drive or
folder after the system is configured. If you do not want to automatically
share the drive, perform the steps listed here.

To Share the Folder where the Subscription Engine is Installed

1. Modify the Admin.properties file located on the Subscription Engine
   machine:

..\MicroStrategy\Narrowcast Server\Subscription
Engine\build\server\


Modify the file contents so the corresponding two lines are as follows:

TransactionEngineLocation=machine_name:\\Subscription
Engine\\build\\server
TransactionEngineLocation=MACHINE_NAME:/Subscription Engine/build/server

where machine_name is the name of the machine where the Subscription
Engine is installed.

2. Share the folder where the Subscription Engine is installed for either
the local Administrators group or for the account under which the
Subscription Administrator service account runs. This folder must be
shared as Subscription Engine.

You should ensure that the password for this account does not expire.

If the Subscription Engine machine's drive is shared and unshared multiple
times, the following Windows message displays: "System Error: The network
name was deleted."

This message does not indicate a problem. Click OK to make the
Subscription Administrator service functional.

3. Restart the Subscription Administrator service.

Enabling Users to Install MicroStrategy Office from Web
This information applies to the legacy MicroStrategy Office add-in, the
add-in for Microsoft Office applications which is no longer actively
developed.

It was replaced by a new add-in, MicroStrategy for Office, which supports
Office 365 applications. The initial version does not yet have all the
functionality of the previous add-in.


If you are using MicroStrategy 2021 Update 2 or a later version, the
legacy MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.

From the MicroStrategy Web Administrator page, you can designate the
installation directory path to MicroStrategy Office, and also determine
whether a link to Office installation information appears in the MicroStrategy
Web interface.

You must install and deploy MicroStrategy Web Services to allow the
installation of MicroStrategy Office from MicroStrategy Web. For information
about deploying MicroStrategy Web Services, see the MicroStrategy for Office
Help.

To specify the path to MicroStrategy Office and determine whether users
can install MicroStrategy Office from Web
1. In Windows, go to Start > Programs > MicroStrategy Tools > Web
Administrator.

2. Click Connect.

3. Underneath Web Server on the left, click MicroStrategy Office.

4. In the Path to MicroStrategy Office Installation field, type the base
   URL of your MicroStrategy Web Services machine, for example:
   http://server:port/Web_Services_virtual_directory/Office

MicroStrategy Web automatically attaches /Lang_xxxx/officeinstall.htm to
the end of the URL, where Lang_xxxx refers to the currently defined
language in MicroStrategy Web. For example, if the language in
MicroStrategy Web is set to English, a completed URL may appear as
follows:

http://localhost/MicroStrategyWS/office/Lang_1033/officeinstall.htm


5. Test the URL path by clicking Go.

6. Click your browser's Back button to return to the Web Administration -
   MicroStrategy Office settings page.

7. To ensure that an Install MicroStrategy Office link is displayed at
   the top of users' project selection and login pages in MicroStrategy
   Web, select the Show link to installation page for all users on the
   Projects and Login pages check box. When users click the Install
   MicroStrategy Office link, a page opens with instructions on how to
   install MicroStrategy Office on their machine.

8. Click Save.
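The way the final installation URL is assembled from the base URL in step 4 can be sketched as a simple string operation. The only locale folder shown is the English example from the text (Lang_1033); other locale codes are not listed here.

```python
def office_install_url(base_url, lang="Lang_1033"):
    """Append the language folder and installation page to the base URL,
    mirroring what MicroStrategy Web does automatically per step 4."""
    return f"{base_url.rstrip('/')}/{lang}/officeinstall.htm"

print(office_install_url("http://localhost/MicroStrategyWS/office"))
```

The printed result matches the completed English-language URL shown in the procedure above.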

FAQs for Configuring and Tuning MicroStrategy Web Products
How can I Configure MicroStrategy Web and the Library
Server for SameSite Cookie support?
Chrome Web Browser version 80 and above introduces new changes which
may impact embedding. For more information, see KB484005: Chrome v80
Cookie Behavior and the Impact on MicroStrategy Deployments.

How do I Configure my MicroStrategy Web Environment if I have a User
Community of x Users? How much Hardware am I Going to Need?
This information is addressed in the MicroStrategy Knowledge Base.

How do Time-Out Settings in MicroStrategy Web and Intelligence Server
Affect MicroStrategy Web Users?
Several settings related to session time-out may affect MicroStrategy Web
users.


First, in the Intelligence Server Configuration Editor, under Governing
Rules: Default: General, the value in the Web user session idle time
(sec) field determines the number of seconds a user can remain idle before
being logged out of Intelligence Server.

Second, in the web.config file, located by default in C:\Program Files
(x86)\MicroStrategy\Web ASPx, the time-out setting determines the number
of minutes after which the .NET session object is released if it has not
been accessed. This time-out is independent of the Intelligence Server
time-out above.

Below is the section of the web.config file that contains the time-out
setting (line 5):

1 <sessionState mode="InProc"
2 stateConnectionString="tcpip=127.0.0.1:42424"
3 sqlConnectionString="data source=127.0.0.1;user id=sa;password="
4 cookieless="false"
5 timeout="20"
6 />

This setting does not affect Web Universal because it does not use .NET
architecture.
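A quick way to confirm the current value is to read the attribute programmatically; the sketch below parses a `sessionState` fragment like the one above using the standard library. The fragment is inlined here for illustration; in practice you would read your actual web.config file instead.

```python
import xml.etree.ElementTree as ET

# Inlined stand-in for the sessionState element from web.config.
fragment = """
<sessionState mode="InProc"
              cookieless="false"
              timeout="20" />
"""

element = ET.fromstring(fragment)
timeout_minutes = int(element.get("timeout"))
print(f".NET session timeout: {timeout_minutes} minutes")
```

Reading a full web.config would work the same way, navigating from the document root down to the `sessionState` element before fetching the attribute.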

A third setting is the MicroStrategy Web Administration setting Allow
automatic login if session is lost. This setting is in the Security
section of the Web Administration page. It enables users to be
automatically reconnected to Intelligence Server if the session is lost.

This setting does not automatically reconnect the .NET session object.

The following table demonstrates how the previous settings interact in
various combinations.


Intelligence Server  web.config  Allow automatic login  User idle  Result
time-out             time-out    if session is lost     time
-------------------  ----------  ---------------------  ---------  ------------------------------------
45 minutes           20 minutes  Either                 30 minutes User must log back in
20 minutes           45 minutes  No                     30 minutes User must log back in
20 minutes           45 minutes  Yes                    30 minutes User is automatically logged back in
20 minutes           45 minutes  Either                 60 minutes User must log back in

A fourth group of settings determines whether Web user sessions can be
backed up and recovered. That is, if the user was viewing a report,
document, or dashboard when the session was ended, when the user logs back
in to Web, they can click a link to return to that report, document, or
dashboard. If this is enabled, you can configure where and for how long
the session is stored on disk. After the session has expired, the user
cannot recover the session.

To configure these settings, access the Intelligence Server Configuration
Editor and select the Governing Rules: Default: Temporary Storage
Settings category. To enable the feature, select the Enable Web User
Session Recovery on Logout check box, and in the Session Recovery backup
expiration (hrs) field, type the number of hours you want to allow a
session to be stored. In Session Recovery and Deferred Inbox storage
directory, specify the folder where the user session information is
stored.

How can I Tune my MicroStrategy Web Server for Best Performance?
l Clustering multiple web servers improves performance. For more
information about this, see Chapter 9, Cluster Multiple MicroStrategy
Servers.


l You can modify certain settings in the MicroStrategy Web server machine
or application for best performance. Details for MicroStrategy Web and
Web Universal follow:

MicroStrategy Web (ASP.NET)


The most significant things you can do:

l Tune Microsoft's Internet Information Services (IIS). For details, see the
MicroStrategy Tech Notes TN11275 and TN7449.

l Increase the server machine's Java Virtual Machine heap size. For
information on doing this, see MicroStrategy Tech Note TN6446.

MicroStrategy Web Universal (J2EE)


Tuning actions for the J2EE version of MicroStrategy Web Universal vary
according to the Web server you are using. For tuning details, see the
appropriate section in the Installation and Configuration Help.

See the documentation for your particular Web application server for
additional tuning information. In general, these are the things you can do:

l Use the MicroStrategy Web server instead of the application server to
  serve static files (such as CSS and JavaScript).

l Precompile JSPs according to the platform you are using.

l Increase the application server's Java Virtual Machine heap size.


COMBINING ADMINISTRATIVE TASKS WITH SYSTEM MANAGER


MicroStrategy System Manager lets you combine multiple, sequential
processes for your MicroStrategy environment into a single workflow that
can be deployed at a scheduled time or on demand. You can create workflows
for different tasks, such as installing, maintaining, and upgrading
MicroStrategy environments; backing up projects; and launching or shutting
down Cloud instances. These workflows can be deployed using a standard
interface, an interactive command line process, or a completely silent
configuration process.

l Creating a Workflow, page 1402: Includes steps to create a workflow
  using System Manager, as well as information on all the components
  required to create a workflow.

l Defining Processes, page 1447: Includes information on all the
  processes that can be included in a System Manager workflow. System
  Manager provides a set of MicroStrategy and non-MicroStrategy processes
  to include in a workflow.

l Deploying a Workflow, page 1545: Includes information on how to deploy
  a System Manager workflow. This includes deploying a workflow using a
  standard interface, an interactive command line process, and a
  completely silent configuration process, which is suited for OEM
  deployments.

Creating a Workflow
You use System Manager to create a workflow visually, by dragging and
dropping processes and linking them together. This allows you to see the
step-by-step process that leads the workflow from one process to the next.
This visual approach to creating a workflow can help you to notice
opportunities to troubleshoot and error check processes as part of a
workflow.

The steps provided below show you how to create a workflow using System
Manager. Additional details on the various components that constitute a
System Manager workflow are provided after these steps.


It can be beneficial to determine the purpose of your workflow and plan the
general logical order of the workflow before using System Manager.

To Create a System Manager Workflow

The steps provided below are expressed as a linear process. However, as
you create a workflow, the steps of creating and modifying processes,
connectors, decisions, parameters, and other components of a workflow can
be interchanged as the requirements for a workflow are determined.

1. Open System Manager.

2. Select one of the following options to create a workflow:

l New Workflow: Creates a new workflow. The steps below assume that you
  have selected this option to create a new workflow.

l Open Workflow: Opens an existing workflow. You can create a new
  workflow based on an existing workflow.

l Templates: Opens a sample, template workflow for a configuration
  scenario. The sample workflows provide a framework for certain tasks,
  letting you modify the details for each process to work with the
  components and tools in your environment. For information on some of
  the available sample workflows, see Sample Workflows: Templates, page
  1437.

To Create Processes in a Workflow

1. From the Connectors and processes pane, double-click a process to add
   it to the workflow. You can then move the process around so that it
   fits into the organization of the workflow.


By default, the first process created in a workflow is automatically
defined as an entry process, and all other processes are automatically
disabled as entry processes.

2. Right-click the process and select Rename. Type a new name for the
process.

3. Select the process, and then select Properties in the pane on the right
side. Provide all the required information for the process. For details on
the properties required for each process, see Defining Processes, page
1447.

You can also use parameters to supply information for a process. To use a
parameter in a process or decision, you must use the following syntax:
${ParameterName}

In the syntax listed above, ParameterName is the name of the parameter.
During execution, this is replaced with the value for the parameter.
Defining the parameters for a workflow is described in To Define the
Parameters for a Process, page 1405, which is a part of this procedure.

4. While providing the information for a process, you can review the exit
codes for a process. On the Properties pane, scroll down to the bottom
and click Show Description, as shown in the image below.

Detailed information on each exit code for a process is displayed. For
additional information on how you can use exit codes to help create a
workflow, see Determining Process Resolution Using Exit Codes, page 1535.

Exit code -4242424 is a general exit code that is shared among all
processes. This exit code indicates that either the user canceled the
workflow manually, or the reason for the process error cannot be
determined.

5. Repeat the steps for To Create Processes in a Workflow, page 1403 to
   create all the processes required for a workflow.
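The ${ParameterName} syntax used in process properties follows the same convention as Python's string.Template, so the substitution that happens at execution time can be illustrated as follows. The parameter names and values here are hypothetical examples, not System Manager built-ins.

```python
from string import Template

# A property value as it might be typed into a process definition.
property_value = Template("Connect to ${ServerName} on port ${PortNumber}")

# The workflow's parameter set supplies concrete values at execution time.
parameters = {"ServerName": "aps-prod-01", "PortNumber": "34952"}

message = property_value.substitute(parameters)
print(message)
```

Because every process in the workflow draws on the same parameter set, changing a value once (for example, a server name) updates it everywhere the placeholder appears.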

To Define the Parameters for a Process

Each workflow has one set of parameters, which can be defined on the
Parameters pane on the right side. The parameters can be used to provide
values for a process when the workflow is executed. Using parameters can
also let you provide this information in a secure fashion. For more
information on how to include parameters in a workflow, including importing
parameters from a file, see Using Parameters for Processes, page 1536.

To Define the Logical Order of a Workflow

Once you have all the processes required for a workflow, you can begin
to define the logical order of the workflow by creating connectors
between all the processes. Each process in a workflow needs to connect
to another process in the workflow, otherwise the workflow could end
prematurely. You can also define a process as an entry processes of a
workflow, create decisions to direct the logical order of a workflow, and
add comments to provide further information and explanation to a
workflow.

While defining the logical order of a workflow, you may find that additional
processes are required. Processes can be added at any time while creating
a workflow.


1. From the Connectors and processes pane, select from the following
types of connectors:

l Success: The green arrow, to the left, is the success connector. If
  the process is completed with an exit code that is defined as a
  successful status, the process that the success connector points to is
  the next process that is attempted. If you use a success connector
  from a process, it is recommended that you also provide a failure
  connector.

l Failure: The red arrow, in the middle, is the failure connector. If the
current process is completed with an exit code that is defined as a
failure status, the process that the failure connector points to is the
next process that is attempted. If you use a failure connector from a
process, it is recommended that you also provide a success
connector.

l Continue: The white arrow, to the right, is the continue connector.
  Regardless of the status of the exit code for the current process, the
  process that the continue connector points to is the next process that
  is attempted. If you use the continue connector from a process, you
  cannot use any other connectors for that process.

With a connector type selected, click the process to start from and drag
to the process that the workflow should proceed to next. A connector is
drawn between the two processes. If you use the success and failure
connectors for a process, you must repeat this for each connector.

These steps must be repeated for every process. For information on how to
use connectors to define the logical order of a workflow, see Using
Connectors to Create the Logical Order of a Workflow, page 1412.


2. From the Connectors and processes pane, select the Decision icon,
and then click in the workflow area. A decision process is created in the
workflow, as shown in the image below.

Decisions provide the ability to determine the next process in a workflow
based on specific exit codes for the previous process, rather than just
the simple success or failure of a process. For examples of how decisions
can be used to define the logical order of a workflow, see Using Decisions
to Determine the Next Step in a Workflow, page 1415.

Create as many decisions as you need for your workflow. Each decision
should use a success and a failure connector to other processes in the
workflow.

3. To enable or disable a process as an entry process for the workflow,
   in the workflow area, right-click the process and select Toggle Entry
   Process.

An entry process is a process that can be selected as the first process to
attempt when the workflow is executed. For information on how to use entry
processes, see Using Entry Processes to Determine the First Step in a
Workflow, page 1414.

A process that is defined as an entry process is displayed with a green
flag symbol, as shown in the image below.

By default, the first process created in a workflow is defined as an entry
process, and all other processes are disabled as entry processes.


4. To process related tasks one by one, from the Connectors and
   processes pane, select the Iterative Retrieval icon, and then click in
   the workflow area. An iterative retrieval process is created in the
   workflow, as shown in the image below.

With an iterative retrieval process, you can have a workflow retrieve
information from sources including a folder, the contents of a file, or a
System Manager parameter. This information can then be passed to another
process in the System Manager workflow for processing a task. For example,
by using an iterative retrieval process, a folder that stores weekly
update packages can be analyzed to determine how many update packages need
to be applied for a week, and then apply these updates one by one.

For information on how you can use the iterative retrieval process to
perform related tasks one by one in a workflow, see Processing Related
Tasks One by One, page 1422.

5. To create a split execution process, from the Connectors and processes pane, select the Split Execution icon, and then click in the workflow area. A split execution process is created in the workflow, as shown in the image below.

A split execution process lets you start multiple threads in a workflow to perform parallel processing of the tasks. This can speed up a workflow for systems that can handle the parallel processing.


Because tasks are executed simultaneously, you can also determine whether additional tasks in the workflow should be processed once the tasks that were performed in parallel are completed. From the Connectors and processes pane, select the Merge Execution icon, and then click in the workflow area. A merge execution process is created in the workflow, as shown in the image below.

For information on how you can use the split execution and merge execution to handle the parallel processing of tasks in a workflow, see Processing Multiple Tasks Simultaneously, page 1425.

6. To create a comment in the workflow, from the Connectors and processes pane, select the Comment icon, and then click in the workflow area.

Comments can be used to explain the design of a workflow. For example, you can use comments to explain the paths of a decision process, as shown in the image below.


You can add as many comments as needed to explain a workflow. Be aware that the comments are viewable only in System Manager and cannot be displayed to a user while the workflow is being executed. For information on how to use comments to add context to a workflow, see Using Comments to Provide Context and Information to a Workflow, page 1431.

To Define the End of a Workflow

1. Create an exit process, which ends the workflow and can explain how
the workflow ended. From the Connectors and processes pane, select
the Exit Workflow icon, and then click in the workflow area. An exit
process is created in the workflow, as shown in the image below.

2. With the exit process selected, from the Properties pane, you can
choose to have the exit process return the exit code from the previous
process or return a customized exit code. For more information on how
to use exit processes to end a workflow, see Using Exit Processes to
End a Workflow, page 1421.

3. Create connectors from any processes that should be followed by the end of the workflow. All processes should lead to another process in the workflow, a decision, or an exit process.


To Validate a Workflow

1. From the Workflow menu, select Validate Workflow. One of the following messages is displayed:

l If the workflow is listed as valid, click OK.

l If the workflow is not valid, click Details to review the reasons why
the workflow is not valid. Click OK and make any required changes to
the workflow. Once all changes are made, validate the workflow
again.

For information on what is checked when validating a workflow, see Validating a Workflow, page 1432.

2. Click Save Workflow As.

3. Click Save.

To Deploy a Workflow

The steps below show you how to deploy a workflow from within System
Manager. For information on deploying a workflow from the command line or
as a silent process, see Deploying a Workflow, page 1545.

1. From the View menu, select Options.

2. In the Log file path field, type the path of a log file, or use the folder
(browse) icon to browse to a log file. All results of deploying a workflow
are saved to the file that you select.

3. Click OK.

4. From the Workflow menu, go to Execute Workflow > Run Configuration.

5. From the Starting process drop-down list, select the process to act as
the first process in the workflow. You can select only a process that has
been enabled as an entry process for the workflow.

Copyright © 2024 All Rights Reserved 1411


Syst em Ad m in ist r at io n Gu id e

6. In the Parameters area, type any parameters required to execute the processes in the workflow, which may include user names, passwords, and other values. To include multiple parameter and value pairs, you must enclose each parameter in double quotes (" ") and separate each parameter and value pair with a space. For example, "UserName=User1" "Password=1234" is valid syntax to provide values for the parameters UserName and Password.

For information on supplying parameters for a workflow, see Using Parameters for Processes, page 1536.

7. Click Run. As the workflow is being executed, the results of each process are displayed in the Console pane. The results are also saved to the log file that you specified earlier.

If you need to end the workflow prematurely, from the Workflow menu,
select Terminate Execution. A dialog box is displayed asking you to
verify your choice to terminate the execution of the workflow. Click Yes
to terminate the workflow. If some processes in the workflow have
already been completed, those processes are not rolled back.
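As a conceptual aside, the quoting rules for parameter pairs can be illustrated with standard shell-style tokenizing. This is a sketch, not System Manager code; the function name is invented for illustration:

```python
import shlex

def parse_parameters(parameter_string):
    """Split a parameter string such as '"UserName=User1" "Password=1234"'
    into a name/value dictionary. Each quoted token holds one
    NAME=VALUE pair, and pairs are separated by spaces."""
    pairs = {}
    for token in shlex.split(parameter_string):
        name, _, value = token.partition("=")
        pairs[name] = value
    return pairs

print(parse_parameters('"UserName=User1" "Password=1234"'))
# {'UserName': 'User1', 'Password': '1234'}
```

Because the pairs are whitespace-separated, a value containing spaces would also need to stay inside the quotes, which is why each pair is quoted as a whole.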

Using Connectors to Create the Logical Order of a Workflow


When a process in a workflow is completed, the next step to take in a
workflow is determined using connectors. Connectors determine the logical
order of a workflow according to the exit code of the process they are
coming from. You can select from the following types of connectors:

l Success: The green arrow, to the left, is the success connector. If a process is completed with an exit code that is defined as a successful status, the process that the success connector points to is the next process that is attempted. If you use a success connector from a process, it is recommended that you also provide a failure connector. Without a failure connector, the workflow may unexpectedly end with the current process.

l Failure: The red arrow, in the middle, is the failure connector. If a process
is completed with an exit code that is defined as a failure status, the
process that the failure connector points to is the next process that is
attempted. If you use a failure connector from a process, it is
recommended that you also provide a success connector. Without a
success connector, the workflow may unexpectedly end with the current
process.

l Continue: The white arrow, to the right, is the continue connector. Regardless of the status of the exit code for a process, the process that the continue connector points to is the next process that is attempted. If you use the continue connector from a process, you cannot use any other connectors for that process.

When a connector is added to a workflow, it is drawn from one process to another. The arrow for the connector points to the next process to attempt in a workflow, and the start of the connector links to the process that was just completed.
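The connector rules above can be sketched as a small routing function. This is a conceptual illustration only; the dictionary layout is hypothetical, not how System Manager stores connectors:

```python
def next_process(exit_succeeded, connectors):
    """Choose the next process from one process's outgoing connectors.

    connectors maps connector types ('success', 'failure', 'continue')
    to process names. A continue connector fires regardless of the
    exit status and cannot be combined with the other two. Returning
    None means the workflow ends at the current process."""
    if "continue" in connectors:
        return connectors["continue"]
    return connectors.get("success" if exit_succeeded else "failure")

# A success connector without a failure connector: a failed process
# ends the workflow unexpectedly, as noted above.
print(next_process(False, {"success": "Run Command Manager script"}))
# None
```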

It is common to use a combination of success and failure connectors to lead from a process. These connectors allow you to continue with the main workflow if the process was successful, and end the workflow or troubleshoot the problem if the process was unsuccessful. For example, the steps of a workflow shown in the image below show success and failure connectors leading from a decision process.


The first decision process shown in the image above determines if Intelligence Server is operational. If so, the workflow follows the success connector to continue on to a Command Manager script to perform various configurations. If Intelligence Server is not operational, the workflow follows the failure connector on an alternative path to attempt to start Intelligence Server before attempting the Command Manager script.

This example also includes a few continue connectors. For example, the
Start Intelligence Server process uses a continue connector to lead to a
decision process. The decision process is then used to determine the exit
code of the previous process. For examples of how decisions can be used to
define the logical order of a workflow, see Using Decisions to Determine the
Next Step in a Workflow, page 1415.

Using Entry Processes to Determine the First Step in a Workflow
When you deploy a workflow, you can choose which process is the first to
attempt in a workflow. This allows you to skip steps that have already been
accomplished or are not required in certain environments. Being able to
select the process to begin with can also be helpful when creating a
workflow as part of testing and troubleshooting the steps in a workflow.

An entry process is any process in a workflow that can be selected as the first process to attempt in a workflow. You can enable and disable processes in a workflow as available entry processes for the workflow. By default, the first process created in a workflow is defined as an entry process; all other processes are disabled as entry processes.

To be able to select a process in a workflow as the first process to attempt, it must be enabled as an entry process. In the workflow area, right-click a process and select Toggle Entry Process. This enables or disables a process as an entry process for the workflow. A process that is defined as an entry process is displayed with a green flag symbol, as shown in the image below.

Although any process, other than an exit process, can be enabled as an entry process for a workflow, you should limit the steps that are enabled as entry processes for various reasons:

l Some steps in a workflow may not work as entry processes. For example,
a decision process that relies on the exit code of the previous process
should not be enabled as an entry process. This is because the decision
process could not retrieve the required exit code. Without the ability to
retrieve an exit code, the decision process would not be able to perform a
comparison, and the workflow would appear to be unresponsive.

l When deploying a workflow using System Manager, each available entry process is listed. Providing many available entry processes can cause confusion as to which entry process to use to begin the workflow.

l When deploying a workflow, starting at a certain step can cause previous steps to be skipped entirely, depending on the logical order of the workflow. Ensure that skipping certain steps still allows the workflow to be valid in the scenarios that it will be used in.

Using Decisions to Determine the Next Step in a Workflow


When a process is completed, the simple success or failure of a process is not always enough to determine the next step to take in a workflow. Decision processes can be used to compare process exit codes, parameters, and other values to provide additional control over the next step to take in a workflow. You can also use decision processes to check for the existence of a file or folder, as well as whether the file or folder is empty.

To add a decision process to your workflow, from the Connectors and processes pane, select the Decision icon, and then click in the workflow area. A decision process is created in the workflow, as shown in the image below.

To Compare Parameters, Constants, and Exit Codes

1. Select the option Parameter/Exit Code Comparison.

2. Select to use a parameter or an exit code as the first item for the
comparison:

l Parameter or constant: Select this option to provide a parameter or constant for comparison. You must type the parameter name or the constant value.

l Previous process exit code: Select this option to use the exit code
of the previous process in the comparison. Using the exit code of a
process allows you to determine in greater detail why a process was
successful or unsuccessful. This allows you to take more specific
action to troubleshoot potential problems in a workflow.

For example, if you attempt to execute a Command Manager script as part of a workflow, this type of process can fail for various reasons. If the process fails with an exit code equal to four, this indicates that a connection could not be made to perform the script. For this exit code, a decision process could lead to a process to start Intelligence Server. However, if the process fails with an exit code equal to six, this indicates that the script has a syntax error. For this exit code, a decision process could lead to an exit process, so the workflow could be ended and the Command Manager script could be manually reviewed for syntax errors.

3. From the Comparison operator drop-down list, select the operator for
the comparison.

4. In the Comparison item 2 field, type a value. It is common to type a constant value to compare a parameter or exit code to.

5. In the Output parameters area, you can specify a parameter in the Previous process exit code drop-down list. The parameter specified is updated with the value of the exit code from the process that was completed just before the decision process. You can use this technique if you need multiple decision processes to determine the next course of action, which is described in Using Multiple Decision Processes to Troubleshoot a Workflow, page 1418, below.

If you do not need to use the exit code from the previous process later
in the workflow, you can leave the Previous process exit code drop-
down list blank.
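A Parameter/Exit Code Comparison amounts to applying a comparison operator to two values. This sketch mirrors that logic using Python's operator module; the function name and the set of operators offered are assumptions for illustration, not System Manager internals:

```python
import operator

# A plausible mapping of comparison operators to functions; the
# operators System Manager actually offers may differ.
OPERATORS = {
    "==": operator.eq,
    "!=": operator.ne,
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
}

def decide(comparison_item_1, comparison_operator, comparison_item_2):
    """Return True to follow the success connector, False to follow
    the failure connector."""
    return OPERATORS[comparison_operator](comparison_item_1, comparison_item_2)

# Exit code 4 from a Command Manager script: connection failure.
print(decide(4, "==", 4))  # True
```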

To Check for the Existence of a File or Folder

1. Select the option File/Directory Check.

2. In File/Directory Path, type the path to the file or directory to check. You can also click the folder icon to browse to and select a file or directory.

3. From the File/Directory Check Condition drop-down list, select one of the following options:

l Exists: Select this option to check only if the file or directory exists.
The decision process returns as true if the file or directory can be
found.


l Exists and not empty: Select this option to check if the file or
directory exists, and if the file or directory is empty. For files, this
check verifies that some information is in the file. For directories, this
check verifies whether any other files or folders are in the directory.
The decision process returns as true if the file or directory exists, and
the file or directory has some type of content available.
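In Python terms, the two check conditions behave roughly as follows. This is a sketch: the function name and condition strings are invented, and only the described semantics are taken from the guide:

```python
import os

def check_path(path, condition):
    """Mimic the File/Directory Check of a decision process.

    condition is 'exists' or 'exists_and_not_empty'. A file counts
    as non-empty if it contains any data; a directory counts as
    non-empty if it contains any files or subfolders."""
    if not os.path.exists(path):
        return False
    if condition == "exists":
        return True
    if os.path.isdir(path):
        return len(os.listdir(path)) > 0
    return os.path.getsize(path) > 0
```

A True result corresponds to the decision taking its success path; False corresponds to the failure path.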

Using Multiple Decision Processes to Troubleshoot a Workflow


When you are creating a workflow, you can use multiple decision processes
to take more specific action on process exit codes and troubleshoot potential
problems in a workflow.

When a process in a workflow is completed, it can either be a success or a failure. Additionally, certain processes can fail for multiple reasons. Although a single decision process can determine if a process was a success or failure, you need to use multiple decision processes to qualify how a process failed. By qualifying why a process failed, you can more accurately troubleshoot the process and, in some cases, even take action in the workflow itself to fix the problem.

For example, if you attempt to execute a Command Manager script as part of a workflow, this type of process can fail for various reasons. If the process fails with an exit code equal to four, this indicates that a connection could not be made to perform the script. For this exit code, a decision process could lead to a process to start Intelligence Server. However, if the process fails with an exit code equal to six, this indicates that the script has a syntax error. For this exit code, a decision process could lead to an exit process, so the workflow could be ended and the Command Manager script could be manually reviewed for syntax errors. This troubleshooting scenario is shown in the workflow below.


The first decision process (labeled as Success or failure?) can determine whether the Command Manager script was a success or a failure. Additionally, this decision process uses the Previous process exit code to store the exit code for the Command Manager script process into a parameter called Decision. You must use the Previous process exit code to store the exit code for the original Command Manager process so that this exit code can be used in the other decision processes.

In a chain of multiple-decision processes, you should use the Previous process exit code option only in the first decision process. This is because once this exit code is stored in a parameter, you can then reuse that parameter in later decision processes as a comparison item. If you were to mistakenly include the same parameter in the Previous process exit code option for one of the later decision processes, the parameter would be updated to have the exit code of the previous decision process. This would then overwrite the original exit code, which would prevent you from comparing the original exit code in the later decision processes.

If the script was a success, the first decision process allows the workflow to
continue. If the script fails, a second decision process is started. This
second decision process (labeled as Failed to connect to Intelligence
Server?) uses the value previously stored in the Decision parameter to
determine if the exit code is equal to four. With an exit code equal to four,
this decision process can attempt to start Intelligence Server and then
attempt to run the Command Manager script again. If this second decision
process fails, which means the exit code is not equal to four, a third decision
process (labeled as Script syntax error?) is started.

This third decision process again uses the value that was stored in the
Decision parameter by the first decision process to determine if the exit
code is equal to six. With an exit code equal to six, this decision process can
send an email to someone to review the Command Manager script for
syntax errors, and it can attach the script to the email. Once the email is
sent, the workflow is exited. If this final decision process fails, that means
the Command Manager script failed for another reason. In this case, the
workflow is exited for additional troubleshooting.

When using multiple decision processes to qualify the resolution of a previous process, be aware that as long as you store the original exit code in a parameter, you can use as many decision processes as necessary.

Additionally, this technique of using multiple decision processes is a good practice for processes that are important to the overall success or failure of a workflow. However, using this technique for every process in a workflow could cause the workflow to become overly complex and difficult to create and follow. For example, processes that send emails likely do not require involved troubleshooting in the workflow itself, but a process that attempts to start Intelligence Server may benefit from including potential troubleshooting steps.
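Conceptually, the chain of three decisions in this example collapses to a single dispatch on the stored exit code. In this sketch, codes 4 and 6 carry the Command Manager meanings described above; the assumption that 0 indicates success, and all the action names, are illustrative only:

```python
def next_step(decision):
    """Route the workflow on the exit code saved in the Decision
    parameter by the first decision process."""
    if decision == 0:  # assumed success code
        return "continue workflow"
    if decision == 4:  # connection could not be made
        return "start Intelligence Server and retry script"
    if decision == 6:  # syntax error in the script
        return "email script for manual review, then exit"
    return "exit workflow for additional troubleshooting"

print(next_step(6))  # email script for manual review, then exit
```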


Using Exit Processes to End a Workflow


When a workflow is deployed, it is important to be able to notify whoever is
deploying the workflow when and how the workflow has ended. An exit
process allows you to end a workflow and explain how the workflow ended.

To add an exit process to your workflow, from the Connectors and processes
pane, select the Exit Workflow icon, and then click in the workflow area. An
exit process is created in the workflow, as shown in the image below.

With the process selected, from the Properties pane, you can define what
type of exit code is provided when the exit process is reached:

l Use previous process exit code: Select this option to return the exit
code of the process that was completed just before the exit process. If you
use this option, you can use the same exit process from multiple processes
in the workflow, and the exit code returned provides information on
whatever process led to the exit process. For example, the steps of a
workflow shown in the image below show two processes leading to the
same exit process.

When the workflow completes, the same exit process returns the exit code from either the decision process that determines if Intelligence Server can be started, or the process that completes a Command Manager script.


l Use customized exit code: Select this option to define your own exit
code for the exit process by typing in the available field. This allows you to
create exit codes customized to your needs. You can use only numeric
values for the customized exit code.

If you use this option, you may want to use multiple exit processes in a
workflow. You can then define each exit process with a unique exit code.
This can explain what path the workflow took and how it ended. This can
be helpful because workflows can have multiple possible paths including a
successful path where all processes were completed and unsuccessful
paths where the workflow had to be ended prematurely.

Every workflow should include at least one exit process. Ensuring that
processes either lead to another process or to an exit process provides a
consistent expectation for the results of a workflow.
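The two exit-code options reduce to a simple rule, sketched here with hypothetical names:

```python
def workflow_exit_code(previous_exit_code, customized_exit_code=None):
    """Return the code an exit process reports: the customized
    numeric code if one was defined in the Properties pane,
    otherwise the exit code inherited from the previous process."""
    if customized_exit_code is not None:
        return int(customized_exit_code)  # only numeric values are valid
    return previous_exit_code

print(workflow_exit_code(4))       # 4 (inherited from the previous process)
print(workflow_exit_code(4, 101))  # 101 (customized)
```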

Processing Related Tasks One by One


System Manager supports processing related tasks one by one and
determining how many related tasks are available. This can be done using
the iterative retrieval process. With such a process, you can have a workflow
retrieve information from sources including a folder, the contents of a file, or
a System Manager parameter. This information can then be passed to
another process in the System Manager workflow for processing a task.

For example, you have multiple projects that require object updates on an
intermittent schedule. At the start of each week, any updates that are
required are included in a separate update package for each project, and all
update package files are stored in a folder. The number of update packages
required for a week varies depending on requirements of the various
projects. By using the iterative retrieval process, the folder that stores the
weekly update packages can be analyzed to determine how many update
packages need to be applied for the week. The workflow shown below then
retrieves these update packages from the folder one by one, applying the
update package, emailing the project administrator, and using the iterative
retrieval process to retrieve the next update package.


The iterative retrieval process automatically determines the number of update packages in the folder, which allows you to run the same workflow each week without having to modify the workflow to account for varying numbers of update packages from week to week. Once all update packages are processed and no more update packages can be retrieved, the iterative retrieval process exits with a failure exit code to signify that no more information is available for retrieval.
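The weekly loop can be sketched as follows. This is not System Manager code: `apply_package` stands in for the import-package and email steps, and the returned count stands in for the failure exit code that signals exhaustion:

```python
import os

def process_packages(folder, apply_package):
    """Retrieve files from a folder one by one, handing each to the
    next step, until no files remain (the point at which the real
    iterative retrieval process exits with a failure code)."""
    processed = 0
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            apply_package(path)  # e.g. import the package, email the admin
            processed += 1
    return processed
```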

With the process selected, from the Properties pane, you can define how the
iterative retrieval process retrieves information to be processed as part of a
System Manager workflow:

l Files in Directory: Select this option to retrieve files from a folder. When retrieving files from a folder, be aware that each time a file is retrieved, it is stored in the same parameter and thus provided to the same process in the System Manager workflow. This means that the System Manager process that uses these files must be able to process all files in a folder. In the example update package scenario, the folder must contain only update packages. If, for example, a text file was stored in the folder, retrieving this text file and passing it to the import package process would cause an error in the workflow.

Click the folder icon to browse to and select a folder, or type the full path
in the Directory Name field. You must also determine how the files are
retrieved, using the following options:

l File Names Only: Select this option to retrieve only the name of the file,
including the file extension. If you clear this check box, the full file path
to the file is retrieved, which is commonly required if you need the
location of the file for other processes in the System Manager workflow.

l All Files: Select this option to retrieve files from only the top-level
folder.

l All Files and Subfolders Recursively: Select this option to retrieve files from the top-level folder and all subfolders.

l Content of File: Select this option to retrieve the contents of a file. Click
the folder icon to browse to and select a file, or type the full path in the
File Name field. You must also determine if a separator is used to
segment the content within the file, using the following option:

l Separator: Select this check box to retrieve the contents of a file in multiple, separate segments. Type the separator character or characters that are used in the file to denote separate sections of content. For example, you can type a comma (,) if the content is separated using commas. You can also use characters such as \n, \t, and \s to represent the new line, tab, and space separators, respectively.

If you clear this check box, the entire contents of the file are returned in a single retrieval.

l Parameter: Select this option to retrieve the contents of a parameter. From the Parameter Name drop-down list, select a parameter that is included in the System Manager workflow. You must also determine if a separator is used to segment the content within the parameter, using the following option:

l Separator: Select this check box to retrieve the contents of a parameter in multiple, separate segments. Type the separator character or characters that are used in the parameter to denote separate sections of content. For example, you can type a comma (,) if the content is separated using commas. You can also use the characters \n, \t, and \s to represent the new line, tab, and space separators, respectively.

If you clear this check box, the entire contents of the parameter are returned in a single retrieval.

l Output Parameter: The information retrieved must be stored in a parameter so that it can be passed to another process in the System Manager workflow. Select an output parameter from the drop-down list.
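The separator behavior for files and parameters can be sketched like this; the escape handling for \n, \t, and \s follows the description above, and the function name is invented:

```python
# Escape sequences that may stand in for literal separator characters.
ESCAPES = {"\\n": "\n", "\\t": "\t", "\\s": " "}

def split_content(content, separator=None):
    """Return the segments an iterative retrieval process would hand
    to the output parameter one at a time. With no separator, the
    entire content is a single retrieval."""
    if separator is None:
        return [content]
    return content.split(ESCAPES.get(separator, separator))

print(split_content("pkg1,pkg2,pkg3", ","))  # ['pkg1', 'pkg2', 'pkg3']
```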

Processing Multiple Tasks Simultaneously


System Manager supports executing tasks in parallel in a workflow. This
takes advantage of a system's processing power to complete the tasks more
quickly. This is done by using the split execution process. With a split
execution process, you can have a workflow process two or more tasks at
the same time. The split execution process shown below takes a linear
workflow and begins to process three tasks in parallel.


When using split executions to process multiple tasks in a workflow at the same time, consider the following best practices:

l Ensure that the tasks do not depend on each other. Workflows are often
linear processes that require that one task is completed before starting
another task. For example, you cannot run certain Command Manager
scripts until Intelligence Server is started. This means a task to start
Intelligence Server should not be done in parallel with other tasks that
require Intelligence Server to be operational.

l Consider the amount of processing that is required to perform the tasks in parallel, relative to your available system resources. While performing multiple tasks at once can save time, it can also slow down overall performance if the required system resources are not available. Even if a workflow is created to start multiple tasks, you can limit the number of tasks that are performed in parallel to prevent overloading the system, as described in Limiting the Number of Parallel Tasks to Prevent Over Consumption of System Resources, page 1429.

l Split execution processes can use only the continue connector (see Using
Connectors to Create the Logical Order of a Workflow, page 1412) to link
to new tasks to perform in parallel. You must also use two or more
continue connectors, as a split execution is meant to split a workflow into
at least two paths to perform in parallel.

Once a workflow execution is split into multiple paths, each task is performed independently of the other tasks. However, while the tasks are done independently, all the tasks may need to be completed before performing other tasks later in the workflow. For example, you can create a DSN and start Intelligence Server as separate tasks at the same time, but you may need both of those tasks to be fully complete before starting another task that requires the DSN to be available and Intelligence Server to be operational. To support this workflow, you can use the merge execution process to combine multiple paths back into one workflow path. For example, the merge execution process shown below combines the three tasks performed in parallel back into one execution after the three tasks are completed.

For each merge execution process, you must supply a time out value. This
time out value is the amount of time, in seconds, that is allowed to complete
all the parallel tasks that are connected to the merge execution process. The
time starts to count down once the first task connected to a merge execution
process is completed. How the remaining tasks connected to the merge
execution are processed depends on the connectors used to continue from
the merge execution process:

It is recommended that you use the success and failure connectors to exit
the merge process:

l Success connector: If each task that is connected to a merge execution is completed in the allotted time, the workflow continues to the configuration that is linked to the merge execution with the success connector.

l Failure connector: If at least one task connected to the merge execution is
not completed in the allotted time, or all other paths have been ended
without reaching the merge execution process, the workflow continues to
the configuration that is linked to the merge execution with the failure
connector.
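The split and merge behavior described above can be pictured with a short Python sketch using concurrent.futures. This is an illustrative analogy, not System Manager's implementation, and it simplifies one detail: the timeout here is measured from the start of the wait rather than from the completion of the first task.

```python
import concurrent.futures
import time

MERGE_TIMEOUT = 5  # seconds allowed for all parallel paths to complete


def run_split_merge(tasks, timeout=MERGE_TIMEOUT):
    """Run tasks in parallel (split) and wait for all of them (merge)."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]
        # Merge execution: wait for every path, bounded by the timeout.
        done, not_done = concurrent.futures.wait(futures, timeout=timeout)
    # Success connector if every path finished in time, failure otherwise.
    return "success" if not not_done else "failure"


def create_dsn():
    time.sleep(0.1)  # stand-in for creating a DSN
    return "dsn-ready"


def start_iserver():
    time.sleep(0.2)  # stand-in for starting Intelligence Server
    return "server-up"


# Both paths finish well within the timeout, so the success connector is taken.
print(run_split_merge([create_dsn, start_iserver]))
```

With a timeout shorter than the slowest task, `run_split_merge` returns "failure" instead, which corresponds to following the failure connector.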

Although merge execution processes are helpful to continue the workflow
when certain tasks are completed, you do not have to merge any or all paths
that are started with a split execution process. Each task performed in
parallel with other tasks can come to separate completions using standard
exit processes (see Using Exit Processes to End a Workflow, page 1421).
For example, in the workflow shown below, both DSN creation configurations
must be completed to also process the Execute SQL configuration. However,
the path that starts with an Intelligence Server startup configuration
continues on to completion regardless of whether any of the other tasks are
completed.


Limiting the Number of Parallel Tasks to Prevent Overconsumption of System Resources
While creating a workflow, the split execution process can be used to start
as many tasks at the same time as required. However, each additional task
that is attempted in parallel requires additional system resources. If your
system cannot handle the additional processing requirements to complete
all the tasks in parallel, this can slow down the workflow execution and the
entire system's performance.

To avoid these types of performance issues, you can limit the number of
tasks that can be processed at the same time for a workflow. This ensures
that even if a workflow requests a certain number of tasks to be processed at
the same time, only the specified limit is allowed to run at a time.

The default value for the limit is the greater of either the number of CPUs for
the system or 2. Although the number of CPUs for the system is a
reasonable default, be aware of the following:

l Systems can process more tasks simultaneously than the number of CPUs
available.

l Systems can have multiple CPUs, but this does not necessarily mean all
the CPUs are available to the user who is deploying a workflow. For
example, consider a Linux machine with eight CPUs available. In this
scenario, the Maximum Threads default value is 8. However, the user
account that is being used to deploy the workflow may be allowed to use
only one CPU for the Linux machine. When determining the maximum
number of tasks to run simultaneously in System Manager workflows, you
should understand details about system resource configuration.

As a workflow is deployed, any tasks over the set limit are put into a queue.
For example, if a split execution process attempts to start five tasks, but the
Maximum Threads option is set at three, two of the tasks are immediately
put in the queue. Once a task is completed, the next task in the queue can
begin processing.


In terms of queueing and processing tasks, each separate configuration is
considered a separate task. Once a configuration is completed, the
configuration that it links to next might not be the next configuration to be
processed. For example, a split execution process attempts to start five
tasks, as shown in the image below.

The Maximum Threads option is set at three, which means that two of the
tasks are immediately put in the queue. Assume then that one of the
three tasks being processed (Task A) comes to completion, and it links to
another task in the workflow (Task B). Rather than immediately starting
to process Task B, the workflow must first process the tasks that were
already included in the queue (Task E and Task F). This puts Task B
behind the two existing tasks already in the queue.
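As a rough illustration of this thread-limit behavior (not System Manager's actual scheduler), a Python thread pool follows the same pattern: tasks submitted beyond the limit wait in a FIFO queue until a slot frees up, and the default limit of the greater of the CPU count or 2 can be computed directly.

```python
import concurrent.futures
import os

# Default limit described above: the greater of the CPU count or 2.
default_limit = max(os.cpu_count() or 1, 2)


def run_with_limit(task_names, max_workers=3):
    """Submit all tasks at once; only max_workers run concurrently,
    the rest wait in a first-in, first-out queue."""
    started = []

    def task(name):
        started.append(name)  # record the order in which tasks begin
        return name

    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        for name in task_names:
            pool.submit(task, name)
    return started


# Five tasks with a limit of three: the last two start only after a slot frees up.
print(run_with_limit(list("ABCDE")))
```

The exact start order of the queued tasks depends on thread timing, but all five tasks are eventually processed, never more than three at a time.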

To Define the Parallel Task Limit

1. From the View menu, select Options.

2. In the Maximum Concurrent Threads field, type the maximum number
of tasks that can be processed at the same time. The default value for this
option is the greater of either the number of CPUs for the system or 2.

3. Click OK.


Using Comments to Provide Context and Information to a Workflow
Workflows can be made more helpful if you add information about why
certain steps are performed or explain the logical order of the workflow. You
can include this type of information in a workflow by adding comments to the
workflow.

To add a comment to your workflow, from the Connectors and processes
pane, select the Comment icon, and then click in the workflow area. A
comment is created in the workflow.

You can then type the information for the comment. You can also resize the
comment and move it to the required location in a workflow.

You can use comments to explain the workflow's design. For example,
you can use comments to explain the paths of a decision process, as shown
in the image below.

Another benefit of using comments is to provide information directly in the
workflow area. For example, the image below shows a workflow with a
comment that explains the Command Manager script process.


The same information in the comment is included in the description for the
Command Manager script process. However, providing the information in a
comment allows this information to be displayed directly in the workflow
area.

You can add as many comments as needed to explain a workflow. Be aware
that the comments are viewable only in System Manager and cannot be
displayed to a user while the workflow is being deployed.

Validating a Workflow
Validating a workflow is an important step in creating a workflow. Although
validating a workflow does not guarantee that every process will be
completed successfully when deploying a workflow, it helps to limit the
possibility for errors during the deployment.

While you are creating a workflow, you can use System Manager to validate
the workflow. This validation process performs the following checks on the
workflow:

l The workflow contains at least one entry process. This is required so that
the workflow has at least one process to use as the first step in the
workflow.

l All processes have values for all required properties. For example, if you
are creating a DSN, you must supply a name for the DSN, the machine that
stores the data source, the port number, and other required values for the
data source type.

The validation checks only that values exist for all required properties, not
whether the values are valid for the process.
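The two automated checks can be pictured with a small Python sketch. The workflow structure, field names, and process names below are invented for illustration; they are not System Manager's internal representation.

```python
def validate(workflow):
    """Return a list of problems; an empty list means the workflow is valid."""
    problems = []
    # Check 1: the workflow contains at least one entry process to serve
    # as the first step.
    if not any(p.get("entry") for p in workflow["processes"]):
        problems.append("no entry process")
    # Check 2: every required property has a value. Only the existence of
    # a value is checked, not whether the value is valid for the process.
    for p in workflow["processes"]:
        for prop in p.get("required", []):
            if not p.get("properties", {}).get(prop):
                problems.append(f"{p['name']}: missing {prop}")
    return problems


wf = {"processes": [
    {"name": "Create DSN", "entry": True,
     "required": ["dsn_name", "server", "port"],
     "properties": {"dsn_name": "MD", "server": "db01", "port": 1433}},
]}
print(validate(wf))  # an empty list corresponds to "The workflow is valid"
```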

To use System Manager to validate a workflow, from the Workflow menu,
select Validate Workflow. One of the following messages is displayed:

l The workflow is valid: This message is displayed if it passes all the
checks listed above. Click OK to close the message.


l Incomplete workflow: This message is displayed if parts of the workflow
are not valid. Click Details to display the invalid portions of the workflow.
Continue to fix the workflow and perform the validation until you see the
message, "The workflow is valid." Click OK to close the message.

Additional validations that can be done manually on a workflow are
described in Manually Validating a Workflow, page 1433 below.

Manually Validating a Workflow


As part of validating your workflow, you should manually validate additional
aspects of the workflow. These additional validations help reduce the
potential for issues to develop when deploying a workflow. This includes the
following validations:

l Each process has either one continue connector or one success connector
and one failure connector leading from it. This ensures that each process
continues on to another step in the workflow regardless of whether the
process is successful or unsuccessful. For more information on correctly
supplying connectors for a workflow, see Using Connectors to Create the
Logical Order of a Workflow, page 1412.

l The workflow has at least one exit process. Exit processes verify that a
workflow deployment has completed. For more information on how you can
use exit processes in a workflow, see Using Exit Processes to End a
Workflow, page 1421.

l Step through the logical order of the workflow and double-check that all
the possible paths make sense with the purpose of the workflow. You can
also use this as an opportunity to check for parts of the workflow that could
become cyclical. For example, in the workflow shown in the image below,
a potential cyclical path is highlighted with purple, dashed arrows.


Although this cyclical path would let the workflow attempt to start
Intelligence Server multiple times, if Intelligence Server cannot be started
successfully, the workflow could continue to execute until it was manually
ended. An alternative would be to modify the logical order of the workflow
to attempt to start Intelligence Server a second time, but end the workflow
if the second attempt also fails. This new path is shown in the image
below.


As an alternative to modifying a workflow to avoid loops, you can also use
the Update Parameters configuration (see Performing System Processes,
page 1493). This configuration lets you update a parameter, including
incrementally, which allows you to end a loop in a workflow after a
specified number of attempts (see Supporting Loops in a Workflow to
Attempt Configurations Multiple Times, page 1435 below).

Supporting Loops in a Workflow to Attempt Configurations Multiple Times
When deploying a workflow, it may be necessary to perform the same
configuration multiple times. For example, if you attempt to start Intelligence
Server but it does not start successfully, you can continue to attempt to start
Intelligence Server until it starts successfully, or the workflow is ended. To
support this type of a workflow, you can include a loop in your workflow.

Loops should generally be avoided in workflows because they can cause a
workflow to continue to perform the same actions repeatedly with no way to
end the workflow. However, you can use decision processes and the Update
Parameters process (see Performing System Processes, page 1493) to
support loops in workflows. By including the Update Parameters process in a
workflow, you can keep track of how many times a loop in a workflow is
repeated. After a certain number of attempts, the loop can be exited even if
the required configuration was not completed successfully.

For example, the workflow shown below uses a loop to attempt to start
Intelligence Server, multiple times if necessary, before performing a
Command Manager script that requires Intelligence Server to be operational.

With the workflow shown above, if Intelligence Server starts successfully the
first time, the Command Manager script is executed next and the loop is not
needed. However, if starting Intelligence Server is not successful, the first
thing that occurs is that the Update Loop Counter configuration updates a
parameter for the workflow. A parameter named Loop is included in the
workflow with the initial value of zero, and the Update Loop Counter
configuration updates this parameter with the following statement:

${Loop} + 1

Using this statement, the Loop parameter is increased by one each time the
Update Loop Counter configuration is executed. Once the Loop parameter
has been increased, a decision process is used to check the value of the
Loop parameter. If the Loop parameter is less than three, the configuration
to start Intelligence Server is attempted again. This allows the configuration
to start Intelligence Server to be attempted three times. If Intelligence
Server still cannot start successfully, the loop is discontinued and the
workflow is stopped.
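The loop described above maps to straightforward control flow. The following Python sketch mirrors it; start_server is an invented stand-in for the Intelligence Server startup configuration, and the returned strings stand in for the workflow's next steps.

```python
MAX_ATTEMPTS = 3  # the decision process checks whether Loop is less than 3


def run_startup_loop(start_server):
    loop = 0  # the "Loop" parameter, with an initial value of zero
    while True:
        if start_server():
            # Success: continue to the Command Manager script step.
            return "execute Command Manager script"
        loop += 1  # Update Loop Counter applies the statement ${Loop} + 1
        if loop >= MAX_ATTEMPTS:
            # Three attempts have failed, so the loop is discontinued.
            return "stop workflow"


# If the first startup attempt succeeds, the loop is never needed.
print(run_startup_loop(lambda: True))
```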

To use split and merge executions in a workflow that uses logical loops, see
the discussion of split and merge execution processes, page 1426.

Sample Workflows: Templates


System Manager includes sample, template workflows that you can use to
learn how to create workflows in System Manager and use as building blocks
for your own workflows.

From the System Manager home page, you can access the template
workflows in the Templates section. To choose from the full list of template
workflows, click the More Templates folder.

Once the workflow is open in System Manager, you can select each process
in the workflow to review the task that it performs for the workflow. You can
also modify the properties of each process so that the workflow can be used
to configure and administer your environment. For information on the
properties available for each type of process available using System
Manager, see Defining Processes, page 1447.


Template: Configuring Intelligence Server


The template 01ConfigureIntelligenceServer.smw can be used to
configure Intelligence Server. The template includes the following tasks:

l Creates a new DSN to store a metadata.

l Configures Intelligence Server to connect to the new DSN.

l Creates a new project source, which allows access to the metadata.

Before using this template, be sure the following prerequisites are met:

l A database location used to store a MicroStrategy metadata. By default,
the template creates a DSN for a Microsoft SQL Server database. You can
swap in a process that matches the database type that you use to store
your metadata. For a list of processes that can be used to create DSNs,
see Creating Data Source Names, page 1471.

l Separate response files used to connect Intelligence Server to the new
DSN and to create a new project source. These response files can be
created using MicroStrategy Configuration Wizard, as described in the
Installation and Configuration Help.

Template: Configuring MicroStrategy Suite


The template 02ReportingSuiteSetup.smw can be used to configure
MicroStrategy Suite. The MicroStrategy Suite is a MicroStrategy offering
that lets you evaluate MicroStrategy as a departmental solution. This
template includes the following tasks:

l Creates a new metadata.

l Configures Intelligence Server to connect to the new metadata.

l Creates a new project source, which allows access to the new metadata.

l Creates a new database instance for the MicroStrategy Suite.


l Creates a new project for the MicroStrategy Suite and connects it to the
new database instance.

l Sends an email notification that describes the success of configuring the
MicroStrategy Suite.

Before using this template, be sure the following prerequisites are met:

l Access to the MicroStrategy Suite software.

l Separate response files used to create a new metadata, connect
Intelligence Server to the new DSN, and create a new project source.
These response files can be created using MicroStrategy Configuration
Wizard, as described in the Installation and Configuration Help.

l Separate Command Manager scripts used to create a database instance,
create a new project, and connect the new project to the new database
instance. These scripts can be created using Command Manager, as
described in Chapter 15, Automating Administrative Tasks with Command
Manager.

Template: Upgrading MicroStrategy Web, Including Customizations
The template 03UpgradeWebWithCustomizations.smw can be used to
upgrade your MicroStrategy Web environment. This upgrade workflow also
supports including any customizations that you made to your MicroStrategy
Web environment. This template includes the following tasks:

l Stops the web application server that hosts MicroStrategy Web.

l Creates a backup copy of MicroStrategy Web customization files.

l Creates a copy of the new web archive (.war) file to deploy the new
version of MicroStrategy Web.

l Restarts the web application server, which extracts the contents of the
.war file.


l Copies the MicroStrategy Web customization files into the newly deployed
environment.

l Stops and then restarts the web application server, which deploys the new
MicroStrategy Web environment, including any customizations.

Before using this template, be sure the following prerequisites are met:

l Access to any MicroStrategy Web customizations that are to be applied to
the upgraded MicroStrategy Web environment. Review the MicroStrategy
Software Development Library (MSDL) before upgrading MicroStrategy
Web customizations for important upgrading best practices information.

l Access to the .war file for the version of MicroStrategy Web to upgrade to.

l A file used to start the web application server. By default, the template
expects an Apache Tomcat web application server. You can swap in a file
that starts your web application server.

Template: Upgrading a Metadata and Executing an Integrity Test


The template 04UpgradeMetadata.smw can be used to upgrade a
metadata and execute an integrity test after the upgrade is complete. This
template includes the following tasks:

l Creates a backup copy of the metadata. An email is sent if a backup copy
cannot be created.

l Upgrades the metadata. An email is sent if the upgrade is not completed
successfully. As part of a successful upgrade, the backup file is
compressed into a zip file, and the original backup file is deleted.

l Executes an Integrity Manager baseline test on the upgraded metadata.

Before using this template, be sure the following prerequisites are met:

l Access to the metadata, and a SQL statement that can be used to create a
copy of the metadata. By default, the template expects the metadata to be
stored in a Microsoft SQL Server database. You can change the supplied
SQL script to reflect the SQL syntax required for the database
management system that you use to store your metadata.

l A response file used to upgrade the metadata. This response file can be
created using MicroStrategy Configuration Wizard, as described in the
Installation and Configuration Help.

l A test file that defines how to perform the automated test of reports and
documents for the metadata. This file can be created using Integrity
Manager, as described in Creating an Integrity Test, page 1580.

Template: Retrieving the Status of Intelligence Server


The template 05IntelligenceServerAvailability.smw can be used to
retrieve the status of Intelligence Server and start Intelligence Server if it is
not operational. This template includes the following tasks:

l Retrieves the status of Intelligence Server.

l Attempts to start Intelligence Server if it is not running.

l Sends an email notification that describes the success or failure of starting
Intelligence Server.

Before using this template, be sure the following prerequisite is met:

Access to an Intelligence Server.

Template: Migrating Objects Between Two Projects and Executing an Integrity Test
The template 06ObjectMigration.smw can be used to migrate objects
between two projects and execute an integrity test after the object migration
is complete. This template can be used to migrate a project from a testing
environment to a production environment. This template includes the
following tasks:


This template is not provided if System Manager is installed on a UNIX or
Linux environment.

l Retrieves the status of Intelligence Server and attempts to start
Intelligence Server if it is not operational. If Intelligence Server cannot be
started, an email is sent and the workflow is ended.

l Merges two projects into a single project.

l Applies an update package to the merged project. An update package is a
file containing a set of object definitions and conflict resolution rules.

l Restarts Intelligence Server and executes an Integrity Manager test on the
merged project.

l Sends an email notification if any of the project migration steps fails.

Before using this template, be sure the following prerequisites are met:

l A file that defines how the duplicate projects are to be merged. This file is
created using the Project Merge Wizard. For steps on how to create this
configuration file, see Merge Projects with the Project Merge Wizard, page
811.

l An update package file that defines how a project is to be duplicated. This
file is created using MicroStrategy Object Manager. For steps on how to
create this update package, see Copy Objects in a Batch: Update
Packages, page 786.

l A test file that defines how to perform the automated test of reports and
documents for the project. This file can be created using Integrity
Manager, as described in Creating an Integrity Test, page 1580.

Template: Including a Cloud-Based Environment to Increase Intelligence Server Capacity
The template 07AddIntelligenceServerCapacity.smw can be used to
include a cloud-based environment to increase Intelligence Server capacity.
This template includes the following tasks:


l Launches an Amazon EC2 cloud-based environment, which can be used to
increase Intelligence Server capacity.

l Ensures that MicroStrategy Listener is running, which is required to
communicate with Intelligence Server.

l Attempts to start Intelligence Server.

l Searches through a response file used to create a project source. The
Intelligence Server machine name is modified to match the machine name
for the cloud-based environment.

l Creates a new project source to connect to the cloud-based environment.

l Searches through a Command Manager script file used to join the cloud-
based environment to an Intelligence Server cluster. The Intelligence
Server machine name is modified to match the machine name for the
cloud-based environment.

l Joins the cloud-based environment to an Intelligence Server cluster.

l Sends an email notification that describes the success of adding the
cloud-based environment to the Intelligence Server cluster.

Before using this template, be sure the following prerequisites are met:

l Access to an Amazon EC2 cloud-based environment, including all relevant
support files and information. Refer to your third-party Amazon EC2
documentation for information on the requirements to support a cloud-
based environment.

l A response file used to create a new project source. This response file can
be created using MicroStrategy Configuration Wizard, as described in the
Installation and Configuration Help.

l A Command Manager script file used to join the cloud-based environment
to an Intelligence Server cluster. This script can be created using
Command Manager, as described in Creating and Executing Scripts, page
1556.


Template: Restarting Intelligence Server


The template 08IntelligenceServerRe-Start.smw can be used to
restart Intelligence Server and notify users of the scheduled restart. This
template includes the following tasks:

l Sends an email to users as a warning that Intelligence Server is about to
be restarted.

l Attempts to restart Intelligence Server and determines the success or
failure of the restart.

l Sends an email to either the administrator or the broader user community,
depending on whether the restart was successful.

Template: Updating Projects with Multiple Update Packages


The template 09MigrateMultiplePacakgesUsingLoop.smw can be
used to roll back a recent update package for multiple projects as well as
apply a new update package. This template also serves as an example of
successfully using loops in a System Manager workflow. This template
includes the following tasks:

l Downloads update package files from an SFTP server.

l Creates a parameter that determines how many times the loop in the
workflow has been completed. This parameter is used to choose the
correct update packages and to exit the loop in the workflow at the proper
time.

l Checks for all required update package files and undo package files.

l Sends an email to an administrator if some package files are not available.

l Modifies a Command Manager script to select a different undo package
and update package for each loop through the workflow.

l Creates an undo package to roll back changes that were made to a project
using an update package.


l Completes the undo package to roll back changes for the project, and then
completes a new update package to update the objects for the project.

l Sends an email to an administrator verifying that the updates to the project
were completed.

l Continues to loop through the workflow to do the same type of updates for
other projects, or ends the workflow after updating four projects with these
changes.

Before using this template, be sure the following prerequisites are met:

l Undo package files that define how to roll back the changes made by an
update package for a project. This file is created using MicroStrategy
Object Manager. For steps on how to create this undo package, see Copy
Objects in a Batch: Update Packages, page 786.

l Update package files that define how a project is to be updated. This file is
created using MicroStrategy Object Manager. For steps on how to create
this update package, see Copy Objects in a Batch: Update Packages,
page 786.

l Command Manager script files that are used to create and administer the
undo package files. These script files can be created using Command
Manager, as described in Creating and Executing Scripts, page 1556.

Template: Publishing Intelligent Cubes and Workflow Troubleshooting
The template 10PublishCubesWithValidation.smw can be used to
publish Intelligent Cubes, and as an example of a workflow that uses the
Decision process to troubleshoot the System Manager workflow. This
template includes the following tasks:

l Employs an iterative retrieval process to retrieve information from a text
file on the Intelligent Cubes to be published.

l Uses Command Manager script files to publish Intelligent Cubes.


l Uses multiple Decision processes to determine the success or failure of
publishing the Intelligent Cubes.

l Sends emails about the success or failure of publishing the Intelligent
Cubes.

Before using this template, be sure the following prerequisites are met:

l A text file that includes the information required to publish the Intelligent
Cubes. Each line of the file must include two columns. The first column
provides the Intelligent Cube name, and the second column provides the
full path to the Command Manager script files used to publish the
Intelligent Cube.

l Two Command Manager script files used to publish Intelligent Cubes.
These script files can be created using Command Manager, as described
in Creating and Executing Scripts, page 1556.
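A sketch of reading such a two-column file in Python. The documentation does not specify the delimiter, so this example assumes whitespace-separated columns; the cube names and script paths are invented.

```python
from io import StringIO

# Invented file contents: column 1 is the Intelligent Cube name, column 2
# is the full path to the Command Manager script that publishes it.
sample_file = StringIO(
    "SalesCube /scripts/publish_sales.scp\n"
    "InventoryCube /scripts/publish_inventory.scp\n"
)

cubes = []
for line in sample_file:
    if not line.strip():
        continue  # skip blank lines
    name, script_path = line.split(None, 1)  # split on the first whitespace run
    cubes.append((name, script_path.strip()))

print(cubes)
```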

Template: Launching Cloud-Based Environments in Parallel


The template 11ParallelExecutionOfWorkflows.smw can be used to
launch multiple cloud-based environments. It is also an example of using
parallel execution in System Manager. This template includes the following
tasks:

l Uses a split execution process to start two threads for the workflow to
perform parallel processing.

l Launches two Amazon EC2 cloud-based environments in parallel, which
can be used to increase Intelligence Server capacity.

l Checks to see if the cloud-based environments were launched
successfully.

l Sends emails about the success or failure of launching the cloud-based
environments.

Before using this template, be sure the following prerequisite is met:


Access to an Amazon EC2 cloud-based environment, including all relevant
support files and information. Refer to your third-party Amazon EC2
documentation for information on the requirements to support a cloud-based
environment.

Template: Creating and Sharing Update Packages


The template 12CreateSharePackage.smw can be used to create a
project update package and share that update package on an SFTP server.
This template includes the following tasks:

l Retrieves the status of Intelligence Server.

l Attempts to start Intelligence Server if it is not running.

l Uses an .xml file to create an update package.

l Uploads the update package file to an SFTP server.

l Sends an email notification about the availability of the update package.

Before using this template, be sure the following prerequisites are met:

l Access to an Intelligence Server.

l Access to an SFTP server to store the update package.

l An .xml file that can be used to create an update package.

l A text file that includes a list of people to notify about the availability of the
newly created update package.

Defining Processes
The tasks that are completed as part of a System Manager workflow are
determined by the processes that you include. System Manager provides a
set of MicroStrategy and non-MicroStrategy processes to include in a
workflow. These processes can be categorized as follows:


l Configuring MicroStrategy Components, page 1448

l Managing Projects, page 1453

l Administering Intelligence Servers and other MicroStrategy Services,
page 1459

l Automating Administrative Tasks, page 1462

l Verifying Reports and Documents, page 1467

l Creating Data Source Names, page 1471

l Completing a Separate System Manager Workflow, page 1489

l Performing System Processes, page 1493

l Administering Cloud-Based Environments, page 1518

System Manager workflows often require information about the result of a
process to determine the next step to follow in the workflow. An exit code is
provided when a process is completed that is part of a System Manager
workflow. This exit code indicates whether the process was successful. For
additional information on how to review the exit codes for a process, see
Determining Process Resolution Using Exit Codes, page 1535.
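The same convention is familiar from ordinary process control. In this illustrative Python sketch (not System Manager's implementation), a child process's exit code decides which branch, corresponding to a success or failure connector, would be taken; the child commands are trivial placeholders.

```python
import subprocess
import sys


def next_connector(returncode):
    """Map a process exit code to the connector a workflow would follow."""
    # An exit code of 0 conventionally indicates success.
    return "success connector" if returncode == 0 else "failure connector"


# Run trivial child processes that exit with codes 0 and 2.
ok = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
bad = subprocess.run([sys.executable, "-c", "raise SystemExit(2)"])

print(next_connector(ok.returncode))   # success connector
print(next_connector(bad.returncode))  # failure connector
```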

Although all necessary configuration information can be provided for each
process, some scenarios require that the details about the process be
provided when the workflow is executed. Parameters provide the flexibility of
including required configuration information when the workflow is executed.
For information on how parameters can be used to provide configuration
information for a process, see Using Parameters for Processes, page 1536.

Configuring MicroStrategy Components


After installing MicroStrategy, a few configurations need to be completed to
set up a MicroStrategy environment.


Creating Metadata, History List, and Statistics Repository Tables


You can create metadata, History List, and statistics repositories as part of
the process to configure a MicroStrategy environment. Repositories for your
metadata, History List, and statistics tables are created in the data source
specified by the DSNs that you connect to.

For background information on creating metadata, History List, and statistics repositories, see the Installation and Configuration Help.

To perform these types of configurations, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to create metadata, History List, and statistics repositories:

Metadata, History List, and statistics repositories can be part of the same
process or included in their own separate processes in a System Manager
workflow. Including them as one process allows you to do all these
configurations in a single process. However, including them in separate
processes allows you to find and fix errors specific to each separate type of
repository configuration and perform each configuration at different stages
of the workflow.

l Response File: The MicroStrategy Configuration Wizard response file that defines how to create metadata, History List, and statistics repositories. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.
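The response file referenced above is plain text generated by the Configuration Wizard. As a rough illustration only — the option names below are invented, and the file should always be generated by the Configuration Wizard rather than written by hand — a fragment might look like this:

```ini
; Hypothetical response-file fragment (invented option names).
; Generate the real file with the Configuration Wizard, which writes
; the exact options supported by your MicroStrategy version.
[Repository]
CreateMetadataTables=True
CreateHistoryListTables=True
CreateStatisticsTables=True
MetadataDSN=MD_Repository
HistoryListDSN=HL_Repository
StatisticsDSN=Stats_Repository
```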

Configuring Intelligence Server


You can create, use, or delete server definitions that are used to provide a
connection between Intelligence Server and your MicroStrategy metadata.


For background information on configuring Intelligence Server, see the Installation and Configuration Help.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to create, use, or delete server definitions to configure Intelligence Server:

l Response File: The MicroStrategy Configuration Wizard response file that defines how to configure Intelligence Server. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.

Creating Project Sources


You can create project sources as part of your System Manager workflow. A
project source contains the configuration information that each client system
requires to access an existing project. It stores the location of the metadata
repository and Intelligence Server that is used to run the project. A project
source determines how Developer, MicroStrategy Web, and other client
applications access the metadata.

For background information on creating project sources, see the Installation and Configuration Help.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to create project sources:

l Response File: The MicroStrategy Configuration Wizard response file that defines how to create project sources. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.

Upgrading Intelligence Server Components and Migrating History List Repositories
You can upgrade Intelligence Server components and migrate your History
List from a file-based system to a database-based system. The Intelligence
Server upgrade must be performed before any other upgrade or migration
actions. For background information on upgrading MicroStrategy, see the
Upgrade Help.

To perform these types of configurations, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to upgrade Intelligence Server components and migrate History List repositories:

l Response File: The MicroStrategy Configuration Wizard response file that defines how to upgrade Intelligence Server components or migrate your History List from a file-based system to a database-based system. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.

Upgrading Statistics Repositories


You can upgrade the statistics tables in your statistics repository to the new
version of MicroStrategy. This statistics table upgrade ensures that your
MicroStrategy Enterprise Manager environment can benefit from new
features and enhancements in the most recent release of MicroStrategy.

You must perform an upgrade of your Intelligence Server components before any other upgrade or migration actions.


For background information on upgrading statistics repositories, see the Upgrade Help. For information on Enterprise Manager, see the Enterprise Manager Help.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to upgrade statistics repositories:

l Response File: The MicroStrategy Configuration Wizard response file that defines how to upgrade the statistics tables in your statistics repository to the new version of MicroStrategy. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.

Migrating Narrowcast Server Web Delivery Subscriptions to MicroStrategy Distribution Services
You can migrate MicroStrategy web delivery subscriptions from a
Narrowcast Server environment to Distribution Services. MicroStrategy web
delivery subscriptions include email, file, FTP, mobile, and print
subscriptions created from MicroStrategy Web. These subscriptions are
created when a user in MicroStrategy Web subscribes to a report or
document.

Migrating these subscriptions from Narrowcast Server to Distribution Services allows the subscriptions to be centralized within Intelligence Server rather than on a separate Narrowcast Server.

You must perform an upgrade of your Intelligence Server components before any other upgrade or migration actions.

For background information on migrating Narrowcast web delivery subscriptions to MicroStrategy Distribution Services, see the Upgrade Help. For information on configuring and using Distribution Services, see Configuring and Administering Distribution Services, page 1351.


To perform this configuration, in System Manager, from the Connectors and processes pane, add the Configuration Wizard process to your workflow. The following information is required to migrate MicroStrategy web delivery subscriptions from a Narrowcast Server environment to Distribution Services:

l Response File: The MicroStrategy Configuration Wizard response file that defines how to migrate MicroStrategy web delivery subscriptions from a Narrowcast Server environment to Distribution Services. Click the folder icon to browse to and select a response file. For information on how to create a Configuration Wizard response file, see the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.

Managing Projects
A MicroStrategy business intelligence application consists of many objects
within projects. These objects are ultimately used to create reports and
documents that display data to the end user. As in other software systems,
these objects should be developed and tested before they can be used in a
production system. Once in production, projects need to be managed to
account for new requirements and previously unforeseen circumstances.
This process is referred to as the project life cycle.

With System Manager, you can include these project management tasks in a
workflow. This lets you create, manage, and update your projects silently,
which can be done during off-peak hours and system downtimes. Performing project maintenance in this way reduces its impact on users of the MicroStrategy system.

System Manager supports the following project creation and maintenance tasks:

l Merging Duplicate Projects to Synchronize Objects, page 1454

l Duplicating Projects, page 1455


l Updating Project Objects, page 1456

l Creating a Package to Update Project Objects, page 1458

Merging Duplicate Projects to Synchronize Objects


You can merge duplicate projects to synchronize many objects between
duplicate projects as part of a System Manager workflow. This process
migrates an entire project. All objects are copied to the destination project.
Any objects that are present in the source project but not the destination
project are created in the destination project.

For background information on merging duplicate projects, see Merge Projects to Synchronize Objects, page 809.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Project Merge (Windows Only) process to your workflow. The following information is required to merge duplicate projects:

l Project Merge XML File: The file that defines how the duplicate projects
are to be merged. This file is created using the Project Merge Wizard. For
steps on how to create this configuration file, see Merge Projects with the
Project Merge Wizard, page 811.

For the password fields listed below, you can use the button to the right of
the password fields to determine whether the password characters are
shown or asterisks are displayed instead.

l Source Project Source Password: The password to access the source project source. The user name to access the source project source is provided in the configuration file created in the Project Merge Wizard.

l Destination Project Source Password: The password to access the destination project source. The user name to access the destination project source is provided in the configuration file created in the Project Merge Wizard.


l Source Metadata Password: The password to access the source metadata. The user name to access the source metadata is provided in the configuration file created in the Project Merge Wizard.

l Destination Metadata Password: The password to access the destination metadata. The user name to access the destination metadata is provided in the configuration file created in the Project Merge Wizard.

l Update the metadata if the metadata of the destination project is older than the source project: If this check box is selected, the system forces a metadata update of the destination metadata when it is older than the source metadata. The merge is not executed unless the destination metadata is the same version as or more recent than the source metadata.

l Update the schema of the destination project at the end: If this check
box is selected, the system updates the schema of the destination project
after the merge is completed. This update is required when you make any
changes to schema objects such as facts, attributes, or hierarchies.

Do not use this option if the configuration file contains an instruction to update the schema.

l Forcefully take over locks if any of the sessions are locked: If this
check box is selected, the system takes ownership of any metadata locks
that exist on the source or destination projects. If this check box is cleared
and sessions are locked, the project merge cannot be completed.

l Notes: Information to describe this process as part of the workflow.

Duplicating Projects
You can duplicate projects as part of a System Manager workflow. If you
want to copy objects between two projects, MicroStrategy recommends that
the projects have related schemas. This means that one must have originally
been a duplicate of the other, or both must have been duplicates of a third
project.


For background information on duplicating projects, see Duplicate a Project, page 753.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Project Duplication (Windows Only) process to your workflow. The following information is required to duplicate projects:

l XML Configuration File: The file that defines how a project is to be duplicated. This file is created using the Project Duplication Wizard. For steps on how to create this configuration file, see The Project Duplication Wizard, page 755.

l Base Project Password: The password for the source project's project
source. You can use the button to the right of this password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Target Project Password: The password for the destination project's project source. You can use the button to the right of this password field to determine whether the password characters are shown or asterisks are displayed instead.

l Update Target Metadata: If this check box is selected, the system forces
a metadata update of the destination metadata if it is older than the source
metadata. The duplication is not executed unless the destination metadata
is the same version as or more recent than the source metadata.

l Overwrite the project name specified in the configuration file: The new name to use for the destination project. Select the check box and type a new name to replace the name specified in the XML settings file.

l Notes: Information to describe this process as part of the workflow.

Updating Project Objects


You can use an update package as part of a System Manager workflow. An
update package is a file containing a set of object definitions and conflict resolution rules. It allows you to save the objects you want to copy in an
update package and import that package into destination projects later.

For background information on updating projects using update packages, see Copy Objects in a Batch: Update Packages, page 786.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Import Package process to your workflow. The following information is required to update a project using an update package:

l Project Source Name: The name of the project source that contains the
project to update objects in using the update package.

l Login: The name of a valid user to log in to the project source.

l Password: The password for the user name that you provided to log in to
the project source. You can use the button to the right of the Password
field to determine whether the password characters are shown or asterisks
are displayed instead.

l Package file: The update package file that contains the object definitions and conflict resolution rules used to update the project. This file is created using MicroStrategy Object Manager. For steps to create this update package, see Copy Objects in a Batch: Update Packages, page 786.

If you are importing a package that is stored on a machine other than the
Intelligence Server machine, ensure that the package can be accessed by
the Intelligence Server machine.

l Destination Project Name: Determines whether the update package is a project update package or a configuration update package:

l If the update package is a project update package, select this check box
and type the name of the project to update objects in using the update
package.


l If the update package is a configuration update package, clear this check box.

l Use logging: If this check box is selected, the system logs the update
package process. Click the folder icon to browse to and select the file to
save the update package results to. If this check box is cleared, no log is
created.

l Forcefully acquire locks: If this check box is selected, the system takes
ownership of any locks that exist. If this check box is cleared and sessions
are locked, the update package cannot be completed.

l Notes: Information to describe this process as part of the workflow.

Creating a Package to Update Project Objects


You can create an update package as part of a System Manager workflow.
An update package is a file containing a set of object definitions and conflict
resolution rules. It allows you to save the objects you want to copy in an
update package, and import that package into any number of destination
projects at a later date.

For background information on creating update packages, see Copy Objects in a Batch: Update Packages, page 786.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Create Package process to your workflow. The following information is required to create an update package:

l Package XML File: The .xml file that contains the definition to create a
package file. You can use Object Manager to create this .xml file, as
described in Copy Objects in a Batch: Update Packages, page 786.

l Source Project Source Password: The password for the user account
you used to create the package .xml file. This authentication information is
used to log in to the project source. You can use the button to the right of the password field to determine whether the password characters are shown or asterisks are displayed instead.

l Source Metadata Password: The password for the user account you used
to create the package .xml file. This authentication information is used to
log in to the project metadata. You can use the button to the right of the
password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Administering Intelligence Servers and other MicroStrategy Services
Intelligence Server and other MicroStrategy services must be operational to
complete certain processes that are part of a System Manager workflow. To
support this requirement, you can include the administration of Intelligence
Server and other MicroStrategy services as part of a workflow.

Starting, Stopping, or Restarting MicroStrategy Services


You can start, stop, or restart MicroStrategy services as part of a System
Manager workflow. This helps to ensure Intelligence Server is operational,
which is required to perform various processes. You can also stop
Intelligence Server to make system-wide updates and then restart
Intelligence Server once all updates are made.

To perform these types of configurations, in System Manager, from the Connectors and processes pane, add the Manage MicroStrategy Service process to your workflow. The following information is required to start, stop, or restart a MicroStrategy service:

l Action: Determines whether to start, stop, or restart the MicroStrategy service. Select the required action from this drop-down list.

l You can specify the machine whose service is administered by using one of the following options:


l Local machine: This option performs the start, stop, or restart action for
the service of the machine used to deploy the workflow.

l Remote machine: This option lets you specify the machine that hosts
the service to perform the start, stop, or restart action for. You must
provide the information listed below:

l Machine Name: The name of the machine that hosts the service.

l Login: The name of a valid user to administer the service.

l Password: The password for the user name that you provided to
administer the service. You can use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.

l Service Type: Determines the service to start, stop, or restart. From this
drop-down list, you can select one of the following MicroStrategy services:
o MicroStrategy Intelligence Server: The main service for your
MicroStrategy reporting environment. It provides the authentication,
clustering, governing, and other administrative management
requirements for your MicroStrategy reporting environment.
o MicroStrategy Listener: Also known as Test Listener. A ping utility that
allows you to check the availability of an Intelligence Server on your
network, whether a DSN can connect to a database, and whether a
project source name can connect to a project source. From any machine
that has the Test Listener installed and operational, you can get
information about other MicroStrategy services available on the network
without having to actually go to each machine.
o MicroStrategy Enterprise Manager Data Loader: The service for
Enterprise Manager that retrieves data for the projects for which
statistics are being logged. This data is then loaded into the Enterprise
Manager lookup tables for further Enterprise Manager reporting and analysis.
o MicroStrategy Distribution Manager: The service for Narrowcast
Server that distributes subscription processing across available
Execution Engines.
o MicroStrategy Execution Engine: The service for Narrowcast Server
that gathers, formats, and delivers the content to the devices for a
subscription.
l Notes: Information to describe this process as part of the workflow.
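System Manager performs these actions through the Manage MicroStrategy Service process described above. For comparison, starting or stopping a Windows service outside of System Manager is typically done with the `sc` utility; the sketch below only builds the command line, and the service and machine names are placeholders (the actual Windows service name for Intelligence Server varies by version and installation).

```python
# Sketch: building a Windows `sc` command line for a service action.
# Service and machine names below are placeholders, not confirmed names.
from typing import List, Optional

def sc_command(action: str, service: str, machine: Optional[str] = None) -> List[str]:
    """Return an `sc` argument list; machine=None targets the local host."""
    if action not in ("start", "stop"):
        # sc has no restart verb; a restart is a stop followed by a start
        raise ValueError("action must be 'start' or 'stop'")
    cmd = ["sc"]
    if machine is not None:          # remote host, e.g. r"\\APPSRV01"
        cmd.append(machine)
    cmd += [action, service]
    return cmd

# e.g. subprocess.run(sc_command("stop", "MicroStrategy Intelligence Server"))
```
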

Determining the Status of MicroStrategy Services


You can retrieve the status of a MicroStrategy service as part of a System Manager workflow. This can help to ensure that the MicroStrategy services required to perform various processes are operational.

To retrieve this information, in System Manager, from the Connectors and processes pane, add the Get Service Status process to your workflow. The following information is required to retrieve the status of a MicroStrategy service:

You can determine the machine for which to retrieve the service status by
using one of the following options:

l Local machine: Retrieves the status for the service of the machine used
to deploy the workflow.

l Remote machine: Lets you specify the machine that hosts the service to
retrieve the status for the service. If you select this option, you must type
the name of the machine that hosts the service.

l Service Type: Determines the service to retrieve the status for. From this drop-down list, you can select one of the following MicroStrategy services:


o MicroStrategy Intelligence Server: The main service for your MicroStrategy reporting environment. It provides the authentication, clustering, governing, and other administrative management requirements for your MicroStrategy reporting environment.
o MicroStrategy Listener: Also known as Test Listener. A ping utility that
allows you to check the availability of an Intelligence Server on your
network, whether a DSN can connect to a database, and whether a
project source name can connect to a project source. From any machine
that has the Test Listener installed and operational, you can get
information about other MicroStrategy services available on the network
without having to actually go to each machine.
o MicroStrategy Enterprise Manager Data Loader: The service for
Enterprise Manager that retrieves data for the projects for which
statistics are being logged. This data is then loaded into the Enterprise
Manager lookup tables for further Enterprise Manager reporting and
analysis.
o MicroStrategy Distribution Manager: The service for Narrowcast
Server that distributes subscription processing across available
Execution Engines.
o MicroStrategy Execution Engine: The service for Narrowcast Server
that gathers, formats, and delivers the content to the devices for a
subscription.
l Notes: Information to describe this process as part of the workflow.

Automating Administrative Tasks


You can perform various administrative and application development tasks
by using text commands that can be saved as scripts or entered as
commands to be completed as part of a System Manager workflow. These
scripts and commands are created using Command Manager.

For an introduction to Command Manager, see Chapter 15, Automating Administrative Tasks with Command Manager.


Managing Configurations for Project Sources


You can use text commands as part of a System Manager workflow, either in a script or entered directly as statements, to add, delete, or update large numbers of users and user groups, as well as manage various configuration settings for project sources.

For an introduction to Command Manager, see Chapter 15, Automating Administrative Tasks with Command Manager.

To perform this configuration, in System Manager, from the Connectors and Processes pane, add the Intelligence Server process to your workflow. The following information is required to execute a Command Manager script or statements:

l Connection Information: Determines whether to use a connection-less session or to connect directly to a project source:

l Connection-less Session: Defines the script execution as a connection-less session, which means a connection is not immediately made to a project source. A connection is required to perform any tasks included in the commands. You can use this option when the Command Manager statements include the required connection information.

l Connect To A Project Source: Defines the project source to connect to for the statement execution. Provide the following information:

l Project Source: The name of the project source to connect to.

l Login: The name of a valid user to connect to the project source.

l Password: The password for the user name that you provided to
connect to the project source. Use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.

MicroStrategy does not recommend using quotation marks in your passwords. If you are running MicroStrategy in Windows and your password contains one or more quotation marks ("), you must replace them with two quotation marks ("") and enclose the entire password in quotes. For example, if your password is 1"2"3'4'5, you must enter the password as "1""2""3'4'5".

l Execution: Choose whether to run the Command Manager statements from a script file or to enter them directly:

l Script File (.scp): Browse to and select the Command Manager script
file that defines all the tasks to be completed.

l Execute script statements: Type in the Command Manager statement or statements to be completed.

l Export Results To an XML File: If selected, the system logs the execution results, error messages, and status messages to a single XML file. Click the folder icon to browse to and select an XML file.

l Display Output On The Console: If selected and the script is not encrypted, the system displays the results on the command line used to execute the script or statements.

l Stop Script Execution On Error: If execution causes a critical error and this check box is selected, the system terminates the execution. Clear this check box to allow the execution to continue even if critical errors are encountered.

l Suppress Hidden Object(s) In The Results: If this check box is selected, the system omits hidden objects from the execution results. Hidden objects are MicroStrategy metadata objects whose HIDDEN property is set to true.

l Logging Information: Defines how the results of running the Command Manager script or statements are logged. Select one of the following options:


l Log Output To Default Location: Logs all results to the default folder.

l Log Output To Specified File: Logs all results to the log file specified.
You can browse to and select a log file.

l Split Output Into Three Defaults (Results, Failure, and Success): Logs all results to three separate log files. The default log files are CmdMgrResults.log, CmdMgrFail.log, and CmdMgrSuccess.log, respectively.

l Split Output Into Three Specified Files: Logs all results of execution
to three separate log files that you choose:

l Results File: Includes any information provided by successful LIST statements that were executed.

l Failure File: Includes a list of statements that were not executed successfully.

l Success File: Includes a list of statements that were executed successfully.

l Include Instructions In The Log File(s): If selected, the system includes the statements in the log file or files.

l Include File Log Header: If selected, the system includes a header at the beginning of each log file that contains information such as the version of Command Manager used.

l Include Error Codes in the Log File(s): If selected, the system includes any error codes returned during the workflow in the log file or files.

l Notes: Information to describe this process as part of the workflow.
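For reference, a Command Manager script file (.scp) is plain text containing semicolon-terminated statements. The statements below are a rough sketch with invented user and group names — check the statement outlines in the Command Manager Help for the exact syntax your version supports:

```
CREATE USER "jsmith" FULLNAME "Jane Smith" PASSWORD "initial1";
ADD USER "jsmith" TO GROUP "Managers";
LIST ALL USERS;
```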

Managing Configurations for Project Sources Using Command Manager Runtime Statements
Developers of OEM applications that use embedded MicroStrategy projects
may need flexibility in configuring their environment. Command Manager Runtime is a slimmed-down version of the Command Manager command-line executable for use with these OEM applications.

Command Manager Runtime uses a subset of the commands that are available in the full version of Command Manager. Command Manager Runtime statements can be included in a System Manager workflow as a script or as statements entered in the workflow.

To perform this configuration, in System Manager, from the Connectors and Processes pane, add the Intelligence Server (Runtime) process to your
workflow. The information required to execute a Command Manager script or
statements is the same information required for a standard Command
Manager script or statements, which is described in Managing
Configurations for Project Sources, page 1463. If you try to execute
statements that are not available in Command Manager Runtime as part of a
System Manager workflow, the script or statements fail with an exit code of
12.
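A workflow can use this convention to tell an unsupported-statement failure apart from other errors. In the sketch below, only the value 12 comes from the text above; the other mappings are illustrative:

```python
# Illustrative mapping of Command Manager Runtime exit codes to outcomes.
# Only exit code 12 (statement not available in Runtime) is documented
# above; the remaining mappings are hypothetical.

def classify_runtime_exit(code: int) -> str:
    """Map a Command Manager Runtime exit code to a workflow outcome."""
    if code == 0:
        return "ok"                      # script completed
    if code == 12:
        return "unsupported-statement"   # statement not in Runtime subset
    return "error"                       # illustrative catch-all
```
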

Managing Configurations for Narrowcast Server Metadata


MicroStrategy Command Manager lets you manage various configuration settings for Narrowcast Servers within the MicroStrategy platform.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Narrowcast Server (Windows Only) process to
your workflow. The information required to execute a Command Manager
script or statements used to manage Narrowcast Servers includes the same
information required for Command Manager script or statements used to
manage project sources, which is described in Administering Cloud-Based
Environments, page 1518. In addition to this required information,
Command Manager scripts or statements used to manage Narrowcast Servers
also require the following information:

l DSN: The data source name that points to the database that stores the
Narrowcast Server repository. If the DSN requires specific permissions,
select the Authentication for DSN check box to provide a valid user name
and password.

l Database: The database that stores the Narrowcast Server repository.
Type the name of the database that resides in the DSN you specified in
the DSN field. The DSN field is part of the options described in
Administering Cloud-Based Environments, page 1518.

l System Prefix: The database prefix used to identify the Narrowcast
Server repository.

Verifying Reports and Documents


You can run automated tests to determine how specific changes in a project
environment, such as the regular maintenance changes to metadata objects
or hardware and software upgrades, affect the reports and documents in that
project as part of a System Manager workflow. These types of tests can
ensure that reports and documents are working as intended, as well as
determine the performance of any new or updated MicroStrategy
deployments.

For background information on running automated tests of reports and
documents using MicroStrategy Integrity Manager, see Chapter 16,
Verifying Reports and Documents with Integrity Manager.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Integrity Manager process to your workflow. The
following information is required to run an automated test of reports and
documents:

l MTC Configuration File: The test file that defines how to perform the
automated test of reports and documents. This file is created using
Integrity Manager. For steps on how to create this test file, see Creating
an Integrity Test, page 1580.

l Base Project Password: The password for the user specified in the test
file to log in to the base project. This is not required for a
baseline-versus-project or baseline-versus-baseline integrity test. You
can use the button to the right of this password field to determine
whether the password characters are shown or asterisks are displayed
instead. Refer to Specifying Passwords for Multiple User Accounts and
Special Characters, page 1470 below for information on providing
multiple passwords or passwords that use special characters for an
Integrity Manager test.

l Target Project Password: The password for the user specified in the test
file to log in to the destination project. This is not required for a single-
project or baseline-versus-baseline integrity test. You can use the button
to the right of this password field to determine whether the password
characters are shown or asterisks are displayed instead. Refer to
Specifying Passwords for Multiple User Accounts and Special Characters,
page 1470 below for information on providing multiple passwords or
passwords that use special characters for an Integrity Manager test.

You can use the following parameters to provide alternative test information
and details when running an Integrity Manager test as part of a workflow. All
parameters are optional, and if you clear the check box for a parameter
listed below, any required information for that parameter is provided by the
Integrity Manager test file instead:

l Output Directory: The directory for any results. Click the folder icon to
browse to and select an output directory.

l Log File: Click the folder icon to browse to and select a log file directory.

l Base Baseline File: Click the folder icon to browse to and select a
baseline file for the base project.

l Target Baseline File: Click the folder icon to browse to and select a
baseline file for the target project.

l Base Server Name: The name of the machine that is running the
Intelligence Server that hosts the base project for the test.

l Base Server Port: The port that Intelligence Server is using. The default
port is 34952.

l Target Server Name: The name of the machine that is running the
Intelligence Server that hosts the target project for the test.

l Target Server Port: The port that Intelligence Server is using. The default
port is 34952.

l Base Project Name: The name of the base project for the test.

l Login(s) for Base Project: The login accounts required to run any reports
or documents in the base project for the test. For multiple logins, enclose
all logins in double quotes ("") and separate each login with a comma (,).

l Target Project Name: The name of the target project for the test.

l Login(s) for Target Project: The login accounts required to run any
reports or documents in the target project for the test. For multiple
logins, enclose all logins in double quotes ("") and separate each login
with a comma (,).

l Test Folder GUID: The GUID of the test folder. If this option is used, the
reports and documents specified in the Integrity Manager test file are
ignored. Instead, Integrity Manager executes all reports and documents in
the specified folder.

This option can only be used with a single-project integrity test or a
project-versus-project integrity test.

l Load Balancing for Base Server: Determines whether to use load
balancing for the base server. If this option is used, it overrides the
setting in the Integrity Manager test file.

l Load Balancing for Target Server: Determines whether to use load
balancing for the target server. If this option is used, it overrides
the setting in the Integrity Manager test file.

l Notes: Information to describe this process as part of the workflow.

Specifying Passwords for Multiple User Accounts and Special Characters
An Integrity Manager test can include multiple user accounts as part of the
test, as well as user accounts that include special characters in their
passwords.

To use multiple user accounts for testing, the passwords associated with
each user account must also be provided. If your Integrity Manager test
includes multiple user accounts, use the following rules to provide any
required passwords for the base project and target project:

l You must include a password for each user account defined in the Integrity
Manager test configuration file. However, if all user accounts use a blank
password, you can leave the base project and target project password
fields blank to indicate that a blank password is used for each user
account.

l Enclose the full list of passwords in double quotes (").

l Separate each password using a comma (,).

l The passwords must be listed in the order that user accounts are defined
in the Integrity Manager test. Use Integrity Manager to review the test file
as required to determine the proper order.

l If a subset of user accounts uses blank passwords, use a space to
indicate a blank password. For example, if the second user account
included in an Integrity Manager test has a blank password, you can
define the password list as:
"password1, ,password3"
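The list-building rules above can be expressed as a small helper. This is an illustrative sketch, not a MicroStrategy API: the function name is hypothetical, and it applies only the rules stated above (commas between entries, a single space for a blank password, double quotes around the full list, and an empty field when every password is blank).

```python
def build_password_list(passwords):
    """Format a base/target project password list per the rules above.

    Passwords must be given in the order that user accounts are defined
    in the Integrity Manager test file.
    """
    if all(p == "" for p in passwords):
        return ""  # all accounts use blank passwords: leave the field blank
    # A blank password in a subset of accounts is denoted by a space.
    entries = [p if p != "" else " " for p in passwords]
    # Separate with commas and enclose the full list in double quotes.
    return '"' + ",".join(entries) + '"'
```

For example, `build_password_list(["password1", "", "password3"])` reproduces the `"password1, ,password3"` example shown above.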

An Integrity Manager test can include user accounts that have special
characters in their passwords. Use the following rules to denote special
characters in passwords for the base project and target project:

l If a password includes a single quote (') or comma (,), you must enclose
the entire password in single quotes. For example, for the password
sec,ret, you must type this password as 'sec,ret'.

l To denote a single quote (') in a password, use two single quotes. For
example, for the password sec'ret, you must type this password as
'sec''ret'.

l To denote a double quote (") in a password, type &quot;. For example,
for the password sec"ret, you must type this password as sec&quot;ret.

l To denote an ampersand (&) in a password, type &amp;. For example, for
the password sec&ret, you must type this password as sec&amp;ret.
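Taken together, the four rules above amount to a small encoding routine. The sketch below is illustrative only (the function name is not part of any MicroStrategy tool); it reproduces the worked examples in this section, replacing ampersands before double quotes so that the &quot; marker itself is not re-escaped.

```python
def encode_password(pw: str) -> str:
    """Encode password special characters per the rules in this section."""
    # Replace ampersands first so the &quot; marker below is not re-escaped.
    pw = pw.replace("&", "&amp;")
    pw = pw.replace('"', "&quot;")
    # A comma or single quote forces single-quote enclosure,
    # with embedded single quotes doubled.
    if "'" in pw or "," in pw:
        pw = "'" + pw.replace("'", "''") + "'"
    return pw
```

Running this over the section's examples yields 'sec,ret', 'sec''ret', sec&quot;ret, and sec&amp;ret respectively.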

Creating Data Source Names


Establishing communication between MicroStrategy and your databases or
other data sources is an essential step in configuring MicroStrategy
products for reporting and analysis of your data. A data source name (DSN)
allows MicroStrategy to connect and communicate to your data sources. For
background information on creating and supporting DSNs, see the
Installation and Configuration Help.

System Manager allows you to create DSNs for the following types of
databases:

l DB2 UDB, page 1472

l UDB iSeries/DB2 for i, page 1473

l DB2 z/OS, page 1474

l Greenplum, page 1476

l Hive, page 1477

l Informix, page 1478

l Informix XPS, page 1479

l Microsoft SQL Server, page 1480

l Oracle, page 1483

l PostgreSQL, page 1486

l Salesforce, page 1487

l Sybase ASE, page 1488

Creating a DSN using System Manager can succeed or fail for various
reasons, which are indicated by exit codes. For information on the
possible exit codes when creating a DSN using System Manager, see
Determining Process Resolution Using Exit Codes, page 1535.

DB2 UDB
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the DB2 UDB process to your workflow. The following
information is required to create a DSN for DB2 UDB when running against
DB2:

l Data Source Name: A name to identify the DB2 UDB data source
configuration in MicroStrategy. For example, Finance or DB2-Serv1 can
serve to identify the connection.

l IP Address: The IP address or name of the machine that runs the DB2
UDB server.

l TCP Port: The DB2 UDB server listener's port number. In most cases, the
default port number is 50000, but you should check with your database
administrator for the correct number.

l Database Name: The name of the database to connect to by default,
which is assigned by the database administrator.

l Overwrite: If this check box is selected, the system updates a DSN
with the same name with the information provided below. If this check
box is cleared and a DSN with the same name exists on the system, no DSN
is created, and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.
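On Linux, the same fields typically end up in an odbc.ini entry. The fragment below is only a sketch: the driver path and the exact key names vary by driver vendor and are assumptions here, but it shows how the Data Source Name, IP address, TCP port, and database name described above map onto a DSN definition.

```ini
; Hypothetical odbc.ini entry for the DB2 UDB example above.
; Key names and the driver path depend on your ODBC driver vendor.
[DB2-Serv1]
Driver=/opt/microstrategy/lib/libdb2driver.so
IpAddress=db2host.example.com
TcpPort=50000
Database=FINANCE
```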

UDB iSeries/DB2 for i


To perform this configuration, in System Manager, from the Connectors and
processes pane, add the DB2 UDB iSeries process to your workflow. The
following information is required to create a DSN for UDB iSeries/DB2 for i:

l Data Source Name: A name to identify the DB2 for i data source
configuration in MicroStrategy. For example, Finance or DB2fori-1 can
serve to identify the connection.

l IP Address: The IP Address of the machine where the catalog tables are
stored. This can be either a numeric address, such as 123.456.789.98,
or a host name. If you use a host name, it must be in the HOSTS file of the
machine or a DNS server.

l Collection: The name that identifies a logical group of database objects.

l Location: The DB2 location name, which is defined during the local DB2
installation.

l Isolation Level: The method by which locks are acquired and released by
the system.

l Package Owner: The package's AuthID if you want to specify a fixed user
to create and modify the packages on the database. The AuthID must have
authority to execute all the SQL in the package.

l TCP Port: The DB2 DRDA listener process's port number on the server
host machine provided by your database administrator. The default port
number is usually 446.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created, and the DSN is not updated.

l Test Connection: Tests the DSN information provided to determine if a
successful connection can be made. If this check box is cleared, no
connection test is performed. If this check box is selected, you must
provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

DB2 z/OS
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the DB2 z/OS process to your workflow. The following
information is required to create a DSN for DB2 z/OS:

l Data Source Name: A name to identify the DB2 z/OS data source
configuration in MicroStrategy. For example, Finance or DB2UDBz/OS-1
can serve to identify the connection.

l IP Address: The IP Address of the machine where the catalog tables are
stored. This can be either a numeric address such as 123.456.789.98,
or a host name. If you use a host name, it must be in the HOSTS file of the
machine or a DNS server.

l Collection: The name that identifies a logical group of database
objects, which is also the current schema. On DB2 z/OS, the user ID
should be used as the Collection.

l Location: The DB2 z/OS location name, which is defined during the local
DB2 z/OS installation. To determine the DB2 location, you can run the
command DISPLAY DDF.

l Package Collection: The collection or location name where bind
packages are created and stored for searching purposes.

l Package Owner: The package's AuthID if you want to specify a fixed user
to create and modify the packages on the database. The AuthID must have
authority to execute all the SQL in the package.

l TCP Port: The DB2 DRDA listener process's port number on the server
host machine provided by your database administrator. The default port
number is usually 446.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Greenplum
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Greenplum process to your workflow. The
following information is required to create a DSN for Greenplum:

l Data Source Name: A name to identify the Greenplum data source
configuration in MicroStrategy. For example, Finance or Greenplum-1 can
serve to identify the connection.

l Host Name: The name or IP address of the machine on which the
Greenplum data source resides. The system administrator or database
administrator assigns the host name.

l Port Number: The port number for the connection. The default port
number for Greenplum is usually 5432. Check with your database
administrator for the correct number.

l Database Name: The name of the database to connect to by default. The
database administrator assigns the database name.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the
DSN information provided to determine if a successful connection can be
made. If this check box is cleared, no connection test is performed. If
this check box is selected, you must provide the following connection
information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Hive
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Hive process to your workflow. The following
information is required to create a DSN for Apache Hive:

l Data Source Name: A name to identify the Apache Hive data source
configuration in MicroStrategy. For example, Finance or ApacheHive-1
can serve to identify the connection.

l Host Name: The name or IP address of the machine on which the Apache
Hive data source resides. The system administrator or database
administrator assigns the host name.

l Port Number: The port number for the connection. The default port
number for Apache Hive is usually 10000. Check with your database
administrator for the correct number.

l Database Name: The name of the database to connect to by default. If
no database name is provided, the default database is used for the
connection. The database administrator assigns the database name.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed.

l Notes: Information to describe this process as part of the workflow.

Informix
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Informix process to your workflow. The following
information is required to create a DSN for Informix Wire Protocol:

l Data Source Name: A name to identify the Informix data source
configuration in MicroStrategy. For example, Finance or Informix-1 can
serve to identify the connection.

l Server Name: The client connection string designating the server and
database to be accessed.

l Host Name: The name of the machine on which the Informix server
resides. The system administrator or database administrator assigns the
host name.

l Port Number: The Informix server listener's port number. The default port
number for Informix is commonly 1526.

l Database Name: The name of the database to connect to by default,
which is assigned by the database administrator.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Informix XPS
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Informix XPS (Windows Only) process to your
workflow. The following information is required to create a DSN for Informix
XPS:

l Data Source Name: A name to identify the Informix data source
configuration in MicroStrategy. For example, Finance or Informix-1 can
serve to identify the connection.

l Database: The name of the database to connect to by default, which is
assigned by the database administrator.

l Server Name: The client connection string designating the server and
database to be accessed.

l Host Name: The name of the machine on which the Informix server
resides. The system administrator or database administrator assigns the
host name.

l Service Name: The service name, as it exists on the host machine. The
system administrator assigns the service name.

l Protocol Type: The protocol used to communicate with the server. Select
the appropriate protocol from this drop-down list.

l Overwrite: If this check box is selected, the system updates a DSN
with the same name with the information provided below. If this check
box is cleared and a DSN with the same name exists on the system, no DSN
is created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Microsoft SQL Server


To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Microsoft SQL Server process to your workflow.
The following information is required to create a DSN for Microsoft SQL
Server:

l Data Source Name: A name to identify the Microsoft SQL Server data
source configuration in MicroStrategy. For example, Personnel or
SQLServer-1 can serve to identify the connection.

l Windows: Select this option if you are configuring the Microsoft SQL
Server driver on Windows:

l Server Name: The name of a SQL Server on your network, in the format
ServerName_or_IPAddress,PortNumber. For example, if your
network supports named servers, you can specify an address such as
SQLServer-1,1433. You can also specify the IP address such as
123.45.678.998,1433.

Additionally, if you use named instances to distinguish SQL Server
databases, you can include the named instance along with either the
server name or IP address using the format ServerName\NamedInstance or
IPAddress\NamedInstance. The following are examples of providing the
server name for your SQL Server database:

123.45.678.998\Instance1,1433

SQLServer-1\Instance1,1433

l Database Name: The name of the database to connect to by default. The
database administrator assigns the database name.

l Use Windows NT authentication for login: Select this check box to use
Windows NT authentication to pass a user's credentials on the Windows
machine to execute against a SQL Server database.

If you use Windows NT authentication with SQL Server, you must enter the
Windows NT account user name and password in Service Manager. For
background information on Service Manager, see Running Intelligence
Server as an Application or a Service, page 30.

l UNIX: Select this option if you are configuring the
MicroStrategy-branded version of the Microsoft SQL Server driver for use
on UNIX and Linux:

l Server Name: The name of a SQL Server on your network. For example,
if your network supports named servers, you can specify an address
such as SQLServer-1. You can also specify the IP address such as
123.45.678.998. Contact your system administrator for the server
name or IP address.

Additionally, if you use named instances to distinguish SQL Server
databases, you can include the named instance along with either the
server name or IP address using the format ServerName\NamedInstance or
IPAddress\NamedInstance. The following are examples of providing the
server name for your SQL Server database:

SQLServer-1\Instance1

123.45.678.998\Instance1

l Database Name: The name of the database to connect to by default. The
database administrator assigns the database name.

l Port Number: The port number for the connection. The default port
number for SQL Server is usually 1433. Check with your database
administrator for the correct number.

l Enable SQL Database (Azure) support: Defines whether the DSN is
created to support SQL Azure. Select this check box if the DSN is used
to access a SQL Azure data source.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.

Microsoft Access
The MicroStrategy ODBC Driver for SequeLink allows you to access
Microsoft Access databases stored on a Windows machine from an
Intelligence Server hosted on a UNIX or Linux machine.

Steps on how to perform the necessary configurations on the various
machines to support this type of configuration are provided in the
Installation and Configuration Help.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Microsoft Access (Windows Only) process to
your workflow. The following information is required to create a DSN for
Microsoft Access:

l Data Source Name: A name to identify the Microsoft Access data source
configuration in MicroStrategy. For example, Personnel or
MicrosoftAccess-1 can serve to identify the connection.

l Database: The name of the database to connect to by default. Click the
folder icon to browse to and select a Microsoft Access database.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: Tests the DSN information provided to determine if a
successful connection can be made. If this check box is cleared, no
connection test is performed.

l Notes: Information to describe this process as part of the workflow.

Oracle
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Oracle process to your workflow. The following
information is required to create a DSN for Oracle Wire Protocol:

l Data Source Name: A name to identify the Oracle data source
configuration in MicroStrategy. For example, Finance or Oracle-1 can
serve to identify the connection. A DSN is required for any Oracle Wire
Protocol connection. Depending on whether you want to use a standard
connection or a TNSNames connection, refer to one of the following lists
of options below:

l Standard Connection: A standard connection is configured through
Oracle Wire Protocol with the following connection parameters:

l Host Name: The name of the Oracle server to be accessed. This can
be a server name such as Oracle-1 or an IP address such as
123.456.789.98.

l Port Number: The Oracle listener port number provided by your database
administrator. The default port number is usually 1521.

l One of the following parameters (which one you choose is up to your
personal preference):

l SID: The Oracle System Identifier for the instance of Oracle running
on the server. The default SID is usually ORCL.

l Service Name: The global database name, which includes the database
name and the domain name. For example, if your database name is
finance and its domain is business.com, the service name is
finance.business.com.

l Alternate Servers: A list of alternate database servers to enable
connection failover for the driver. If the primary database server
entered as the SID or service name is unavailable, a connection to the
servers in this list is attempted until a connection can be established.
You can list the servers in SID or service name format, as shown in the
following examples:

l Using an SID: (HostName=DB_server_name:PortNumber=1526:SID=ORCL)

l Using a Service Name: (HostName=DB_server_name:PortNumber=1526:ServiceName=service.name.com)

l TNSNames Connection: A TNSNames connection uses a TNSNAMES.ORA file
to retrieve host, port number, and SID information from a server (alias
or Oracle net service name) listed in the TNSNAMES.ORA file. A TNSNames
connection requires the following parameters:

l Server Name: A server name listed in the TNSNAMES.ORA file specified
in the TNSNames File field below.

l TNSNames File: The location of your TNSNAMES.ORA file. Make sure to
enter the entire path to the TNSNAMES.ORA file, including the file name
itself. You can specify multiple TNSNAMES.ORA files.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l User Name: The name of a valid user for the database.

l Password: The password for the user name you provided to connect to
the database. You can use the button to the right of the Password field
to determine whether the password characters are shown or asterisks
are displayed instead.

l Notes: Information to describe this process as part of the workflow.
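The alternate-server entries above follow a fixed textual pattern. As an illustration only (the helper below is hypothetical and not part of System Manager or the Oracle driver), a few lines of Python show how an entry is assembled from a host, a port, and either an SID or a service name:

```python
# Hypothetical helper illustrating the alternate-server entry format shown
# above. System Manager builds these strings internally; this sketch only
# makes the pattern concrete.

def alternate_server(host, port, sid=None, service_name=None):
    """Format one alternate-server entry using either an SID or a service name."""
    if sid is not None:
        return f"(HostName={host}:PortNumber={port}:SID={sid})"
    if service_name is not None:
        return f"(HostName={host}:PortNumber={port}:ServiceName={service_name})"
    raise ValueError("Provide either an SID or a service name")

print(alternate_server("DB_server_name", 1526, sid="ORCL"))
print(alternate_server("DB_server_name", 1526, service_name="service.name.com"))
```

Multiple entries in the Alternate Servers list are simply written one after another in this format.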

PostgreSQL
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the PostgreSQL process to your workflow. The
following information is required to create a DSN for PostgreSQL:

l Data Source Name: A name to identify the PostgreSQL data source configuration in MicroStrategy. For example, Finance or PostgreSQL-1
can serve to identify the connection.

l Host Name: The name or IP address of the machine on which the PostgreSQL database resides. The system administrator or database
administrator assigns the host name.

l Port Number: The port number for the connection. The default port
number for PostgreSQL is usually 5432. Check with your database
administrator for the correct number.

l Database Name: The name of the database to connect to by default. The database administrator assigns the database name.

l Default User ID: The name of a valid user for the PostgreSQL database.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN information provided to determine if a successful connection can be made. If this check box is cleared, no connection test is performed. If this check box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the default user name that you provided.
You can use the button to the right of the Password field to determine
whether the password characters are shown or asterisks are displayed
instead.

l Notes: Information to describe this process as part of the workflow.
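On Linux, a DSN like the one described above is typically stored as a stanza in an odbc.ini file. The sketch below is illustrative only; the exact key names depend on the installed driver, so treat HostName, PortNumber, Database, and LogonID as assumptions for illustration:

```python
# Illustrative sketch: mapping the PostgreSQL DSN parameters above onto an
# odbc.ini-style stanza. Key names vary by ODBC driver; the ones used here
# are assumptions, not a documented MicroStrategy format.

def postgres_dsn_stanza(name, host, port=5432, database=None, user=None):
    lines = [f"[{name}]",
             f"HostName={host}",
             f"PortNumber={port}"]
    if database:
        lines.append(f"Database={database}")
    if user:
        lines.append(f"LogonID={user}")
    return "\n".join(lines)

print(postgres_dsn_stanza("PostgreSQL-1", "db.example.com", 5432, "finance", "admin"))
```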

Salesforce
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Salesforce process to your workflow. The
following information is required to create a DSN for Salesforce:

l Data Source Name: A name to identify the Salesforce data source configuration in MicroStrategy. For example, Finance or Salesforce-1 can
serve to identify the connection.

l Host Name: The host name to connect to Salesforce.com. You can keep
the default value of login.salesforce.com.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must supply the following information to test the
connection:

l Username: The user name of a user account for Salesforce.com. The user name syntax is [email protected], where UserName is
the specific user account.

l Password: The password for the Salesforce.com user account that was
supplied. The password syntax is PasswordSecuritytoken, where
Password is the password for the user account and Securitytoken is
the additional security token required to access Salesforce.com. Do not
use any spaces or other characters to separate the password and
security token.

As part of configuring a connection to your Salesforce.com system, you can include the password and security token as part of the database
login, which is a component of a database instance used to access the
DSN in MicroStrategy. For steps to create a database login, which you
can use to provide the Salesforce.com password and security token, see
the Installation and Configuration Help.

l Notes: Information to describe this process as part of the workflow.
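The password syntax above is simply the account password followed immediately by the security token, with nothing between them. A trivial sketch (the function name is hypothetical, used only to make the rule concrete):

```python
# Illustrative sketch of the Salesforce.com password syntax described above:
# the password and the security token are concatenated with no separator.

def salesforce_password(password, security_token):
    return password + security_token

print(salesforce_password("MyPassw0rd", "AbC123xYz"))  # MyPassw0rdAbC123xYz
```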

Sybase ASE
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Sybase ASE process to your workflow. The
following information is required to create a DSN for Sybase ASE:

l Data Source Name: A name to identify the Sybase ASE data source
configuration in MicroStrategy. For example, Finance or SybaseASE-1 can
serve to identify the connection.

l Network Address: The network address, in the format ServerName_or_IPAddress,PortNumber. For example, if your network supports named servers, you can specify an address such as SybaseASE-1,5000. You can also specify an IP address, such as 123.456.789.98,5000. Contact your system administrator for the server name or IP address.

l Database Name: The name of the database to connect to by default. The database administrator assigns the database name.

l Enable Unicode support (UTF8): Select this check box if the database
supports UNICODE.

l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.

l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:

l Username: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.
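The Network Address format above is a host and a port separated by a single comma. A minimal parsing sketch (hypothetical helper, shown only to make the format concrete):

```python
# Illustrative sketch: parsing the Sybase ASE Network Address format
# described above, ServerName_or_IPAddress,PortNumber (a comma, no spaces).

def parse_network_address(address):
    host, port = address.split(",")
    return host, int(port)

print(parse_network_address("SybaseASE-1,5000"))  # ('SybaseASE-1', 5000)
```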

Completing a Separate System Manager Workflow


Rather than include all required processes in a single System Manager
workflow, you can group processes into separate workflows. These separate
workflows can then be combined in another workflow by including the
separate workflows as processes.

By separating tasks into multiple workflows, you can then re-use these
workflows as components of other larger workflows. For example, starting
Intelligence Server and troubleshooting this service may be required for
multiple workflows that you create. You can include the steps to start and
troubleshoot Intelligence Server in a separate workflow, and then use this
workflow in all the workflows that require these steps.

Once you have created a workflow, you can include it as a configuration in another workflow. In System Manager, from the Connectors and processes
pane, add the Execute System Manager Workflow process to your
workflow. The following information is required:

l Workflow File: Click the folder icon to browse to and select a System
Manager workflow file. This is the workflow that is included as a process in
the current workflow.

l Starting Process: Select this check box to specify the first process to
attempt for the workflow. Type the name of the process, including the
proper case, in the field below. Ensure that the process is enabled as an
entry process for the workflow. For steps to enable a process as an entry
process, see Using Entry Processes to Determine the First Step in a
Workflow, page 1414.

l Use a Parameter File: Select this check box to specify a parameters file
to provide values for the parameters of the workflow. Click the folder icon
to browse to and select a parameters file for the workflow. For information
on using parameters in a workflow, see Using Parameters for Processes,
page 1536. You can also specify parameter values using the Use Console
Parameters option described below.

l Use a Customized Log File: Select this check box to specify a log file to
save all results of the workflow to. Click the folder icon to browse to and
select a log file. This lets you separate the results of each workflow into
individual log files. If you clear this check box, the results of the workflow
are included in the log file for the main workflow.

l Use Console Parameters: Select this check box to manually supply values for parameters of the process. Type the parameters and their
values in the field below. If you also use the Use a Parameter File option
described above, these values overwrite any values provided in the
parameters file. For additional information on how the value of a
parameter is determined, see Using Parameters for Processes, page
1536.

l Display Output on the Console: Select this check box to output all
results to the System Manager console. If this check box is cleared, the
results of any actions taken as part of this System Manager workflow are
not displayed on the console and instead only provided in any specified
log files.

l Exit code options:

l Personalize Success Exit Code(s): Select this check box to specify the
exit codes that indicate successful execution of the underlying workflow.
Type the exit codes in the text box, separating multiple codes with a
comma. Each exit code must be an integer. The success exit codes you
specify here map to a new exit code of 0, which is passed on to the
larger workflow to indicate that this workflow executed successfully.

l Personalize Failure Exit Code(s): Select this check box to specify the
exit codes that indicate failed execution of the underlying workflow. Type
the exit codes in the text box, separating multiple codes with a comma.
Each exit code must be an integer. The failure exit codes you specify
here map to a new exit code of -1, which is passed on to the larger
workflow to indicate that this workflow failed.

If you do not use the Personalize Exit Code(s) options, or if you configure
them incorrectly, one of the following exit codes will be passed on to the
larger workflow:

l 1: Indicates an undefined execution result, which is treated as a successful execution. This success exit code is passed on to the larger
workflow if you do not use the Personalize Exit Code(s) options and the
workflow executes, regardless of whether the execution is successful or
not.

l -2: Indicates that the input format of the specified exit codes is incorrect,
for example, if you use an exit code that is not an integer, or if you
separate multiple codes with anything other than a comma.

l -3: Indicates that there is at least one conflict in the personalized exit
codes. For example, if you use exit code 4 in both the Success Exit Code
(s) list and the Failure Exit Code(s) list.

l -5555: Indicates that the underlying workflow failed to initialize. For example, if the workflow is incomplete, it will not start.

l Notes: Information to describe this process as part of the workflow.
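The exit-code mapping rules above can be summarized in a short sketch. This models the documented behavior only; it is not System Manager's actual implementation, and the -5555 initialization failure is outside its scope:

```python
# Illustrative model of the documented exit-code mapping: success codes map
# to 0, failure codes to -1, a bad code list to -2, a success/failure
# conflict to -3, and an unlisted result to 1 (treated as success).

def parse_codes(text):
    """Parse a comma-separated code list; return None if the format is invalid."""
    try:
        return {int(c) for c in text.split(",")}
    except ValueError:
        return None

def map_exit_code(result, success_text, failure_text):
    success, failure = parse_codes(success_text), parse_codes(failure_text)
    if success is None or failure is None:
        return -2          # input format of the specified exit codes is incorrect
    if success & failure:
        return -3          # same code listed as both success and failure
    if result in success:
        return 0           # mapped success, passed on to the larger workflow
    if result in failure:
        return -1          # mapped failure
    return 1               # undefined result, treated as a successful execution

print(map_exit_code(4, "4,8", "16"))   # 0
print(map_exit_code(16, "4,8", "16"))  # -1
print(map_exit_code(9, "4,8", "16"))   # 1
print(map_exit_code(4, "4", "4"))      # -3
```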

Retrieving MicroStrategy Information


You can retrieve various pieces of information about the MicroStrategy software installed on the machine where System Manager is running, as part of a System Manager workflow. Each MicroStrategy property that you retrieve must be stored in a parameter for the workflow (see Using Parameters for Processes, page 1536).

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Retrieve MicroStrategy Properties process to
your workflow. The following information is required to retrieve information
on the MicroStrategy installation:

l MicroStrategy Property: The information about the system that is retrieved. You can select from the following options:

l Home Path: The path that acts as the home directory for the
MicroStrategy installation. This path includes MicroStrategy
configuration files that can be modified after a successful installation.

l Common Path: The path that contains important files. The types of files
included in this path vary depending on your operating system, but it
can include files such as log files, SQL files, WAR files, JAR files,
libraries, and more.

l Build Version: The build version number of the MicroStrategy software. This version number can be helpful when troubleshooting a
MicroStrategy system and when working with MicroStrategy Technical
Support.

l Release Version: The major release version of the MicroStrategy software, such as 9.2.1.

l Parameter: The System Manager parameter that is used to store the MicroStrategy information that is retrieved.

l Retrieve this additional property: Select this check box to retrieve additional information about the MicroStrategy installation. For each of
these check boxes that you select, an additional MicroStrategy Property
and Parameter pair is made available.

l Notes: Information to describe this process as part of the workflow.

Performing System Processes


In addition to the various MicroStrategy configurations that can be
completed as part of a System Manager workflow, you can also perform
various system processes. This lets you perform system processes such as
copying, moving, or deleting a file.

You can also execute any process that uses system or third-party tools. This
lets you perform custom processes that can be executed from the system's
command line.

The system processes that are supported include:

l Encrypting/Decrypting Text or Files, page 1494

l Copying a File or Folder, page 1495

l Deleting a File or Folder, page 1498

l Moving a File or Folder, page 1499

l Find and Replace Information in a File, page 1500

l Renaming a File or Folder, page 1502

l Unzipping a Compressed File, page 1503

l Compressing Files into a Zip File, page 1503

l Downloading Files from an FTP or SFTP Site, page 1505

l Uploading Files to an FTP or SFTP Site, page 1506

l Executing a SQL Statement, page 1508

l Sending an Email, page 1511

l Delaying a Workflow to Allow for Task Completion, page 1513

l Updating Workflow Parameters, page 1515

l Retrieving Machine Information, page 1516

Encrypting/Decrypting Text or Files


You can configure a process to encrypt or decrypt specified text or a file.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Cryptographic Service process to your workflow.
The following information is required to perform this process:

l Action: Select either Encrypt or Decrypt from the drop-down list. Encrypt
algorithmically encodes plain text into a non-readable form. Decrypt
deciphers the encrypted text back to its original plain text form.

The Decrypt action only works on text that was encrypted using the
Encrypt action. Also, files encoded using the Encrypt action must be
decrypted using the Decrypt action. Other encryption/decryption
programs will not work.

l Password: Select the check box and type the required password if a
specific password is required to perform this process. If this option is not
selected, the system uses the default password specified by System Manager.

l Text: Select this option and type the text to be encrypted or decrypted in
the text box. This is useful for encrypting or decrypting a small amount of
text.

l File: Select this option and click the folder icon to select the file to encrypt
or decrypt. This option is useful if you have a large amount of text to
encrypt or decrypt.

l Output File: Click the folder icon to select the file in which to store the
encrypted or decrypted results.

l Overwrite: Select this check box to overwrite the output file if it already
exists.

l Notes: Information to describe this process as part of the workflow.
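As the note above states, only the Decrypt action can decipher text produced by the Encrypt action; other programs will not work. The toy sketch below illustrates only the round-trip property, Decrypt(Encrypt(text, password), password) == text, using an insecure XOR keystream derived from the password. It is not System Manager's cipher, and it must not be used for real secrets:

```python
# Toy illustration of the Encrypt/Decrypt round trip described above.
# This is NOT the Cryptographic Service's algorithm and is not secure; it
# only demonstrates that decryption with the same password recovers the text.
import hashlib
from itertools import count

def _keystream(password, length):
    """Derive a deterministic byte stream of the given length from the password."""
    out = b""
    for i in count():
        out += hashlib.sha256(password.encode() + i.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def toy_encrypt(text, password):
    data = text.encode()
    ks = _keystream(password, len(data))
    return bytes(a ^ b for a, b in zip(data, ks)).hex()

def toy_decrypt(hex_text, password):
    data = bytes.fromhex(hex_text)
    ks = _keystream(password, len(data))
    return bytes(a ^ b for a, b in zip(data, ks)).decode()

cipher = toy_encrypt("secret value", "workflow-password")
print(toy_decrypt(cipher, "workflow-password"))  # secret value
```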

Performing Custom Processes


You can execute a custom process as part of a System Manager workflow.
This can be any process that uses system or third-party tools. However, the
process must be executable from the system's command line.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Execute Application process to your workflow.
The following information is required to execute a custom process:

l Application To Execute: The command to execute the custom process. This command must meet the syntax requirements of the system it is
executed on.

l Execute In System Shell: Select this check box to execute the application and any parameters in a Windows command prompt or UNIX
shell. If you select this option, the exit code for this process represents the
success or failure of creating a new Windows command prompt or UNIX
shell. If you clear this option, the exit code for this process represents the
success or failure of executing the application, which could fail if an
incorrect application name or path is used.

l Notes: Information to describe this process as part of the workflow.
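The shell-versus-direct distinction above has a close analogue in Python's subprocess module, shown here as a sketch: running an application directly yields the application's own exit code, while handing a command line to the system shell yields the shell's exit code.

```python
# Illustrative sketch of the exit-code distinction described above, using
# Python's standard subprocess module (unrelated to System Manager itself).
import subprocess
import sys

# Direct execution: the exit code is the application's own.
direct = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"])
print(direct.returncode)  # 3

# Shell execution: the command line is interpreted by cmd.exe / the UNIX shell.
shell = subprocess.run("exit 7", shell=True)
print(shell.returncode)   # 7
```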

Copying a File or Folder


You can copy a file or folder as part of a System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Copy Files process to your workflow. The
following information is required to copy a file or folder:

l Source File or Directory: The location of the file or folder to copy. If the
path to a file is provided, only that file is copied. If the path to a folder is
provided, the folder along with all the files within it are copied. Click the
folder icon to browse to and select a file or folder.

You can also use wildcard characters (* and ?) to select files or folders to
copy. For example, you can use the syntax *.txt to copy all files with the
extension .txt in a folder. For additional examples of how you can use
these wildcard characters, see Using Wildcard Characters in Processes,
page 1543.

l Destination File or Directory: The location to copy the file or folder to.

l If you are copying a file, you can provide a path to a specific folder
location and file name to store the new copy.

l If you are copying a folder or have used wildcard characters to select multiple files or folders, you can provide a folder location at which to
store the files or folders.

l If the location you provide does not exist, a new directory is created with
the name of the destination and all source files are copied to the
directory. Click the folder icon to browse to and select a file or folder.

l Overwrite: If this check box is selected, the system replaces the destination file or folder with the same name as the source file or folder
provided. If this check box is cleared and a file or folder with the same
name exists on the system, the source file or folder is not copied to the
specified location.

l Notes: Information to describe this process as part of the workflow.
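The wildcard copy behavior above can be sketched with Python's standard glob and shutil modules. This is an illustration of the semantics, not of System Manager's implementation:

```python
# Illustrative sketch: copy every *.txt file from a source folder, and skip
# files that already exist at the destination when "Overwrite" is cleared.
import glob
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", "c.csv"):
    open(os.path.join(src, name), "w").close()

overwrite = False
for path in glob.glob(os.path.join(src, "*.txt")):
    target = os.path.join(dst, os.path.basename(path))
    if overwrite or not os.path.exists(target):
        shutil.copy2(path, target)

print(sorted(os.listdir(dst)))  # ['a.txt', 'b.txt']
```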

Creating a File or Folder


You can create a file or folder as part of a System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Create File process to your workflow. The
following information is required to create a file or folder:

l Select Type: Determines whether to create a file or folder. Select either File or Directory.

l Parent Directory: The location in which to create the file or folder. Click
the folder icon to browse to and select a folder.

l File or Directory Name: The name for the new file or folder:

l For files, type any file name and extension to create an empty file of that
file type. Be aware that this process does not validate whether the file
type is valid.

l For folders, type the folder name. Along with creating a single folder at
the parent directory location, you can create a series of subfolders by
using backslashes (\). For example, if the parent location is C:\, you can
create the following folders:

l Type test. This creates a single folder C:\test.

l Type test1\test2\test3. This creates the folder structure C:\test1\test2\test3.

l Notes: Information to describe this process as part of the workflow.
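The nested-folder behavior above (test1\test2\test3 under a parent directory creating the full chain) can be sketched with the standard library; os.path.join keeps the sketch portable across the Windows backslash and UNIX slash separators:

```python
# Illustrative sketch: creating a chain of subfolders under a parent
# directory, as the Create File process does for test1\test2\test3.
import os
import tempfile

parent = tempfile.mkdtemp()
os.makedirs(os.path.join(parent, "test1", "test2", "test3"))

print(os.path.isdir(os.path.join(parent, "test1", "test2", "test3")))  # True
```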

Determining the Number of Files in a Folder


You can determine the number of files in a folder as part of a System
Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Count Files process to your workflow. The
following information is required to determine the number of files in a folder:

l The Directory: The location of the top-level folder in which to count files. Click the folder icon to browse to and select a folder.

l File Filter: Select this option to apply a single filter to the files that are to
be included in the count of files in a folder. You can then type the filter,
including wildcard characters such as an asterisk (*) to represent multiple
characters, and a question mark (?) to represent a single character. For
example, if you type *.exe, only files that end with the .exe extension are
included in the count. If you type test?.exe, files such as test1.exe,
test2.exe, test3.exe, and testA.exe are included in the count. If
you clear this check box, all files in a folder are included in the final count.

l Among All Files: Select this option to count files only in the top-level
folder.

l Among All Files and Subfolders Recursively: Select this option to count files in the top-level folder and all subfolders.

l Output Parameter: The number of files in the folder must be stored in a parameter so that it can be passed to another process in the System
Manager workflow. Select an output parameter from the drop-down list to
store this value.

l Notes: Information to describe this process as part of the workflow.
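The filter semantics above (an asterisk for multiple characters, a question mark for a single character, and a top-level versus recursive count) can be sketched with Python's fnmatch module, which implements the same wildcard style:

```python
# Illustrative sketch of the Count Files behavior described above: count
# files matching a * / ? filter, either top-level only or recursively.
import fnmatch
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for name in ("test1.exe", "testA.exe", "notes.txt"):
    open(os.path.join(root, name), "w").close()
open(os.path.join(root, "sub", "test2.exe"), "w").close()

def count_files(directory, pattern="*", recursive=False):
    if recursive:
        names = [f for _, _, files in os.walk(directory) for f in files]
    else:
        names = [f for f in os.listdir(directory)
                 if os.path.isfile(os.path.join(directory, f))]
    return sum(1 for n in names if fnmatch.fnmatch(n, pattern))

print(count_files(root, "test?.exe"))                  # 2
print(count_files(root, "test?.exe", recursive=True))  # 3
```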

Deleting a File or Folder


You can delete a file or folder as part of a System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Delete Files process to your workflow. The
following information is required to delete a file or folder:

l File or Directory: The location of the file or folder to delete. If the path to
a file is provided, only that file is deleted. If the path to a folder is
provided, the folder and all the files in it are deleted. Click the folder icon
to browse to and select a file or folder.

You can also use wildcard characters (* and ?) to select files or folders for
deletion. For example, you can use the syntax *.txt to delete all files
with the extension .txt in a folder. For additional examples of how you can
use these wildcard characters, see Using Wildcard Characters in
Processes, page 1543.

l Notes: Information to describe this process as part of the workflow.

Moving a File or Folder


You can move a file or folder to a new location as part of a System Manager
workflow. When a file or folder is moved, the file or folder only exists in the
new location provided. This means the file or folder is no longer available in
the original location it was moved from.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Move Files process to your workflow. The
following information is required to move a file or folder:

l Source File or Directory: The location of the file or folder to move. If the
path to a file is provided, only that file is moved. If the path to a folder is
provided, the folder along with all the files and folders within it are moved.
Click the folder icon to browse to and select a file or folder.

You can also use wildcard characters (* and ?) to select files or folders to
move. For example, you can use the syntax *.txt to move all files with
the extension .txt in a folder. For additional examples of how you can use
these wildcard characters, see Using Wildcard Characters in Processes,
page 1543.

l Destination File or Directory: The location to move the file or folder to.

l If you are moving a file, you can provide a path to a specific folder
location and file name to store the file.

l If you are moving a folder or have used wildcard characters to select multiple files or folders, you can provide a folder location at which to
store the files or folders.

l If the location you provide does not exist, a new directory is created with
the name of the destination and all source files are copied to this
directory. Click the folder icon to browse to and select a file or folder.

l Overwrite: If this check box is selected, the system replaces the destination file or folder with the same name as the source file or folder
provided. If this check box is cleared and a file or folder with the same
name exists on the system, the file or folder is not moved to the specified
location.

l Notes: Information to describe this process as part of the workflow.

Find and Replace Information in a File


You can search a file for various keywords and phrases, and then replace
this information with new content, as part of a System Manager workflow.
These changes can be applied by overwriting the file or by creating a new
file with all the applicable changes.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Find and Replace File Content process to your
workflow. The following information is required to find and replace content in
a file:

l Source File: The location of the file to search for content to replace. Click
the folder icon to browse to and select a file.

l Destination File: The location and name of the file that is created with all
content replacements. You can create a new file to retain a copy of the
original file, or select the same file as the source file to overwrite the
existing file. To overwrite the existing file, you must also select the option
Overwrite Destination File If It Already Exists described below. Click the
folder icon to browse to and select a file.

l Overwrite Destination File If It Already Exists: If this check box is selected, the system replaces the original file with an updated version of
the file that has all relevant content updates applied. If this check box is
cleared and a file with the same name exists on the system, the file is not
updated.

l Match Case: If this check box is selected, the system replaces keywords
and phrases if the content and the case of the content matches. If this
check box is cleared, keywords and phrases are replaced if the content
matches, regardless of the case.

l Keyword: The keyword or phrase to search for in the file. The search finds
and replaces all instances of the keyword or phrase in the file. You must
type the keyword or phrase exactly; wildcard characters cannot be used.
To replace multiple lines in the file, use $\n$ to indicate a line break.

l Value: The content used to replace the keyword or phrase. To replace a keyword with multiple lines, use $\n$ to indicate a line break.

For example, if you have an XML file that includes multiple instances of
the same address, and the person or company with that address has
recently moved to another city, you can find and replace all instances of
the customer address. If the XML for the address is:

<address1>123 Main Street</address1>
<city>Vienna</city>
<state>Virginia</state>
<zip>22180</zip>

In the Keyword text box enter the following:

<address1>123 Main Street</address1>$\n$<city>Vienna</city>$\n$<state>Virginia</state>$\n$<zip>22180</zip>

If the new address should read as follows in the XML:

<address1>4000 Connecticut Ave NW</address1>
<address2>Suite 600</address2>
<city>Washington</city>
<state>District of Columbia</state>
<zip>20008</zip>

In the Value text box, type the following:

<address1>4000 Connecticut Ave NW</address1>$\n$<address2>Suite 600</address2>$\n$<city>Washington</city>$\n$<state>District of Columbia</state>$\n$<zip>20008</zip>

l Use This Additional Keyword / Value Pair: If this check box is selected,
the system includes a find and replace action to search for and replace a
given keyword or phrase. Each of these check boxes includes a single,
additional find and replace action. For each find and replace action that
you include, you must provide the following information:

l Keyword: The keyword or phrase to search for in the file. The search
finds and replaces all instances of the keyword or phrase within the file.
You must type the keyword or phrase exactly; wildcard characters
cannot be used. If you want to replace multiple lines within the file, you
can use $\n$ to indicate a line break.

l Value: The content used to replace the keyword or phrase. If you want to
replace a keyword with multiple lines, you can use $\n$ to indicate a
line break.

l Notes: Information to describe this process as part of the workflow.
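The matching rules above, exact keywords with no wildcards, optional case sensitivity, and $\n$ standing for a line break in both the keyword and the value, can be sketched as follows. This approximates the documented behavior; it is not the actual implementation:

```python
# Illustrative sketch of the find-and-replace rules described above.
import re

def find_and_replace(text, keyword, value, match_case=True):
    keyword = keyword.replace("$\\n$", "\n")  # "$\n$" marks a line break
    value = value.replace("$\\n$", "\n")
    flags = 0 if match_case else re.IGNORECASE
    # re.escape keeps the keyword literal; a lambda replacement avoids
    # re-interpreting backslashes in the value.
    return re.sub(re.escape(keyword), lambda m: value, text, flags=flags)

source = "<city>Vienna</city>\n<state>Virginia</state>"
result = find_and_replace(
    source,
    "<city>Vienna</city>$\\n$<state>Virginia</state>",
    "<city>Washington</city>$\\n$<state>District of Columbia</state>",
)
print(result)
```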

Renaming a File or Folder


You can rename a file or folder as part of a System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Rename Files process to your workflow. The
following information is required to rename a file or folder:

l Source File or Directory: The location of the file or folder to rename. Click the folder icon to browse to and select a file or folder.

l New Name of File or Directory: The new name for the file or folder.

l Append Current Date: Determines whether the current date is automatically added to the end of the new name. The date is added in a
YYYY-MM-DD format, such as NewName-2015-12-21.txt.

l Notes: Information to describe this process as part of the workflow.
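The date-append rule above, where NewName.txt becomes NewName-2015-12-21.txt, implies the YYYY-MM-DD date is inserted before the extension. A minimal sketch of that naming rule (a hypothetical helper, shown only to make the format concrete):

```python
# Illustrative sketch of the "Append Current Date" naming shown above:
# the date goes between the name and the extension, in YYYY-MM-DD format.
import datetime
import os

def dated_name(new_name, today=None):
    today = today or datetime.date.today()
    stem, ext = os.path.splitext(new_name)
    return f"{stem}-{today:%Y-%m-%d}{ext}"

print(dated_name("NewName.txt", datetime.date(2015, 12, 21)))  # NewName-2015-12-21.txt
```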

Unzipping a Compressed File


You can extract the contents of a compressed file as part of a System
Manager workflow. The files are extracted to the location that you specify.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Unzip Files process to your workflow. The
following information is required to extract the contents of a compressed file:

l Zip File: The location of the compressed file to extract, which can use
either zip or gzip format. Click the folder icon to browse to and select a
file.

l Output Directory: The location that the contents of the compressed file
are extracted to. Click the folder icon to browse to and select a folder.

l Overwrite: Replaces any existing files in the output directory with the files
that are being extracted. If this check box is cleared and a file with the
same name exists in the output directory, the file is not updated.

l Notes: Information to describe this process as part of the workflow.
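
A minimal sketch of the extract-with-overwrite logic, using Python's standard zipfile module (gzip input would go through the gzip module instead); the function name is illustrative:

```python
import zipfile
from pathlib import Path

def unzip(zip_path: str, out_dir: str, overwrite: bool = False) -> list:
    """Extract a .zip archive into out_dir, skipping members that
    already exist there unless overwrite is set. Returns the names
    that were actually extracted."""
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            target = Path(out_dir) / member
            if target.exists() and not overwrite:
                continue  # overwrite cleared: existing file is not updated
            zf.extract(member, out_dir)
            extracted.append(member)
    return extracted
```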

Compressing Files into a Zip File


You can compress files and the contents of folders into a zip file as part of a
System Manager workflow. The files are compressed into the zip file that
you specify.


To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Zip Files process to your workflow. The following
information is required to compress files and folders into a zip file:

l Source File or Directory: The location of the file or folders to include in the zip file. If
you select a folder, all of the contents of the folder are included in the zip file, which
includes the subfolders and their content. Click the folder icon to browse to and select
files and folders.

You can also use wildcard characters (* and ?) to select files or folders to compress
into a zip file. For example, you can use the syntax *.txt to select all files with the
extension .txt in a folder for compression into a zip file. For additional examples of
how you can use these wildcard characters, see Using Wildcard Characters in
Processes, page 1543.

l Output File: The location and name of the final compressed zip file. Click
the folder icon to browse to and select an existing zip file.

l Operations for Output File: Determines how an existing zip file is
updated. If this check box is cleared and an existing zip file is found, the
zip file is not updated and the files are not compressed into a zip file. If
you select this check box, you have the following options:

l Overwrite: If an existing zip file is found, the old version is completely
replaced by a new zip file.

l Append: If an existing zip file is found, the new files and folders are
added to the existing zip file.

However, if a folder already exists in the same location in the zip file, it
is ignored along with any contents of the folder. This means that if a
folder has new files, they are not included as part of appending files to
the existing zip file.

l Notes: Information to describe this process as part of the workflow.
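
The overwrite-versus-append distinction maps naturally onto an archive opened in write versus append mode. A hedged sketch using Python's glob and zipfile modules (names are illustrative, and this flat version ignores subfolder recursion):

```python
import glob
import zipfile
from pathlib import Path

def zip_files(pattern: str, output: str, append: bool = False) -> list:
    """Compress files matching a wildcard pattern (* and ?) into a zip.
    Mode 'w' overwrites an existing archive; mode 'a' appends to it,
    and appending never replaces an entry that already exists."""
    mode = "a" if append else "w"
    added = []
    with zipfile.ZipFile(output, mode) as zf:
        existing = set(zf.namelist())
        for path in sorted(glob.glob(pattern)):
            name = Path(path).name
            if append and name in existing:
                continue  # already in the archive: ignored on append
            zf.write(path, arcname=name)
            added.append(name)
    return added
```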


Downloading Files from an FTP or SFTP Site


You can download files from an FTP or SFTP site as part of a System
Manager workflow. These files are downloaded and saved to a folder that
you select.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Download using FTP process to your workflow.
The following information is required to download files from an FTP or SFTP
site:

l FTP Server: The URL for the FTP or SFTP site. You must also define
whether the site allows anonymous access or requires a user name and
password:

l Port Number: The port number to access the FTP or SFTP site. By
default, a value of 22 is expected. Select this check box and type the port
number for your FTP or SFTP site.

l Anonymous: Defines the connection to the FTP site as anonymous. You
cannot use this option if you are connecting to an SFTP site. Type an
account for the anonymous connection, such as an email address.

l Login: Defines the connection to the FTP or SFTP site as one that
requires a user name and password to log into the FTP or SFTP site.
You must provide the following information:

l User Name: The name of a valid user for the FTP or SFTP site.

l Password: The password for the user name that you provided to
connect to the FTP or SFTP site. You can use the button to the right of
the Password field to determine whether the password characters are
shown or asterisks are displayed instead.

l Use SFTP: Encrypts the entire download communication. You must
have a secure FTP site for this encryption to work successfully. If you
clear this check box, the communication is not encrypted.


If you have both an FTP and an SFTP site, you can choose to clear
this check box to use the FTP site, or select this check box to encrypt
the communication and use the SFTP site. However, if you only have
an FTP site or an SFTP site, your use of this option must reflect the
type of site you are using.

l Download Options: Determines whether to download a single file or
multiple files:

l Single File: Downloads a single file from the FTP or SFTP site. Type the
location of the file on the FTP or SFTP site to download.

l Multiple Files: Downloads multiple files from a directory on the FTP or
SFTP site. You must provide the following information:

l Remote Directory: The folder within the FTP or SFTP site to
download files from.

l All Files: Downloads all the files directly within the folder selected.
Subfolders are not downloaded recursively if you select this option.

l All Files And Subfolders Recursively: Downloads all the files and
subfolders recursively, within the folder selected.

l Download To Directory: The location of the folder to download the files
from the FTP site to. Click the folder icon to browse to and select a folder.

l Overwrite: If this check box is selected, the system replaces files with the
same name as the files or folders downloaded from the FTP or SFTP site.
If this check box is cleared and a file or folder with the same name exists
on the system, the file or folder is not downloaded from the FTP or SFTP
site.

l Notes: Information to describe this process as part of the workflow.
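
For the plain-FTP case, the flow can be sketched with Python's standard ftplib (SFTP would need a third-party library such as paramiko, which is not shown; the function names are illustrative). Note the sketch uses FTP's conventional port 21, whereas the field above defaults to 22:

```python
import os
from ftplib import FTP

def should_download(local_path: str, overwrite: bool) -> bool:
    """Overwrite cleared: a file that already exists locally is skipped."""
    return overwrite or not os.path.exists(local_path)

def download_dir(host, user, password, remote_dir, local_dir,
                 overwrite=False, port=21):
    """Flat (non-recursive) download of every file in one remote folder."""
    os.makedirs(local_dir, exist_ok=True)
    with FTP() as ftp:
        ftp.connect(host, port)
        ftp.login(user, password)   # or ftp.login() for anonymous access
        ftp.cwd(remote_dir)
        for name in ftp.nlst():
            target = os.path.join(local_dir, name)
            if not should_download(target, overwrite):
                continue
            with open(target, "wb") as f:
                ftp.retrbinary(f"RETR {name}", f.write)
```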

Uploading Files to an FTP or SFTP Site


You can upload files to an FTP or SFTP site as part of a System Manager
workflow. These files are uploaded to the FTP or SFTP site that you select.


To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Upload using FTP process to your workflow. The
following information is required to upload files to an FTP or SFTP site:

l FTP Server: The URL for the FTP or SFTP site. You must also define
whether the site allows anonymous access or requires a user name and
password:

l Port Number: The port number to access the FTP or SFTP site. By
default, a value of 22 is expected. Select this check box and type the port
number for your FTP or SFTP site.

l Anonymous: Defines the connection to the FTP site as anonymous. You
cannot use this option if you are connecting to an SFTP site. Type an
account for the anonymous connection, such as an email address.

l Login: Defines the connection to the FTP or SFTP site as one that
requires a user name and password to log into the FTP or SFTP site.
You must provide the following information:

l User Name: The name of a valid user for the FTP or SFTP site.

l Password: The password for the user name that you provided to
connect to the FTP or SFTP site. You can use the button to the right of
the Password field to determine whether the password characters are
shown or asterisks are displayed instead.

l Use SFTP: Encrypts the entire upload communication. You must have
a secure FTP site for this encryption to work successfully. If you clear
this check box, the communication is not encrypted.

If you have both an FTP and an SFTP site, you can choose to clear
this check box to use the FTP site, or select this check box to encrypt
the communication and use the SFTP site. However, if you only have
an FTP site or an SFTP site, your use of this option must reflect the
type of site you are using.


l Upload Options: Determines whether to upload a single file or multiple
files:

l Single File: Uploads a single file to the FTP or SFTP site. Click the
folder icon to browse to and select a file.

l Multiple Files: Uploads multiple files from a directory to the FTP or
SFTP site. You must provide the following information:

l Local Directory: The local folder to upload the files from. Click the
folder icon to browse to and select a folder.

l All Files: Uploads all the files directly within the folder selected.
Subfolders are not uploaded recursively if you select this option.

l All Files And Subfolders Recursively: Uploads all the files and
subfolders recursively, within the folder selected.

l Upload To Remote Directory: The location of the folder to upload the
files to in the FTP or SFTP site. Type the FTP or SFTP site location.

l Overwrite: If this check box is selected, the system replaces files with
the same name as the files or folders uploaded to the FTP or SFTP site.
If this check box is cleared and a file or folder with the same name exists
on the FTP or SFTP site, the file or folder is not uploaded.

l Notes: Information to describe this process as part of the workflow.
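
The upload direction mirrors the download sketch; the overwrite check now consults the remote listing instead of the local disk. Again a plain-FTP illustration with the standard ftplib module and hypothetical names:

```python
import os
from ftplib import FTP

def should_upload(name: str, remote_names: list, overwrite: bool) -> bool:
    """Overwrite cleared: a file already present on the site is skipped."""
    return overwrite or name not in remote_names

def upload_dir(host, user, password, local_dir, remote_dir,
               overwrite=False, port=21):
    """Flat (non-recursive) upload of every file in one local folder."""
    with FTP() as ftp:
        ftp.connect(host, port)
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        existing = ftp.nlst()
        for name in sorted(os.listdir(local_dir)):
            path = os.path.join(local_dir, name)
            if not os.path.isfile(path):
                continue  # subfolders are not uploaded in this flat sketch
            if not should_upload(name, existing, overwrite):
                continue
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {name}", f)
```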

Executing a SQL Statement


You can execute a SQL statement against a database as part of a System
Manager workflow. This lets you perform tasks such as updating tables in a
database.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Execute SQL process to your workflow. The
following information is required to execute a SQL statement against a
database:


l Connection Information: Determines whether to connect using a data
source name (DSN) or a connection string:

l Specify a DSN: Defines the connection to the database through the use
of a DSN. You must provide the following information:

l Data Source Name: The DSN used to access the database.

l Authentication for DSN: Determines if authentication is included as
part of the SQL statement. Be aware that some SQL statements can
require specific permissions, which means that authentication would
be required. Select this check box to authenticate the connection, and
supply the following information:

l Login: The name of a valid user for the database.

l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.

l Specify a JDBC Connection String: Defines the connection to the
database through the use of a JDBC connection string. Type a valid
connection string in the field provided.

l Encoding: From this drop-down list, select the character encoding for
the data source you are connecting to:

l Non UTF-8: Select this option if the data source uses a character
encoding other than UTF-8. This can support character encodings
such as UTF-16 and USC-2. This encoding option is selected by
default.

l UTF-8: Select this option if the data source uses UTF-8 character
encoding. For example, Teradata databases may require UTF-8
encoding.


l Execution: Determines whether to use a SQL script to supply the SQL
statements, or provide a single SQL statement directly in the workflow:

l Execute the Contents of an Input File: Uses a SQL script file to
provide the SQL statements. The SQL script file can contain multiple
SQL statements to be executed. The syntax of the SQL must be valid for
the database it is executed against. Click the folder icon to browse to
and select a SQL script file.

l Execute a Single SQL Statement: Lets you type a single SQL
statement for execution. The syntax of the SQL must be valid for the
database it is executed against, and the statement must end with a
semicolon.

l Save Execution Output Into a File: If this check box is selected, the
system saves all resulting output of executing the SQL statements to the
selected file. No output or data is included in the file for SQL statements
that do not return any output, such as create table or update table
statements. Click the folder icon to browse to and select a file, which can
either be a .txt or .csv file.

If this check box is cleared, the output of executing the SQL statements is
not saved to a file.

l Include column headers in the output: Determines whether the column
headers are included as part of the SQL statement output. By default, this
check box is cleared and the column header information is not included in
any output that is saved for the SQL statement. This can be helpful if you
plan to use the output of a SQL statement to update the value of a
parameter in your System Manager workflow.

If you select this check box, the column header information is provided in
the SQL output along with the associated values. This can provide
additional context to the values.


l Output Parameters: As part of executing SQL, you can store any results
in parameters:

l SQL Execution Result: The resulting output of executing the SQL
statements. Select a parameter from the drop-down list to store the SQL
result.

l Notes: Information to describe this process as part of the workflow.
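
The execution model — split the script into semicolon-terminated statements, run each, and save any returned rows with optional column headers — can be sketched against an in-memory SQLite database standing in for the DSN or JDBC connection. The naive split on semicolons is an assumption (it would break on semicolons inside string literals):

```python
import csv
import io
import sqlite3

def execute_sql(conn, script: str, include_headers: bool = False) -> str:
    """Run each semicolon-terminated statement; rows returned by any
    SELECT are written out CSV-style, as when saving output to a file.
    Statements that return nothing (CREATE, UPDATE) add no output."""
    out = io.StringIO()
    writer = csv.writer(out)
    cur = conn.cursor()
    for stmt in filter(None, (s.strip() for s in script.split(";"))):
        cur.execute(stmt)
        if cur.description:                      # statement produced rows
            if include_headers:
                writer.writerow(col[0] for col in cur.description)
            writer.writerows(cur.fetchall())
    conn.commit()
    return out.getvalue()

conn = sqlite3.connect(":memory:")
result = execute_sql(conn, """
    CREATE TABLE t (id INTEGER, name TEXT);
    INSERT INTO t VALUES (1, 'a');
    SELECT id, name FROM t;
""", include_headers=True)
```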

Sending an Email
You can send an email as part of a System Manager workflow. The email can
include the results of the workflow, which can provide verification of what
processes have been successfully completed.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Send Email process to your workflow. The
following information is required to send an email:

l From: The email address of the sender. For an email sent from a System
Manager workflow, you must type the email address of the person who
deploys the workflow.

l To: The email addresses for the intended primary recipients of the email.
Use a comma to separate each email address.

l Cc: The email addresses of the secondary recipients who should receive a
copy of the email addressed to the primary recipients. Select the check
box to enter the email addresses. Use a comma to separate each email
address.

l Bcc: The email addresses of the recipients who should receive the email
while concealing their email address from the other recipients. Select the
check box to enter the email addresses. Use a comma to separate each
email address.

l Message Subject: The title of the email that is displayed in the subject
line. This can be used to give a brief description of the purpose behind


deploying the workflow. Select the check box to enter the message
subject.

l Message Body: The main content of the email. This can give additional
details on what was completed as part of the workflow and next steps for a
user or administrator to take. Select the check box to enter the message
content.

l HTML: Defines the body content of the email to be provided in HTML
format. If you clear this check box, the content is provided in plain text
format.

l High Importance: Defines the email as having high importance. If this
check box is cleared, the email is sent without any importance defined for
the email.

l Attach System Manager Log: If this check box is selected, the system
includes the System Manager log file as an attachment to the email. This
log file includes all the results of the workflow up to the time of the email
request. Any processes in the workflow that are completed after the email
request are not included in the log file. If this check box is cleared, the log
file is not attached to the email.

l Attach Any Other File: If this check box is selected, the system includes
a file as an attachment to the email. Click the folder icon to browse to and
select a file to include as an attachment. You can also use wildcard
characters if the folder or file name is not known when creating the
workflow (see Using Wildcard Characters in Processes, page 1543).

If you need to send multiple files, you can do one of the following:

l Compress the required files into a single file such as a .zip file. You can
include compressing files into a single .zip file as part of a System
Manager workflow, using the process described in Compressing Files
into a Zip File, page 1503.


l Use wildcard characters (* and ?) to select multiple files in a folder. For
examples of how you can use these wildcard characters, see Using
Wildcard Characters in Processes, page 1543.

l Outgoing SMTP Server: If this check box is selected, the system lets you
define the outgoing SMTP server to use to send the email. If this check
box is cleared, a default SMTP server is used to send the email. If you
choose to specify an SMTP server, you must provide the following
information:

l SMTP Server: The SMTP server to use to send the email.

You must select the type of port used for the SMTP server. Contact your
SMTP server administrator to determine the proper port type:

l Plain Text: Defines the connection to the SMTP server in plain text,
without using any security protocol. By default, this option is selected.

l TLS Port: Defines the connection to the SMTP server as using a
Transport Layer Security port.

l SSL Port: Defines the connection to the SMTP server as using a
Secure Sockets Layer port.

l Port Number: The port number for the SMTP server.

l User Name: The name of a user account that has the necessary rights to
send emails using the SMTP server.

l User Password: The password for the user name that you provided to
send emails using the SMTP server. You can use the button to the right
of the Password field to determine whether the password characters are
shown or asterisks are displayed instead.

l Notes: Information to describe this process as part of the workflow.
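
Assembling such a message can be sketched with Python's standard email package; an smtplib.SMTP client would then deliver it through the configured server. The X-Priority header shown here is one common, but client-dependent, way to flag high importance — treat that and the function name as assumptions:

```python
from email.message import EmailMessage

def build_email(sender, to, subject, body, cc=None, bcc=None,
                html=False, high_importance=False, attachment=None):
    """Assemble the message described above. attachment is an optional
    (filename, bytes) pair; smtplib would handle the actual delivery."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(to)              # comma-separated recipients
    if cc:
        msg["Cc"] = ", ".join(cc)
    if bcc:
        msg["Bcc"] = ", ".join(bcc)
    msg["Subject"] = subject
    if high_importance:
        msg["X-Priority"] = "1"            # assumption: common priority flag
    msg.set_content(body, subtype="html" if html else "plain")
    if attachment:
        name, data = attachment
        msg.add_attachment(data, maintype="application",
                           subtype="octet-stream", filename=name)
    return msg
```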

Delaying a Workflow to Allow for Task Completion


While deploying a System Manager workflow, some processes can take a
considerable amount of time. In certain scenarios, your workflow may need


these processes to be completed before other processes in the workflow can
be started. To support this scenario, you can include a process in your
workflow to wait for a specific amount of time.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Wait process to your workflow. The following
information is required to delay the workflow:

l Waiting Time (sec): The number of seconds to remain on the current wait
process before proceeding to the next process in a workflow. Type a
numeric, integer value to represent the number of seconds to wait before
proceeding to the next process in a workflow.

You can add additional time to the waiting process using the following
options:

You must supply a valid numerical value for the seconds of the wait
process, regardless of whether you define the minutes and hours for the
wait process. You can type a value of zero (0) to define the wait process
as a length of time in only minutes and hours.

l Minutes: Select this check box to determine the number of minutes to
remain on the current wait process before proceeding to the next
process in a workflow. Type a numeric, integer value to represent the
number of minutes to wait before proceeding to the next process in a
workflow. This time is added to any seconds or hours also defined for
the wait process.

l Hours: Select this check box to determine the number of hours to remain
on the current wait process before proceeding to the next process in a
workflow. Type a numeric, integer value to represent the number of
hours to wait before proceeding to the next process in a workflow. This
time is added to any seconds or minutes also defined for the wait
process.

l Notes: Information to describe this process as part of the workflow.
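
The three fields simply add together, which a one-line helper makes explicit (the function name is illustrative; pass 0 seconds to express the delay in minutes or hours only):

```python
def total_wait_seconds(seconds: int, minutes: int = 0, hours: int = 0) -> int:
    """Seconds, minutes, and hours are summed into one delay; the
    seconds value is mandatory, so pass 0 when using only the others."""
    return seconds + 60 * minutes + 3600 * hours
```

A workflow engine could then simply call time.sleep(total_wait_seconds(0, minutes=5)) to pause for five minutes.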


Updating Workflow Parameters


While deploying a System Manager workflow, you can update the values of
parameters that are used in the workflow. Updating parameters during
workflow deployment can allow you to react to changes made as part of
deploying a workflow. This technique can also be used to help exit a loop in
a workflow that is used for troubleshooting purposes, such as checking the
availability of an active Intelligence Server.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Update Parameters process to your workflow. The
following information is required to update parameters for a workflow:

l Parameter Name: The name of the workflow parameter to update.

l Resolve the value from: Determines if the parameter value is updated
using the contents of a file or a registry. If you clear this check box, the
constant value or equation you provide in the New Value field is used to
update the parameter. If you select this check box, you must choose one
of the following:

l File: Updates the parameter value with the entire contents of a file. If
you select this option, you must type the full path to the file in the New
Value field. You can use .txt or .csv files to update the value of a
parameter.

l Registry: Updates the parameter value with the value of a registry key.
If you select this option, you must type the full path to the registry key in
the New Value field.

l New Value: The new value to assign to the parameter. If you selected the
Resolve the value from check box listed above, you must type the full path
to the file or registry key.

If the Resolve the value from check box is cleared, in addition to
providing constant values such as integers or strings of characters, you
can also use equations to update parameter values. To build these

equations, you can include the parameter's value by typing
${ParameterName}, where ParameterName is the name of the
parameter that you are updating. You can then include any of the
arithmetic operators +, -, /, and * along with other numeric values. For
example, you can create a Loop parameter, and update its value with the
following new value equation:

${Loop} + 1

This increases the value of the Loop parameter by one each time the Update
Parameters configuration is processed in the workflow. This type of
parameter value update supports exiting loops in a workflow after a certain
number of attempts. For best practices on using the Update Parameters
process to support loops in workflows, see Supporting Loops in a
Workflow to Attempt Configurations Multiple Times, page 1435.

l Update this additional parameter: Determines if an additional parameter
is updated as part of the parameter update process. For each Update this
additional parameter check box you select, you must type a Parameter
Name and New Value in the respective fields.

l Notes: Information to describe this process as part of the workflow.
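
The ${ParameterName} substitution followed by arithmetic can be sketched as a two-step evaluation: substitute the references, then fold the +, -, *, and / operators. This safe-evaluator approach is an assumption about a reasonable implementation, not System Manager's actual code:

```python
import ast
import operator
import re

# Only the four arithmetic operators the equations support.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def update_parameter(expr: str, params: dict):
    """Substitute ${Name} references with current values, then
    evaluate the resulting arithmetic expression safely."""
    substituted = re.sub(r"\$\{(\w+)\}",
                         lambda m: str(params[m.group(1)]), expr)

    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        raise ValueError("unsupported expression")

    return ev(ast.parse(substituted, mode="eval").body)
```

With params = {"Loop": 3}, the equation "${Loop} + 1" evaluates to 4, matching the loop-counter pattern described above.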

Retrieving Machine Information


You can retrieve information about the machine that System Manager is
running on as part of a System Manager workflow. Each system property
that you retrieve must be stored in a parameter for the workflow (see Using
Parameters for Processes, page 1536).

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Retrieve System Properties process to your
workflow. The following information is required to retrieve information on the
machine:


l System property: The information about the system that is retrieved. You
can select from the following options:

l Operating System Name: The descriptive name of the operating
system, such as Red Hat Enterprise Linux.

l Operating System Version: The version number of the operating
system. The version numbering of operating systems varies greatly, so it
is important to also know the operating system name along with the
operating system version.

l User Home Directory: The path that acts as the current user's home
directory, which can be used to store files if other paths are restricted for
security reasons.

l IP Address: The IP address of the system, which can be used to
connect to the system.

l Hostname: The host name of the system, which can be used to connect
to the system.

l Java Virtual Machine (JVM) bit-size: The size allowed for the Java
Virtual Machine, which is also often referred to as the heap size. This
determines how much memory can be used to perform various Java
tasks. You can tune this value to improve the performance of your
machine.

l Local Machine Date: The date and time for the system. The time is
returned as the time zone for the system. If the time zone for the system
is changed, you must restart System Manager to return the new time
zone for the machine.

l Parameter: The System Manager parameter that is used to store the
machine information that is retrieved.

l Retrieve this additional property: Select this check box to retrieve
additional information about the machine. For each of these check boxes


that you select, an additional System property and Parameter pair is made
available.

l Notes: Information to describe this process as part of the workflow.
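
Rough Python equivalents of these properties, gathered with the standard library (the JVM bit-size is Java-specific and has no direct counterpart here; the function name is illustrative):

```python
import platform
import socket
from datetime import datetime
from pathlib import Path

def system_properties() -> dict:
    """Collect rough equivalents of the system properties above.
    The JVM bit-size is omitted: it only applies to a Java runtime."""
    return {
        "os_name": platform.system(),                 # e.g. 'Linux'
        "os_version": platform.release(),
        "user_home": str(Path.home()),
        "hostname": socket.gethostname(),
        "local_date": datetime.now().astimezone().isoformat(),
    }
```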

Administering Cloud-Based Environments


If your MicroStrategy environment includes cloud-based environments, you
can create an Amazon Machine Image (AMI) and get its status. You can also
launch, manage, and terminate your cloud-based environments as part of a
System Manager workflow.

Creating an Image
You can create an Amazon Machine Image (AMI) from an Amazon EBS-
backed instance as part of the System Manager workflow. An Amazon
Machine Image is a template that contains the software configuration for
your server. While creating an image, ensure that the EBS-backed instance
is either running or stopped.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Create Image process to your workflow. The
following information is required to create an Amazon Cloud image:

l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.

l Existing Instance ID: ID of an Amazon EBS-backed instance that is either
running or stopped.

l Name: Name for the new image.

l Description: Description for the new image.

l Set No Reboot: Select this check box to prohibit the Amazon EC2 from
shutting down the Amazon EBS-backed instance before creating the new
image. If you clear this check box, the Amazon EC2 attempts to shut down


the EBS-backed instance before creating the new image and then restarts the
instance.

l Block Device Mapping: A block device is a storage device that is
physically attached to a computer or accessed remotely as if it were
physically attached to the computer. Hard disks, CD-ROM drives, and
flash drives are a few examples of block devices. A block device mapping
defines the block devices to be attached to an AMI. This argument is
passed in the form of devicename=blockdevice, where devicename
is the name of the device within Amazon EC2 and blockdevice can be
one of the following:

l none: To omit a mapping of the device from the AMI used to launch the
instance, specify none. For example: "/dev/sdc=none".

l ephemeralN: To add an instance store volume to the device, specify
ephemeralN, where N is the volume number. The range of valid volume
numbers is 0 to 3. For example: "/dev/sdc=ephemeral0".

l snapshot-id:volume-size:delete-on-termination:volume-
type:iops, where:

l snapshot-id is the ID of the snapshot to use to create the block
device. To add an EBS volume (for EBS-backed instance only), specify
the snapshot id. For example "/dev/sdh=snap-7eb96d16".

l volume-size is the size of the volume in GB. To add an empty EBS
volume, omit the snapshot id and specify a volume size. For example
"/dev/sdh=:200".

l delete-on-termination indicates whether the EBS volume
should be deleted on termination (true or false). The default value
is true. To prevent the volume from being deleted on termination of
the instance, specify false. For example "/dev/sdh=snap-
7eb96d16::false".

Copyright © 2024 All Rights Reserved 1519


Syst em Ad m in ist r at io n Gu id e

l volume-type is the volume type (standard or io1). The
default value is standard. For example, "/dev/sdh=:standard".
To create a provisioned Input/Output Operations Per Second (IOPS)
volume, specify io1 and the number of IOPS that the volume supports.
For example "/dev/sdh=io1:500".

All of these variables are optional. You can choose to use any or all of
them. Refer to your Amazon third-party documentation for additional
examples, updates, and information on the block device variables
listed above.

l Output Parameters: When a cloud-based image is created, various output
parameters are provided that include details about the cloud-based
environment. It is recommended that you include a parameter (see Using
Parameters for Processes, page 1536) for the following output parameter,
so that the value can be saved and used for other processes:

l New AMI ID: The newly created image ID for the Amazon Machine
Image (AMI).

l Notes: Information to describe this process as part of the workflow.
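
The devicename=blockdevice argument format above can be parsed mechanically, which is a useful way to validate a mapping before passing it along (the actual image creation would go through the AWS API, for example boto3's create_image, which is not shown). A hypothetical parser:

```python
def parse_block_device(mapping: str) -> dict:
    """Split one devicename=blockdevice argument into its parts.
    The block device side is either 'none', 'ephemeralN', or the
    colon-separated snapshot-id:volume-size:delete-on-termination:
    volume-type[:iops] form, in which every field is optional."""
    device, _, block = mapping.partition("=")
    if block == "none" or block.startswith("ephemeral"):
        return {"device": device, "block": block}
    fields = block.split(":")
    keys = ["snapshot_id", "volume_size", "delete_on_termination",
            "volume_type", "iops"]
    # Empty fields (e.g. an omitted snapshot id in "/dev/sdh=:200")
    # are simply dropped, leaving only the parts that were supplied.
    parsed = {k: v for k, v in zip(keys, fields) if v}
    return {"device": device, **parsed}
```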

Getting Cloud Image Status


Once your Amazon Cloud image is created, you can determine its state. For
example, you can determine if an image is available or has not yet been
registered.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Get Image Status process to your workflow. The
following information is required to get the state of your Amazon Cloud
image:

l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.


l AMI ID: The image ID for the Amazon Machine Image (AMI) to use for your
cloud-based environment. Type the image ID, which you can retrieve from
Amazon's cloud resources.

l Notes: Information to describe this process as part of the workflow.

Launching Cloud-Based Environments


You can launch your Amazon cloud-based environments as part of a System
Manager workflow.

To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Launch Instance process to your workflow. The
following information is required to launch a cloud-based environment:

l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.

l AMI ID: The image ID for the Amazon Machine Image (AMI) to use for your
cloud-based environment. Type the image ID, which you can retrieve from
Amazon's cloud resources.

l Instance Type: The image type for your cloud-based environment, which
determines the computing capacity of the cloud-based environment. Select
the appropriate instance type from the drop-down list.

l Zone: The zone, or network, that the cloud-based environment is launched
and deployed to. Type the name for the zone.

l Key Pair Name: Select this check box to create the key pair name, which
acts as a password to access the cloud-based environment once it is
launched. If you clear this check box, this security method is not used with
the cloud-based environment.

l Name Tag: Select this check box to create a name to distinguish the
cloud-based environment. If you clear this check box, no name is provided
for the cloud-based environment.

l Security Group: Select this check box to create new security groups or
use existing security groups. Use a semicolon (;) to separate multiple
security groups. If you clear this check box, no security groups are used
for the cloud-based environment.
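Since multiple security groups are entered as a single semicolon-separated string, a small helper (hypothetical, not part of System Manager) shows how such a value splits into individual group names:

```python
# Split a semicolon-separated Security Group value into individual names,
# dropping empty entries and surrounding whitespace (illustrative helper).
def split_security_groups(value: str) -> list[str]:
    return [g.strip() for g in value.split(";") if g.strip()]

print(split_security_groups("web-sg; db-sg;admin-sg"))
# → ['web-sg', 'db-sg', 'admin-sg']
```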

l Output Parameters: When a cloud-based environment is launched, various output parameters are provided that include details about the cloud-based environment. It is recommended that you include parameters (see Using Parameters for Processes, page 1536) for the following output parameters, so that the values can be saved and used for other processes:

l Public IP Address: The public IP address of the cloud-based environment.

l Private IP Address: The private IP address of the cloud-based environment.

l Instance ID: The instance ID of the cloud-based environment. This instance ID is required to terminate a cloud-based environment (see Terminating Cloud-Based Environments, page 1523).

l Public DNS Name: The public Domain Name System (DNS) name of the
cloud-based environment, which is provided upon launching an instance.
Using the Amazon EC2 console, you can view the public DNS name for a
running instance.

l Private DNS Name: The private Domain Name System (DNS) name of
the cloud-based environment, which is provided upon launching an
instance. Using the Amazon EC2 console, you can view the private DNS
name for a running instance.

l Notes: Information to describe this process as part of the workflow.

Managing Cloud-Based Environments


Once your Amazon cloud-based environment is launched, you can start,
stop, and force stop the cloud-based environment as part of a System
Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Manage Instances process to your workflow. The following information is required to manage a cloud-based environment:

l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.

l Instance ID: The instance ID of the cloud-based environment.

l Action: The list of actions—that is, start, stop, or force stop—that can be
performed on your cloud-based environment. Select the appropriate action
from the drop-down list.

l Output Parameters: When a cloud-based environment is launched, various output parameters are provided that include details about the cloud-based environment. It is recommended that you include parameters (see Using Parameters for Processes, page 1536) for the following output parameters, so that the values can be saved and used for other processes:

l Public IP Address(es): The public IP address of the cloud-based environment.

l Private IP Address(es): The private IP address of the cloud-based environment.

l Notes: Information to describe this process as part of the workflow.

Terminating Cloud-Based Environments


You can terminate your Amazon cloud-based environments as part of a
System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Terminate Instance process to your workflow. The following information is required to terminate a cloud-based environment:

l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.

l Instance ID: The instance ID of the cloud-based environment.

l Notes: Information to describe this process as part of the workflow.

Creating a vApp
You can create a new vApp as part of a System Manager workflow. A vApp is
a collection of one or more virtual machines that can be deployed as a
single, cloud-based environment.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Create vApp process to your workflow. The following information is required to create a vApp:

If you are unsure of any of the option values required to create a vApp,
contact the vCloud administrator for the necessary information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with and create vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.

l New vApp Name: The name that is used to identify the vApp.

l Add VM: Select this check box to also create a virtual machine for the
vApp. If you select this check box, you must provide the following
information to create a virtual machine:

l Catalog Name: The name of the catalog that stores the template that
you use to create the virtual machine.

l Template Name: The name of the template required to create the virtual
machine. A template defines the initial setup and configuration of a
virtual machine.

l Start the vApp: Determines if the virtual machine and its associated
vApp are powered on so that it can be used after the creation process is
completed. Select this check box to power on the virtual machine and its
associated vApp. If you do not select this option, you can use the
Manage VM process to power on the virtual machine at a later time (see
Starting, Stopping, and Restarting a Virtual Machine, page 1526).

l Notes: Information to describe this process as part of the workflow.

Starting, Stopping, and Restarting a vApp


Once a vApp is created, you can start, stop, and restart the vApp as part of a
System Manager workflow. A vApp must be powered on for users to access
and work with a vApp. You may need to power off or shut down a vApp to
perform various administrative maintenance on the vApp.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Manage vApp process to your workflow. The following information is required to manage a vApp:

If you are unsure about any of the option values required to manage a vApp,
contact the vCloud administrator for the necessary information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Action: The type of action to perform on the vApp. Actions performed on a vApp affect the availability of all virtual machines included in the vApp. You can select one of the following actions:

l Start: Starts a vApp so that users can access and work with a vApp.

l Stop: Stops a vApp through a vCloud request, which makes the vApp
unavailable to users. This type of vCloud power off request can be
monitored by the vCloud system to determine the success or failure of
the action.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.

l vApp Name: The name of the vApp to start, stop, or restart.

l Notes: Information to describe this process as part of the workflow.

Starting, Stopping, and Restarting a Virtual Machine


Once a vApp is created, you can start, stop, and restart a virtual machine that is included in a vApp as part of a System Manager workflow. A virtual machine must be powered on for users to access and work with a virtual machine. You may need to power off or shut down a virtual machine to perform various administrative maintenance tasks on the virtual machine.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Manage VM process to your workflow. The following information is required to manage a virtual machine:

If you are unsure about any of the option values required to manage a
virtual machine, contact the vCloud administrator for the necessary
information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with vApps and virtual machines.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Action: The type of action to perform on the virtual machine. You can
select one of the following actions:

l Power on: Starts a virtual machine so that users can access and work
with the virtual machine.

l Power off: Stops a virtual machine through a vCloud request, which makes the virtual machine unavailable to users. This type of vCloud power off request can be monitored by the vCloud system to determine the success or failure of the action.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.

l vApp Name: The name of the vApp that contains the virtual machine to
start, stop, or restart.

l VM Name: The name of the virtual machine within the vApp to start, stop,
or restart.

l Notes: Information to describe this process as part of the workflow.

Duplicating a vApp
You can duplicate a vApp as part of a System Manager workflow. A vApp is a
collection of one or more virtual machines, which can be deployed as a
single cloud-based environment.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Copy vApp process to your workflow. The following information is required to duplicate a vApp:

If you are unsure about any of the option values required to duplicate a
vApp, contact the vCloud administrator for the necessary information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with and create vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.

l Source vApp Name: The name of the vApp to duplicate.

l Destination vApp Name: The name for the duplicate copy of the vApp.

l Start the vApp: Determines if the duplicate copy of the vApp is powered
on so that it can be used after the duplication process is completed. Select
this check box to power on the vApp. If you do not select this option, you
can use the Manage vApp process to power on the vApp at a later time
(see Starting, Stopping, and Restarting a vApp, page 1525).

l Notes: Information to describe this process as part of the workflow.

Deleting a vApp
You can delete a vApp as part of a System Manager workflow. A vApp is a
collection of one or more virtual machines, which can be deployed as a
single cloud-based environment.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Delete vApp process to your workflow. The following information is required to delete a vApp:

If you are unsure about any of the option values required to delete a vApp,
contact the vCloud administrator for the necessary information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with and delete vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.

l vApp Name: The name of the vApp to delete.

l Notes: Information to describe this process as part of the workflow.

Deleting a Virtual Machine


You can delete a virtual machine that belongs to a vApp as part of a System Manager workflow.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Delete VM process to your workflow. The following information is required to delete a virtual machine:

If you are unsure of any of the option values required to delete a virtual
machine, contact the vCloud administrator for the necessary information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with and delete virtual machines within vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment and includes the vApp that
hosts the virtual machine to be deleted.

l vApp Name: The name of the vApp that hosts the virtual machine that is to
be deleted.

l VM Name: The name of the virtual machine to delete.

l Notes: Information to describe this process as part of the workflow.

Creating a Virtual Machine


You can create a new virtual machine and include it in a vApp as part of a
System Manager workflow. A vApp is a collection of one or more virtual
machines that can be deployed as a single, cloud-based environment.

To perform this configuration, in System Manager, from the Connectors and processes pane, add the Add VM process to your workflow. The following information is required to create a virtual machine:

If you are unsure of any of the option values required to create a virtual
machine within a vApp, contact the vCloud administrator for the necessary
information.

l vCloud Server Name: The machine name or IP address of a vCloud director server. The syntax for providing a vCloud host name is HostName:PortNumber, where HostName is the machine name or IP address, and PortNumber is the port number for the host.

l User Name: The name of a user account that has the necessary rights to
work with and create vApps.

l Login as Administrator: Select this check box to log in to vCloud as an administrator.

l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.

l Organization Name: The organization that authenticates the user.

l Source: These options determine if the new virtual machine is created as a duplicate of an existing virtual machine or a new virtual machine is created using a template:

l From vApp: This option duplicates a virtual machine that already exists
in the vApp:

l Virtual Datacenter: The name of the virtual datacenter that allocates the system resources for a vCloud environment, and includes the vApp that hosts the virtual machine to be duplicated.

l vApp Name: The name of the vApp that includes the virtual machine
to duplicate.

l From template: This option creates a new virtual machine, using a template definition. A template defines the initial setup and configuration of a virtual machine:

l Catalog Name: The name of the catalog that stores the template that
you use to create the virtual machine.

l Template Name: The name of the template required to create the virtual machine.

l VM Name: The name of the virtual machine to duplicate from a vApp or create from a template.

l Destination: These options determine where the new virtual machine is created:

l Virtual Datacenter: The name of the virtual datacenter that allocates the system resources for a vCloud environment and includes the vApp that will host the new virtual machine.

l vApp Name: The name of the vApp that will host the new virtual
machine.

l Configure New VM: These options determine additional details about the
new virtual machine:

l Full Name: The name for the virtual machine that is created.

l Computer Name: Select this check box to provide the host name of the
new virtual machine. If you clear this check box, the name that you
specified for Full Name is also used for this host name.

l Local Administrator Password: Select this check box to provide an administrator password for the virtual machine. If you clear this check box, a password is generated or the password in the template used to create the virtual machine is used.

l Administrator Password: The password for the administrator. You can use the button to the right of the Password field to determine whether the password characters are shown or asterisks are displayed instead.

l Number of Times to Auto Logon: The number of times the administrator can start the VM without reentering the login information.

l Require Administrator to Change Password on First Login: Select this check box to require that the administrator changes the password upon the first login.

l Network and IP Assignment: Select this check box to provide a network name and determine how IP addresses are assigned. This helps to ensure that multiple virtual machines do not use the same IP address, which can cause IP conflict issues in your vCloud system. If you clear this check box, the network and IP assignment configuration is determined by the template used to create the virtual machine. When selecting this check box, type the name of the network in the Network Name field, and select one of the following IP assignment options:

l DHCP: The IP address is assigned dynamically by a DHCP service on the specified network.

l Static IP Pool: A single, static IP address is allocated automatically from a collection of IP addresses for the network.

l Static Manual: A single, static IP address is allocated. You must type the IP address in the text field. Ensure that the IP address is valid for your network.

l Output Parameters: As part of the virtual machine creation process, you can store important information about the new virtual machine in parameters:

l Public IP Address: The IP address used to access the new virtual machine. Select a parameter from the drop-down list to store the information in that parameter.

l Computer Name: The host name for the new virtual machine. Select a
parameter from the drop-down list to store the information in that
parameter.

l Local Administrator Password: The administrator password for the virtual machine. Select a parameter from the drop-down list to store the information in that parameter.

l Notes: Information to describe this process as part of the workflow.

Determining Process Resolution Using Exit Codes


System Manager workflows often require information about the resolution of
a process to determine the next step to follow in the workflow. An exit code
is provided when a process that is part of a System Manager workflow
comes to completion. This exit code provides information on whether the
process was successful.

Along with determining the success or failure of a process, an exit code can
also provide additional information on why the process was a success or a
failure.

While providing the information for a process, you can review the exit codes for that process. On the Properties pane, scroll down to the bottom and click Show Description.

Detailed information on each exit code for a process is displayed.

The exit codes for a custom process are dependent on that custom process.
Refer to any documentation related to the custom process to determine
possible exit codes.

You can use these exit codes to determine the next step to take in a
workflow:

l Using the success and failure connectors lets you guide the workflow
based on whether the process was completed with a success or failure exit
code. For additional information on how connectors determine the logical
order of a workflow based on the exit code of the process they are coming
from, see Using Connectors to Create the Logical Order of a Workflow,
page 1412.

l Using a decision process, you can guide the workflow according to error
codes rather than just whether the process was considered successful or
unsuccessful. This can help to support additional troubleshooting and
error checking during a workflow. For examples of how decisions can be
used to guide a workflow on more than just the success or failure of a
process, see Using Decisions to Determine the Next Step in a Workflow,
page 1415.
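The two routing styles above can be sketched in code: a success/failure connector only distinguishes success from failure, while a decision can branch on specific exit codes. The codes and branch names below are hypothetical, since actual exit codes vary by process (check Show Description for the real values):

```python
# Sketch of routing a workflow on a process exit code. The assumption that
# 0 means success, and the specific error-code branches, are hypothetical.
def next_step(exit_code: int) -> str:
    # Success/failure connector style: only success vs. failure matters.
    if exit_code == 0:
        return "continue-workflow"
    # Decision style: branch on specific error codes for finer handling.
    if exit_code == 2:
        return "retry-with-backoff"
    if exit_code == 3:
        return "notify-administrator"
    return "abort-workflow"

print(next_step(0), next_step(3))
```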

Using Parameters for Processes


While all the necessary configuration information can be provided for each
process, some scenarios require that the details about the process be
provided when the workflow is executed. This can be required for the
scenarios listed below:

l Storing user credentials within System Manager introduces a security risk.

l Configuration information is not known until or during the actual configuration.

To provide a flexible solution to these types of problems, System Manager lets you define parameters as part of your workflow. These parameters can be used to define configuration information for the processes supported by System Manager. The values of these parameters can be provided as part of the workflow, as part of a parameters file to execute the workflow, and as input from the user performing the workflow from the command line.

Creating Parameters for a Workflow


A workflow has one set of parameters that is shared for all processes. The
parameters that are created for a workflow can be used in any configuration
task that can accept parameters as values in a process. Parameters can
also be used in decisions in a workflow.

The steps below show you how to create parameters for a workflow.

To Create Parameters for a Workflow

This procedure assumes you are creating new parameters for a workflow.
For information on importing parameters for a workflow, see Importing
Parameters into a Workflow, page 1538.

1. Open System Manager.

l To open System Manager in a Windows environment:

1. Start > All Programs > MicroStrategy Products > System Manager.

l To open System Manager in a UNIX or Linux environment:

1. In a Linux console window, browse to HOME_PATH, where HOME_PATH is the specified home directory during installation.

2. Browse to the folder bin.

3. Type mstrsysmgrw, and then press Enter.

The System Manager home page is displayed.

2. Expand the Properties and parameters pane on the right side of System Manager, and click Parameters near the bottom.

3. Click Add new parameter (displayed as a green plus symbol) to create a new parameter. Name and Value fields are displayed.

4. Type the following information:

l Name: The name for the parameter. This is the name that is used to
identify the parameter in a process or decision within the workflow.

l Value: The value that is used in place of the parameter when the
workflow is executed. This works as the default value for the
parameter if no value for the parameter is given from the command
line when the workflow is executed. For information on the
precedence of providing values for parameters, see Providing
Parameter Values during Deployment of a Workflow, page 1542.

If the parameter provides sensitive information such as user passwords, you can leave the value blank. However, be aware that these parameters must be provided a value when the workflow is executed.

l Confidential: Select the check box to turn off any logging and
feedback information for parameter values that are updated by a
process in your workflow (defined as an output parameter of a
process). For example, if you save the result of a SQL execution to a
parameter, this result is hidden from any System Manager logs. If the
parameter value for a confidential parameter has to be shown in the
feedback console, it is displayed as asterisks instead of the actual
value. For information on the feedback console, see Using System
Manager to Test and Deploy a Workflow, page 1545.
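The masking behavior described for confidential parameters can be illustrated with a small sketch (this is not System Manager's actual implementation): before anything is logged or echoed to the feedback console, confidential values are replaced with asterisks.

```python
# Illustrative sketch of masking confidential parameter values in feedback
# output; System Manager's real logging code is not shown in this guide.
def feedback_line(name: str, value: str, confidential: bool) -> str:
    shown = "*" * len(value) if confidential else value
    return f"{name} = {shown}"

print(feedback_line("DBUser", "mstr_admin", confidential=False))
print(feedback_line("DBPassword", "s3cret", confidential=True))
```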

5. Once a parameter is created, you can use it in the workflow, as described in Using Parameters in a Workflow, page 1541. You can also use the Update Parameters process (see Performing System Processes, page 1493) to update the value of a parameter during the deployment of a workflow.

Importing Parameters into a Workflow

You can import parameters into a workflow that have been saved as a
parameters response file. This lets you update the values for your workflow.

When parameters are imported into a workflow, any existing parameters are
updated with the values included in the parameters file. Parameters can only
be updated when importing a parameters file. This means that if a parameter
does not already exist in a workflow, it is not created when importing the
parameters file.

Additionally, if parameters are in the workflow that are not defined in the
parameters file, the value for the parameters is not updated during the
import process.
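The update-only behavior described above amounts to a merge in which only keys already present in the workflow are touched: extra keys in the parameters file are ignored, and workflow parameters missing from the file keep their current values. A sketch with illustrative parameter names:

```python
# Update-only merge, mirroring the described import behavior: a parameter is
# updated only if it already exists in the workflow (illustrative helper).
def import_parameters(workflow: dict, imported: dict) -> dict:
    return {name: imported.get(name, value) for name, value in workflow.items()}

workflow = {"Host": "localhost", "Port": "34952"}
imported = {"Host": "prod-server", "NewParam": "ignored"}
print(import_parameters(workflow, imported))
# → {'Host': 'prod-server', 'Port': '34952'}  (NewParam is not created)
```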

The workflow you are importing parameters into must already have parameters defined for it. Only those parameters can be updated by importing a parameters file.

To Import Parameters into a Workflow

1. Open System Manager.

l To open System Manager in a Windows environment:

1. Start > All Programs > MicroStrategy Products > System Manager.

l To open System Manager in a UNIX or Linux environment:

1. In a Linux console window, browse to HOME_PATH, where HOME_PATH is the specified home directory during installation.

2. Browse to the folder bin.

3. Type mstrsysmgrw, and then press Enter.

The System Manager home page is displayed.

2. Expand the Properties and parameters pane on the right side of System Manager, and click Parameters near the bottom.

3. From the Workflow menu, select Import Parameter File.

4. Select the parameters file to import and click Open. You are returned to System Manager and the parameters are updated accordingly. If the changes are not what you expected, you can click Clear to undo all the parameter updates.

Exporting Parameters to a File

You can export the parameters in a workflow to a file. This file can serve
various purposes:

l You can import parameters into other workflows.

l You can modify the parameter file and apply updates to the original
workflow.

l You can modify the parameter file and include it during execution to make
changes just before execution.

l You can modify the parameter file to include comments, which can provide
additional information on the parameters and their values. To include a
comment in a parameters file you can use the characters // or # to denote
a line in the parameters file as a comment. Any line that begins with either
// or # is ignored when using the parameters file with System Manager.
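The comment rule above (any line beginning with // or # is ignored) can be applied with a short filter. This is a sketch under that stated rule, not the parser System Manager actually uses; the parameter names are illustrative:

```python
# Filter out comment lines (starting with // or #) and blank lines from a
# parameters file, keeping only the effective name=value entries.
def effective_lines(text: str) -> list[str]:
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(("#", "//")):
            kept.append(stripped)
    return kept

sample = """\
# default connection settings
Host=localhost
// overridden in production
Port=34952
"""
print(effective_lines(sample))  # → ['Host=localhost', 'Port=34952']
```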

The steps below show you how to export the parameters of a workflow to a
file.

To Export Parameters of a Workflow to a File

1. Open System Manager.

l To open System Manager in a Windows environment:

1. Start > All Programs > MicroStrategy Products > System Manager.

l To open System Manager in a UNIX or Linux environment:

1. In a Linux console window, browse to HOME_PATH, where HOME_PATH is the specified home directory during installation.

2. Browse to the folder bin.

3. Type mstrsysmgrw, and then press Enter.

The System Manager home page is displayed.

2. From the Workflow menu, select Export Parameter File.

3. In the File name field, type a name for the parameters file.

4. Click Save.

Using Parameters in a Workflow


Parameters can be used in processes or decisions of a workflow to provide
flexibility as to when the information is provided.

Parameters can be included in any option that takes some type of text or
numeric data as input. For example, a Password field can take a parameter
that supplies a password to access the task or system resource for a
process. However, check boxes and any other options that do not accept
text or numeric data cannot use parameters.

To use a parameter in a process or decision, you must use the following syntax:

${ParameterName}

In the syntax listed above, ParameterName is the name of the parameter. During execution, this placeholder is replaced with the value of the parameter.
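System Manager performs this substitution internally; as a rough illustration only, the effect can be emulated in shell. The field value and parameter names below are hypothetical:

```shell
# A process option as it might be typed in System Manager, using two parameters.
FIELD='connect -u ${UserName} -p ${Password}'

# Values that would be supplied for the parameters at execution time.
USERNAME='User1'
PASSWORD='1234'

# Emulate the replacement System Manager performs during execution.
echo "$FIELD" | sed -e "s/\${UserName}/$USERNAME/" -e "s/\${Password}/$PASSWORD/"
```

This prints connect -u User1 -p 1234, which is what the process receives at run time.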

The values for parameters can be provided in a few different ways. For
information on how parameter values can be provided and the precedence of
each option, see Providing Parameter Values during Deployment of a
Workflow, page 1542 below.


Providing Parameter Values during Deployment of a Workflow

The value for a parameter can be provided in the following ways:

l When defining the parameters for the workflow. These values act as the
default value of the parameter.

l In a parameters file. This file can be used during the execution of a workflow to provide updated values for the parameters.

l From the command line during execution of a workflow. This lets the user
executing the process provide sensitive information such as user
passwords on the command line rather than saving them in a workflow.

l You can also use the Update Parameters process (see Performing System
Processes, page 1493) to update the value of a parameter during the
deployment of a workflow.

When a workflow is executed, parameters are replaced with their respective values, as described below:

l If the value for a parameter is provided from the command line during
execution, this value is used. Any values for the parameter provided in a
parameters file or default values provided in the workflow are ignored.

l If the value for a parameter is not provided from the command line during
execution, but a value for the parameter is provided in a parameters file,
the value from the parameters file is used. The default value provided in
the workflow is ignored.

l If the value for a parameter is not provided in a parameters file or from the
command line during execution, the default value provided when defining
a parameter in a workflow is used.

To summarize: when a workflow is executed, a value supplied on the command line overrides a value from a parameters file, which in turn overrides the default value defined in the workflow.
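This precedence is a simple fallback chain. The following shell sketch is an illustration only, not part of System Manager; an empty string stands for "no value provided at that level":

```shell
# Resolve a parameter value: command line overrides parameters file,
# which overrides the default defined in the workflow.
resolve_param() {
  cli="$1"; file="$2"; default="$3"
  if [ -n "$cli" ]; then
    echo "$cli"          # value from the command line wins
  elif [ -n "$file" ]; then
    echo "$file"         # otherwise the parameters file value is used
  else
    echo "$default"      # otherwise fall back to the workflow default
  fi
}

resolve_param ""      ""      "User1"   # prints User1 (workflow default)
resolve_param ""      "User2" "User1"   # prints User2 (parameters file)
resolve_param "User3" "User2" "User1"   # prints User3 (command line)
```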


Using Wildcard Characters in Processes


System Manager allows you to use wildcard characters to provide
configuration information for some of the processes in a System Manager
workflow. Using wildcard characters in this way allows you to:

l Refer to folders or files that do not exist yet or do not have known names.
For example, a file or folder can be created as part of the same System
Manager workflow. If the full name of the file or folder is not known (for
example, the file name itself might include creation time information) you
can use wildcard characters to refer to the expected file or folder.

l Select multiple files for a single process, such as attaching multiple files to
an email. For example, rather than listing a single file, you can use
wildcards to select all .txt files in a folder.

System Manager processes that support wildcards as part of their configuration include:

l Sending an email (see Performing System Processes, page 1493)

l Deleting files or folders (see Performing System Processes, page 1493)

l Moving files (see Performing System Processes, page 1493)

l Copying files (see Performing System Processes, page 1493)


l Compressing files into a zip file (see Performing System Processes, page
1493)

For the configurations of a System Manager process that can use wildcard
characters, the following characters are supported:

l The * (asterisk) character: You can use * to represent one or more characters. Some examples of how you can use this wildcard character include:

l *.txt

This syntax would search for and select all .txt files in a given folder.

l filename.*

This syntax would search for and select all files, regardless of file extension, with the
name filename.

l *.*

This syntax would select all files in a given folder.

l *

This syntax would search for and select all files and folders in a given folder.

l The ? (question mark) character: You can use ? to represent any single
character. Some examples of how you can use this wildcard character
include:

l filename?.ini

This syntax would search for and select all .ini files with the name filename and a
single character. For example, the syntax config?.ini would select files such as
config1.ini, configA.ini, and so on.

l filename.??

This syntax would search for and select all files with the name filename and any
two character file extension.


You can also use a combination of both * and ? wildcard characters.
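These patterns behave like the file globs found in most shells, so they are easy to experiment with outside System Manager. A quick demonstration with invented file names:

```shell
# Create a scratch folder with some sample files.
mkdir -p /tmp/wildcard_demo
cd /tmp/wildcard_demo
touch config1.ini configA.ini config10.ini report.txt summary.txt

ls config?.ini    # ? matches exactly one character: config1.ini, configA.ini
ls *.txt          # * matches any run of characters: report.txt, summary.txt
```

Note that config?.ini does not match config10.ini, because ? stands for exactly one character.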

Deploying a Workflow
Once you create a workflow, you can deploy it to attempt the processes it
includes. System Manager provides the following methods for deploying a
workflow:

l Using System Manager to Test and Deploy a Workflow, page 1545: System
Manager's interface can be used to test and deploy a workflow.

l Using the Command Line to Deploy a Workflow, page 1548: System Manager's command line version can be used to deploy a workflow without the use of an interface. This can be beneficial for silent configuration routines and OEM deployments.

Using System Manager to Test and Deploy a Workflow


Once you create a workflow using System Manager, you can use the same
System Manager interface to test and deploy a workflow.

Be aware that some processes are dependent on the machine that you use
to deploy the workflow. For example, if you include processes to create
DSNs, the DSNs are created on the machine that you use to deploy the
workflow.

The steps below show you how to deploy a workflow from within System
Manager.

You have created a workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on. Steps to create a
workflow are provided in Creating a Workflow, page 1402.

System Manager is installed. This tool is installed as part of the general MicroStrategy product suite.


You have installed any MicroStrategy products and components that are
required for the processes of a workflow. For the products and components
required for each process, see Defining Processes, page 1447.

If required, you have created a parameters file to provide values for the
parameters of the workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on.

To Deploy a Workflow Using System Manager

1. Open System Manager.

l To open System Manager in a Windows environment:

1. Start > All Programs > MicroStrategy Products > System Manager.

l To open System Manager in a UNIX or Linux environment:

1. In a Linux console window, browse to HOME_PATH, where HOME_PATH is the home directory specified during installation.

2. Browse to the folder bin.

3. Type mstrsysmgrw, and then press Enter.

The System Manager home page is displayed.

2. From the File menu, select Open Workflow.

3. Browse to the workflow file, select the file, and then click Open. The
workflow is displayed within System Manager.

4. If you need to supply values for the parameters in the workflow by importing a parameters file, perform the steps provided in Importing Parameters into a Workflow, page 1538.

5. From the View menu, select Options.


6. In the Log file path field, type the path of a log file or use the folder
(browse) icon to browse to a log file. All results of deploying a workflow
are saved to the file that you select.

7. In the Maximum Concurrent Threads field, type the maximum number
of tasks that can be processed at the same time. This ensures that even if
a workflow requests a certain number of tasks to be processed at the
same time, only the specified limit is allowed to run at the same time.
The default value for this option is either the number of CPUs for the
current system, or 2, whichever value is greater. For information on
creating workflows that execute multiple tasks at the same time and
how to limit the number of simultaneous tasks, see Processing Multiple
Tasks Simultaneously, page 1425 and Limiting the Number of Parallel
Tasks to Prevent Over Consumption of System Resources, page 1429,
respectively.

8. Click OK.

9. From the Workflow menu, point to Execute Workflow, and then select
Run Configuration.

You can execute a single process in a workflow to test the process, or to perform the process separately. To execute a single process, right-click the process and select Execute Process.

10. From the Starting process drop-down list, select the process to act as
the first process in the workflow. You can only select processes that
have been enabled as entry processes for the workflow.

11. In the Parameters area, type any parameters required to execute the
processes in the workflow, which can include user names, passwords,
and other values. To include multiple parameter and value pairs, you
must enclose each parameter in double quotes (" ") and separate
each parameter and value pair using a space. The following example
contains the syntax to provide values for the parameters UserName and
Password:

"UserName=User1" "Password=1234"

For information on supplying parameters for a workflow, see Using Parameters for Processes, page 1536.

12. Click Run to begin the workflow. As the workflow is being executed the
results of each process are displayed in the Console pane. You can use
the Console pane to review additional details on the results of each
process and export these details. The results are also saved to the log
file that you specified earlier. If you marked any process parameters as
Confidential, the parameter value will either not be displayed in the
feedback console and logs, or it will be masked and displayed as
asterisks instead of the actual value.

If you need to end the workflow prematurely, from the Workflow menu,
select Terminate Execution. A dialog box is displayed asking you to
verify your choice to terminate the execution of the workflow. To
terminate the execution of the workflow, click Yes. If some processes
in the workflow have already been completed, those processes are not
rolled back.

Using the Command Line to Deploy a Workflow


Once you create a workflow using System Manager, you can use the command
line version of System Manager to deploy it. The command line version lets
you deploy a workflow without having to use an interface, which may be
useful for silent configuration routines and OEM deployments.

Be aware that some processes are dependent on the machine that you use
to deploy the workflow. For example, if you include processes to create
DSNs, the DSNs are created on the machine that you use to deploy the
workflow.

The command line version of System Manager is a one-line command line
tool: the command to begin the deployment is included in a single
statement. The syntax of the statement depends on the environment you are
deploying the workflow on:

l Windows: MASysMgr.exe, followed by the parameters listed below.

l UNIX and Linux: mstrsysmgr, followed by the parameters listed below.

Of the parameters listed below, only -w to specify a workflow file is required; all other parameters are optional:

l -w "WorkflowFile": This parameter is required to specify the workflow to deploy. WorkflowFile is the path to the workflow file. For example, -w "C:\Create DSNs.smw" is valid syntax to deploy the Create DSNs.smw workflow file in a Windows environment.
l -s "EntryProcess": This parameter can be used to specify the first process to attempt for the workflow. Only processes that have been enabled as entry processes (see Using Entry Processes to Determine the First Step in a Workflow, page 1414) can be used as the first process in a workflow. EntryProcess is the name of the process as it is defined in the workflow.

l -f "ParametersFile": This parameter can be used to specify a parameters file, which supplies values for the parameters in the workflow. ParametersFile is the path to the parameters file. For example, -f "C:\Parameters.smp" is valid syntax to use the Parameters.smp parameter file in a Windows environment. For information on creating a parameters file, see Using Parameters for Processes, page 1536.

l -l "LogFile": This parameter can be used to specify a log file. All results of deploying a workflow are saved to the file that you specify. LogFile is the path to the log file. For example, -l "C:\Workflow Results.log" is valid syntax to use the Workflow Results.log log file in a Windows environment.

l -showoutput: This parameter can be used to display all the results of deploying the workflow to the command line. If you are deploying a workflow as a completely silent process, excluding this option prevents these results from being displayed on the command line. The results can still be retrieved from the log file after deployment is complete.

l -p "ParameterName1=Value1" "ParameterName2=Value2": This parameter can be used to specify values for parameters of the workflow. Any parameter values that are provided in this way are used in place of values provided in the workflow itself, as well as values provided through a parameters file. Providing parameter values directly during command line execution is often required to supply login and password credentials specific to the machine or user environment for a given deployment.

To include multiple parameter and value pairs, you must enclose each
parameter in double quotes (" ") and separate each parameter and value
pair using a space. For example, -p "UserName=User1"
"Password=1234" is valid syntax to provide values for the parameters
UserName and Password.
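Each quoted pair is a plain Name=Value token. As an illustration only (this is standard shell parameter expansion, not System Manager code), such tokens split into a name and a value like this:

```shell
# Split each "Name=Value" pair into its name and value parts.
for pair in "UserName=User1" "Password=1234"; do
  name=${pair%%=*}     # text before the first '='
  value=${pair#*=}     # text after the first '='
  echo "$name -> $value"
done
```

This prints "UserName -> User1" and "Password -> 1234", one pair per line.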

The steps below show you how to deploy a workflow using the command line
version of System Manager.

You have created a workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on. Steps to create a
workflow are provided in Creating a Workflow, page 1402.

System Manager is installed. This tool is installed as part of the general MicroStrategy product suite.

You have installed any MicroStrategy products and components that are
required for the processes of the workflow. For the products and components
required for each process, see Defining Processes, page 1447.

If required, you have created a parameters file to provide values for the
parameters of the workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on.


To Deploy a Workflow Using the Command Line Version of System Manager

1. Open a command line.

2. Check to verify that System Manager is installed on the machine:

l Windows: Type MASysMgr.exe and press Enter.

l UNIX and Linux: Type mstrsysmgr and press Enter.

If help information for using the command line version of System Manager is displayed, this means that System Manager is installed correctly.

3. Type the command to deploy the workflow:

l Windows: Type MASysMgr.exe and include the parameters listed above in Using the Command Line to Deploy a Workflow, page 1548, as required. For example, the command below is a valid command to deploy a System Manager workflow on a Windows environment:

MASysMgr.exe -w "C:\Create DSNs.smw" -s "Create Oracle DSN" -f "C:\Parameters.smp" -l "C:\Workflow Results.log" -showoutput -p "UserName=User1" "Password=1234"

l UNIX and Linux: Type mstrsysmgr and include the parameters listed above in Using the Command Line to Deploy a Workflow, page 1548, as required. For example, the command below is a valid command to deploy a System Manager workflow on a UNIX or Linux environment:

mstrsysmgr -w "$HOME/Create DSNs.smw" -s "Create Oracle DSN" -f "$HOME/Parameters.smp" -l "$HOME/Workflow Results.log" -showoutput -p "UserName=User1" "Password=1234"


4. Once you have typed the full command, press Enter. The workflow is
started and results are saved to the log file, as well as displayed on the
screen if you included the parameter -showoutput.

Supporting a Silent Deployment with the Command Line


The command line version of System Manager lets you support silent and
OEM deployments of your workflows. You can support silent and OEM
deployments of System Manager using the techniques listed below:

l Ensure that the machine that is to be used for the deployment meets all
the prerequisites listed in Using the Command Line to Deploy a Workflow,
page 1548.

l Determine the syntax to deploy the workflow using the command line
version of System Manager. The required and optional parameters are
described in Using the Command Line to Deploy a Workflow, page 1548.
This syntax can then be used in one of the following ways:

l Log in to the machine to perform the deployment from, and use the steps
provided in To Deploy a Workflow Using the Command Line Version of
System Manager, page 1551 to deploy the workflow.

l Send the required syntax to the user or administrator of the machine to perform the deployment from. Along with the required syntax, provide information on the parameters that the user needs to provide in the command line request. This user can then follow the steps provided in To Deploy a Workflow Using the Command Line Version of System Manager, page 1551 to deploy the workflow.

l Review the results of the deployment using the log file specified to verify
that the required processes were completed successfully.


AUTOMATING ADMINISTRATIVE TASKS WITH COMMAND MANAGER


MicroStrategy Command Manager lets you perform various administrative
and application development tasks by using text commands that can be
saved as scripts. You can manage configuration settings in the
MicroStrategy platform for either project sources or Narrowcast Server
metadata. With Command Manager you can change multiple configuration
settings all at once, without using the Developer or Narrowcast
Administrator interface. You can also create scripts to be run at times when
it would not be convenient for you to make the changes.

The Command Manager script engine uses a unique syntax that is similar to
SQL and other such scripting languages. For a complete guide to the
commands and statements used in Command Manager, see the Command
Manager Help.

Using Command Manager


With Command Manager you can change multiple configuration settings all
at once as part of an automated script. For example, you can change the
system to allow more low priority jobs to complete at night than during
regular hours. To do this, you could create a script to increase the number of
low priority database connections and modify several Intelligence Server
governor settings. Then, you could schedule the script to run at 8 P.M. You
could then create another script that changes the database connections and
Intelligence Server settings back for daytime use, and schedule that script to
run at 6 A.M.

To schedule a script to run at a certain time, use the Windows AT command with the cmdmgr executable. For the syntax for using the executable, see Executing a Command Manager Script, page 1560.

Here are more examples of tasks you can perform using Command Manager:

l User management: Add, remove, or modify users or user groups; list user
profiles


l Security: Grant or revoke user privileges; create security filters and apply
them to users or groups; change security roles and user profiles; assign or
revoke ACL permissions; disconnect users or disable their accounts

l Server management: Start, stop, or restart Intelligence Server; configure Intelligence Server settings; cluster Intelligence Server machines; change database connections and logins; manage error codes and customize output data; disconnect active sessions on a server or project

l Database management: Create, modify, and delete connections, connection mappings, logins, and database instances

l Project management: List or kill jobs; change a project's mode (idle, resume); expire and delete caches; change filter or metric definitions; manage facts and attributes; manage folders; update the project's schema; manage shortcuts; manage hidden properties; create tables and update warehouse catalog tables

l Scheduling: Trigger an event to run scheduled reports

l Narrowcast Server administration: Start and stop a Narrowcast Server; start, stop, and schedule Narrowcast Server services; add, modify, and remove subscription book users; define and remove user authentication

Privileges Required for Using Command Manager


Any users who want to use Command Manager must have the Use Command
Manager privilege. In addition, they must have the usual privileges for any
system maintenance tasks they want to perform. For example, to modify the
number of low priority database connections, the user must have the Create
And Edit Database Instances And Connections privilege.

A common way to delegate administrative tasks that can be performed with
Command Manager is to grant a user the Use Command Manager privilege
along with one or more security roles. The user can then perform all tasks
related to that security role and is prohibited from performing other tasks.


For full access to all Command Manager functionality, a user must have all
privileges in the Common, Distribution Services, and Administration groups,
except for Bypass All Object Security Access Checks.

Creating and Executing Scripts


From the Command Manager graphical interface, you can create and
execute Command Manager scripts. The script editor has many of the same
features as a standard text editor, with copy/paste and one-level undo
functionality. Other features of the script editor include a script syntax
checker, color-coded script syntax (see Color-Coding the Text in a Script,
page 1556), and sample script outlines (see Script Outlines, page 1557).

Command Manager also includes a command line interface for use in
environments that do not support the graphical interface, such as certain
Linux shell environments or terminal connections. For instructions on using
the Command Manager command line interface, see Using Command
Manager from the Command Line, page 1570.

To Start the Command Manager Graphical Interface

In Windows: From the Windows Start menu, go to All Programs > MicroStrategy Tools > Command Manager.

In Linux: Browse to the MicroStrategy Home folder, then to the /bin subfolder. Type mstrcmdmgrw and press Enter.

For more information about using Command Manager and for script syntax,
see Command Manager Help.

Color-Coding the Text in a Script


The Command Manager script editor can display color-coded text according
to its function in the script or procedure.

In a Command Manager script:


l Reserved words display as blue.

l Words or phrases in quotation marks display as gray.

l Numbers display as red. Dates display as red with blue slashes.

l GUIDs display as green.

l All other text appears in black.

In a Command Manager procedure:

l Keywords, such as if or boolean, display as purple and bold.

l Functions, classes, and methods display as red.

l Command Manager statements display as blue.

l Comments display as green.

l All other text appears in black.

Script Outlines
The Command Manager script outlines help you insert script statements with
the correct syntax into your scripts. Outlines are preconstructed statements
with optional features and user-defined parameters clearly marked.

Outlines are grouped by the type of objects that they affect. The outlines
that are available to be inserted depend on whether the active Script window
is connected to a project source or a Narrowcast server. Only the outlines
that are relevant to the connected metadata source are available.

To Insert an Outline Into a Script

1. Start the Command Manager graphical interface.

2. Connect to a metadata source.

3. From the Edit menu, select Insert Outline.

4. Navigate the Outline tree to locate the outline you want, and select it.


5. Click Insert to place the selected outline into the script.

6. Click Cancel.

7. Modify the script as needed.

Procedures in Command Manager


Command Manager procedures are reusable scripts that can be executed
from other scripts. You can reuse procedures with different input values, so
that the procedure performs the same task in a slightly different way.
Procedures can use Command Manager syntax, or they can be written in the
Java programming language and incorporate Command Manager statements
in Java commands.

For example, you can create a procedure called NewUser that creates a user
and adds the user to groups. You can then call this procedure from another
Command Manager script, supplying the name of the user and the groups.
To use the procedure to create a user named KHuang and add the user to
the group Customers, use the following syntax:

EXECUTE PROCEDURE "NewUser" ("KHuang", "Customers");

where NewUser is the name of the procedure, and KHuang and Customers
are the inputs to the procedure.

Procedures are available only for use with project sources. Procedures
cannot be used with Narrowcast Server statements.

Command Manager contains many sample procedures that you can view and
modify. These are stored in the following Command Manager directory:
\Outlines\Procedure_Outlines\Sample_Procedures\

For instructions on how to use procedures, see the Command Manager Help.


Using Java in Command Manager Procedures


Java is a simple yet powerful programming language that is widely used in
the software industry. Java can be integrated into Command Manager
procedures to automate repetitive tasks such as creating multiple users, or
recursively listing all the folders in a project. Java is supported in Command
Manager out of the box; no additional software needs to be installed to
execute Java commands.

To include Java in a Command Manager script, you write a procedure
containing the Java code, and execute the procedure from a Command
Manager script. Java cannot be included directly in a Command Manager
script. For detailed instructions on using Java in procedures, see the
Command Manager Help. (From within the Command Manager graphical
interface, press F1.)

Java is supported only in procedures, and procedures are supported only
with project sources. Java commands cannot be used in scripts to be
executed against Narrowcast Server metadata.

Do not use the System.exit command to exit a procedure. This command terminates the entire Command Manager process.

Command Manager provides two special commands that can be used by Java scripts to execute Command Manager commands:

l execute runs any Command Manager command, but it does not return the
results.

l executeCapture runs any Command Manager command and returns the results in a ResultSet object. This object behaves like a standard ResultSet object in Java: you can iterate through the results and retrieve individual items, which can then be used to extract properties of the results. This enables you to use the results elsewhere in the procedure.


For a detailed list of the ResultSet columns used in each Command Manager LIST statement, see the statement syntax guide for that statement in the Command Manager Help.

Executing a Command Manager Script


You can execute Command Manager scripts in the following ways:

l From the Command Manager graphical interface (see Creating and Executing Scripts, page 1556)

l From the Command Manager command line interface (see Using Command Manager from the Command Line, page 1570)

l By invoking the Command Manager executable, including necessary parameters such as the script file to run, from the Windows scheduler, Windows command prompt, or other applications such as system management software

Command Manager Runtime is a lightweight version of Command Manager
for bundling with OEM applications. Command Manager Runtime has fewer
execution options and supports fewer statements than Command Manager.
For more information about Command Manager Runtime, see Using
Command Manager with OEM Software, page 1571.

Command Manager does not automatically lock a project or configuration
when it executes statements. To avoid metadata corruption, use the LOCK
PROJECT or LOCK CONFIGURATION statements in any Command
Manager scripts that make changes to a project or server configuration. For
more information about locking and unlocking a project or configuration, see
Project and Configuration Locking, page 1565.


To Execute a Script from the Command Manager Graphical Interface

1. Start the Command Manager graphical interface:

l In Windows: From the Windows Start menu, point to All Programs, then MicroStrategy Tools, and then choose Command Manager.

l In Linux: Browse to the MicroStrategy Home folder, then to the /bin subfolder. Type mstrcmdmgrw and press Enter.

2. Connect to a project source or Narrowcast Server.

3. Open the script. (From the File menu, select Open.)

4. From the Connection menu, select Execute. The script executes.

To Execute a Script from the Command Manager Command Line Interface

For specific command syntax for the command line interface, see the
Command Manager Help.

1. From the command line, type cmdmgr.exe -interactive and press Enter. The Command Manager command line interface opens in console mode, with an active connection-less project source connection.

2. Connect to a project source or Narrowcast Server using the CONNECTMSTR or CONNECTNCS command.

3. To load a script and execute it, type EXECFILE filename, where filename is the name of the script. The script is loaded into the command line interface and executed.

To invoke Command Manager from Another Application

Call the cmdmgr.exe command with the following parameters:

Copyright © 2024 All Rights Reserved 1561


Syst em Ad m in ist r at io n Gu id e

If the project source name, the input file, or an output file contain a space in
the name or path, you must enclose the name in double quotes.

Connection (required; choose one)

l Connect to a project source: -n ProjectSourceName -u UserName [-p Password]
If -p is omitted, Command Manager assumes a null password.

l Initiate a connection-less project source session: -connlessMSTR

l Connect to a Narrowcast Server: -w ODBC_DSN -u UserName [-p Password] -d Database [-s SystemPrefix]
If -p or -s is omitted, Command Manager assumes a null password or system prefix.

l Initiate a connection-less Narrowcast Server session: -connlessNCS [-d Database] [-s SystemPrefix]
If -s is omitted, Command Manager assumes a null system prefix.

Script input (required)

l Specify the script file to be executed: -f InputFile
If this parameter is omitted, the Command Manager GUI is launched.

Script output (optional; choose only one)

l Log script results, error messages, and status messages to a single file: -o OutputFile

l Log script results, error messages, and status messages to separate files, with default file names of CmdMgrResults.log, CmdMgrFail.log, and CmdMgrSuccess.log: -break

l Log script results, error messages, and status messages to separate files, with specified names: -or ResultsFile -of FailFile -os SuccessFile
You can omit one or more of these parameters. For example, if you want to log only error messages, use only the -of parameter.

Script output options (optional)

l Begin each log file with a header containing information such as the version of Command Manager used: -h

l Print instructions in each log file and on the console: -i
This option is ignored if the script is encrypted. For information about encrypted Command Manager scripts, see Encrypting Command Manager Scripts, page 1564.

l If an Intelligence Server error occurred, print the Intelligence Server error code and the Command Manager exit code in each log file and on the console: -e

l Display script output on the console: -showoutput

l Save the results of the script in a CSV file: -csv CSVFile

l Save the results of the script in an XML file: -xml XMLFile

l Omit hidden objects in the script results: -suppresshidden
Hidden objects are MicroStrategy metadata objects whose HIDDEN property is set.

Execution options (optional)

l Halt script execution on critical errors (see Handling Execution Errors, page 1566): -stoponerror


A full list of parameters can also be accessed from a command prompt by entering cmdmgr.exe -help.

By default, the executable is installed in the following directory:

Program Files (x86)\MicroStrategy\Command Manager
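To make the parameter table concrete, the following Python sketch assembles a typical invocation as an argument list. The project source, user, script, and log file names here are hypothetical; passing a list to subprocess also takes care of quoting names that contain spaces.

```python
import subprocess

# Hypothetical paths and names -- substitute your own.
CMDMGR = r"C:\Program Files (x86)\MicroStrategy\Command Manager\cmdmgr.exe"

args = [
    CMDMGR,
    "-n", "My Project Source",   # project source name (contains a space)
    "-u", "Administrator",       # MicroStrategy user; -p omitted -> null password
    "-f", "create_users.scp",    # script file to execute
    "-o", "results.log",         # single combined log file
    "-stoponerror",              # halt on critical errors
]

# subprocess.run(args, check=True)  # uncomment on a machine with Command Manager
```

Building the command as a list, rather than one shell string, keeps the space in "My Project Source" intact without manual quoting.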

Encrypting Command Manager Scripts


By default, Command Manager scripts are saved in plain text format. This
can create a security risk if your script contains a user name and password,
such as for the CONNECT SERVER statement. You can avoid this security
risk by saving these scripts in an encrypted format.

If you create a batch file to execute a Command Manager script from the
command line, the password for the project source or Narrowcast Server
login must be stored in plain text in the batch file. You can protect the
security of this information by encrypting the script and having it connect to
a project source or Narrowcast Server when it is executed, using the
CONNECT SERVER statement. You can then execute the script from a
connection-less session, which does not require a user name or password.
The user name and password are provided in the Command Manager script,
as part of the CONNECT SERVER statement. For detailed syntax
instructions for using the CONNECT SERVER statement, see the Command
Manager Help (from within the Command Manager graphical interface, press
F1).
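For instance, a scheduled batch job could launch such an encrypted script from a connection-less session so that no credentials appear on the command line. This is an illustrative sketch: the script file name is hypothetical, and the login is assumed to live inside the encrypted script's own CONNECT SERVER statement.

```python
import subprocess

# No -u or -p on the command line: the encrypted script's CONNECT SERVER
# statement supplies the user name and password at execution time.
args = ["cmdmgr.exe", "-connlessMSTR", "-f", "encrypted_admin.scp"]

# subprocess.run(args, check=True)  # uncomment where Command Manager is installed
```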

When you encrypt a script, you specify a password for the script. This
password is required to open the script, either in the Command Manager
graphical interface, or using the LOADFILE command in the Command
Manager command line interface. Because a script must be opened before it
can be executed in the Command Manager graphical interface, the password
is required to execute the script from the graphical interface as well.
However, the password is not required to execute the script from the
command line or through the command line interface.


The password for an encrypted script cannot be blank, cannot contain any
spaces, and is case-sensitive.

Project and Configuration Locking


Command Manager does not automatically lock a project or configuration when it executes statements. Thus, any time you alter project metadata or an Intelligence Server configuration with a Command Manager script, another user could alter the metadata or configuration at the same time. This can cause metadata or configuration inconsistencies and, in the worst case, may require you to reinstall Intelligence Server or restore your project from a backup.

To avoid these inconsistencies, use the LOCK PROJECT or LOCK CONFIGURATION statements in any Command Manager scripts that make changes to a project or server configuration. These statements place a lock on the metadata or configuration. A metadata lock prevents other MicroStrategy users from modifying any objects in the project in Developer or MicroStrategy Web. A configuration lock prevents other MicroStrategy users from modifying any configuration objects, such as users or groups, in the project source.

When other users attempt to open an object in a locked project or configuration, a message informs them that the project or configuration is locked because another user is modifying it. Users can then choose to open the object in read-only mode or view more details about the lock.

Command Manager has two kinds of locks:

l Transient locks are automatically released after disconnecting.

l Permanent locks are released only after an UNLOCK command or when the project is manually unlocked. Permanent locks are indicated by the word PERMANENT in the LOCK command.


If you lock a project or configuration in a Command Manager script, make sure you release the lock at the end of the script with the UNLOCK PROJECT or UNLOCK CONFIGURATION statement.
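As a sketch of this lock/unlock pattern, the following Python snippet writes out a script skeleton whose work is bracketed by LOCK and UNLOCK statements. The project name and output file name are hypothetical, and the LIST statement merely stands in for whatever statements would actually modify the project.

```python
# Statements used here (LOCK PROJECT, LIST MEMBERS FOR USER GROUP,
# UNLOCK PROJECT) are described in this guide; "My Project" is a placeholder.
script = (
    'LOCK PROJECT "My Project";\n'
    'LIST MEMBERS FOR USER GROUP "Managers";\n'
    'UNLOCK PROJECT "My Project";\n'
)

# Write the generated script to a file that cmdmgr.exe -f can execute.
with open("locked_job.scp", "w") as f:
    f.write(script)
```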

Handling Syntax Errors


Syntax errors occur when Command Manager encounters an instruction that
it does not understand. This can be due to a typographical error (CERATE
for CREATE, for example) or a statement that does not follow the required
syntax in another way. For examples of the correct syntax for all Command
Manager statements, see the Help.

When Command Manager encounters a syntax error, it displays the portion of the instruction set where the error was detected in the Script window and highlights the instruction. An error message is also displayed on the Messages tab of the Script window. Finally, if logging is enabled in the Options dialog box, the error message in the Messages tab is written to the log file.

Handling Execution Errors


Execution errors occur when an instruction is formed correctly but returns an
unexpected result when it is executed. For example, attempting to delete a
user who does not exist in the MicroStrategy metadata generates an
execution error.

Command Manager recognizes two classes of execution errors:

l Critical errors occur when the main part of the instruction is not able to
complete. These errors interrupt script execution when the Stop script
execution on error option is enabled (GUI) or when the -stoponerror
flag is used (command line).

For example, if you submit an instruction to create a user, user1, that already exists in the MicroStrategy metadata database, Command Manager cannot create the user. Because creating the user is the main part of the instruction, this is a critical error. If the Stop script execution on error option is enabled, the script stops executing and any further instructions are ignored.

l Noncritical errors occur when the main part of the instruction is able to
complete. These errors never interrupt script execution.

For example, if you submit an instruction to create a MicroStrategy user group with two members, user1 and user2, but user2 does not exist in the MicroStrategy metadata database, Command Manager can still create the group. Because creating the group is the main part of the instruction (adding users is secondary), this is a noncritical error.

An error message is written to the Messages tab of the Script window for all
execution errors, critical or noncritical. In addition, if logging is enabled in
the Options dialog box, the error message is written to the log file.
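The distinction between the two error classes, and the effect of the -stoponerror flag, can be modeled in a few lines of Python. This is a conceptual sketch of the behavior described above, not Command Manager internals.

```python
def run_script(instructions, stop_on_error=False):
    """Run instructions in order, mimicking Command Manager's handling of
    critical vs. noncritical execution errors (conceptual model only)."""
    log = []
    for execute in instructions:
        status, message = execute()          # ("ok" | "noncritical" | "critical", text)
        log.append((status, message))
        if status == "critical" and stop_on_error:
            break                            # remaining instructions are ignored
    return log

# A noncritical error never interrupts the script; a critical one does,
# but only when stop_on_error is set (the -stoponerror flag).
demo = [
    lambda: ("noncritical", "user2 does not exist; group created anyway"),
    lambda: ("critical", "user1 already exists; user not created"),
    lambda: ("ok", "runs only when stop_on_error is False"),
]
```

With stop_on_error=True the third instruction never runs, mirroring how a critical error halts a script when Stop script execution on error is enabled.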

Command Manager and Prompted Objects


Command Manager cannot manipulate prompted objects. For example, it
cannot alter the properties of a metric that contains a prompt, and it cannot
create subscriptions for a report that contains a prompt.

This restriction extends to prompts at any level of nesting. For example, if you have a custom group that contains a prompted metric, Command Manager cannot alter the properties of that custom group.

If you attempt to execute a statement that manipulates a prompted object, Command Manager returns a noncritical execution error.

Timeout Errors
To avoid locking up the system indefinitely, Command Manager has a built-in timeout limit of 20 minutes. If a statement has been executing for 20 minutes with no response from Intelligence Server, Command Manager reports a request timeout error for that command and executes the next instruction in the script. However, Command Manager does not attempt to abort the command. In some cases, such as database-intensive tasks like purging the statistics database, the task may continue to execute even after Command Manager reports a timeout error.

The following statements are not subject to the 20-minute Command Manager timeout limit. A script containing these statements continues executing until Intelligence Server reports that the task has succeeded or failed.

l Create Project statement

l Update Project statement

l Update Privileges statement

l Import Package statement

Command Manager Script Syntax


The Command Manager script engine uses a unique syntax that is similar to
SQL and other such scripting languages. For a complete guide to the
commands and statements used in Command Manager, see the Command
Manager Help.

A Command Manager script consists of one or more script statements. Each statement ends with a semicolon (;).

Statements consist of one or more tokens. A token is a word, a list of words enclosed in quotation marks, or a symbol. A token is recognized by Command Manager as an individual unit with a specific meaning. Tokens can be:

l reserved words, which are words with a specific meaning in a Command Manager script. For a complete list of reserved words, see the Command Manager Help.

l identifiers, which are words that the user provides as parameters for the script. For example, in the statement LIST MEMBERS FOR USER GROUP "Managers"; the word Managers is an identifier. Identifiers must be enclosed in quotation marks.

In general, either double quotes or single quotes can be used to enclose identifiers. However, if you want to include either single quotes or double quotes as part of an identifier, you must either enclose that identifier in the other kind of quotes, or put a caret in front of the interior quote. For example, to refer to a metric named Count of "Outstanding" Customer Ratings, you would need to use one of the following methods:

Use single quotes to enclose the identifier:

'Count of "Outstanding" Customer Ratings'

Use double quotes to enclose the identifier and put carets in front of the
interior double quotes:

"Count of ^"Outstanding^" Customer Ratings"

If your identifier contains double-byte characters, such as characters used in the Korean, Japanese, or Chinese character sets, you must enclose the identifier in square brackets [ ]. If the identifier is also enclosed in quotation marks, these square brackets must be placed inside the quotation marks.

l symbols, such as ; , ' " ^

The caret (^) functions as an escape character. It causes any other special character that follows it to be treated literally and not interpreted as a special character. If you want to include a literal caret in your statement, you must precede it with another caret. For example, if you have a user group named ^Control, in Command Manager scripts you must refer to it as ^^Control.

l numbers in any notation

l dates

l object GUIDs

l other special characters such as carriage returns, tabs, or spaces
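The quoting and caret rules above can be collected into a small helper function. This is an illustrative Python sketch, not part of Command Manager, and it ignores the square-bracket rule for double-byte identifiers.

```python
def quote_identifier(name: str) -> str:
    """Wrap a Command Manager identifier in double quotes, escaping with carets.

    A literal caret becomes ^^ and an interior double quote becomes ^",
    following the escaping rules described above.
    """
    # Escape carets first so the carets added for quotes are not doubled.
    escaped = name.replace("^", "^^").replace('"', '^"')
    return f'"{escaped}"'
```

For example, quote_identifier('Count of "Outstanding" Customer Ratings') produces the double-quoted form with caret-escaped interior quotes shown above, and quote_identifier('^Control') produces "^^Control".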

Using Command Manager from the Command Line


In addition to the graphical user interface and the command line execution,
Command Manager has a text-based command line interface. With this
interface, you can create and execute Command Manager scripts in an
environment where the graphical user interface is unavailable, such as when
accessing a UNIX system via telnet.

When you start the command line interface, it is in console mode, with a
connection-less project source connection. The command prompt in console
mode displays the metadata source and user to which Command Manager is
connected.

Entering a Command Manager script instruction switches Command Manager into edit mode. From edit mode you can continue typing your script. You can also save or execute the script.

To see a list of instructions for the command line interface, from the
command line interface type help and press Enter. A list of Command
Manager command line instructions and an explanation of their effects is
displayed.

To Start the Command Manager Command Line Interface

From the command line, type cmdmgr.exe -interactive and press Enter. The Command Manager command line interface opens, in console mode, with an active connection-less project source connection.


Using Command Manager with OEM Software


Developers of Original Equipment Manufacturer (OEM) applications that use
embedded MicroStrategy projects may find that they need flexibility in
configuring their environment. Command Manager Runtime is a slimmed-
down version of the Command Manager command-line executable for use
with these OEM applications. For information about obtaining Command
Manager Runtime, contact your MicroStrategy sales representative.

Command Manager Runtime uses a subset of the commands available for the full version of Command Manager. If you try to execute a script with statements that are not available in Command Manager Runtime, the script fails with the message, "You are not licensed to run this command." For a list of the commands available in Command Manager Runtime, with syntax and examples for each command, see the Command Manager Runtime Help.


VERIFYING REPORTS AND DOCUMENTS WITH INTEGRITY MANAGER


MicroStrategy Integrity Manager is an automated comparison tool designed to streamline the testing of MicroStrategy reports and documents in projects. This tool can determine how specific changes in a project environment, such as the regular maintenance changes to metadata objects or hardware and software upgrades, affect the reports and documents in that project.

For instance, you may want to ensure that the changes involved in moving
your project from a development environment into production do not alter
any of your reports. Integrity Manager can compare reports in the
development and the production projects, and highlight any differences. This
can assist you in tracking down discrepancies between the two projects.

You can use Integrity Manager to execute reports or documents from a single MicroStrategy project to confirm that they remain operational after changes to the system. Integrity Manager can execute any or all reports from the project, note whether those reports execute, and show you the results of each report.

Integrity Manager can also test the performance of an Intelligence Server by recording how long it takes to execute a given report or document. You can execute the reports or documents multiple times in the same test and record the time for each execution cycle, to get a better idea of the average Intelligence Server performance time. For more information about performance tests, see Testing Intelligence Server Performance, page 1576.

For reports you can test and compare the SQL, grid data, graph, Excel, or
PDF output. For documents you can test and compare the Excel or PDF
output, or test whether the documents execute properly. If you choose not to
test and compare the Excel or PDF output, no output is generated for the
documents. Integrity Manager still reports whether the documents executed
successfully and how long it took them to execute.

l To execute an integrity test on a project, you must have the Use Integrity
Manager privilege for that project.


l Integrity Manager can only test projects in Server (three-tier) mode. Projects in Direct Connection (two-tier) mode cannot be tested with this tool.

l To test the Excel export of a report or document, you must have Microsoft
Excel installed on the machine running Integrity Manager.

What is an Integrity Test?


In an integrity test, Integrity Manager executes reports or documents from a
base project and informs you as to which reports and documents failed to
execute. Depending on the type of integrity test, Integrity Manager may
compare those reports and documents against those from another project, or
from a previously established baseline. An integrity test may also involve
comparing reports and/or documents from two previously established
baselines, and not executing against an Intelligence Server at all.

The Integrity Manager Wizard walks you through the process of setting up
integrity tests. You specify what kind of integrity test to run, what reports or
documents to test, and the execution and output settings. Then you can
execute the test immediately, or save the test for later use and re-use. For
information on reusing tests, see Saving and Loading a Test, page 1582.

Types of Integrity Tests


A single-project integrity test confirms that reports and documents from a
project execute to completion, without errors. This is useful when changes
have been made somewhere in the system, and you want to ensure that
none of the changes cause execution errors in your reports or documents.

In a single-project test, Integrity Manager executes the specified reports and documents. It then displays a list of the reports along with whether the execution of each report or document succeeded or failed. If a report or document failed, you can double-click the report name in the results list to see what error message was generated.


In addition to the single-project integrity test, Integrity Manager supports these types of comparative integrity tests:

l Project-versus-project integrity tests compare reports and/or documents from two different projects. This is useful when you are moving a project from one environment to another (for instance, out of development and into production), and you want to ensure that the migration does not cause changes in any reports or documents in the project.

l Baseline-versus-project integrity tests compare reports and/or documents from a project against a previously established baseline. The baseline can be established by running a single-project integrity test, or taken from a previous execution of a project-versus-project integrity test.

Baseline-versus-project tests can be used as an alternative to project-versus-project tests when no base project is available, or when running against a production Intelligence Server would be too costly in terms of system resources. Also, with a baseline-versus-project test, you can manually edit the baseline results against which the target project is compared.

l Baseline-versus-baseline integrity tests compare reports and/or documents from two previously established baselines against each other. These baselines can be established by running single-project integrity tests (see below), or taken from a previous execution of a project-versus-project integrity test.

These tests can be useful if you have existing baselines from previous tests that you want to compare. For example, suppose your system is configured in the recommended project life cycle of development > test > production (for more information on this life cycle, see the Managing your projects section in the System Administration Help). You have an existing baseline from a single-project test of the production project, and the results of a project-versus-project test on the development and test projects. In this situation, you can use a baseline-versus-baseline test to compare the production project to the test project.


In each of these comparative tests, Integrity Manager executes the specified reports and documents in both the baseline and the target. You can compare the report data, generated SQL code, graphs, Excel exports, and PDF output for the tested reports; you can compare the Excel exports and PDF output for tested documents, or test the execution of the documents without exporting the output. Integrity Manager informs you which reports and documents are different between the two projects, and highlights in red the differences between them.

Testing Intelligence Server Performance


In addition to testing reports and documents for execution and for accuracy
between projects, Integrity Manager can determine how long it takes an
Intelligence Server to execute a given set of reports or documents. This is
called a performance test. You can execute the reports and documents in
the integrity test multiple times, to get a better idea of the average time it
takes to execute each report.

In a performance test, Integrity Manager records the time it takes to execute each report or document. If the reports and documents are being executed more than once, Integrity Manager records each execution time. You can view the minimum, maximum, and average execution time for each report or document in the Results Summary area. In a comparative integrity test, you can also view the difference in time between the baseline and target reports and documents.

Performance Test Best Practices


The results of a performance test can be affected by many factors. The
following best practices can help ensure that you get the most accurate
results from a performance test:

l Performance comparison tests should be run as single-project integrity tests. This reduces the load on Integrity Manager and ensures that the recorded times are as accurate as possible.


To compare performance on two Intelligence Servers, MicroStrategy recommends following the steps below:

1. Perform a single-project test against one project, saving the performance results.

2. Perform a single-project test against the second project, saving the performance results.

3. Compare the two performance results in a baseline-versus-baseline test.

l Wait until the performance test is complete before attempting to view the results of the test in Integrity Manager. Otherwise, the increased load on the Integrity Manager machine may inflate the recorded times for reasons unrelated to Intelligence Server performance.

l If you are using a baseline-versus-project test or a baseline-versus-baseline test, make sure that the tests have processed the reports and/or documents in the same formats. Execution times are not recorded for each format, only for the aggregate generation of the selected formats. Thus, comparing a baseline with SQL and Graph data against a test of only SQL data is likely to give inaccurate results.

l If the Use Cache setting is selected on the Select Execution Settings page
of the Integrity Manager Wizard, make sure that a valid cache exists for
testing material. Otherwise the first execution cycle of each report takes
longer than the subsequent cycles, because it must generate the cache for
the other cycles to use. One way to ensure that a cache exists for each
object is to run a single-project integrity test of each object before you run
the performance test.

This setting only affects reports, and does not apply to documents.

l In the Integrity Manager wizard, on the Select Execution Settings page, make sure Concurrent Jobs is set to 1. This causes Intelligence Server to run only one report or document at a time, and provides the most accurate benchmark results for that Intelligence Server.

l The Cycles setting on the Select Processing Options page of the Integrity
Manager Wizard indicates how many times each report or document is
executed. A high value for this setting can dramatically increase the
execution time of your test, particularly if you are running many reports or
documents, or several large reports and documents.

l Use 64-bit Integrity Manager when the comparison data is large. By default, the 64-bit executable, MIntMgr_64.exe, is installed under C:\Program Files (x86)\MicroStrategy\Integrity Manager. Additionally, use 64-bit Integrity Manager if you encounter memory issues.

Best Practices for Using Integrity Manager


MicroStrategy recommends the following best practices when using Integrity
Manager:

l Run large integrity tests during off-peak hours, so that the load on
Intelligence Server from the integrity test does not interfere with normal
operation. You can execute integrity tests from the command line using a
scheduler, such as the Windows AT scheduler. For information about
executing integrity tests from the command line, see Executing a Test from
the Command Line, page 1584.

l Before performing a system upgrade, such as a database upgrade or a MicroStrategy metadata upgrade, create a baseline of the reports you want to test. You can create this baseline by executing a single-project integrity test. Then, after the upgrade, you can verify the upgrade process by executing a baseline-versus-project test of the baseline and the upgraded project.


l Understand how Integrity Manager answers prompted reports, and how you can configure the answers to prompted reports, as described in Executing Prompted Reports with Integrity Manager, page 1589.

l If you are having trouble comparing prompted reports, you can save static
versions of those reports in a "regression test" folder in each project, and
use those static reports for integrity tests.

l In a comparative integrity test, you must have the same OS version and
the same font installed on your machine to use the Graph view to compare
two PDF reports. Font rendering on a PDF is version and OS specific, so
differences may result in formatting issues, which can affect comparison
results.

l If your MicroStrategy security configuration involves security filters, make sure that the user executing the integrity test has the same security filters for both projects. For example, you can create a test user who has the same security filter for each project, and execute all integrity tests under this user.

l Alternatively, you can execute the test using multiple MicroStrategy users,
as described in Executing a Test Under Multiple MicroStrategy User
Accounts, page 1594. Make sure that the users that you are comparing
have matching security filters. For example, if User1 is assigned security
filter FilterA in project Project1, make sure you compare the
reports with a user who is also assigned security filter FilterA in
project Project2.

l When you are comparing graph reports and noting the differences between
the graphs, adjust the Granularity slider so that the differences are
grouped in a way that is useful. For more information about how Integrity
Manager evaluates and groups differences in graph and PDF reports, see
Grouping Differences in Graph and PDF Reports, page 1601.

l If you are executing a performance test, follow the best practices described in Testing Intelligence Server Performance, page 1576.


l When running Integrity Manager tests on a large number of objects, you may need to increase the memory available to the Integrity Manager process. You can do this by specifying a parameter when launching Integrity Manager.

Heap size should not exceed the available memory on the machine from
which Integrity Manager is launched.

For example, on a machine with 16 GB of memory, you should be able to safely use 12 GB for Integrity Manager.

l Launch Integrity Manager from the command line with the -Xmx flag and the corresponding memory size, such as -Xmx12G for 12 GB or -Xmx10240m for 10,240 MB.

For example, to execute Integrity Manager with 12 GB of memory, run the following:

MIntMgrW_64.exe -Xmx12G

Creating an Integrity Test


The following high-level procedure provides an overview of the steps involved in creating an integrity test. For an explanation of the information required at any given page in the wizard, see the Help (from the wizard, click Help, or press F1).

To Create an Integrity Test

1. Start Integrity Manager: Start > All Programs > MicroStrategy Products > Integrity Manager.

2. From the File menu, select Create Test.

3. Select the type of test you want to create:


l To compare reports and documents from two projects, select Project versus project.

l To compare reports and documents against a previously established baseline, select Baseline versus project.

l To compare reports and documents from two previously established baselines, select Baseline versus baseline.

l To confirm that reports and documents in a project execute without errors, select Single project.

4. Specify the baselines and projects to be tested. For each project, provide a MicroStrategy login and password with the Use Integrity Manager privilege for that project.

5. Select the reports and/or documents to be tested. You can select individual reports or documents, or entire folders. You can also select search objects; in this case, Integrity Manager tests all reports and documents from the results of the search object.

If you select any Intelligent Cube reports, make sure that the
Intelligent Cube the reports are based on has been published before
you perform the integrity test. Integrity Manager can test the SQL of
Intelligent Cubes even if they have not been published, but cannot test
Intelligent Cube reports based on an unpublished Intelligent Cube.

6. Specify test execution options, such as how to answer any unanswered prompts, what details to log, and whether to use report caches.

7. Select what types of analysis to perform. For reports, you can analyze
any or all of the grid data, underlying SQL, graph data, Excel export, or
PDF output. For documents you can analyze the Excel export or PDF
output.


Only reports that have been saved in Graph or Grid/Graph view can be
analyzed as graphs.

You can also select to record the execution time of each report and/or
document, to analyze the performance of Intelligence Server.

8. Review the information presented on the Summary page.

9. Click Save Test. Navigate to the desired directory, enter a file name,
and click OK.

For instructions on executing a saved test, see Saving and Loading a Test, page 1582.

10. To execute the test immediately, regardless of whether you saved the
settings, click Run. The Integrity Manager Wizard closes and Integrity
Manager begins to execute the selected reports and documents. As the
reports execute, the results of each report or document appear in the
Results Summary area of the Integrity Manager interface.

Saving and Loading a Test


When you have set up a test using the Integrity Manager Wizard, you can
choose to save your settings to a file. This enables you to execute the same
test at a later time without having to re-create the test. For example, a metric
currently being developed is causing errors in several reports. You can
execute a test on those reports to check whether the metric still causes
execution errors. Saving the test settings makes it easy to run this test once
the latest version of the metric is ready.

For security reasons, the passwords for the project logins (provided on the
Enter Base Project Information page and Enter Target Project Information
page) are not saved to the test file. You must re-enter these passwords
when you load the test.


To Save Test Settings

1. Step through the Integrity Manager Wizard and answer its questions.
For detailed instructions, see Creating an Integrity Test, page 1580.

2. When you reach the Summary page of the Integrity Manager Wizard,
click Save Test.

3. Navigate to the desired folder and enter a file name to save the test as.
By default this file will have an extension of .mtc.

4. Click OK.

You can execute the test immediately by clicking Run. The Integrity
Manager Wizard closes and Integrity Manager begins to execute the
selected reports and documents. As they execute, their results appear in the
Results Summary area of the Integrity Manager interface.

To Load a Previously Saved Test

1. In Integrity Manager, from the File menu select Load Test.

2. Navigate to the file containing your test information and open it.

The default extension for integrity test files is .mtc.

3. Step through the wizard and confirm the settings for the test.

4. At the Enter Base Project Information page and Enter Target Project Information page, enter the password for the login used to access the base or target project.

5. When you reach the Summary page, review the information presented
there. When you are satisfied that the test settings shown are correct,
click Run. The Integrity Manager wizard closes and Integrity Manager
begins to execute the selected reports and documents. As they
execute, their results appear in the Results Summary area of the
Integrity Manager interface.


Executing an Integrity Test


After creating or loading an integrity test, you can execute it by clicking Run
from the Summary page of the Integrity Manager wizard. You can also
execute a saved test from the command line, without launching the Integrity
Manager graphical interface. For instructions, see Executing a Test from the
Command Line, page 1584.

You can also re-run reports in a test that has just finished execution. For
example, a number of reports in an integrity test may fail because of an error
in a metric. You can correct the metric and then re-run those reports to
confirm that the reports now match. To re-run the reports, select them, and
then from the Run menu, select Refresh selected items.

Executing a Test from the Command Line


Integrity Manager's command line interface enables you to execute a test
without having to load the graphical interface, or to schedule a test to run at
specific times or dates. For example, you perform routine maintenance on
your data warehouse every month. Using the Windows AT command or the
UNIX scheduler, you can schedule a baseline-versus-project test to run
every month immediately after routine database maintenance. This ensures
that no reports are broken during maintenance.

If you are running Integrity Manager in a Windows environment, you must be logged in to Windows with an Administrator account. In addition, if you are running Integrity Manager directly from the command prompt, you must set the command prompt to run with full administrative privileges. To do this, right-click the command prompt shortcut and select Run As. Clear the Run this program with restricted access check box and click OK.

To Execute a Previously Saved Integrity Test from the Command Line

After creating and saving a test (for instructions, see Saving and Loading a Test, page 1582), call the Integrity Manager executable MIntMgr.exe with the parameters listed below. All parameters are optional except the -f parameter, which specifies the integrity test file path and name.

By default, the executable is installed in the following directory:

C:\Program Files (x86)\MicroStrategy\Integrity Manager

-f FileName
    Integrity test file path and name.

-b BasePassword
    Base system password. For instructions on how to specify multiple passwords, or passwords using special characters, see Password Syntax, page 1587.

-t TargetPassword
    Target system password. For instructions on how to specify multiple passwords, or passwords using special characters, see Password Syntax, page 1587.

The following parameters modify the execution of the test. They do not modify the .mtc test file.

-o OutputDirectory
    Output directory. This directory must exist before the test can be executed.

-logfile LogfileName
    Log file path and name.

-bserver BaseServer
    Base server name.

-tserver TargetServer
    Target server name.

-bport BasePort
    Base server port number.

-tport TargetPort
    Target server port number.

-bproject BaseProject
    Base project.

-tproject TargetProject
    Target project.

-blogin BaseLogin, or -blogin "BaseLogin1, .., BaseLoginN"
    Login for base project. For multiple logins, enclose all logins in double quotes (") and separate each login with a comma (,).

-tlogin TargetLogin, or -tlogin "TargetLogin1, .., TargetLoginN"
    Login for target project. For multiple logins, enclose all logins in double quotes (") and separate each login with a comma (,).

-bbaselinefile BaseBaselineFile
    Base baseline file path and name. The GUIDs of objects to be tested in the baseline file must match any GUIDs specified in the .mtc file.

-tbaselinefile TargetBaselineFile
    Target baseline file path and name. The GUIDs of objects to be tested in the baseline file must match any GUIDs specified in the .mtc file.

-bloadbalance true | -bloadbalance false
    Whether to use load balancing in the base project, that is, whether to execute the reports and documents across all nodes of the cluster (true) or on a single node (false). If this option is used, it overrides the setting in the integrity test file.

-tloadbalance true | -tloadbalance false
    Whether to use load balancing in the target project, that is, whether to execute the reports and documents across all nodes of the cluster (true) or on a single node (false). If this option is used, it overrides the setting in the integrity test file.

-folderid FolderGUID
    GUID of the test folder. If this option is used, the reports and documents specified in the integrity test file are ignored. Instead, Integrity Manager executes all reports and documents in the specified folder. This option can only be used with a single-project integrity test or a project-versus-project integrity test.
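Putting the parameters together, a saved test might be invoked as follows. This is a minimal sketch: the .mtc path, passwords, and output directory are hypothetical placeholders, and the actual run is commented out so the snippet only previews the argument string.

```python
# Sketch of invoking MIntMgr.exe with a saved test. All paths and passwords
# below are hypothetical placeholders; substitute your own values.
import subprocess

EXE = r"C:\Program Files (x86)\MicroStrategy\Integrity Manager\MIntMgr.exe"
cmd = [
    EXE,
    "-f", r"C:\Tests\monthly_regression.mtc",  # integrity test file (required)
    "-b", "BasePassword1",                     # base system password
    "-t", "TargetPassword1",                   # target system password
    "-o", r"C:\Tests\Results",                 # output directory (must exist)
]

# Preview the argument string without running anything:
print(" ".join(cmd[1:]))

# On a Windows machine with Integrity Manager installed, run the test and
# capture the exit code (see Command Line Exit Codes):
# exit_code = subprocess.run(cmd).returncode
```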

Password Syntax

When specifying passwords with special characters, or specifying multiple passwords, use the following syntax:

• If a password contains a single quote (') or a comma (,), that character must be preceded by a single quote.

• If a password contains a double quote ("), that character must be substituted by &quot;. If a password contains an ampersand (&), that character must be substituted by &amp;.

• For example, if the password is 12'&ABC"12,3 then the password must be specified as 12''&amp;ABC&quot;12',3.

• If multiple logins are used, a password must be specified for each login. The entire list of passwords must be enclosed in double quotes (") and the passwords must be separated by a comma (,).

• If multiple passwords are used and a user in the base project or target project has an empty password, the position of that user's password in the list of passwords is indicated by a space between commas.

For example, if the users for an integrity test are User1, User2, and User3, and User2 has an empty password, the list of passwords is "password1, ,password3".
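The escaping rules above can be expressed as a small helper. This is an illustrative sketch (the function names are mine, not part of the product); it reproduces the documented example and the empty-password convention.

```python
def escape_password(password: str) -> str:
    """Apply the documented escaping rules for special characters."""
    password = password.replace("&", "&amp;")   # & is substituted by &amp;
    password = password.replace('"', "&quot;")  # " is substituted by &quot;
    password = password.replace("'", "''")      # ' is preceded by a single quote
    password = password.replace(",", "',")      # , is preceded by a single quote
    return password

def format_password_list(passwords: list[str]) -> str:
    """Quote the whole list, comma-separate it, and show an empty password
    as a space between commas, as described above."""
    return '"' + ",".join(escape_password(p) if p else " " for p in passwords) + '"'

print(escape_password("12'&ABC\"12,3"))                      # 12''&amp;ABC&quot;12',3
print(format_password_list(["password1", "", "password3"]))  # "password1, ,password3"
```

Note that the ampersand and double-quote substitutions run before the single-quote and comma rules, so the characters introduced by &amp; and &quot; are not escaped again.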


Command Line Exit Codes


When an integrity test that has been executed from the command line ends,
it returns a number. This number is an exit code. If the script ends
unexpectedly, this exit code can help you find the cause of the error.

To view the error code, in the same command prompt window as the test
execution, type echo %ERRORLEVEL% and press Enter.

Exit code    Meaning

0            The test execution succeeded and all reports have a status of Matched.

1            The test execution succeeded, but at least one report has a status other than Matched.

2            Integrity Manager was unable to establish a connection to Intelligence Server, or the connection was interrupted during the test.

3            Either your Integrity Manager license has expired, or you do not have the privileges necessary to run Integrity Manager. You can view license information in License Manager.

4            The test execution failed. For more information about this error, see the integrity test log for this test.

5            The test execution was aborted by the user.
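In a wrapper script, these exit codes can be mapped to messages. A minimal sketch; the dictionary wording is paraphrased from the table above.

```python
# The documented exit codes, paraphrased, for use in a wrapper script.
EXIT_CODES = {
    0: "Succeeded; all reports Matched.",
    1: "Succeeded, but at least one report has a status other than Matched.",
    2: "Could not connect to Intelligence Server, or the connection was interrupted.",
    3: "License expired or missing privileges; check License Manager.",
    4: "Test execution failed; see the integrity test log.",
    5: "Test execution was aborted by the user.",
}

def describe_exit(code: int) -> str:
    """Translate an MIntMgr.exe exit code into a human-readable message."""
    return EXIT_CODES.get(code, f"Unknown exit code {code}")

print(describe_exit(1))
```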

Manually Editing an Integrity Test


If you need to make minor changes to an integrity test, it may be faster to
make those changes by editing the test file, rather than stepping through the
Integrity Manager Wizard.

The test file is a plain-text XML file, and can be edited in a text editor, such
as Notepad. For an explanation of all the XML tags included in the test file,
see List of Tags in the Integrity Test File, page 1606.


Executing a Test Against a Remote Intelligence Server


Integrity Manager uses the Windows TCP/IP hosts file to contact remote
Intelligence Servers. This file contains server names and IP addresses for
other networked machines that can be accessed from this machine.

In Windows, to execute an integrity test against an Intelligence Server on a machine other than the one Integrity Manager is running on, you need to add an entry for that Intelligence Server machine to the hosts file on the machine Integrity Manager is running on.

To Add an Entry to the Hosts File

1. In the Windows system folder, navigate to the \system32\drivers\etc folder.

2. Open the hosts file with a text editor, such as Notepad.

3. For each Intelligence Server machine that you want to test against, add
a line to the file in the same format as the examples given in the file.

4. Save and close the hosts file. You can now execute integrity tests
against the Intelligence Servers specified in the file.
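An entry uses the standard hosts-file format: an IP address followed by the host name. A hypothetical example (both the address and the server name are placeholders for your environment):

```
# IP address      host name
10.0.0.25         ISERVER01
```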

Executing Prompted Reports with Integrity Manager


In a prompted report, the user specifies certain objects, such as the
elements of an attribute, or the range of values for a metric. For an
introduction to prompts, see the Basic Reporting Help.

Integrity Manager can use any of the following methods to resolve prompts:

• Personal answer: Personal answers are default prompt answers that are saved for individual MicroStrategy logins. Any prompts with personal answers saved for the login using Integrity Manager can be resolved using those personal answers.

• Default object answer: A prompted report can have two possible default answers: a default answer saved with the prompt, and a default answer saved with the report. These default answers can be used to resolve the prompt. If both default answers exist, Integrity Manager uses the answer saved with the report.

• Integrity Manager user-defined answer: Any required value and hierarchy prompts can be answered according to the defaults provided in the Select Prompt Settings page. You can provide default answers for value prompts, and a default number of elements for hierarchy prompts.

• Integrity Manager internal answer: Integrity Manager can use its internal logic to attempt to answer any other required prompts without default answers. For example, a prompt that requires a certain number of elements to be selected from a list can be answered by selecting the minimum number of elements from the beginning of the list.

By default Integrity Manager uses all of these options, in the order listed
above. You can disable some options or change the order of the options in
the Advanced Options dialog box in the Integrity Manager Wizard.

For example, you may want to never use your personal answers to answer
prompts, and use the user-defined answers instead of the default answers
for value prompts. You can configure the user-defined answers for value
prompts in the Select Prompt Settings page. Then, in the Advanced Options
dialog box, clear the Personal answer check box and move Integrity
Manager user-defined answer above Default object answer.
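The fallback behavior described above can be sketched as an ordered chain of methods. This is an illustrative model only; the method names and data shape are my assumptions, not product APIs.

```python
# Illustrative model: each resolution method either yields an answer or None,
# and Integrity Manager tries the enabled methods in the configured order.
DEFAULT_ORDER = ["personal", "default_object", "user_defined", "internal"]

def resolve_prompt(available_answers: dict, order=DEFAULT_ORDER):
    """Return the first available answer, trying each method in order."""
    for method in order:
        answer = available_answers.get(method)
        if answer is not None:
            return answer
    return None  # prompt stays unanswered; the report becomes Not Supported

# Disabling personal answers and preferring user-defined answers over the
# default object answers corresponds to a reordered, shortened chain:
custom_order = ["user_defined", "default_object", "internal"]
answers = {"personal": "Q1 2024", "default_object": "Q2 2024", "user_defined": "Q3 2024"}
print(resolve_prompt(answers, custom_order))  # Q3 2024
```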

Optional Prompts
You control whether Integrity Manager answers optional prompts on the
Select Prompt Settings page of the Integrity Manager Wizard.

• To answer optional prompts in the same way as required prompts, select the Answer optional prompts check box.

• To leave optional prompts that do not have default or personal answers unanswered, clear the Answer optional prompts check box.


Using Non-Default Personal Answers in Prompts


By default, when Integrity Manager answers a prompt with a personal
answer, it uses only the default personal answer for each prompt. If a prompt
does not have a default personal answer for the current user, Integrity
Manager moves to the next method of prompt resolution.

To change this default, in the Advanced Options dialog box, select the
Group personal prompt answers by their names option. When this option
is selected, Integrity Manager executes each report/document once for each
personal answer for each prompt in the report/document. If multiple prompts
in the report/document have personal answers with the same name, those
personal answers are used for each prompt in a single execution of the
report/document.

For personal prompt answers to be grouped, the answers must have the
exact same name. For example, if the base project contains a personal
prompt answer named AnswerA and the target project contains a personal
prompt answer named Answer_A, those prompt answers will not be
grouped together.

For example, consider a report with two prompts, Prompt1 and Prompt2. The
user executing the report has personal answers for each of these prompts.
The personal answers are named as follows:

Prompt     Answers

Prompt1    AnswerA, AnswerB

Prompt2    AnswerA, AnswerC, AnswerD

Integrity Manager executes this report four times, as shown in the table below:

Execution    Prompt1 answer                 Prompt2 answer

1            Personal answer AnswerA        Personal answer AnswerA

2            Personal answer AnswerB        (next prompt answer method)

3            (next prompt answer method)    Personal answer AnswerC

4            (next prompt answer method)    Personal answer AnswerD

Since Prompt1 and Prompt2 both have a personal answer saved with the
name AnswerA, Integrity Manager groups those answers together in a single
execution. Only Prompt1 has an answer named AnswerB, so Integrity
Manager executes the report with AnswerB for Prompt1 and uses the next
available method for answering prompts to answer Prompt2. In the same
way, only Prompt2 has answers named AnswerC and AnswerD, so when
Integrity Manager executes the report using those answers for Prompt2 it
uses the next available prompt answer method for Prompt1.
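The grouping behavior in this example can be modeled as follows. This is a sketch inferred from the example above, not the product's actual implementation.

```python
# Sketch: answer names are grouped across prompts, and each distinct name
# produces one execution of the report. Prompts without an answer of that
# name fall through to the next prompt answer method.
def plan_executions(personal_answers: dict[str, list[str]]) -> list[dict[str, str]]:
    names: list[str] = []
    for answers in personal_answers.values():
        for name in answers:
            if name not in names:  # keep first-appearance order, no duplicates
                names.append(name)
    return [
        {
            prompt: (name if name in answers else "(next prompt answer method)")
            for prompt, answers in personal_answers.items()
        }
        for name in names
    ]

plan = plan_executions({
    "Prompt1": ["AnswerA", "AnswerB"],
    "Prompt2": ["AnswerA", "AnswerC", "AnswerD"],
})
print(len(plan))  # 4 executions
print(plan[0])    # {'Prompt1': 'AnswerA', 'Prompt2': 'AnswerA'}
```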

Unanswered Prompts
If a prompt cannot be answered by Integrity Manager, the report execution
fails and the report's status changes to Not Supported. A detailed
description of the prompt that could not be answered can be found in the
Details tab of the Report Data area for that failed report. To view this
description, select the report in the Results summary area and then click the
Details tab.

You can configure Integrity Manager to open a Not Supported report in MicroStrategy Web. You can answer any prompts manually and save the report. Integrity Manager then executes the newly saved report, using the specified prompt answers.

Prompts that cannot be answered by Integrity Manager's internal logic include:


• Prompts that cannot be answered at all, such as an element list prompt that contains no elements in the list

• Level prompts that use the results of a search object to generate a list of possible levels

• Prompted metric qualifications (used in filters or custom groups)

• MDX expression prompts

To Resolve Unanswered Prompts in MicroStrategy Web

Configure the Integrity Test to Open the Reports in MicroStrategy Web

1. Create an integrity test. Step through the Integrity Manager Wizard and
enter the information required on each page.

2. In the Select Prompt Settings page, click Advanced Options.

3. Select the Link to MicroStrategy Web for unresolved prompts check box.

4. In the URL for Base connection and URL for Target Connection
fields, type the URL for the baseline and target projects' Web servers.
To test each URL, click the Test button. If it is correct, a browser
window opens at the main MicroStrategy Web page for that server.

The default URL for MicroStrategy Web is:

http://webservername/MicroStrategy/asp/Main.aspx

where webservername is the name of your MicroStrategy Web server machine.

5. Click OK.

6. Finish defining the test, then execute it.


Resolve the Prompts in MicroStrategy Web

1. If any reports contain prompts that cannot be resolved by Integrity Manager, the Link to MicroStrategy Web for Unresolved Prompts dialog box opens.

2. To save the report with the correct prompt answers, click the report's
name in the dialog box.

If a Login dialog box opens, select an authentication method, enter a username and password, and click OK.

3. Answer the prompts for the report and save it. Depending on your
choices in the Advanced Options dialog box, you may need to save the
report as a static, unprompted report.

4. In Integrity Manager, click Continue.

To continue the integrity test without re-running the report, click Ignore. The report is listed in the Results Summary area with a status of Not Supported. To skip all future requests to resolve prompts in MicroStrategy Web for this integrity test, click Ignore All.

Executing a Test Under Multiple MicroStrategy User Accounts


When you create an integrity test, you can specify multiple MicroStrategy
user accounts to execute the reports and documents in the test.

For example, your MicroStrategy system may use security filters to restrict
access to data for different users. If you know the MicroStrategy login and
password for a user who has each security filter, you can run the integrity
test under each of these users to ensure that the security filters are working
as designed after an upgrade. You can also compare a set of reports from
the same project under two different users to ensure that the users are
seeing the same data.

On the Enable Multiple Logins page of the Integrity Manager Wizard, you specify the authentication method, MicroStrategy login, and password for each user. Integrity Manager executes each report/document in the integrity test under each user account, one account at a time, in the order the accounts are listed. If you are executing a comparative integrity test, the results from the first user in the base project are compared with the results from the first user in the target project, and so on.

For example, you create a project-versus-project integrity test with reports Report1, Report2, and Report3. You are testing the reports with users Alice and Carol in the base project. You want to compare Alice's results in the base project with Bob's results in the target project, and Carol's results in the base project with Alice's results in the target project, so you configure the Enable Multiple Logins page accordingly.

When the test is executed, the reports are executed in the following order:

Report execution    Base project report and user    Target project report and user

1                   Report1, Alice                  Report1, Bob

2                   Report2, Alice                  Report2, Bob

3                   Report3, Alice                  Report3, Bob

4                   Report1, Carol                  Report1, Alice

5                   Report2, Carol                  Report2, Alice

6                   Report3, Carol                  Report3, Alice

Note that the reports executed by Alice in the base project are compared
with the reports executed by Bob in the target project, and the reports
executed by Carol in the base project are compared with the reports
executed by Alice in the target project.
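The resulting order can be modeled as a nested loop over user pairs and reports. A sketch reproducing the table above:

```python
# Sketch of the execution order with multiple logins: each (base user,
# target user) pair runs every report before the next pair starts.
def execution_order(reports, user_pairs):
    return [
        (report, base_user, target_user)
        for base_user, target_user in user_pairs
        for report in reports
    ]

order = execution_order(
    ["Report1", "Report2", "Report3"],
    [("Alice", "Bob"), ("Carol", "Alice")],
)
for i, (report, base, target) in enumerate(order, 1):
    print(i, report, base, target)
```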


To Execute a Test with Multiple Users

1. Create an integrity test, including the information described in the steps below. Step through the Integrity Manager Wizard and enter the information required on each page. For details about the information required on each page, click Help to open the help for that page of the wizard.

2. On the Welcome page, select the Enable Multiple Logins check box.

3. On the Enable Multiple Logins page, for each user, specify the
authentication mode, login, and password.

4. Make sure the users are in the order that you want the test to be
executed in. In addition, if you are creating a comparative integrity test,
make sure that the users whose results you want to compare are paired
up correctly in the tables.

5. Finish stepping through the wizard and entering the required information. When the test is executed, each report/document is executed under each specified user account.

Ignoring Dynamic SQL When Comparing SQL


Dynamic SQL generates SQL statements that are partially created at the
time of execution. Dynamic SQL may be generated differently in the base
project and in the target project, so it can cause reports to be flagged as Not
Matched even if the report SQL is otherwise identical.

You can configure Integrity Manager to ignore dynamic SQL in its comparison. To do this, make changes in two places: in the report's VLDB properties and in Integrity Manager.


To Configure Integrity Manager to Ignore Dynamic SQL

1. For reports that use dynamic SQL, enclose the dynamic SQL in
identifying SQL comments. Enter the comments in the VLDB properties
Pre/Post statements.

For example, before each section of dynamic SQL, include a beginning comment line, such as:

/* BEGIN DYNAMIC SQL */

At the end of each section of dynamic SQL, include an ending comment line, such as:

/* END DYNAMIC SQL */

2. In Integrity Manager, create a comparative integrity test by stepping through the Integrity Manager wizard.

3. On the Select Processing Options page, select the SQL/MDX check box, then click Advanced Options.

4. Select the SQL/MDX category.

5. In the Dynamic SQL Start field, type the text that matches the text you
entered in the VLDB properties to indicate the beginning of the dynamic
SQL. For this example, type /* BEGIN DYNAMIC SQL */

6. In the End field, type the text that matches the text you entered in the
VLDB properties to indicate the end of the dynamic SQL. For this
example, type /* END DYNAMIC SQL */

7. Click OK, then continue through the wizard.
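The effect of ignoring marked dynamic SQL can be sketched as stripping everything between the start and end markers before comparing. This is illustrative only; Integrity Manager's internal comparison is not exposed, and the SQL strings are placeholders.

```python
import re

# Sketch: remove everything between the same start/end markers that were
# entered in the VLDB properties and in the Advanced Options dialog box.
START, END = "/* BEGIN DYNAMIC SQL */", "/* END DYNAMIC SQL */"

def strip_dynamic_sql(sql: str) -> str:
    pattern = re.escape(START) + r".*?" + re.escape(END)
    return re.sub(pattern, "", sql, flags=re.DOTALL)

base = "SELECT a FROM t /* BEGIN DYNAMIC SQL */ WHERE id = 123 /* END DYNAMIC SQL */"
target = "SELECT a FROM t /* BEGIN DYNAMIC SQL */ WHERE id = 456 /* END DYNAMIC SQL */"

# The dynamic sections differ, but the stripped SQL compares as identical:
print(strip_dynamic_sql(base) == strip_dynamic_sql(target))  # True
```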

Matching Equivalent SQL Strings


Sometimes reports in the base project and the target project include SQL that is functionally equivalent but slightly different. For example, reports in the base project might use a table prefix of TEST while reports in the target project use a table prefix of PROD. You want Integrity Manager to treat the table prefixes as identical for purposes of comparison, because reports that differ only in their table prefixes should be considered identical.

In this case, you can use the SQL Replacement feature to replace TEST with
PREFIX in the base project, and PROD with PREFIX in the target project.
Now, when Integrity Manager compares the report SQL, it treats all
occurrences of TEST in the base and PROD in the target as PREFIX, so they
are not considered to be differences.

The changes made by the SQL Replacement Table are not stored in the SQL
files for each report. Rather, Integrity Manager stores those changes in
memory when it executes the integrity test.

Access the SQL Replacement feature from the Advanced Options dialog
box, on the Select Processing Options page of the Integrity Manager wizard.
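The replacement behavior can be sketched as normalizing both SQL strings in memory before comparing them. The prefixes and table name below are placeholders from the example above.

```python
# Sketch of the in-memory SQL Replacement comparison: TEST -> PREFIX in the
# base SQL, PROD -> PREFIX in the target SQL, then compare the results.
def normalize(sql: str, replacements: dict[str, str]) -> str:
    for old, new in replacements.items():
        sql = sql.replace(old, new)
    return sql

base_sql = "SELECT * FROM TEST_FACT_SALES"
target_sql = "SELECT * FROM PROD_FACT_SALES"

matched = normalize(base_sql, {"TEST": "PREFIX"}) == normalize(target_sql, {"PROD": "PREFIX"})
print(matched)  # True
```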

Viewing the Results of a Test


Once you have started executing a test, information about the reports and
documents being tested appears in the Results Summary area of Integrity
Manager. This area lists all the selected reports and documents, by name
and path. Each report or document also shows one of the following statuses:

• Pending reports and documents have not yet begun to execute.

• Running reports and documents are in the process of executing.

In a performance test, this status appears as Running (#/#). The first number is the current execution cycle. The second number is the number of times the report or document will be executed in the test.

• Paused (#/#) reports and documents, in a performance test, have executed some but not all of their specified number of cycles when the test execution is paused. The first number is the number of cycles that have been executed. The second number is the number of times the report or document will be executed in the test.


• Completed reports and documents have finished their execution without errors.

• Timed Out reports and documents did not finish executing in the time specified in the Max Timeout field in the Select Execution Settings page. These reports and documents have been canceled by Integrity Manager and will not be executed again during this run of the test.

• Error indicates that an error has prevented this report or document from executing correctly. To view the error, double-click the status. The report details open in the Report Data area of Integrity Manager, below the Results Summary area. The error message is listed in the Execution Details section.

• Not Supported reports and documents contain one or more prompts for which an answer could not be automatically generated. To see a description of the errors, double-click the status. For details of how Integrity Manager answers prompts, see Executing Prompted Reports with Integrity Manager, page 1589.

Additional information for Completed reports and documents is available in the Data, SQL, Graph, and Excel columns:

• Matched indicates that the results from the two projects are identical for the report or document. In a single-project integrity test, Matched indicates that the reports and documents executed successfully.

• Not Matched indicates that a discrepancy exists between the two projects for the report or document. To view the reports or documents from each project in the Report Data area, select them in the Results Summary area.

• Not Compared indicates that Integrity Manager was unable to compare the reports and documents for this type of analysis. This can be because the report or document was not found in the target project, because one or more prompts are not supported by Integrity Manager, or because an error prevented the report or document from executing.


• Not Available indicates that Integrity Manager did not attempt to execute the report or document for this type of analysis. This may be because this type of analysis was not selected on the Select Processing Options page, or (if N/A is present in the Graph column) because the report was not saved as a Graph or Grid/Graph.

To view a Completed report or document and identify discrepancies, select its entry in the Results Summary. The report or document appears in the Report Data area of Integrity Manager, below the Results Summary.

In a comparative integrity test, both the base and the target report or
document are shown in the Report Data area. Any differences between the
base and target are highlighted in red, as follows:

• In the Data, SQL, or Excel view, the differences are printed in red. In Data and Excel view, to highlight and bold the next or previous difference, click the Next Difference or Previous Difference icon.

• In the Graph view, the current difference is circled in red. To circle the next or previous difference, click the Next Difference or Previous Difference icon. To change the way differences are grouped, use the Granularity slider. For more information about differences in graph reports, see Grouping Differences in Graph and PDF Reports, page 1601.

Viewing graphs in Overlap layout enables you to switch quickly between the
base and target graphs. This layout makes it easy to compare the
discrepancies between the two graphs.

Viewing and Editing Notes


Notes are used to track additional information about reports and documents.
You can view the notes attached to a report or document in the Notes tab of
the Report Data area.

• Users of Integrity Manager can view, add, and edit notes even if they do not have the privileges to view, add, or edit notes in MicroStrategy Web or Developer.

• Notes are not supported on versions of Intelligence Server prior to 9.0. If Integrity Manager connects to an Intelligence Server of version 8.1.2 or earlier, the Notes tab displays the message "Notes are not supported for this connection."

• In a baseline-versus-project or baseline-versus-baseline test, the notes for the baselines can be viewed but not edited.

To make sure you are viewing the most recent version of the notes, click
Refresh. Integrity Manager contacts Intelligence Server and retrieves the
latest version of the notes attached to the report or document.

To add a note, enter the new note and click Submit. To edit the notes, click
Edit, make changes to the listed notes, and click Submit.

If a Login dialog box opens, select an authentication method, enter a
username and password, and click OK.

Grouping Differences in Graph and PDF Reports


To compare two PDF reports, the machines that generate them must use the
same OS version and have the same fonts installed. Font rendering in a PDF
is version- and OS-specific, so differences may result in formatting
issues, which can affect comparison results.

When Integrity Manager compares two graph or PDF reports, it saves the
graphs as .png or .pdf files. It then performs a pixel-by-pixel comparison of
the two images. If any pixels are different in the base and target graph, the
graph or PDF is considered Not Matched.

Adjacent pixel differences are grouped together and treated as a single
difference. When you view the graph or PDF reports, Integrity Manager
draws a red boundary around the currently selected difference. To navigate
through the differences, use the Next Difference and Previous Difference
icons on the Report Data toolbar.


Each difference has a boundary of unchanged pixels that is treated as part
of the difference. You can adjust the size of this boundary with the
Granularity slider on the Report Data toolbar. Increasing the granularity
causes multiple differences near each other to be treated as a single
difference. This can be useful when you want to treat the changes to the
formatting of a title or legend as a single difference, so that you can
quickly navigate to any other differences.
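The grouping described above amounts to clustering differing pixels by proximity. Integrity Manager's exact algorithm is not published in this guide, so the following is a minimal sketch, assuming granularity maps to the pixel gap tolerated within a single difference:

```python
# Sketch of grouping adjacent pixel differences into regions. This is an
# illustration of the technique, not Integrity Manager's implementation.
# "granularity" is the number of unchanged pixels allowed between two
# differing pixels that still count as one difference.

def group_differences(base, target, granularity=1):
    """Return bounding boxes (min_row, min_col, max_row, max_col) for
    groups of differing pixels. base/target are equal-sized 2D lists."""
    rows, cols = len(base), len(base[0])
    diff = {(r, c) for r in range(rows) for c in range(cols)
            if base[r][c] != target[r][c]}
    boxes = []
    while diff:
        # Flood-fill from an arbitrary differing pixel, joining pixels
        # whose Chebyshev distance is within the granularity.
        stack = [diff.pop()]
        group = set(stack)
        while stack:
            r, c = stack.pop()
            near = {p for p in diff
                    if abs(p[0] - r) <= granularity
                    and abs(p[1] - c) <= granularity}
            diff -= near
            group |= near
            stack.extend(near)
        rs = [r for r, _ in group]
        cs = [c for _, c in group]
        boxes.append((min(rs), min(cs), max(rs), max(cs)))
    return boxes
```

With a larger granularity, nearby changed words merge into one boxed difference, which mirrors the behavior of the Granularity slider described above.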

In the image below, the title for the graph has been changed between the
baseline and the target. In the base graph, the title is in normal font; in the
target, it is in italic font.

The white space between the words is the same in both the base and target
reports. When the granularity is set to a low level, this unchanged space
causes Integrity Manager to treat each word as a separate difference, as
seen below:

If the granularity is set to a higher level, the space between the words is no
longer sufficient to cause Integrity Manager to treat each word as a separate
difference. The differences in the title are all grouped together, as seen
below:


Accessing the Saved Results of a Test


When you execute a test, Integrity Manager saves the results of that test to
a location specified in the Select Execution Settings page of the Integrity
Manager Wizard. If the option labeled Store output in a time stamped
sub-folder of this directory is selected, the test results are stored in a
subfolder of the specified output folder. Otherwise, the test results are
stored directly in the output folder.

A summary of the test results is available in HTML, in the file
ResultsSummary.html. This file gathers data from the file
ResultsSummary.xml and formats the data with the stylesheets
style.css and ResultsSummary.xsl.

While the test is executing, a temporary results file, temp.xml, is
created. This file is updated as each report or document completes
execution. If the system crashes during test execution, the most recent
results are stored in this file.

Report Execution Output


Within the output folder, Integrity Manager creates a folder named images
to store the images used in the ResultsSummary files. For a comparative
integrity test, a folder named common is created to hold the serialized
comparison files.

Integrity Manager also creates a separate folder within the output folder for
the report or document results from each project. These folders are named
after the Intelligence Server machines on which the projects are kept.

l For the baseline server, _0 is appended to the machine name to create the
name of the folder.

l For the target server, _1 is appended to the machine name.

For example, the image below is taken from a machine that executes a
project-versus-project integrity test at nine AM on the first Monday of
each month. The baseline project is on a machine named ARCHIMEDES, and the
target project is on a machine named PYTHAGORAS. The folder for the
results from the baseline project is archimedes_0, and the folder for the
results from the target project is pythagoras_1.

In a baseline-versus-project integrity test, the baseline folder is named
baseline_0. In a baseline-versus-baseline integrity test, the baseline
folder is named baseline_0 and the target folder is named baseline_1.

Each results folder contains a number of files containing the results of each
report that is tested. These files are named <ID>_<GUID>.<ext>, where
<ID> is the number indicating the order in which the report was executed,
<GUID> is the report object GUID, and <ext> is an extension based on the
type of file. The report results are saved in the following files:

l SQL is saved in plain text format, in the file <ID>_<GUID>.sql.

In a comparative integrity test, if you select the Save color-coded SQL
differences to an HTML file check box, the SQL is also saved in HTML
format, in the file <ID>_<GUID>.htm. In this file, the SQL that is
different from the SQL in the other project's version of the report is
highlighted in red.

l Grid data is saved in CSV format, in the file <ID>_<GUID>.csv, but only
if you select the Save CSV files check box in the Advanced Options dialog
box.

l Graph data is saved in PNG format, in the file <ID>_<GUID>.png, but
only if the report has been saved in Graph or Grid/Graph format.


l Excel data is saved in XLS format, in the file <ID>_<GUID>.xls, but only
if you select the Save XLS files check box in the Advanced Options dialog
box.

l PDF data is saved in PDF format, in the file <ID>_<GUID>.pdf.

l Notes are saved in plain text format, in the file
<ID>_<GUID>.notes.txt. This file is created even if the corresponding
report does not have notes.

l Only report results for formats requested in the Select Processing Options
page during test setup are generated.

l SQL, graph, and PDF data are always saved if they are generated. Grid
and Excel data are only saved if you choose to save those results during
test creation. Notes are always saved.

l Integrity Manager also creates a file named <ID>_<GUID>.ser for each
report or document. These files contain serialized binary data that
Integrity Manager uses when you open a previously saved set of test
results, and are not intended for use by end users. These files are stored
in the same folder as the test results.

Each results folder also contains a file called baseline.xml that provides
a summary of the tested reports. This file is used to provide a baseline
summary for baseline-versus-project and baseline-versus-baseline integrity
tests.
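The naming convention above can be used to index a results folder programmatically. This is a sketch assuming only the <ID>_<GUID>.<ext> convention described here; the sample file names and the index_results helper are illustrative, not part of the product:

```python
import re

# Index Integrity Manager result files by report GUID, based on the
# <ID>_<GUID>.<ext> naming convention described above. MicroStrategy
# object GUIDs are assumed to be 32 hexadecimal characters.
RESULT_NAME = re.compile(
    r"^(?P<id>\d+)_(?P<guid>[0-9A-Fa-f]{32})"
    r"\.(?P<ext>sql|htm|csv|png|xls|pdf|notes\.txt|ser)$")

def index_results(filenames):
    """Map each report GUID to its execution order and result files."""
    index = {}
    for name in filenames:
        m = RESULT_NAME.match(name)
        if not m:
            continue  # skip baseline.xml, the images folder, etc.
        entry = index.setdefault(m["guid"],
                                 {"order": int(m["id"]), "files": []})
        entry["files"].append(m["ext"])
    return index
```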

To Open a Previously Saved Set of Test Results

l In Integrity Manager, go to File > Open Results.

l Browse to the location of the saved test.

l Select the ResultsSummary.xml file and click Open.


List of Tags in the Integrity Test File


When you save an integrity test, it is saved as an XML file, with an extension
of .MTC. For instructions on saving or loading an integrity test, see Saving
and Loading a Test, page 1582.

If needed, you can edit the integrity test file with any XML editor or text
editor, such as Notepad. The table below lists all the XML tags in an integrity
test file, with an explanation of each tag.

XML Tag Function

General test information

Execution_Mode

Type of integrity test, as displayed in the ResultsSummary file:

Project versus Project Integrity Test
Baseline versus Project Integrity Test
Baseline versus Baseline Integrity Test
Single Project Integrity Test

This value is for display and localization purposes only.

Execution_Mode_Value

Type of integrity test, as executed by Integrity Manager:

1: Project versus Project integrity test
2: Single Project integrity test
3: Baseline versus Project integrity test
4: Baseline versus Baseline integrity test

LocalVersion Version of Integrity Manager that created the test.

isMultiUser

Whether this integrity test supports multiple logins:

true: This integrity test supports multiple logins.
false: This integrity test does not support multiple logins.


Base_Connection_Index

Which ConnectionIndex (0 or 1) indicates the base connection. The other
ConnectionIndex is the target connection.

Base or Target connection information

Except in a single project integrity test, this section is repeated for both the base
connection and the target connection.

ConnectionIndex=

0 or 1, depending on the value of Base_Connection_Index and whether the
information below is for the base or target connection.

Server_Name Name or IP address of the Intelligence Server.

Port Port number of the Intelligence Server.

Authentication_Mode

Login authentication mode corresponding to the Login tag below. If
isMultiUser is set to true, there can be multiple Authentication_Mode and
Login tag pairs.

1: Standard
2: Windows
16: LDAP
32: Database

Login

Login ID corresponding to the Authentication_Mode tag above. If
isMultiUser is set to true, there can be multiple Authentication_Mode and
Login tag pairs.

Project Name of the project.

Project_DssID GUID of the project.

Version The version of Intelligence Server that hosts the project.

Use_Load_Balancing

Whether to use load balancing across the cluster for this connection,
that is, whether to execute the reports/documents across all nodes of the
cluster or on a single node:


true: Use load balancing.

false: Do not use load balancing.

baselineConnection

Whether this connection uses a baseline file:

true: This connection uses a baseline.
false: This connection uses a live Intelligence Server.

baselineFile

The full path to the baseline file, if baselineConnection is set to true.

Objects to be tested

This section must be repeated for each object included in the integrity test.

Type

Type of object to be processed by Integrity Manager:

3: Report
8: Folder
18: Shortcut
39: Search object
55: Document

GUID GUID of the object.

Name Name of the object.

Path Path to the object within the project.

Rounds (This entry is deprecated.)

Reporttype

Object type. If Type is set to 3:

768: Grid view report
769: Graph view report
770: SQL view report
774: Grid/Graph view report


776: Intelligent Cube

778: Transaction

4096: Datamart report

If Type is set to 55:

14081: Document

chaseSearches

Whether embedded search objects are processed by the integrity test:

true: Process embedded search objects.
false: Do not process embedded search objects.

objMatchType

Whether to match objects by ID or path name:

0: Match by ID.
1: Match by path name.

Use_Obj_Match

Whether object matching is used. This is only available for Project to
Project tests:

true: Object matching is used. This allows you to select which object
from the base project is compared to which object from the target
project.
false: Object matching is not used.

Obj_Match_Map

Only displays if Use_Obj_Match is true. Inside the map is one or more
Entry statements which each contain a Key/Value pair for a mapped object
where:

Key is the GUID of the object in the base project
Value is the GUID of the object in the target project

Prompt settings

textAnswer

Custom answer for text prompts. To provide multiple custom answers for
text prompts, include each answer in a separate textAnswer node.


textAnswerIsNull

Whether custom answers are provided for text prompts:

true: Custom answers are not provided for text prompts.
false: Custom answers are provided for text prompts.

numberAnswer

Custom answer for numeric or Big Decimal prompts. To provide multiple
custom answers for these prompts, include each answer in a separate
numberAnswer node.

numberAnswerIsNull

Whether custom answers are provided for numeric and Big Decimal prompts:

true: Custom answers are not provided for numeric and Big Decimal
prompts.
false: Custom answers are provided for numeric and Big Decimal prompts.

dateAnswer

Custom answer for date prompts. To provide multiple custom answers for
date prompts, include each answer in a separate dateAnswer node.

dateAnswerIsNull

Whether custom answers are provided for date prompts:

true: Custom answers are not provided for date prompts.
false: Custom answers are provided for date prompts.

numberElementHierPrompt

Number of elements that Integrity Manager selects to answer element
hierarchy prompts.

numberElementHierPromptIsNull

Whether a custom value is provided for the number of elements used to
answer element hierarchy prompts:

true: A custom value is not provided for element hierarchy prompts.
false: A custom value is provided for element hierarchy prompts.

answerOptionalPrompt

Whether optional prompts are answered in this integrity test:


true: Optional and required prompts are answered.

false: Only required prompts are answered.

PromptAnswerSource

The prompt answer sources to be used by this integrity test, in the order
that they are to be used, separated by commas. Negative numbers indicate
that this answer source is disabled.

All four prompt answer sources must be included in this parameter.

1: Personal answer
-1: Personal answer (disabled)
2: Default object answers
-2: Default object answers (disabled)
3: Integrity Manager user-defined answer
-3: Integrity Manager user-defined answer (disabled)
4: Integrity Manager internal answer
-4: Integrity Manager internal answer (disabled)

PromptAnswerSource_VAL

A prompt answer source to be used by this integrity test. If multiple
prompt answer sources are specified, each must have its own
PromptAnswerSource_VAL entry, in the order that they are to be used.
Values include:

• Personal answer
• Default object answers
• Integrity Manager user-defined answer
• Integrity Manager internal answer

isLinkPopup

Whether to open reports with unanswered prompts in MicroStrategy Web:


true: Open reports with unanswered prompts.

false: Do not execute reports with unanswered prompts.

BaseURL

If isLinkPopup is set to true, the URL for the MicroStrategy Web server
for the base project.

TargetURL

If isLinkPopup is set to true, the URL for the MicroStrategy Web server
for the target project.

Personal_Answer_Option

Whether to use only default personal prompt answers, or group personal
prompt answers by their names:

USE_DEFAULT: Use only default personal prompt answers for each prompt.
GROUP_BY_NAME: Group personal prompt answers by their names.

Personal_Answer_Option_Desc

If Personal_Answer_Option is set to USE_DEFAULT, this must be set to Use
only default personal prompt answer for each prompt.

If Personal_Answer_Option is set to GROUP_BY_NAME, this must be set to
Group personal prompt answers by their names.

Execution Settings

MaxTimeout

Maximum time, in minutes, that a report can run before Integrity Manager
cancels it.

numSimultaneousExecutions

Maximum number of simultaneous report/document executions during the
integrity test.

useCache

Whether to use the cached version of a report, if one is available:

true: Use the report cache.
false: Do not use the report cache; execute each report against the
Intelligence Server.


Output_Directory

Full path to the location where the integrity test results are saved.

isAppendDateToOutputDir

Whether to store the test results in a subdirectory of the
Output_Directory, named by date and time of the integrity test execution:

true: Store results in a time-stamped subdirectory of the specified
directory.
false: Store results in the specified directory.

LogLevel

Enable or disable logging:

1: Logging is enabled.
-5: Logging is disabled.

LogFile Full path to the log file.

Processing options

isDataEnabled

Whether to enable data comparison for reports:

true: Enabled.
false: Disabled.

isSQLEnabled

Whether to enable SQL comparison for reports:

true: Enabled.
false: Disabled.

isGraphEnabled

Whether to enable graph comparison for reports:

true: Enabled.
false: Disabled.

isExcelEnabled

Whether to enable Excel comparison for reports:

true: Enabled.
false: Disabled.

isPdfEnabled

Whether to enable PDF comparison for reports:


true: Enabled.

false: Disabled.

isRsdExcelEnabled

Whether to enable Excel comparison for documents:

true: Enabled.
false: Disabled.

isRsdPdfEnabled

Whether to enable PDF comparison for documents:

true: Enabled.
false: Disabled.

isRsdExecEnabled

Whether to enable execution for documents:

true: Enabled.
false: Disabled.

reportCycles Number of performance test cycles to run for each report.

documentCycles Number of performance test cycles to run for each document.

SQL processing options

isColorCodeSQL

Whether to save the generated SQL to an HTML file with differences
highlighted in red:

true: Enabled.
false: Disabled.

dynamicSQLStart Text marking the beginning of any dynamic SQL.

dynamicSQLEnd Text marking the end of any dynamic SQL.

from SQL to be replaced by the SQL indicated by the to tag.

to SQL to replace the SQL indicated by the from tag.

applyTo

Where to apply the SQL replacement:

1: Base only.


2: Target only.

3: Base and target.

Data processing options

isCSVEnabled

Whether to save the data for each report as a CSV file:

true: Enabled.
false: Disabled.

Excel processing options

For all Excel processing options, if the option is left blank, the setting for that option is
imported from the user's MicroStrategy Web export preferences, as per the Use
Default option in the Integrity Manager Wizard.

isXLSEnabled

Whether to save data for each report as an XLS file:

true: Enabled.
false: Disabled.

ExportReportTitle

Whether to include the report title in the Excel chart:

0: Do not export the report title.
-1: Export the report title.

ExportPageByInfo

Whether to include which report objects are grouped in a page-by
selection in the Excel chart:

0: Do not export the page-by information.
-1: Export the page-by information.

isExportFilterDetails

Whether to include the report filter details in the Excel chart:

true: Export the filter details.
false: Do not export the filter details.

isRemoveColumn

Whether to remove the extra "Metrics" column from the Excel chart:


0: Remove the extra column.
1: Do not remove the extra column.
2: Use the default setting in the MicroStrategy Web preferences.

ExpandAllPages

Whether to include all report objects in the Excel chart, or only the
objects in the default page-by selection:

0: Export only the default page-by.
-1: Export all objects.

excelVersion

Excel version of the exported file:

1: Excel 2000.
2: Excel XP/2003.
4: Excel 2007 or newer.

isExportMetricAsText

Whether to export metric values as text or as numeric values:

true: Export metrics as text.
false: Export metrics as numeric values.

isExportHeaderAsText

Whether to export data header values as text or as numeric values:

true: Export headers as text.
false: Export headers as numeric values.

isSeparateSheets

Whether to export each page of the report to a separate sheet in the
Excel file:

true: Export each page as a separate sheet.
false: Export the entire report on a single sheet.

isLiveCharts

Whether to export graphs in the report as live Excel graphs, or as static
images:


true: Export graphs as live Excel graphs.
false: Export graphs as static images in the Results folder.

ExcelEmbedImages

Whether images and graphs in the report can be accessed from the Excel
spreadsheet without having to run MicroStrategy Web:

0: Images and graphs are not embedded in the spreadsheet, and cannot be
accessed without running the report in MicroStrategy Web.
-1: Images and graphs are embedded in the spreadsheet.

isOfficeRefresh

Whether MicroStrategy Office can refresh reports after they have been
exported to Excel:

true: Reports can be refreshed from Office.
false: Reports are static and cannot be refreshed from Office.

This information applies to the legacy MicroStrategy Office add-in, the
add-in for Microsoft Office applications which is no longer actively
developed. It was replaced by a new add-in, MicroStrategy for Office,
which supports Office 365 applications. The initial version does not yet
have all the functionality of the previous add-in.

If you are using MicroStrategy 2021 Update 2 or a later version, the
legacy MicroStrategy Office add-in cannot be installed from Web.

For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.

ExcelReportHeader

Text of the custom header added to the Excel spreadsheet.


ExcelReportHeaderLocation

The location of the custom header in the Excel export:

0: Display the custom header before other report headers.
1: Display the custom header after other report headers.
2: The custom header replaces any other report headers.

ExcelReportFooter Text of the custom footer added to the Excel spreadsheet.

PDF processing options

For all PDF processing options, if the option is left blank or not listed in the MTC file,
that option is processed using the default setting in Intelligence Server's PDF
generation options.

Scaling

Whether to adjust the font to fit the report to a certain percentage of
the PDF page (ScalePercentage), or to fit a certain number of report
pages on the page (ScalePagesWide and ScalePagesTall):

0: Use ScalePercentage.
1: Use ScalePagesWide and ScalePagesTall.

ScalePercentage Percentage to scale the font if Scaling is set to 0.

ScalePagesWide Number of report pages per PDF page width, if Scaling is set to 1.

ScalePagesTall Number of report pages per PDF page height, if Scaling is set to 1.

GridandGraph

Whether to print the report's grid and graph on the same page:

0: Print the grid and graph on separate PDF pages.
1: Print the grid and graph on the same PDF page.

Orientation

Page orientation:

0: Portrait.
1: Landscape.


PrintCoverDetails

Whether to include a cover page:

0: Do not print a cover page.
1: Print a cover page.

CoverPageDetailsContents

What to include in the cover page, if PrintCoverDetails is set to 1:

0: Report filter details.
1: Report details.

CoverPageLocation

The location of the cover page, if PrintCoverDetails is set to 1:

0: After the report.
1: Before the report.

ExpandAllPages

Whether to include all report objects in the PDF, or only objects in the
default page-by selection:

0: Export only the default page-by.
1: Export all objects.

PaperType

Paper size of the PDF:

0: Letter (8.5"x11")
1: Legal (8.5"x14")
2: Executive (7.25"x10.5")
3: Folio (8.5"x13")
4: A3 (11.69"x16.54")
5: A4 (8.27"x11.69")
6: A5 (5.83"x8.27")

MarginLeft Left margin, in inches.

MarginRight Right margin, in inches.


MarginTop Top margin, in inches.

MarginBottom Bottom margin, in inches.

MaxHeaderSize Maximum header size, in inches.

MaxFooterSize Maximum footer size, in inches.

GraphFormat

Whether to use bitmaps for graphs:

10: Use bitmaps for graphs.
11: Do not use bitmaps for graphs.

PrintQuality

Whether to use draft quality bitmaps for graphs, if GraphFormat is set
to 11:

96: Use draft quality bitmaps.
288: Use fine quality bitmaps.

EmbedFonts

Whether to embed fonts in the PDF:

0: Do not embed fonts.
1: Embed fonts.

HeaderLeft Left page header.

HeaderCenter Center page header.

HeaderRight Right page header.

FooterLeft Left page footer.

FooterCenter Center page footer.

FooterRight Right page footer.

ReportHeader Report header.

Performance processing options

inclErrorRptInPerformance

Whether to include in the performance test the execution times from
reports/documents that do not complete execution:


true: Include the execution times from all reports/documents.

false: Include only the execution times from reports/documents that
execute successfully.
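Since the test file is plain XML, the tags in the table above can also be read programmatically. The sketch below is illustrative only: the element nesting of a real .MTC file is not shown in this guide, so the flat SAMPLE_MTC layout is an assumption; only the tag names and value formats come from the table.

```python
import xml.etree.ElementTree as ET

# Hypothetical, flattened fragment of an integrity test file. Tag names
# (MaxTimeout, useCache, PromptAnswerSource) are from the table above;
# the nesting is an assumption for illustration.
SAMPLE_MTC = """
<test>
  <MaxTimeout>30</MaxTimeout>
  <useCache>false</useCache>
  <PromptAnswerSource>1,-2,3,-4</PromptAnswerSource>
</test>
"""

SOURCES = {1: "Personal answer",
           2: "Default object answers",
           3: "Integrity Manager user-defined answer",
           4: "Integrity Manager internal answer"}

def decode_prompt_sources(value):
    """Return (name, enabled) pairs in the order they are tried.
    Negative numbers mark a disabled answer source."""
    out = []
    for token in value.split(","):
        n = int(token)
        out.append((SOURCES[abs(n)], n > 0))
    return out

root = ET.fromstring(SAMPLE_MTC)
timeout = int(root.findtext("MaxTimeout"))
sources = decode_prompt_sources(root.findtext("PromptAnswerSource"))
```

For example, the value 1,-2,3,-4 decodes to personal answers tried first, default object answers disabled, user-defined answers tried next, and internal answers disabled.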


SQL GENERATION AND DATA PROCESSING: VLDB PROPERTIES


VLDB properties allow you to customize the SQL that MicroStrategy
generates, and determine how data is processed by the Analytical Engine.
You can configure properties such as SQL join types, SQL inserts, table
creation, Cartesian join evaluation, checks for null values, and so on.

VLDB properties can provide support for unique configurations and optimize
performance in special reporting and analysis scenarios. You can use the
VLDB Properties Editor to alter the syntax or behavior of a SQL statement
and take advantage of unique, database-specific optimizations. You can
also alter how the Analytical Engine processes data in certain situations,
such as subtotals with consolidations and sorting null values.

Each VLDB property has two or more VLDB settings which are the different
options available for a VLDB property. For example, the Metric Join Type
VLDB property has two VLDB settings, Inner Join and Outer Join.

Some of the qualities that make VLDB properties valuable are:

l Complete database support: VLDB properties allow you to easily
incorporate and take advantage of new database platforms and versions.

l Optimization: You can take advantage of database-specific settings to
further enhance the performance of queries.

l Flexibility: VLDB properties are available at multiple levels so that the SQL
generated for one report, for example, can be manipulated separately from
the SQL generated for another, similar report. For a diagram, see Order of
Precedence, page 1624.

Modify any VLDB property with caution, and only after you understand the
effects of the VLDB settings you want to apply. A given VLDB setting can
support or optimize one system setup, but the same setting can cause
performance issues or errors for other systems. Use this manual to learn
about the VLDB properties before modifying any default settings.


Supporting Your System Configuration


Different SQL standards among various database platform (DBMS) types
require that some VLDB properties are initialized to different default settings
depending on the DBMS used. For example, when using a Microsoft Access
2000 database, the Join Type VLDB property is set to Join 89. This type of
initialization ensures that different DBMS types can be supported. These
initializations are also used as the default VLDB settings for the respective
DBMS type. To create and review a detailed list of all the default VLDB
settings for different DBMS types, see Default VLDB Settings for Specific
Data Sources, page 1925.

VLDB properties also help you configure and optimize your system. You can
use MicroStrategy for different types of data analysis on a variety of data
warehouse implementations. VLDB properties offer different configurations
to support or optimize your reporting and analysis requirements in the best
way.

For example, you may find that enabling the Set Operator Optimization
VLDB property provides a significant performance gain by using set
operators such as EXCEPT and INTERSECT in your SQL queries. At the same
time, this property must offer the option to be disabled, because not all
DBMS types support these operators. VLDB properties give you a choice in
how you configure your system.
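As a rough illustration of what set operator support changes at the SQL level, the two queries below compute the same result, one with INTERSECT and one with the kind of subquery a DBMS without set operators would need. SQLite is used here only as a convenient stand-in database; this is not MicroStrategy-generated SQL, and the table and column names are hypothetical.

```python
import sqlite3

# Two hypothetical fact tables: customers with sales in each year.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales_2023 (customer_id INTEGER);
    CREATE TABLE sales_2024 (customer_id INTEGER);
    INSERT INTO sales_2023 VALUES (1), (2), (3);
    INSERT INTO sales_2024 VALUES (2), (3), (4);
""")

# Form a DBMS with set operator support can use.
with_set_op = conn.execute(
    "SELECT customer_id FROM sales_2023 "
    "INTERSECT SELECT customer_id FROM sales_2024").fetchall()

# Equivalent fallback for a DBMS without INTERSECT.
without_set_op = conn.execute(
    "SELECT DISTINCT customer_id FROM sales_2023 "
    "WHERE customer_id IN (SELECT customer_id FROM sales_2024)").fetchall()

assert sorted(with_set_op) == sorted(without_set_op)  # same rows either way
```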

Order of Precedence
VLDB properties can be set at multiple levels, providing flexibility in the way
you can configure your reporting environment. For example, you can choose
to apply a setting to an entire database instance or only to a single report
associated with that database instance.

The following diagram shows how VLDB properties that are set for one level
take precedence over those set for another.


The arrows depict the override authority of the levels, with the report level
having the greatest authority. For example, if a VLDB property is set one
way for a report and the same property is set differently for the database
instance, the report setting takes precedence.

Properties set at the report level override properties at every other level.
Properties set at the template level override those set at the metric level, the
database instance level, and the DBMS level, and so on.

A limited number of properties can be applied at each level.
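The override order above can be sketched as a lookup from the most specific level to the least specific. The level list follows the text (the full precedence diagram may include additional levels, such as project, that are omitted here), and the data structure is illustrative:

```python
# Sketch of the override order described above: a property set at a more
# specific level wins over the same property set at a broader level.
PRECEDENCE = ["report", "template", "metric", "database instance", "DBMS"]

def resolve_vldb_property(name, settings_by_level):
    """Return (level, value) for the most specific level that sets the
    property; the DBMS-level default applies when nothing overrides it."""
    for level in PRECEDENCE:
        if name in settings_by_level.get(level, {}):
            return level, settings_by_level[level][name]
    raise KeyError(name)
```

For example, if Metric Join Type is set to Inner Join at the database instance level but to Outer Join at the report level, the report-level setting wins.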

Accessing and Working with VLDB Properties


Opening the VLDB Properties Editor
You can change the VLDB settings for different levels using the VLDB
Properties Editor. (Levels are described in Order of Precedence, page
1624.) You can access the VLDB Properties Editor in several ways,
depending on what level of MicroStrategy objects you want to impact with
your VLDB property changes. For example, you can apply a setting to an
entire database instance, or only to a single report associated with that
database instance.


When you access the VLDB Properties Editor for a database instance, you
see the most complete set of the VLDB properties. However, not all
properties are available at the database instance level. The rest of the
access methods have a limited number of properties available depending on
which properties are supported for the selected object/level.

The table below describes every way to access the VLDB Properties Editor:

To set VLDB properties at a given level, open the VLDB Properties Editor
as described below.

Attribute

In the Attribute Editor, on the Tools menu, select VLDB Properties.

Database Instance

Choose one of the following:

In the Database Instance Manager, right-click the database instance you
want to modify VLDB settings for, and choose VLDB Properties.

In the Project Configuration Editor, select the Database Instances: SQL
data warehouses or the Database Instances: MDX data warehouses category,
then click VLDB Properties.

Metric

In the Metric Editor, on the Tools menu, point to Advanced Settings, and
then select VLDB Properties.

Project

In the Project Configuration Editor, expand Project definition, and
select Advanced. In the Project-Level VLDB settings area, click
Configure.

Report (or Intelligent Cube)

In the Report Editor or Report Viewer, on the Data menu, select VLDB
Properties. This is also the location in which you can access the VLDB
Properties Editor for Intelligent Cubes.

Template

In the Template Editor, on the Data menu, select VLDB Properties.

Transformation

In the Transformation Editor, on the Tools menu, select VLDB Properties.
Only one property (Transformation Role Processing) is available at this
level. All other VLDB properties must be accessed from one of the other
levels listed in this table.


Only a single property, called Unbalanced or Ragged Hierarchy, can be set at the hierarchy level. This property's purpose and instructions to set it are described in the MDX Cube Reporting Help.

VLDB properties exist at the filter level and the function level, but they are
not accessible through the VLDB Properties Editor.

All VLDB properties at the DBMS level are used for initialization and
debugging only. You cannot modify a VLDB property at the DBMS level.

The VLDB Properties Editor has the following areas:

l VLDB Settings list: Shows the list of folders into which the VLDB
properties are grouped. Expand a folder to see the individual properties.
The settings listed depend on the level at which the VLDB Properties
Editor was accessed (see the table above). For example, if you access the
VLDB Properties Editor from the project level, you only see Analytical
Engine properties.

l Options and Parameters box: Where you set or change the parameters
that affect the SQL syntax.

l SQL preview box: (Only appears for VLDB properties that directly impact
the SQL statement.) Shows a sample SQL statement and how it changes
when you edit a property.

When you change a property from its default, a check mark appears on the
folder in which the property is located and on the property itself.

Creating a VLDB Settings Report


A VLDB settings report displays all the current settings for each VLDB
property that is available through a given instance of the VLDB Properties
Editor. Part of a sample report of settings is shown below for VLDB
properties available at the report level:


For each report, you can also decide whether to:

l Display the physical setting names alongside the names that appear in the
interface. The physical setting names can be useful when you are working
with MicroStrategy Technical Support to troubleshoot the effect of a VLDB
property.

l Display descriptions of the values for each setting. This displays the full
description of the option chosen for a VLDB property.

l Hide all settings that are currently set to default values. This can be useful
if you want to see only those properties and their settings which have been
changed from the default.

The steps below show you how to create a VLDB settings report. A common
scenario for creating a VLDB settings report is to create a list of default
VLDB settings for the database or other data source you are connecting to,
which is described in Default VLDB Settings for Specific Data Sources, page
1925.


To Create a VLDB Settings Report

1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on accessing the
VLDB Properties Editor, see Opening the VLDB Properties Editor, page
1625.)

2. From the Tools menu, select Create VLDB Settings Report.

3. A report is generated that displays all VLDB properties available at the
   level from which you accessed the VLDB Properties Editor. It also
   displays all current settings for each VLDB property.

4. You can choose to have the report display or hide the information
described above, by selecting the appropriate check boxes.

5. You can copy the content in the report using the Ctrl+C keys on your
keyboard. Then paste the information into a text editor or word
processing program (such as Microsoft Word) using the Ctrl+V keys.

Viewing and Changing VLDB Properties


You can change VLDB properties to alter the syntax of a SQL statement and
take advantage of database-specific optimizations.

Modifying any VLDB property should be performed with caution only after
understanding the effects of the VLDB settings that you want to apply. A
given VLDB setting can support or optimize one system setup, but the same
setting can cause performance issues or errors for other systems. Use this
manual to learn about the VLDB properties before modifying any default
settings.

Some VLDB properties are characterized as "advanced properties": they are relevant only to certain projects and system configurations. To work with advanced VLDB properties, see Viewing and Changing Advanced VLDB Properties, page 1630.


To View and Change VLDB Properties

1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of Precedence, page 1624.)

2. Modify the VLDB property you want to change. For use cases,
examples, sample code, and other information on every VLDB property,
see Details for All VLDB Properties, page 1636.

3. If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.

4. Click Save and Close.

5. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.

Viewing and Changing Advanced VLDB Properties


By default, some VLDB properties are hidden when you open the VLDB
Properties Editor. These properties are categorized as advanced VLDB
properties because in general they are used infrequently and are relevant to
only certain projects and system configurations. These settings are not
dependent on any user privileges.

When modifying advanced VLDB properties, the same caution should be taken as when modifying any other VLDB property.


To Display the Advanced Properties

1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of Precedence, page 1624.)

2. From the Tools menu, select Show Advanced Settings.

3. Modify the VLDB property you want to change. For use cases,
examples, sample code, and other information on every VLDB property,
see Details for All VLDB Properties, page 1636.

4. If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.

5. Click Save and Close.

6. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.

Setting All VLDB Properties to Default


You can return all VLDB properties (those displayed in your chosen instance
of the VLDB Properties Editor) to the default settings recommended for your
database platform by MicroStrategy.

If you perform this procedure, any changes you may have made to any or all VLDB properties displayed in the chosen view of the VLDB Properties Editor will be lost. For details on which VLDB properties are displayed depending on how you access the VLDB Properties Editor, see Details for All VLDB Properties, page 1636.

To Set All VLDB Property Settings to their Default Status

1. Use either or both of the following methods to see your system's VLDB
properties that are not set to default. You should know which VLDB
properties you will be affecting when you return properties to their
default settings:

l Generate a report listing VLDB properties that are not set to the
default settings. For steps, see Creating a VLDB Settings Report,
page 1627, and select the check box named Do not show settings
with Default values.

l Display an individual VLDB property by viewing the VLDB property whose default/non-default status you are interested in. (For steps, see Viewing and Changing VLDB Properties, page 1629.) At the bottom of the Options and Parameters area for that property (on the right), you can see whether the Use default inherited value check box is selected. Next to this check box name, information appears about what level the setting is inheriting its default from.

2. Open the VLDB Properties Editor to display the VLDB properties that
you want to set to their original defaults. (For information on object
levels, see Order of Precedence, page 1624.)

3. In the VLDB Properties Editor, you can identify any VLDB properties
that have had their default settings changed, because they are
identified with a check mark. The folder in which the property is stored
has a check mark on it (as shown on the Joins folder in the example
image below), and the property name itself has a check mark on it (as
shown on the gear icon in front of the Cartesian Join Warning property
name in the second image below).


4. From the Tools menu, select Set all values to default. See the
warning above if you are unsure about whether to set properties to the
default.

5. In the confirmation window that appears, click Yes. All VLDB properties
that are displayed in the VLDB Properties Editor are returned to their
default settings.


6. Click Save and Close to save your changes and close the VLDB
Properties Editor.

7. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.

Upgrading the VLDB Options for a Particular Database Type


The database connection type specifies the type of database that the
database instance represents, for example, Oracle 8i or Netezza 4.x. (The
database connection type is specified on the General tab of the Database
Instances Editor.) This setting ensures that the appropriate default VLDB
properties, SQL syntax, and functions are used for your database type.

You must have Administrator privileges to upgrade the metadata. For information on upgrading the metadata and your MicroStrategy environment, see the Upgrade Help. When the metadata upgrade updates the database type information:

l It loads new database types.

l It loads updated properties for existing database types that are still
supported.

l It keeps properties for existing database types that are no longer supported. If an existing database type does not have any updates, but the properties for it have been removed, the process does not remove them from your metadata.

The steps below show you how to upgrade database types.

l You have upgraded your MicroStrategy environment, as described in the Upgrade Help.


l You have an account with administrative privileges.

To Update Database Types

1. In Developer, log in to a project source using an account with
   administrative privileges.

2. From the Folder List, go to Administration > Configuration
   Managers > Database Instances.

3. Right-click any database instance and select Edit.

4. To the right of the Database connection type drop-down list, click
   Upgrade.

5. Click Load.

6. Use the arrows to add any required database types by moving them
from the Available database types list to the Existing database
types list.

7. Click OK twice.

Modifying the VLDB Properties for a Warehouse Database Instance
If your database vendor updates its functionality, you may want to reset
some VLDB properties in MicroStrategy. For example, if the timeout period
is set too low and too many report queries are being cut off, you may want to
modify the SQL Time Out (Per Pass) setting.

For descriptions and examples of all VLDB properties and to see what
properties can be modified, see Details for All VLDB Properties, page 1636.

To modify the VLDB properties related to a database instance, use the appropriate steps from the table in Opening the VLDB Properties Editor, page 1625, to access the VLDB Properties Editor for the database instance. Then follow the steps for Viewing and Changing VLDB Properties, page 1629.


Details for All VLDB Properties


Modify VLDB properties with caution and only after understanding the
effects of the VLDB settings you want to apply. A given VLDB setting can
support or optimize one system setup, but the same setting can cause
performance issues or errors for other systems. Use this section to learn
about the VLDB properties before modifying any default settings.

Subtotals Over Consolidations Compatibility


Consolidations allow users to group specific attribute elements together and place the group on a report template as if the group were an attribute. The elements of a consolidation can have arithmetic calculations performed on them. The Subtotals over Consolidations Compatibility property allows you to determine how the Analytical Engine calculates consolidations.

l Evaluate subtotals over consolidation elements and their corresponding attribute elements (behavior for 7.2.x and earlier) (default): In MicroStrategy version 7.2.x and earlier, if a calculation includes a consolidation, the Analytical Engine calculates subtotals across the consolidation elements as well as across all attribute elements that comprise the consolidation element expressions.

l Evaluate subtotals over consolidation elements only (behavior for 7.5 and later): In MicroStrategy version 7.5 and later, if a calculation includes a consolidation, this setting allows the Analytical Engine to calculate only those elements that are part of the consolidation.

When you enable this setting, be aware of the following requirements and
options:

This VLDB property must be set at the project level for the calculation to be
performed correctly.

The setting takes effect when the project is initialized, so after this setting is
changed you must reload the project or restart Intelligence Server.


After you enable this setting, you must enable subtotals at either the
consolidation level or the report level. If you enable subtotals at the
consolidation level, subtotals are available for all reports in which the
consolidation is used. (Consolidation Editor > Elements menu > Subtotals >
Enabled.) If you enable subtotals at the report level, subtotals for
consolidations can be enabled on a report-by-report basis. (Report Editor >
Report Data Options > Subtotals > Yes. If Default is selected, the Analytical
Engine reverts to the Enabled/Disabled property as set on the consolidation
object itself.)

If the project is registered on an Intelligence Server version 7.5.x but is accessed by clients using Developer version 7.2.x or earlier, leave this property setting on "Evaluate subtotals over consolidation elements and their corresponding attribute elements." Otherwise, metric values may return as zeroes when Developer 7.2.x users execute reports with consolidations, or when they pivot in such reports.

Change this property from the default only when all Developer clients have
upgraded to MicroStrategy version 7.5.x.

Levels at Which You Can Set This

Project only

Three consolidations called Super Regions are created, defined as follows:

l East (({Cust Region=Northeast} + {Cust Region=Mid-Atlantic}) + {Cust Region=Southeast})

l Central ({Cust Region=Central} + {Cust Region=South})

l West ({Cust Region=Northwest} + {Cust Region=Southwest})

With the first setting selected, "Evaluate subtotals over consolidation elements
and their corresponding attribute elements," the report appears as follows:


The Total value is calculated for more elements than are displayed in the Super
Regions column. The Analytical Engine is including the following elements in
the calculation: East + (Northeast + Mid-Atlantic + Southeast) + Central +
(Central + South) + West + (Northwest + Southwest).

With the second setting selected, "Evaluate subtotals over consolidation elements only," and with subtotals enabled, the report appears as follows:

The Total value is now calculated for only the Super Regions consolidation
elements. The Analytical Engine is including only the following elements in the
calculation: East + Central + West.
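The difference between the two totals can be sketched in a few lines, using invented cost values for the seven customer regions in the Super Regions definitions above:

```python
# Invented costs per customer region; the consolidation definitions match
# the Super Regions example above.
region_cost = {"Northeast": 10, "Mid-Atlantic": 20, "Southeast": 30,
               "Central": 40, "South": 50, "Northwest": 60, "Southwest": 70}
consolidations = {
    "East": ["Northeast", "Mid-Atlantic", "Southeast"],
    "Central": ["Central", "South"],
    "West": ["Northwest", "Southwest"],
}
# Each consolidation element is the sum of its underlying regions.
element_values = {name: sum(region_cost[r] for r in regions)
                  for name, regions in consolidations.items()}

# "Elements only" (7.5 and later): East + Central + West.
total_elements_only = sum(element_values.values())

# "Elements and their corresponding attribute elements" (7.2.x and earlier):
# the consolidation elements plus every underlying region element.
total_with_attributes = total_elements_only + sum(region_cost.values())

print(total_elements_only)    # 280
print(total_with_attributes)  # 560
```

With these numbers the 7.2.x-style total is exactly double the consolidation-only total, which is the double-counting effect the second option removes.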

Apply Filter Options for Queries Against In-Memory Datasets


Apply Filter Options for queries against in-memory datasets is an advanced property that is hidden by default. See Viewing and Changing Advanced VLDB Properties, page 1630, for information on how to display this property.

Apply Filter Options for queries against in-memory datasets determines how
many times the view filter is applied, which can affect the final view of data.

Consider this simple report, which shows yearly cost:


You create a Yearly Cost derived metric that uses the following definition:

Sum(Cost){!Year%}

The level definition of {!Year%} defines the derived metric to ignore filtering related to Year and to perform no grouping related to Year (for explanation and examples of defining the level for metrics, see the Advanced Reporting Help). This means that this derived metric displays the total cost for all years, as shown in the report below:

You can also further filter this report using a view filter. For example, a view
filter is applied to this report, which restricts the results to only 2014, as shown
below:

By default, only Cost for 2014 is displayed, but Yearly Cost remains the same since it has been defined to ignore filtering and grouping related to Year. This is supported by the default option Apply view filter to passes touching fact tables and last join pass of the Apply Filter Options for queries against in-memory datasets VLDB property.

If analysts of this report are meant to be more aware of the cost data that goes into the total of Yearly Cost, you can modify the Apply Filter Options for queries against in-memory datasets VLDB property to use the option Apply view filter only to passes touching fact tables. This displays the other elements of Year, as shown in the report below:

You have the following options for the Apply Filter Options for queries
against in-memory datasets VLDB property:

l Apply view filter only to passes touching fact tables: This option
applies the view filter to only SQL passes that touch fact tables, but not to
the last pass that combines the data. As shown in the example above, this
can include additional information on the final display by removing the
view filter from the final display of the report.

l Apply view filter to passes touching fact tables and last join pass
(default): This option applies the view filter to SQL passes that touch fact
tables as well as the last pass that combines the data. As shown in the
example above, this applies the view filter to the final display of the report
to ensure that the data meets the restrictions defined by the view filter.
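A rough sketch of the two options, using the Yearly Cost example above. The data and pass structure are simplified illustrations, not the SQL the engine actually generates:

```python
# Invented fact rows; the view filter restricts the report to 2014.
rows = [{"year": 2013, "cost": 100}, {"year": 2014, "cost": 150},
        {"year": 2015, "cost": 125}]

def view_filter(row):
    return row["year"] == 2014

# Pass touching the fact table for Cost: filtered under both options.
cost_by_year = {r["year"]: r["cost"] for r in rows if view_filter(r)}
# Pass for Yearly Cost: its {!Year%} level ignores the Year filter entirely.
yearly_cost = sum(r["cost"] for r in rows)

def final_view(apply_filter_to_last_pass):
    joined = [{"year": r["year"], "cost": cost_by_year.get(r["year"]),
               "yearly_cost": yearly_cost} for r in rows]
    if apply_filter_to_last_pass:  # the default option
        joined = [r for r in joined if view_filter(r)]
    return joined

print(len(final_view(True)))   # 1: only the 2014 row survives the last pass
print(len(final_view(False)))  # 3: all years appear, Yearly Cost unchanged
```

Either way, Yearly Cost is 375 on every row; the option only controls whether the 2013 and 2015 rows reach the final display.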

Levels at Which You Can Set This

Project, report, and template

Custom Group Display for Joint Elements


The Custom Group Display for Joint Elements VLDB property determines
whether to display all attribute elements or just a single attribute element for
custom groups that include multiple attributes for a single custom group
element. A custom group must meet the following criteria for this VLDB
property to affect the display of the custom group elements:


l Two or more attributes are included in the qualifications for a single custom group element. This includes custom group elements that are defined using the following filtering techniques:

Multiple filter qualifications that are based on attributes are used to define
a custom group element. For example, you can include one filter
qualification that filters data for only the year 2011, and another filter
qualification that filters data for the Northeast region. This would include
both the attributes Year and Region for the custom group element. Steps
to create filter qualifications for custom group elements are provided in the
Advanced Reporting Help.

A joint element list is used to define the custom group element. A joint
element list is a filter that allows you to join attribute elements and then
filter on that attribute result set. In other words, you can select specific
element combinations, such as quarter and category. Steps to create a
joint element list are provided in the Advanced Reporting Help.

l The individual attribute elements must be displayed for each custom group
element. For steps to display the individual attribute elements for a custom
group element, see the Advanced Reporting Help.

For custom groups that meet the criteria listed above, the Custom Group
Display for Joint Elements VLDB property provides the following formatting
options:

l Display element names from all attributes in the joint element (default): Displays all of the attribute elements that are included in the filter qualifications for the custom group element. For example, the attributes Region and Category are used in a joint element list, which is then used to create a custom group element. When this custom group is included in a report, the attribute elements, for each qualification of the joint element list, are displayed for the custom group elements, as shown in the report below:


The attribute elements for both Region and Category are displayed for
each custom group element.

l Display element names from only the first attribute in the joint
element: Displays only one attribute element for the attributes that are
included in the filter qualifications for the custom group element. An
attribute element from the attribute that is first in terms of alphabetical
order is displayed for the custom group. For example, the attributes
Region and Category are used in separate filter qualifications, which are
then used to create a custom group element. When this custom group is
included in a report, the Category attribute element is displayed for the
custom group elements, as shown in the report below.

Only the attribute elements for the Category attribute are displayed. The
attribute elements for Region are not displayed because Category is first
in terms of alphabetical order.
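The two display options can be sketched as follows. The attribute names come from the example above; the element values ("Northeast", "Books") are invented for illustration:

```python
# One custom group element built from a joint qualification on Region and
# Category; the element values are invented for illustration.
element_attributes = {"Region": "Northeast", "Category": "Books"}

def display(mode):
    attrs = sorted(element_attributes)  # alphabetical: Category before Region
    if mode == "all":                   # default option
        return ", ".join(element_attributes[a] for a in attrs)
    return element_attributes[attrs[0]]  # "first attribute" option

print(display("all"))    # Books, Northeast
print(display("first"))  # Books (Category is first alphabetically)
```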

Levels at Which You Can Set This

Project only

Display Null On Top


The Display Null on Top VLDB property determines where NULL values
appear when you sort data. The default is to display the NULL values at the
top of a list of values when sorting.


Wherever NULL values occur in a report, they appear as user-defined strings. NULL values result from a variety of scenarios. NULL values can come from data retrieved from the database, from cross-tabulation on a report, or from data aggregation on a report. You can specify the characters or strings that appear for NULL values. To do this, access the Project Configuration Editor, select the Report definition: Null values category, and type the strings you want to display in the appropriate fields.
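The sorting behavior can be sketched with a key function that places NULLs first or last. This is an illustrative model, not how the Analytical Engine is implemented:

```python
# Invented metric values containing NULLs (None).
values = [3, None, 1, None, 2]

def sort_with_nulls(vals, nulls_on_top=True):
    # Key tuple: the first element pushes NULLs to the top (or bottom),
    # the second element orders the real values ascending.
    return sorted(vals, key=lambda v: ((v is not None) if nulls_on_top
                                       else (v is None), v or 0))

print(sort_with_nulls(values, True))   # [None, None, 1, 2, 3]
print(sort_with_nulls(values, False))  # [1, 2, 3, None, None]
```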

Levels at Which You Can Set This

Project, report, and template

Document Grids from Multiple Datasets


The Document Grids from Multiple Datasets property determines whether
objects in Grid/Graphs in documents must come from a single dataset or can
come from multiple datasets.

l Objects in document grids must come from the grid's source dataset: If you select this option, objects in a Grid/Graph must come from a single dataset, the source dataset used by the Grid/Graph. For example, a document contains two datasets. Dataset 1 contains Region and Revenue; Dataset 2 contains Region and Profit. You cannot create a Grid/Graph with Region, Revenue, and Profit. You can use this option for backwards compatibility with existing documents.

l Allow objects in document grids to come from multiple datasets: By default, a single Grid/Graph can contain objects from multiple datasets, providing additional levels of data analysis. A Grid/Graph can contain Region and Revenue from Dataset 1 as well as Profit from Dataset 2.

See the Document Creation Help for background information on creating grids or graphs in documents, including using multiple datasets on a single grid or graph.
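Conceptually, a multi-dataset grid joins the datasets on their shared attribute. The sketch below uses the Region/Revenue/Profit example from above with invented values:

```python
# Invented datasets sharing the Region attribute.
dataset1 = {"Northeast": {"revenue": 500}, "South": {"revenue": 400}}
dataset2 = {"Northeast": {"profit": 90}, "South": {"profit": 60}}

# A multi-dataset grid combines the two on the shared attribute,
# yielding Region, Revenue, and Profit in a single view.
grid = [{"region": region,
         "revenue": dataset1[region]["revenue"],
         "profit": dataset2[region]["profit"]}
        for region in sorted(dataset1)]

print(grid[0])  # {'region': 'Northeast', 'revenue': 500, 'profit': 90}
```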


Levels at Which You Can Set This

Project only

Evaluation Ordering
Evaluation Ordering is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

An evaluation order is the order in which the MicroStrategy Analytical Engine performs different kinds of calculations during the data population stage. The Evaluation Ordering property determines the order in which calculations are resolved. MicroStrategy objects that are included in the evaluation order include consolidations, compound smart metrics, report limits, subtotals, derived metrics, and derived elements. Some result data can differ depending on the evaluation order of these objects.

l 6.x order - Calculate derived metric/smart compound metric before derived elements/consolidation and all subtotals as smart: This option is used primarily to support backward compatibility. It is recommended in most scenarios to update your project to use the 9.x evaluation order described below.

l 7.x order - Calculate derived metric/smart compound metric before derived elements/consolidation and all subtotals as non-smart: This option allows you to modify the order of certain calculations relative to the default 9.x order. Additionally, all subtotals including the total subtotal are not calculated as smart subtotals. Smart subtotals are commonly used to calculate subtotals that provide ratios or percentages.

l 9.x order - Calculate derived elements/consolidation before derived metric/smart compound metric, "total" subtotal as smart and other subtotals as non-smart: This default option is recommended in most scenarios. For example, calculating the total subtotal as a smart subtotal allows it to calculate ratios and percentages accurately in most cases. The order of the other calculations also supports the most common data analysis requirements. A common case that can require a different evaluation order than the default 9.x order is the calculation and display of ratio and percentage values. If your report does not display values as expected, select the other evaluation orders for your report and re-execute the report to view the new results.

To review the evaluation order of a report, in Developer, view the report in SQL View. In the SQL View, the section listed as Analytical engine calculation steps describes the order in which the various report objects are evaluated. To change the evaluation order for a report using Developer, on the Report Editor, from the Data menu, select Report Data Options. Expand the Calculations category, and select Evaluation Order. Clear the Use default evaluation order check box to define your own evaluation order.

See the Advanced Reporting Help for examples of how you can modify the
evaluation order of objects in a project.
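The smart versus non-smart subtotal distinction mentioned above can be sketched with a hypothetical compound ratio metric (the metric name and numbers are invented):

```python
# A hypothetical compound metric Profit Margin = Profit / Revenue,
# with invented row values.
rows = [{"profit": 20, "revenue": 100}, {"profit": 30, "revenue": 300}]

# Non-smart total: sum the per-row ratios (rarely the number users expect).
non_smart_total = sum(r["profit"] / r["revenue"] for r in rows)

# Smart total: total the components first, then apply the formula.
smart_total = sum(r["profit"] for r in rows) / sum(r["revenue"] for r in rows)

print(round(non_smart_total, 3))  # 0.3   (0.2 + 0.1)
print(round(smart_total, 3))      # 0.125 (50 / 400)
```

The smart result is the overall margin across all rows, which is why the 9.x order computes the "total" subtotal as smart.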

Levels at Which You Can Set This

Project, report, and template

Filtering on String Values

The Filtering on String Values VLDB property determines whether filters consider trailing spaces in attribute elements. This can affect the data that is restricted when filtering data. This VLDB property has the following options:

l Do not trim trailing spaces: Attribute elements that include trailing spaces can be returned as separate attribute elements when filtering on the attribute. For example, an attribute has two attribute elements, one with the description information "South" and the other with the description information "South " which has an extra trailing space at the end. By selecting this option, these attribute elements can be returned as separate attribute elements when filtering on the attribute.

l Trim trailing spaces: Attribute elements that include trailing spaces are
not returned as separate attribute elements when filtering on the attribute.
Instead, any trailing spaces are ignored. For example, an attribute has two
attribute elements, one with the description information "South" and the
other with the description information "South " which has an extra trailing
space at the end. By selecting this option, only a single South attribute
element is returned when filtering on the attribute. Since trailing spaces
are commonly viewed as an error in the data, it is recommended that you
use this default Trim trailing spaces option to ignore any trailing spaces.
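The effect of the two options on the "South" example above can be sketched as follows (an illustrative model, not the engine's implementation):

```python
# Two stored element descriptions; the second has a trailing space.
elements = ["South", "South "]

def distinct_elements(trim):
    # With trimming, the two descriptions collapse into a single element;
    # without it, the filter sees them as distinct values.
    return sorted({e.rstrip() if trim else e for e in elements})

print(distinct_elements(trim=False))  # ['South', 'South ']  (two elements)
print(distinct_elements(trim=True))   # ['South']            (one element)
```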

Levels at Which You Can Set This

Project only

Metric Level Determination


The Metric Level Determination VLDB property defines how the level of data
is stored for reports. This level is used to make other determinations for the
report such as the level at which to report metric data. This VLDB property
has the following options:

l Include only lowest-level attributes in metric level (default): The report's level is defined using only the lowest-level attributes available in the report. This option correctly reflects the level of data in the report while also optimizing the amount of resources required to define the level of the report.

For example, a report includes the attributes Year, Month, Category, and
Subcategory. The Year and Month attributes are from the same hierarchy
and Month is the lowest-level attribute from that hierarchy on the report.
Similarly, the Category and Subcategory attributes are from the same
hierarchy and Subcategory is the lowest-level attribute from that hierarchy
on the report. When selecting this option for the Metric Level
Determination VLDB property, the level of the report is defined as Month
and Subcategory. It can be defined in this way because these are the
lowest-level attributes from the hierarchies that are present on the report.

This level can then be used with metrics to determine the level at which
their data must be reported. If the physical schema of your project
matches the expected logical schema, correct metric data is displayed and
the resources required to determine the report level are optimized.

l Include higher-level related attributes in metric level: The report's
level is defined using all attributes available in the report. This option
correctly reflects the level of data in the report, but it can require
additional resources to define the level of the report.

Consider the example used to describe the previous option. If the physical
schema of your project matches the expected logical schema, then
including only the lowest-level attributes displays correct metric data.
However, differences between your physical schema and expected logical
schema can cause unexpected data to be displayed if only the lowest level
attributes are used to define the level of the report.

For example, while the relationship between the Category and
Subcategory attributes is defined as a one-to-many relationship, the data
in your data source reflects a many-to-many relationship. Because of this
mismatch, including only the lowest-level attributes can return unexpected
metric data. By selecting this option for the Metric Level Determination
VLDB property, the additional higher-level attributes are included when
defining the level of the report and can be used to return the metric data as
it exists in the data source. However, while this helps return accurate data
in these types of scenarios, the higher-level attributes require additional
resources to define the level of the report.
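The lowest-level selection described above can be sketched in a few lines of Python. The hierarchy and level assignments below are hypothetical, not read from any MicroStrategy schema; this illustrates the rule, not the product's implementation:

```python
# Hypothetical attribute levels: lower number = lower (more detailed) level.
LEVEL = {"Year": 2, "Month": 1, "Category": 2, "Subcategory": 1}
HIERARCHY = {"Year": "Time", "Month": "Time",
             "Category": "Product", "Subcategory": "Product"}

def lowest_level_attributes(report_attributes):
    """Keep only the lowest-level attribute from each hierarchy."""
    lowest = {}
    for attr in report_attributes:
        h = HIERARCHY[attr]
        if h not in lowest or LEVEL[attr] < LEVEL[lowest[h]]:
            lowest[h] = attr
    return sorted(lowest.values())

print(lowest_level_attributes(["Year", "Month", "Category", "Subcategory"]))
# ['Month', 'Subcategory']
```

With the default option, only Month and Subcategory define the report level; the second option would keep all four attributes.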

Levels at Which You Can Set This

Project and report

Null Checking for Analytical Engine


The Null Checking for Analytical Engine VLDB property determines whether
or not null values are interpreted as zeros when the Analytical Engine
calculates data.

The default option is for aggregation calculations to ignore nulls and for
scalar calculations to treat null values as zero. Any projects that existed
prior to upgrading metadata to MicroStrategy ONE retain their original
VLDB property settings. See the Advanced Reporting Help for more
information on this setting.

Changes made to this VLDB setting can cause differences to appear in your
data output. Metrics using count or average, metrics with dynamic
aggregation set to count or average, as well as thresholds based on such
metrics could be impacted by altered calculation behavior.
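The default behavior can be sketched as follows. This is a plain Python illustration of the rule (aggregations ignore NULLs, scalar calculations treat NULL as zero), not the Analytical Engine itself:

```python
values = [10, None, 30]  # None stands in for a NULL value

# Aggregation calculations ignore NULLs: the average divides by the
# number of non-null values only.
non_null = [v for v in values if v is not None]
avg_ignoring_nulls = sum(non_null) / len(non_null)       # (10 + 30) / 2 = 20.0

# Scalar calculations treat NULL as zero: every row contributes.
scalar_inputs = [0 if v is None else v for v in values]
avg_nulls_as_zero = sum(scalar_inputs) / len(scalar_inputs)  # 40 / 3
```

Changing the property effectively moves metrics from one branch to the other, which is why count- and average-based metrics can shift.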

Levels at Which You Can Set This

Project, report, template, and metric

Remove Missing Units in Documents


If you remove or replace a dataset, controls on the document that contain
data that is no longer available from the dataset are updated and no
longer contain data from the replaced or removed dataset. For a Grid/Graph,
objects that are available in another dataset are updated to contain data
from the other dataset. The Remove Missing Units in Documents property
then determines how any objects that are still missing are displayed in
Grid/Graphs:

l Remove objects not available in the source dataset(s): The missing
objects are not displayed in the Grid/Graph. If the Grid/Graph only
contains missing objects, it is displayed as an empty placeholder.

l Do not remove objects not available in the source dataset(s): The
headers for the missing objects are displayed in the Grid/Graph, without
any data. MicroStrategy recommends that objects missing from datasets
are displayed. This can alert you if objects are removed from a report used
as a dataset.

For example, a document contains two datasets. Dataset 1 has Category,
Region, and the Revenue and Cost metrics. Dataset 2 has Category,
Subcategory, and the Revenue and Profit metrics. A Grid/Graph containing
the objects from Dataset 1 is displayed on the document. A portion of the
Grid/Graph is shown below, in Interactive Mode in MicroStrategy Web:

Dataset 1 is removed from the document. Because Category and Revenue
are available from Dataset 2, they continue to be displayed on the
Grid/Graph. Since Region and Cost are no longer available in any dataset on
the document, they are considered missing objects. Which option is selected
in the Remove Missing Units in Documents property then determines how
any objects that are still missing are displayed in Grid/Graphs, as described
below:

l Remove objects not available in the source dataset(s): The missing
objects are not displayed in the Grid/Graph, as shown below:

l Do not remove objects not available in the source dataset(s):
Headers for the missing objects are displayed in the Grid/Graph, as shown
below:

Regardless of the property setting, a text field that contains a dataset object
(such as an attribute or a metric) will display the object name instead of
values. For example, a text field displays {Region} instead of North, South,
and so on.
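The two display options amount to a simple filtering of the grid's objects. The lists below are hypothetical, based on the Dataset 1/Dataset 2 example above; this is an illustration of the behavior, not document-engine code:

```python
grid_objects = ["Category", "Region", "Revenue", "Cost"]            # from Dataset 1
dataset_objects = {"Category", "Subcategory", "Revenue", "Profit"}  # Dataset 2

# Objects no longer backed by any dataset on the document.
missing = [o for o in grid_objects if o not in dataset_objects]

# "Remove objects not available...": drop the missing units entirely.
removed_view = [o for o in grid_objects if o in dataset_objects]

# "Do not remove objects...": keep every header, with no data behind
# the missing ones.
kept_view = {o: ("data" if o in dataset_objects else None)
             for o in grid_objects}

print(missing)       # ['Region', 'Cost']
print(removed_view)  # ['Category', 'Revenue']
```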

For an example that uses multiple datasets in a single Grid/Graph, see the
Document Creation Help.

Levels at Which You Can Set This

Project and document

Subtotal Dimensionality Aware


MicroStrategy 7i (7.2.x and later) can detect the level of a metric and
subtotal it accordingly. The Subtotal Dimensionality Aware property allows
you to choose between the 7.1 and earlier subtotaling behavior (FALSE) and
the 7.2.x and later subtotaling behavior (TRUE). MicroStrategy recommends
that you set this property to TRUE.

If this property is set to TRUE, and a report contains a metric that is
calculated at a higher level than the report level, the subtotal of the metric is
calculated based on the metric's level. For example, a report at the Quarter
level containing a yearly sales metric shows the yearly sales as the subtotal
instead of summing the rows on the report.

Levels at Which You Can Set This

Project, report, template, and metric

Example

Quarterly Dollar Sales metric is defined as

Sum(Revenue) Dimensionality = Quarter

Yearly Dollar Sales metric is defined as

Sum(Revenue) Dimensionality = Year

Year         Quarter   Quarterly Dollar Sales   Yearly Dollar Sales

2022         1         100                      600

2022         2         200                      600

2022         3         100                      600

2022         4         200                      600

Grand Total            600                      2400 or 600 depending on the
                                                setting (see below)

If Subtotal Dimensionality Aware is Set to FALSE

The quarterly subtotal is calculated as 600, that is, a total of the Quarterly
Dollar Sales values. The yearly subtotal is calculated as 2400, the total of
the Yearly Dollar Sales values. This is how MicroStrategy 7.1 calculates the
subtotal.

If Subtotal Dimensionality Aware is Set to TRUE

The quarterly subtotal is still 600. Intelligence Server is aware of the level of
the Yearly Dollar Sales metric, so rather than adding the column values, it
correctly calculates the Yearly Dollar Sales total as 600.
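The two subtotal behaviors in the example above can be reproduced in a few lines. This is an illustration of the calculation rule, not Intelligence Server code:

```python
# (year, quarter, quarterly_sales, yearly_sales) rows from the example table.
rows = [
    (2022, 1, 100, 600),
    (2022, 2, 200, 600),
    (2022, 3, 100, 600),
    (2022, 4, 200, 600),
]

# FALSE (7.1 behavior): simply sum the Yearly Dollar Sales column.
naive_total = sum(r[3] for r in rows)        # 2400

# TRUE (7.2.x behavior): the engine knows the metric is at the Year
# level, so it takes one value per year instead of summing every row.
per_year = {r[0]: r[3] for r in rows}
level_aware_total = sum(per_year.values())   # 600
```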

Providing Access to Intelligent Cube Data: Dynamic Sourcing


The table below summarizes the Dynamic sourcing VLDB properties.
Additional details about each property, including examples where
necessary, are available by clicking on the links in the table.

l Aggregate Table Validation
  Description: Defines whether dynamic sourcing is enabled or disabled for
  aggregate tables.
  Possible values: Aggregate tables contain the same data as corresponding
  detail tables and the aggregation function is SUM; Aggregate tables
  contain either less data or more data than their corresponding detail
  tables and/or the aggregation function is not SUM
  Default value: Aggregate tables contain the same data as corresponding
  detail tables and the aggregation function is SUM

l Attribute Validation
  Description: Defines whether dynamic sourcing is enabled or disabled for
  attributes.
  Possible values: Attribute columns in fact tables and lookup tables do
  not contain NULLs and all attribute elements in fact tables are present
  in lookup tables; Attribute columns in fact tables or lookup tables may
  contain NULLs and/or some attribute elements in fact tables are not
  present in lookup tables
  Default value: Attribute columns in fact tables and lookup tables do not
  contain NULLs and all attribute elements in fact tables are present in
  lookup tables

l Enable Cube Parse Log in SQL View
  Description: Defines whether the Intelligent Cube Parse log is displayed
  in the SQL View of an Intelligent Cube. This log helps determine which
  reports use dynamic sourcing to connect to the Intelligent Cube.
  Possible values: Disable Cube Parse Log in SQL View; Enable Cube Parse
  Log in SQL View
  Default value: Disable Cube Parse Log in SQL View

l Enable Dynamic Sourcing for Report
  Description: Defines whether dynamic sourcing is enabled or disabled for
  reports.
  Possible values: Disable dynamic sourcing for report; Enable dynamic
  sourcing for report
  Default value: Enable dynamic sourcing for report

l Enable Extended Mismatch Log in SQL View
  Description: Defines whether the extended mismatch log is displayed in
  the SQL View of a report. The extended mismatch log helps determine why
  a metric prevents the use of dynamic sourcing.
  Possible values: Disable Extended Mismatch Log in SQL View; Enable
  Extended Mismatch Log in SQL View
  Default value: Disable Extended Mismatch Log in SQL View

l Enable Mismatch Log in SQL View
  Description: Defines whether the mismatch log is displayed in the SQL
  View of a report. This log helps determine why a report that can use
  dynamic sourcing cannot connect to a specific Intelligent Cube.
  Possible values: Disable Mismatch Log in SQL View; Enable Mismatch Log
  in SQL View
  Default value: Disable Mismatch Log in SQL View

l Enable Report Parse Log in SQL View
  Description: Defines whether the Report Parse log is displayed in the
  SQL View of a report. This log helps determine whether the report can
  use dynamic sourcing to connect to an Intelligent Cube.
  Possible values: Disable Report Parse Log in SQL View; Enable Report
  Parse Log in SQL View
  Default value: Disable Report Parse Log in SQL View

l Metric Validation
  Description: Defines whether dynamic sourcing is enabled or disabled for
  metrics.
  Possible values: Enable dynamic sourcing for metric; Disable dynamic
  sourcing for metric
  Default value: Enable dynamic sourcing for metric

l String Comparison Behavior
  Description: Defines whether dynamic sourcing is enabled or disabled for
  attributes that are used in filter qualifications.
  Possible values: Use case insensitive string comparison with dynamic
  sourcing; Do not allow any string comparison with dynamic sourcing
  Default value: Use case insensitive string comparison with dynamic
  sourcing

Aggregate Table Validation


Aggregate Table Validation is an advanced VLDB property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

Reports that use aggregate tables are available for dynamic sourcing by
default, but there are some data modeling conventions that should be
considered when using dynamic sourcing.

In general, aggregate tables allow accurate data to be returned to reports
from Intelligent Cubes through dynamic sourcing. However, if the aggregate
tables use an aggregation other than Sum, or there is different data between
aggregate tables and other tables in the data warehouse, this can cause
aggregate tables to return incorrect data when dynamic sourcing is used. An
example of an aggregate table not containing the same data is if an
aggregate table includes data for years 2006, 2007, and 2008 but the lookup
table for Year only includes data for 2007 and 2008.

You can enable and disable dynamic sourcing for aggregate tables by
modifying the Aggregate Table Validation VLDB property. This VLDB
property has the following options:

l Aggregate tables contain the same data as corresponding detail
tables and the aggregation function is SUM (default): This is the
default option for aggregate tables, which enables aggregate tables for
dynamic sourcing.

l Aggregate tables contain either less data or more data than their
corresponding detail tables and/or the aggregation function is not
SUM: This option disables dynamic sourcing for aggregate tables. This
setting should be used if your aggregate tables are not modeled to support
dynamic sourcing. The use of an aggregation function other than Sum or
the mismatch of data in your aggregate tables with the rest of your data
warehouse can cause incorrect data to be returned to reports from
Intelligent Cubes through dynamic sourcing.
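A consistency check of the kind this option describes might look like the sketch below. The detail and aggregate rows are hypothetical, and MicroStrategy does not run such a check for you; it only trusts or distrusts the tables based on the option you choose:

```python
# Hypothetical detail rows: (year, revenue).
detail = [(2006, 50), (2007, 120), (2007, 80), (2008, 150)]
# Hypothetical yearly aggregate table, expected to hold SUM per year.
aggregate = {2006: 50, 2007: 200, 2008: 150}

def aggregate_matches_detail(detail_rows, aggregate_table):
    """True when the aggregate table holds exactly SUM of the detail rows."""
    expected = {}
    for year, revenue in detail_rows:
        expected[year] = expected.get(year, 0) + revenue
    return expected == aggregate_table

print(aggregate_matches_detail(detail, aggregate))  # True
```

If this check fails for your tables (a different aggregation function, or years present on one side only), the second option is the safe choice.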

You can disable dynamic sourcing individually for reports that use aggregate
tables or you can disable dynamic sourcing for all reports that use aggregate
tables within a project. While the definition of the VLDB property at the
project level defines a default for all reports in the project, any modifications
at the report level take precedence over the project level definition. For
information on defining a project-wide dynamic sourcing strategy, see the In-
memory Analytics Help.
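The precedence rule (a report-level definition overrides the project-level default) can be summarized as a one-liner; this is a sketch of the rule, not product code:

```python
def effective_setting(project_default, report_override=None):
    """Report-level value, when set, takes precedence over the project default."""
    return project_default if report_override is None else report_override

print(effective_setting("enabled"))              # 'enabled'
print(effective_setting("enabled", "disabled"))  # 'disabled'
```

The same resolution applies to the other dynamic sourcing properties described below, with attribute- or metric-level overrides in place of report-level ones.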

Levels at Which You Can Set This

Project, report, and template

Attribute Validation
Attribute Validation is an advanced VLDB property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Attributes are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.

In general, if attributes use outer joins, accurate data can be returned to
reports from Intelligent Cubes through dynamic sourcing. However, if
attributes use inner joins, which is a more common join type, you should
verify that the attribute data can be correctly represented through dynamic
sourcing.

Two scenarios can cause attributes that use inner joins to return incorrect
data when dynamic sourcing is used:

l Attribute information in lookup and fact tables includes NULL values.

l Not all attribute elements in fact tables are also present in lookup tables.
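Why inner joins misbehave in these two scenarios can be seen in a toy join. The fact and lookup rows below are hypothetical, with `None` standing in for a NULL key; this illustrates the data loss, not how the engine joins tables:

```python
# Hypothetical fact rows keyed by region id; "X" is missing from the lookup.
fact = {None: 40, "S": 100, "N": 60, "X": 25}
lookup = {"S": "South", "N": "North"}

# An inner join keeps only keys present on both sides, silently dropping
# the NULL key and the unmatched "X" element.
joined = {lookup[k]: v for k, v in fact.items() if k in lookup}
dropped = sum(fact.values()) - sum(joined.values())

print(joined)   # {'South': 100, 'North': 60}
print(dropped)  # 65
```

An Intelligent Cube built from such a join would be missing the dropped rows, which is why the second option below disables dynamic sourcing for attributes modeled this way.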

You can enable and disable dynamic sourcing for attributes by modifying the
Attribute Validation VLDB property. This VLDB property has the following
options:

l Attribute columns in fact tables and lookup tables do not contain
NULLs and all attribute elements in fact tables are present in lookup
tables (default): This option enables attributes for dynamic sourcing.

l Attribute columns in fact tables and lookup tables may contain
NULLs and/or some attribute elements in fact tables are not present
in lookup tables: This option disables dynamic sourcing for attributes.
This setting should be used if your attribute data is not modeled to support
dynamic sourcing. The inclusion of NULLs in your attribute data or a
mismatch between available attribute data in your fact and lookup tables
can cause incorrect data to be returned to reports from Intelligent Cubes
through dynamic sourcing.

You can disable dynamic sourcing for attributes individually or you can
disable dynamic sourcing for all attributes within a project. While the
definition of the VLDB property at the project level defines a default for all
attributes in the project, any modifications at the attribute level take
precedence over the project level definition. For information on defining a
project-wide dynamic sourcing strategy, see the In-memory Analytics Help.

Levels at Which You Can Set This

Project and attribute

Enable Cube Parse Log in SQL View


Enable Cube Parse Log in SQL View is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Intelligent Cube parse log helps determine which reports use dynamic
sourcing to connect to an Intelligent Cube, as well as why some reports
cannot use dynamic sourcing to connect to an Intelligent Cube. By default,
the Intelligent Cube parse log can only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool. You can also allow this log to be
viewed in the SQL View of an Intelligent Cube.

This VLDB property has the following options:

l Disable Cube Parse Log in SQL View (default): This option allows the
Intelligent Cube parse log to only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool.

l Enable Cube Parse Log in SQL View: Select this option to allow the
Intelligent Cube parse log to be viewed in the SQL View of an Intelligent
Cube. This information can help determine which reports use dynamic
sourcing to connect to the Intelligent Cube.

Levels at Which You Can Set This

Intelligent Cube and project

Enable Dynamic Sourcing for Report


Enable Dynamic Sourcing for Report is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

By default, dynamic sourcing is disabled for reports, and they therefore
retrieve their results by running against the data warehouse. You can enable
dynamic sourcing for a report so that active Intelligent Cubes (that are also
enabled for dynamic sourcing) are checked to see if the report can retrieve
its data from an Intelligent Cube. If an Intelligent Cube fits the data
requirements of a report, the report can be run without executing against the
data warehouse.

You can enable dynamic sourcing for reports by modifying the Enable
Dynamic Sourcing for Report VLDB property. This VLDB property has the
following options:

l Disable dynamic sourcing for report: Dynamic sourcing is disabled for
reports.

l Enable dynamic sourcing for report (default): Dynamic sourcing is
enabled for reports.

You can enable dynamic sourcing for reports individually or you can enable
dynamic sourcing for all reports within a project. While the definition of the
VLDB property at the project level defines a default for all reports in the
project, any modifications at the report level take precedence over the
project level definition. For information on defining a project-wide dynamic
sourcing strategy, see the In-memory Analytics Help.

Levels at Which You Can Set This

Project, report, and template

Enable Extended Mismatch Log in SQL View


Enable Extended Mismatch Log in SQL View is an advanced VLDB property
that is hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The extended mismatch log helps determine why a metric prevents the use
of dynamic sourcing. This information is listed for every metric that
prevents the use of dynamic sourcing. By default, the extended mismatch
log can only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool. You can also allow this log to be viewed in the
SQL View of a report.

The extended mismatch log can increase in size quickly and thus is best
suited for troubleshooting purposes.

This VLDB property has the following options:

l Disable Extended Mismatch Log in SQL View (default): This option
allows the extended mismatch log to only be viewed using the
MicroStrategy Diagnostics and Performance Logging tool.

l Enable Extended Mismatch Log in SQL View: Select this option to allow
the extended mismatch log to be viewed in the SQL View of a report. This
information can help determine why a report that can use dynamic
sourcing cannot connect to a specific Intelligent Cube.

Levels at Which You Can Set This

Report, template, and project

Enable Mismatch Log in SQL View


Enable Mismatch Log in SQL View is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The mismatch log helps determine why a report that can use dynamic
sourcing cannot connect to a specific Intelligent Cube. By default, the
mismatch log can only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool. You can also allow this log to be viewed in the
SQL View of a report.

This VLDB property has the following options:

l Disable Mismatch Log in SQL View (default): This option allows the
mismatch log to only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool.

l Enable Mismatch Log in SQL View: Select this option to allow the
mismatch log to be viewed in the SQL View of a report. This information
can help determine why a report that can use dynamic sourcing cannot
connect to a specific Intelligent Cube.

Levels at Which You Can Set This

Report, template, and project

Enable Report Parse Log in SQL View


Enable Report Parse Log in SQL View is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The report parse log helps determine whether the report can use dynamic
sourcing to connect to an Intelligent Cube. By default, the report parse log
can only be viewed using the MicroStrategy Diagnostics and Performance
Logging tool. You can also allow this log to be viewed in the SQL View of a
report.

This VLDB property has the following options:

l Disable Report Parse Log in SQL View (default): This option allows the
report parse log to only be viewed using the MicroStrategy Diagnostics
and Performance Logging tool.

l Enable Report Parse Log in SQL View: Select this option to allow the
report parse log to be viewed in the SQL View of a report. This information
can help determine whether the report can use dynamic sourcing to
connect to an Intelligent Cube.

Levels at Which You Can Set This

Report, template, and project

Metric Validation
Metric Validation is an advanced VLDB property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Metrics are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.

In general, if metrics use outer joins, accurate data can be returned to
reports from Intelligent Cubes through dynamic sourcing. However, if
metrics use inner joins, which is a more common join type, you should verify
that the metric data can be correctly represented through dynamic sourcing.

If the fact table that stores data for metrics includes NULL values for metric
data, this can cause metrics that use inner joins to return incorrect data
when dynamic sourcing is used. This scenario is uncommon.

You can enable and disable dynamic sourcing for metrics by modifying the
Metric Validation VLDB property. This VLDB property has the following
options:

l Enable dynamic sourcing for metric (default): This option enables
metrics for dynamic sourcing.

l Disable dynamic sourcing for metric: This option disables dynamic
sourcing for metrics. This setting should be used if your metric data is not
modeled to support dynamic sourcing. The inclusion of NULLs in fact
tables that contain your metric data can cause incorrect data to be
returned to reports from Intelligent Cubes through dynamic sourcing.

You can disable dynamic sourcing for metrics individually or you can disable
dynamic sourcing for all metrics within a project. While the definition of the
VLDB property at the project level defines a default for all metrics in the
project, any modifications at the metric level take precedence over the
project level definition. For information on defining a project-wide dynamic
sourcing strategy, see the In-memory Analytics Help.

Levels at Which You Can Set This

Project and metric

String Comparison Behavior


String Comparison Behavior is an advanced VLDB property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

To ensure that dynamic sourcing can return the correct results for attributes,
you must also verify that filtering on attributes achieves the same results
when executed against your database versus an Intelligent Cube.

A filter on attributes can potentially return different results when
executed against the database versus using dynamic sourcing to execute
against an Intelligent Cube. This can occur if your database is
case-sensitive and you create filter qualifications that qualify on the text
data of attribute forms.

If your database is case-sensitive, this is enforced for the filter qualification.


However, filtering for an Intelligent Cube is handled by the Analytical Engine
which does not enforce case sensitivity.

Consider a filter qualification that filters on customers that have a last name
beginning with the letter h. If your database is case-sensitive and uses
uppercase letters for the first letter in a name, a filter qualification using a
lowercase h is likely to return no data. However, this same filter qualification
on the same data stored in an Intelligent Cube returns all customers that
have a last name beginning with the letter h.
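The mismatch described above can be demonstrated with a plain string filter. The customer names are hypothetical; the case-sensitive branch stands in for the database comparison, the insensitive branch for the Analytical Engine:

```python
customers = ["Hall", "Hunt", "Smith", "hobbs"]

def starts_with(names, prefix, case_sensitive):
    """Filter names by prefix, with or without case sensitivity."""
    if case_sensitive:
        return [n for n in names if n.startswith(prefix)]
    return [n for n in names if n.lower().startswith(prefix.lower())]

# Case-sensitive database: a lowercase "h" misses the capitalized names.
print(starts_with(customers, "h", case_sensitive=True))   # ['hobbs']
# Analytical Engine against an Intelligent Cube: case is ignored.
print(starts_with(customers, "h", case_sensitive=False))  # ['Hall', 'Hunt', 'hobbs']
```

The two result sets differ, which is exactly the discrepancy the String Comparison Behavior options let you guard against.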

You can configure this dynamic sourcing behavior for attributes by modifying
the String Comparison Behavior VLDB property. This VLDB property has the
following options:

l Use case insensitive string comparison with dynamic sourcing
(default): When attempting to use dynamic sourcing, this option allows filter
qualifications to qualify on the text data of attribute forms without
enforcing case sensitivity.

This is a good option if your database does not enforce case sensitivity. In
this scenario, dynamic sourcing returns the same results that would be
returned by the filter qualification if the report was executed against the
database.

l Do not allow any string comparison with dynamic sourcing: This
option disables dynamic sourcing for attributes when a filter qualification
is used to qualify on the text data of attribute forms.

This is a good option if your database is case sensitive. In this scenario,
dynamic sourcing could return different results than what would be
returned by the filter qualification if the report was executed against the
database.

You can modify this VLDB property for attributes individually or you can
modify it for all attributes within a project. While the definition of the VLDB
property at the project level defines a default for all attributes in the project,
any modifications at the attribute level take precedence over the project
level definition. For information on defining a project-wide dynamic sourcing
strategy, see the In-memory Analytics Help.

Levels at Which You Can Set This

Project and attribute

Exporting Report Results from MicroStrategy: Export Engine


The table below summarizes the Export Engine VLDB properties. Additional
details about each property, including examples where necessary, are
provided in the sections following the table.

l GUID of Attributes in Profit and Loss Hierarchy (Separated By ':') that
  has Dummy Rows to be Removed
  Description: Lets you identify attributes that include empty elements,
  which can then be ignored when exporting to Microsoft Excel or to a PDF
  file.
  Possible values: A list of attribute ID values, each one separated using
  a colon (:).
  Default value: NULL

GUID of Attributes in Profit and Loss Hierarchy (Separated By ':')
that has Dummy Rows to be Removed

GUID of attributes in profit and loss hierarchy (separated by ':') that has
dummy rows to be removed is an advanced property that is hidden by
default. For instructions on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The GUID of attributes in profit and loss hierarchy (separated by ':') that has
dummy rows to be removed VLDB property lets you identify attributes that
include empty elements, which can then be ignored when exporting to
Microsoft Excel or to a PDF file. This is useful when creating financial line
item attributes as part of supporting a financial reporting solution in
MicroStrategy. For a detailed explanation of how to support financial
reporting in MicroStrategy, along with using this VLDB property to identify
attributes that include empty elements, refer to the Project Design Help.

To identify attributes that include empty elements, type the ID value for each
attribute in the text field for this VLDB property. To determine the ID value
for an attribute object, navigate to an attribute in Developer, right-click the
attribute, and then select Properties. Details about the attribute, including
the ID value are displayed.

If you need to identify multiple attributes as having empty elements,
separate each attribute ID using a colon (:).
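Assembling the property value is just colon-joining the IDs. The GUIDs below are made up for illustration; real values come from the attribute's Properties dialog in Developer:

```python
# Hypothetical attribute GUIDs (not real object IDs).
guids = ["8D679D3711D3E4981000E787EC6DE8A4",
         "8D679D3C11D3E4981000E787EC6DE8A4"]

setting_value = ":".join(guids)     # the value typed into the VLDB property
parsed = setting_value.split(":")   # how a colon-separated list is read back

print(setting_value)
print(parsed == guids)  # True
```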

Levels at Which You Can Set This

Project only

Customizing SQL Queries: Freeform SQL


The table below summarizes the Freeform SQL VLDB properties. Additional
details about each property, including examples where necessary, are
provided in the sections following the table.

l Ignore Empty Result for Freeform SQL
  Description: Provides the flexibility to display or hide warnings when a
  Freeform SQL statement returns an empty result.
  Possible values: Do not turn off warnings for Freeform SQL statements
  with empty results, such as updates; Turn off warnings for Freeform SQL
  statements with empty results, such as updates; Turn off warnings for
  Freeform SQL statements that return multiple result sets with an empty
  first result set and return second result set, such as stored procedures
  Default value: Do not turn off warnings for Freeform SQL statements with
  empty results, such as updates

l XQuery Success Code
  Description: Lets you validate Transaction Services reports that use
  XQuery.
  Possible values: User-defined.
  Default value: false

Ignore Empty Result for Freeform SQL


Ignore Empty Result for Freeform SQL is an advanced property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Ignore Empty Result for Freeform SQL VLDB property provides the
flexibility to display or hide warnings when a Freeform SQL statement
returns an empty result.

Freeform SQL is intended to be used to return results that can be displayed
on a Freeform SQL report. However, Freeform SQL can also be used to
execute SQL statements that create tables, update tables, or perform other
database maintenance tasks. These types of actions do not return any
results and therefore would return a warning when executing a Freeform
SQL report. If you routinely use Freeform SQL for these purposes, you can
hide these warnings since an empty result set is expected.

This VLDB property has the following options:

l Do not turn off warnings for Freeform SQL statements with empty
results, such as updates (default): This option allows warnings to be
displayed when a Freeform SQL statement causes a Freeform SQL report
to return an empty result. This is a good option if you use Freeform SQL to
return and display data with Freeform SQL reports.


l Turn off warnings for Freeform SQL statements with empty results,
such as updates: Select this option to hide all warnings when a Freeform
SQL statement causes a Freeform SQL report to return an empty result.
This is a good option if you commonly use Freeform SQL to execute
various SQL statements that are not expected to return any report results.
This prevents users from seeing a warning every time a SQL statement is
executed using Freeform SQL.

However, be aware that if you also use Freeform SQL to return and display
data with Freeform SQL reports, no warnings are displayed if the report
returns a single empty result.

l Turn off warnings for Freeform SQL statements that return multiple
result sets with an empty first result set and return second result
set, such as stored procedures: Select this option to hide all warnings
when a Freeform SQL report returns an initial empty result, followed by
additional results that include information. Stored procedures can
sometimes have this type of behavior as they can include statements that
do not return any results (such as update statements or create table
statements), followed by statements to return information from the
updated tables. This prevents users from seeing a warning when these
types of stored procedures are executed using Freeform SQL.

If you select this option and a Freeform SQL report returns only a single
empty result, then a warning is still displayed.
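
As an illustration of the third option, a stored procedure of the shape
described above might look like the following sketch. The procedure, table,
and column names are hypothetical, and the exact procedure syntax varies by
database platform:

```sql
-- Hypothetical stored procedure: the first statement produces an empty
-- result, the second returns the data the Freeform SQL report displays.
CREATE PROCEDURE REFRESH_AND_READ_SALES
AS
BEGIN
    -- Empty first result set: an update statement returns no rows
    UPDATE SALES_FACT SET PROCESSED = 1 WHERE PROCESSED = 0;

    -- Second result set: the rows the Freeform SQL report displays
    SELECT REGION_ID, SUM(REVENUE) AS REVENUE
    FROM SALES_FACT
    GROUP BY REGION_ID;
END;
```

With the third option selected, the empty result of the UPDATE statement
does not trigger a warning because the SELECT statement that follows it
returns data.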

Levels at Which You Can Set This

Database instance and report

XQuery Success Code


XQuery Success Code is an advanced property that is hidden by default. For
instructions on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.


The XQuery Success Code VLDB property lets you validate Transaction
Services reports that use XQuery. MicroStrategy Transaction Services and
XQuery allow you to access and update information available in third-party
web services data sources. The steps to create a Transaction Services
report using XQuery are provided in the Advanced Reporting Help.

When Transaction Services and XQuery are used to update data for third-
party web services, sending the data to be updated is considered a
successful transaction. By default, any errors that occur for the third-party
web service during a transaction are not returned to MicroStrategy.

To check for errors, you can include logic in your XQuery syntax to
determine if the transaction successfully updated the data within the third-
party web service. Just after the XQuery table declaration, you can include
the following syntax:

<ErrorCode>{Error_Code}</ErrorCode>
<ErrorMessage>{Error_Message}</ErrorMessage>

In the syntax above:

l Error_Code is a variable that you must define in your XQuery statement
to retrieve the success or error code from the third-party web service, for
the action that attempts the transaction. The logic to return an error code
depends on the third-party web service that you are attempting to perform
the transaction on.

l Error_Message is either a static error message that you supply, or a
variable that you must define in your XQuery statement to retrieve any
resulting error message from the third-party web service.

By including this syntax in your XQuery statement, the XQuery Success


Code VLDB property is used to validate the transaction. The information
returned by the Error_Code variable is compared to the value supplied for
the XQuery Success Code. By default, the XQuery Success Code is defined
as "false", but you can type any valid string. If the Error_Code and XQuery
Success Code are identical, then the content in the Error_Message is not
returned and the transaction is returned as a success. However, if the


Error_Code returns any value other than the XQuery Success Code, the
content for the Error_Message is returned. This lets you validate each
transaction that is sent to the third-party web service.

Levels at Which You Can Set This

Database instance and report

Limiting Report Rows, SQL Size, and SQL Time-Out: Governing
The table below summarizes the Governing VLDB properties. Additional
details about each property, including examples where necessary, are
available by clicking on the links in the table.

Property: Autocommit
Description: Determines whether a commit statement is automatically issued
after each SQL statement for a database connection.
Possible Values: ON, OFF
Default Value: ON

Property: Intermediate Row Limit
Description: The maximum number of rows returned to the server for each
intermediate pass. (0 = unlimited number of rows; -1 = use value from
higher level.)
Possible Values: User-defined
Default Value: -1 (Use value from higher level)

Property: Maximum SQL/MDX Size
Description: Maximum size of SQL string accepted by ODBC driver (bytes).
Possible Values: User-defined
Default Value: 65536

Property: Results Set Row Limit
Description: The maximum number of rows returned to the server for the
final result set. (0 = unlimited number of rows; -1 = use value from higher
level.)
Possible Values: User-defined
Default Value: -1 (Use value from higher level)

Property: SQL Time Out (Per Pass)
Description: Single SQL pass time-out in seconds. (0 = time limit not
enforced by this governing setting.)
Possible Values: User-defined
Default Value: 0 (Time limit not enforced by this governing setting)

Autocommit
The Autocommit VLDB property determines whether a commit statement is
automatically issued after each SQL statement for a database connection.
You have the following options:

l ON: A commit is automatically issued after each SQL statement by the
database connection: By default, a commit is issued automatically after
each SQL statement. This allows you to query a database without having
to manually issue commit statements and other required transaction
control commands.

l OFF: No commit is automatically issued after each SQL statement by the
database connection: Commit statements are not issued automatically
after each SQL statement.

Multiple SQL statements are required for various reporting and analysis
features in MicroStrategy. When multiple SQL statements are used, each
can be viewed as a separate transaction. If your database is being
updated by a separate transaction, ETL process, or other update, this
can cause data inconsistency between SQL statements, since each SQL
statement is executed as a separate transaction. Disabling automatic
commit statements causes all SQL statements to be treated as a single
transaction, which can be used in conjunction with other database
techniques to ensure data consistency when reporting on and analyzing
a database that is being updated. For example, if reporting on an Oracle
database you can use this in conjunction with defining the isolation level
of the SQL statements.


Be aware that if you disable automatic commit statements for each SQL
statement, these transaction control commands must be included for the
report. If you are using Freeform SQL or creating your own SQL
statement for use in MicroStrategy, these can be included directly in
those SQL statements. For reports that use SQL that is automatically
generated by MicroStrategy, you can use the Pre/Post Statement VLDB
properties (see Customizing SQL Statements: Pre/Post Statements, page
1768) to provide the required transaction control commands.
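
For example, with Autocommit set to OFF, a Freeform SQL statement that
modifies data needs to issue its own transaction control commands. A
minimal sketch, with hypothetical table and column names (the exact
commands, such as BEGIN TRANSACTION versus START TRANSACTION, depend on
your database platform):

```sql
-- Explicit transaction control with Autocommit OFF: without the final
-- COMMIT, the update may be rolled back when the connection is released.
BEGIN TRANSACTION;

UPDATE ORDER_FACT
SET DISCOUNT = 0
WHERE ORDER_DATE < '2020-01-01';

COMMIT;
```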

Levels at Which You Can Set This

Project and report

Intermediate Row Limit


The Intermediate Row Limit VLDB property is used to limit the number of
rows of data returned to the server from pure SELECT statements issued
apart from the final pass. Apart from the final pass, pure SELECT
statements are usually executed if there are analytical functions or partition
pre-queries to process. Since the partition pre-queries return only a handful
of rows, the SELECT statements issued for analytical function processing
decide the number of rows set in most cases. If the limit is exceeded, the
report fails with an error message. When it is set to the default, the
Intermediate Row Limit takes the value of the Result Set Row Limit VLDB
property at the report (highest) level.

The table below explains the possible values and their behavior:

Value Behavior

0 No limit on number of rows returned

Number Number of rows returned is limited to the specified number

Levels at Which You Can Set This

Report only


Maximum SQL/MDX Size


The Maximum SQL/MDX Size property specifies the SQL size (in bytes) on a
pass-by-pass basis. If the limit is exceeded, the report execution is
terminated and an error message is returned. The error message usually
mentions that a SQL/MDX string is longer than a corresponding limitation.
The limit you choose should be based on the size of the SQL string accepted
by your ODBC driver.

The table below explains the possible values and their behavior:

Value Behavior

0 No limit on SQL pass size

Number The maximum SQL pass size (in bytes) is limited to the specified number

Default By selecting the Use default inherited value check box, the value is set
to the default for the database type used for the related database instance.
The default size varies depending on the database type.

Increasing the maximum to a large value can cause the report to fail in the
ODBC driver. This is dependent on the database type you are using.

Levels at Which You Can Set This

Database instance only

Results Set Row Limit


The Results Set Row Limit VLDB property is used to limit the number of rows
returned from the final results set SELECT statements issued. This property
is report-specific.

If the report result set exceeds the limit specified in the Result Set Row
Limit, the report execution is terminated.


This property overrides the Number of report result rows setting in the
Project Configuration Editor: Governing Rules category.

When the report contains a custom group, this property is applied to each
element in the group. Therefore, the final result set displayed could be
larger than the predefined setting. For example, if you set the Result Set
Row Limit to 1,000, it means you want only 1,000 rows to be returned. Now
apply this setting to each element in the custom group. If the group has
three elements and each uses the maximum specified in the setting (1,000),
the final report returns 3,000 rows.

The table below explains the possible values and their behavior:

Value Behavior

0 Unlimited number of result rows

-1 Use the default value from a higher level

Number The maximum number of rows

Levels at Which You Can Set This

Report only

SQL Time Out (Per Pass)


The SQL Time Out VLDB property is used to avoid lengthy intermediate
passes. If any pass of SQL runs longer than the set time (in seconds), the
report execution is terminated.

The table below explains the possible values and their behavior:

Value Behavior

0 This governing setting does not impose a time limit on SQL pass execution.

The maximum amount of time (in seconds) a SQL pass can execute is limited to
Number
the specified number.


Levels at Which You Can Set This

Database instance and report

Retrieving Data: Indexing


The table below summarizes the Indexing VLDB properties. Additional
details about each property, including examples where necessary, are
available by clicking on the links in the table.

Property: Allow Index on Metric
Description: Determines whether or not to allow the creation of indexes on
fact or metric columns.
Possible Values:
l Don't allow the creation of indexes on metric columns
l Allow the creation of indexes on metric columns (if the Intermediate
Table Index setting is set to create)
Default Value: Don't allow the creation of indexes on metric columns

Property: Index Post String and Index Qualifier (Index Post String)
Description: Defines the string that is appended at the end of the CREATE
INDEX statement. For example: IN INDEXSPACE
Possible Values: User-defined
Default Value: NULL

Property: Index Prefix
Description: Defines the prefix to use when automatically creating indexes
for intermediate SQL passes. The prefix is added to the beginning of the
CREATE INDEX statement.
Possible Values: User-defined
Default Value: NULL

Property: Index Post String and Index Qualifier (Index Qualifier)
Description: Defines the string to parse in between the CREATE and INDEX
words. For example: CLUSTERED
Possible Values: User-defined
Default Value: NULL

Property: Intermediate Table Index
Description: Determines whether and when to create an index for the
intermediate table.
Possible Values:
l Don't create an index
l Create partitioning key (typically applicable to MPP systems)
l Create partitioning key and secondary index on intermediate table
l Create only secondary index on intermediate table
Default Value: Don't create an index

Property: Max Columns in Column Placeholder
Description: Determines the maximum number of columns that replace the
column wildcard ("!!!") in pre and post statements. 0 = all columns (no
limit).
Possible Values: User-defined
Default Value: 0 (No limit)

Property: Max Columns in Index
Description: Determines the maximum number of columns that can be included
in partition key or index.
Possible Values: User-defined
Default Value: No limit

Property: Primary Index Type
Description: Determines whether a primary key is created instead of a
partitioning key for databases that support both types, such as UDB.
Possible Values:
l Create primary key (where applicable) if the intermediate table index
setting is set to create a primary index
l Create primary index/partitioning key (where applicable) if the
intermediate table index setting is set to create a primary index
Default Value: Create primary key (where applicable) if the intermediate
table index setting is set to create a primary index

Property: Secondary Index Order
Description: Defines whether an index is created before or after inserting
data into a table.
Possible Values:
l Create index after inserting into table
l Create index before inserting into table
Default Value: Create index after inserting into table

Property: Secondary Index Type
Description: Defines what type of index is created for temporary table
column indexing.
Possible Values:
l Create Composite Index for Temporary Table Column Indexing
l Create Individual Indexes for Temporary Table Column Indexing
Default Value: Create Composite Index for Temporary Table Column Indexing

Allow Index on Metric


Allow Index on Metric is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Allow Index on Metric property determines whether or not to use fact or
metric columns in index creation. You can see better performance in
different environments, especially in Teradata, when you add the fact or
metric column in the index. Usually, the indexes are created on attribute
columns; but with this setting, the fact or metric columns are added as well.
All fact or metric columns are added.

Levels at Which You Can Set This

Database instance, report, and template


Example

Do not allow creation of indexes on fact or metric columns (default)

create table ZZT8L005Y1YEA000 (
CATEGORY_ID BYTE,
REGION_ID BYTE,
YEAR_ID SHORT,
WJXBFS1 DOUBLE,
WJXBFS2 DOUBLE)
insert into ZZT8L005Y1YEA000
select a13.[CATEGORY_ID] AS CATEGORY_ID,
a15.[REGION_ID] AS REGION_ID,
a16.[YEAR_ID] AS YEAR_ID,
sum((a11.[QTY_SOLD] * (a11.[UNIT_PRICE] -
a11.[DISCOUNT]))) as WJXBFS1,
sum((a11.[QTY_SOLD] * ((a11.[UNIT_PRICE] -
a11.[DISCOUNT]) - a11.[UNIT_COST]))) as
WJXBFS2
from [ORDER_DETAIL] a11,
[LU_ITEM] a12,
[LU_SUBCATEG] a13,
[LU_EMPLOYEE] a14,
[LU_CALL_CTR] a15,
[LU_DAY] a16
where a11.[ITEM_ID] = a12.[ITEM_ID] and
a12.[SUBCAT_ID] = a13.[SUBCAT_ID] and
a11.[EMP_ID] = a14.[EMP_ID] and
a14.[CALL_CTR_ID] = a15.[CALL_CTR_ID] and
a11.[ORDER_DATE] = a16.[DAY_DATE]
and a15.[REGION_ID] in (1)
group by a13.[CATEGORY_ID],
a15.[REGION_ID],
a16.[YEAR_ID]
create index ZZT8L005Y1YEA000_i on ZZT8L005Y1YEA000
(CATEGORY_ID, REGION_ID, YEAR_ID)

Allow the creation of indexes on fact or metric columns

This example is the same as the example above except that the last line of
code should be replaced with the following:

create index ZZT8L005YAGEA000_i on ZZT8L005YAGEA000
(CATEGORY_ID, REGION_ID, YEAR_ID, WJXBFS1, WJXBFS2)


Index Prefix

This property allows you to define the prefix to add to the beginning of the
CREATE INDEX statement when automatically creating indexes for
intermediate SQL passes.

For example, the index prefix you define appears in the CREATE INDEX
statement as shown below:

create index (index prefix)IDX_TEMP1 (STORE_ID, STORE_DESC)

Levels at Which You Can Set This

Database instance, report, and template

Index Post String and Index Qualifier


The Index Post String and Index Qualifier property can be used to customize
the CREATE INDEX statement. Indexes can be created when the
Intermediate Table Type is set to Permanent Tables, Temporary Tables, and
Views (most platforms do not support indexes on views). These two settings
can be used to specify the type of index to be created and the storage
parameters as provided by the specific database platform. If the Index Post
String and Index Qualifier are set to a certain string, then for all the CREATE
INDEX statements, the Index Post String and Index Qualifier are applied.

The create index syntax pattern is as follows:

l All platforms except Teradata:

create <<Index Qualifier>> index i_[Table Name] on
[Table Name] ([Column List]) <<Index Post String>>

l Teradata:

create <<Index Qualifier>> index i_[Table Name] ([Column
List]) on [Table Name] <<Index Post String>>


Example

Index Post String

The Index Post String setting allows you to add a custom string to the end of
the CREATE INDEX statement.

Index Post String = /* in tablespace1 */

create index IDX_TEMP1 (STORE_ID, STORE_DESC) /* in tablespace1 */
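
Combining the two settings, an Index Qualifier of CLUSTERED and the Index
Post String above would produce a statement of the following shape (the
index and column names are illustrative):

```sql
-- Index Qualifier = CLUSTERED, Index Post String = /* in tablespace1 */
create CLUSTERED index IDX_TEMP1 (STORE_ID, STORE_DESC) /* in tablespace1 */
```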

Levels at Which You Can Set This

Database instance, report, and template

Intermediate Table Index


The Intermediate Table Index property is used to control the primary and
secondary indexes generated for platforms that support them. This property
is for permanent tables and temporary tables, where applicable. In the VLDB
Properties Editor, select an option to view example SQL statements used by
various databases for the selected option:

l Don't create an index (default)

l Create partitioning key (typically applicable to MPP systems)

l Create partitioning key and secondary index on intermediate table

l Create only secondary index on intermediate table

Levels at Which You Can Set This

Database instance, report, and template

Max Columns in Column Placeholder


Max Columns in Column Placeholder is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.


The Max Columns in Column Placeholder property controls the maximum
number of columns that replace the column wildcard ("!!!") in pre and post
statements. This limit applies to both the primary and the secondary
indexes.

The table below explains the possible values and their behavior:

Value Behavior

0 All attribute ID columns go into the index

Number The maximum number of attribute ID columns to use with the wildcard
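
For example, if a table post statement contains the column wildcard and
this property is set to 2, the wildcard expands to at most two attribute ID
columns. The statement, table, and column names below are illustrative:

```sql
-- Post statement as entered, using the column wildcard:
--   create index i_zz on ZZT1 (!!!)
-- Statement issued when Max Columns in Column Placeholder is set to 2:
create index i_zz on ZZT1 (YEAR_ID, MONTH_ID)
```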

Levels at Which You Can Set This

Database instance only

Max Columns in Index


The Max Columns in Index property controls the maximum number of
columns that can be used when creating an index. This limit applies to both
primary and secondary indexes. If the maximum is five columns but there are
10 columns available to index, the first five are selected. However, each
attribute has a "weight" that you can set. When SQL is generated, the
attributes are selected in ascending order of "weight." By combining
Attribute Weights and the Max Columns in Index properties, you can
designate any attribute to be included in the index.

You can define attribute weights in the Project Configuration Editor. Select
the Report definition: SQL generation category, and in the Attribute
weights section, click Modify.

The table below explains the possible values and their behavior:


Value Behavior

0 All attribute ID columns are placed in the index

Number The maximum number of attribute ID columns that are placed in the index
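
For example, with Max Columns in Index set to 3 and five attribute ID
columns available, the generated index keeps only the three attributes with
the lowest weights. The table and column names below are illustrative:

```sql
-- YEAR_ID, QUARTER_ID, and MONTH_ID have the lowest attribute weights,
-- so REGION_ID and STORE_ID are left out of the index.
create index i_ZZT1 on ZZT1 (YEAR_ID, QUARTER_ID, MONTH_ID)
```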

Levels at Which You Can Set This

Database instance, report, and template

Primary Index Type


Primary Index Type is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Primary Index Type property determines the pattern for creating primary
keys and indexes. In the VLDB Properties Editor, select an option to view
example SQL statements used by various databases for the selected option.
The examples also display whether the option is applicable for a given
database type. If you select an option that is not applicable for the database
type that you use, then the other option is used automatically. While this
ensures that the primary index type is correct for your database, you should
select an option that is listed as applicable for the database that you use.

Some databases such as DB2 UDB support both primary index type options.
Use the example SQL statements and your third-party database
documentation to determine the best option for your environment.

Levels at Which You Can Set This

Database instance, report, and template


Secondary Index Order


The Secondary Index Order VLDB property allows you to define whether an
index is created before or after inserting data into a table. This VLDB
property has the following options:

l Create index after inserting into table (default): This option creates the
index after inserting data into a table, which is a good option to support
most database and indexing strategies.

l Create index before inserting into table: This option creates the index
before inserting data into a table, which can improve performance for
some environments, including Sybase IQ. The type of index created can
also help to improve performance in these types of environments, and can
be configured with the Secondary Index Type VLDB property (see
Secondary Index Type, page 1683).

Levels at Which You Can Set This

Database instance, report, and template

Secondary Index Type


The Secondary Index Type VLDB property allows you to define what type of
index is created for temporary table column indexing. This VLDB property
has the following options:

l Create Composite Index for Temporary Table Column Indexing


(default): This option creates composite indexes for temporary table
column indexing. This is a good option to support most database and
indexing strategies.

l Create Individual Indexes for Temporary Table Column Indexing: This


option creates individual indexes for temporary table column indexing.
This can improve performance for some environments, including Sybase
IQ. The order in which the index is created can also help to improve
performance in these types of environments, and can be configured with


the Secondary Index Order VLDB property (see Secondary Index Order,
page 1683).

Levels at Which You Can Set This

Database instance, report, and template

Relating Column Data with SQL: Joins


The table below summarizes the Joins VLDB properties. Additional details
about each property, including examples where necessary, are available by
clicking on the links in the table.

Property: Attribute to Join When Key From Neither Side can be Supported by
the Other Side
Description: Controls whether tables are joined only on the common keys or
on all common columns for each table.
Possible Values:
l Join common key on both sides
l Join common attributes (reduced) on both sides
Default Value: Join common key on both sides

Property: Base Table Join for Template
Description: Controls whether two fact tables are directly joined together.
If you choose Temp Table Join, the Analytical Engine calculates results
independently from each fact table and places those results into two
intermediate tables. These intermediate tables are then joined together.
Possible Values:
l Temp table join
l Fact table join
Default Value: Temp table join

Property: Cartesian Join Evaluation
Description: Allows the MicroStrategy SQL Engine to use a new algorithm for
evaluating whether or not a Cartesian join is necessary.
Possible Values:
l Do not reevaluate cartesian joins
l Reevaluate cartesian joins
Default Value: Do not reevaluate cartesian joins

Property: Cartesian Join Warning
Description: Action that occurs when the Analytical Engine generates a
report that contains a Cartesian join.
Possible Values:
l Execute
l Cancel execution
l Cancel execution only when warehouse table is involved in either side of
cartesian join
l If only one side of cartesian join contains warehouse tables, SQL will be
executed without warning
Default Value: Execute

Property: Downward Outer Join Option
Description: Allows users to choose how to handle metrics which have a
higher level than the template.
Possible Values:
l Do not preserve all the rows for metrics higher than template level
l Preserve all the rows for metrics higher than template level w/o report
filter
l Preserve all the rows for metrics higher than template level with report
filter
l Do not do downward outer join for database that support full outer join
l Do not do downward outer join for database that support full outer join,
and order temp tables in last pass by level
Default Value: Do not preserve all the rows for metrics higher than
template level

Property: DSS Star Join
Description: Controls which lookup tables are included in the join against
the fact table. For a partial star join, the Analytical Engine joins the
lookup tables of all attributes present in either the template or the
filter or metric level, if needed.
Possible Values:
l No star join
l Partial star join
Default Value: No star join

Property: From Clause Order
Description: Determines whether to use the normal FROM clause order as
generated by the Analytical Engine or to switch the order.
Possible Values:
l Normal FROM clause order as generated by the engine
l Move last table in normal FROM clause order to the first
l Move MQ table in normal FROM clause order to the last (for RedBrick)
l Reverse FROM clause order as generated by the engine
Default Value: Normal FROM clause order as generated by the engine

Property: Full Outer Join Support
Description: Indicates whether the database platform supports full outer
joins.
Possible Values:
l No support
l Support
Default Value: No support

Property: Join Type
Description: Type of column join.
Possible Values:
l Join 89
l Join 92
l SQL 89 Inner Join and Cross Join and SQL 92 Outer Join
l SQL 89 Inner Join and SQL 92 Outer Join and Cross Join
Default Value: Join 89

Property: Lookup Table Join Order
Description: Determines how lookup tables are loaded for join operations.
Possible Values:
l Partially based on attribute level (behavior prior to version 8.0.1)
l Fully based on attribute level. Lookup tables for lower level attributes
are joined before those for higher level attributes
Default Value: Partially based on attribute level (behavior prior to
version 8.0.1)

Property: Max Tables in Join
Description: Maximum number of tables to join together.
Possible Values: User-defined
Default Value: No limit

Property: Max Tables in Join Warning
Description: Action that occurs when the Analytical Engine generates a
report that exceeds the maximum number of tables in the join limit.
Possible Values:
l Execute
l Cancel execution
Default Value: Cancel execution

Property: Nested Aggregation Outer Joins
Description: Defines when outer joins are performed on metrics that are
defined with nested aggregation functions.
Possible Values:
l Do not perform outer join on nested aggregation
l Do perform outer join on nested aggregation when all formulas have the
same level
l Do perform downward outer join on nested aggregation when all formulas
can downward outer join to a common lower level
Default Value: Do not perform outer join on nested aggregation

Property: Preserve All Final Pass Result Elements
Description: Perform an outer join to the final result set in the final
pass.
Possible Values:
l Preserve common elements of final pass result table and
lookup/relationship table
l Preserve all final result pass elements
l Preserve all elements of final pass result table with respect to lookup
table but not relationship table
l Do not listen to per report level setting, preserve elements of final
pass according to the setting at attribute level. If this choice is
selected at attribute level, it will be treated as preserve common
elements (that is, choice 1)
Default Value: Preserve common elements of final pass result table and
lookup/relationship table

Property: Preserve All Lookup Table Elements
Description: Perform an outer join to the lookup table in the final pass.
Possible Values:
l Preserve common elements of lookup and final pass result table
l Preserve lookup table elements joined to final pass result table based
on fact table keys
l Preserve lookup table elements joined to final pass result table based
on template attributes without filter
l Preserve lookup table elements joined to final pass result table based
on template attributes with filter
Default Value: Preserve common elements of lookup and final pass result
table

Attribute to Join When Key From Neither Side can be Supported by the Other
Side
The Attribute to Join When Key From Neither Side can be Supported by the
Other Side property is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.


This VLDB property becomes obsolete when you change your Data Engine
version to 2021 or above. See KB484738 for more information.

This VLDB property determines how MicroStrategy joins tables with common
columns. The options for this property are:

l Join common key on both sides (default): Joins on tables only use
columns that are in each table, and are also keys for each table.

l Join common attributes (reduced) on both sides: Joins between tables
use all common attribute columns to perform the join. This functionality
can be helpful in a couple of different scenarios.

l You have two different tables named Table1 and Table2. Both tables share three ID columns for Year, Month, and Date, along with other columns of data. Table1 uses Year, Month, and Date as keys, while Table2 uses only Year and Month as keys. Since the ID column for Date is not a key for Table2, you must set this option so that Date is included in the join along with Year and Month.

l You have a table named Table1 that includes the columns for the
attributes Quarter, Month of Year, and Month. Since Month is a child of
Quarter and Month of Year, its ID column is used as the key for Table1.
There is also a temporary table named TempTable that includes the
columns for the attributes Quarter, Month of Year, and Year, using all
three ID columns as keys of the table. It is not possible to join Table1
and TempTable unless you set this option because they do not share
any common keys. If you set this option, Table1 and TempTable can join
on the common attributes Quarter and Month of Year.
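The second scenario can be sketched in SQL. This is only an illustration of the join shape the engine can generate once the option is enabled; the table names come from the scenario above, while the column names (QUARTER_ID, MONTH_OF_YEAR_ID) are assumed for the sketch:

```sql
-- Hypothetical join generated by "Join common attributes (reduced) on
-- both sides": Table1 and TempTable share no common keys, so the join
-- uses the common attribute columns Quarter and Month of Year instead.
select a11.QUARTER_ID,
       a11.MONTH_OF_YEAR_ID
from Table1 a11
join TempTable a12
  on (a11.QUARTER_ID = a12.QUARTER_ID
 and  a11.MONTH_OF_YEAR_ID = a12.MONTH_OF_YEAR_ID)
```

With the default option, no such SQL can be generated, because the key columns of the two tables have no column in common.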

Levels at Which You Can Set This

Database instance, report, and template

Base Table Join for Template


The Base Table Join for Template is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

When reports contain metrics from different fact tables, or a compound metric made up of data from different fact tables, the Base Table Join for Template property can be used to choose between intermediate table joins and base table joins. The property is mainly performance-related. If the intermediate table join is chosen, the type of intermediate table is governed by the Intermediate Table Type VLDB property (see Intermediate Table Type, page 1915 in the Table Properties section).

Caution must be taken when changing this setting since the results can be
different depending on the types of metrics on the report.

Levels at Which You Can Set This

Database instance, report, and template

Example

Use Temp Table Join (default)

select a11.MARKET_NBR MARKET_NBR,
sum(a11.CLE_SLS_DLR)
CLEARANCESAL
into #ZZTIS00H5D3SP000
from HARI_MARKET_DIVISION a11
group by a11.MARKET_NBR
select a11.MARKET_NBR MARKET_NBR,
sum(a11.COST_AMT)
COSTAMOUNT
into #ZZTIS00H5D3SP001
from HARI_COST_MARKET_DIV a11
group by a11.MARKET_NBR
select pa1.MARKET_NBR MARKET_NBR,
a11.MARKET_DESC MARKET_DESC,
pa1.CLEARANCESAL WJXBFS1,
pa2.COSTAMOUNT WJXBFS2
from #ZZTIS00H5D3SP000 pa1
left outer join #ZZTIS00H5D3SP001 pa2
on (pa1.MARKET_NBR = pa2.MARKET_NBR)
left outer join HARI_LOOKUP_MARKET a11

on (pa1.MARKET_NBR = a11.MARKET_NBR)

Use Fact Table Join

select a11.MARKET_NBR MARKET_NBR,
max(a13.MARKET_DESC) MARKET_DESC,
sum(a12.CLE_SLS_DLR) CLEARANCESAL,
sum(a11.COST_AMT) COSTAMOUNT
from HARI_COST_MARKET_DIV a11
join HARI_MARKET_DIVISION a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT
and a11.DIVISION_NBR = a12.DIVISION_NBR
and a11.MARKET_NBR = a12.MARKET_NBR)
join HARI_LOOKUP_MARKET a13
on (a11.MARKET_NBR = a13.MARKET_NBR)
group by a11.MARKET_NBR

Cartesian Join Evaluation


Cartesian Join Evaluation is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

This property allows the MicroStrategy SQL Engine to use a new algorithm
for evaluating whether or not a Cartesian join is necessary. The new
algorithm can sometimes avoid a Cartesian join when the old algorithm
cannot. For backward compatibility, the default is the old algorithm. If you
see Cartesian joins that appear to be avoidable, use this property to
determine whether the engine's new algorithm avoids the Cartesian join.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Do Not Reevaluate Cartesian Joins (default)

select a12.ATTR1_ID ATTR1_ID,

max(a12.ATTR1_DESC) ATTR1_DESC,
a13.ATTR2_ID ATTR2_ID,
max(a13.ATTR2_DESC) ATTR2_DESC,
count(a11.FACT_ID) METRIC
from FACTTABLE a11
cross join LU_TABLE1 a12
join LU_TABLE2 a13
on (a11.ATTR3_ID = a13.ATTR3_ID and
a12.ATTR1_ID = a13.ATTR1_CD)
group by a12.ATTR1_ID,
a13.ATTR2_ID

Reevaluate the Cartesian Joins

select a12.ATTR1_ID ATTR1_ID,
max(a12.ATTR1_DESC) ATTR1_DESC,
a13.ATTR2_ID ATTR2_ID,
max(a13.ATTR2_DESC) ATTR2_DESC,
count(a11.FACT_ID) METRIC
from FACTTABLE a11
join LU_TABLE2 a13
on (a11.ATTR3_ID = a13.ATTR3_ID)
join LU_TABLE1 a12
on (a12.ATTR1_ID = a13.ATTR1_CD)
group by a12.ATTR1_ID,
a13.ATTR2_ID

Cartesian Join Warning


Cartesian joins are usually costly to perform. However, a Cartesian join of
two warehouse tables is much more costly than a Cartesian join of two
intermediate tables.

l Execute (default): When any Cartesian join is encountered, execution continues without warning.

l Cancel execution: When a report contains any Cartesian join, execution is canceled.

l Cancel execution only when warehouse table is involved in either side of Cartesian join: The execution is canceled only when a warehouse table is involved in a Cartesian join. In other words, the Cartesian join is allowed when all tables involved in the join are intermediate tables.

l If only one side of Cartesian join contains warehouse tables, SQL will be executed without warning: When all tables involved in the Cartesian join are intermediate tables, the SQL is executed without warning. This option also allows a Cartesian join if a warehouse table is on only one side of the join, and cancels it if both sides contain warehouse tables.

l In the rare situation when a warehouse table is Cartesian-joined to an intermediate table, the execution is usually canceled. However, there may be times when you want to allow this to execute. In this case, you can choose the option If only one side of Cartesian join contains warehouse tables, SQL will be executed without warning. If this option is selected, the execution is canceled only when warehouse tables are involved in both sides of the Cartesian join.

l Some Cartesian joins may not be a direct table-to-table join. If one join "Cartesian joins" to another join, and one of the joins contains a warehouse table (not an intermediate table), then the execution is either canceled or allowed depending on the option selected. For example, if (TT_A join TT_B) Cartesian join (TT_C join WH_D), the following occurs based on the settings:

l If the setting Cancel execution only when warehouse table is involved in Cartesian join is selected, execution is canceled. In the above example, execution is canceled because a warehouse table is used, even though TT_A, TT_B, and TT_C are all intermediate tables.

l If the setting If only one side of Cartesian... is selected, SQL runs without warning. In the above example, execution continues because a warehouse table (WH_D) is used on only one side of the join.
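The (TT_A join TT_B) Cartesian join (TT_C join WH_D) case might surface in generated SQL roughly as follows. The table and column names are the placeholders from the example, not real warehouse objects:

```sql
-- TT_A, TT_B, TT_C are intermediate (temporary) tables; WH_D is a
-- warehouse table. The cross join is the Cartesian join being evaluated.
select a.ID1, d.ID2
from TT_A a
join TT_B b
  on (a.ID1 = b.ID1)
cross join TT_C c
join WH_D d
  on (c.ID2 = d.ID2)
```

Because the warehouse table WH_D sits on only one side of the Cartesian join, the Cancel execution only when warehouse table is involved... setting cancels this SQL, while the If only one side of Cartesian... setting lets it run.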

Levels at Which You Can Set This

Database instance, report, and template

Downward Outer Join Option


Downward Outer Join Option is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

To understand Downward Outer Join, consider the following report that contains the attribute Store and two metrics, Sales Per Store (M1) and Inventory Per Region (M2). The attribute Region is a parent of Store. Both M1 and M2 are set to Outer Join.

Store  Sales Per Store (M1)  Inventory Per Region (M2)

Traditionally, the outer join flag on M2 is ignored, because M2 (at the Region level) is higher than the report level of Store: since the report only shows Store, all of the regions for M2 cannot be preserved directly. However, rows can still be preserved for a metric at a higher level than the report. To do that, a downward join pass is needed to find all stores that belong to the regions in M2, so that a union is formed between these stores and the stores in M1.
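The downward join pass described above can be sketched as follows. The table and column names (#M2_REGION, LOOKUP_STORE, and so on) are assumed for illustration and are not actual engine output:

```sql
-- Expand the Region-level metric M2 back down to Store by joining
-- through the store lookup table, which relates Store to Region.
select s.STORE_ID,
       pa2.INVENTORY
from #M2_REGION pa2
join LOOKUP_STORE s
  on (pa2.REGION_ID = s.REGION_ID)
-- the resulting stores are then unioned with the stores from M1's pass
```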

When performing a downward join, another issue arises. Even though all the
stores that belong to the region in M2 can be found, these stores may not be
those from which M2 is calculated. If a report filters on a subset of stores,
then M2 (if it is a filtered metric) is calculated only from those stores, and
aggregated to regions. When a downward join is done, either all the stores
that belong to the regions in M2 are included or only those stores that
belong to the regions in M2 and in the report filter. Hence, this property has
three options.

Levels at Which You Can Set This

Database instance, report, and template

Example

Using the above example and applying a filter for Atlanta and Charlotte, the default Do not preserve all the rows for metrics higher than template level option returns the following results. Charlotte does not appear because it has no sales data in the fact table: the outer join flag on metrics higher than template level is ignored.

Store  Sales Per Store (M1)  Inventory Per Region (M2)

Atlanta 100 300

Using Preserve all the rows for metrics higher than template level
without report filter returns the results shown below. Now Charlotte
appears because the outer join is used, and it has an inventory, but
Washington appears as well because it is in the Region, and the filter is not
applied.

Store  Sales Per Store (M1)  Inventory Per Region (M2)

Atlanta 100 300

Charlotte 300

Washington 300

Using Preserve all the rows for metrics higher than template level with
report filter produces the following results. Washington is filtered out but
Charlotte still appears because of the outer join.

Store  Sales Per Store (M1)  Inventory Per Region (M2)

Atlanta 100 300

Charlotte 300

For backward compatibility, the default is to ignore the outer join flag for
metrics higher than template level. This is the SQL Engine behavior for
MicroStrategy 6.x or lower, as well as for MicroStrategy 7.0 and 7.1.

DSS Star Join


DSS Star Join is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The DSS Star Join property specifies whether a partial star join is performed. A partial star join means that the lookup table of a column is joined only if that column is in the SELECT clause or involved in a qualification in the WHERE clause of the SQL. In certain databases, for example, RedBrick and Teradata, partial star joins can improve SQL performance if certain types of indexes are maintained in the data warehouse. Note that the lookup table joined in a partial star join is not necessarily the same as the lookup table defined in the attribute form editor: any table that acts as a lookup table rather than a fact table in the SQL and contains the column is considered a feasible lookup table.

Levels at Which You Can Set This

Database instance, report, and template

Examples

No Star Join (default)

select distinct a11.PBTNAME PBTNAME
from STORE_ITEM_PTMAP a11
where a11.YEAR_ID in (1994)
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a13.ITEM_DESC ITEM_DESC,
a13.CLASS_DESC CLASS_DESC,
a11.STORE_NBR STORE_NBR,
a14.STORE_DESC STORE_DESC,
sum(a11.REG_SLS_DLR) WJXBFS1
from STORE_ITEM_94 a11,
LOOKUP_DAY a12,
LOOKUP_ITEM a13,
LOOKUP_STORE a14
where a11.CUR_TRN_DT = a12.CUR_TRN_DT and
a11.CLASS_NBR = a13.CLASS_NBR and
a11.ITEM_NBR = a13.ITEM_NBR and
a11.STORE_NBR = a14.STORE_NBR
and a12.YEAR_ID in (1994)
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a13.ITEM_DESC,
a13.CLASS_DESC,
a11.STORE_NBR,
a14.STORE_DESC

Partial Star Join

select distinct a11.PBTNAME PBTNAME
from STORE_ITEM_PTMAP a11,
LOOKUP_YEAR a12
where a11.YEAR_ID = a12.YEAR_ID
and a11.YEAR_ID in (1994)
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a13.ITEM_DESC ITEM_DESC,
a13.CLASS_DESC CLASS_DESC,
a11.STORE_NBR STORE_NBR,
a14.STORE_DESC STORE_DESC,
sum(a11.REG_SLS_DLR) WJXBFS1
from STORE_ITEM_94 a11,
LOOKUP_DAY a12,
LOOKUP_ITEM a13,
LOOKUP_STORE a14
where a11.CUR_TRN_DT = a12.CUR_TRN_DT and
a11.CLASS_NBR = a13.CLASS_NBR and
a11.ITEM_NBR = a13.ITEM_NBR and
a11.STORE_NBR = a14.STORE_NBR
and a12.YEAR_ID in (1994)
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a13.ITEM_DESC,
a13.CLASS_DESC,
a11.STORE_NBR,

a14.STORE_DESC

From Clause Order


Some database platforms, such as Oracle and RedBrick, perform better depending on the order of the tables in the FROM clause. The From Clause Order property alters the order in which tables appear in the FROM clause. The MicroStrategy SQL Engine normally puts the fact table first in the FROM clause. When the property is set to switch the FROM clause order, the fact table is moved to the second position in the clause. If there are two fact tables in the FROM clause, the order of the two tables is switched.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Normal FROM clause order as generated by the engine

select a12.CUSTOMER_ID CUSTOMER_ID,
sum(a11.ORDER_AMT) WJXBFS1
from ORDER_FACT a11
join LU_ORDER a12
on (a11.ORDER_ID = a12.ORDER_ID)
group by a12.CUSTOMER_ID

Switch FROM clause order as generated by the engine

select a12.CUSTOMER_ID CUSTOMER_ID,
sum(a11.ORDER_AMT) WJXBFS1
from LU_ORDER a12
join ORDER_FACT a11
on (a11.ORDER_ID = a12.ORDER_ID)
group by a12.CUSTOMER_ID

Move MQ Table in normal FROM clause order to the last (for RedBrick)

This setting is added primarily for RedBrick users. The default order of table
joins is as follows:

1. Join the fact tables together.

2. Join the metric qualification table.

3. Join the relationship table.

4. Join the lookup tables if needed.

This option changes the order to the following:

1. Join the fact tables together.

2. Join the relationship table.

3. Join the lookup tables.

4. Join the metric qualification table.

Full Outer Join Support


Full Outer Join Support is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Full Outer Join Support property specifies whether the database
platform supports full outer join syntax:

l No support (default): Full outer joins are not supported or processed to return results. This can help to prevent costly outer join queries and also avoids errors for databases that do not support full outer joins. Additionally, if your database does not support the COALESCE function, you should set this property to No support.

l Support: Full outer joins are attempted when required by your report or
dashboard actions. By selecting this option, the Join Type VLDB property
is assumed to be Join 92 and any other setting in Join Type is ignored.
Additionally, the COALESCE function can be included in the SQL query.

Since full outer joins can require a lot of database and Intelligence
Server resources, and full outer joins are not supported for all databases,
it is recommended to enable support for individual reports first. If your
results are returned successfully and full outer joins are used often for
your report or dashboard environment, you can consider enabling support
for the entire database. However, enabling full outer join support for
specific reports is recommended if full outer joins are only used for a
small to moderate amount of reporting needs. Creating a template with
full outer join support enabled can save report developers time when
requiring full outer joins.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Full Outer Join Not Supported (default)

select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MJMD000
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199411
group by a12.YEAR_ID
select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MJMD001
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199311
group by a12.YEAR_ID
select pa1.YEAR_ID YEAR_ID
into #ZZTIS00H5MJOJ002
from #ZZTIS00H5MJMD000 pa1
union
select pa2.YEAR_ID YEAR_ID
from #ZZTIS00H5MJMD001 pa2
select distinct pa3.YEAR_ID YEAR_ID,
a11.YEAR_DESC YEAR_DESC,
pa1.TOTALSALESCO TOTALSALESCO,
pa2.TOTALSALESCO TOTALSALESCO1
from #ZZTIS00H5MJOJ002 pa3

left outer join #ZZTIS00H5MJMD000 pa1
on (pa3.YEAR_ID = pa1.YEAR_ID)
left outer join #ZZTIS00H5MJMD001 pa2
on (pa3.YEAR_ID = pa2.YEAR_ID)
left outer join HARI_LOOKUP_YEAR a11
on (pa3.YEAR_ID = a11.YEAR_ID)

Full Outer Join Supported

select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MKMD000
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199411
group by a12.YEAR_ID
select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MKMD001
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199311
group by a12.YEAR_ID
select distinct coalesce(pa1.YEAR_ID,
pa2.YEAR_ID) YEAR_ID,
a11.YEAR_DESC YEAR_DESC,
pa1.TOTALSALESCO TOTALSALESCO,
pa2.TOTALSALESCO TOTALSALESCO1
from #ZZTIS00H5MKMD000 pa1
full outer join #ZZTIS00H5MKMD001 pa2
on (pa1.YEAR_ID = pa2.YEAR_ID)
left outer join HARI_LOOKUP_YEAR a11
on (coalesce(pa1.YEAR_ID, pa2.YEAR_ID) = a11.YEAR_ID)

Join Type
The Join Type property determines which ANSI join syntax pattern to use.
Some databases, such as Oracle, do not support the ANSI 92 standard yet.
Some databases, such as DB2, support both Join 89 and Join 92. Other
databases, such as some versions of Teradata, have a mix of the join
standards and therefore need their own setting.

MicroStrategy uses different defaults for the join type based on the database
you are using. This is to support the most common scenarios for your

databases. When selecting a different join type than the default, it is recommended to test this with a report rather than the entire database. By using this strategy you can determine whether the join type functions correctly for your database while also providing the required performance.

If the Full Outer Join Support VLDB property (see Full Outer Join Support, page 1700) is set to Support, this property is ignored and the Join 92 standard is used.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Join 89 (default)

select a22.STORE_NBR STORE_NBR,
max(a22.STORE_DESC) STORE_DESC,
a21.CUR_TRN_DT CUR_TRN_DT,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21,
LOOKUP_STORE a22
where a21.STORE_NBR = a22.STORE_NBR
group by a22.STORE_NBR,
a21.CUR_TRN_DT

Join 92

select a21.CUR_TRN_DT CUR_TRN_DT,
a22.STORE_NBR STORE_NBR,
max(a22.STORE_DESC) STORE_DESC,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21
join LOOKUP_STORE a22
on (a21.STORE_NBR = a22.STORE_NBR)
group by a21.CUR_TRN_DT,
a22.STORE_NBR

SQL 89 Inner Join and SQL 92 Outer Join

create table ZZOL00 as
select a23.STORE_NBR STORE_NBR,
a23.MARKET_NBR MARKET_NBR,

a22.DEPARTMENT_NBR DEPARTMENT_NBR,
a21.CUR_TRN_DT CUR_TRN_DT
from LOOKUP_DAY a21,
LOOKUP_DEPARTMENT a22,
LOOKUP_STORE a23
select a21.MARKET_NBR MARKET_NBR,
max(a24.MARKET_DESC) MARKET_DESC,
sum((a22.COST_AMT * a23.TOT_SLS_DLR)) SUMTSC
from ZZOL00 a21
left outer join COST_STORE_DEP a22
on (a21.DEPARTMENT_NBR = a22.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a22.CUR_TRN_DT and
a21.STORE_NBR = a22.STORE_NBR)
left outer join STORE_DEPARTMENT a23
on (a21.STORE_NBR = a23.STORE_NBR and
a21.DEPARTMENT_NBR = a23.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a23.CUR_TRN_DT),
LOOKUP_MARKET a24
where a21.MARKET_NBR = a24.MARKET_NBR
group by a21.MARKET_NBR

SQL 89 Inner Join and SQL 92 Outer & Cross

create table ZZOL00 as
select a23.STORE_NBR STORE_NBR,
a23.MARKET_NBR MARKET_NBR,
a22.DEPARTMENT_NBR DEPARTMENT_NBR,
a21.CUR_TRN_DT CUR_TRN_DT
from LOOKUP_DAY a21
cross join LOOKUP_DEPARTMENT a22
cross join LOOKUP_STORE a23
select a21.MARKET_NBR MARKET_NBR,
max(a24.MARKET_DESC) MARKET_DESC,
sum((a22.COST_AMT * a23.TOT_SLS_DLR)) SUMTSC
from ZZOL00 a21
left outer join COST_STORE_DEP a22
on (a21.DEPARTMENT_NBR = a22.DEPARTMENT_NBR
and
a21.CUR_TRN_DT = a22.CUR_TRN_DT and
a21.STORE_NBR = a22.STORE_NBR)
left outer join STORE_DEPARTMENT a23
on (a21.STORE_NBR = a23.STORE_NBR and
a21.DEPARTMENT_NBR = a23.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a23.CUR_TRN_DT),
LOOKUP_MARKET a24
where a21.MARKET_NBR = a24.MARKET_NBR
group by a21.MARKET_NBR

Lookup Table Join Order


Lookup Table Join Order is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

This property determines the order in which lookup tables are joined. The options are:

l Partially based on attribute level (behavior prior to version 8.0.1) (default)

l Fully based on attribute level. Lookup tables for lower level attributes are joined before those for higher level attributes.

If you select the first option, lookup tables are joined in alphabetic order.

If you select the second option, lookup tables are joined based on attribute levels, and the lowest level attribute is joined first.
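As an illustration, for a report joining a fact table to the lookup tables of Region (higher level) and Store (lower level), the two options order the joins roughly as follows. These FROM-clause fragments use assumed table names, not actual engine output:

```sql
-- Option 1 (default): lookup tables joined in alphabetic order
from FACT_SALES a11
join LU_REGION a12 on (a11.REGION_ID = a12.REGION_ID)
join LU_STORE a13 on (a11.STORE_ID = a13.STORE_ID)

-- Option 2: lowest level attribute (Store) joined first
from FACT_SALES a11
join LU_STORE a13 on (a11.STORE_ID = a13.STORE_ID)
join LU_REGION a12 on (a11.REGION_ID = a12.REGION_ID)
```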

Levels at Which You Can Set This

Report, template, and project

Max Tables in Join


Max Tables in Join is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Max Tables in Join property works together with the Max Tables in Join
Warning property. It specifies the maximum number of tables in a join. If the
maximum number of tables in a join (specified by the Max Tables In Join
property) is exceeded, then the Max Tables in Join Warning property
decides the course of action.

The table below explains the possible values and their behavior:

Value Behavior

0 No limit on the number of tables in a join

Number The maximum number of tables in a join is set to the number specified

Levels at Which You Can Set This

Database instance only

Max Tables in Join Warning


Max Tables in Join Warning is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Max Tables in Join Warning property works in conjunction with the Max
Tables in Join property. If the maximum number of tables in a join (specified
by the Max Tables in Join property) is exceeded, then this property controls
the action taken. The options are to either continue or cancel the execution.

Levels at Which You Can Set This

Database instance, report, and template

Nested Aggregation Outer Joins


The Nested Aggregation Outer Joins VLDB property allows you to define when outer joins are performed on metrics that are defined with nested aggregation functions. A nested aggregation function is one aggregation function included within another aggregation function. For example, Sum(Count(Expression)) uses nested aggregation because the Count aggregation is calculated within the Sum aggregation.

These types of metrics can experience unexpected behavior when attempting to use outer joins. This VLDB property provides the following
options to control the outer join behavior for metrics that use nested
aggregation:

l Do not perform outer join on nested aggregation (default): Outer joins are not used for metrics that use nested aggregation, even if the metric is defined to use an outer join. This option reflects the behavior of all pre-9.0 MicroStrategy releases.

l Do perform outer join on nested aggregation when all formulas have the same level: If all the inner metrics have the same level, which is lower than the report level, and the formula join type for the outer metric is set to outer, then an outer join is performed on the inner metrics.

l Do perform downward outer join on nested aggregation when all formulas can downward outer join to a common lower level: Regardless of whether the inner metrics have the same level, if more than one inner metric has a level which is the child of the levels of the other inner metrics, and the formula join type for the outer metric is set to outer, then a downward outer join is performed on the relevant inner metrics. The behavior of the downward outer join follows the Downward Outer Join Option VLDB property (see Downward Outer Join Option, page 1695).
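For a metric such as Sum(Count(Expression)), the second option can yield a multi-pass pattern along these lines. The table names and the temporary table are assumed for the sketch, not exact engine output:

```sql
-- Pass 1: inner aggregation (Count) at the lower (Store) level
select a11.REGION_ID,
       a11.STORE_ID,
       count(a11.ORDER_ID) C1
into #PASS1
from ORDER_FACT a11
group by a11.REGION_ID, a11.STORE_ID

-- Pass 2: outer aggregation (Sum), outer joined to the lookup table
-- so that rows without counted facts are preserved
select a12.REGION_ID,
       sum(pa1.C1) WJXBFS1
from LU_REGION a12
left outer join #PASS1 pa1
  on (a12.REGION_ID = pa1.REGION_ID)
group by a12.REGION_ID
```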

Levels at Which You Can Set This

Database instance, report, and template

Preserving Data Using Outer Joins


For the next two properties, consider the following simple example data.

Store Table (lookup)

Store ID Store Name

1 East

2 Central

3 South

6 North

Fact Table

Store ID Year Dollar Sales

1 2002 1000

2 2002 2000

3 2002 5000

1 2003 4000

2 2003 6000

3 2003 7000

4 2003 3000

5 2003 1500

The Fact table has data for Store IDs 4 and 5, but the Store table does not
have any entry for these two stores. On the other hand, notice that the North
Store does not have any entries in the Fact table. This data is used to show
examples of how the next two properties work.

Preserve All Final Pass Result Elements


Preserve All Final Pass Result Elements is an advanced VLDB property that
is hidden by default. For an introduction to this property, see Preserving
Data Using Outer Joins. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The following Preserve All Final Pass Result Elements VLDB property
settings determine how to outer join the final result, as well as the lookup
and relationship tables:

l If you choose the default Preserve common elements of final pass result table and lookup/relationship table option, the SQL Engine generates an equi-join. Therefore, you only see elements that are common to both tables.

l If you choose the Preserve all final result pass elements option, the SQL Engine generates an outer join, and your report contains all of the elements that are in the final result set. When this setting is turned on, outer joins are generated for any joins from the fact table to the lookup table, as well as to any relationship tables. This is because it is hard to distinguish which table is used as a lookup table and which as a relationship table, since the same table often plays both roles. For example, LOOKUP_DAY serves as both a lookup table for the Day attribute and a relationship table for Day and Month.

This setting should not be used in standard data warehouses, where the
lookup tables are properly maintained and all elements in the fact table
have entries in the respective lookup table. It should be used only when a
certain attribute in the fact table contains more (unique) attribute
elements than its corresponding lookup table. For example, in the
example above, the Fact Table contains sales for five different stores, but
the Store Table contains only four stores. This should not happen in a
standard data warehouse because the lookup table, by definition, should
contain all the attribute elements. However, this could happen if the fact
tables are updated more often than the lookup tables.

l If you choose the Preserve all elements of final pass result table with
respect to lookup table but not relationship table option, the SQL
Engine generates an inner join on all passes except the final pass; on the
final pass it generates an outer join.

l If you choose the Do not listen to per report level setting, preserve elements of final pass according to the setting at attribute level option at the database instance, report, or template level, the setting for this VLDB property at the attribute level is used instead. This option should not be selected at the attribute level itself; if it is, it is treated as the Preserve common elements of final pass result table and lookup table option (that is, choice 1).

This setting is useful if you have only a few attributes that require different join types. For example, if among the attributes in a report only one needs to preserve elements from the final pass table, you can set the VLDB property to the Preserve all final pass result elements setting for that one attribute. You can then set the report to the Do not listen setting for the VLDB property. When the report is run, only the attribute set differently causes an outer join in the SQL. All other attribute lookup tables are joined using an equi-join, which leads to better SQL performance.

Levels at Which You Can Set This

Database instance, report, template, and attribute

Examples

The first two examples below are based on the example data in Preserving Data Using Outer Joins above. The third example, for the Preserve all elements of final pass result table with respect to lookup table but not relationship table option, is a separate example designed to reflect the increased complexity of that option's behavior.

Example: Preserve common elements of final pass result table and lookup/relationship table

A report has Store and Dollar Sales on the template.

The "Preserve common elements of final pass result table and lookup table"
option returns the following results using the SQL below.

Store Dollar Sales

East 5000

Central 8000

South 12000

select a11.Store_id Store_id,
max(a12.Store) Store,
sum(a11.DollarSls) WJXBFS1
from Fact a11
join Store a12
on (a11.Store_id = a12.Store_id)
group by a11.Store_id

Example: Preserve all final result pass elements

A report has Store and Dollar Sales on the template.

The "Preserve all final result pass elements" option returns the following
results using the SQL below. Notice that the data for Store_IDs 4 and 5 are
now shown.

Store Dollar Sales

East 5000

Central 8000

South 12000

3000

1500

select a11.Store_id Store_id,
max(a12.Store) Store,
sum(a11.DollarSls) WJXBFS1
from Fact a11
left outer join Store a12
on (a11.Store_id = a12.Store_id)
group by a11.Store_id
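To make the contrast concrete, here is a minimal, hypothetical sketch that replays both joins in an in-memory SQLite database (table and column names are copied from the example; SQLite merely stands in for the warehouse):

```python
# Sketch only: reproduces the equal join vs. left outer join behavior from the
# two examples above, using sqlite3 in place of the actual data warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Fact (Store_id INTEGER, DollarSls INTEGER);
CREATE TABLE Store (Store_id INTEGER, Store TEXT);
INSERT INTO Fact VALUES (1,5000),(2,8000),(3,12000),(4,3000),(5,1500);
INSERT INTO Store VALUES (1,'East'),(2,'Central'),(3,'South');
""")

# "Preserve common elements": equal join drops fact rows with no lookup match.
inner = conn.execute("""
    SELECT a11.Store_id, MAX(a12.Store), SUM(a11.DollarSls)
    FROM Fact a11 JOIN Store a12 ON a11.Store_id = a12.Store_id
    GROUP BY a11.Store_id
""").fetchall()

# "Preserve all final pass result elements": left outer join keeps them.
outer = conn.execute("""
    SELECT a11.Store_id, MAX(a12.Store), SUM(a11.DollarSls)
    FROM Fact a11 LEFT OUTER JOIN Store a12 ON a11.Store_id = a12.Store_id
    GROUP BY a11.Store_id
""").fetchall()
```

The equal join returns only East, Central, and South, while the left outer join also returns Store_IDs 4 and 5 with a NULL store name, matching the two result tables above.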

Example: Preserve all elements of final pass result table with respect to lookup
table but not to relationship table

A report has Country, Metric 1, and Metric 2 on the template. The following
fact tables exist for each metric:

CALLCENTER_ID Fact 1

1 1000

2 2000

1 1000

2 2000

3 1000

4 1000

EMPLOYEE_ID Fact 2

1 5000

2 6000

1 5000

2 6000

3 5000

4 5000

5 1000


The SQL Engine performs three passes. In the first pass, the SQL Engine
calculates metric 1. The SQL Engine inner joins the "Fact Table (Metric 1)"
table above with the call center lookup table "LU_CALL_CTR" below:

CALLCENTER_ID COUNTRY_ID

1 1

2 1

3 2

to create the following metric 1 temporary table, grouped by country, using
the SQL that follows:

COUNTRY_ID Metric 1

1 6000

2 1000

create table ZZSP00 nologging as
select a12.COUNTRY_ID COUNTRY_ID,
sum((a11.QTY_SOLD * a11.DISCOUNT)) WJXBFS1
from ORDER_DETAIL a11,
LU_CALL_CTR a12
where a11.CALL_CTR_ID = a12.CALL_CTR_ID
group by a12.COUNTRY_ID

In the second pass, metric 2 is calculated. The SQL Engine inner joins the
"Fact Table (Metric 2)" table above with the employee lookup table "LU_
EMPLOYEE" below:

EMPLOYEE_ID COUNTRY_ID

1 1

2 2

3 2

to create the following metric 2 temporary table, grouped by country, using
the SQL that follows:

COUNTRY_ID Metric 2

1 10000

2 17000

create table ZZSP01 nologging as
select a12.COUNTRY_ID COUNTRY_ID,
sum(a11.FREIGHT) WJXBFS1
from ORDER_FACT a11,
LU_EMPLOYEE a12
where a11.EMP_ID = a12.EMP_ID
group by a12.COUNTRY_ID

In the third pass, the SQL Engine uses the following country lookup table,
"LU_COUNTRY":

COUNTRY_ID COUNTRY_DESC

1 United States

3 Europe

The SQL Engine left outer joins the METRIC1_TEMPTABLE above and the
LU_COUNTRY table. The SQL Engine then left outer joins the METRIC2_
TEMPTABLE above and the LU_COUNTRY table. Finally, the SQL Engine
inner joins the results of the third pass to produce the final results.


The "Preserve all elements of final pass result table with respect to lookup
table but not to relationship table" option returns the following results using
the SQL below.

COUNTRY_ID COUNTRY_DESC Metric 1 Metric 2

1 United States 6000 10000

2 1000 17000

select pa1.COUNTRY_ID COUNTRY_ID,
a11.COUNTRY_NAME COUNTRY_NAME,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS1 WJXBFS2
from ZZSP00 pa1,
ZZSP01 pa2,
LU_COUNTRY a11
where pa1.COUNTRY_ID = pa2.COUNTRY_ID and
pa1.COUNTRY_ID = a11.COUNTRY_ID (+)
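The final pass above uses Oracle's `(+)` outer-join notation. As a rough sanity check, the same pass can be replayed with ANSI join syntax in SQLite (a stand-in for the warehouse; data copied from the temporary and lookup tables above):

```python
# Sketch only: the Oracle "(+)" outer join rewritten as an ANSI LEFT OUTER
# JOIN, run against copies of the two temp tables and LU_COUNTRY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ZZSP00 (COUNTRY_ID INTEGER, WJXBFS1 INTEGER);
CREATE TABLE ZZSP01 (COUNTRY_ID INTEGER, WJXBFS1 INTEGER);
CREATE TABLE LU_COUNTRY (COUNTRY_ID INTEGER, COUNTRY_NAME TEXT);
INSERT INTO ZZSP00 VALUES (1,6000),(2,1000);
INSERT INTO ZZSP01 VALUES (1,10000),(2,17000);
INSERT INTO LU_COUNTRY VALUES (1,'United States'),(3,'Europe');
""")

rows = conn.execute("""
    SELECT pa1.COUNTRY_ID, a11.COUNTRY_NAME, pa1.WJXBFS1, pa2.WJXBFS1
    FROM ZZSP00 pa1
    JOIN ZZSP01 pa2 ON pa1.COUNTRY_ID = pa2.COUNTRY_ID
    LEFT OUTER JOIN LU_COUNTRY a11 ON pa1.COUNTRY_ID = a11.COUNTRY_ID
""").fetchall()
```

Country 2 survives the final pass with a NULL description because the lookup table is outer joined, while country 3 (Europe) never appears because the two metric temp tables are inner joined first.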

Preserve All Lookup Table Elements


For an introduction to this property, see Preserving Data Using Outer Joins,
page 1707.

The Preserve All Lookup Table Elements VLDB property is used to show all
attribute elements that exist in the lookup table, even though there is no
corresponding fact in the result set. For example, your report contains Store
and Sum(Sales), and it is possible that a store does not have any sales at
all. However, you want to show all the store names in the final report, even
those stores that do not have sales. To do that, you must not rely on the
stores in the sales fact table. Instead, you must make sure that all the stores
from the lookup table are included in the final report. The SQL Engine needs
to do a left outer join from the lookup table to the fact table.

There may be multiple attributes on the template. To keep all of their
attribute elements, the Analytical Engine needs to do a Cartesian join

between involved attributes' lookup tables before doing a left outer join to
the fact table.

In MicroStrategy 7.1, this property was known as Final Pass Result Table
Outer Join to Lookup Table.

Preserve Common Elements of Lookup and Final Pass Result Table (Default)

The Analytical Engine does a normal (equal) join to the lookup table.

Preserve Lookup Table Elements Joined to Final Pass Result Table Based on Fact Table Keys

Sometimes the fact table level is not the same as the report or template
level. For example, a report contains Store, Month, Sum(Sales) metric, but
the fact table is at the level of Store, Day, and Item. There are two ways to
keep all the store and month elements:

• Do a left outer join first to keep all attribute elements at the Store, Day,
and Item level, then aggregate to the Store and Month level.

• Do aggregation first, then do a left outer join to bring in all attribute
elements.

This option is for the first approach. In the example given previously, it
makes two SQL passes:

Pass 1: LOOKUP_STORE cross join LOOKUP_DAY cross join LOOKUP_ITEM → TT1

Pass 2: TT1 left outer join Fact_Table on (store, day, item)

The advantage of this approach is that you can do a left outer join and
aggregation in the same pass (pass 2). The disadvantage is that because
you do a Cartesian join with the lookup tables at a much lower level (pass 1),
the result of the Cartesian joined table (TT1) can be very large.


Preserve Lookup Table Elements Joined to Final Pass Result Table Based on Template Attributes Without Filter

This option corresponds to the second approach described above. Still using
the same example, it makes three SQL passes:

• Pass 1: aggregate the Fact_Table to TT1 at Store and Month. This is
actually the final pass of a normal report without turning on this setting.

• Pass 2: LOOKUP_STORE cross join LOOKUP_MONTH → TT2

• Pass 3: TT2 left outer join TT1 on (store, month)

This approach needs one more pass than the previous option, but the cross
join table (TT2) is usually smaller.
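The three passes can be sketched end to end in SQLite (hypothetical sample data and column names; SQLite stands in for the warehouse):

```python
# Sketch only: aggregate first, cross join the lookup tables, then left outer
# join, as in the "Template Attributes Without Filter" approach above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE LOOKUP_STORE (store TEXT);
CREATE TABLE LOOKUP_MONTH (month TEXT);
CREATE TABLE Fact_Table (store TEXT, month TEXT, sales INTEGER);
INSERT INTO LOOKUP_STORE VALUES ('East'),('West');
INSERT INTO LOOKUP_MONTH VALUES ('Jan'),('Feb');
INSERT INTO Fact_Table VALUES ('East','Jan',100),('East','Jan',50),('West','Feb',200);
""")

conn.executescript("""
-- Pass 1: aggregate the fact table to the Store/Month level (TT1)
CREATE TABLE TT1 AS
  SELECT store, month, SUM(sales) AS sales
  FROM Fact_Table GROUP BY store, month;
-- Pass 2: Cartesian join of the lookup tables (TT2)
CREATE TABLE TT2 AS
  SELECT s.store, m.month FROM LOOKUP_STORE s CROSS JOIN LOOKUP_MONTH m;
""")

# Pass 3: left outer join TT2 to TT1 preserves every store/month combination
rows = conn.execute("""
    SELECT t2.store, t2.month, t1.sales
    FROM TT2 t2 LEFT OUTER JOIN TT1 t1
      ON t2.store = t1.store AND t2.month = t1.month
    ORDER BY t2.store, t2.month
""").fetchall()
```

All four store/month pairs appear in the result, with NULL sales for the combinations that have no fact rows.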

Preserve Lookup Table Elements Joined to Final Pass Result Table Based on Template Attributes with Filter

This option is similar to Option 3. The only difference is that the report filter
is applied in the final pass (Pass 3). For example, a report contains Store,
Month, and Sum(Sales) with a filter of Year = 2002. You want to display
every store in every month in 2002, regardless of whether there are sales.
However, you do not want to show any months from other years (only the 12
months in year 2002). Option 4 resolves this issue.

When this VLDB setting is turned ON (Option 2, 3, or 4), it is assumed that
you want to keep ALL elements of the attributes in their lookup tables.
However, sometimes you want such a setting to affect only some of the
attributes on a template. For a report containing Store, Month, Sum(Sales),
you may want to show all the store names, even though they have no sales,
but not necessarily all the months in the LOOKUP_MONTH table. In 7i, you
can individually select attributes on the template that need to preserve
elements. This can be done from the Data menu, by selecting Report Data
Options and then choosing Attribute Join Type. Notice that the four options
shown on the upper right are the same as those in the VLDB dialog box
(internally they are read from the same location). In the lower-right part, you
see individual attributes. By default, all attributes are set to Outer, which

means that every attribute participates with the Preserve All Lookup Tables
Elements property. You still need to turn on this property to make it take
effect, which can be done using either this dialog box or the VLDB dialog
box.

Levels at Which You Can Set This

Database instance, report, and template

Example
The Preserve common elements of lookup and final pass result table
option simply generates a direct join between the fact table and the lookup
table. The results and SQL are as follows.

Store Dollar Sales

East 5000

Central 8000

South 12000

select a11.Store_id Store_id,
max(a12.Store) Store,
sum(a11.DollarSls) WJXBFS1
from Fact a11
join Store a12
on (a11.Store_id = a12.Store_id)
group by a11.Store_id

The "Preserve lookup table elements joined to final pass result table based
on fact keys" option creates a temp table that is a Cartesian join of all lookup
table key columns. Then the fact table is outer joined to the temp table. This
preserves all lookup table elements. The results and SQL are as below:


Store Dollar Sales

East 5000

Central 8000

South 12000

North

select distinct a11.Year Year
into #ZZOL00
from Fact a11
select pa1.Year Year,
a11.Store_id Store_id
into #ZZOL01
from #ZZOL00 pa1
cross join Store a11
select pa2.Store_id Store_id,
max(a12.Store) Store,
sum(a11.DollarSls) WJXBFS1
from #ZZOL01 pa2
left outer join Fact a11
on (pa2.Store_id = a11.Store_id and
pa2.Year = a11.Year)
join Store a12
on (pa2.Store_id = a12.Store_id)
group by pa2.Store_id
drop table #ZZOL00
drop table #ZZOL01

The "Preserve lookup table elements joined to final pass result table based
on template attributes without filter" option preserves the lookup table
elements by left outer joining to the final pass of SQL and only joins on
attributes that are on the template. For this example and the next, the filter
of "Store not equal to Central" is added. The results and SQL are as follows:

Store Dollar Sales

East 5000

Central


South 12000

North

select a11.Store_id Store_id,
sum(a11.DollarSls) WJXBFS1
into #ZZT5X00003UOL000
from Fact a11
where a11.Store_id not in (2)
group by a11.Store_id
select a11.Store_id Store_id,
a11.Store Store,
pa1.WJXBFS1 WJXBFS1
from Store a11
left outer join #ZZT5X00003UOL000 pa1
on (a11.Store_id = pa1.Store_id)
drop table #ZZT5X00003UOL000

The "Preserve lookup table elements joined to final pass result table based
on template attributes with filter" option is the newest option and is the same
as above, but you get the filter in the final pass. The results and SQL are as
follows:

Store Dollar Sales

East 5000

South 12000

North

select a11.Store_id Store_id,
sum(a11.DollarSls) WJXBFS1
into #ZZT5X00003XOL000
from Fact a11
where a11.Store_id not in (2)
group by a11.Store_id
select a11.Store_id Store_id,
a11.Store Store,
pa1.WJXBFS1 WJXBFS1

from Store a11
left outer join #ZZT5X00003XOL000 pa1
on (a11.Store_id = pa1.Store_id)
where a11.Store_id not in (2)
drop table #ZZT5X00003XOL000
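The difference between the last two options can be replayed in SQLite (hypothetical sample data modeled on the tables above; the temp-table name is shortened to TT):

```python
# Sketch only: applying the report filter in the aggregation pass only
# ("without filter") vs. in the final pass as well ("with filter").
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Fact (Store_id INTEGER, DollarSls INTEGER);
CREATE TABLE Store (Store_id INTEGER, Store TEXT);
INSERT INTO Fact VALUES (1,5000),(2,8000),(3,12000);
INSERT INTO Store VALUES (1,'East'),(2,'Central'),(3,'South'),(4,'North');
-- Aggregation pass with the filter "Store not equal to Central" (Store_id 2)
CREATE TABLE TT AS
  SELECT Store_id, SUM(DollarSls) AS WJXBFS1
  FROM Fact WHERE Store_id NOT IN (2) GROUP BY Store_id;
""")

# Without filter in the final pass: Central comes back with a NULL value.
without_filter = conn.execute("""
    SELECT a11.Store, pa1.WJXBFS1
    FROM Store a11 LEFT OUTER JOIN TT pa1 ON a11.Store_id = pa1.Store_id
""").fetchall()

# With filter in the final pass: Central is removed from the result entirely.
with_filter = conn.execute("""
    SELECT a11.Store, pa1.WJXBFS1
    FROM Store a11 LEFT OUTER JOIN TT pa1 ON a11.Store_id = pa1.Store_id
    WHERE a11.Store_id NOT IN (2)
""").fetchall()
```

This mirrors the two result tables above: both options preserve North (no sales), but only the "with filter" option drops the filtered-out Central row.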

Modifying Third-Party Cube Sources in MicroStrategy: MDX


The table below summarizes the MultiDimensional Expression (MDX) related
VLDB properties. These properties apply only to MDX cube reports using
data from an MDX cube. MDX cubes are also referred to as MDX cube
sources. MicroStrategy supports reporting and analysis with SAP BW,
Microsoft Analysis Services, Hyperion Essbase, and IBM Cognos TM1.
Additional details about each property, including examples where
necessary, are provided in the sections following the table.

In the table below, the default values for each VLDB property are the general
defaults that can be applied most broadly for the set of certified MDX cube
sources. Certain VLDB properties use different default settings depending
on which MDX cube source you are using. To determine all default VLDB
property settings for the MDX cube source you are reporting on, follow the
steps provided in Default VLDB Settings for Specific Data Sources, page
1925.

Property: Format for Date/Time Values Coming from Data Source
Description: Defines the date format used in your MDX cube source. This ensures the date data is integrated into MicroStrategy correctly.
Possible values: User-defined
Default value: DD.MM.YYYY

Property: MDX Add Fake Measure
Description: Determines how MDX cube reports that only include attributes are processed in order to improve performance in certain scenarios.
Possible values: Do not add a fake measure to an attribute-only MDX report; Add a fake measure to an attribute-only MDX report
Default value: Add a fake measure to an attribute-only MDX report

Property: MDX Add Non Empty
Description: Determines whether or not data is returned from rows that have null values.
Possible values: Do not add the non-empty keyword in the MDX select clause; Add the non-empty keyword in the MDX select clause only if there are metrics on the report; Always add the non-empty keyword in the MDX select clause
Default value: Add the non-empty keyword in the MDX select clause only if there are metrics on the report

Property: MDX Cell Formatting
Description: Defines whether the metric values in MicroStrategy MDX cube reports inherit their value formatting from an MDX cube source.
Possible values: MDX metric values are formatted per column; MDX metric values are formatted per cell
Default value: MDX metric values are formatted per column

Property: MDX has Measure Values in Other Hierarchies
Description: Determines how null values are identified if you use the MDX Non Empty Optimization VLDB property to ignore null values coming from MDX cube sources.
Possible values: Only include the affected hierarchy in the "has measure values" set definition; Include all template hierarchies in the "has measure values" set definition
Default value: Only include the affected hierarchy in the "has measure values" set definition

Property: MDX Level Number Calculation Method
Description: Determines whether level (from the bottom of the hierarchy up) or generation (from the top of the hierarchy down) should be used to populate the report results.
Possible values: Use actual level number; Use generation number to calculate level number
Default value: Use actual level number

Property: MDX Measure Values to Treat as Null
Description: Allows you to specify what measure values are defined as NULL values, which can help to support how your SAP environment handles non-calculated measures.
Possible values: User-defined
Default value: X

Property: MDX Non Empty Optimization
Description: Determines how null values from an MDX cube source are ignored using the non-empty keyword when attributes from different hierarchies (dimensions) are included on the same MDX cube report.
Possible values: No non-empty optimization; Non-empty optimization, use default measure; Non-empty optimization, use first measure on template; Non-empty optimization, use all measures on template
Default value: No non-empty optimization

Property: MDX Remember Measure Dimension Name
Description: Defines how the name of the measure dimension is determined for an MDX cube source.
Possible values: Do not remember the name of the measure dimension; Remember the name of the measure dimension; Read the name of the measure dimension from the "Name of Measure Dimension" VLDB setting
Default value: Do not remember the name of the measure dimension

Property: MDX TopCount Support
Description: Determines whether TopCount is used in place of Rank and Order to support certain MicroStrategy features such as metric filter qualifications.
Possible values: Do not use TopCount in the place of Rank and Order; Use TopCount instead of Rank and Order
Default value: Use TopCount instead of Rank and Order

Property: MDX Treat Key Date Qualification as ID Date Qualification
Description: Determines how date qualifications are processed for MDX cube sources.
Possible values: Do not treat a date qualification on a key form as a date qualification on an ID form; Treat a date qualification on a key form as a date qualification on an ID form
Default value: Treat a date qualification on a key form as a date qualification on an ID form

Property: MDX Verify Limit Filter Literal Level
Description: Supports an MDX cube reporting scenario in which filters are created on attribute ID forms and metrics.
Possible values: Do not verify the level of literals in limit or filter expressions; Verify the level of literals in limit or filter expressions
Default value: Do not verify the level of literals in limit or filter expressions

Property: Name of Measure Dimension
Description: Defines the name of the measures dimension in an MDX cube source.
Possible values: User-defined
Default value: [Measures]

Property: MDX Query Result Rows
Description: Determines the maximum number of rows that each MDX query can return against MSAS and Essbase for a hierarchical attribute.
Possible values: User-defined
Default value: -1

Format for Date/Time Values Coming from Data Source


Date data can be stored in a variety of formats in MDX cube sources. To
ensure that your date data from your MDX cube source is integrated into
MicroStrategy with the correct format, you can use the Format for Date/Time
Values Coming from Data Source VLDB property to define the date format
used in your MDX cube source.

The default date format is DD.MM.YYYY.

The date of July 4, 1776 is represented as 04.07.1776.

See the MDX Cube Reporting Help for information on supporting MDX cube
source date data in MicroStrategy.
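As a quick illustration, the default DD.MM.YYYY format corresponds to the `"%d.%m.%Y"` pattern in, for example, Python's datetime module:

```python
# Sketch only: parsing the guide's example date "04.07.1776" (July 4, 1776)
# with the pattern equivalent to the default DD.MM.YYYY format.
from datetime import datetime

parsed = datetime.strptime("04.07.1776", "%d.%m.%Y")
```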

Levels at Which You Can Set This

Database instance only

MDX Add Fake Measure


MDX Add Fake Measure is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

It is a common practice to include both attributes and metrics on an MDX
cube report. However, MDX cube reports can also contain only attributes to

review attribute information. If this type of MDX cube report accesses data
that is partitioned within the MDX cube source, the report can require
additional resources and impact the performance of the report. To avoid this
performance issue, the MDX Add Fake Measure VLDB property provides the
following options:

• Do not add a fake measure to an attribute-only MDX report: MDX cube
reports that only contain attributes without any metrics are processed as
normal. This can cause additional processing to be required for this type
of MDX cube report if it accesses data that is partitioned within the MDX
cube source. This is the default option for SAP and TM1 MDX cube
sources.

• Add a fake measure to an attribute-only MDX report: MDX cube reports
that only contain attributes without any metrics also include an additional
structure that acts as a metric, although no metrics are displayed on the
report. This can improve performance of MDX cube reports that only
contain attributes and also access data that is partitioned within the MDX
cube source. This is the default option for SSAS and Essbase MDX cube
sources.

Levels at Which You Can Set This

Database instance and report

MDX Add Non Empty


MDX Add Non Empty is an advanced VLDB property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The MDX Add Non Empty VLDB property determines how null values are
returned to MicroStrategy from an MDX cube source and displayed on MDX
cube reports. To determine whether null data should be displayed on MDX
cube reports, when attributes from different hierarchies (dimensions) are

included on the same MDX cube report, see MDX Non Empty Optimization,
page 1732.

You can choose from the following settings:

• Do not add the non-empty keyword in the MDX select clause: When
this option is selected, data is returned from rows that contain data and
rows that have null metric values (similar to an outer join in SQL). The null
values are displayed on the MDX cube report.

• Add the non-empty keyword in the MDX select clause only if there
are metrics on the report (default): When this option is selected, and
metrics are included on an MDX cube report, data is not returned from the
MDX cube source when the default metric in the MDX cube source has null
data. Any data not returned is not included on MDX cube reports (similar to
an inner join in SQL). If no metrics are present on an MDX cube report,
then all values for the attributes are returned and displayed on the MDX
cube report.

• Always add the non-empty keyword in the MDX select clause: When
this option is selected, data is not returned from the MDX cube source
when a metric on the MDX cube report has null data. Any data not returned
is not included on MDX cube reports (similar to an inner join in SQL).

Levels at Which You Can Set This

Database instance and report

See the MDX Cube Reporting Help for more information on MDX sources.

Do not add the non-empty keyword in the MDX select clause

with set [dim0_select_members] as '{[0D_SOLD_TO].[LEVEL01].members}'
set [dim1_select_members] as '{[0CALQUARTER].[LEVEL01].members}'
select {[Measures].[3STVV9JH7ATAV9YJN06S7ZKSQ]} on columns,CROSSJOIN
(hierarchize({[dim0_select_members]}), hierarchize({[dim1_select_members]}))
dimension properties [0D_SOLD_TO].[20D_SOLD_TO], [0D_SOLD_TO].[10D_SOLD_TO]
on rows
from [0D_DECU/QCUBE2]


Add the non-empty keyword in the MDX select clause

with set [dim0_select_members] as '{[0D_SOLD_TO].[LEVEL01].members}'
set [dim1_select_members] as '{[0CALQUARTER].[LEVEL01].members}'
select {[Measures].[3STVV9JH7ATAV9YJN06S7ZKSQ]} on columns, non empty
CROSSJOIN(hierarchize({[dim0_select_members]}), hierarchize({[dim1_select_
members]})) dimension properties [0D_SOLD_TO].[20D_SOLD_TO], [0D_SOLD_TO].
[10D_SOLD_TO] on rows
from [0D_DECU/QCUBE2]

MDX Cell Formatting


With the MDX Cell Formatting VLDB property, you can specify for the metric
values in MicroStrategy MDX cube reports to inherit their value formatting
from an MDX cube source. This enables MicroStrategy MDX cube reports to
use the same data formatting available in your MDX cube source. It also
maintains a consistent view of your MDX cube source data in MicroStrategy.

Inheriting value formats from your MDX cube source also enables you to
apply multiple value formats to a single MicroStrategy metric.

This VLDB property has the following options:

• MDX metric values are formatted per column (default): If you select this
option, MDX cube source formatting is not inherited. You can only apply a
single format to all metric values on an MDX cube report.

• MDX metric values are formatted per cell: If you select this option, MDX
cube source formatting is inherited. Metric value formats are determined
by the formatting that is available in the MDX cube source, and metric
values can have different formats.

For examples of using these options and steps to configure your MDX cube
sources properly, see the MDX Cube Reporting Help.

Levels at Which You Can Set This

Database instance and report


MDX has Measure Values in Other Hierarchies


MDX Has Measure Values In Other Hierarchies is an advanced property that
is hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

This VLDB property determines how null values are identified if you use the
MDX Non Empty Optimization VLDB property (see MDX Non Empty
Optimization, page 1732) to ignore null values coming from MDX
cube sources.

If you define the MDX Non Empty Optimization VLDB property as No non-
empty optimization, then this VLDB property has no effect on how null
values are ignored. If you use any other option for the MDX Non Empty
Optimization VLDB property, you can choose from the following settings:

• Only include the affected hierarchy in the "has measure values" set
definition: Only a single hierarchy on the MDX cube report is considered
when identifying and ignoring null values. This requires fewer resources to
determine the null values, but some values can be mistakenly identified as
null values in scenarios such as using calculated members in an MDX
cube source.

• Include all template hierarchies in the "has measure values" set
definition: All hierarchies that are part of an MDX cube report are
considered when identifying and ignoring null values. This can help to
ensure that some values are not lost when MicroStrategy ignores null
values from the MDX cube source. Including all hierarchies to identify null
values can require additional system resources and time to complete.

Levels at Which You Can Set This

Database instance and report


MDX Level Number Calculation Method


MDX Level Number Calculation is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

This VLDB property is useful only for MDX cube reports that access an
Oracle Hyperion Essbase MDX cube source. To help illustrate the
functionality of the property, consider an unbalanced hierarchy with the
levels Products, Department, Category, SubCategory, Item, and SubItem.
The image below shows how this hierarchy is populated on a report in
MicroStrategy.

The level SubItem causes the hierarchy to be unbalanced, which displaces
the levels of the hierarchy when populated on a report in MicroStrategy. For
more information on unbalanced and ragged hierarchies, see the MDX Cube
Reporting Help.

You can choose from the following settings:

• Use actual level number (default): When this option is selected, an
unbalanced or ragged hierarchy from Essbase is populated on a grid from
the bottom of the hierarchy up, as shown in the image above.

• Use generation number to calculate level number: When this option is
selected, an unbalanced or ragged hierarchy from Essbase is populated
on a grid from the top of the hierarchy down. If this setting is selected for
the example scenario described above, the report is populated as shown
in the image below.

The unbalanced hierarchy is now displayed on the report with an accurate
representation of the corresponding levels.

Setting this VLDB property to Use generation number to calculate level
number for a ragged hierarchy from Essbase can cause incorrect formatting.

Levels at Which You Can Set This

Database instance and report

MDX Measure Values to Treat as Null


MDX Measure Values to Treat as Null is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The MDX Measure Values to Treat as Null VLDB property allows you to
specify what measure values are defined as NULL values, which can help to
support how your SAP environment handles non-calculated measures. The
default value to treat as NULL is X. This supports defining non-calculated
measures as NULL values for SAP 7.4 environments.


Levels at Which You Can Set This

Database instance and report

MDX Non Empty Optimization


MDX Non Empty Optimization is an advanced VLDB property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The MDX Non Empty Optimization VLDB property determines how null
values from an MDX cube source are ignored using the non-empty keyword
when attributes from different hierarchies (dimensions) are included on the
same MDX cube report.

You can choose from the following settings:

• No non-empty optimization (default): The non-empty keyword is not
included during the cross join of data. By selecting this option, all null data
is included on the MDX cube report. Including all null data can require
more system resources to perform the necessary cross joins.

• Non-empty optimization, use default measure: The non-empty keyword
is added to any required cross joins based on the default measure within
the MDX cube source. Data is only displayed on an MDX cube report for
rows in which the default measure within the MDX cube source has data. If
you use this option, you can also control whether null values from MDX
cube sources are ignored using the VLDB property MDX Has Measure
Values In Other Hierarchies (see MDX has Measure Values in Other
Hierarchies, page 1729).

• Non-empty optimization, use first measure on template: The non-empty
keyword is added to any required cross joins based on the first metric used
on an MDX cube report. Data is only displayed on an MDX cube report for
rows in which the first metric used on an MDX cube report has data. For
example, if Revenue and Profit metrics are on an MDX cube report and
Revenue is in the first column (left-most column), the non-empty keyword is
added based on the Revenue metric. In this scenario, null or empty data may
still be returned for the Profit metric. If you use this option, you can also
control whether null values from MDX cube sources are ignored using the
VLDB property MDX Has Measure Values In Other Hierarchies (see MDX has
Measure Values in Other Hierarchies, page 1729).

• Non-empty optimization, use all measures on template: The non-empty
keyword is added to any required cross joins based on all metrics used on
an MDX cube report. Data is only displayed on an MDX cube report for rows
in which at least one of the metrics used on an MDX cube report has data.
For example, Revenue and Profit metrics are on an MDX cube report, which
includes the following data:

Year Category Revenue Profit

2008 Books $1,000,000 $300,000

2008 Electronics $2,500,000

2008 Movies $500,000

2008 Music
By selecting this option, the following data would be returned on the MDX
cube report:

Year Category Revenue Profit

2008 Books $1,000,000 $300,000

2008 Electronics $2,500,000

2008 Movies $500,000

The Music row does not appear because all the metrics have null values.
If you use this option, you can also control whether null values from MDX

cube sources are ignored using the VLDB property MDX Has Measure
Values In Other Hierarchies (see MDX has Measure Values in Other
Hierarchies, page 1729).
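Outside of MDX, the row-filtering rule this option applies can be sketched in a few lines of Python (data copied from the example tables above):

```python
# Sketch only: "use all measures" keeps a row if at least one measure is
# non-null, so only the Music row (all measures null) is dropped.
rows = [
    ("2008", "Books",       1_000_000, 300_000),
    ("2008", "Electronics", 2_500_000, None),
    ("2008", "Movies",      500_000,   None),
    ("2008", "Music",       None,      None),
]

# Keep rows where any measure column (index 2 onward) has a value.
kept = [r for r in rows if any(v is not None for v in r[2:])]
```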

Levels at Which You Can Set This

Database instance and report

MDX Remember Measure Dimension Name


MDX Remember Measure Dimension Name is an advanced property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

This VLDB property defines how the name of the measure dimension is
determined for an MDX cube source. You can choose from the following
settings:

• Do not remember the name of the measure dimension: The MDX cube
source is not analyzed to determine the name of the measure dimension.
Since most MDX cube sources use [Measures] as the measure
dimension name and MicroStrategy recognizes this default name, this
option is recommended for most MDX cube sources.

• Remember the name of the measure dimension: The MDX cube source
is analyzed to determine the name of the measure dimension. The name
returned is then used later when querying the MDX cube source. This
option can be used when an MDX cube source does not use [Measures]
as the measure dimension name, which is the default used for most MDX
cube sources. Essbase is the MDX cube source that most commonly uses
a measure dimension name other than [Measures].

l Read the name of the measure dimension from the "Name of Measure
Dimension" VLDB setting: The measure dimension name defined using
the Name of Measure Dimension VLDB property (see Name of Measure
Dimension, page 1738) is used as the measure dimension
name. You can use this option if the MDX cube source does not use


[Measures] as the measure dimension name, and you know what
alternative name is used for the measure dimension.

Levels at Which You Can Set This

Database instance only

MDX TopCount Support


MDX TopCount Support is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

This VLDB property determines whether TopCount is used in place of Rank
and Order to support certain MicroStrategy features such as metric filter
qualifications. TopCount can be used with SAP BW and Microsoft Analysis
Services MDX cube sources.

You can choose from the following settings:

l Do not use TopCount in the place of Rank and Order: The functions
Rank and Order are always used instead of TopCount. This option
supports backwards compatibility.

l Use TopCount instead of Rank and Order (default): The function
TopCount is automatically used in place of Rank and Order when
necessary to support certain MicroStrategy features. This includes
scenarios such as using metric filter qualifications on MDX cube reports.

Levels at Which You Can Set This

Database instance and report

MDX Treat Key Date Qualification as ID Date Qualification


MDX Treat Key Date Qualification As ID Date Qualification is an advanced
VLDB property that is hidden by default. For information on how to display


this property, see Viewing and Changing Advanced VLDB Properties, page
1630.

The MDX Treat Key Date Qualification As ID Date Qualification VLDB
property determines how date qualifications are processed for MDX cube
sources. You can choose from the following settings:

l Do not treat a date qualification on a key form as a date qualification
on an ID form: This option processes date qualifications by using the
member properties. While this can impact performance, you can use this
option to support date qualifications on data that cannot be processed by
using the unique name.

l Treat a date qualification on a key form as a date qualification on an
ID form (default): This option provides the best performance for
processing date qualifications by using the unique name rather than the
member properties.

Levels at Which You Can Set This

Database instance and report

MDX Verify Limit Filter Literal Level


MDX Verify Limit Filter Literal Level is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

This VLDB property supports a unique scenario when analyzing MDX cube
reports. An example of this scenario is provided below.

You have an MDX cube report that includes a low level attribute on the
report, along with some metrics. You create a filter on the attribute's ID
form, where the ID is between two ID values. You also include a filter on a
metric. Below is an example of such an MDX cube report definition:


When you run the report, you receive an error that alerts you that an
unexpected level was found in the result. This is because the filter on the
attribute's ID form can include other levels due to the structure of ID values
in some MDX cube sources. When these other levels are included, the
metric filter cannot be evaluated correctly by default.

You can support this type of report by modifying the MDX Verify Limit Filter
Literal Level. This VLDB property has the following options:

l Do not verify the level of literals in limit or filter expressions
(default): While the majority of MDX cube reports execute correctly when
this option is selected, the scenario described above will fail.

l Verify the level of literals in limit or filter expressions: Selecting this
option for an MDX cube report allows reports fitting the scenario described
above to execute correctly. This is achieved by adding an intersection in
the MDX statement to support the execution of such an MDX cube report.
For example, the MDX cube report described in the scenario above
executes correctly and displays the following data.

Levels at Which You Can Set This

Database instance and report


Name of Measure Dimension


Name of Measure Dimension is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

This VLDB property defines the name of the measures dimension in an MDX
cube source. The default name for the measures dimension is [Measures].
If your MDX cube source uses a different name for the measures dimension,
you must modify this VLDB property to match the name used in your MDX
cube source. Requiring this change is most common when connecting to
Essbase MDX cube sources, which do not always use [Measures] as the
measure dimension name.

Identifying the name of the measure dimension is also configured using the
MDX Remember Measure Dimension Name VLDB property, as described in
MDX Remember Measure Dimension Name, page 1734.

Levels at Which You Can Set This

Database instance and report

MDX Query Result Rows


MDX Query Result Rows is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

This VLDB property sets the maximum number of rows that each MDX query
can return against MSAS and Essbase for a hierarchical attribute. The
default value is -1, which means there is no limit. Any positive integer can
be set as the maximum number of rows; any other value is treated as the
default and imposes no limit. When the limit is exceeded, an error message
appears.
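The interpretation of the setting can be sketched as follows. This is a hypothetical helper for illustration, not the Intelligence Server's actual implementation:

```python
def check_result_rows(rows, limit=-1):
    # A positive integer caps the row count; -1 (or any other value)
    # means no limit, matching the property's default behavior.
    if isinstance(limit, int) and limit > 0 and len(rows) > limit:
        raise RuntimeError(
            f"MDX query returned {len(rows)} rows, exceeding the limit of {limit}"
        )
    return rows

data = [("2023", 1.0), ("2024", 2.0), ("2025", 3.0)]
print(len(check_result_rows(data)))           # default -1: no limit
print(len(check_result_rows(data, limit=5)))  # under the cap: returned as-is
```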

Levels at Which You Can Set This

Database instance


Calculating Data: Metrics


The following summarizes the Metrics VLDB properties. Additional details
about each property, including examples where necessary, are provided in
the sections following the summary.

Absolute Non-Agg Metric Query Type
Description: The Analytical Engine can either perform the non-aggregation
calculation with a subquery, or place the results that would have been
selected from a subquery into an intermediate table and join that table to
the rest of the query.
Possible values: Use subquery; Use temp table as set in the Fallback Table
Type setting
Default value: Use subquery

Compute Non-Agg Before/After OLAP Functions (For Example, Rank)
Calculated in Analytical Engine
Description: Controls whether the non-aggregation calculation is performed
before or after an Analytical Engine calculation. Use this property to
determine, for example, whether the engine ranks first and then performs
the non-aggregation calculation, or performs the non-aggregation
calculation first.
Possible values: Calculate non-aggregation before OLAP Functions/Rank;
Calculate non-aggregation after OLAP Functions/Rank
Default value: Calculate non-aggregation before OLAP Functions/Rank

Count Compound Attribute
Description: Compound attributes are usually counted by concatenating the
keys of all the attributes that form the key. If the database platform does
not support COUNT on concatenated strings, this property should be
disabled.
Possible values: COUNT expression enabled; COUNT expression disabled
Default value: COUNT expression enabled

COUNT(column) Support
Description: Some database platforms do not support count on a column
(COUNT(COL)). This property converts the COUNT(COL) statement to a
COUNT(*).
Possible values: Use COUNT(column); Use COUNT(*)
Default value: Use COUNT(column)

Default to Metric Name
Description: Allows you to choose whether you want to use the metric name
as the column alias or whether to use a MicroStrategy-generated name.
Possible values: Do not use the metric name as the default metric column
alias; Use the metric name as the default metric column alias
Default value: Do not use the metric name as the default metric column
alias

Integer Constant in Metric
Description: Determines whether to add a ".0" after an integer constant.
Possible values: Add ".0" to integer constant in metric expression; Do not
add ".0" to integer constant in metric expression
Default value: Add ".0" to integer constant in metric expression

Join Across Datasets
Description: Determines how values for metrics are calculated when
unrelated attributes, from different datasets of a dashboard or document,
are included with metrics.
Possible values: Disallow joins based on unrelated common attributes;
Allow joins based on unrelated common attributes
Default value: Disallow joins based on unrelated common attributes

Max Metric Alias Size
Description: Maximum size of the metric alias string.
Possible values: User-defined
Default value: 256

Metric Join Type
Description: Type of join used in a metric.
Possible values: Inner Join; Outer Join
Default value: Inner Join

Non-Agg Metric Optimization
Description: Influences the behavior of non-aggregation metrics by either
optimizing for smaller temporary tables or for less fact table access.
Possible values: Optimized for less fact table access; Optimized for smaller
temp table
Default value: Optimized for less fact table access

Null Check
Description: Indicates how to handle arithmetic operations with NULL
values.
Possible values: Do nothing; Check for NULL in all queries; Check for NULL
in temp table join only
Default value: Check for NULL in temp table join only

Separate COUNT DISTINCT
Description: Indicates how to handle COUNT (and other aggregation
functions) when DISTINCT is present in the SQL.
Possible values: One pass; Multiple count distinct, but count expression
must be the same; Multiple count distinct, but only one count distinct per
pass; No count distinct, use select distinct and count(*) instead
Default value: No count distinct, use select distinct and count(*) instead

Smart Metric Transformation
Description: Determines the evaluation order to support variance and
variance percentage transformations on smart metric or compound metric
results.
Possible values: False; True
Default value: False

Subtotal Dimensionality Use
Description: Determines how the level of calculation is defined for metrics
that are included on reports that utilize the OLAP Services feature dynamic
aggregation.
Possible values: Use only the grouping property of a level metric for
dynamic aggregation; Use only the grouping property of a level subtotal for
dynamic aggregation; Use both the grouping and filtering property of a
level metric for dynamic aggregation; Use both the grouping and filtering
property of a level subtotal for dynamic aggregation
Default value: Use only the grouping property of a level metric for dynamic
aggregation

Transformable AggMetric
Description: Defines metrics that should be used to perform
transformations on compound metrics that use nested aggregation.
Possible values: False; True
Default value: False

Transformation Role Processing
Description: Indicates how to handle the transformation calculation.
Possible values: 7.1 style. Apply transformation to all applicable attributes;
7.2 style. Only apply transformation to highest common child when it is
applicable to multiple attributes
Default value: 7.1 style. Apply transformation to all applicable attributes

Zero Check
Description: Indicates how to handle division by zero.
Possible values: Do nothing; Check for zero in all queries; Check for zero in
temp table join only
Default value: Check for zero in all queries

Absolute Non-Agg Metric Query Type


Absolute Non-Agg Metric Query Type is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

When a report contains an absolute non-aggregation metric, the pass that
gets the non-aggregated data can be performed in a subquery or in a
temporary table.

l Use Temp Table as set in the Fallback Table Type setting: When this
option is set, the table creation type follows the option selected in the
VLDB property Fallback Table Type. The SQL Engine reads the Fallback
Table Type VLDB setting and determines whether to create the
intermediate table as a true temporary table or a permanent table.


In most cases, the default Fallback Table Type VLDB setting is Temporary
table. However, for a few databases, like UDB for 390, this option is set to
Permanent table. These databases have their Intermediate Table Type
defaulting to True Temporary Table, so you set their Fallback Table Type
to Permanent. If you see permanent table creation and you want the
absolute non-aggregation metric to use a True Temporary table, set the
Fallback Table Type to Temporary table on the report as well.

l Use subquery (default): With this setting, the engine performs the non-
aggregation calculation with a subquery.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Use Sub-query

select a11.CLASS_NBR CLASS_NBR,
a12.CLASS_DESC CLASS_DESC,
sum(a11.TOT_SLS_QTY) WJXBFS1
from DSSADMIN.MARKET_CLASS a11,
DSSADMIN.LOOKUP_CLASS a12
where a11.CLASS_NBR = a12.CLASS_NBR
and (((a11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
and ((a11.MARKET_NBR)
in (select min(c11.MARKET_NBR)
from DSSADMIN.LOOKUP_MARKET c11
where ((c11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1))))))
group by a11.CLASS_NBR,
a12.CLASS_DESC

Use Temporary Table as Set in the Fallback Table Type Setting

create table TPZZOP00 as
select min(c11.MARKET_NBR) WJXBFS1
from DSSADMIN.LOOKUP_MARKET c11


where ((c11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
select a11.CLASS_NBR CLASS_NBR,
a12.CLASS_DESC CLASS_DESC,
sum(a11.TOT_SLS_QTY) WJXBFS1
from DSSADMIN.MARKET_CLASS a11,
TPZZOP00 pa1,
DSSADMIN.LOOKUP_CLASS a12
where a11.MARKET_NBR = pa1.WJXBFS1 and
a11.CLASS_NBR = a12.CLASS_NBR
and ((a11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
group by a11.CLASS_NBR,
a12.CLASS_DESC

Compute Non-Agg Before/After OLAP Functions (For Example, Rank)
Calculated in Analytical Engine

Compute Non-Agg Before/After OLAP Functions/Rank is an advanced
property that is hidden by default. For information on how to display this
property, see Viewing and Changing Advanced VLDB Properties, page 1630.

When reports contain calculations based on non-aggregation metrics, this
property controls the order in which the non-aggregation and the other
calculations are computed.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Calculate Non-Aggregation Before Analytical (default)

select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_QTY) WJXBFS1
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where ((a11.CUR_TRN_DT)


in (select min(a11.CUR_TRN_DT)
from HARI_LOOKUP_DAY a11
group by a11.YEAR_ID))
group by a12.YEAR_ID
create table #ZZTIS00H5J7MQ000(
YEAR_ID DECIMAL(10, 0))
[Placeholder for an analytical SQL]
select a12.YEAR_ID YEAR_ID,
max(a13.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_QTY) TSQDIMYEARNA
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join #ZZTIS00H5J7MQ000 pa1
on (a12.YEAR_ID = pa1.YEAR_ID)
join HARI_LOOKUP_YEAR a13
on (a12.YEAR_ID = a13.YEAR_ID)
where ((a11.CUR_TRN_DT)
in (select min(a15.CUR_TRN_DT)
from #ZZTIS00H5J7MQ000 pa1
join HARI_LOOKUP_DAY a15
on (pa1.YEAR_ID = a15.YEAR_ID)
group by pa1.YEAR_ID))
group by a12.YEAR_ID

Calculate Non-Aggregation After Analytical

select a11.CUR_TRN_DT CUR_TRN_DT,
a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_QTY) WJXBFS1
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
group by a11.CUR_TRN_DT,
a12.YEAR_ID
create table #ZZTIS00H5J8NB000(
CUR_TRN_DT DATETIME,
YEAR_ID DECIMAL(10, 0),
WJXBFS1 FLOAT)
[Placeholder for an analytical SQL]
insert into #ZZTIS00H5J8NB000 values (CONVERT
(datetime, '1993-12-01 00:00:00', 120), 1993,
44)

[The rest of the INSERT statements have been omitted from display].

select distinct pa1.YEAR_ID YEAR_ID,
pa1.WJXBFS1 WJXBFS1
from #ZZTIS00H5J8NB000 pa1
where ((pa1.CUR_TRN_DT)
in (select min(c11.CUR_TRN_DT)


from HARI_LOOKUP_DAY c11
group by c11.YEAR_ID))
create table #ZZTIS00H5J8MQ001(
YEAR_ID DECIMAL(10, 0),
WJXBFS1 FLOAT)
[Placeholder for an analytical SQL]
select a12.YEAR_ID YEAR_ID,
max(a13.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_QTY) TSQDIMYEARNA
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join #ZZTIS00H5J8MQ001 pa2
on (a12.YEAR_ID = pa2.YEAR_ID)
join HARI_LOOKUP_YEAR a13
on (a12.YEAR_ID = a13.YEAR_ID)
where ((a11.CUR_TRN_DT)
in (select min(a15.CUR_TRN_DT)
from #ZZTIS00H5J8MQ001 pa2
join HARI_LOOKUP_DAY a15
on (pa2.YEAR_ID = a15.YEAR_ID)
group by pa2.YEAR_ID))
group by a12.YEAR_ID

Count Compound Attribute


Count Compound Attribute is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Compound attributes are usually counted by concatenating the keys of all of
the attributes that form the key.

If your database platform does not support COUNT on concatenated strings,
the Count Compound Attribute property should be disabled.

Levels at Which You Can Set This

Database instance only

Examples

COUNT expression enabled (default)


select a21.DIVISION_NBR DIVISION_NBR,
max(a22.DIVISION_DESC) DIVISION_DESC,
count(distinct char(a21.ITEM_NBR) || '' ||
char(a21.CLASS_NBR)) ITEM_COUNT
from LOOKUP_ITEM a21
join LOOKUP_DIVISION a22
on (a21.DIVISION_NBR = a22.DIVISION_NBR)
group by a21.DIVISION_NBR

COUNT expression disabled

create table TEMP1 as
select distinct a21.DIVISION_NBR DIVISION_NBR,
a21.ITEM_NBR ITEM_NBR,
a21.CLASS_NBR CLASS_NBR
from LOOKUP_ITEM a21
select a22.DIVISION_NBR DIVISION_NBR,
max(a22.DIVISION_DESC) DIVISION_DESC,
count(a21.ITEM_NBR) ITEM_COUNT
from TEMP1 a21
join LOOKUP_DIVISION a22
on (a21.DIVISION_NBR = a22.DIVISION_NBR)
group by a22.DIVISION_NBR

COUNT(column) Support

COUNT(column) Support is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The COUNT(column) Support property is used to specify whether COUNT on
a column is supported or not. If it is not supported, the COUNT(column) is
computed by using intermediate tables and COUNT(*).

Levels at Which You Can Set This

Database instance only

Examples

Use COUNT(column)


select a11.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
count(distinct a11.COST_AMT) COUNTDISTINCT
from HARI_COST_STORE_DEP a11
join HARI_LOOKUP_STORE a12
on (a11.STORE_NBR = a12.STORE_NBR)
group by a11.STORE_NBR

Use COUNT(*)

select a11.STORE_NBR STORE_NBR,
a11.COST_AMT WJXBFS1
into #ZZTIS00H5JWDA000
from HARI_COST_STORE_DEP a11
select distinct pa1.STORE_NBR STORE_NBR,
pa1.WJXBFS1 WJXBFS1
into #ZZTIS00H5JWOT001
from #ZZTIS00H5JWDA000 pa1
where pa1.WJXBFS1 is not null
select pa2.STORE_NBR STORE_NBR,
max(a11.STORE_DESC) STORE_DESC,
count(*) WJXBFS1
from #ZZTIS00H5JWOT001 pa2
join HARI_LOOKUP_STORE a11
on (pa2.STORE_NBR = a11.STORE_NBR)
group by pa2.STORE_NBR

Default to Metric Name


Default to Metric Name is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Default to Metric Name allows you to choose whether you want to use the
metric name or a MicroStrategy-generated name as the column alias. When
metric names are used, only the first 20 standard characters are used, so if
two metric names begin with the same 20 characters, their column aliases
are identical and cannot be differentiated. The Default to Metric Name
option does not work for some international customers.

If you choose to use the metric name and the metric name begins with a
number, the letter M is attached to the beginning of the name during SQL


generation. For example, a metric named 2003Revenue is renamed
M2003Revenue. This occurs because databases such as Teradata do not
allow a column name that begins with a number.

If you select the option Use the metric name as the default metric
column alias, you should also set the maximum metric alias size. See
Max Metric Alias Size, page 1753 for information on setting this
option.

Levels at Which You Can Set This

Database instance only

Examples

Do not use the metric name as the default metric column alias (default)

insert into ZZTSU006VT7PO000
select a11.[MONTH_ID] AS MONTH_ID,
a11.[ITEM_ID] AS ITEM_ID,
a11.[EOH_QTY] AS WJXBFS1
from [INVENTORY_Q4_2003] a11,
[LU_MONTH] a12,
[LU_ITEM] a13
where a11.[MONTH_ID] = a12.[MONTH_ID] and
a11.[ITEM_ID] = a13.[ITEM_ID]
and (a13.[SUBCAT_ID] in (25)
and a12.[QUARTER_ID] in (20034))

Use the metric name as the default metric column alias

insert into ZZPO00
select a11.[MONTH_ID] AS MONTH_ID,
a11.[ITEM_ID] AS ITEM_ID,
a11.[EOH_QTY] AS Endonhand
from [{|Partition_Base_Table|}] a11,
[LU_MONTH] a12,
[LU_ITEM] a13
where a11.[MONTH_ID] = a12.[MONTH_ID] and
a11.[ITEM_ID] = a13.[ITEM_ID]
and (a13.[SUBCAT_ID] in (25)
and a12.[QUARTER_ID] in (20034))


Integer Constant in Metric


The Integer Constant in Metric property determines whether or not to add a
".0" after the integer. This prevents incorrect integer division, for example,
2/7 = 0. Normally a ".0" is added to an integer constant to have a float
division (2.0/7.0 = 0.286). Some databases have trouble with this change,
because some database functions only work with integer data types. This
property allows you to turn OFF the addition of the ".0" if you have a
database that does not properly handle the .0 added after the integer.
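The effect is easy to reproduce in any SQL environment. The snippet below uses SQLite purely as a stand-in database to contrast the two divisions:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# Without the ".0", both operands are integers and the division truncates.
int_result = cur.execute("SELECT 2 / 7").fetchone()[0]
# With the ".0" appended, the division is performed in floating point.
float_result = cur.execute("SELECT 2.0 / 7.0").fetchone()[0]
print(int_result, round(float_result, 3))  # 0 0.286
```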

Levels at Which You Can Set This

Database instance and metric

Join Across Datasets


Join Across Datasets is an advanced VLDB property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Join Across Datasets VLDB property determines how values for metrics
are calculated when unrelated attributes from different datasets of a
dashboard or document are included with metrics. For example, consider a
dashboard with two separate datasets that include the following data:

The datasets are displayed below as simple grid visualizations within a
dashboard.

Notice that one dataset includes the Region attribute, however the other
dataset only includes Category. The Region attribute is also not directly
related to the Category attribute, but it is included with Category in one of
the two datasets.


On this dashboard, you choose to create a new grid visualization with
Region and Sales. These objects are not on the same dataset, so this
requires combining the data from different datasets. By default, data is not
joined for the unrelated attributes Category and Region, and the following
data is displayed:

The data for Sales is displayed as $260 for both Regions, which is the total
sales of all regions. In most scenarios, this sales data should instead reflect
the data for each region. This can be achieved by allowing data to be joined
for the unrelated attributes Category and Region, which then displays the
following data:

Now the data for Sales displays $185 for North (a combination of the sales
for Books and Electronics, which were both for the North region) and $85 for
South (sales for Movies, which was for the South region).

l Disallow joins based on unrelated common attributes: By default, data
is not joined for unrelated attributes that are included on the same
dataset. This option supports backward compatibility.

l Allow joins based on unrelated common attributes: Data is joined for
unrelated attributes that are included on the same dataset. This can allow
metric data to consider unrelated attributes on the same dataset to
logically combine the data, and thus provides results that are more
accurate and intuitive in most cases.
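The two behaviors can be sketched in SQL. The dashboard's dataset figures are not reproduced here, so the tables and sales amounts below are hypothetical stand-ins (run with SQLite) chosen only to show the shape of the two results:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE ds1 (region TEXT, category TEXT)")    # Region + Category
cur.execute("CREATE TABLE ds2 (category TEXT, sales INTEGER)")  # Category + Sales
cur.executemany("INSERT INTO ds1 VALUES (?, ?)",
                [("North", "Books"), ("North", "Electronics"), ("South", "Movies")])
cur.executemany("INSERT INTO ds2 VALUES (?, ?)",
                [("Books", 100), ("Electronics", 85), ("Movies", 85)])

# Disallow joins: Region and Sales never share a dataset, so every
# region repeats the grand total of Sales.
disallowed = cur.execute(
    "SELECT r.region, (SELECT SUM(sales) FROM ds2) "
    "FROM (SELECT DISTINCT region FROM ds1) r ORDER BY r.region"
).fetchall()

# Allow joins: the unrelated common attribute Category links the two
# datasets, so Sales is apportioned to each region.
allowed = cur.execute(
    "SELECT ds1.region, SUM(ds2.sales) FROM ds1 "
    "JOIN ds2 ON ds1.category = ds2.category "
    "GROUP BY ds1.region ORDER BY ds1.region"
).fetchall()
print(disallowed)  # [('North', 270), ('South', 270)]
print(allowed)     # [('North', 185), ('South', 85)]
```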

Levels at Which You Can Set This

Project and dashboard. To define this behavior for a dashboard open in
Visual Insight, from the File menu, select Document Properties. You can
then select to allow joins across datasets.


Max Metric Alias Size


Max Metric Alias Size is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Max Metric Alias Size allows you to set the maximum size of the metric alias
string. This is useful for databases that only accept a limited number of
characters for column names.

Set the Max Metric Alias Size VLDB property for the following gateways:

Gateway               Max Metric Alias Size

SQL Server 2012       128
SQL Server 2014       128
SQL Server 2016       128
SQL Server 2017       128
SQL Server 2019       128
Azure Synapse         128
Db2                   128
PostgreSQL            63
Oracle 12c            30
Oracle 12cR2          128
Oracle 18c            128
Oracle 19c            128
Oracle 21c            128
Redshift              127
Teradata 15.x         128
Teradata 16.x         128
Teradata 17           128
Google BigQuery       128
Snowflake             256

You should set the maximum metric alias size to fewer characters than your
database's limit. This is because, in certain instances, such as when two
column names are identical, the SQL engine adds one or more characters to
one of the column names during processing to be able to differentiate
between the names. Identical column names can develop when column
names are truncated.

For example, if your database rejects any column name that is more than 30
characters and you set this VLDB property to limit the maximum metric alias
size to 30 characters, the example presented by the following metric names
still causes your database to reject the names during SQL processing:

l Sales Metric in Fairfax County for 2002

l Sales Metric in Fairfax County for 2003

The system limits the names to 30 characters based on the VLDB option you
set in this example, which means that the metric aliases for both columns
are as follows:

l SALESMETRICINFAIRFAXCOUNTYFOR2 (30 characters)

l SALESMETRICINFAIRFAXCOUNTYFOR21 (31 characters)

The SQL engine adds a 1 to one of the names because the truncated
versions of both metric names are identical. That name is then 31 characters
long and so the database rejects it.

Therefore, in this example you should use this feature to set the maximum
metric alias size to fewer than 30 (perhaps 25), to allow room for the SQL


engine to add one or two characters during processing in case the first 25
characters of any of your metric names are the same.
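The collision described above can be sketched as follows. This is a simplified model of the truncation and renaming behavior, not MicroStrategy's actual aliasing algorithm:

```python
def metric_alias(name: str, max_len: int, used: set) -> str:
    # Uppercase, drop non-alphanumerics, truncate to the configured maximum.
    base = "".join(ch for ch in name.upper() if ch.isalnum())[:max_len]
    candidate, n = base, 1
    while candidate in used:
        # Disambiguating suffix -- this is what pushes an alias past max_len.
        candidate = base + str(n)
        n += 1
    used.add(candidate)
    return candidate

used = set()
a = metric_alias("Sales Metric in Fairfax County for 2002", 30, used)
b = metric_alias("Sales Metric in Fairfax County for 2003", 30, used)
print(a, len(a))  # SALESMETRICINFAIRFAXCOUNTYFOR2 30
print(b, len(b))  # SALESMETRICINFAIRFAXCOUNTYFOR21 31 -- exceeds the 30-character limit
```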

Levels at Which You Can Set This

Database instance only

Metric Join Type


Metric Join Type is used to determine how to combine the result of one
metric with that of other metrics. When this property is set to Outer Join, all
the result rows of this metric are kept when combining results with other
metrics. If there is only one metric on the report, this property is ignored.

There are multiple places to set this property:

l At the DBMS and database instance levels, it is set in the VLDB
Properties Editor. This setting affects all the metrics in this project, unless
it is overridden at a lower level.

l At the metric level, it can be set in either the VLDB Properties Editor or
from the Metric Editor's Tools menu, and choosing Metric Join Type. The
setting is applied in all the reports that include this metric.

l At the report level, it can be set from the Report Editor's Data menu, by
pointing to Report Data Options, and choosing Metric Join Type. This
setting overrides the setting at the metric level and is applied only for the
currently selected report.

There is a related but separate property called Formula Join Type that can
also be set at the metric level. This property is used to determine how to
combine the result set together within this metric. This normally happens
when a metric formula contains multiple facts that cause the Analytical
Engine to use multiple fact tables. As a result, sometimes it needs to
calculate different components of one metric in different intermediate tables
and then combine them. This property can only be set in the Metric Editor


from the Tools menu, by pointing to Advanced Settings, and then choosing
Formula Join Type.

Both Metric Join Type and Formula Join Type are used in the Analytical
Engine to join multiple intermediate tables in the final pass. The actual logic
is also affected by another VLDB property, Full Outer Join Support. When
this property is set to YES, it means the corresponding database supports
full outer join (92 syntax). In this case, the joining of multiple intermediate
tables makes use of outer join syntax directly (left outer join, right outer join,
or full outer join, depending on the setting on each metric/table). However, if
the Full Outer Join Support is NO, then the left outer join is used to simulate
a full outer join. This can be done with a union of the IDs of the multiple
intermediate tables that need to do an outer join and then using the union
table to left outer join to all intermediate tables, so this approach generates
more passes. This approach was also used by MicroStrategy 6.x and earlier.
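The simulation can be sketched in SQL. The intermediate tables below are hypothetical (two single-metric result sets keyed by the same ID), run here with SQLite as a convenient way to show the union-then-left-outer-join pattern:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE m1 (id INTEGER, revenue INTEGER)")  # first metric's pass
cur.execute("CREATE TABLE m2 (id INTEGER, cost INTEGER)")     # second metric's pass
cur.executemany("INSERT INTO m1 VALUES (?, ?)", [(1, 100), (2, 200)])
cur.executemany("INSERT INTO m2 VALUES (?, ?)", [(2, 50), (3, 75)])

# Union the IDs of both intermediate tables, then left outer join each
# table back to the union -- a full outer join without FULL JOIN syntax.
rows = cur.execute("""
    SELECT u.id, m1.revenue, m2.cost
    FROM (SELECT id FROM m1 UNION SELECT id FROM m2) u
    LEFT JOIN m1 ON u.id = m1.id
    LEFT JOIN m2 ON u.id = m2.id
    ORDER BY u.id
""").fetchall()
print(rows)  # [(1, 100, None), (2, 200, 50), (3, None, 75)]
```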

Also note that when the metric level is higher than the template level, the
Metric Join Type property is normally ignored, unless you enable another
property, Downward Outer Join Option. For detailed information, see
Relating Column Data with SQL: Joins, page 1684.

Levels at Which You Can Set This

Database instance and metric

Non-Agg Metric Optimization


Non-Agg Metric Optimization is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

Non-Agg Metric Optimization influences the behavior of non-aggregation
metrics by either optimizing for smaller temporary tables or for less fact
table access. This property can help improve query performance depending
on the fact table size and the potential temporary table size. It may be more
effective to create a larger temporary table so that you can avoid using the

even larger fact table. If you are short on temporary table space, or would
insert a large amount of data from the fact table into the temporary table, it
may be better to read the fact table multiple times rather than create
temporary tables. Your choice for this property depends on your data and
report definitions.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Optimized for less fact table access (default)

The following example first creates a fairly large temporary table, but then
never touches the fact table again.

select a11.REGION_NBR REGION_NBR,


a11.REGION_NBR REGION_NBR0,
a12.MONTH_ID MONTH_ID,
a11.DIVISION_NBR DIVISION_NBR,
a11.CUR_TRN_DT CUR_TRN_DT,
a11.TOT_SLS_DLR WJXBFS1
into ZZNB00
from REGION_DIVISION a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
select pa1.REGION_NBR REGION_NBR,
pa1.MONTH_ID MONTH_ID,
min(pa1.CUR_TRN_DT) WJXBFS1
into ZZMB01
from ZZNB00 pa1
group by pa1.REGION_NBR,
pa1.MONTH_ID
select pa1.REGION_NBR REGION_NBR,
pa1.MONTH_ID MONTH_ID,
count(pa1.WJXBFS1) WJXBFS1
into ZZNC02
from ZZNB00 pa1
join ZZMB01 pa2
on (pa1.CUR_TRN_DT = pa2.WJXBFS1 and
pa1.MONTH_ID = pa2.MONTH_ID and
pa1.REGION_NBR = pa2.REGION_NBR)
group by pa1.REGION_NBR,
pa1.MONTH_ID
select distinct pa3.REGION_NBR REGION_NBR,
a13.REGION_DESC REGION_DESC,
a12.CUR_TRN_DT CUR_TRN_DT,
pa3.WJXBFS1 COUNTOFSALES
from ZZNC02 pa3

join LOOKUP_DAY a12


on (pa3.MONTH_ID = a12.MONTH_ID)
join LOOKUP_REGION a13
on (pa3.REGION_NBR = a13.REGION_NBR)

Optimized for smaller temp table

The following example does not create the large temporary table but must
query the fact table twice.

select a11.REGION_NBR REGION_NBR,


a12.MONTH_ID MONTH_ID,
min(a11.CUR_TRN_DT) WJXBFS1
into ZZOP00
from REGION_DIVISION a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
group by a11.REGION_NBR,
a12.MONTH_ID
select a11.REGION_NBR REGION_NBR,
a12.MONTH_ID MONTH_ID,
count(a11.TOT_SLS_DLR) COUNTOFSALES
into ZZMD01
from REGION_DIVISION a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join ZZOP00 pa1
on (a11.CUR_TRN_DT = pa1.WJXBFS1 and
a11.REGION_NBR = pa1.REGION_NBR and
a12.MONTH_ID = pa1.MONTH_ID)
group by a11.REGION_NBR,
a12.MONTH_ID
select distinct pa2.REGION_NBR REGION_NBR,
a13.REGION_DESC REGION_DESC,
a12.CUR_TRN_DT CUR_TRN_DT,
pa2.COUNTOFSALES COUNTOFSALES
from ZZMD01 pa2
join LOOKUP_DAY a12
on (pa2.MONTH_ID = a12.MONTH_ID)
join LOOKUP_REGION a13
on (pa2.REGION_NBR = a13.REGION_NBR)

Null Check
The Null Check VLDB property indicates how to handle arithmetic operations
with NULL values. If Null Check is enabled, the NULL2ZERO function is
added, which changes NULL to 0 in any arithmetic calculation (+, -, *, /).
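A minimal sketch of this behavior, using Python's None to stand in for SQL NULL (the function name mirrors the guide, but the implementation is an illustration, not MicroStrategy engine code):

```python
# NULL2ZERO sketch: with Null Check enabled, NULL operands become 0;
# with it disabled, NULL propagates through the arithmetic.
def null2zero(value):
    return 0 if value is None else value

def add(a, b, null_check=True):
    if null_check:                 # Null Check enabled: NULL becomes 0
        return null2zero(a) + null2zero(b)
    if a is None or b is None:     # Null Check disabled: NULL propagates
        return None
    return a + b

print(add(None, 5))                    # 5
print(add(None, 5, null_check=False))  # None
```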

Levels at Which You Can Set This

Database instance, report, and template

Separate COUNT DISTINCT


Separate Count Distinct is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

Separate Count Distinct indicates how to handle COUNT (and other


aggregation functions) when DISTINCT is present in the SQL.

Levels at Which You Can Set This

Database instance

Smart Metric Transformation


Smart Metric Transformation is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

Due to the evaluation order used for smart metrics, compound metrics, and
transformations, creating transformation metrics to display the variance or
variance percentage of a smart metric or compound metric can return
unexpected results in some scenarios.

For definitions and examples of smart metrics, compound metrics, and


transformation metrics, see the Advanced Reporting Help.

For example, the report sample shown below includes quarterly profit
margins. Transformation metrics are included to display the last quarter's
profit margin (Last Quarter's (Profit Margin)) and the variance of
the profit margin and last quarter's profit margin (Profit Margin -
(Last Quarter's (Profit Margin))).

Since Profit Margin is a smart metric, the transformation metric that


calculates the variance displays unexpected results. For example, consider
the report row highlighted in the report example above. The profit margin for
2011 Q3 is 15.07% and the profit margin for 2011 Q2 is 14.98%. Both of
these calculations are correct. However, an incorrect value of 15.68% is
displayed as the variance.

You can modify the evaluation order to return correct variance results by
defining the Smart Metric Transformation VLDB property as True. After
making this change, the report displays the following results.

The variance is now displayed as 0.09%, which is the correct variance


calculation (15.07% - 14.98% = 0.09%).

The Smart Metric Transformation VLDB property has the following options:

l False (default): Select this option for backwards compatibility with existing
transformation metrics based on smart metrics or compound metrics.

l True: Select this option to modify the evaluation order to support


transformation metrics that calculate a variance or variance percentage,
based on the results of a smart metric or compound metric. Be aware that
to apply this functionality to derived metrics you must select this option at
the project level.

Levels at Which You Can Set This

Project and metric

Subtotal Dimensionality Use


Subtotal Dimensionality Use is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

Subtotal Dimensionality Use determines how the level of calculation is


defined for metrics that are included on reports that use dynamic
aggregation, which is an OLAP Services feature. This VLDB property has the
following options:

l Use only the grouping property of a level metric for dynamic


aggregation (default): The dimensionality, or level, of the metric is used
to define how the metric data is calculated on the report when dynamic
aggregation is also used. When selecting this option, only the grouping
option for a level metric is used to calculate metric data. For detailed
examples and information on defining the dimensionality of a metric, refer
to the documentation on level metrics provided in the Advanced Reporting
Help.

l Use only the grouping property of a level subtotal for dynamic


aggregation: The dimensionality, or level, of the metric's dynamic
aggregation function is used to define how the metric data is calculated on

the report when dynamic aggregation is also used. You can define the
level of calculation for a metric's dynamic aggregation function by creating
a subtotal, and then defining the level of calculation for that subtotal.
When selecting this option, only the grouping option for a subtotal is used
to calculate metric data. For detailed examples and information on
creating subtotals, refer to the Advanced Reporting Help.

l Use both the grouping and filtering property of a level metric for
dynamic aggregation: The dimensionality, or level, of the metric is used
to define how the metric data is calculated on the report when dynamic
aggregation is also used. When selecting this option, both the grouping
and filtering options for a level metric are used to calculate metric data.
For detailed examples and information on defining the dimensionality of a
metric, refer to the documentation on level metrics provided in the
Advanced Reporting Help.

l Use both the grouping and filtering property of a level subtotal for
dynamic aggregation: The dimensionality, or level, of the metric's
dynamic aggregation function is used to define how the metric data is
calculated on the report when dynamic aggregation is also used. You can
define the level of calculation for a metric's dynamic aggregation function
by creating a subtotal, and then defining the level of calculation for that
subtotal. When selecting this option, both the grouping and filtering
options for a subtotal are used to calculate metric data. For detailed
examples and information on creating subtotals, refer to the Advanced
Reporting Help.

Example

Consider a metric that performs a simple sum of cost data by using the
following metric definition:

Sum(Cost) {~+}

This metric is named Cost, and the syntax {~+} indicates that it calculates
data at the level of the report it is included on. Another metric is created with
the following metric definition:

Sum(Cost) {~+}

This metric also uses a subtotal for its dynamic aggregation function that
uses the following definition:

Sum(x) {~+, !Year, !Category}

Notice that the function for this subtotal includes additional level information
to perform the calculation based on the report level, Year, and Category. As
shown in the image below, this subtotal function, named Sum
(Year,Category) is applied as the metric's dynamic aggregation function.

This metric is named Cost (subtotal dimensionality). This metric along with
the simple Cost metric is displayed on the report shown below, which also
contains the attributes Year, Region, and Category.

Notice that the values for these two metrics are the same. This is because
no dynamic aggregation is being performed, and the Subtotal Dimensionality
Use VLDB property is also using the default option of Use dimensionality
from metric for dynamic aggregation. With this default behavior still applied,
the attribute Year can be removed from the grid of the report to trigger
dynamic aggregation, as shown in the report below.

The metric values are still the same because both metrics are using the level
of the metric. If the Subtotal Dimensionality Use VLDB property for the
report is modified to use the option Use dimensionality from subtotal for
dynamic aggregation, this affects the report results as shown in the report
below.

The Cost (subtotal dimensionality) metric now applies the level defined in
the subtotal function that is used as the metric's dynamic aggregation
function. This displays the same Cost value for all categories in the
Northeast region because the data is being returned as the total for all years
and categories combined.

Levels at Which You Can Set This

Database instance, report, template, and metric

Transformable AggMetric
The Transformable AggMetric VLDB property allows you to define what
metrics should be used to perform transformations on compound metrics
that use nested aggregation.

For example, you create two metrics. The first metric, referred to as Metric1,
uses an expression of Sum(Fact) {~+, Attribute+}, where Fact is a
fact in your project and Attribute is an attribute in your project used to
define the level of Metric1. The second metric, referred to as Metric2, uses
an expression of Avg(Metric1){~+}. Since both metrics use aggregation
functions, Metric2 uses nested aggregation.

Including Metric2 on a report can return incorrect results for the following
scenario:

l A transformation shortcut metric is defined on Metric2.

l Metric1 is defined at a lower level than the report level.

In this scenario, the transformation is applied to the outer metric, which in


this case is Metric2. To perform the transformation correctly, the
transformation should be applied for the inner metric, which in this case is
Metric1. To apply the transformation to Metric1 in this scenario, you can use
the Transformable AggMetric VLDB property. The options are:

l False (default): The metric uses default transformation behavior. This


option should be used for all metrics except for those metrics that are
defined for a scenario similar to Metric2 described above.

l True: The metric is defined as a metric to use to perform a transformation


when it is included in another metric through nested aggregation. This
option should be used only for metrics that are defined for a scenario
similar to Metric2 described above.

Levels at Which You Can Set This

Metric only

Transformation Role Processing


The Transformation Role Processing property is only available from the
Transformation Editor. To open the Transformation Editor, select Schema
Objects, choose Transformations, right-click a transformation in the right
pane, and select Edit.

The Transformation Role Processing property lets you choose how
transformation dates are calculated when there are multiple attributes to
transform. The example below uses the common Day, Week, and Month
schema setup, in which Week and Month are both parents of Day but are
unrelated to each other. This hierarchy is a common scenario where this
property makes a difference.

Example

You have a report with Week, Sales, and Last Year Sales on the template,
filtered by Month. The default behavior is to calculate the Last Year Sales

with the following SQL. Notice that the date transformation is done for Month
and Week.

insert into ZZT6T02D01


select a14.DAT_YYYYWW DAT_YYYYWW,
sum(a11.SALES) SALESLY
from FT1 a11
join TRANS_DAY a12
on (a11.DAT_YYYYMMDD = a12.DAT_YYYYMMDD)
join TRANS_DAY_MON a13
on (a12.DAT_YYYYMM = a13.DAT_LYM)
join TRANS_DAY_WEEK a14
on (a12.DAT_YYYYWW = a14.DAT_LYW)
where a13.DAT_YYYYMM in (200311)
group by a14.DAT_YYYYWW

The new behavior applies transformation only to the highest common child
when it is applicable to multiple attributes. The SQL is shown in the following
syntax. Notice that the date transformation is done only at the Day level,
because Day is the highest common child of Week and Month. So the days
are transformed, and then you filter for the correct Month, and then Group by
Week.

insert into ZZT6T02D01


select a12.DAT_YYYYWW DAT_YYYYWW,
sum(a11.SALES) SALESLY
from FT1 a11
join TRANS_DAY a12
on (a11.DAT_YYYYMMDD = a12.DAT_YYYYMMLYT)
where a12.DAT_YYYYMM in (200311)
group by a12.DAT_YYYYWW

Zero Check
The Zero Check VLDB property indicates how to handle division by zero. If
zero checking is enabled, the ZERO2NULL function is added, which changes
0 to NULL in the denominator of any division calculation.
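A minimal sketch of this behavior, with Python's None standing in for SQL NULL (illustrative only, not MicroStrategy engine code):

```python
# ZERO2NULL sketch: with Zero Check enabled, a 0 denominator becomes NULL,
# so the division yields NULL instead of raising a division-by-zero error.
def zero2null(value):
    return None if value == 0 else value

def divide(numerator, denominator, zero_check=True):
    if zero_check:
        denominator = zero2null(denominator)  # 0 -> NULL in the denominator
    if denominator is None:
        return None                           # NULL propagates, no error
    return numerator / denominator

print(divide(10, 0))   # None rather than a division-by-zero error
print(divide(10, 4))   # 2.5
```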

Levels at Which You Can Set This

Database instance, report, and template

Customizing SQL Statements: Pre/Post Statements


The table below summarizes the Pre/Post Statements VLDB properties.
Additional details about each property, including examples and a list of wild
cards, are available by clicking on the links in the table.

l Cleanup Post Statement: Appends a string after the final drop statement.
Possible values: User-defined. Default: NULL.

l Data Mart SQL to be Executed After Data Mart Creation: SQL statements
included after the CREATE statement used to create the data mart.
Possible values: User-defined. Default: NULL.

l Data Mart SQL to be Executed Before Inserting Data: SQL statements
included before the INSERT statement used to insert data into the data
mart. Possible values: User-defined. Default: NULL.

l Data Mart SQL to be Executed Prior to Data Mart Creation: SQL
statements included before the CREATE statement used to create the data
mart. Possible values: User-defined. Default: NULL.

l Drop Database Connection: Defines whether the database connection is
dropped after user-defined SQL is executed on the database. Possible
values: Drop database connection after running user-defined SQL; Do not
drop database connection after running user-defined SQL. Default: Drop
database connection after running user-defined SQL.

l Element Browsing Post Statement: SQL statements issued after element
browsing requests. Possible values: User-defined. Default: NULL.

l Element Browsing Pre Statement: SQL statements issued before element
browsing requests. Possible values: User-defined. Default: NULL.

l Insert Mid Statement 1-5: SQL statements issued between multiple insert
statements. The first four statements each contain a single SQL statement;
the last statement can contain multiple SQL statements concatenated by
";". Possible values: User-defined. Default: NULL.

l Insert Post Statement 1-5: SQL statements issued after create, after the
first insert, only for explicit temp table creation. The first four statements
each contain a single SQL statement; the last statement can contain
multiple SQL statements concatenated by ";". Possible values: User-
defined. Default: NULL.

l Insert Pre Statement 1-5: SQL statements issued after create, before the
first insert, only for explicit temp table creation. The first four statements
each contain a single SQL statement; the last statement can contain
multiple SQL statements concatenated by ";". Possible values: User-
defined. Default: NULL.

l Report Post Statement 1-5: SQL statements issued after report requests.
The first four statements each contain a single SQL statement; the last
statement can contain multiple SQL statements concatenated by ";".
Possible values: User-defined. Default: NULL.

l Report Pre Statement 1-5: SQL statements issued before report requests.
The first four statements each contain a single SQL statement; the last
statement can contain multiple SQL statements concatenated by ";".
Possible values: User-defined. Default: NULL.

l Table Post Statement 1-5: SQL statements issued after creating a new
table and inserting data. The first four statements each contain a single
SQL statement; the last statement can contain multiple SQL statements
concatenated by ";". Possible values: User-defined. Default: NULL.

l Table Pre Statement 1-5: SQL statements issued before creating a new
table. The first four statements each contain a single SQL statement; the
last statement can contain multiple SQL statements concatenated by ";".
Possible values: User-defined. Default: NULL.

You can insert the following syntax into strings to populate dynamic
information by the SQL Engine:

l !!! inserts column names, separated by commas (can be used in Table


Pre/Post and Insert Pre/Mid statements).

l !! inserts an exclamation (!) (can be used in Table Pre/Post and Insert


Pre/Mid statements). Note that "!!=" inserts a not equal to sign in the SQL
statement.

l ??? inserts the table name (can be used in Data Mart Insert/Pre/Post
statements, Insert Pre/Post, and Table Post statements).

l ;; inserts a semicolon (;) in Statement5 (can be used in all Pre/Post


statements). Note that a single ";" (semicolon) acts as a separator.

l !a inserts column names for attributes only (can be used in Table Pre/Post
and Insert Pre/Mid statements).

l !d inserts the date (can be used in all Pre/Post statements).

l !f inserts the report path (can be used in all Pre/Post statements except
Element Browsing). An example is: \MicroStrategy
Tutorial\Public Objects\Reports\MicroStrategy Platform
Capabilities\Ad hoc Reporting\Sorting\Yearly Sales

l !i inserts the job priority of the report which is represented as an integer


from 0 to 999 (can be used in all Pre/Post statements).

l !o inserts the report name (can be used in all Pre/Post statements).

l !u inserts the user name (can be used in all Pre/Post statements).

l !j inserts the Intelligence Server Job ID associated with the report


execution (can be used in all Pre/Post statements).

l !r inserts the report GUID, the unique identifier for the report object that is
also available in the Enterprise Manager application (can be used in all
Pre/Post statements).

l !t inserts a timestamp (can be used in all Pre/Post statements).

l !p inserts the project name with spaces omitted (can be used in all
Pre/Post statements).

l !z inserts the project GUID, the unique identifier for the project (can be
used in all Pre/Post statements).

l !s inserts the user session GUID, the unique identifier for the user's
session that is also available in the Enterprise Manager application (can
be used in all Pre/Post statements).

l The # character is a special token that is used in various patterns and is
treated differently than other characters. A single # is absorbed, and two
# characters are reduced to a single #. For example, to show three #
characters in a statement, enter six # characters. You can produce any
desired string with the right number of # characters. In this respect the #
character behaves the same as the ; character.
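As a rough illustration of how a handful of these wild cards might be expanded, the following sketch substitutes a subset of the tokens listed above. The expansion code and the sample context values (user name, report name, table, columns, date) are invented; the real SQL Engine may differ in details such as token ordering and escaping.

```python
import re

# Hedged sketch: expands a subset of the wild cards listed above.
def expand(statement, ctx):
    tokens = {
        "!!!": ", ".join(ctx["columns"]),  # column names, comma-separated
        "???": ctx["table"],               # table name
        "!o": ctx["report"],               # report name
        "!u": ctx["user"],                 # user name
        "!d": ctx["date"],                 # date
    }
    for tok in sorted(tokens, key=len, reverse=True):  # longest tokens first
        statement = statement.replace(tok, tokens[tok])
    # "##" collapses to "#"; a lone "#" is absorbed, as described above
    return re.sub(r"#(#?)", r"\1", statement)

ctx = {"columns": ["REGION_NBR", "MONTH_ID"], "table": "ZZNB00",
       "report": "Yearly Sales", "user": "jsmith", "date": "2024-09-01"}
print(expand("/* !u ran !o on !d */ lock table ??? -- ####", ctx))
```

Replacing the longer tokens first keeps `!!!` from being consumed as `!!` plus `!`, which mirrors why the guide documents the multi-character tokens separately.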

The table below shows the location of some of the most important
VLDB/DSS settings in a Structured Query Language (SQL) query structure.
If the properties in the table are set, the values replace the corresponding
tag in the query:

Tag VLDB properties (MSTR 7.x)

<1> Report PreStatement (1-5)

<2> Table PreStatement (1-5)

<3> Table Qualifier

<4> Table Descriptor

<5> Table Prefix

<6> Table Option

<7> Table Space

<8> Create PostString

<9> Pre DDL COMMIT

<10> Insert PreStatement (1-5)

<11> Insert Table Option

<12> SQL Hint

<13> Post DDL COMMIT

<14> Insert PostString

<15> Insert MidStatement (1-5)

<16> Table PostStatement (1-5)

<17> Index Qualifier

<18> Index PostString

<19> Select PostString

<20> Report PostStatement (1-5)

<21> Commit after Final Drop

<22> Cleanup PostStatement

Query Structure

<1>
<2>
CREATE <3> TABLE <4> <5><table name> <6>
(<fields' definition>)
<7>
<8>
<9>(COMMIT)
<10>
INSERT INTO <5><table name><11>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<13>(COMMIT)
<14>
<15>
<16>
CREATE <17> INDEX <index name> ON
<fields list>
<18>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<19>
<20>
DROP TABLE TABLENAME

<21>
<22>

The Commit after Final Drop property (<21>) is sent to the warehouse even
if the SQL View for the report does not show it.
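To make the slot layout concrete, here is a hedged sketch of how a few of the pre/post slots wrap the generated SQL. The slot numbers follow the tags in the query structure above; the helper function and the statement strings are invented for illustration.

```python
# Hedged sketch: a few pre/post slots wrapping a generated SQL body.
def render(slots, body):
    parts = []
    for n in (1, 2):                  # <1> Report Pre, <2> Table Pre
        parts.extend(slots.get(n, []))
    parts.append(body)                # generated CREATE/INSERT/SELECT block
    for n in (16, 20, 22):            # <16> Table Post, <20> Report Post,
        parts.extend(slots.get(n, []))  # <22> Cleanup Post
    return ";\n".join(parts)

slots = {1: ["/* report pre */"], 20: ["/* report post */"]}
print(render(slots, "SELECT REGION_NBR FROM ZZNB00"))
```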

Cleanup Post Statement


The Cleanup Post Statement property allows you to insert your own SQL
string after the final DROP statement. There are five settings, numbered 1-5.
Each text string entered in Cleanup Post Statement 1 through Cleanup Post
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Cleanup Post Statement 5,
separating each statement with a ";". The SQL Engine then breaks it into
individual statements using ";" as the separator and executes the statements
separately.
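The splitting rule for the fifth statement slot can be sketched as follows. This is a simplification (the ";;" escape for literal semicolons described earlier is not handled), and the slot contents are invented examples.

```python
# Sketch of the rule above: slots 1-4 each run as one statement, while
# slot 5 is split on ";" into separate statements.
def statements_to_run(slot_values):
    run = [s for s in slot_values[:4] if s]          # statements 1-4
    if len(slot_values) > 4 and slot_values[4]:
        run.extend(p.strip() for p in slot_values[4].split(";") if p.strip())
    return run

slots = ["UPDATE STATISTICS T1", "", "", "",
         "GRANT SELECT ON T1 TO PUBLIC; DROP TABLE SCRATCH1"]
print(statements_to_run(slots))
```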

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example the setting values are:

Cleanup Post Statement1=/* Cleanup Post Statement1 */


Create table TABLENAME
(ATTRIBUTE_COL1 VARCHAR(20),
FORM_COL2 CHAR(20),
FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)
insert into TABLENAME
select A1.COL1,
A2.COL2,
A3.COL3
from TABLE1 A1,
TABLE2 A2,
TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
insert into TABLENAME
select A1.COL1,

A2.COL2,
A3.COL3
from TABLE4 A1,
TABLE5 A2,
TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5

create index IDX_TEMP1(STORE_ID, STORE_DESC)


select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR
drop table TABLENAME
/* Cleanup Post Statement 1*/

Data Mart SQL to be Executed After Data Mart Creation


The Data mart SQL to be executed after data mart creation VLDB property
allows you to define SQL statements that are included after data mart
creation. These SQL statements are included after the CREATE statement for
the data mart table. This allows you to customize the statement used to
create data marts.

Levels at Which You Can Set This

Database instance and data mart

Data Mart SQL to be Executed Before Inserting Data


The Data mart SQL to be executed before inserting data VLDB property
allows you to define SQL statements issued before inserting data into a data
mart. These SQL statements are included before the INSERT statement for
the data mart table. This allows you to customize the statement used to
insert data into data marts.

Levels at Which You Can Set This

Database instance and data mart

Data Mart SQL to be Executed Prior to Data Mart Creation


The Data mart SQL to be executed prior to data mart creation VLDB property
allows you to define SQL statements that are included before data mart
creation. These SQL statements are included before the CREATE statement
for the data mart table. This allows you to customize the statement used to
create data marts.

Levels at Which You Can Set This

Database instance and data mart

Drop Database Connection


The Drop Database Connection VLDB property allows you to define whether
the database connection is dropped after user-defined SQL is executed on
the database. This VLDB property has the following options:

l Drop database connection after running user-defined SQL (default):


The database connection is dropped after user-defined SQL is executed
on the database. This ensures that database connections are not left open
and unused for extended periods of time after user-defined SQL is
executed.

l Do not drop database connection after running user-defined SQL:


The database connection remains open after user-defined SQL is
executed on the database. This can keep the database connection open
for additional user-defined SQL statements to be executed.

Levels at Which You Can Set This

Database instance, report, and template

Element Browsing Post Statement


The Element Browsing Post Statement VLDB property is used to insert
custom SQL statements after the completion of all element browsing

requests. For example, an element browsing request occurs when a user


expands an attribute to view its attribute elements.

Including SQL statements after the completion of element browsing requests


can allow you to define the priority of element browsing requests to be
higher or lower than the priority for report requests. You can also include
any other SQL statements required to better support element browsing
requests. You can include multiple statements to be executed. Each
statement must be separated by a semicolon (;). The SQL Engine then
executes the statements separately.

If you modify the Element Browsing Post Statement VLDB property, the
statements defined in the Report Post Statement VLDB property are not
used for element browsing requests. The priority of report requests and
other post-report SQL statements can be defined using the Report Post
Statement VLDB properties.

Levels at Which You Can Set This

Database instance only

Element Browsing Pre Statement


The Element Browsing Pre Statement VLDB property is used to insert
custom SQL statements at the beginning of all element browsing requests.
For example, an element browsing request occurs when a user expands an
attribute to view its attribute elements.

Including SQL statements prior to element browsing requests can allow you
to define the priority of element browsing requests to be higher or lower than
the priority for report requests. You can also include any other SQL
statements required to better support element browsing requests. You can
include multiple statements to be executed, separated by a semicolon (;).
The SQL Engine then executes the statements separately.

If you modify the Element Browsing Pre Statement VLDB property, the
statements defined in the Report Pre Statement VLDB property are not used
for element browsing requests. The priority of report requests and other
pre-report SQL statements can also be defined using the Report Pre
Statement VLDB properties.

Levels at Which You Can Set This

Database instance only

Insert Mid Statement


The Insert Mid Statement property is used to insert your own custom SQL
strings between the first INSERT INTO SELECT statement and subsequent
INSERT INTO SELECT statements inserting data into the same table. There
are five settings in total, numbered 1-5. Each text string entered in Insert
Mid Statement 1 through Insert Mid Statement 4 is executed separately as a
single statement. To execute more than 5 statements, you can put multiple
statements in Insert Mid Statement 5, separating each statement with a ";".
The SQL Engine then breaks it into individual statements using ";" as the
separator and executes the statements separately.

Multiple INSERT INTO SELECT statements to the same table occur in


reports involving partition tables and outer joins. The UNION Multiple Inserts
VLDB property affects this property. If the UNION Multiple Inserts VLDB
property is set to Use Union, there is only one insert into the intermediate
table. This setting is applicable when the Intermediate Table Type VLDB
property is set to Permanent or Temporary table.

Levels at Which You Can Set This

Database instance, report, and template

Examples

In the following example, the setting values are:

Insert MidStatement1=/* ??? Insert MidStatement1 */

UNION Multiple Inserts = Do Not Use UNION

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11
create table ZZTIS00H5YAPO000 (
ITEM_NBR DECIMAL(10, 0),
CLASS_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0),
XKYCGT INTEGER,
TOTALSALES FLOAT)
insert into ZZTIS00H5YAPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
/* ZZTIS00H5YAPO000 Insert MidStatement1 */
insert into ZZTIS00H5YAPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H5YAPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

UNION Multiple Inserts = Use UNION

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11

create table ZZTIS00H5YEPO000 (


ITEM_NBR DECIMAL(10, 0),
CLASS_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0),
XKYCGT INTEGER,
TOTALSALES FLOAT)
insert into ZZTIS00H5YEPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
union all
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H5YEPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

Insert Post Statement


This property is used to insert your custom SQL statements after CREATE
and after the first INSERT INTO SELECT statement for explicit temp table
creation. There are five settings, numbered 1-5. Each text string entered in
Insert Post Statement 1 through Insert Post Statement 4 is executed
separately as a single statement. To execute more than 5 statements, insert
multiple statements in Insert Post Statement 5, separating each statement
with a ";". The SQL Engine then breaks it into individual statements using ";"
as the separator and executes the statements separately.
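The ";"-splitting behavior described above can be sketched as follows. This is a simplified illustration, not MicroStrategy's actual parser, and the SQL strings in it are made up; a real implementation would also need to handle semicolons inside string literals, which this sketch ignores:

```python
def split_statements(setting_value):
    """Split one pre/post-statement setting slot on ";" into the
    individual statements that would each be executed separately."""
    return [s.strip() for s in setting_value.split(";") if s.strip()]

# Three statements packed into a single setting slot (hypothetical SQL):
slot5 = "grant select on ZZT1 to public; update statistics ZZT1; commit"
for statement in split_statements(slot5):
    print(statement)  # each statement would be sent to the database on its own
```

Because each fragment is executed separately, a trailing or doubled semicolon simply produces an empty fragment that is discarded.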

Multiple INSERT INTO SELECT statements to the same table occur in
reports involving partition tables and outer joins. The UNION Multiple Inserts
VLDB property does not affect this property, but the Table Creation Type
property does. The Table Creation Type property is applicable when the
Intermediate Table Type VLDB property is set to Permanent or Temporary
table.

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example, the setting values are:

Insert PostStatement1=/* ??? Insert PostStatement1 */

Table Creation Type= Explicit

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11
create table ZZTIS00H601PO000 (
ITEM_NBR DECIMAL(10, 0),
CLASS_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0),
XKYCGT INTEGER,
TOTALSALES FLOAT)
insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,

a11.CLASS_NBR,
a11.STORE_NBR
/* ZZTIS00H601PO000 Insert PostStatement1 */
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

Table Creation Type= Implicit

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
into ZZTIS00H60BPO000
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
insert into ZZTIS00H60BPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H60BPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)

join HARI_LOOKUP_STORE a12


on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

Insert Pre Statement


The Insert Pre Statement property is used to insert your custom SQL
statements after CREATE but before the first INSERT INTO SELECT
statement for explicit temp table creation. There are five settings, numbered
1-5. Each text string entered in Insert Pre Statement 1 through Insert Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Insert Pre Statement 5,
separating each statement with a ";". The SQL Engine then breaks it into
individual statements using ";" as the separator and executes the statements
separately.

Multiple INSERT INTO SELECT statements to the same table occur in
reports involving partition tables and outer joins. The UNION Multiple Inserts
VLDB property does not affect this property, but the Table Creation Type
property does. The Table Creation Type property is applicable when the
Intermediate Table Type VLDB property is set to Permanent or Temporary
table.

Levels at Which You Can Set This

Database instance, report, and template

Examples

In the following examples, the setting values are:

Insert PreStatement1=/* ??? Insert PreStatement1 */

Table Creation Type= Explicit

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11
create table ZZTIS00H601PO000 (
ITEM_NBR DECIMAL(10, 0),
CLASS_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0),
XKYCGT INTEGER,
TOTALSALES FLOAT)
/* ZZTIS00H601PO000 Insert PreStatement1 */
insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

Table Creation Type= Implicit

select a11.PBTNAME PBTNAME


from HARI_STORE_ITEM_PTMAP a11
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
0 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES

into ZZTIS00H60BPO000
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
insert into ZZTIS00H60BPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H60BPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR

Report Post Statement


The Report Post Statement property is used to insert custom SQL
statements after the final SELECT statement but before the DROP
statements. There are five settings, numbered 1-5. Each text string entered
in Report Post Statement 1 through Report Post Statement 4 is executed
separately as a single statement. To execute more than 5 statements, insert
multiple statements in Report Post Statement 5, separating each statement
with a ";". The SQL Engine then breaks them into individual statements
using ";" as the separator and executes the statements separately.

If you do not modify the Element Browsing Post Statement VLDB property,
the statements defined in this Report Post Statement VLDB property are
also used for element browsing requests. For example, an element browsing
request occurs when a user expands an attribute to view its attribute
elements. To define statements that apply only to element browsing
requests, see Element Browsing Post Statement.

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example, the setting values are:
Report Post Statement1=/* Report Post Statement 1*/

Create table TABLENAME


(ATTRIBUTE_COL1 VARCHAR(20),
FORM_COL2 CHAR(20),
FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)
insert into TABLENAME
select A1.COL1,
A2.COL2,
A3.COL3
from TABLE1 A1,
TABLE2 A2,
TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
insert into TABLENAME
select A1.COL1,
A2.COL2,
A3.COL3
from TABLE4 A1,
TABLE5 A2,
TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5

create index IDX_TEMP1(STORE_ID, STORE_DESC)


select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR
/* Report Post Statement 1*/
drop table TABLENAME

Report Pre Statement


The Report Pre Statement property is used to insert custom SQL statements
at the beginning of the Report SQL. There are five settings, numbered 1-5.
Each text string entered in Report Pre Statement 1 through Report Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Report Pre Statement 5,
separating each statement with a ";". The SQL Engine then breaks them into
individual statements using ";" as the separator and executes the statements
separately.

If you do not modify the Element Browsing Pre Statement VLDB property,
the statements defined in this Report Pre Statement VLDB property are also
used for element browsing requests. For example, an element browsing
request occurs when a user expands an attribute to view its attribute
elements. To define statements that apply only to element browsing
requests, see Element Browsing Pre Statement.

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example, the setting values are:

Report Pre Statement1=/* Report Pre Statement1 */


/* Report Pre Statement 1*/
Create table TABLENAME
(ATTRIBUTE_COL1 VARCHAR(20),
FORM_COL2 CHAR(20),
FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)
insert into TABLENAME
select A1.COL1,
A2.COL2,
A3.COL3
from TABLE1 A1,
TABLE2 A2,
TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
insert into TABLENAME
select A1.COL1,
A2.COL2,
A3.COL3
from TABLE4 A1,
TABLE5 A2,
TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5

create index IDX_TEMP1(STORE_ID, STORE_DESC)


select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR
drop table TABLENAME

Multi-Source Report Pre and Post Statements


Report Pre and Post SQL statements can be applied and executed against
multiple databases used as a data source for reports. This includes both in-
memory and connect live Intelligent Cubes. The Pre and Post SQL
statements do not have to be placed in a particular order in the overall query
as they operate independently from one another.

The example below shows an instance of how pre and post statements at
both the report level and database instance level are applied and executed
against multiple sources.

For examples of the syntax required for these statements, see the Report
Pre Statement and Report Post Statement sections.

Table Post Statement


The Table Post Statement property is used to insert custom SQL statements
after the CREATE TABLE and INSERT INTO statements. There are five
settings, numbered 1-5. Each text string entered in Table Post Statement 1
through Table Post Statement 4 is executed separately as a single
statement. To execute more than 5 statements, insert multiple statements in
Table Post Statement 5, separating each statement with a ";". The SQL
Engine then breaks them into individual statements using ";" as the
separator and executes the statements separately. This property is
applicable when the Intermediate Table Type VLDB property is set to
Permanent table, Temporary table, or Views. The custom SQL is applied to
every intermediate table or view.

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example, the setting values are:

Table PostStatement1=/* ??? Table PostStatement1 */


select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
into #ZZTIS00H63PMQ000
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
/* #ZZTIS00H63PMQ000 Table PostStatement 1*/
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11
join #ZZTIS00H63PMQ000 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)

join HARI_LOOKUP_DEPARTMENT a12


on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HARI_LOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Table Pre Statement


The Table Pre Statement property is used to insert custom SQL statements
before the CREATE TABLE statement. There are five settings, numbered 1-
5. Each text string entered in Table Pre Statement 1 through Table Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Table Pre Statement 5,
separating each statement with a ";". The SQL Engine then breaks them into
individual statements using ";" as the separator and executes the statements
separately. This property is applicable when the Intermediate Table Type
VLDB property is set to Permanent table, Temporary table, or Views. The
custom SQL is applied to every intermediate table or view.

Levels at Which You Can Set This

Database instance, report, and template

Example

In the following example, the setting values are:

Table PreStatement1=/* ??? Table PreStatement1 */
/*Table PreStatement 1*/
create table ZZTIS00H63RMQ000 (
DEPARTMENT_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0))
insert into ZZTIS00H63RMQ000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,

max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11
join ZZTIS00H63RMQ000 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HARI_LOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HARI_LOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Optimizing Queries
The table below summarizes the Query Optimizations VLDB properties.
Additional details about each property, including examples where
necessary, are provided in the sections following the table.

Additional Final Pass Option

Determines whether the Engine calculates an aggregation function and a join
in a single pass or in separate passes in the SQL.

Possible values:
• Final pass CAN do aggregation and join lookup tables in one pass (default)
• One additional final pass only to join lookup tables

Apply Filter Options

Indicates during which pass the report filter is applied.

Possible values:
• Apply filter only to passes touching warehouse tables (default)
• Apply filter to passes touching warehouse tables and last join pass, if it
  does a downward join from the temp table level to the template level
• Apply filter to passes touching warehouse tables and last join pass

Attribute Element Number Count Method

Controls how the total number of rows is calculated for incremental fetch.

Possible values:
• Use Count (Attribute@ID) to calculate total element number (uses count
  distinct if necessary) (default; for Tandem databases, the default is Use
  ODBC cursor)
• Use ODBC cursor to calculate total element number

Count Distinct with Partitions

Determines how distinct counts of values are retrieved from partitioned
tables.

Possible values:
• Do not select distinct elements for each partition (default)
• Select distinct elements for each partition

Custom Group Banding Count Method

Helps optimize custom group banding when using the Count Banding method.
You can choose to use the standard method that uses the Analytical Engine or
database-specific syntax, or you can choose to use case statements or temp
tables.

Possible values:
• Treat banding as normal calculation (default)
• Use standard case statement syntax
• Insert band range to database and join with metric value

Custom Group Banding Points Method

Helps optimize custom group banding when using the Points Banding method.
You can choose to use the standard method that uses the Analytical Engine or
database-specific syntax, or you can choose to use case statements or temp
tables.

Possible values:
• Treat banding as normal calculation (default)
• Use standard case statement syntax
• Insert band range to database and join with metric value

Custom Group Banding Size Method

Helps optimize custom group banding when using the Size Banding method.
You can choose to use the standard method that uses the Analytical Engine or
database-specific syntax, or you can choose to use case statements or temp
tables.

Possible values:
• Treat banding as normal calculation (default)
• Use standard case statement syntax
• Insert band range to database and join with metric value

Data Population for Intelligent Cubes

Defines if and how Intelligent Cube data is normalized to save memory
resources.

Possible values:
• Do not normalize Intelligent Cube data
• Normalize Intelligent Cube data in Intelligence Server (default)
• Normalize Intelligent Cube data in database using Intermediate Table Type
• Normalize Intelligent Cube data in database using Fallback Type
• Normalize Intelligent Cube data basing on dimensions with attribute lookup
  filtering
• Normalize Intelligent Cube data basing on dimensions with no attribute
  lookup filtering

Data Population for Reports

Defines if and how report data is normalized to save memory resources.

Possible values:
• Do not normalize report data (default)
• Normalize report data in Intelligence Server
• Normalize report data in database using Intermediate Table Type
• Normalize report data in database using Fallback Table Type
• Normalize report data basing on dimensions with attribute lookup filtering

Default Sort Behavior for Attribute Elements in Reports

Determines whether the sort order of attribute elements on reports considers
special sort order formatting defined for attributes.

Possible values:
• Sort attribute elements based on the attribute ID form for each attribute
  (default)
• Sort attribute elements based on the defined 'Report Sort' setting of all
  attribute forms for each attribute

Dimensionality Model

Determines level (dimension) replacement for non parent-child related
attributes in the same hierarchy.

Possible values:
• Use relational model (default)
• Use dimensional model

Engine Attribute Role Options

Enable or disable the Analytical Engine's ability to treat attributes defined
on the same column with the same expression as attribute roles.

Possible values:
• Enable Engine Attribute Role feature
• Disable Engine Attribute Role feature (default)

Filter Tree Optimization for Metric Qualifications

Determines if metric qualifications that are included in separate passes of
SQL are included in a single pass of SQL when possible.

Possible values:
• Enable Filter tree optimization for metric qualifications (default)
• Disable Filter tree optimization for metric qualifications

Incremental Data Transfer

Determines whether data transferred between Intelligence Server and a
database is sent in a single transfer or in multiple, incremental transfers.

Possible values:
• Enable Incremental Data Transfer
• Disable Incremental Data Transfer (default)

Maximum Parallel Queries Per Report

Determines how many queries can be executed in parallel as part of parallel
query execution support.

Possible values:
• User-defined (default: 2)

MD Partition Prequery Option

Allows you to choose how to handle prequerying the metadata partition.

Possible values:
• Use count(*) in prequery (default)
• Use constant in prequery

Multiple Data Source Support

Defines which technique to use to support multiple data sources in a project.

Possible values:
• Use MultiSource Option to access multiple data sources (default)
• Use database gateway support to access multiple data sources

OLAP Function Support

Defines whether OLAP functions support backwards compatibility or reflect
enhancements to OLAP function logic.

Possible values:
• Preserve backwards compatibility with 8.1.x and earlier (default)
• Recommended with 9.0 and later

Parallel Query Execution

Determines whether MicroStrategy attempts to execute multiple queries in
parallel to return report results faster and publish Intelligent Cubes.

Possible values:
• Disable parallel query execution (default)
• Enable parallel query execution for multiple data source reports only
• Enable parallel query execution for all reports that support it

Parallel Query Execution Improvement Estimate in SQL View

Determines whether reports and Intelligent Cubes include an estimate of the
percent of processing time that would be saved if parallel query execution
was used to run multiple queries in parallel.

Possible values:
• Disable parallel query execution improvement estimate in SQL view (default)
• Enable parallel query execution improvement estimate in SQL view

Rank Method if DB Ranking Not Used

Determines how ranking is performed.

Possible values:
• Use ODBC ranking calculation (MSTR 6 method) (default)
• Analytical engine performs rank

Remove Aggregation Method

Determines whether to keep or remove aggregations in SQL queries executed
from MicroStrategy.

Possible values:
• Remove aggregation according to key of FROM clause (default)
• Remove aggregation according to key of fact tables (old behavior)

Remove Group by Option

Determines whether Group By and aggregations are used for attributes with
the same primary key.

Possible values:
• Remove aggregation and Group By when Select level is identical to From
  level (default)
• Remove aggregation and Group By when Select level contains all
  attribute(s) in From level

Remove Repeated Tables for Outer Joins

Determines whether an optimization for outer join processing is enabled or
disabled.

Possible values:
• Disable optimization to remove repeated tables in full outer join and left
  outer join passes
• Enable optimization to remove repeated tables in full outer join and left
  outer join passes (default)

Set Operator Optimization

Allows you to use set operators in subqueries to combine multiple filter
qualifications. Set operators are only supported by certain database
platforms and with certain subquery types.

Possible values:
• Disable Set Operator Optimization (default)
• Enable Set Operator Optimization (if supported by database and [Sub Query
  Type])

SQL Global Optimization

Determines the level by which SQL queries in reports are optimized.

Possible values:
• Level 0: No optimization
• Level 1: Remove Unused and Duplicate Passes
• Level 2: Level 1 + Merge Passes with Different SELECT
• Level 3: Level 2 + Merge Passes, which only hit DB Tables, with different
  WHERE
• Level 4: Level 2 + Merge All Passes with Different WHERE (default)

Sub Query Type

Allows you to determine the type of subquery used in engine-generated SQL.

Possible values:
• WHERE EXISTS (SELECT * ...)
• WHERE EXISTS (SELECT col1, col2...)
• WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...)
  for multiple columns IN
• WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...)
• Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated
  subquery (default)
• WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1,
  col2 ...) for multiple columns IN
• Use Temporary Table, falling back to IN (SELECT COL) for correlated
  subquery

Transformation Formula Optimization

Defines whether to attempt to improve performance of reports that use
expression-based transformations.

Possible values:
• Always join with transformation table to perform transformation
• Use transformation formula instead of join with transformation table when
  possible (default)

Unrelated Filter Options

Determines whether the Analytical Engine should keep or remove the unrelated
filter.

Possible values:
• Remove unrelated filter (default)
• Keep unrelated filter
• Keep unrelated filter and put condition from unrelated attributes in one
  subquery group

Unrelated Filter Options for Nested Metrics

Determines whether the Analytical Engine should keep or remove the unrelated
filters when using nested metrics.

Possible values:
• Use the 8.1.x behavior (default)
• Use the 9.0.x behavior

WHERE Clause Driving Table

Determines the table used for qualifications in the WHERE clause.

Possible values:
• Use lookup table
• Use fact table (default)

Additional Final Pass Option


Additional Final Pass Option is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Additional Final Pass Option determines whether the Engine calculates
an aggregation function and a join in a single pass or in separate passes in
the SQL.

Levels at Which You Can Set This

Report, template, and database instance

It is recommended that you use this property on reports. You must update
the metadata to see the property populated in the metadata.

Example

The following SQL example was created using SQL Server metadata and
warehouse.

Consider the following structure of lookup and fact tables:

• LU_Emp_Mgr has 4 columns: Emp_ID, Emp_Desc, Mgr_ID, and Mgr_Desc

  In this structure, Emp_ID is the primary key of the LU_Emp_Mgr table

• LU_Dept has 2 columns: Dept_ID and Dept_Desc

  In this structure, Dept_ID is the primary key of the LU_Dept table

• Fact table Emp_Dept_Salary has 3 columns: Emp_ID, Dept_ID, and the fact
  Salary

From the above warehouse structure, define the following schema objects:

• Attribute Employee with 2 forms: Employee@ID (defined on column Emp_ID)
  and Employee@Desc (defined on column Emp_Desc)

• Attribute Manager with 2 forms: Manager@ID (defined on column Mgr_ID) and
  Manager@Desc (defined on column Mgr_Desc)

• Attribute Department with 2 forms: Department@ID (defined on column
  Dept_ID) and Department@Desc (defined on column Dept_Desc)

• Fact Fact_Salary, which is defined on the Salary column

The Manager attribute is defined as the parent of the Employee attribute
via the LU_Emp_Mgr table. This is a common practice in a star schema.

Create two metrics that are defined as:

• Salary_Dept = Sum(Fact_Salary){~+, Department+}

• Salary = Avg(Salary_Dept){~+}

In a report called Employee_Salary, put the Salary metric on a template with
the Manager attribute. In this example, the Employee_Salary report
generates the following SQL:

1 Pass0
2 select a12.Mgr_Id Mgr_Id,
3 a11.Dept_Id Dept_Id,
4 sum(a11.Salary) WJXBFS1
5 into #ZZTUW0200LXMD000
6 from dbo.Emp_Dept_Salary a11
7 join dbo.Emp_Mgr a12
8 on (a11.Emp_Id = a12.Emp_Id)
9 group by a12.Mgr_Id,
10 a11.Dept_Id
11 Pass1
12 select pa1.Mgr_Id Mgr_Id,
13 max(a11.Mgr_Desc) Mgr_Desc,
14 avg(pa1.WJXBFS1) WJXBFS1
15 from #ZZTUW0200LXMD000 pa1
16 join dbo.Emp_Mgr a11
17 on (pa1.Mgr_Id = a11.Mgr_Id)
18 group by pa1.Mgr_Id
19 Pass2
20 drop table #ZZTUW0200LXMD000

The problem in the SQL pass above, in lines 14-17, is that the join condition
and the aggregation function are in a single pass. The SQL joins the
ZZTUW0200LXMD000 table to the Emp_Mgr table on column Mgr_ID, but
Mgr_ID is not the primary key of the LU_Emp_Mgr table. Therefore, there
are many rows in the LU_Emp_Mgr table with the same Mgr_ID. This results
in a repeated data problem.

Clearly, if the aggregation and the join do not operate on the same table,
this problem does not occur.
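The row multiplication behind this repeated data problem can be illustrated with a toy sketch in plain Python (not MicroStrategy code; the table contents and numbers are made up):

```python
# Dept-level salary sums per manager, as Pass0 would produce them:
# (Mgr_Id, Dept_Id, salary_sum)
temp = [(1, 10, 100.0), (1, 20, 200.0)]

# LU_Emp_Mgr rows as (Emp_Id, Mgr_Id): Mgr_Id is NOT the primary key,
# so manager 1 appears once per employee reporting to them.
lookup = [(501, 1), (502, 1), (503, 1)]

# Join and aggregate in one pass: every temp row matches 3 lookup rows.
joined = [t for t in temp for e in lookup if t[0] == e[1]]
print(len(joined))                # 6 rows instead of 2 -- repeated data
print(sum(r[2] for r in joined))  # 900.0, not the true total of 300.0

# Aggregating in its own pass first (what the "One additional final pass
# only to join lookup tables" option forces) works on the 2 original rows,
# so nothing is repeated:
avg_salary = sum(t[2] for t in temp) / len(temp)
print(avg_salary)                 # 150.0
```

Aggregates such as sum and count computed over the joined rows are inflated by the duplication factor; computing the aggregation in a separate pass and joining to the lookup table afterwards avoids the repeated rows entirely.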

To resolve this problem, select the option One additional final pass only
to join lookup tables in the VLDB Properties Editor. With this option
selected, the report, when executed, generates the following SQL:

1 Pass0
2 select a12.Mgr_Id Mgr_Id,
3 a11.Dept_Id Dept_Id,
4 sum(a11.Salary) WJXBFS1
5 into #ZZTUW01006IMD000
6 from dbo.Emp_Dept_Salary a11

7 join dbo.Emp_Mgr a12
8 on (a11.Emp_Id = a12.Emp_Id)
9 group by a12.Mgr_Id,
10 a11.Dept_Id
11 Pass1
12 select pa1.Mgr_Id Mgr_Id,
13 avg(pa1.WJXBFS1) WJXBFS1
14 into #ZZTUW01006IEA001
15 from #ZZTUW01006IMD000 pa1
16 group by pa1.Mgr_Id
17 Pass2
18 select distinct pa2.Mgr_Id Mgr_Id,
19 a11.Mgr_Desc Mgr_Desc,
20 pa2.WJXBFS1 WJXBFS1
21 from #ZZTUW01006IEA001 pa2
22 join dbo.Emp_Mgr a11
23 on (pa2.Mgr_Id = a11.Mgr_Id)
24 Pass3
25 drop table #ZZTUW01006IMD000
26 Pass4
27 drop table #ZZTUW01006IEA001

In this SQL, lines 12-13 and 21-23 show that the Engine calculates the
aggregation function (the Average function) in one pass and performs the
join operation in a separate pass.
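The effect of separating the two passes can be reproduced in miniature. The sketch below uses SQLite and a hypothetical two-employee dataset (not the actual generated SQL or Tutorial schema); it shows how joining an intermediate result to a lookup table on a non-key column duplicates rows, so that a duplication-sensitive aggregation such as Sum is inflated, while aggregating first and joining the lookup in a separate pass returns the correct total.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- One manager (M1) with two employees: Emp_Mgr has two rows per Mgr_Id,
-- so Mgr_Id is not a primary key of this lookup table.
CREATE TABLE Emp_Mgr (Emp_Id INTEGER, Mgr_Id TEXT, Mgr_Desc TEXT);
INSERT INTO Emp_Mgr VALUES (1, 'M1', 'Alice'), (2, 'M1', 'Alice');
-- Intermediate pass result: totals per manager/department.
CREATE TABLE Tmp (Mgr_Id TEXT, Dept_Id INTEGER, WJXBFS1 REAL);
INSERT INTO Tmp VALUES ('M1', 10, 100.0), ('M1', 20, 200.0);
""")

# Join and aggregate in one pass: each Tmp row matches BOTH Emp_Mgr rows
# for M1, so the totals are counted twice before aggregating.
inflated = con.execute("""
    SELECT sum(t.WJXBFS1) FROM Tmp t
    JOIN Emp_Mgr m ON t.Mgr_Id = m.Mgr_Id
    GROUP BY t.Mgr_Id
""").fetchone()[0]

# Aggregate first, then join the lookup table in a separate pass.
correct = con.execute("""
    SELECT s.total, m.Mgr_Desc
    FROM (SELECT Mgr_Id, sum(WJXBFS1) AS total
          FROM Tmp GROUP BY Mgr_Id) s
    JOIN (SELECT DISTINCT Mgr_Id, Mgr_Desc FROM Emp_Mgr) m
      ON s.Mgr_Id = m.Mgr_Id
""").fetchone()[0]

print(inflated, correct)   # 600.0 vs 300.0
```

The correct total is 300.0; the single-pass version doubles it because the duplication factor equals the number of employees per manager.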

Apply Filter Options


The Apply Filter property has three settings. The common element of all
three settings is that report filters must be applied whenever a warehouse
table is accessed. The settings are:

l Apply filter only to passes touching warehouse tables (default):
Applies the filter only to SQL passes that touch warehouse tables, but not
to other passes. This option works in most situations.

l Apply filter to passes touching warehouse tables and last join pass,
if it does a downward join from the temporary table level to the
template level: The filter is applied in the final pass if it is a downward
join. For example, you have Store, Region Sales, and Region Cost on the
report, with the filter "store=1." The intermediate passes calculate the total
sales and cost for Region 1 (to which Store 1 belongs). In the final pass, a


downward join is done from the Region level to the Store level, using the
relationship table LOOKUP_STORE. If the "store = 1" filter is not applied
in this pass, all stores that belong to Region 1 are included on the
report. However, you usually expect to see only Store 1 when you use the
filter "store=1." So, in this situation, choose this option to make sure
the filter is applied in the final pass.

l Apply filter to passes touching warehouse tables and last join pass:
The filter is always applied in the final pass, even if that pass is not a
downward join. This option should be used for special types of data
modeling. For example, you have Region, Store Sales, and Store Cost on
the report, with the filter "Year=2002." This looks like a normal report,
and the final pass joins from the Store level to the Region level. But the
schema is abnormal: certain stores do not always belong to the same region,
perhaps due to rezoning. For example, Store 1 belongs to Region 1 in 2002
and to Region 2 in 2003. To solve this problem, put an additional column,
Year, in LOOKUP_STORE so that you have the following data.

Store   Region   Year
1       1        2002
1       2        2003
...

Apply the filter Year=2002 to your report. This filter must be applied in the
final pass to find the correct store-region relationship, even though the
final pass is a normal join instead of a downward join.
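Both filter scenarios above can be reproduced in miniature with SQLite. The table and column names below are hypothetical stand-ins (not the actual SQL the Engine generates); the sketch shows what appears on the report when the filter is, and is not, re-applied in the last join pass.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Downward-join case: region-level totals joined back down to stores.
CREATE TABLE LOOKUP_STORE (Store INTEGER, Region INTEGER);
INSERT INTO LOOKUP_STORE VALUES (1, 1), (2, 1), (3, 2);
CREATE TABLE RegionTmp (Region INTEGER, Sales REAL);  -- totals for Region 1
INSERT INTO RegionTmp VALUES (1, 500.0);
""")

# Without re-applying "store = 1" in the final downward join,
# every store of Region 1 appears on the report.
unfiltered = con.execute("""
    SELECT l.Store FROM RegionTmp r
    JOIN LOOKUP_STORE l ON r.Region = l.Region ORDER BY l.Store
""").fetchall()

# Applying the filter in the last pass keeps only Store 1.
filtered = con.execute("""
    SELECT l.Store FROM RegionTmp r
    JOIN LOOKUP_STORE l ON r.Region = l.Region
    WHERE l.Store = 1
""").fetchall()

con.executescript("""
-- Year-dependent case: the store-region relationship varies by year.
CREATE TABLE LOOKUP_STORE_YR (Store INTEGER, Region INTEGER, Year INTEGER);
INSERT INTO LOOKUP_STORE_YR VALUES (1, 1, 2002), (1, 2, 2003);
CREATE TABLE StoreTmp (Store INTEGER, Sales REAL);  -- store-level totals
INSERT INTO StoreTmp VALUES (1, 900.0);
""")

# Without "Year = 2002" in the final (non-downward) join, Store 1 is
# attached to both regions, duplicating the row.
rows = con.execute("""
    SELECT l.Region, s.Sales FROM StoreTmp s
    JOIN LOOKUP_STORE_YR l ON s.Store = l.Store ORDER BY l.Region
""").fetchall()

rows_2002 = con.execute("""
    SELECT l.Region, s.Sales FROM StoreTmp s
    JOIN LOOKUP_STORE_YR l ON s.Store = l.Store
    WHERE l.Year = 2002
""").fetchall()

print(unfiltered, filtered, rows, rows_2002)
```

In the first case re-applying the filter removes Store 2; in the second it removes the spurious Region 2 row, which is why the last two settings of this property exist.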

Interaction with Other VLDB Properties

Two other VLDB properties, Downward Outer Join Option and Preserve All
Lookup Table Elements, have an option to apply the filter. If you choose
those options, then the filter is applied accordingly, regardless of what the
value of Apply Filter Option is.


Levels at Which You Can Set This

Database instance, report, and template

Attribute Element Number Count Method


Attribute Element Number Count Method is an advanced property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The incremental fetch feature uses a SELECT COUNT DISTINCT query,
introduced in MicroStrategy version 7.1.6. In some cases, this query can be
costly for the data warehouse and make the element browse time longer
than necessary for certain production environments.

To alleviate this problem, the Attribute Element Number Count Method
property controls how the total number of rows is calculated. You have the
following options:

l Use Count(Attribute@ID) to calculate total element number (uses
count distinct if necessary) (default): In this case, the database
determines the total number of rows.

l Use ODBC cursor to calculate the total element number: This setting
causes Intelligence Server to determine the total number of rows by
looping through the table after the initial SELECT pass.

The difference between the two approaches is whether the database or
Intelligence Server determines the total number of records. MicroStrategy
recommends using the "Use ODBC cursor..." option (having Intelligence
Server determine the total number of records) if you have a heavily taxed
data warehouse or if the SELECT COUNT DISTINCT query itself introduces
contention in the database. Having Intelligence Server determine the total
number of rows results in more traffic between Intelligence Server and the
database.
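The tradeoff can be sketched as follows. The snippet uses SQLite in place of a warehouse and a plain Python loop in place of the ODBC cursor; the table name is hypothetical. Both approaches reach the same count, differing only in where the counting work happens.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LU_ITEM (Item_Id INTEGER)")
# 200 rows but only 50 distinct IDs.
con.executemany("INSERT INTO LU_ITEM VALUES (?)",
                [(i % 50,) for i in range(200)])

# Option 1: the database computes the total element count itself.
db_count = con.execute(
    "SELECT count(DISTINCT Item_Id) FROM LU_ITEM").fetchone()[0]

# Option 2: the client issues the element SELECT and counts rows by
# looping over the cursor -- more rows travel to the client, but the
# warehouse is spared the COUNT DISTINCT workload.
cursor = con.execute("SELECT DISTINCT Item_Id FROM LU_ITEM")
client_count = sum(1 for _ in cursor)

print(db_count, client_count)   # both 50
```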


For Tandem databases, the default is Use ODBC Cursor to calculate the
total element number.

Levels at Which You Can Set This

Database instance only

Count Distinct with Partitions


Count Distinct with Partitions is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

This property can help improve the performance of queries performed on
multiple partitioned tables that return a distinct count of values. A
distinct count of values allows you to return information such as how many
distinct types of items were sold on a given day. You have the following
options:

l Do not select distinct elements for each partition (default): To
return a distinct count of values from multiple partition tables, the
tables are first combined together as one large result table, and then the
count distinct calculation is performed. While this returns the proper
results, combining multiple tables into one table to perform the count
distinct calculation can be a resource-intensive query.

l Select distinct elements for each partition: To return a distinct
count of values from multiple partitioned tables, the size of each
partition table is first reduced by returning only distinct values. These
smaller tables are then combined and a count distinct calculation is
performed. This can improve performance by reducing the size of the
partition tables before they are combined for the final count distinct
calculation.
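The two query shapes can be compared directly. The sketch below uses SQLite and two hypothetical yearly partition tables; the optimized form reduces each partition with DISTINCT before the UNION ALL, and both forms return the same count.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SALES_2023 (Item INTEGER);
CREATE TABLE SALES_2024 (Item INTEGER);
INSERT INTO SALES_2023 VALUES (1), (1), (2);
INSERT INTO SALES_2024 VALUES (2), (3), (3);
""")

# Default: combine the partitions first, then count distinct once over
# the large combined result.
combined = con.execute("""
    SELECT count(DISTINCT Item) FROM
      (SELECT Item FROM SALES_2023
       UNION ALL
       SELECT Item FROM SALES_2024)
""").fetchone()[0]

# Optimized: reduce each partition to its distinct values first, then
# combine the (smaller) intermediate results and count distinct.
reduced = con.execute("""
    SELECT count(DISTINCT Item) FROM
      (SELECT DISTINCT Item FROM SALES_2023
       UNION ALL
       SELECT DISTINCT Item FROM SALES_2024)
""").fetchone()[0]

print(combined, reduced)   # both 3
```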

Levels at Which You Can Set This

Metric, report, template, and database instance


Custom Group Banding Count Method


Custom Group Banding Count Method is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Custom Group Banding Count Method helps optimize custom group
banding when using the Count Banding method. You have the following
options:

l Treat banding as normal calculation (default): Select this option to
allow the MicroStrategy Analytical Engine to perform the custom group
banding.

l Use standard case statement syntax: Select this option to utilize case
statements within your database to perform the custom group banding.

l Insert band range to database and join with metric value: Select this
option to use temporary tables to perform the custom group banding.
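Whichever option is chosen, the band assignment itself is the same: the Count method divides the value range into a fixed number of equal-width bands. A minimal sketch of that calculation (a hypothetical helper, mirroring the ranges that appear in the CASE-statement SQL example later in this section):

```python
def band_by_count(value, lo, hi, n_bands):
    """Assign value to one of n_bands equal-width bands over [lo, hi]."""
    if value < lo or value > hi:
        return None  # outside the banded range
    width = (hi - lo) / n_bands
    # The top boundary belongs to the last band, as in "<= hi then n".
    return min(int((value - lo) / width) + 1, n_bands)

# With lo=1, hi=1000, n_bands=10 the band edges are 100.9, 200.8, ...
print(band_by_count(1, 1, 1000, 10),     # 1
      band_by_count(150, 1, 1000, 10),   # 2
      band_by_count(1000, 1, 1000, 10))  # 10
```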

Levels at Which You Can Set This

Database instance, report, and template

Examples

The following SQL examples were created in the MicroStrategy Tutorial
project. The report contains a Custom Group "Customer Value Banding" that
uses the Count method and the Revenue metric. The SQL for each of the three
settings for this property is presented below. All three options start with
the same six SQL passes; the remaining passes differ depending on the
Custom Group Banding Count Method setting selected.

create table ZZMD00 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)
insert into ZZMD00
select a11.CUSTOMER_ID AS CUSTOMER_ID, a11.TOT_DOLLAR_SALES
as WJXBFS1
from CUSTOMER_SLS a11


create table ZZMD01 (WJXBFS1 DOUBLE)
insert into ZZMD01
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11
select pa1.CUSTOMER_ID AS CUSTOMER_ID,
(pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD00 pa1, ZZMD01 pa2
create table ZZMQ02 (CUSTOMER_ID SHORT, DA57 LONG)
[Placeholder for an analytical SQL]
insert into ZZMQ02 values (DummyInsertValue)

Treat banding as normal calculation (default)

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02

Use standard case statement syntax

create table ZZOP03 (CUSTOMER_ID SHORT, DA57 LONG)
insert into ZZOP03
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
(case
when (pa3.WJXBFS1 >= 1 and pa3.WJXBFS1 < 100.9)
then 1
when (pa3.WJXBFS1 >= 100.9 and pa3.WJXBFS1 < 200.8)
then 2
when (pa3.WJXBFS1 >= 200.8 and pa3.WJXBFS1 < 300.7)
then 3
when (pa3.WJXBFS1 >= 300.7 and pa3.WJXBFS1 < 400.6)
then 4
when (pa3.WJXBFS1 >= 400.6 and pa3.WJXBFS1 < 500.5)
then 5
when (pa3.WJXBFS1 >= 500.5 and pa3.WJXBFS1 < 600.4)
then 6
when (pa3.WJXBFS1 >= 600.4 and pa3.WJXBFS1 < 700.3)
then 7
when (pa3.WJXBFS1 >= 700.3 and pa3.WJXBFS1 < 800.2)
then 8
when (pa3.WJXBFS1 >= 800.2 and pa3.WJXBFS1 < 900.1)
then 9
when (pa3.WJXBFS1 >= 900.1 and pa3.WJXBFS1 <= 1000)
then 10


end) as DA57
from ZZMQ02 pa3
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03

Insert band range to database and join with metric value

create table ZZOP03 (BandNo LONG, BandStart DOUBLE,
BandEnd DOUBLE)
insert into ZZOP03 values (1, 1, 100.9)
[Insertions for other bands]
create table ZZOP04 (CUSTOMER_ID SHORT, DA57 LONG)
insert into ZZOP04
select pa3.CUSTOMER_ID AS CUSTOMER_ID, pa4.BandNo as DA57
from ZZMQ02 pa3, ZZOP03 pa4
where ((pa3.WJXBFS1 >= pa4.BandStart
and pa3.WJXBFS1 < pa4.BandEnd)
or (pa3.WJXBFS1 = pa4.BandEnd
and pa4.BandNo = 10))
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
drop table ZZOP04

Custom Group Banding Points Method


Custom Group Banding Points Method is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.


The Custom Group Banding Points Method helps optimize custom group
banding when using the Points Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific
syntax, or you can choose to use case statements or temp tables.

Levels at Which You Can Set This

Database instance, report, and template

Examples

The following SQL examples were created in the MicroStrategy Tutorial
project. The report contains a Custom Group "Customer Value Banding" that
uses the Points method and the Revenue metric. The SQL for each of the
three settings for this property is presented below. All three options
start with the same six SQL passes; the remaining passes differ depending
on the Custom Group Banding Points Method setting selected.

create table ZZMD00 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)
insert into ZZMD00
select a11.CUSTOMER_ID AS CUSTOMER_ID, a11.TOT_DOLLAR_SALES
as WJXBFS1
from CUSTOMER_SLS a11
create table ZZMD01 (WJXBFS1 DOUBLE)
insert into ZZMD01
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11
select pa1.CUSTOMER_ID AS CUSTOMER_ID,
(pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD00 pa1, ZZMD01 pa2
create table ZZMQ02 (CUSTOMER_ID SHORT, DA57 LONG)
[Placeholder for an analytical SQL]
insert into ZZMQ02 values (DummyInsertValue)

Treat banding as normal calculation (default)

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12


where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02

Use standard case statement syntax

create table ZZOP03 (CUSTOMER_ID SHORT, DA57 LONG)
insert into ZZOP03
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
(case
when (pa3.WJXBFS1 >= 1 and pa3.WJXBFS1 < 2) then 1
when (pa3.WJXBFS1 >= 2 and pa3.WJXBFS1 <= 3) then 2
end) as DA57
from ZZMQ02 pa3
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03

Insert band range to database and join with metric value

create table ZZOP03 (BandNo LONG, BandStart DOUBLE,
BandEnd DOUBLE)
insert into ZZOP03 values (1, 1, 2)
[Insertions for other bands]
create table ZZOP04 (CUSTOMER_ID SHORT, DA57 LONG)
insert into ZZOP04
select pa3.CUSTOMER_ID AS CUSTOMER_ID, pa4.BandNo as DA57
from ZZMQ02 pa3, ZZOP03 pa4
where ((pa3.WJXBFS1 >= pa4.BandStart
and pa3.WJXBFS1 < pa4.BandEnd)
or (pa3.WJXBFS1 = pa4.BandEnd
and pa4.BandNo = 2))
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
drop table ZZOP04


Custom Group Banding Size Method


Custom Group Banding Size Method is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Custom Group Banding Size Method helps optimize custom group
banding when using the Size Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific
syntax, or you can choose to use case statements or temp tables.

Levels at Which You Can Set This

Database instance, report, and template

Examples

The following SQL examples were created in the MicroStrategy Tutorial
project. The report contains a Custom Group "Customer Value Banding" that
uses the Size method and the Revenue metric. The SQL for each of the three
settings for this property is presented below. All three options start
with the same six SQL passes; the remaining passes differ depending on the
Custom Group Banding Size Method setting selected.

create table ZZMD000 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)
insert into ZZMD000
select a11.CUSTOMER_ID AS CUSTOMER_ID, a11.TOT_DOLLAR_SALES
as WJXBFS1
from CUSTOMER_SLS a11
create table ZZMD001 (WJXBFS1 DOUBLE)
insert into ZZMD001
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11
select pa1.CUSTOMER_ID AS CUSTOMER_ID,
(pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD000 pa1, ZZMD001 pa2
create table ZZMQ002 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)
[Placeholder for an Analytical SQL]
insert into ZZMQ002 values (DummyInsertValue)

Treat banding as normal calculation


select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ002 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ002 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD000
drop table ZZMD001
drop table ZZMQ002

Use standard CASE statement syntax

create table ZZOP003 (CUSTOMER_ID SHORT, DA57 LONG)
insert into ZZOP003
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
(case
when (pa3.WJXBFS1 >= 0 and pa3.WJXBFS1 < .2) then 1
when (pa3.WJXBFS1 >= .2 and pa3.WJXBFS1 < .4) then 2
when (pa3.WJXBFS1 >= .4 and pa3.WJXBFS1 < .6) then 3
when (pa3.WJXBFS1 >= .6 and pa3.WJXBFS1 < .8) then 4
when (pa3.WJXBFS1 >= .8 and pa3.WJXBFS1 <= 1) then 5
end) as DA57
from ZZMQ002 pa3
drop table ZZMD000
drop table ZZMD001
drop table ZZMQ002
drop table ZZOP003

Insert band range to database and join with metric value

create table ZZOP003 (BandNo LONG, BandStart DOUBLE,
BandEnd DOUBLE)
insert into ZZOP003 values (1, 0, .2)
[Insertions for other bands]
create table ZZOP004 (
CUSTOMER_ID SHORT,
DA57 LONG)
insert into ZZOP004
select pa3.CUSTOMER_ID AS CUSTOMER_ID, pa4.BandNo as DA57
from ZZMQ002 pa3, ZZOP003 pa4
where ((pa3.WJXBFS1 >= pa4.BandStart
and pa3.WJXBFS1 < pa4.BandEnd)
or (pa3.WJXBFS1 = pa4.BandEnd
and pa4.BandNo = 5))
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP004 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57,sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP004 a12


where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD000
drop table ZZMD001
drop table ZZMQ002
drop table ZZOP003
drop table ZZOP004

Data Population for Intelligent Cubes


The Data population for Intelligent Cubes VLDB property allows you to
define if and how Intelligent Cube data is normalized to save memory
resources.

When an Intelligent Cube is published, the description information for the
attributes (all data mapped to non-ID attribute forms) included on the
includes the attributes Region and Store, with each region having one or
more stores. Without performing normalization, the description information
for the Region attribute would be repeated for every store. If the South
region included five stores, then the information for South would be repeated
five times.
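This duplication, and what normalization saves, can be sketched in a few lines. The snippet below is a toy illustration (not Intelligence Server internals): the denormalized result repeats the "South" description for all five stores, while the normalized form stores each description once in a lookup and keeps only a small integer ID per row.

```python
# Denormalized result rows: (region description, store, sales).
denormalized = [("South", f"Store {i}", 100.0) for i in range(1, 6)]

# Normalized form: descriptions stored once, rows carry an ID instead.
region_lookup = {}
normalized = []
for region, store, sales in denormalized:
    region_id = region_lookup.setdefault(region, len(region_lookup) + 1)
    normalized.append((region_id, store, sales))

copies_before = sum(1 for r in denormalized if r[0] == "South")
copies_after = list(region_lookup).count("South")
print(copies_before, copies_after)   # 5 vs 1
```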

You can avoid this duplication of data by normalizing the Intelligent Cube
data. In this scenario, the South region description information would only
be stored once even though the region contains five stores. While this saves
memory resources, the act of normalization requires some processing time.
This VLDB property provides the following options to determine if and how
Intelligent Cube data is normalized:

l Do not normalize Intelligent Cube data: Intelligent Cube data is not
normalized. The memory resources required for the Intelligent Cube may
be far greater than if one of the other normalization options is performed.
This option is best suited for troubleshooting purposes only.

l Normalize Intelligent Cube data in Intelligence Server (default):
Intelligence Server performs the Intelligent Cube data normalization. This

typically processes the normalization faster than the other normalization
options, but also requires memory resources of Intelligence Server.

This is a good option if you publish your Intelligent Cubes at times when
Intelligence Server use is low. Normalization can then be performed
without affecting your user community. You can use schedules to support
this strategy. For information on using schedules to publish Intelligent
Cubes, see the In-memory Analytics Help .

l The other options available for Intelligent Cube normalization all perform
the normalization within the database. Therefore, these are all good
options if Intelligent Cubes are published when Intelligence Server is in
use by the user community, or any time when the memory resources of
Intelligence Server must be conserved.

You can see improved performance with the database normalization
techniques if the Intelligent Cube is retrieving a large ratio of repeating
data. However, normalizing data within the database is typically slower
than normalizing the data in Intelligence Server. Each database
normalization technique is described below:

l Normalize Intelligent Cube data in database using Intermediate
Table Type: This option is no longer available. If you upgraded a project
from version 9.0.0 and this option was in use, this option is still used
until you manually select a different option. Once you select a different
option, you cannot revert to the behavior for this option.

If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
Intelligent Cube normalization technique. If the user account for the data
warehouse has permissions to create tables, switch to the option
Normalize Intelligent Cube data in the database. This option is
described below. If the user account does not have permissions to
create tables, switch to the option Normalize Intelligent Cube data in
Intelligence Server.


l Normalize Intelligent Cube data in the database: This database
normalization is a good option if attribute data and fact data are stored
in the same table.

To use this option, the user account for the database must have
permissions to create tables.

l Normalize Intelligent Cube data in the database using relationship
tables: This database normalization is a good option if attribute data
and fact data are stored in separate tables.

To use this option, the user account for the database must have
permissions to create tables.

l Direct loading of dimensional data and filtered fact data: This
database normalization is a good option if attribute data and fact data
are stored in separate tables, and the Intelligent Cube includes the
majority of the attribute elements for each attribute it uses.

This is a resource-intensive option, and for very large Intelligent Cubes,
enabling this setting may deplete your Intelligence Server's system
memory.

To use this option, the user account for the database must have
permissions to create tables. Additionally, using this option can return
different results than the other Intelligent Cube normalization
techniques. For information on these differences, see Data Differences
when Normalizing Intelligent Cube Data Using Direct Loading, page
1818 below.

Data Differences when Normalizing Intelligent Cube Data Using Direct
Loading

The option Direct loading of dimensional data and filtered fact data can
return different results than the other Intelligent Cube normalization
techniques in certain scenarios. Some of these scenarios and the effect that


they have on using direct loading for Intelligent Cube normalization are
described below:

l There are extra rows of data in fact tables that are not available in the
attribute lookup table. In this case the VLDB property Preserve all final
pass result elements (see Relating Column Data with SQL: Joins, page
1684) determines how to process the data. The only difference between
direct loading and the other normalization options is that the options
Preserve all final pass result elements and Preserve all elements of
final pass result table with respect to lookup table but not
relationship table both preserve the extra rows by adding them to the
lookup table.

l There are extra rows of data in the attribute lookup tables that are not
available in the fact tables. With direct loading, these extra rows are
included. For other normalization techniques, the VLDB property Preserve
all lookup table elements (see Relating Column Data with SQL: Joins,
page 1684) determines whether or not to include these rows.

l The Intelligent Cube includes metrics that use OLAP functions. If an
Intelligent Cube includes metrics that use OLAP functions, you should use
an Intelligent Cube normalization technique other than the direct loading
technique to ensure that the data returned is accurate.

OLAP functions are functions such as RunningSum, MovingAvg, and
OLAPMax. For information about how to use OLAP functions, see the
Functions Reference.
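The first two mismatch scenarios above amount to simple set relationships between the values present in the lookup table and those present in the fact data. A toy illustration with hypothetical region values:

```python
# Values present in the attribute lookup table vs. in the fact data.
lookup_regions = {"North", "South", "East"}
fact_regions = {"North", "South", "West"}

# Scenario 1: extra fact rows not in the lookup -- with the "preserve"
# options, direct loading adds them to the lookup.
lookup_after_direct_load = lookup_regions | (fact_regions - lookup_regions)

# Scenario 2: extra lookup rows with no fact data -- direct loading
# always includes them in the Intelligent Cube.
included_without_facts = lookup_regions - fact_regions

print(sorted(lookup_after_direct_load), sorted(included_without_facts))
```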

Levels at Which You Can Set This

Database instance, report, and template

Data Population for Reports


The Data population for reports VLDB property allows you to define if and
how report data is normalized to save memory resources.


When a report is executed, the description information for the attributes (all
data mapped to non-ID attribute forms) included on the report is repeated for
every row. For example, a report includes the attributes Region and Store,
with each region having one or more stores. Without performing
normalization, the description information for the Region attribute would be
repeated for every store. If the South region included five stores, then the
information for South would be repeated five times.

You can avoid this duplication of data by normalizing the report data. In this
scenario, the South region description information would only be stored
once even though the region contains five stores. While this saves memory
resources, the act of normalization requires some processing time. This
VLDB property provides the following options to determine if and how report
data is normalized:

l Do not normalize report data (default): Report data is not normalized.
While no extra processing is required to normalize the report data, the
memory resources required for the report are larger than if normalization
was performed. However, reports commonly do not return large result sets
and thus do not suffer from performance issues related to this duplication
of data. Therefore, this option is the default for all reports.

l Normalize report data in Intelligence Server: Intelligence Server
performs the report data normalization. This typically processes the
normalization faster than the other normalization options, but also
requires memory resources of Intelligence Server. This is a good option if
report performance is the top priority.

l The other options available for report data normalization all perform the
normalization within the database. Therefore, these are all good options if
the memory resources of Intelligence Server must be conserved.

You can see improved performance with the database normalization
techniques if the report is retrieving a large ratio of repeating data.
However, normalizing data within the database is typically slower than


normalizing the data in Intelligence Server. Each database normalization
technique is described below:

l Normalize report data in database using Intermediate Table Type:
This option is no longer available. If you upgraded a project from version
9.0.0 and this option was in use, this option is still used until you
manually select a different option. Once you select a different option,
you cannot revert to the behavior for this option.

If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
report data normalization technique. If the user account for the data
warehouse has permissions to create tables, switch to the option
Normalize report data in the database. This option is described
below. If the user account does not have permissions to create tables,
switch to the option Normalize report data in Intelligence Server.

l Normalize report data in the database: This database normalization is
a good option if attribute data and fact data are stored in the same table.

To use this option, the user account for the database must have
permissions to create tables.

l Normalize report data in the database using relationship tables:
This database normalization is a good option if attribute data and fact
data are stored in separate tables.

To use this option, the user account for the database must have
permissions to create tables.

Levels at Which You Can Set This

Database instance, report, and template

Default Sort Behavior for Attribute Elements in Reports


The Default Sort Behavior for Attribute Elements in Reports VLDB property
determines whether the sort order of attribute elements on reports considers


special sort order formatting defined for attributes:

l Sort attribute elements based on the attribute ID form for each
attribute (default): Reports automatically use the attribute ID form to sort
the results of a report. This is the default behavior seen prior to defining
any sorting options for the report. If you define a report to use the default
advanced sorting for an attribute, any sorting defined for the attribute
forms of the attribute are then applied to the report. For information on
advanced sorting for reports, see the Advanced Reporting Help.

l Sort attribute elements based on the defined 'Report Sort' setting of
all attribute forms for each attribute: Reports automatically use any
sorting defined for the attribute forms of the attribute. In this scenario, no
advanced sorting needs to be defined for a report to consider any attribute
form sorting defined for the attributes on the report.

An example of where this option can be helpful is when an attribute has an
attribute form that is used solely for sorting the elements of an attribute on
a report. An attribute form like this can be required if the ID values do not
represent the order in which the attribute elements should be displayed by
default on the report, and the specifics of the sort order are not relevant
and therefore should not be displayed to report analysts. This sort order
column can be added to the attribute, defined with a specific sort order for
the attribute, and also defined to not be included as an available report
form. By defining the attribute form in this way and selecting this VLDB
property option, the attribute form is not displayed on reports, but it is still
used to automatically sort the values of the report without having to define
any advanced sorting for the report.
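The difference between the two settings can be sketched with a toy element list (hypothetical data, not the report engine itself): each element carries an ID form, a description, and a hidden sort-order form that is never shown on the report.

```python
# Toy elements: (ID form, description, hidden sort-order form).
elements = [(3, "Small", 1),
            (1, "Large", 3),
            (2, "Medium", 2)]

# Default setting: sort by the attribute ID form.
by_id = [e[1] for e in sorted(elements, key=lambda e: e[0])]

# Second setting: the hidden sort-order form drives the order, without
# the analyst ever seeing that column on the report.
by_sort_form = [e[1] for e in sorted(elements, key=lambda e: e[2])]

print(by_id, by_sort_form)
```

With the default setting the IDs happen to put "Large" first; with the sort-order form the elements come out Small, Medium, Large as the modeler intended.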

Levels at Which You Can Set This

Database instance only


Dimensionality Model
Dimensionality Model is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Dimensionality Model property is strictly for backward compatibility with
MicroStrategy 6.x or earlier.

l Use relational model (default): For all projects, Use relational model is
the default value. With the Use relational model setting, all the
dimensionality (level) resolution is based on the relationship between
attributes.

l Use dimensional model: The Use dimensional model setting is for cases
where attribute relationship dimensionality (level) resolution is different
from dimension-based resolution. There are very few cases when the
setting needs to be changed to Use dimensional model. The following
situations may require the Use dimensional model setting:

l Metric Conditionality: You have a report with the Year attribute and the
"Top 3 Stores Dollar Sales" metric on the template and the filters Store,
Region, and Year. Therefore, the metric has a metric conditionality of
"Top 3 Stores."

If you change the default of the Remove related report filter element
option in advanced conditionality, the Use dimensional model setting
does not make a difference in the report. For more information
regarding this advanced setting, see the Metrics chapter in the
Advanced Reporting Help.

l Metric Dimensionality Resolution: MicroStrategy 7.x and later does not have the concept of a dimension, but instead has the concept of metric level. For a project upgraded from 6.x to 7.x, the dimension information is kept in the metadata. Attributes created in 7.x do not have this information. For example, you have a report that contains the Year


attribute and the metric "Dollar Sales by Geography." The metric is defined with the dimensionality of Geography, which means the metric is calculated at the level of whatever Geography attribute is on the template. In MicroStrategy 7.x and later, the metric dimensionality is ignored and therefore defaults to the report level or the level that is defined for the report.

l Analysis level calculation: For the next situation, consider a split hierarchy model in which Market and State are both parents of Store.

A report has the attributes Market and State and a Dollar Sales metric with report level
dimensionality. In MicroStrategy 7.x and later, with the Use relational
model setting, the report level (metric dimensionality level) is Market
and State. To choose the best fact table to use to produce this report,
the Analytical Engine considers both of these attributes. With the Use
dimensional model setting in MicroStrategy 7.x and later, Store is
used as the metric dimensionality level and for determining the best
fact table to use. This is because Store is the highest common
descendant between the two attributes.

Levels at Which You Can Set This

Database instance, report, and template


Engine Attribute Role Options


Engine Attribute Role Options is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Engine Attribute Role Options property allows a single physical table to be shared when defining multiple schema objects. There are two approaches for this feature:

l The first approach is a procedure called table aliasing, where you can
define multiple logical tables in the schema that point to the same physical
table, and then define different attributes and facts on these logical tables.
Table aliasing provides you a little more control and is best when
upgrading or when you have a complex schema. Table aliasing is
described in detail in the Project Design Help.

l The second approach is called Engine Attribute Role. With this approach,
rather than defining multiple logical tables, you only need to define
multiple attributes and facts on the same table. The MicroStrategy Engine
automatically detects "multiple roles" of certain attributes and splits the
table into multiple tables internally. There is a limit on the number of
tables into which a table can split. This limit is known as the Attribute Role
limit. This limit is hard coded to 128 tables. If you are a new MicroStrategy
user starting with 7i or later, it is suggested that you use the automatic
detection (Engine Attribute Role) option.

The algorithm to split the table is as follows:

l If two attributes are defined on the same column from the same table, have
the same expression, and are not related, it is implied that they are playing
different roles and must be in different tables after the split.

l If two attributes are related to each other, they must stay in the same table
after the split.

l Attributes should be kept in as many tables as possible, as long as the first rule above is not violated.


Given the diversity of data modeling in projects, the above algorithm cannot
be guaranteed to split tables correctly in all situations. Thus, this property is
added in the VLDB properties to turn the Engine Attribute Role on or off.
When the feature is turned off, the table splitting procedure is bypassed.

Correct Usage Example

Fact table FT1 contains the columns "Order_Day," "Ship_Day," and "Fact_
1." Lookup table LU_DAY has columns "Day," "Month," and "Year."
Attributes "Ship Day" and "Order Day" are defined on different columns in
FT1, but they share the same column ("Day") on LU_DAY. Also the attributes
"Ship Month" and "Order Month" share the same column "month" in LU_DAY.
The "Ship Year" and "Order Year" attributes are the same as well. During the
schema loading, the Analytical Engine detects the duplicated definitions of
attributes on column "Day," "Month," and "Year." It automatically splits LU_
DAY into two internal tables, LU_DAY(1) and LU_DAY(2), both having the
same physical table name LU_DAY. As a result, the attributes "Ship Day,"
"Ship Month," and "Ship Year" are defined on LU_DAY(1) and "Order Day,"
"Order Month," and "Order Year" are defined on LU_DAY(2). Such table
splitting allows you to display Fact_1 that is ordered last year and shipped
this year.

The SQL appears as follows:

select a1.fact_1
from FT1 a1 join LU_DAY a2 on (a1.order_day=a2.day)
join LU_DAY a3 on (a1.ship_day = a3.day)
where a2.year = 2002 and
a3.year = 2003

Note that LU_DAY appears twice in the SQL, playing different "roles." Also,
note that in this example, the Analytical Engine does not split table FT1
because "Ship Day" and "Order Day" are defined on different columns.


Incorrect Usage Example

Fact table FT1 contains columns "day" and "fact_1." "Ship Day" and "Order
Day" are defined on column "day." The Analytical Engine detects that these
two attributes are defined on the same column and therefore splits FT1 into
FT1(1) and FT1(2), with FT1(1) containing "Ship Day" and "Fact 1", and FT1(2) containing "Order Day" and "Fact 1." If you put "Ship Day" and "Order
Day" on the template, as well as a metric calculating "Fact 1," the Analytical
Engine cannot find such a fact. Although externally, FT1 contains all the
necessary attributes and facts, internally, "Fact 1" only exists on either "Ship
Day" or "Order Day," but not both. In this case, to make the report work
(although still incorrectly), you should turn OFF the Engine Attribute Role
feature.

l Because of backward compatibility, and because the Analytical Engine's automatic splitting of tables may be wrong for some data models, this property's default setting is to turn OFF the Engine Attribute Role feature.

l If this property is turned ON, and you use this feature incorrectly, the
most common error message from the Analytical Engine is

Fact not found at requested level.

l This feature is turned OFF by default starting from 7i Beta 2. Before that,
this feature was turned OFF for upgraded projects and turned ON by
default for new projects. So for some 7i beta users, if you created new metadata using the Beta 1 version of 7i, this feature may be turned on in
your metadata.

l While updating the schema, if the Engine Attribute Role feature is ON,
and if the Attribute Role limit is exceeded, you may get an error message
from the Engine. You get this error because there is a limit on the number
of tables into which a given table can be split internally. In this case, you
should turn the Engine Attribute Role feature OFF and use table aliasing
instead.


Levels at Which You Can Set This

Database instance only

Filter Tree Optimization for Metric Qualifications


Filter tree optimization for metric qualifications is an advanced property that
is hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Filter tree optimization for metric qualifications property determines whether metric qualifications that would otherwise be included in separate passes of SQL are combined into a single pass of SQL when possible. Metric qualifications can be included in separate passes of SQL in scenarios such as when the metric qualifications are used in filter definitions. Having a metric qualification at each logical level of a filter qualification can cause each metric qualification to be included in a separate pass of SQL. For example, consider a filter qualification that is structured as follows:

(AttributeQualification1 AND MetricQualification1) AND
(AttributeQualification2 AND MetricQualification2)

Since MetricQualification1 and MetricQualification2 are at separate logical levels of the filter qualification, this can cause each metric qualification to require its own pass of SQL.

You have the following options for this VLDB property:

l Enable Filter tree optimization for metric qualifications: Defines metric qualifications to be included in the same pass of SQL if possible. In the scenario described above, MetricQualification1 and MetricQualification2 are processed in the same pass of SQL. This can help to improve performance by reducing the number of SQL passes required.

l Disable Filter tree optimization for metric qualifications: Defines metric qualifications to be included in SQL passes based on their logical level in a filter qualification. In the scenario described above,


MetricQualification1 and MetricQualification2 are processed in different passes of SQL.
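As a rough sketch of the difference, the two options decide whether the metric qualifications share a pass. The table and column names below are hypothetical, and actual SQL uses MicroStrategy-generated intermediate table names:

```sql
-- Optimization disabled: each metric qualification resolves in its own pass
create table T1 as
select CUSTOMER_ID from FACT_SALES
group by CUSTOMER_ID
having sum(DOLLAR_SALES) > 1000;

create table T2 as
select CUSTOMER_ID from FACT_SALES
group by CUSTOMER_ID
having sum(UNIT_SALES) > 50;

-- Optimization enabled: both qualifications are evaluated in one pass
create table T3 as
select CUSTOMER_ID from FACT_SALES
group by CUSTOMER_ID
having sum(DOLLAR_SALES) > 1000 and sum(UNIT_SALES) > 50;
```

The single-pass form is only possible when the qualifications can be evaluated at the same level, as in this sketch where both are at the Customer level.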

Levels at Which You Can Set This

Report and project

Incremental Data Transfer


The Incremental Data Transfer VLDB property determines whether data is transferred between Intelligence Server and a data source using a single transfer of data or multiple, incremental transfers of data.
Transferring data between Intelligence Server and a data source can be
required for MicroStrategy features such as MicroStrategy MultiSource
Option, data marts, bulk export to export large reports as delimited text files,
and other Analytical Engine features.

This VLDB property has the following options:

l Enable Incremental Data Transfer: Data that is transferred between Intelligence Server and a data source is transferred using multiple, incremental transfers of data. This can improve performance in scenarios
where large amounts of data from Intelligence Server are written to a data
source. By transferring the data incrementally, some data can be written to
the data source while additional data is retrieved through Intelligence
Server.

l Disable Incremental Data Transfer (default): Data that is transferred between Intelligence Server and a data source is transferred using a single transfer of data.

Levels at Which You Can Set This

Project and report


Maximum Parallel Queries Per Report


Maximum Parallel Queries Per Report is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Maximum Parallel Queries Per Report property determines how many
queries can be executed in parallel as part of parallel query execution
support. By default, a maximum of two queries can be executed in parallel,
and you can increase this number to perform additional queries in parallel.
For data that is integrated into MicroStrategy using Data Import, the default
maximum number of queries that can be executed in parallel is five. When
determining this maximum, consider the following:

l You must enable parallel query execution to perform multiple queries in parallel. To enable it, see Parallel Query Execution, page 1833.

l The number of queries executed in parallel is also dependent on the report or Intelligent Cube that is being executed. For example, if the maximum is set to three but a report only uses two passes of SQL, then only those two queries can be performed in parallel.

l When multiple queries are executed in parallel, the actual processing of the multiple queries is performed in parallel on the database. If a database is required to do too many tasks at the same time, this can cause the response time of the database to slow down, and thus degrade the overall performance. You should take into account the databases used to retrieve data and their available resources when deciding how to restrict parallel query execution.

Levels at Which You Can Set This

Project only


MD Partition Prequery Option


The purpose of the MD Partition Prequery Option is to find out which
partition base table is used. The report filter is combined with partition base
table filters. If the intersection of both filters is not empty, then the
corresponding partition base table should be used. A SELECT statement for
each partition base table is generated, and the query result is checked to
see whether it is empty.

There are multiple ways to generate a SELECT statement that checks for the
data, but the performance of the query can differ depending on the platform.
The default value for this property is: "select count(*) …" for all database
platforms, except UDB, which uses "select distinct 1…"
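For a hypothetical partition base table PBT_2002 and a report filtered on year 2002, the two prequery forms look roughly like the following (table and column names are illustrative):

```sql
-- Default prequery for most platforms: select count(*) ...
select count(*)
from   PBT_2002
where  YEAR_ID = 2002;

-- Default for UDB: select distinct 1 ...
select distinct 1
from   PBT_2002
where  YEAR_ID = 2002;
```

A nonzero count, or a non-empty result, tells the Engine that the partition base table contains relevant data and should be included in the main query.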

Levels at Which You Can Set This

Database instance, report, and template

Multiple Data Source Support


The Multiple data source support VLDB property allows you to choose which
technique to use to support multiple data sources in a project. This VLDB
property has the following options:

l Use MultiSource Option to access multiple data sources (default): MultiSource Option is used to access multiple data sources in a project.

MicroStrategy includes an extension to Intelligence Server referred to as MultiSource Option. With this feature, you can connect a project to
multiple relational data sources. This allows you to integrate all your
information from various databases and other relational data sources into
a single MicroStrategy project for reporting and analysis purposes. All
data sources included by using the MultiSource Option are integrated as
part of the same relational schema for a project.


l Use database gateway support to access multiple data sources: Database gateways are used to access multiple data sources in a project.

You can specify a secondary database instance for a table, which is used
to support database gateways. For example, in your environment you
might have a gateway between two databases such as an Oracle database
and a DB2 database. One of them is the primary database and the other is
the secondary database. The primary database receives all SQL requests
and passes them to the correct database.

Any object using a data source other than the primary data source is
considered as having multiple data sources. Therefore, the Execute Report
that uses multiple data sources privilege is required. This rule also
applies to scenarios when the object uses data only from a non-primary data
source.

For more information on both techniques for connecting to multiple data sources, see the Project Design Help.

Levels at Which You Can Set This

Database instance and project

OLAP Function Support


The OLAP function support VLDB property defines whether OLAP functions
support backwards compatibility or reflect enhancements to OLAP function
logic. This VLDB property has the following options:

l Preserve backwards compatibility with 8.1.x and earlier (default): OLAP functions reflect the functionality of pre-9.0 releases to support backwards compatibility.

This behavior does not correctly use multiple passes for nested or sibling
metrics that use OLAP functions. It also does not correctly apply attributes
in the SortBy and BreakBy parameters.


l Recommended with 9.0 and later: OLAP functions reflect the enhancements included in 9.0 and later releases.

This recommended behavior uses multiple passes for nested or sibling metrics that use OLAP functions. The functions also ignore attributes in SortBy or BreakBy parameters when the attributes are children of or are unrelated to the component metric's level.

Levels at Which You Can Set This

Database instance, report, and template

Parallel Query Execution


Parallel Query Execution is an advanced property that is hidden by default.
See Viewing and Changing Advanced VLDB Properties, page 1630 for more
information on how to display this property.

The Parallel Query Execution property determines whether MicroStrategy attempts to execute multiple queries in parallel to return report results and publish Intelligent Cubes faster. This VLDB property has the following options:

l Disable parallel query execution (default): All queries for MicroStrategy reports and Intelligent Cubes are processed sequentially.

Disabling parallel query execution by default allows you to first verify that
your reports and Intelligent Cubes are executing correctly prior to any
parallel query optimization. If you enable parallel query execution and
errors are encountered or data is not being returned as expected,
disabling parallel query execution can help to troubleshoot the report or
Intelligent Cube.

l Enable parallel query execution for multiple data source reports only: MicroStrategy attempts to execute multiple queries in parallel for MicroStrategy reports and Intelligent Cubes that access multiple data sources. You can access multiple data sources using either MicroStrategy MultiSource Option or database gateway support. To enable one of these


options, see Multiple Data Source Support, page 1831.

For reports and Intelligent Cubes that do not use MultiSource Option or
database gateway support to access multiple data sources, all queries are
processed sequentially.

l Enable parallel query execution for all reports that support it:
MicroStrategy attempts to execute multiple queries in parallel for all
MicroStrategy reports and Intelligent Cubes. This option is automatically
used for data that you integrate into MicroStrategy using Data Import.

How Parallel Query Execution is Supported

To support parallel query execution, MicroStrategy analyzes the query logic that will be run for a report or Intelligent Cube for potential multiple queries. Multiple queries are used for tasks that require:

l The creation of tables to store intermediate results, which are then used
later in the same query.

These intermediate results must be stored as permanent tables to be considered for parallel query execution. These permanent tables are required to ensure that the parallel query execution results are available
required to ensure that the parallel query execution results are available
for separate database sessions and connections. If database features
including derived tables or common table expressions are used, parallel
query execution cannot be used because these techniques are considered
to be a single query, which cannot be divided into separate pieces.
Therefore, data sources that use permanent tables to store intermediate
results are good candidates for parallel query execution.

MicroStrategy uses derived tables and common table expressions by default for databases that are well-suited to use these features to store
intermediate results. These databases can often perform their own query
optimizations using either derived tables or common table expressions,
and therefore may be better suited to using these techniques rather than
using MicroStrategy's parallel query execution.
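For example, two independent intermediate passes that write permanent tables can be issued on separate database connections at the same time, and only the final pass must wait for both. The ZZ-prefixed table names below are placeholders, not the exact names MicroStrategy generates:

```sql
-- Pass 1 and pass 2 are independent, so they can run in parallel
-- on separate database connections.
create table ZZTMP1 as
select STORE_ID, sum(DOLLAR_SALES) DOLLAR_SALES
from   FACT_SALES
group by STORE_ID;

create table ZZTMP2 as
select STORE_ID, sum(INVENTORY_QTY) INVENTORY_QTY
from   FACT_INVENTORY
group by STORE_ID;

-- The final pass joins the permanent intermediate results.
select a1.STORE_ID, a1.DOLLAR_SALES, a2.INVENTORY_QTY
from   ZZTMP1 a1
       join ZZTMP2 a2 on (a1.STORE_ID = a2.STORE_ID);
```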


l Selecting independent lookup, relationship, or fact data using SQL normalization or direct data loading methods. For information on using
these techniques with Intelligent Cubes and reports, see Data Population
for Intelligent Cubes, page 1816 and Data Population for Reports, page
1819 respectively.

l Loading multiple tables imported using Data Import, to publish a dataset.

The option Enable parallel query execution for all reports that support it is
The option Enable parallel query execution for all reports that support it is
automatically used for data that you integrate into MicroStrategy using
Data Import.

Candidates for Parallel Query Execution

Simple reports in MicroStrategy may not require multiple queries to return the required results, so even if parallel query execution is enabled, there
may be no performance benefit. However, there are various MicroStrategy
features and techniques that often require multiple queries and therefore
can benefit the most from parallel query execution, which include:

l Consolidations and custom groups.

l Level metrics and transformation metrics.

l Accessing multiple data sources using MultiSource Option or database gateway support.

l Accessing data sources that use temporary tables or permanent tables to store intermediate results.

l Accessing data in multiple tables through the use of Data Import.

If your report or Intelligent Cube uses any of the features listed above, it
may be a good candidate for using parallel query execution. Additionally,
using parallel query execution can be a good option for Intelligent Cubes
that are published during off-peak hours when the system is not in heavy use
by the reporting community. Using parallel query execution to publish these
Intelligent Cubes can speed up the publication process, while not affecting
the reporting community for your system.


There are additional scenarios in MicroStrategy that can require multiple queries. To help analyze which reports and Intelligent Cubes may benefit
from the use of parallel query execution, you can use the parallel query
execution improvement estimate provided in the SQL view of a report or
Intelligent Cube. For more information on this estimate and disabling or
enabling the inclusion of this estimate, see Parallel Query Execution
Improvement Estimate in SQL View, page 1837.

There are some scenarios where parallel query execution cannot be used.
These are described below:

l When reports contain user-defined data mart SQL, parallel query execution cannot be used to execute multiple queries in parallel. For
information on data mart Pre/Post Statement VLDB properties, including at
what levels these VLDB properties can be defined, see Customizing SQL
Statements: Pre/Post Statements, page 1768.

l When both MultiSource Option and warehouse partition mapping are used to return results for a report or Intelligent Cube from multiple data sources.
While the use of MultiSource Option alone can be a good candidate for
parallel query execution, when MultiSource Option is combined with
warehouse partition mapping to return results from multiple data sources,
parallel query execution cannot be used to execute multiple queries in
parallel. For information on using warehouse partition mapping for a
project, see the Project Design Help.

l Microsoft Access databases support parallel query execution for Intelligent Cubes. However, reports and Intelligent Cubes that require the
creation of temporary tables or insertion of values as part of parallel query
execution are instead processed sequentially for Access databases.

When to Disable Parallel Query Execution

While performing multiple queries in parallel can improve the performance of query execution in MicroStrategy, it will not provide the best performance or results in all scenarios.


Parallel query execution is disabled by default to allow you to first verify that
your reports and Intelligent Cubes are executing correctly prior to any
parallel query optimization. If you enable parallel query execution and errors
are encountered or data is not being returned as expected, disabling parallel
query execution can help to troubleshoot the report or Intelligent Cube.

When multiple queries are performed in parallel, the actual processing of the
multiple queries is performed in parallel on the database. If a database is
required to do too many tasks at the same time, this can cause the response
time of the database to slow down, and thus degrade the overall
performance. You should take into account the databases used to retrieve
data and their available resources when deciding whether to enable parallel
query execution.

Disabling parallel query execution can be a good option for reports and
Intelligent Cubes that are not used often or ones that do not have strict
performance requirements. If you can disable parallel query execution for
these reports and Intelligent Cubes that do not have a great need for
enhanced performance, that can save database resources to handle other
potentially more important requests.

Additionally, you can limit the number of queries that can be executed in
parallel for a given report or Intelligent Cube. This can allow you to enable
parallel query execution, but restrict how much processing can be done in
parallel on the database. To define the number of passes of SQL that can be
executed in parallel, see Maximum Parallel Queries Per Report, page 1830.

Levels at Which You Can Set This

Project, report, and template

Parallel Query Execution Improvement Estimate in SQL View


Parallel Query Execution Improvement Estimate in SQL View is an advanced
property that is hidden by default. For information on how to display this
property, see Viewing and Changing Advanced VLDB Properties, page 1630.


The Parallel Query Execution Improvement Estimate in SQL View property determines whether reports and Intelligent Cubes include an estimate of the percentage of processing time that would be saved if parallel query execution was used to run multiple queries in parallel. This VLDB property has the following options:

l Disable parallel query execution estimate in SQL view: An estimate of the percentage of processing time that would be saved if parallel query execution was used for the report or Intelligent Cube is not displayed in the SQL view of the report. This can simplify the SQL view if you are already using parallel query execution or you are not interested in this estimated improvement.

l Enable parallel query execution estimate in SQL view: An estimate of the percentage of processing time that would be saved if parallel query execution was used for the report or Intelligent Cube is displayed in the SQL view of the report. This estimate is provided as a percentage of time that could be saved by using parallel query execution.

To calculate this estimate, the report or Intelligent Cube is analyzed to determine if there are multiple queries. If there are multiple queries, this estimate is calculated by assuming that all applicable queries are run in parallel.

Be aware that this estimate does not factor in the capabilities of the
database you are using, which can have an effect on the performance of
parallel query execution since the database is what processes the multiple
passes in parallel. Additionally, this estimate assumes that all queries that
can be done in parallel are in fact performed in parallel. If parallel query
execution is enabled, the number of queries that can be performed in parallel is controlled by the Maximum Parallel Queries Per Report VLDB property (see Maximum Parallel Queries Per Report, page 1830).

Levels at Which You Can Set This

Project only


Rank Method if DB Ranking Not Used


The Rank Method property determines which method to use for ranking
calculations. There are three methods for ranking data, and in some cases,
this property is ignored. The logic is as follows:

1. If the metric that is being ranked has to be calculated in the Analytical Engine, then the ranking is calculated in the Analytical Engine as well.

2. If the database supports the Rank function, then the ranking is done in
the database.

3. If neither of the above criteria is met, then the Rank Method property
setting is used.

The most common form of ranking is referred to as Open Database Connectivity (ODBC) ranking. This was the standard method used by
MicroStrategy 6.x and earlier. This method makes multiple queries against
the warehouse to determine the ranking. ODBC ranking is the default
ranking technique, when the database does not support native ranking,
because it is more efficient for large datasets.

Analytical Engine ranking generates the result of the ranking operation in the MicroStrategy Analytical Engine and then moves the result set back to
the warehouse to perform any further operations and compile the final result
set.
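For contrast, when the database does support a native Rank function (case 2 above), the ranking can be pushed into the SQL itself. A sketch using the ANSI RANK window function, with hypothetical table and column names:

```sql
-- Ranking performed by the database rather than the Analytical Engine
-- or multi-query ODBC ranking.
select STORE_ID,
       DOLLAR_SALES,
       rank() over (order by DOLLAR_SALES desc) SALES_RANK
from   STORE_SALES;
```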

Levels at Which You Can Set This

Database instance, report, and template

Remove Aggregation Method


Remove Aggregation Method is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.


The Remove Aggregation Method property determines whether to keep or remove aggregations in SQL queries executed from MicroStrategy. This VLDB property has the following options:

l Remove aggregation according to key of FROM clause (default): Aggregations are kept or removed based on the level of data created by
joining all the tables included in the query. If the level of the information
returned in the query (SELECT clause) is the same as the level determined
by joining all required tables (FROM clause) then any unnecessary
aggregations are removed. If these levels are different, then aggregations
must be kept to ensure that correct data is returned. Determining whether
aggregations are necessary after joining all relevant tables helps to
provide valid SQL when the attribute data and the fact data are stored at
different levels.

For example, consider a report created in the MicroStrategy Tutorial project that displays the Month and Customer City attributes.

To create this report, data must be joined from the tables LU_MONTH, LU_
CUST_CITY, and CITY_MNTH_SLS. Since the attribute lookup tables
combine to have a level of Customer City and Month, and the CITY_MNTH_
SLS table has a level of Customer City and Month, normally this VLDB
property would have no effect on the SQL. However, for the purposes of
this example the LU_MONTH table was modified to include an extra
attribute named Example, and it is not related to the Month attribute.


Because of this additional unrelated attribute, while the report only displays Month and Customer City, the level of the data is Month, Customer City, and Example. If you use the other option (Remove aggregation according to key of fact tables) for this VLDB property, the following SQL is created:

The SQL statement above uses DISTINCT in the SELECT clause to return
the Month data. However, since there is an additional attribute on the LU_
MONTH table, the correct SQL to use includes aggregations on the data
rather than using DISTINCT. Therefore, if you use this Remove
aggregation according to key of FROM clause option for the VLDB
property, the following SQL is created:

This SQL statement correctly uses aggregation functions and a GROUP BY clause to return the attribute data.
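The two SQL listings referenced above appear as screenshots in the original guide and are not reproduced here. As a rough, hedged sketch (the column names and the exact SELECT list are illustrative, not the engine's actual output), the difference between the two options looks like this:

```sql
-- Remove aggregation according to key of fact tables: DISTINCT is used,
-- because the key of CITY_MNTH_SLS already matches the displayed level
select distinct a11.CUST_CITY_ID, a12.MONTH_ID
from CITY_MNTH_SLS a11
 join LU_MONTH a12
 on (a11.MONTH_ID = a12.MONTH_ID)

-- Remove aggregation according to key of FROM clause: the unrelated
-- Example attribute raises the FROM-clause level above the SELECT level,
-- so aggregation and GROUP BY are kept instead of DISTINCT
select a11.CUST_CITY_ID, a12.MONTH_ID,
 max(a12.MONTH_DESC) MONTH_DESC
from CITY_MNTH_SLS a11
 join LU_MONTH a12
 on (a11.MONTH_ID = a12.MONTH_ID)
group by a11.CUST_CITY_ID, a12.MONTH_ID
```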

l Remove aggregation according to key of fact tables: Aggregations are kept or removed prior to determining the level of data created by joining all of the tables required for the query. This option can be used for backward compatibility, which can help to provide the expected data and SQL statements in scenarios that utilize features such as nested aggregation in metrics and custom groups.

Levels at Which You Can Set This

Database instance, report, and template

Remove Group by Option


Remove Group By Option is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Remove Group By Option property determines when Group By conditions and aggregations can be omitted in specific SQL generation scenarios, depending on which of the following options you select:

l Remove aggregation and Group By when Select level is identical to From level (default): This setting provides the common behavior of omitting Group By conditions and aggregations if the level of the SELECT clause is identical to the level of the FROM clause. For example, a SQL statement that only includes the ID column for the Store attribute in the SELECT clause and only includes the lookup table for the Store attribute in the FROM clause does not include any Group By conditions.

l Remove aggregation and Group By when Select level contains all attribute(s) in From level: You can choose to omit Group By conditions and aggregations in the unique scenario of including multiple attributes on a report that are built from columns of the same table in the data warehouse. For example, you have separate attributes for shipping data, such as Shipping ID, Shipping Time, and Shipping Location, that are mapped to columns of the same table, whose primary key is mapped to Shipping ID. By selecting this setting, when these three attributes are included on a report, Group By conditions and aggregations for the shipping attributes are omitted.
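A hedged sketch of the shipping example above (the table and column names are hypothetical) shows what this option changes:

```sql
-- With this option: Group By is omitted, because the SELECT level
-- contains every attribute of the FROM level (SHIPPING_ID is the
-- primary key of the hypothetical LU_SHIPPING table)
select a11.SHIPPING_ID, a11.SHIPPING_TIME, a11.SHIPPING_LOC
from LU_SHIPPING a11

-- Without it: aggregation and Group By are generated even though each
-- SHIPPING_ID maps to exactly one row
select a11.SHIPPING_ID,
 max(a11.SHIPPING_TIME) SHIPPING_TIME,
 max(a11.SHIPPING_LOC) SHIPPING_LOC
from LU_SHIPPING a11
group by a11.SHIPPING_ID
```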


Levels at Which You Can Set This

Database instance, report, and template

Remove Repeated Tables for Outer Joins


Remove Repeated Tables For Outer Joins is an advanced property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Remove Repeated Tables For Outer Joins property determines whether
an optimization for outer join processing is enabled or disabled. You have
the following options:

l Disable optimization to remove repeated tables in full outer join and left outer join passes: The optimization for outer join processing is disabled. This can cause outer joins to require additional processing time.

l Enable optimization to remove repeated tables in full outer join and left outer join passes (default): The optimization for outer join processing is enabled. This can provide better response time for outer join processing.

However, if you sort or rank report results and some of the values used
for the sort or rank are identical, you may encounter different sort or rank
orders depending on whether you disable or enable this optimization. To
preserve current sorting or ranking orders on identical values, you may
want to disable this optimization.
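The effect of the optimization can be sketched as follows; this is a hypothetical, simplified pass (the table names are illustrative), not SQL produced by a specific report:

```sql
-- Optimization disabled: the same lookup table can be joined more than
-- once in a left outer join pass
select pa11.REGION_ID, a12.REGION_NAME, pa11.WJXBFS1
from ZZT00 pa11
 left outer join LU_REGION a12
 on (pa11.REGION_ID = a12.REGION_ID)
 left outer join LU_REGION a13
 on (pa11.REGION_ID = a13.REGION_ID)

-- Optimization enabled: the repeated table is removed from the pass
select pa11.REGION_ID, a12.REGION_NAME, pa11.WJXBFS1
from ZZT00 pa11
 left outer join LU_REGION a12
 on (pa11.REGION_ID = a12.REGION_ID)
```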

Levels at Which You Can Set This

Database instance, report, and template

Set Operator Optimization


Set Operator Optimization is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.


The Set Operator Optimization property determines whether to use set operators, such as EXCEPT and INTERSECT, to combine multiple filter qualifications rather than their equivalent logical operators such as AND NOT and AND. Set operators can be used to combine two or more of the following types of set qualifications:

l Relationship qualifications

l Metric qualifications when combined with other types of set qualifications with the logical operators AND, NOT, or OR

l Report as filter qualifications when combined with the logical operators AND, NOT, or OR

l Set operators can only be used to combine the filter qualifications listed
above if they have the same output level. For example, a relationship
qualification with an output level set to Year and Region cannot be
combined with another relationship qualification with an output level of
Year.

l Metric qualifications and report-as-filter qualifications, when combined with AND, render as inner joins by default to avoid a subquery in the final result pass. When Set Operator Optimization is enabled, the inner joins are replaced by subqueries combined using INTERSECT.

l Metric qualifications at the same level are combined into one set
qualification before being applied to the final result pass. This is more
efficient than using a set operator. Consult MicroStrategy Tech Note
TN13536 for more details.

l For more information on filters and filter qualifications, see the Advanced
Filters section of the MicroStrategy Advanced Reporting Guide.

Along with the restrictions described above, SQL set operators also depend on the subquery type and the database platform. For more information on subquery types, see Sub Query Type, page 1855. Set Operator Optimization can be used with the following subquery types:


l WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN

l WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...)

l WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN

l Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery

If either of the two subquery types that use fallback actions performs a fallback, Set Operator Optimization is not applied.

The following database platforms support SQL set operators:

Database    Intersect             Intersect ALL  Except                Except ALL  Union  Union ALL

ANSI 92     Yes                   Yes            Yes                   Yes         Yes    Yes

DB2 UDB     Yes                   Yes            Yes                   Yes         Yes    Yes

Informix    No                    No             No                    No          Yes    Yes

Oracle      Yes                   No             Yes (Minus)           No          Yes    Yes

RedBrick    Yes                   Yes            Yes                   Yes         Yes    Yes

SQL Server  Yes (2005 and later)  No             Yes (2005 and later)  No          Yes    Yes

Tandem      No                    No             No                    No          No     No

Teradata    Yes                   Yes            Yes                   Yes         Yes    Yes

If you enable Set Operator Optimization for a database platform that does
not support operators such as EXCEPT and INTERSECT, the Set Operator
Optimization property is ignored.


The Set Operator Optimization property provides you with the following
options:

l Disable Set Operator Optimization (default): Operators such as IN and AND NOT are used in SQL subqueries with multiple filter qualifications.

l Enable Set Operator Optimization (if database support and [Sub Query Type]): This setting can improve performance by using SQL set operators such as EXCEPT, INTERSECT, and MINUS in SQL subqueries to combine multiple filter qualifications that have the same output level. All of the dependencies described above must be met for SQL set operators to be used. If you enable SQL set operators for a database platform that does not support them, this setting is ignored and filters are combined in the standard way with operators such as IN and AND NOT.
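As a hedged illustration of the rewrite (hypothetical subqueries at the same output level; the engine's actual SQL depends on the report), two qualifications combined with AND NOT versus the set-operator form:

```sql
-- Set Operator Optimization disabled: logical operators
select a11.CUSTOMER_ID
from LU_CUSTOMER a11
where a11.CUSTOMER_ID in (select s1.CUSTOMER_ID from Q1_BUYERS s1)
 and a11.CUSTOMER_ID not in (select s2.CUSTOMER_ID from Q2_BUYERS s2)

-- Set Operator Optimization enabled: the equivalent set operator
select a11.CUSTOMER_ID
from LU_CUSTOMER a11
where a11.CUSTOMER_ID in
 (select s1.CUSTOMER_ID from Q1_BUYERS s1
 EXCEPT
 select s2.CUSTOMER_ID from Q2_BUYERS s2)
```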

For a further discussion on the Set Operator Optimization VLDB property, refer to MicroStrategy Tech Note TN13530.

Levels at Which You Can Set This

Database instance, report, and template

SQL Global Optimization


The SQL Global Optimization property provides access to level options you
can use to determine whether and how SQL queries are optimized.

In some cases, the SQL Engine generates duplicate or redundant passes, generates SQL passes that can be combined into one pass, or creates unnecessary temporary tables. Such SQL queries can have an adverse effect on performance.

The default option for this VLDB property changed in 9.0.0. For information on this change, see Upgrading From Pre-9.0.x Versions of MicroStrategy, page 1854.

You can set the following SQL Global Optimization options to determine the
extent to which SQL queries are optimized:


l Level 0: No optimization: SQL queries are not optimized.

l Level 1: Remove Unused and Duplicate Passes: Redundant, identical, and equivalent SQL passes are removed from queries during SQL generation.

l Level 2: Level 1 + Merge Passes with different SELECT: Level 1 optimization takes place as described above, and SQL passes from different SELECT statements are consolidated when it is appropriate to do so.

l Level 3: Level 2 + Merge Passes, which only hit DB Tables, with different WHERE: Level 2 optimization takes place as described above, and SQL passes which access database tables with different WHERE clauses are consolidated when it is appropriate to do so.

l Level 4: Level 2 + Merge All Passes with Different WHERE: This is the
default level. Level 2 optimization takes place as described above, and all
SQL passes with different WHERE clauses are consolidated when it is
appropriate to do so. While Level 3 only consolidates SQL statements that
access database tables, this option also considers SQL statements that
access temporary tables, derived tables, and common table expressions.

l Level 5: Level 2 + Merge All Passes, which hit the same warehouse
fact tables: Level 2 optimization takes place as described above, and
when multiple passes hit the same fact table, a compiled table is created
from the lookup tables of the multiple passes. This compiled table hits the
warehouse fact table only once.

Additionally, if you use either Level 3 or Level 4 SQL Global Optimization, SQL passes can also be combined for the SQL that is generated for separate custom group elements.

The SQL optimization available with Level 3 or Level 4 can be applied for SQL passes that use the functions Plus (+), Minus (-), Times (*), Divide (/), Unary minus (U-), Sum, Count, Avg (average), Min, and Max. To ensure that valid SQL is returned, if the SQL passes that are generated use any other functions, the SQL passes are not combined.

Example: Redundant SQL Pass

This example demonstrates how some SQL passes are redundant and
therefore removed when the Level 1 or Level 2 SQL Global Optimization
option is selected.

Suppose the following appear on the report template:

l Year attribute

l Region attribute

l Sum(Profit) {~+, Category%} metric (calculates profit for each Category, ignoring any filtering on Category)

The report generates the following SQL:

l SQL Pass 1: Retrieves the set of categories that satisfy the metric
qualification

SELECT a11.CATEGORY_ID CATEGORY_ID
into #ZZTRH02012JMQ000
FROM YR_CATEGORY_SLS a11
GROUP BY a11.CATEGORY_ID
HAVING sum(a11.TOT_DOLLAR_SALES) > 1000000.0

l SQL Pass 2: Final pass that selects the related report data, but does not
use the results of the first SQL pass:

SELECT a13.YEAR_ID YEAR_ID,
 a12.REGION_ID REGION_ID,
 max(a14.REGION_NAME) REGION_NAME,
 sum((a11.TOT_DOLLAR_SALES - a11.TOT_COST)) WJXBFS1
FROM DAY_CTR_SLS a11
 join LU_CALL_CTR a12
 on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
 join LU_DAY a13
 on (a11.DAY_DATE = a13.DAY_DATE)
 join LU_REGION a14
 on (a12.REGION_ID = a14.REGION_ID)
GROUP BY a13.YEAR_ID, a12.REGION_ID

SQL Pass 1 is redundant because it creates and populates a temporary table, #ZZTRH02012JMQ000, that is not accessed again and is unnecessary for generating the intended SQL result.

If you select either the Level 1: Remove Unused and Duplicate Passes or
Level 2: Level 1 + Merge Passes with different SELECT option, only one
SQL pass—the second SQL pass described above—is generated because it
is sufficient to satisfy the query on its own. By selecting either option, you
reduce the number of SQL passes from two to one, which can potentially
decrease query time.

Example: Combinable SQL Passes

Sometimes, two or more passes contain SQL that can be consolidated into a
single SQL pass, as shown in the example below. In such cases, you can
select the Level 2: Level 1 + Merge Passes with different SELECT option
to combine multiple passes from different SELECT statements.

Suppose the following appear on the report template:

l Region attribute

l Metric 1 = Sum(Revenue) {Region+} (calculates the total revenue for each region)

l Metric 2 = Count<FactID=Revenue>(Call Center) {Region+} (calculates the number of call centers for each region)

l Metric 3 = Metric 1/Metric 2 (Average Revenue = Total Revenue/Number of Call Centers)

The report generates the following SQL:


l SQL Pass 1: Calculates Metric 1 = Sum(Revenue) {Region+}

SELECT a12.[REGION_ID] AS REGION_ID,
 sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
into [ZZTI10200U2MD000]
FROM [CITY_CTR_SLS] a11,
 [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]

l SQL Pass 2: Calculates Metric 2 = Count<FactID=Revenue>(Call Center) {Region+}

SELECT a12.[REGION_ID] AS REGION_ID,
 count(a11.[CALL_CTR_ID]) AS WJXBFS1
into [ZZTI10200U2MD001]
FROM [CITY_CTR_SLS] a11,
 [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]

l SQL Pass 3: Final pass that calculates Metric 3 = Metric 1/Metric 2 and
displays the result:

SELECT pa11.[REGION_ID] AS REGION_ID,
 a13.[REGION_NAME] AS REGION_NAME,
 pa11.[WJXBFS1] AS WJXBFS1,
 IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0, NULL,
 pa12.[WJXBFS1]))), 0,
 (pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0,
 NULL, pa12.[WJXBFS1]))) AS WJXBFS2
FROM [ZZTI10200U2MD000] pa11,
 [ZZTI10200U2MD001] pa12,
 [LU_REGION] a13
WHERE pa11.[REGION_ID] = pa12.[REGION_ID] and
 pa11.[REGION_ID] = a13.[REGION_ID]

Because SQL passes 1 and 2 contain almost exactly the same code, they can be consolidated into one SQL pass. The only unique parts of each pass are the aggregation expression in the SELECT list and the temporary table name; therefore, Pass 1 and Pass 2 can be combined into just one pass. Pass 3 remains as it is.


You can achieve this type of optimization by selecting the Level 2: Level 1
+ Merge Passes with different SELECT option. The SQL that results from
this level of SQL optimization is as follows:

Pass 1:

SELECT a12.[REGION_ID] AS REGION_ID,
 sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1,
 count(a11.[CALL_CTR_ID]) AS WJXBFS2
into [ZZTI10200U2MD001]
FROM [CITY_CTR_SLS] a11,
 [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]

Pass 2:

SELECT pa11.[REGION_ID] AS REGION_ID,
 a13.[REGION_NAME] AS REGION_NAME,
 pa11.[WJXBFS1] AS WJXBFS1,
 IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa11.[WJXBFS2] = 0, NULL,
 pa11.[WJXBFS2]))), 0,
 (pa11.[WJXBFS1] / IIF(pa11.[WJXBFS2] = 0,
 NULL, pa11.[WJXBFS2]))) AS WJXBFS2
FROM [ZZTI10200U2MD001] pa11,
 [LU_REGION] a13
WHERE pa11.[REGION_ID] = a13.[REGION_ID]

Example: Combinable SQL Passes, with Different WHERE Clauses

Sometimes, two or more passes contain SQL with different WHERE clauses that can be consolidated into a single SQL pass, as shown in the example below. In such cases, you can select the Level 3: Level 2 + Merge Passes, which only hit DB Tables, with different WHERE option or the Level 4: Level 2 + Merge All Passes with Different WHERE option to combine multiple passes with different WHERE clauses.

Suppose the following appear on the report template:

Copyright © 2024 All Rights Reserved 1851


Syst em Ad m in ist r at io n Gu id e

l Quarter attribute

l Metric 1 = Web Sales (Calculates sales for the web call center)

l Metric 2 = Non-Web Sales (Calculates sales for all non-web call centers)

The report generates the following SQL:

Pass 1:

create table ZZMD00 (
 QUARTER_ID SHORT,
 WJXBFS1 DOUBLE)

Pass 2:

insert into ZZMD00
select a12.[QUARTER_ID] AS QUARTER_ID,
 sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [DAY_CTR_SLS] a11,
 [LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
 and a11.[CALL_CTR_ID] in (18)
group by a12.[QUARTER_ID]

Pass 3:

create table ZZMD01 (
 QUARTER_ID SHORT,
 WJXBFS1 DOUBLE)

Pass 4:

insert into ZZMD01
select a12.[QUARTER_ID] AS QUARTER_ID,
 sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [DAY_CTR_SLS] a11,
 [LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
 and a11.[CALL_CTR_ID] not in (18)
group by a12.[QUARTER_ID]


Pass 5:

select pa11.[QUARTER_ID] AS QUARTER_ID,
 a13.[QUARTER_DESC] AS QUARTER_DESC0,
 pa11.[WJXBFS1] AS WJXBFS1,
 pa12.[WJXBFS1] AS WJXBFS2
from [ZZMD00] pa11,
 [ZZMD01] pa12,
 [LU_QUARTER] a13
where pa11.[QUARTER_ID] = pa12.[QUARTER_ID] and
 pa11.[QUARTER_ID] = a13.[QUARTER_ID]

Pass 2 calculates the Web Sales and Pass 4 calculates all non-Web Sales. Because SQL passes 2 and 4 contain almost exactly the same SQL, they can be consolidated into one SQL pass. The only unique part of each pass is the filter on CALL_CTR_ID (in (18) versus not in (18)); therefore, Pass 2 and Pass 4 can be combined into just one pass.

You can achieve this type of optimization by selecting the Level 3: Level 2
+ Merge Passes, which only hit DB Tables, with different WHERE option
or the Level 4: Level 2 + Merge All Passes with Different WHERE option.
The SQL that results from this level of SQL optimization is as follows:

Pass 1:

create table ZZT6C00009GMD000 (
 QUARTER_ID SHORT,
 WJXBFS1 DOUBLE,
 GODWFLAG1_1 LONG,
 WJXBFS2 DOUBLE,
 GODWFLAG2_1 LONG)

Pass 2:

insert into ZZT6C00009GMD000
select a12.[QUARTER_ID] AS QUARTER_ID,
 sum(iif(a11.[CALL_CTR_ID] in (18),
 a11.[TOT_DOLLAR_SALES], NULL))
 AS WJXBFS1,
 max(iif(a11.[CALL_CTR_ID] in (18), 1, 0))
 AS GODWFLAG1_1,
 sum(iif(a11.[CALL_CTR_ID] not in (18),
 a11.[TOT_DOLLAR_SALES], NULL))
 AS WJXBFS2,
 max(iif(a11.[CALL_CTR_ID] not in (18), 1, 0))
 AS GODWFLAG2_1
from [DAY_CTR_SLS] a11,
 [LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
 and (a11.[CALL_CTR_ID] in (18)
 or a11.[CALL_CTR_ID] not in (18))
group by a12.[QUARTER_ID]

Pass 3:

select pa12.[QUARTER_ID] AS QUARTER_ID,
 a13.[QUARTER_DESC] AS QUARTER_DESC0,
 pa12.[WJXBFS1] AS WJXBFS1,
 pa12.[WJXBFS2] AS WJXBFS2
from [ZZT6C00009GMD000] pa12,
 [LU_QUARTER] a13
where pa12.[QUARTER_ID] = a13.[QUARTER_ID]
 and (pa12.[GODWFLAG1_1] = 1
 and pa12.[GODWFLAG2_1] = 1)

Upgrading From Pre-9.0.x Versions of MicroStrategy


The default option for the SQL Global Optimization VLDB property changed
in MicroStrategy 9.0.0. In pre-9.0.x versions of MicroStrategy, the default
option for this VLDB property was Level 2: Level 1 + Merge Passes with
different SELECT. Starting with MicroStrategy 9.0.0, the default option for
this VLDB property is Level 4: Level 2 + Merge All Passes with Different
WHERE.

When projects are upgraded to 9.0.x, if you have defined this VLDB property
to use the default setting, this new default is applied. This change improves
performance for the majority of reporting scenarios. However, the new
default can cause certain reports to become unresponsive or fail with time-out errors. For example, reports that contain custom groups or a large
number of conditional metrics may encounter performance issues with this
new default.


You can use Integrity Manager to determine any changes in performance that your reports may encounter due to upgrading your MicroStrategy projects. This allows you to determine which reports may encounter performance issues due to this VLDB property modification.

To resolve this issue for a report, after completing an upgrade, modify the
SQL Global Optimization VLDB property for the report to use the option
Level 2: Level 1 + Merge Passes with different SELECT.

Levels at Which You Can Set This

Database instance, report, and template

Sub Query Type


Sub Query Type is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Sub Query Type property tells the Analytical Engine what type of syntax
to use when generating a subquery. A subquery is a secondary SELECT
statement in the WHERE clause of the primary SQL statement.

The Sub Query Type property is database-specific, because different databases have different syntax support for subqueries. Some databases can have improved query building and performance depending on the subquery type used. For example, it is more efficient to use a subquery that selects only the needed columns rather than every column. Subqueries can also be more efficient by using the IN clause rather than the EXISTS function.

The most optimal option depends on your database capabilities. In general, the default setting is WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT *...) for multiple columns IN. However, the default setting is based on the most optimal setting for your database type. See the table below for database platform exceptions to the default setting. To review example SQL syntax for each VLDB setting for Sub Query Type, see WHERE EXISTS (Select *…), page 1857.

Levels at Which You Can Set This

Database instance, report, and template

Database exceptions to the default setting

Database                         Default

DB2 UDB                          Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

DB2 UDB for OS/390               Where Exists (Select *...)

Microsoft Access 2000/2002/2003  Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

Microsoft Excel 2000/2003        Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

Netezza                          Where (col1, col2...) in (Select s1.col1, s1.col2...)

Oracle                           Where (col1, col2...) in (Select s1.col1, s1.col2...)

PostgreSQL                       Where (col1, col2...) in (Select s1.col1, s1.col2...)

RedBrick                         Where col1 in (Select s1.col1...) falling back to Exists (Select col1, col2...) for multiple column in

Teradata                         Use Temporary Table, falling back to in (Select col) for correlated subquery

Notice that some options have a fallback action. In some scenarios, the selected option does not work, so the SQL Engine must fall back to an approach that always works. The typical scenario for falling back is when multiple columns are needed in the IN list, but the database does not support multiple columns in an IN list, so a correlated subquery must be used instead.


For a further discussion of the Sub Query Type VLDB property, refer to
MicroStrategy Tech Note TN13870.

WHERE EXISTS (Select *…)

select a31.ITEM_NBR ITEM_NBR,
 a31.CLASS_NBR CLASS_NBR,
 sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where (exists (select *
 from REGION_ITEM r21,
 LOOKUP_DAY r22
 where r21.CUR_TRN_DT = r22.CUR_TRN_DT
 and r22.SEASON_ID in (199501)
 and r21.ITEM_NBR = a31.ITEM_NBR
 and r21.CLASS_NBR = a31.CLASS_NBR))
group by a31.ITEM_NBR,
 a31.CLASS_NBR

WHERE EXISTS (SELECT col1, col2…)

select a31.ITEM_NBR ITEM_NBR,
 a31.CLASS_NBR CLASS_NBR,
sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where (exists (select a31.ITEM_NBR ITEM_NBR,
a31.CLASS_NBR CLASS_NBR
from REGION_ITEM r21,
LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
and r22.SEASON_ID in (199501)
and r21.ITEM_NBR = a31.ITEM_NBR
and r21.CLASS_NBR = a31.CLASS_NBR))
group by a31.ITEM_NBR,
a31.CLASS_NBR

WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for
multiple columns IN

select a31.ITEM_NBR ITEM_NBR,
 sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where ((a31.ITEM_NBR)
 in (select r21.ITEM_NBR ITEM_NBR
 from REGION_ITEM r21,
 LOOKUP_DAY r22
 where r21.CUR_TRN_DT = r22.CUR_TRN_DT
 and r22.SEASON_ID in (199501)))
group by a31.ITEM_NBR

WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...)

select a31.ITEM_NBR ITEM_NBR,
 a31.CLASS_NBR CLASS_NBR,
sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where ((a31.ITEM_NBR,
a31.CLASS_NBR)
in (select r21.ITEM_NBR ITEM_NBR,
r21.CLASS_NBR CLASS_NBR
from REGION_ITEM r21,
LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
and r22.SEASON_ID in (199501)))
group by a31.ITEM_NBR,
a31.CLASS_NBR

Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated
subquery (default)

create table TEMP1 as
select r21.ITEM_NBR ITEM_NBR
from REGION_ITEM r21,
 LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
 and r22.SEASON_ID in (199501)

select a31.ITEM_NBR ITEM_NBR,
 sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
 join TEMP1 a32
 on a31.ITEM_NBR = a32.ITEM_NBR
group by a31.ITEM_NBR

WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2
...) for multiple columns IN

select a31.ITEM_NBR ITEM_NBR,
 sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where ((a31.ITEM_NBR)
 in (select r21.ITEM_NBR ITEM_NBR
 from REGION_ITEM r21,
 LOOKUP_DAY r22
 where r21.CUR_TRN_DT = r22.CUR_TRN_DT
 and r22.SEASON_ID in (199501)))
group by a31.ITEM_NBR

Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery

create table TEMP1 as
select r21.ITEM_NBR ITEM_NBR
from REGION_ITEM r21,
 LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
 and r22.SEASON_ID in (199501)

select a31.ITEM_NBR ITEM_NBR,
 sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
 join TEMP1 a32
 on a31.ITEM_NBR = a32.ITEM_NBR
group by a31.ITEM_NBR

Transformation Formula Optimization


Transformation Formula Optimization is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

The Transformation Formula Optimization VLDB property allows you to improve the performance of expression-based transformations. Performance can be improved for reports that include expression-based transformations and meet the following requirements:

l No attributes on the report grid or the Report Objects of the report are
related to the transformation's member attribute. For example, if a
transformation is defined on the attribute Year of the Time hierarchy, no
attributes in the Time hierarchy can be included on the report grid or
Report Objects.

l The filter of the report does contain attributes that are related to the transformation's member attribute. For example, if a transformation is defined on the attribute Year of the Time hierarchy, a filter on another attribute in the Time hierarchy is included on the report.

For information on expression-based transformations and how to create them, see the Project Design Help.

This VLDB property has the following options:

l Always join with transformation table to perform transformation: A join with the transformation table is used to perform the transformation. This option supports backwards compatibility and also serves as a fallback if optimization cannot be applied for the transformation.

l Use transformation formula instead of join with transformation table when possible (default): If the transformation is an expression-based transformation and the report meets the requirements listed above, the expression is used rather than using a join with the transformation table.

This can improve performance of expression-based transformations by eliminating the requirement to join with the transformation table. If the transformation is included on a report that cannot support this optimization, then a join with the transformation table is automatically used to support the transformation. An example of this optimization is shown below.

Levels at Which You Can Set This

Database instance, report, and template

The SQL statements shown below display a SQL statement before (Statement
1) and after (Statement 2) applying the transformation optimization.

Statement 1

select a14.CATEGORY_ID CATEGORY_ID,
 max(a15.CATEGORY_DESC) CATEGORY_DESC,
 sum((a11.QTY_SOLD * (a11.UNIT_PRICE - a11.DISCOUNT))) WJXBFS1
from ORDER_DETAIL a11
 join LU_DAY a12
 on (a11.ORDER_DATE = a12.DAY_DATE - 1 YEAR)
 join LU_ITEM a13
 on (a11.ITEM_ID = a13.ITEM_ID)
 join LU_SUBCATEG a14
 on (a13.SUBCAT_ID = a14.SUBCAT_ID)
 join LU_CATEGORY a15
 on (a14.CATEGORY_ID = a15.CATEGORY_ID)
where a12.DAY_DATE = '08/31/2021'
group by a14.CATEGORY_ID

Statement 2

select a14.CATEGORY_ID CATEGORY_ID,
 max(a15.CATEGORY_DESC) CATEGORY_DESC,
 sum((a11.QTY_SOLD * (a11.UNIT_PRICE - a11.DISCOUNT))) WJXBFS1
from ORDER_DETAIL a11
 join LU_ITEM a13
 on (a11.ITEM_ID = a13.ITEM_ID)
 join LU_SUBCATEG a14
 on (a13.SUBCAT_ID = a14.SUBCAT_ID)
 join LU_CATEGORY a15
 on (a14.CATEGORY_ID = a15.CATEGORY_ID)
where a11.ORDER_DATE = DATE('08/31/2021') - 1 YEAR
group by a14.CATEGORY_ID

Unrelated Filter Options


Unrelated Filter Options is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

MicroStrategy contains the logic to ignore filter qualifications that are not
related to the template attributes, to avoid unnecessary Cartesian joins.
However, in some cases a relationship is created that should not be ignored.
The Unrelated Filter Options property determines whether to remove or keep
unrelated filter qualifications that are included in the report's filter or through
the use of joint element lists. This VLDB property has the following options:


If filter qualifications are included as part of a report as filter, all filter qualifications are kept on the report regardless of whether they are related or unrelated to the attributes on the report. For information on using the report as filter functionality, see the Advanced Reporting Help.

l Remove unrelated filter (default): Any filter qualification with attributes that are unrelated to any of the attributes on the report is removed. An example of how this option can modify a report, in comparison to the Keep unrelated filter and put condition from unrelated attributes in one subquery group option, is provided below.

l Keep unrelated filter: This option is for backward compatibility. You
should switch to using the Keep unrelated filter and put condition from
unrelated attributes in one subquery group option described below.

l Keep unrelated filter and put condition from unrelated attributes in
one subquery group: Filter qualifications that include attributes that are
unrelated to any of the attributes on the report are kept on the report in
certain scenarios. This means that the filtering is applied to the report.
However, not all unrelated filter qualifications are kept on a report if you
select this option.

For example, you have a report with a filter on the Country attribute, and
the Year attribute is on the report template. This example assumes that no
relationship between Country and Year is defined in the schema. In this
case, the filter is removed regardless of this VLDB property setting. This is
because the filter qualification does not include any attributes that could
be related to the attributes on the report.

This setting does keep filter qualifications in certain scenarios. For
example, you have a report that is defined as follows:

l Report filters:

l Filter 1 = (Country, Quarter) in {(England, 2008 Q3), (France, 2008 Q1)}


l Report template: Includes the Year attribute

Filter 1 described above could be from a joint element list or a
combination of report filter qualifications. Since this filter qualification
includes the Quarter attribute, which is related to the Year attribute,
selecting this option includes the filtering in the report. The SQL
generated with each setting is as follows:

l Remove unrelated filter: The filter qualifications on Country are
removed from the report and the report SQL, as shown below:

select distinct a11.[YEAR_ID] AS YEAR_ID
from [LU_QUARTER] a11
where (a11.[QUARTER_ID] = 20083
or a11.[QUARTER_ID] = 20081)

l Keep unrelated filter and put condition from unrelated attributes in one
subquery group: The filter qualifications on Country are included on the
report and in the report SQL, as shown below:

create table ZZSQ00 (
QUARTER_ID SHORT,
GODWFLAG1_1 LONG,
GODWFLAG2_1 LONG)
insert into ZZSQ00
select distinct s22.[QUARTER_ID] AS QUARTER_ID,
iif((s21.[COUNTRY_ID] = 3 and s22.[QUARTER_ID] =
20083), 1, 0) AS GODWFLAG1_1,
iif((s21.[COUNTRY_ID] = 4 and s22.[QUARTER_ID] =
20081), 1, 0) AS GODWFLAG2_1
from [LU_COUNTRY] s21,
[LU_QUARTER] s22
where ((s21.[COUNTRY_ID] = 3
and s22.[QUARTER_ID] = 20083)
or (s21.[COUNTRY_ID] = 4
and s22.[QUARTER_ID] = 20081))
select distinct a13.[YEAR_ID] AS YEAR_ID
from [ZZSQ00] pa11,
[ZZSQ00] pa12,
[LU_QUARTER] a13
where pa11.[QUARTER_ID] = pa12.[QUARTER_ID] and
pa11.[QUARTER_ID] = a13.[QUARTER_ID]
and (pa11.[GODWFLAG1_1] = 1
and pa12.[GODWFLAG2_1] = 1)
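The flag-table pattern above can be exercised end to end on a throwaway sqlite3 database. This is only a sketch: the lookup rows are invented, CASE WHEN stands in for the Access iif() function, and the final pass is simplified to a single scan of the flag table rather than the self-join shown above.

```python
import sqlite3

# Hypothetical miniature lookups standing in for LU_COUNTRY / LU_QUARTER.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LU_COUNTRY (COUNTRY_ID INTEGER);
CREATE TABLE LU_QUARTER (QUARTER_ID INTEGER, YEAR_ID INTEGER);
INSERT INTO LU_COUNTRY VALUES (3), (4);
INSERT INTO LU_QUARTER VALUES (20081, 2008), (20083, 2008), (20091, 2009);
""")

# Pass 1: one row per qualifying quarter, with one GODWFLAG column per
# joint element -- (England = 3, 2008 Q3) and (France = 4, 2008 Q1).
con.executescript("""
CREATE TABLE ZZSQ00 AS
SELECT DISTINCT s22.QUARTER_ID,
       CASE WHEN s21.COUNTRY_ID = 3 AND s22.QUARTER_ID = 20083
            THEN 1 ELSE 0 END AS GODWFLAG1_1,
       CASE WHEN s21.COUNTRY_ID = 4 AND s22.QUARTER_ID = 20081
            THEN 1 ELSE 0 END AS GODWFLAG2_1
FROM LU_COUNTRY s21, LU_QUARTER s22
WHERE (s21.COUNTRY_ID = 3 AND s22.QUARTER_ID = 20083)
   OR (s21.COUNTRY_ID = 4 AND s22.QUARTER_ID = 20081);
""")
flags = sorted(con.execute("SELECT * FROM ZZSQ00"))

# Simplified final pass: any flagged quarter rolls up to its year.
years = [r[0] for r in con.execute("""
SELECT DISTINCT a13.YEAR_ID
FROM ZZSQ00 pa11 JOIN LU_QUARTER a13 ON pa11.QUARTER_ID = a13.QUARTER_ID
WHERE pa11.GODWFLAG1_1 = 1 OR pa11.GODWFLAG2_1 = 1
""")]
print(flags)  # [(20081, 0, 1), (20083, 1, 0)]
print(years)  # [2008]
```

Only the two flagged quarters survive into ZZSQ00, and both belong to 2008, so the Year-level report keeps the joint-element filtering.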


Levels at Which You Can Set This

Database instance, report, and template

Unrelated Filter Options for Nested Metrics


Unrelated Filter Options for Nested Metrics is an advanced property that is
hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Unrelated Filter Options for Nested Metrics property determines
whether to remove or keep unrelated filter qualifications when using nested
metrics. Nested metrics, or nested aggregation metrics, are a type of simple
metric, where one aggregation function is enclosed inside another. For
additional information on nested metrics, see the Advanced Reporting Help.

To explain how this VLDB property determines whether to keep or remove
unrelated filter qualifications when using nested metrics, consider the
following example:

l The following example was created in the MicroStrategy Tutorial project,
with its data stored in a Microsoft Access database.

l A report is created that includes the following:

l The Category attribute on the rows of the report.

l A metric on the columns of the report. The metric is defined as Sum
(ApplySimple("IIf(#0 = 1, #1, 0)", Region@ID, Sum
(Revenue) {~+})) {~}. This metric returns revenue data for the
Northeast region (Region@ID = 1) or a zero value.

l A report filter that is defined as Category In List (Books). This
report filter returns data only for the Books category.

For the example explained above, the metric includes the Region attribute
(through the use of Region@ID) and the report filter includes the Category
attribute. Since the Category attribute is unrelated to the Region attribute, it
is considered unrelated to the nested metric's inclusion of the Region
attribute.

This VLDB property has the following options:

l Use the 8.1.x behavior (default): Select this option to use the behavior in
MicroStrategy 8.1.x. In the example described above, this returns the
following SQL statement, which has been abbreviated for clarity:

insert into ZZTTM6REM4ZMD000
select a11.[CATEGORY_ID] AS CATEGORY_ID,
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [YR_CATEGORY_SLS] a11
where a11.[CATEGORY_ID] in (1)
group by a11.[CATEGORY_ID]
select pa11.[CATEGORY_ID] AS CATEGORY_ID,
max(a13.[CATEGORY_DESC]) AS CATEGORY_DESC0,
sum(IIf(a12.[REGION_ID] = 1, pa11.[WJXBFS1], 0))
AS WJXBFS1
from [ZZTTM6REM4ZMD000] pa11,
[LU_REGION] a12,
[LU_CATEGORY] a13
where pa11.[CATEGORY_ID] = a13.[CATEGORY_ID]
group by pa11.[CATEGORY_ID]

While the unrelated filter qualification is kept in the first pass of SQL, it
is removed from the second pass of SQL. This means that the filtering
on Category is applied to the inner aggregation that returns a
summation of revenue for the Northeast region only. However, the
filtering on Category is not used in the final summation.

This option can be beneficial for the processing of security filters,
which can create additional unrelated filter qualifications on a report
based on a user's security filter constraints. Selecting this option can
remove some of these unrelated filter qualifications caused by a user's
security filter.

l Use the 9.0.x behavior: Select this option to use the behavior in
MicroStrategy 9.0.x. In the example described above, this returns the
following SQL statement, which has been abbreviated for clarity:


insert into ZZTTM6REM4ZMD000
select a11.[CATEGORY_ID] AS CATEGORY_ID,
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [YR_CATEGORY_SLS] a11
where a11.[CATEGORY_ID] in (1)
group by a11.[CATEGORY_ID]
select pa11.[CATEGORY_ID] AS CATEGORY_ID,
max(a13.[CATEGORY_DESC]) AS CATEGORY_DESC0,
sum(IIf(a12.[REGION_ID] = 1, pa11.[WJXBFS1], 0))
AS WJXBFS1
from [ZZTTM6REM4ZMD000] pa11,
[LU_REGION] a12,
[LU_CATEGORY] a13
where pa11.[CATEGORY_ID] = a13.[CATEGORY_ID] and pa11.[CATEGORY_ID] in (1)
group by pa11.[CATEGORY_ID]

By using the 9.0.x behavior, the unrelated filter qualification is kept in both
SQL passes. This means that the filtering on Category is applied to the
inner aggregation that returns a summation of revenue for the Northeast
region only. The filtering on Category is also used in the final summation.
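The two-pass shape above can be checked concretely on a throwaway sqlite3 database. The sample rows are invented and CASE WHEN stands in for IIf. Because the Category filter was already applied when the intermediate table was built, repeating the predicate in the second pass (the 9.0.x behavior) does not change this small result; the choice between the behaviors mainly matters when additional qualifications, such as security filters, are involved.

```python
import sqlite3

# Hypothetical miniature versions of the tables in the SQL above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE YR_CATEGORY_SLS (CATEGORY_ID, TOT_DOLLAR_SALES);
CREATE TABLE LU_REGION (REGION_ID);
CREATE TABLE LU_CATEGORY (CATEGORY_ID, CATEGORY_DESC);
INSERT INTO YR_CATEGORY_SLS VALUES (1, 100), (1, 50), (2, 999);
INSERT INTO LU_REGION VALUES (1), (2);
INSERT INTO LU_CATEGORY VALUES (1, 'Books'), (2, 'Music');
""")

# Pass 1: inner aggregation, with the Category filter applied.
con.executescript("""
CREATE TABLE ZZT AS
SELECT CATEGORY_ID, SUM(TOT_DOLLAR_SALES) AS WJXBFS1
FROM YR_CATEGORY_SLS
WHERE CATEGORY_ID IN (1)
GROUP BY CATEGORY_ID;
""")

# Pass 2 (9.0.x behavior): the Category filter is repeated, and the
# region-gated outer aggregation is computed.
result = con.execute("""
SELECT pa11.CATEGORY_ID,
       MAX(a13.CATEGORY_DESC),
       SUM(CASE WHEN a12.REGION_ID = 1 THEN pa11.WJXBFS1 ELSE 0 END)
FROM ZZT pa11, LU_REGION a12, LU_CATEGORY a13
WHERE pa11.CATEGORY_ID = a13.CATEGORY_ID
  AND pa11.CATEGORY_ID IN (1)
GROUP BY pa11.CATEGORY_ID
""").fetchall()
print(result)  # [(1, 'Books', 150)]
```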

Levels at Which You Can Set This

Database instance, report, and template

WHERE Clause Driving Table


The Where Clause Driving Table property tells the Analytical Engine what
type of column is preferred in a qualification of a WHERE clause when
generating SQL. One SQL pass usually joins fact tables and lookup tables
on certain ID columns. When a qualification is defined on such a column, the
Analytical Engine can use the column in either the fact table or the lookup
table. In certain databases, like Teradata and RedBrick, a qualification on
the lookup table can achieve better performance. By setting the Where
Clause Driving Table property to Use Lookup Table, the Analytical Engine
always tries to pick the column from the lookup table.

If Use lookup table is selected, but there is no lookup table in the FROM
clause for the column being qualified on, the Analytical Engine does not add
the lookup table to the FROM clause. To make sure that a qualification is
done on a lookup table column, the DSS Star Join property should be set to
use Partial star join.
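A quick sqlite3 sketch (invented tables) shows that the qualification can be placed on either side of the join without changing the result; the property only controls which column the Analytical Engine prefers:

```python
import sqlite3

# Invented fact and lookup tables joined on STORE_ID.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LU_STORE (STORE_ID, STORE_DESC);
CREATE TABLE FACT_SALES (STORE_ID, SALES);
INSERT INTO LU_STORE VALUES (5, 'Store 5'), (6, 'Store 6');
INSERT INTO FACT_SALES VALUES (5, 10), (5, 20), (6, 7);
""")

# The qualification STORE_ID = 5 can be written on the fact column...
on_fact = con.execute("""
SELECT SUM(a11.SALES) FROM FACT_SALES a11
JOIN LU_STORE a12 ON a11.STORE_ID = a12.STORE_ID
WHERE a11.STORE_ID = 5
""").fetchone()[0]

# ...or on the lookup column (what "Use Lookup Table" prefers). The rows
# returned are identical; databases such as Teradata and RedBrick may
# simply execute the lookup-side predicate faster.
on_lookup = con.execute("""
SELECT SUM(a11.SALES) FROM FACT_SALES a11
JOIN LU_STORE a12 ON a11.STORE_ID = a12.STORE_ID
WHERE a12.STORE_ID = 5
""").fetchone()[0]
print(on_fact, on_lookup)  # 30 30
```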

Levels at Which You Can Set This

Database instance, report, and template

Selecting and Inserting Data with SQL: Select/Insert


The following table summarizes the Select/Insert VLDB properties.
Additional details about each property, including examples where
necessary, are provided in the sections following the table.

l Attribute Form Selection Option for Intermediate Passes: Allows you to
choose whether to select attribute forms that are on the template in the
intermediate pass (if available). Possible values: Select ID form only;
Select ID and other forms if they are on template and available in existing
join tree. Default: Select ID form only.

l Attribute Selection Option for Intermediate Pass: Allows you to choose
whether to select additional attributes (usually parent attributes) needed
on the template, as the join tree and their child attributes have already
been selected in the Attribute Form Selection Option for Intermediate
Pass. Possible values: Select only the attributes needed; Select other
attributes in current join tree if they are on template and their child
attributes have already been selected. Default: Select only the attributes
needed.

l Bulk Insert String: Determines whether multiple insert statements are
issued in the ODBC call, and if together, the string to connect the
multiple insert statements. Possible values: User-defined. Default: NULL.

l Constant Column Mode: Allows you to choose whether to use a GROUP BY and
how the GROUP BY should be constructed when working with a column that is
a constant. Possible values: Pure select, no group by; Use max, no group
by; Group by column (expression); Group by alias; Group by position.
Default: Pure select, no group by.

l Custom Group Interaction with the Report Filter: Allows you to define how
a report filter interacts with a custom group. Possible values: No
interaction - static custom group; Apply report filter to custom group;
Apply report filter to custom group, but ignore related elements from the
report filter. Default: No interaction - static custom group.

l Data Retrieval Mode: Determines whether data is retrieved using
third-party, native APIs. Possible values: Only ODBC; Allow Native API.
Default: Only ODBC.

l Data Retrieval Parameters: Defines the parameters used to retrieve data
using third-party, native APIs. Possible values: User-defined. Default:
NULL.

l Data Mart Column Order: Allows you to determine the order in which data
mart columns are created. Possible values: Columns created in order based
on attribute weight; Columns created in order in which they appear on the
template. Default: Columns created in order based on attribute weight.

l Date Format: Sets the format for dates in engine-generated SQL. Possible
values: User-defined. Default: YYYY-MM-DD.

l Date Pattern: Lets you define the syntax pattern for Date data. Possible
values: User-defined. Default: NULL.

l Decimal Separator: Use to change the decimal separator in SQL statements
from a decimal point to a comma, for international database users.
Possible values: Use "." as decimal separator (ANSI standard); Use "," as
decimal separator. Default: Use "." as decimal separator (ANSI standard).

l Default Attribute Weight: Use this to determine how attributes are
treated, for those attributes that are not in the attribute weights list.
Possible values: Lowest weight; Highest weight. Default: Highest weight.

l Disable Prefix in WH Partition Table: Allows you to choose whether or not
to use the prefix in warehouse partition queries. The prefix is always
used with pre-queries. Possible values: Use prefix in both warehouse
partition pre-query and partition query; Use prefix in warehouse partition
pre-query but not in partition query. Default: Use prefix in both
warehouse partition pre-query and partition query.

l Distinct/Group by Option (When No Aggregation and Not Table Key): If no
aggregation is needed and the attribute defined on the table is not a
primary key, tells the SQL Engine whether to use Select Distinct, Group
by, or neither. Possible values: Use DISTINCT; No DISTINCT, no GROUP BY;
Use GROUP BY. Default: Use DISTINCT.

l GROUP BY ID Attribute: Determines how to group by a selected ID column
when an expression is performed on the ID expression. Possible values:
Group by expression; Group by alias; Group by column; Group by position.
Default: Group by expression.

l GROUP BY Non-ID Attribute: Determines how to handle columns for non-ID
attributes. Possible values: Use Max; Use Group By. Default: Use Max.

l Insert Post String: Determines the string that is inserted at the end of
insert and implicit table creation statements. Possible values:
User-defined. Default: NULL.

l Insert Table Option: Determines the string inserted after the table name
in insert statements; analogous to table option. Possible values:
User-defined. Default: NULL.

l Long Integer Support: Determines whether to map long integers of a
certain length as BigInt data types when MicroStrategy creates tables in a
database. Possible values: Do not use BigInt; Up to 18 digits; Up to 19
digits. Default: Do not use BigInt.

l Max Digits in Constant: Sets the maximum number of digits in a constant
literal in an insert values statement (0 = no limit). Possible values:
User-defined. Default: No limit.

l Merge Same Metric Expression Option: Determines how to handle metrics
that have the same definition. Possible values: Merge same metric
expression; Do not merge same metric expression. Default: Merge same
metric expression.

l Select Post String: Defines the custom SQL string to be appended to all
SELECT statements, for example, FOR FETCH ONLY. Possible values:
User-defined. Default: NULL.

l Select Statement Post String: Defines the custom SQL string to be
appended to the final SELECT. Possible values: User-defined. Default:
NULL.

l SQL Hint: This string is placed after the SELECT statement. Possible
values: User-defined. Default: NULL.

l SQL Time Format: Determines the format of the time literal accepted in
SQL statements. Possible values: User-defined. Default: hh:nn:ss.

l Timestamp Format: Sets the format of the timestamp literal accepted in
the WHERE clause. Possible values: User-defined. Default: yyyy-mm-dd
hh:nn:ss.

l UNION Multiple INSERT: Allows the Analytical Engine to UNION multiple
insert statements into the same temporary table. Possible values: Do not
use UNION; Use UNION. Default: Do not use UNION.

l Use Column Type Hint for Parameterized Query: Determines whether the
WCHAR data type is used as applicable to return data accurately while
using parameterized queries. Possible values: Disabled; Enable ODBC
Column Type Binding Hint for "WCHAR" and "CHAR". Default: Disabled.

Attribute Selection and Form Selection Option for Intermediate Passes
Normally, the MicroStrategy SQL Engine selects the minimum number of
columns that are needed in each pass. For an intermediate pass, the SQL
Engine usually only selects attribute ID forms. The SQL Engine also selects
the attributes necessary to make the join, usually key attributes. Then in the
final pass, additional attributes or attribute forms that are necessary for
report display can be joined.

This algorithm is optimal in most cases, as it minimizes the size of
intermediate tables. However, in certain schemas, especially denormalized
ones, and schemas that use fact tables as both lookup tables and
relationship tables, such an algorithm may cause additional joins in the final
pass.

A report template contains the attributes Region and Store, and metrics M1 and
M2. M1 uses the fact table FT1, which contains Store_ID, Store_Desc, Region_
ID, Region_Desc, and F1. M2 uses the fact table FT2, which contains Store_ID,
Store_Desc, Region_ID, Region_Desc, and F2. With the normal SQL Engine
algorithm, the intermediate pass that calculates M1 selects Store_ID and F1,
and the intermediate pass that calculates M2 selects Store_ID and F2. Then the
final pass joins these two intermediate tables together. But that is not enough.
Since Region is on the template, the SQL Engine must join upward to the region
level and find the Region_Desc form. This can be done by joining either FT1 or
FT2 in the final pass. So with the original algorithm, either FT1 or FT2 is
accessed twice. If these tables are big, and they usually are, the performance
can be very slow. On the other hand, if Store_ID, Store_Desc, Region_ID, and
Region_Desc are picked up in the intermediate passes, there is no need to join
FT1 or FT2 in the final pass, thus boosting performance.

For this reason, the following two properties are available in MicroStrategy:

l Attribute Form Selection Option for Intermediate Pass

l Attribute Selection Option for Intermediate Pass

l These properties use bigger (wider) intermediate tables to save
additional joins in the final pass, trading space for time.

l These two properties work independently. One does not influence the
other.

l Each property has two values. The default behavior is the original
algorithm.

l When the property is enabled:

l The SQL Engine selects additional attributes or attribute forms in the
intermediate pass, when they are directly available.

l The SQL Engine does not join additional tables to select more attributes
or forms. So for intermediate passes, the number of tables to be joined
is the same as when the property is disabled.

Attribute Form Selection Option for Intermediate Pass

The Attribute Form Selection Option for Intermediate Pass property
determines whether or not the SQL Engine selects the needed attribute
forms in the intermediate passes, if available. See the description above for
more detailed information.

Levels at Which You Can Set This

Database instance, report, and template


Attribute Selection Option for Intermediate Pass


The Attribute Selection Option for Intermediate Pass property determines
whether or not the SQL Engine selects additional attributes (usually parent
attributes) needed on the template, other than the needed join ID column in
the intermediate passes. See the description above for more detailed
information.

Levels at Which You Can Set This

Database instance, report, and template

Bulk Insert String


Bulk Insert String is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Bulk Insert String property inserts the string provided in front of the
INSERT statement. For Teradata, this property is set to ";" to increase query
performance. The string is inserted only for INSERT INTO SELECT
statements and not the INSERT INTO VALUES statements that are generated
by the Analytical Engine. Since the string is inserted for the INSERT INTO
SELECT statement, this property takes effect only during explicit,
permanent, or temporary table creation.

Levels at Which You Can Set This

Database instance, report, and template

Bulk Insert String = ;
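As a minimal sketch of the rule described above, the following Python helper illustrates the string manipulation only; it is not MicroStrategy's actual implementation:

```python
def apply_bulk_insert_string(sql, prefix=";"):
    """Prefix INSERT INTO ... SELECT statements (but not
    INSERT INTO ... VALUES) with the configured Bulk Insert String.
    Illustrative only."""
    s = sql.lstrip()
    u = s.upper()
    before_select = u.split("SELECT", 1)[0] if "SELECT" in u else ""
    is_insert_select = (u.startswith("INSERT INTO")
                        and "SELECT" in u
                        and "VALUES" not in before_select)
    return prefix + s if is_insert_select else s

print(apply_bulk_insert_string("insert into T1 select * from T2"))
# ;insert into T1 select * from T2
print(apply_bulk_insert_string("insert into T1 values (1)"))
# insert into T1 values (1)
```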

Constant Column Mode


Constant Column Mode is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.


Constant Column Mode allows you to choose whether or not to use a
GROUP BY and how the GROUP BY should be constructed when working
with a column that is a constant. The GROUP BY can be constructed with the
column, alias, position numbers, or column expression. Most users do not
need to change this setting. It is available to be used with the new Generic
DBMS object and if you want to use a different GROUP BY method when
working with constant columns.

Levels at Which You Can Set This

Database instance, report, and template

Pure select, no GROUP BY (default)

insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID

Use max, no GROUP BY

insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, MAX(0) XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, MAX(1) XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID

GROUP BY column (expression)


insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID, 0
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID, 1

GROUP BY alias

insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID, XKYCGT
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID, XKYCGT

GROUP BY position

insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID, 2
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID, 2
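The "Use max, no GROUP BY" variant can be verified on a throwaway sqlite3 database (sample rows invented): wrapping the constant in MAX() turns it into an aggregate, so only QUARTER_ID has to appear in the GROUP BY.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SALES_Q1_2002 (QUARTER_ID, REG_SLS_DLR);
INSERT INTO SALES_Q1_2002 VALUES (200201, 10), (200201, 15), (200202, 20);
""")

# MAX(0) makes the constant XKYCGT column an aggregate, so it does not
# need to be repeated in the GROUP BY clause.
rows = con.execute("""
SELECT QUARTER_ID, MAX(0) AS XKYCGT, SUM(REG_SLS_DLR) AS WJXBFS1
FROM SALES_Q1_2002
GROUP BY QUARTER_ID
ORDER BY QUARTER_ID
""").fetchall()
print(rows)  # [(200201, 0, 25), (200202, 0, 20)]
```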

Custom Group Interaction with the Report Filter


The Custom Group Interaction With the Report Filter VLDB property allows
you to define how a report filter interacts with a custom group.

When a custom group that is created using attribute qualifications is
included on a report with a report filter, the report filter is applied to the
individual custom group elements. However, if you create a custom group
using metric qualifications or banding qualifications, report filters are not
applied by default to the custom group elements.

This can cause unexpected results to be returned in some scenarios. For
example, a custom group displays revenue totals for customers in columns
that represent the range of revenue that the customer is in. A customer that
has contributed $7,500 in revenue displays this revenue total in the column
for customers that contributed $5,000 to $10,000 in revenue. This custom
group is included on a report that has a report filter that restricts results to
data for the year 2007 only.

In this scenario, the report filter is evaluated after the custom group. If the
same customer that has a total of $7,500 only had $2,500 in 2007, then the
report would only display $2,500 for that customer. However, the customer
would still be in the $5,000 to $10,000 in revenue range because the custom
group did not account for the report filter.

You can define report filter and custom group interaction to avoid this
scenario. This VLDB property has the following options:

l No interaction - static custom group (default): Report filter
qualifications are not applied to custom groups that use metric
qualifications or banding qualifications. Filtering is only applied after the
custom group has been evaluated.

l Apply report filter to custom group: Report filter qualifications are
applied to custom groups and are used to determine the values for each
custom group element.

l Apply report filter to custom group, but ignore related elements from
the report filter: Report filter qualifications that do not qualify on attribute
elements that are used to define the custom group elements are applied to
custom groups. These filter qualifications are used to determine the values
for each custom group element. For example, a report filter that qualifies
on the Customer attribute is not applied to a custom group that also uses
the Customer attribute to define its custom group elements.
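The revenue-banding scenario above can be sketched in Python; the order rows and band boundaries are invented for illustration:

```python
# Revenue bands evaluated with and without the 2007 report filter
# applied to the custom group.
orders = [
    {"customer": "A", "year": 2006, "revenue": 5000},
    {"customer": "A", "year": 2007, "revenue": 2500},
]

def band(total):
    # Hypothetical banding qualification.
    return "$5,000-$10,000" if 5000 <= total < 10000 else "under $5,000"

# No interaction (default): the band is computed from lifetime revenue,
# while the displayed value is still filtered to 2007.
lifetime = sum(o["revenue"] for o in orders if o["customer"] == "A")
static_band = band(lifetime)    # banded on $7,500

# Apply report filter to custom group: the band itself is computed from
# 2007 revenue only.
filtered = sum(o["revenue"] for o in orders
               if o["customer"] == "A" and o["year"] == 2007)
dynamic_band = band(filtered)   # banded on $2,500
print(static_band, dynamic_band)
```

The same customer lands in the $5,000-$10,000 band under the default option but in the under-$5,000 band once the report filter participates in the banding.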


For information on custom groups and defining these options for a custom
group, see the Advanced Reporting Help.

Levels at Which You Can Set This

Database instance

Data Retrieval Mode


The Data Retrieval Mode VLDB property determines whether data is
retrieved using third-party, native APIs. You have the following options:

l Only ODBC: Standard methods are used to retrieve data. This option must
be used in all cases, except for connections that are expected to make use
of the Teradata Parallel Transporter API.

l Allow Native API: Third-party native APIs can be used to retrieve data.
MicroStrategy supports the use of the Teradata Parallel Transporter API.
Enabling Teradata Parallel Transporter can improve performance when
retrieving large amounts of data from Teradata, typically 1 Gigabyte and
larger, which can occur most commonly in MicroStrategy when publishing
Intelligent Cubes.

Using MicroStrategy Web, you can create a connection to Teradata and
import your data. When creating this connection, enabling the Teradata
Parallel Transporter options automatically defines this VLDB property as
Allow Native API for the connection. For steps to create this type of
connection in MicroStrategy Web, see the Web User Help.

You can also select this VLDB property option for the database instance
for Teradata connections that are not created through the use of Data
Import.

Levels at Which You Can Set This

Database instance and report


Data Retrieval Parameters


The Data Retrieval Parameters VLDB property defines the parameters used
to retrieve data using third-party, native APIs.

For this VLDB property to take effect, you must define the Data Retrieval
Mode VLDB property (see Data Retrieval Mode, page 1878) as Allow
Native API. You can then define the required parameters to retrieve data
using the third-party, native API. For example, you can enable Teradata
Parallel Transporter by defining the following parameters:

l TD_TDP_ID: The name or IP address of the machine on which the
Teradata data source resides.

l TD_MAX_SESSIONS: The maximum number of sessions that can be used
to log on to the Teradata database when processing queries in parallel. By
default, one session per Access Module Processor (AMP) is used, which is
also the maximum number of sessions that can be supported. Type a value
to allow fewer sessions than the number of available AMPs.

l TD_MIN_SESSIONS: The minimum number of sessions required for the
export driver job to complete its processes. The default is one session.
This value must be less than or equal to the TD_MAX_SESSIONS value.

l TD_MAX_INSTANCES: The maximum number of threads that can be used.
This option can be defined if the driver has been configured as a master
and slave environment that allows for multiple threads. This value must be
less than or equal to the TD_MAX_SESSIONS value, as a thread can
include one or more sessions.

l You can include any additional parameters to apply to the connection.

When providing the parameters and their values, each parameter must be of
the form:

ParameterName=ParameterValue


Separate each parameter definition with a semicolon (;). An example of the
full definition of this VLDB property is provided below:

TD_TDP_ID=123.45.67.89;TD_MAX_SESSIONS=3;TD_MIN_SESSIONS=1;TD_
MAX_INSTANCES=3
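For illustration, a small Python helper that splits such a parameter string into name/value pairs; this mimics the documented syntax and is not MicroStrategy's parser:

```python
def parse_params(s):
    """Split a 'NAME=value;NAME=value;...' string into a dict."""
    return dict(part.split("=", 1) for part in s.split(";") if part)

params = parse_params(
    "TD_TDP_ID=123.45.67.89;TD_MAX_SESSIONS=3;"
    "TD_MIN_SESSIONS=1;TD_MAX_INSTANCES=3"
)
print(params["TD_MAX_SESSIONS"])  # 3
```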

Using MicroStrategy Web, you can create a connection to Teradata and
import your data. When creating this connection, enabling the Teradata
Parallel Transporter options prompts you for this information and
automatically updates the VLDB property as required. For steps to create
this type of connection in MicroStrategy Web, see the MicroStrategy Web
Help.

You can also define this VLDB property for the database instance for
Teradata connections that are not created through the use of Data Import.

Levels at Which You Can Set This

Database instance and report

Data Mart Column Order


This property allows you to determine the order in which data mart columns
are created when you configure a data mart from the information in the
columns and rows of a report.

You can set this property to either of the following options:

l Columns created in order based on attribute weight (default): Data
mart columns are created in an order based on their attribute weights. For
more information about attribute weights, see Default Attribute Weight,
page 1882.

l Columns created in order in which they appear on the template: Data
mart columns are created in the same order as they appear on the report
template.


Levels at Which You Can Set This

Database instance, report, and template

Date Format
The Date Format VLDB property specifies the format of the date string literal
in the SQL statements when date-related qualifications are present in the
report.

Levels at Which You Can Set This

Database instance, report, and template

Default yyyy-mm-dd

Oracle dd-mmm-yy

Teradata yyyy/mm/dd
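For illustration, rough strftime equivalents of the listed defaults; the yyyy-mm-dd style tokens are MicroStrategy's, not Python's:

```python
from datetime import date

formats = {
    "yyyy-mm-dd": "%Y-%m-%d",   # generic default
    "dd-mmm-yy": "%d-%b-%y",    # Oracle (month name is locale-dependent)
    "yyyy/mm/dd": "%Y/%m/%d",   # Teradata
}
d = date(2021, 8, 31)
rendered = {pattern: d.strftime(fmt) for pattern, fmt in formats.items()}
print(rendered["yyyy-mm-dd"])  # 2021-08-31
```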

Date Pattern
Date Pattern is an advanced VLDB property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Date Pattern VLDB property is used to add or alter a syntax pattern for
handling date columns.

Levels at Which You Can Set This

Database instance, report, and template

Default No extra syntax pattern for handling dates

Oracle To_Date ('#0')

Tandem (d'#0')


Decimal Separator
The Decimal Separator VLDB property specifies whether a "." or "," is used
as a decimal separator. This property is used for non-English databases that
use commas as the decimal separator.

Levels at Which You Can Set This

Database instance, report, and template

"." as the decimal separator (default)

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
into #ZZTIS00H5K4MQ000
from HARI_COST_STORE_DEP a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.COST_AMT) > 654.357

"," as the decimal separator

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
into #ZZTIS00H5K5MQ000
from HARI_COST_STORE_DEP a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.COST_AMT) > 654,357

Default Attribute Weight


The Default Attribute Weight is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

Use the Default Attribute Weight property to determine how attribute weights
should be treated, for those attributes that are not in the attribute weights
list.


You can access the attribute weights list from the Project Configuration
Editor. In the Project Configuration Editor, expand Report Definition and
select SQL generation. From the Attribute weights section, click Modify to
open the attribute weights list.

The attribute weights list allows you to change the order of attributes used in
the SELECT clause of a query. For example, suppose the Region attribute is
placed higher on the attribute weights list than the Customer State attribute.
When the SQL for a report containing both attributes is generated, Region is
referenced in the SQL before Customer State. However, suppose another
attribute, Quarter, also appears on the report template but is not included in
the attribute weights list.

In this case, you can select either of the following options within the Default
Attribute Weight property to determine whether Quarter is considered
highest or lowest on the attribute weights list:

l Lowest: When you select this option, those attributes not in the attribute
weights list are treated as the lightest weight. Using the example above,
with this setting selected, Quarter is considered to have a lighter attribute
weight than the other two attributes. Therefore, it is referenced after
Region and Customer State in the SELECT statement.

l Highest (default): When you select this option, those attributes not in the
attribute weights list are treated as the highest weight. Using the example
above, with this setting selected, Quarter is considered to have a higher
attribute weight than the other two attributes. Therefore, it is referenced
before Region and Customer State in the SELECT statement.
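The ordering rule described above can be sketched as follows. The order_for_select helper and its names are hypothetical, for illustration only; listed attributes keep their list order, and unlisted attributes sort before or after them depending on the option.

```python
# Illustrative sketch of the Default Attribute Weight rule: attributes on the
# weights list are ordered by their position; attributes missing from the list
# are treated as either the highest or the lowest weight.
def order_for_select(attributes, weights_list, default="highest"):
    def key(attr):
        if attr in weights_list:
            return (1, weights_list.index(attr))
        # Unlisted attributes sort before (highest) or after (lowest) the rest.
        return (0, 0) if default == "highest" else (2, 0)
    return sorted(attributes, key=key)

weights = ["Region", "Customer State"]          # Region outweighs Customer State
on_report = ["Customer State", "Quarter", "Region"]
print(order_for_select(on_report, weights, "highest"))
# ['Quarter', 'Region', 'Customer State']
print(order_for_select(on_report, weights, "lowest"))
# ['Region', 'Customer State', 'Quarter']
```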

Levels at Which You Can Set This

Database instance only

Disable Prefix in WH Partition Table


The Disable Prefix in WH Partition Table is an advanced property that is
hidden by default. For information on how to display this property, see


Viewing and Changing Advanced VLDB Properties, page 1630.

This property allows you to provide better support of warehouse partitioning
in a distributed database environment.

In a distributed database environment, different tables can have different
prefixes. This is also true for partitioning. On one hand, the partition-
mapping table (PMT) may have a different prefix from the partition base
table (PBT). On the other hand, each PBT may need its own prefix. In
MicroStrategy 6.x and earlier, this is achieved by adding one additional
column (DDBSOURCE) in the PMT to indicate which table source (prefix) to
use. MicroStrategy 7.x and later uses metadata (MD) partitioning and
warehouse (WH) partitioning. MD partitioning can handle distributed
databases easily, because the metadata contains the PMT as well as the
PBT. For WH partitioning, it only has the PMT in the metadata, so it can only
set prefixes on the PMT. Currently, this prefix is shared by both the PMT and
the PBT. In other words, both the partition prequery (using PMT) and the
partition query (using PBT) use the same prefix.

For those projects that need their own prefix in the PBT, the MicroStrategy
6.x approach (using the DDBSOURCE column) no longer works due to
architectural changes. The solution is to store the prefix along with the PBT
name in the column PBTNAME of the partition mapping table. So instead of
storing PBT1, PBT2, and so on, you can put in DB1.PBT1, DB2.PBT2, and
so on. This effectively adds a different prefix to different PBTs by treating
the entire string as the partition base table name.

The solution above works in most cases but does not work if the PMT needs
its own prefix. For example, if the PMT has the prefix "DB0.", the prequery
works fine. However, in the partition query, this prefix is added to what is
stored in the PBTNAME column, so it gets DB0.DB1.PBT1, DB0.DB1.PBT2,
and so on. This is not what you want to happen. This new VLDB property is
used to disable the prefix in the WH partition table. When this property is
turned on, the partition query no longer shares the prefix from the PMT.
Instead, the PBTNAME column (DB1.PBT1, DB2.PBT2, and so on) is used
as the full PBT name.


Even when this property is turned ON, the partition prequery still applies a
prefix, if there is one.

Levels at Which You Can Set This

Database instance, report, and template

Distinct/Group by Option (When No Aggregation and Not Table Key)
The Distinct/Group by Option property controls the generation of DISTINCT
or GROUP BY in the SELECT SQL statement. You can select from the
following options:

l Use DISTINCT (default)

l No DISTINCT, no GROUP BY

l Use GROUP BY

If you are using a Vertica database that includes correlated subqueries, to support the
use of the Use GROUP BY option listed above, you must also define the Sub Query
Type VLDB property (see Optimizing Queries, page 1791) to use either of the following
options:

Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery

Upon selecting an option, a sample SQL statement shows the effect that
each option has.

The SQL Engine ignores the option selected for this property in the following
situations:

l If there is aggregation, GROUP BY is used without the use of DISTINCT.

l If there is no attribute (only metrics), DISTINCT is not used.


l If there is COUNT (DISTINCT …) and the database does not support this
functionality, a SELECT DISTINCT pass of SQL is used, which is followed
by a COUNT(*) pass of SQL.

l If the database does not allow DISTINCT or GROUP BY for certain column
data types, DISTINCT and GROUP BY are not used.

l If the select level is the same as the table key level and the table's true
key property is selected, DISTINCT is not used.

When none of the above conditions are met, the option selected for this
property determines how DISTINCT and GROUP BY are used in the SQL
statement.
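The equivalence that makes this option a matter of syntax, when no aggregation is present, can be checked with any SQL database. The following sketch uses SQLite with illustrative table and column names rather than a warehouse database.

```python
import sqlite3

# When no aggregation is present, SELECT DISTINCT and GROUP BY over the same
# columns return the same row set; the VLDB option only picks the syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cost (department_nbr INT, store_nbr INT)")
conn.executemany("INSERT INTO cost VALUES (?, ?)",
                 [(1, 10), (1, 10), (2, 10), (2, 20)])
distinct_rows = conn.execute(
    "SELECT DISTINCT department_nbr, store_nbr FROM cost ORDER BY 1, 2").fetchall()
group_by_rows = conn.execute(
    "SELECT department_nbr, store_nbr FROM cost "
    "GROUP BY department_nbr, store_nbr ORDER BY 1, 2").fetchall()
assert distinct_rows == group_by_rows == [(1, 10), (2, 10), (2, 20)]
```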

Levels at Which You Can Set This

Database instance, report, and template

GROUP BY ID Attribute
The GROUP BY ID Attribute is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.

This property determines how to group by a selected ID column when an
expression is performed on the ID expression. Each of the options is
described below.

The code fragment following each description replaces the section named
group by ID in the following sample SQL statement.

select a22.STORE_NBR STORE_NBR,a22.MARKET_NBR * 10 MARKET_ID,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21
join LOOKUP_STORE a22
on (a21.STORE_NBR = a22.STORE_NBR)
where a22.STORE_NBR = 1
group by a22.STORE_NBR, [group by ID]

The options for this property are:


l Group by expression (default): Group by the expression performed in the
SELECT statement on the ID column.

a22.MARKET_NBR * 10

l Group by alias: Group by the expression alias in the Select statement.

MARKET_ID

l Group by column: Group by the column ID, ignoring the expression
performed on the ID column.

a22.MARKET_NBR

l Group by position: Group by the physical table position of the ID column.
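For illustration, the first three options can be compared against SQLite, which accepts all three GROUP BY spellings. For a one-to-one expression such as MARKET_NBR * 10, the grouped results are identical; the table and data below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup_store (store_nbr INT, market_nbr INT)")
conn.executemany("INSERT INTO lookup_store VALUES (?, ?)",
                 [(1, 3), (2, 3), (3, 4)])
# Group by the expression, by its alias, and by the underlying column; for a
# one-to-one expression the three forms produce the same groups.
by_expr = conn.execute("SELECT market_nbr * 10 AS market_id FROM lookup_store "
                       "GROUP BY market_nbr * 10 ORDER BY 1").fetchall()
by_alias = conn.execute("SELECT market_nbr * 10 AS market_id FROM lookup_store "
                        "GROUP BY market_id ORDER BY 1").fetchall()
by_column = conn.execute("SELECT market_nbr * 10 AS market_id FROM lookup_store "
                         "GROUP BY market_nbr ORDER BY 1").fetchall()
assert by_expr == by_alias == by_column == [(30,), (40,)]
```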

Levels at Which You Can Set This

Database instance, report, and template

GROUP BY Non-ID Attribute


The GROUP BY Non-ID Attribute property controls whether or not non-ID
attribute forms—like descriptions—are used in the GROUP BY. If you do not
want non-ID columns in the GROUP BY, you can choose to use a MAX when
the column is selected so that it is not used in the GROUP BY.

Levels at Which You Can Set This

Database instance, report, and template

Use Max (default)

select a11.MARKET_NBR MARKET_NBR,
max(a14.MARKET_DESC) MARKET_DESC,
a11.CLASS_NBR CLASS_NBR,
max(a13.CLASS_DESC) CLASS_DESC,
a12.YEAR_ID YEAR_ID,
max(a15.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from MARKET_CLASS a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join LOOKUP_CLASS a13
on (a11.CLASS_NBR = a13.CLASS_NBR)
join LOOKUP_MARKET a14
on (a11.MARKET_NBR = a14.MARKET_NBR)
join LOOKUP_YEAR a15
on (a12.YEAR_ID = a15.YEAR_ID)
group by a11.MARKET_NBR, a11.CLASS_NBR,
a12.YEAR_ID

Use Group by

select a11.MARKET_NBR MARKET_NBR,
a14.MARKET_DESC MARKET_DESC,
a11.CLASS_NBR CLASS_NBR,
a13.CLASS_DESC CLASS_DESC,
a12.YEAR_ID YEAR_ID,
a15.YEAR_DESC YEAR_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from MARKET_CLASS a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join LOOKUP_CLASS a13
on (a11.CLASS_NBR = a13.CLASS_NBR)
join LOOKUP_MARKET a14
on (a11.MARKET_NBR = a14.MARKET_NBR)
join LOOKUP_YEAR a15
on (a12.YEAR_ID = a15.YEAR_ID)
group by a11.MARKET_NBR,
a14.MARKET_DESC,
a11.CLASS_NBR,
a13.CLASS_DESC,
a12.YEAR_ID,
a15.YEAR_DESC
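The two alternatives above return the same rows whenever the description column is functionally dependent on the ID column. The following SQLite sketch, with illustrative tables and data, checks this equivalence.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup_market (market_nbr INT, market_desc TEXT)")
conn.execute("CREATE TABLE sales (market_nbr INT, tot_sls_dlr REAL)")
conn.executemany("INSERT INTO lookup_market VALUES (?, ?)",
                 [(1, "North"), (2, "South")])
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 100.0), (1, 50.0), (2, 75.0)])
# Option 1: wrap the description in MAX and group only by the ID column.
use_max = conn.execute(
    "SELECT s.market_nbr, MAX(m.market_desc), SUM(s.tot_sls_dlr) "
    "FROM sales s JOIN lookup_market m ON s.market_nbr = m.market_nbr "
    "GROUP BY s.market_nbr ORDER BY 1").fetchall()
# Option 2: include the description column in the GROUP BY.
use_group_by = conn.execute(
    "SELECT s.market_nbr, m.market_desc, SUM(s.tot_sls_dlr) "
    "FROM sales s JOIN lookup_market m ON s.market_nbr = m.market_nbr "
    "GROUP BY s.market_nbr, m.market_desc ORDER BY 1").fetchall()
assert use_max == use_group_by == [(1, "North", 150.0), (2, "South", 75.0)]
```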

Insert Post String


The Insert Post String property allows you to define a custom string to be
inserted at the end of the INSERT statements.

The # character is a special token that is used in various patterns and is
treated differently than other characters. One single # is absorbed and two
# are reduced to a single #. For example, to show three # characters in a
statement, enter six # characters in the code. You can get any desired
string with the right number of # characters. Using the # character is the
same as using the ; character.
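One possible reading of this escaping rule can be sketched as follows. The unescape_hashes helper is hypothetical, and treating a lone # as a ; separator is an interpretation of the text above, not confirmed product behavior.

```python
# Hypothetical sketch of the # escaping rule: "##" collapses to a literal "#",
# while a lone "#" behaves like a ";" statement separator.
def unescape_hashes(raw: str) -> str:
    out, i = [], 0
    while i < len(raw):
        if raw.startswith("##", i):
            out.append("#")      # two # become one literal #
            i += 2
        elif raw[i] == "#":
            out.append(";")      # a single # acts as the ; separator
            i += 1
        else:
            out.append(raw[i])
            i += 1
    return "".join(out)

assert unescape_hashes("######") == "###"   # six # yield three literal #
```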

Levels at Which You Can Set This

Database instance, report, and template

Insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1=A2.COL1 and A2.COL4=A3.COL5 /* Insert Post String */

Insert Table Option


The Insert Table Option property allows you to define a custom string to be
inserted after the table name in INSERT statements. This is analogous to the
Table Option property.

The # character is a special token that is used in various patterns and is
treated differently than other characters. One single # is absorbed and two
# are reduced to a single #. For example, to show three # characters in a
statement, enter six # characters in the code. You can get any desired
string with the right number of # characters. Using the # character is the
same as using the ; character.

Levels at Which You Can Set This

Database instance, report, and template

Insert into TABLENAME /* Insert Table Option */
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5


Long Integer Support


Long integer support is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

With this VLDB property you can determine whether long integers are
mapped to a BigInt data type when MicroStrategy creates tables in the
database. A data mart is an example of a MicroStrategy feature that requires
MicroStrategy to create tables in a database.

When long integers from databases are integrated into MicroStrategy, the
Big Decimal data type is used to define the data in MicroStrategy. Long
integers can be of various database data types such as Number, Decimal,
and BigInt.

In the case of BigInt, when data that uses the BigInt data type is integrated
into MicroStrategy as a Big Decimal, this can cause a data type mismatch
when MicroStrategy creates a table in the database. MicroStrategy does not
use the BigInt data type by default when creating tables. This can cause a
data type mismatch between the originating database table that contained
the BigInt and the database table created by MicroStrategy.

You can use the following VLDB settings to support BigInt data types:

l Do not use BigInt (default): Long integers are not mapped as BigInt data
types when MicroStrategy creates tables in the database. This is the
default behavior.

If you use BigInt data types, this can cause a data type mismatch between
the originating database table that contained the BigInt and the database
table created by MicroStrategy.

l Up to 18 digits: Long integers that have up to 18 digits are converted into
BigInt data types.


This setting is a good option if you can ensure that your BigInt data uses
no more than 18 digits. The maximum number of digits that a BigInt can
use is 19. With this option, if your database contains BigInt data that uses
all 19 digits, it is not mapped as a BigInt data type when MicroStrategy
creates a table in the database.

However, using this setting requires you to manually modify the column
data type mapped to your BigInt data. You can achieve this by creating a
column alias for the column of data in the Attribute Editor or Fact Editor in
MicroStrategy. The column alias must have a data type of Big Decimal, a
precision of 18, and a scale of zero. For steps to create a column alias to
modify a column data type, see the Project Design Help.

l Up to 19 digits: Long integers that have up to 19 digits are converted into
BigInt data types.

Using this option enables BigInt data that uses up to 19 digits to be
correctly mapped as a BigInt data type when MicroStrategy creates
tables in the database. This option does not require you to create a
column alias.

However, this option can cause an overflow error if you have long integers
that use exactly 19 digits and whose value is greater than the maximum
allowed for a BigInt (9,223,372,036,854,775,807).
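The 18 versus 19 digit boundary follows from BigInt being a signed 64-bit integer, which can be verified directly; the sample values below are illustrative.

```python
# BigInt is a signed 64-bit integer, so any 18-digit value fits, while a
# 19-digit value may overflow.
BIGINT_MAX = 9_223_372_036_854_775_807        # 2**63 - 1, a 19-digit number
assert BIGINT_MAX == 2**63 - 1
assert len(str(BIGINT_MAX)) == 19
largest_18_digit = 10**18 - 1                 # always safe as a BigInt
assert largest_18_digit <= BIGINT_MAX
overflowing_19_digit = 9_300_000_000_000_000_000
assert overflowing_19_digit > BIGINT_MAX      # would not fit in a BigInt
```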

Levels at Which You Can Set This

Database instance, report, and template

Max Digits in Constant


Max Digits in Constant is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.


The Max Digits in Constant property controls the number of significant digits
that get inserted into columns during Analytical Engine inserts. This is only
applicable to real numbers and not to integers.

Levels at Which You Can Set This

Database instance only

Database-specific setting:

SQL Server: 28

Teradata: 18

Max Const Digits = 0

Insert into #ZZTIS00H6WQMD001 values (4, 339515.0792)

Max Const Digits = 2

Insert into #ZZTIS00H6WTMD001 values (4, 33)

Max Const Digits = 7

Insert into #ZZTIS00H6WVMD001 values (4, 339515.0)
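The examples above suggest that the literal is truncated, not rounded, to the first n significant digits. The helper below is a hypothetical reconstruction of that behavior, shown only to reproduce the sample values; sign and exponent handling are not considered.

```python
# Hypothetical reconstruction of the truncation shown above: keep only the
# first n significant digits of the literal (0 means no truncation).
def truncate_constant(literal: str, max_digits: int) -> str:
    if max_digits == 0:
        return literal
    out, digits_seen = [], 0
    for ch in literal:
        if ch.isdigit():
            if digits_seen == max_digits:
                break
            digits_seen += 1
        out.append(ch)
    return "".join(out).rstrip(".")  # drop a trailing decimal point

assert truncate_constant("339515.0792", 0) == "339515.0792"
assert truncate_constant("339515.0792", 2) == "33"
assert truncate_constant("339515.0792", 7) == "339515.0"
```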

Merge Same Metric Expression Option


The Merge Same Metric Expression Option VLDB property allows you to
determine whether the SQL Engine should merge metrics that have the same
definition, or whether it should process the metrics separately. If you do not
want metrics with identical definitions to be merged, select Do not merge
same metric expression.

Levels at Which You Can Set This

Database instance, report, and template


Select Post String


The Select Post String property allows you to define a custom string to be
inserted at the end of all SELECT statements generated by the Analytical
Engine.

To include a post string only on the final SELECT statement, use the
Select Statement Post String VLDB property, which is described in
Select Statement Post String, page 1893.

Levels at Which You Can Set This

Database instance, report, and template

The SQL statement shown below displays an example of where the Select
Post String and Select Statement Post String VLDB properties would include
their SQL statements.

with gopa1 as (select a12.REGION_ID REGION_ID
from CITY_CTR_SLS a11
join LU_CALL_CTR a12
on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
group by a12.REGION_ID
having sum(a11.TOT_UNIT_SALES) = 7.0
/* select post string */)select
a11.REGION_ID REGION_ID,
a14.REGION_NAME REGION_NAME0,
sum(a11.TOT_DOLLAR_SALES) Revenue
from STATE_SUBCATEG_REGION_SLS a11
join gopa1 pa12
on (a11.REGION_ID = pa12.REGION_ID)
join LU_SUBCATEG a13
on (a11.SUBCAT_ID = a13.SUBCAT_ID)
join LU_REGION a14
on (a11.REGION_ID = a14.REGION_ID)
where a13.CATEGORY_ID in (2)
group by a11.REGION_ID,
a14.REGION_NAME/* select post string */
/* select statement post string */

Select Statement Post String


The Select Statement Post String VLDB property allows you to define a
custom SQL string to be inserted at the end of the final SELECT statement.


This can be helpful if you use common table expressions with an IBM DB2
database. These common table expressions do not support certain custom
SQL strings. This VLDB property allows you to apply the custom SQL string
to only the final SELECT statement which does not use a common table
expression.

Levels at Which You Can Set This

Database instance, report, and template

The SQL statement shown below displays an example of where the Select
Post String and Select Statement Post String VLDB properties include their
SQL statements.

with gopa1 as
(select a12.REGION_ID REGION_ID
from CITY_CTR_SLS a11
join LU_CALL_CTR a12
on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
group by a12.REGION_ID
having sum(a11.TOT_UNIT_SALES) = 7.0
/* select post string */)select
a11.REGION_ID REGION_ID,
a14.REGION_NAME REGION_NAME0,
sum(a11.TOT_DOLLAR_SALES) Revenue
from STATE_SUBCATEG_REGION_SLS a11
join gopa1 pa12
on (a11.REGION_ID = pa12.REGION_ID)
join LU_SUBCATEG a13
on (a11.SUBCAT_ID = a13.SUBCAT_ID)
join LU_REGION a14
on (a11.REGION_ID = a14.REGION_ID)
where a13.CATEGORY_ID in (2)
group by a11.REGION_ID,
a14.REGION_NAME/* select post string */
/* select statement post string */

SQL Hint
The SQL Hint property is used for the Oracle SQL Hint pattern. This string is
placed after the SELECT word in the Select statement. This property can be
used to insert any SQL string that makes sense after the SELECT in a Select
statement, but it is provided specifically for Oracle SQL Hints.


Levels at Which You Can Set This

Database instance, report, and template

SQL Hint = /*+ FULL */

Select /*+ FULL */ A1.STORE_NBR,
max(A1.STORE_DESC)
From LOOKUP_STORE A1
Where A1.STORE_NBR = 1
Group by A1.STORE_NBR

SQL Time Format


The SQL Time Format property allows you to determine the format of the
time literal accepted in SQL statements. This is a database-specific
property; some examples are shown in the table below.

Example

Database Type          Time Format
Default                yyyy-mm-dd hh:nn:ss
Microsoft SQL Server   mm/dd/yyyy hh:nn:ss
Oracle                 mm/dd/yyyy hh:nn:ss
Sybase IQ              hh:nn:ss:lll

Levels at Which You Can Set This

Database instance, template, and report

Timestamp Format
The Timestamp Format property allows you to determine the format of the
timestamp literal accepted in the WHERE clause. This is a database-specific
property; some examples are shown in the table below.


Levels at Which You Can Set This

Database instance, report, and template

Example

Database Type   Timestamp Format
Default         yyyy-mm-dd hh:nn:ss
DB2             yyyy-mm-dd-hh.nn.ss
RedBrick        mm/dd/yyyy hh:nn:ss
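For illustration, the format tokens used in these patterns (yyyy, mm, dd, hh, nn, ss, where nn denotes minutes) can be translated to strftime directives. The mapping below is an assumption for demonstration purposes, not product code.

```python
from datetime import datetime

# Illustrative translation table from the guide's format tokens to strftime
# directives; "nn" is minutes.
TOKEN_MAP = {"yyyy": "%Y", "mm": "%m", "dd": "%d",
             "hh": "%H", "nn": "%M", "ss": "%S"}

def render(pattern: str, when: datetime) -> str:
    for token, directive in TOKEN_MAP.items():
        pattern = pattern.replace(token, directive)
    return when.strftime(pattern)

ts = datetime(2024, 9, 30, 17, 5, 9)
assert render("yyyy-mm-dd hh:nn:ss", ts) == "2024-09-30 17:05:09"
assert render("yyyy-mm-dd-hh.nn.ss", ts) == "2024-09-30-17.05.09"   # DB2 style
```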

UNION Multiple INSERT


The Union Multiple Insert property allows the Analytical Engine to UNION
multiple INSERT statements into the same temporary table. This is a
database-specific property. Some databases do not support the use of
Unions.

Levels at Which You Can Set This

Database instance, report, and template

Databases Automatically Set to Use Union


l DB2 UDB

l SQL Server

l Teradata
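The effect of the property can be sketched with SQLite: several single-row INSERT passes and one UNIONed INSERT ... SELECT pass populate identical tables. The table names and data below are illustrative only.

```python
import sqlite3

# With the property enabled, several single-row inserts can be collapsed into
# one INSERT ... SELECT ... UNION ALL pass; the resulting table is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tmp_a (id INT, amt REAL)")
conn.execute("CREATE TABLE tmp_b (id INT, amt REAL)")
for row in [(1, 10.0), (2, 20.0)]:
    conn.execute("INSERT INTO tmp_a VALUES (?, ?)", row)   # one pass per row
conn.execute("INSERT INTO tmp_b "
             "SELECT 1, 10.0 UNION ALL SELECT 2, 20.0")    # single UNIONed pass
a = conn.execute("SELECT * FROM tmp_a ORDER BY id").fetchall()
b = conn.execute("SELECT * FROM tmp_b ORDER BY id").fetchall()
assert a == b == [(1, 10.0), (2, 20.0)]
```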

Use Column Type Hint for Parameterized Query


Use Column Type Hint for Parameterized Query is an advanced property
that is hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Use Column Type Hint for Parameterized Query VLDB property
determines whether the WCHAR data type is used when applicable to return


data accurately while using parameterized queries. This VLDB property has
the following options:

l Disabled (default): This option is recommended unless you are
encountering the data inconsistencies described below.

l Enable ODBC Column Type Binding Hint for "WCHAR" and "CHAR":
This option should be used only if you have enabled parameterized
queries in MicroStrategy for your database and data is not being correctly
displayed on reports. This can include viewing question marks in place of
other valid characters. This can occur for Netezza databases.

By selecting this option, the WCHAR data type is used when applicable
so that the data is returned correctly while using parameterized queries.

Levels at Which You Can Set This

Database instance only

Creating and Supporting Tables with SQL: Tables


The table below summarizes the Tables VLDB properties that are available.
Additional details about each property, including examples where
necessary, are provided in the sections following the table.

Alias Pattern
    Description: Used to alter the pattern for aliasing column names. Automatically set for Microsoft Access users.
    Possible values: User-defined
    Default value: AS

Attribute ID Constraint
    Description: Defines the column constraints (for example, NULL or NOT NULL) put on the ID form of attributes.
    Possible values: User-defined
    Default value: NULL

Character Column Option and National Character Column Option
    Description: Defines how to support multiple character sets used in Teradata.
    Possible values: User-defined
    Default value: NULL

Column Pattern
    Description: Used to alter the pattern for column names.
    Possible values: User-defined
    Default value: #0.[#1]

Commit After Final Drop
    Description: Determines whether to issue a COMMIT statement after the final DROP statement.
    Possible values: No Commit after the final Drop statement; Commit after the final Drop statement
    Default value: No Commit after the final Drop statement

Commit Level
    Description: Sets when to issue a COMMIT statement after creating an intermediate table.
    Possible values: No Commit; Post DDL; Post DML; Post DDL and DML
    Default value: No Commit

CREATE and INSERT Support
    Description: Defines whether MicroStrategy can perform CREATE and INSERT statements against the database for a database instance.
    Possible values: CREATE and INSERT statements are supported; CREATE and INSERT statements are not supported
    Default value: CREATE and INSERT statements are supported

Create Post String (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: Defines the string appended after the CREATE TABLE statement.
    Possible values: User-defined
    Default value: NULL

Drop Temp Table Method
    Description: Determines when to drop an intermediate object.
    Possible values: Drop after final pass; Do nothing; Truncate table then drop after final pass
    Default value: Drop after final pass

Fallback Table Type
    Description: Determines the type of table that is generated if the Analytical Engine cannot generate a derived table or common table.
    Possible values: Permanent table; True temporary table; Fail report
    Default value: Permanent table

Hexadecimal Character Transformation
    Description: Allows string characters to be converted into specific character encoding required for some Unicode implementations.
    Possible values: Do not apply hexadecimal character transformation to quoted strings; Apply hexadecimal character transformation to quoted strings of all types; Apply hexadecimal character transformation to quoted strings of type NChar and NVarChar
    Default value: Do not apply hexadecimal character transformation to quoted strings

Intermediate Table Type
    Description: Determines the type of intermediate (temp) table to create.
    Possible values: Permanent table; Derived table; Common table expression; True temporary table; Temporary view
    Default value: Permanent table

Maximum SQL Passes Before FallBack
    Description: Determines how many passes are allowed for a report that uses intermediate tables. If a report exceeds this limit, the table type defined by the Fallback Table Type VLDB property is used for the report.
    Possible values: User-defined
    Default value: No limit

Maximum Tables in FROM Clause Before FallBack
    Description: Determines how many tables in a single FROM clause are allowed for a report that uses intermediate tables. If a report exceeds this limit, the table type defined by the Fallback Table Type VLDB property is used for the report.
    Possible values: User-defined
    Default value: No limit

National Character Column Option
    Description: Defines how to support multiple character sets used in Teradata.
    Possible values: User-defined
    Default value: NULL

Parallel SQL Execution Intermediate Table Type
    Description: Determines the type of intermediate table created when parallel query execution is used.
    Possible values: Permanent Table; Derived Table with Fallback Table Type as Permanent Table
    Default value: Permanent Table

Quoting Behavior
    Description: Controls whether a project uses unified quoting.
    Possible values: 1 (Enabled); 0 (Disabled)
    Default value: 1

Table Creation Type
    Description: Determines the method to create an intermediate table.
    Possible values: Explicit table; Implicit table
    Default value: Explicit table

Table Descriptor (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: Defines the string to be placed after the word TABLE in the CREATE TABLE statement.
    Possible values: User-defined
    Default value: NULL

Table Option (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: Defines the string to be placed after the table name in the CREATE TABLE statement.
    Possible values: User-defined
    Default value: NULL

Table Prefix (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: Defines the string to be added to a table name, for example, CREATE TABLE prefix.Tablename. (See the note below.)
    Possible values: User-defined
    Default value: NULL

Table Qualifier (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: Defines the key words placed immediately before "table." For example, CREATE volatile Table.
    Possible values: User-defined
    Default value: NULL

Table Space (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
    Description: String appended after the CREATE TABLE statement but before any Primary Index/Partition key definitions. (See the note below.)
    Possible values: User-defined
    Default value: NULL

Unified Quoting Pattern
    Description: A string pattern that controls how a specific DBMS or database instance quotes queries that are run against it.
    Possible values: User-defined
    Default value: #0

To populate dynamic information by the Analytical Engine, insert the
following syntax into Table Prefix and Table Space strings:

!d inserts the date.

!o inserts the report name.

!u inserts the user name.
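The token substitution can be sketched as follows. The expand_tokens helper, its argument names, and the ISO date format are illustrative assumptions, since the guide does not specify the exact date format inserted.

```python
from datetime import date

# Illustrative sketch of the !d/!o/!u token substitution described above.
def expand_tokens(pattern: str, report: str, user: str, today: date) -> str:
    return (pattern.replace("!d", today.isoformat())
                   .replace("!o", report)
                   .replace("!u", user))

result = expand_tokens("tmp_!u_!d.", report="Sales", user="mfouad",
                       today=date(2024, 9, 30))
assert result == "tmp_mfouad_2024-09-30."
```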

Alias Pattern
Alias Pattern is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Alias Pattern property allows you to alter the pattern for aliasing column
names. Most databases do not need this pattern, because their column
aliases follow the column name with only a space between them. However,


Microsoft Access needs an AS between the column name and the given
column alias. This pattern is automatically set for Microsoft Access users.
This property is provided for customers using the Generic DBMS object
because some databases may need the AS or another pattern for column
aliasing.

Levels at Which You Can Set This

Database instance only

Attribute ID Constraint
This property is available at the attribute level. You can access this property
by opening the Attribute Editor, selecting the Tools menu, then choosing
VLDB Properties.

When creating intermediate tables in the explicit mode, you can specify the
NOT NULL/NULL constraint during the table creation phase. This takes
effect only when permanent or temporary tables are created in the explicit
table creation mode. Furthermore, it applies only to the attribute columns in
the intermediate tables.

Levels at Which You Can Set This

Database instance and attribute

Example

NOT NULL

create table ZZTIS003HHUMQ000 (
DEPARTMENT_NBR NUMBER(10, 0) NOT NULL,
STORE_NBR NUMBER(10, 0) NOT NULL)


Character Column Option and National Character Column Option


The Character Column Option and National Character Column Option VLDB
properties allow you to support the character sets used in Teradata.
Teradata allows character sets to be defined on a column-by-column basis.
For example, one column in Teradata may use a Unicode character set,
while another column uses a Latin character set.

MicroStrategy uses two sets of data types to support multiple character sets.
The Char and VarChar data types are used to support a character set. The
NChar and NVarChar data types are used to support a different character
set than the one supported by Char and VarChar. The NChar and NVarChar
data types are commonly used to support the Unicode character set while
Char and VarChar data types are used to support another character set.

You can support the character sets in your Teradata database using these
VLDB properties:

l The Character Column Option VLDB property defines the character set
used for columns that use the MicroStrategy Char or VarChar data types.
If left empty, these data types use the default character set for the
Teradata database user.

You can define a specific character set by typing CHARACTER SET
CHARACTER_SET_NAME, where CHARACTER_SET_NAME is the name of the
character set. For example, CHARACTER SET LATIN defines
MicroStrategy's Char and VarChar data types to support the Latin
character set.

This character set definition is included in SQL statements as shown in the
example below:

CREATE TABLE text_fields (Text_Field1 VARCHAR(10) CHARACTER SET LATIN,
Text_Field2 VARCHAR(10) CHARACTER SET LATIN)


l The National Character Column Option VLDB property defines the
character set used for columns that use the MicroStrategy NChar or
NVarChar data types. If left empty, these data types use the default
character set for the Teradata database user.

You can define a specific character set by typing CHARACTER SET
CHARACTER_SET_NAME, where CHARACTER_SET_NAME is the name of the
character set. For example, CHARACTER SET UNICODE defines
MicroStrategy's NChar and NVarChar data types to support the Unicode
character set.

If you use the Unicode character set and it is not the default character set
for the Teradata database user, you should define NChar and NVarChar
data types to use the Unicode character set.

This character set definition is included in SQL statements as shown in the
example below:

CREATE TABLE text_fields (Text_Field1 VARCHAR(10) CHARACTER SET UNICODE,
Text_Field2 VARCHAR(10) CHARACTER SET UNICODE)

For example, your Teradata database uses the Latin and Unicode character
sets, and the default character set for your Teradata database is Latin. In
this scenario you should leave Character Column Option empty so that it
uses the default of Latin. You should also define National Character Column
as CHARACTER SET UNICODE so that NChar and NVarChar data types
support the Unicode data for your Teradata database.

To extend this example, assume that your Teradata database uses the Latin
and Unicode character sets, but the default character set for your Teradata
database is Unicode. In this scenario you should leave National Character
Column Option empty so that it uses the default of Unicode. You should also
define Character Column as CHARACTER SET LATIN so that Char and
VarChar data types support the Latin data for your Teradata database.
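In the first scenario above, a table that mixes both behaviors would be created with SQL resembling the following hypothetical sketch (the table and column names are illustrative, not generated by MicroStrategy):

```
CREATE TABLE mixed_fields (
    Char_Field1 VARCHAR(10),                        -- Char/VarChar: database default (Latin)
    NChar_Field1 VARCHAR(10) CHARACTER SET UNICODE  -- NChar/NVarChar: defined as Unicode
)
```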

The Character Column Option and National Character Column Option VLDB
properties can also support the scenario where two character sets are used,
and Unicode is not one of these character sets. For this scenario, you can
use these two VLDB properties to define which MicroStrategy data types
support the character sets of your Teradata database.

Levels at Which You Can Set This

Database instance only

Column Pattern
Column Pattern is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.

The Column Pattern property allows you to alter the pattern for column
names. Most databases do not need this pattern altered. However, if you are
using a case-sensitive database and need to add double quotes around the
column name, this property allows you to do that.

Levels at Which You Can Set This

Database instance only

Example

The standard column pattern is #0.#1. If double quotes are needed, the
pattern changes to:

"#0.#1"

Commit After Final Drop


The Commit After Final Drop property determines whether to issue a
COMMIT statement after the final DROP statement.


Levels at Which You Can Set This

Database instance and report
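When this property is enabled, the end of a report's SQL resembles the following hypothetical sketch (the intermediate table name is illustrative):

```
drop table ZZTIS00H8L8MQ000
commit
```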

Commit Level
The Commit Level property is used to issue COMMIT statements after the
Data Definition Language (DDL) and Data Manipulation Language (DML)
statements. When this property is used in conjunction with the INSERT MID
Statement, INSERT PRE Statement, or TABLE POST Statement VLDB
properties, the COMMIT is issued before any of the custom SQL passes
specified in the statements are executed. The only DDL statement issued
after the COMMIT is the explicit CREATE TABLE statement. A COMMIT is
also issued after DROP TABLE statements, even though DROP TABLE is a
DDL statement.

The only DML statement issued after the COMMIT is issued is the INSERT
INTO TABLE statement. If the property is set to Post DML, the COMMIT is
not issued after an individual INSERT INTO VALUES statement; instead, it is
issued after all the INSERT INTO VALUES statements are executed.

The Post DDL COMMIT option is only available if the Intermediate Table Type
VLDB property is set to Permanent tables or Temporary tables and the Table
Creation Type VLDB property is set to Explicit mode.

The Post DML COMMIT option is only available if the Intermediate Table Type
VLDB property is set to Permanent tables, Temporary tables, or Views.

Not all database platforms support COMMIT statements, and some require
special statements to be executed first, so this property should only be used
in projects whose warehouse tables are in databases that support it.

Levels at Which You Can Set This

Database instance, report, and template


Examples

Table Creation Type is set to Explicit

No Commit (default)

create table ZZTIS00H8L8MQ000 (
DEPARTMENT_NBR NUMBER(10, 0),
STORE_NBR NUMBER(10, 0)) tablespace users
insert into ZZTIS00H8L8MQ000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8L8MQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Post DDL Commit

create table ZZTIS00H8LHMQ000 (
DEPARTMENT_NBR NUMBER(10, 0),
STORE_NBR NUMBER(10, 0)) tablespace users
commit
insert into ZZTIS00H8LHMQ000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LHMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Post DDL & Post DML Commit

create table ZZTIS00H8LZMQ000 (
DEPARTMENT_NBR NUMBER(10, 0),
STORE_NBR NUMBER(10, 0)) tablespace users
commit
insert into ZZTIS00H8LZMQ000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
commit
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LZMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Table Creation Type is set to Implicit

No Commit (default)

create table ZZTIS00H8LCMQ000 tablespace users as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LCMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Post DDL Commit

create table ZZTIS00H8LLMQ000 tablespace users as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LLMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Post DML Commit

create table ZZTIS00H8LTMQ000 tablespace users as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
commit
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LTMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Post DDL & Post DML Commit

create table ZZTIS00H8M3MQ000 tablespace users as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
commit
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8M3MQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

CREATE and INSERT Support


The CREATE and INSERT support VLDB property defines whether
MicroStrategy can perform CREATE and INSERT statements against the
database for a database instance. This VLDB property has the following
options:


l CREATE and INSERT statements are supported (default): Allows
MicroStrategy to perform CREATE and INSERT statements against the
database for a database instance. These statements are required for
various MicroStrategy features. This setting is required for the primary
database instance and for databases that are required to support data
mart reports. For information on primary database instances, see the
Installation and Configuration Help.

This setting is recommended for databases that are used to support fully
functioning MicroStrategy projects.

l CREATE and INSERT statements are not supported: MicroStrategy is
prohibited from performing CREATE and INSERT statements against the
database for a database instance. This option can be used if the database
that you connect to is meant to only act as a repository of information that
cannot be modified from within MicroStrategy.

This option can also be used along with the MultiSource Option feature,
which allows you to access multiple databases in one MicroStrategy
project. You can define your secondary database instances to disallow
CREATE and INSERT statements so that all information is only inserted
into the primary database instance. For information on the MultiSource
Option feature, see the Project Design Help.

You can also use this option to avoid the creation of temporary tables on
databases for various performance or security purposes.

This option does not control the SQL that can be created and executed
against a database using Freeform SQL and Query Builder reports.

Levels at Which You Can Set This

Database instance only
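For example, with the default option enabled, a data mart report can issue statements such as the following against the database. This is a hypothetical sketch; the table and column names are illustrative:

```
create table DM_STORE_SALES (
    STORE_NBR INTEGER,
    TOT_SLS DOUBLE)
insert into DM_STORE_SALES
select a21.STORE_NBR,
    sum(a21.TOT_SLS_DLR)
from STORE_DIVISION a21
group by a21.STORE_NBR
```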


Drop Temp Table Method


The Drop Temp Table Method property specifies whether the intermediate
tables, permanent tables, temporary tables, and views are to be dropped at
the end of report execution. Dropping the tables can lock catalog tables and
affect performance, so dropping the tables manually in a batch process
when the database is less active can result in a performance gain. The
trade-off is space on the database server. If tables are not dropped, the
tables remain on the database server using space until the database
administrator drops them.

This VLDB property also allows you to truncate intermediate tables,
permanent tables, temporary tables, and views prior to dropping them.

Levels at Which You Can Set This

Database instance, report, and template
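As a hypothetical sketch, the cleanup pass at the end of report execution resembles the following; the truncate statement appears only when the truncate option is selected, and only on platforms that support TRUNCATE (the table name is illustrative):

```
truncate table ZZTIS00H8L8MQ000
drop table ZZTIS00H8L8MQ000
```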

Fallback Table Type


All reports can be resolved using permanent or temporary intermediate
tables. Reports can also be resolved by generating derived tables, common
table expressions, or views. However, derived tables, common table
expressions, and views cannot cover all scenarios. For example, they
cannot be used when the report contains Analytical Engine SQL,
partitioning, or certain cases of outer joins. In such scenarios, the
MicroStrategy SQL Engine needs the fallback mechanism provided by the
Fallback Table Type property. If the Intermediate Table Type VLDB property
(described below) is set to Derived Table, Common Table Expression, or
Views, and the SQL Engine concludes that the report cannot be resolved
using that setting, it reads the Fallback Table Type VLDB property and
resolves the report by generating permanent tables or temporary tables
according to the option that you set.

However, there may be scenarios where you do not want to create
permanent tables or temporary tables to support these types of reports. To
prevent the creation of permanent or temporary tables, you can set the
Fallback Table Type VLDB property to Fail report. This causes reports that
rely on the Fallback Table Type to fail, so it should only be used when it is
necessary to prevent the creation of permanent or temporary tables.

Levels at Which You Can Set This

Database instance, report, and template

Hexadecimal Character Transformation


The Hexadecimal Character Transformation property is only relevant when
you are using a Unicode Teradata database for the data warehouse. Most
databases do not need this property, because the ODBC driver handles the
conversion automatically. In some Unicode databases, to process SQL
containing character strings inside quotations, those characters must be
converted to hexadecimal representation. Turning this property on means
characters within quoted strings are converted into hexadecimal using UTF-
8 encoding.

Levels at Which You Can Set This

Database instance only

Examples

Do not apply hexadecimal character transformation to quoted strings
(default)

insert into mytable values ('A')

Apply hexadecimal character transformation to quoted strings

insert into mytable values ('4100'XCV)


Where 4100 is the hexadecimal representation of the character "A" using
UTF-8 Unicode encoding.

Intermediate Table Type


The Intermediate Table Type property specifies what kinds of intermediate
tables are used to execute the report. All reports can be executed using
permanent and temporary tables. In certain scenarios involving
partitioning, outer joins, and analytical functions, a report cannot be
executed using derived tables, common table expressions, or views. If this is
the case, the Fallback Table Type VLDB property (described above) is used
to execute the report. The temporary table syntax is specific to each
platform.

This property can have a major impact on the performance of the report.
Permanent tables are usually less optimal. Derived tables, common table
expressions, and true temporary tables usually perform well, but they do not
work in all cases and for all databases. The default setting is permanent
tables, because it works for all databases in all situations. However, based
on your database type, this setting is automatically changed to what is
generally the most optimal option for that platform, although other options
could prove to be more optimal on a report-by-report basis. You can access
the VLDB Properties Editor for the database instance for your database (see
Opening the VLDB Properties Editor, page 1625), and then select the Use
default inherited value check box to determine the default option for your
database.

To help support the use of common table expressions and derived tables,
you can also use the Maximum SQL Passes Before FallBack and Maximum
Tables in FROM Clause Before FallBack VLDB properties. These properties
(described in Maximum SQL Passes Before FallBack, page 1919 and
Maximum Tables in FROM Clause Before FallBack, page 1920) allow you to
define when a report is too complex to use common table expressions and
derived table expressions and instead use a fallback table type.


In cases where queries are performed in parallel (through the use of
Optimizing Queries, page 1791) the intermediate table type is determined by
the VLDB property Parallel SQL Execution Intermediate Table Type, page
1921.

Levels at Which You Can Set This

Database instance, report, and template

Examples

The following is an output from a DB2 UDB 7.x project.

Permanent Table (default)

create table ZZIS03CT00 (
DEPARTMENT_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0))
insert into ZZIS03CT00
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join ZZIS03CT00 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Derived Table

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join (select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
) pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Common Table Expression

with pa1 as
(select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
)
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Temporary Table

declare global temporary table session.ZZIS03CU00(
DEPARTMENT_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0))
on commit preserve rows not logged


insert into session.ZZIS03CU00
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join session.ZZIS03CU00 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR

Views

create view ZZIS03CV00 (DEPARTMENT_NBR, STORE_NBR) as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join ZZIS03CV00 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR, a11.STORE_NBR


Maximum SQL Passes Before FallBack


The Maximum SQL Passes Before FallBack VLDB property allows you to
define reports to use common table expressions or derived tables while also
using temporary or permanent tables for complex reports.

Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require multi-pass SQL.

To support the use of the best table type for each type of report, you can use
the Maximum SQL Passes Before FallBack VLDB property to define how
many passes are allowed for a report that uses intermediate tables. If a
report uses more passes than are defined in this VLDB property, the table
type defined in the Fallback Table Type VLDB property (see Fallback Table
Type, page 1913) is used rather than the table type defined in the
Intermediate Table Type VLDB property (see Intermediate Table Type, page
1915).

For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum SQL Passes Before
FallBack VLDB property to use the fallback table type for all reports that use
more than five passes.

A report is executed. The report requires six passes of SQL to return the
required report results. Usually this type of report would use derived tables,
as defined by the Intermediate Table Type VLDB property. However, since it
uses more passes than the limit defined in the Maximum SQL Passes Before
FallBack VLDB property, it must use the fallback table type. Since the
Fallback Table Type VLDB property is defined as temporary tables, the
report uses temporary tables to perform the multi-pass SQL and return the
report results.


Levels at Which You Can Set This

Database instance, report, and template

Maximum Tables in FROM Clause Before FallBack


The Maximum Tables in FROM Clause Before FallBack VLDB property
allows you to define reports to use common table expressions or
derived tables while also using temporary or permanent tables for complex
reports.

Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require joining a large amount of database tables.

To support the use of the best table type for each type of report, you can use
the Maximum Tables in FROM Clause Before FallBack VLDB property (see
Fallback Table Type, page 1913) to define how many tables are allowed in a
From clause for a report that uses intermediate tables. If a report uses more
tables in a From clause than are defined in this VLDB property, the table
type defined in the Fallback Table Type VLDB property is used rather than
the table type defined in the Intermediate Table Type VLDB property (see
Intermediate Table Type, page 1915).

For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum Tables in FROM
Clause Before FallBack VLDB property to use the fallback table type for all
reports that use more than seven tables in a From clause.

A report is executed. The report requires a SQL statement that includes nine
tables in the From clause. Usually this type of report would use derived
tables, as defined by the Intermediate Table Type VLDB property. However,
since it uses more tables in the From clause than the limit defined in the
Maximum Tables in FROM Clause Before FallBack VLDB property, it must
use the fallback table type. Since the Fallback Table Type VLDB property is
defined as temporary tables, the report uses temporary tables to perform the
SQL statement and return the report results.

Levels at Which You Can Set This

Database instance, report, and template

National Character Column Option


For a description of this VLDB property, see Character Column Option and
National Character Column Option, page 1904.

Levels at Which You Can Set This

Database instance only

Parallel SQL Execution Intermediate Table Type


Parallel SQL Execution Intermediate Table Type is an advanced property
that is hidden by default. For information on how to display this property, see
Viewing and Changing Advanced VLDB Properties, page 1630.

The Parallel SQL Execution Intermediate Table Type property determines
the type of intermediate table that is used when Parallel Query Execution
(see Optimizing Queries, page 1791) is employed for reports and Intelligent
Cubes. If Parallel Query Execution is not enabled, or the queries cannot be
processed in parallel, the intermediate table type is determined by the VLDB
property Intermediate Table Type, page 1915.

This VLDB property has the following options:

l Permanent Table: When the queries for a report or Intelligent Cube are
performed in parallel, any intermediate tables are created as permanent
tables. This provides broad support as all databases can support
permanent tables.

l Derived Table with Fallback Table Type as Permanent Table: When
the queries for a report or Intelligent Cube are performed in parallel, any
intermediate tables are created as derived tables. This can improve
performance for databases that support derived tables. However, not all
databases support derived tables. Refer to your third-party database
vendor documentation to determine if your database supports derived
tables.

If you select this option and derived tables cannot be created for your
database, permanent tables are created instead.

Levels at Which You Can Set This

Database instance, report, and template

Quoting Behavior
The Quoting Behavior property controls whether a project uses unified
quoting for all identifiers. You must upgrade your metadata to 2020 and set
the Data Engine version to 12 to enable this feature. Upgrading the
metadata enables Unified Quoting in all projects in the metadata: all
supported DBMSs receive the correct quoting patterns, and all database
instances set to a supported DBMS inherit those patterns. For more
information, see Unified Quoting Behavior for Warehouse Identifiers.

When the property is set to 1, unified quoting is enabled. If the property is
set to 0, it is not.

Levels at Which You Can Set This

Project


Example

You have the query select col name from t1. The column name is col
name, but the database interprets the query as "get the column named col
and alias it as name." When Quoting Behavior is enabled, it changes the
query to select "col name" from "t1" and identifies the correct
column.
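Generated DDL follows the same behavior. As a hypothetical sketch (the table and column names are illustrative), an intermediate-table pass with unified quoting enabled resembles:

```
create table "ZZMD00" (
    "STORE NBR" INTEGER,
    "TOT SLS" DOUBLE)
insert into "ZZMD00"
select a11."STORE NBR",
    sum(a11."TOT SLS DLR")
from "STORE DIVISION" a11
group by a11."STORE NBR"
```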

Table Creation Type


The Table Creation Type property tells the SQL Engine whether to create
tables implicitly or explicitly. Some databases do not support implicit
creation, so this is a database-specific setting.

Levels at Which You Can Set This

Database instance, report, and template

Examples

Explicit table (default)

create table TEMP1 (
STORE_NBR INTEGER,
TOT_SLS DOUBLE,
PROMO_SLS DOUBLE)
insert into TEMP1
select a21.STORE_NBR STORE_NBR,
(sum(a21.REG_SLS_DLR) + sum(a21.PML_SLS_DLR)) TOT_SLS,
sum(a21.PML_SLS_DLR) PROMO_SLS
from STORE_DIVISION a21
where a21.STORE_NBR = 1
group by a21.STORE_NBR

Implicit table

create table TEMP1 as
select a21.STORE_NBR STORE_NBR,
(sum(a21.REG_SLS_DLR) + sum(a21.PML_SLS_DLR)) TOT_SLS,
sum(a21.PML_SLS_DLR) PROMO_SLS
from STORE_DIVISION a21
where a21.STORE_NBR = 1
group by a21.STORE_NBR

Table Prefix, Table Qualifier, Table Option, Table Descriptor,
Table Space, & Create Post String
These properties can be used to customize the CREATE TABLE SQL syntax
for any platform. All of these properties are reflected in the SQL statement
only if the Intermediate Table Type VLDB property is set to Permanent
Table. Customizing a CREATE TABLE statement is only possible for a
permanent table. For all other valid Intermediate Table Type VLDB settings,
the SQL does not reflect the values set for these properties. The location of
each property in the CREATE TABLE statement is given below.

create /* Table Qualifier */ table /* Table Descriptor */
/* Table Prefix */ZZTIS003RB6MD000 /* Table Option */ (
STORE_NBR NUMBER,
CLEARANCESAL DOUBLE)
/* Table Space */
/* Create Post String */

For platforms like Teradata and DB2 UDB 6.x and 7.x, the Primary Index or
Partition Key SQL syntax is placed between the Table Space and Create
Post String VLDB properties.
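With hypothetical values filled in, for example a Table Prefix of TMP_ and a Table Space of tablespace users (the other properties left empty), the statement above would render as:

```
create table TMP_ZZTIS003RB6MD000 (
STORE_NBR NUMBER,
CLEARANCESAL DOUBLE)
tablespace users
```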

Levels at Which You Can Set This

Database instance, report, and template

Unified Quoting Pattern


This string pattern controls how a specific DBMS or a database instance
quotes queries that are run against it. Depending on the DBMS, the default
value will differ.


You must upgrade your metadata to 2020 and set the Data Engine version to
12 to enable this feature. Upgrading the metadata enables Unified
Quoting in all projects in the metadata: all supported DBMSs receive the
correct quoting patterns, and all database instances set to a supported
DBMS inherit those patterns. For more information, see Unified Quoting
Behavior for Warehouse Identifiers.

Supported DBMS

See Platform Certifications for more information.

Levels at Which You Can Set This

Database instance and DBMS

Default VLDB Settings for Specific Data Sources


MicroStrategy certifies and supports connection and integration with many
third-party databases, MDX cube sources, and other data sources.

These include databases, data sources, and MDX cube sources from third-
party vendors such as IBM DB2, Oracle, Informix, SAP, Sybase, Microsoft,
Netezza, Teradata, and so on. For certification information on these data
sources, refer to the Readme.

Certain VLDB properties use different default settings depending on which
data source you are using. This allows MicroStrategy to both properly
support and take advantage of certain characteristics of each third-party
data source.

You can determine the default options for each VLDB property for a
database by performing the steps below. This provides an accurate list of
default VLDB properties for your third-party data source for the version of
MicroStrategy that you are using.


You have a user account with administrative privileges.

Ensure that you have fully upgraded your MicroStrategy environment and the
available database types, as described in Upgrading the VLDB Options for a
Particular Database Type, page 1634.

To Create a List of Default VLDB Settings for a Data Source

1. In Developer, log in to a project source using an account with
administrative privileges.

2. From the Folder List, expand Administration, then Configuration
Managers, and select Database Instances.

3. From the File menu, point to New, and select Database Instance.

4. In the Database instance name field, type a descriptive name for the
database instance.

5. From the Database connection type drop-down list, select the
appropriate option for the data source whose default VLDB settings you
want to list. For example, you can select Oracle 11g to determine the
default VLDB settings for an Oracle 11g database.

To return a list of default VLDB properties for a data source, only an
appropriate database connection type needs to be defined for the
database instance; a connection to a data source does not need to be
made. After you create the list of default VLDB settings for the data
source, you can delete the database instance or modify it to connect to
your data source.

6. Click OK to exit the Database Instances Editor and save the database
instance.

7. Right-click the new database instance that you created and select
VLDB Properties.


8. From the Tools menu, ensure that Show Advanced Settings is
selected.

9. From the Tools menu, select Create VLDB Settings Report.

A VLDB settings report can be created to display current VLDB settings
for database instances, attributes, metrics, and other objects in your
project. For information on creating a VLDB settings report for other
purposes, see Creating a VLDB Settings Report, page 1627.

10. Select the Show descriptions of setting values check box. This
displays the descriptive information of each default VLDB property
setting in the VLDB settings report.

11. The VLDB settings report now displays all the default settings for the
data source. You can copy the report content by pressing Ctrl+C, then
paste it into a text editor or word processing program (such as
Microsoft Word) by pressing Ctrl+V.

12. Click Close.

13. You can then either delete the database instance that you created
earlier, or modify it to connect to your data source.


CREATING A MULTILINGUAL ENVIRONMENT: INTERNATIONALIZATION


This section shows you how to use MicroStrategy to internationalize a
project in your MicroStrategy environment, to make it available to a
multilingual audience. This includes internationalizing data in your data
warehouse and metadata objects in the MicroStrategy metadata repository.
This section also shows you how to display a translated MicroStrategy
interface.

Translating your data and metadata allows your users to view their reports in
a variety of languages. It also allows report designers and others to display
report and document editors and other objects editors in various languages.
And because all translation information can be stored in the same project,
project maintenance is easier and more efficient for administrators.

The image below shows which parts of a report are translated using data
internationalization and which parts are translated using metadata
internationalization.

This section assumes you have an understanding of standard MicroStrategy
metadata objects, as well as how your organization stores translated data in
your data warehouse system.

This section includes the following information:


• About Internationalization, page 1930 provides an introduction to
internationalization in MicroStrategy, with examples; it also provides
information on how caching works in an internationalized environment.

• Best Practices for Implementing Internationalization, page 1933

• Preparing a Project to Support Internationalization, page 1934 provides
steps to take during installation or upgrade to prepare your projects for
internationalization.

• Providing Metadata Internationalization, page 1938 explains how the
metadata can be internationalized.

• Providing Data Internationalization, page 1951 provides steps to connect
to, set up, and store translated data within your data warehouse so that it
can be retrieved and displayed in MicroStrategy reports.

• Making Translated Data Available to Users, page 1962 describes the
hierarchy of preferences that a user can have set, and how that hierarchy
works.

• Achieving the Correct Language Display, page 1982 provides a table of
the functionality that MicroStrategy users can access to take advantage of
internationalization.

• Maintaining Your Internationalized Environment, page 1988 provides
information on using scripts with Command Manager to automate your
internationalized environment; moving translated objects between
projects; adding languages to be supported by a project; adding a custom
language; and applying security to your internationalized environment,
including creating specialized translator user roles.

About Internationalization
For a fully internationalized environment, both metadata internationalization
and data internationalization are required. However, you can internationalize
only your metadata, or only your data, based on your needs. Both are
described below.

This section also describes translating the user interface and how
internationalization affects report/document caching.

About Metadata Internationalization


Metadata internationalization displays translated object strings based on a
user's locale and other language preferences in MicroStrategy, for objects
that are stored in the MicroStrategy metadata, such as metric names and
report names. For example, you have two metrics stored in your metadata
repository, named Cost and Profit. These metadata objects will appear on
reports accessed by both English and Italian users. You can use metadata
internationalization to configure MicroStrategy to automatically display Cost
and Profit to the English users and Metrica Costo and Metrica Profitto to the
Italian users.

Metadata internationalization (or MDI) involves exporting object strings to a
location where they can be translated, performing the linguistic translation,
and importing the newly translated object strings back into the metadata
repository. You can also translate individual objects one at a time, using the
Object Translation Editor.

For steps to perform these procedures, see Providing Metadata
Internationalization, page 1938.

About Data Internationalization


Data internationalization allows a single report definition to contain different
attribute forms for different languages available to users, based on a user's
locale and other language preferences in MicroStrategy. For example, you
want to display a product name stored in your data warehouse to two
different users, one who reads English and one who reads French. Both
users execute and view the same product report. You can use data
internationalization to configure MicroStrategy to automatically display A
Tale of Two Cities to the English user and Un Conte de Deux Villes to the
French user.

Data internationalization (or DI) involves configuring your data warehouse
so that tables and other structures allow MicroStrategy to access data in the
appropriate language for the user requesting the report. If you use multiple
warehouses to store translated data, DI involves connecting MicroStrategy
to the appropriate warehouses.

Depending on the data internationalization model you choose, which is
based on the structure of your translation storage environment (as
described above), you may only be able to translate the DESC (description)
form.

See Providing Data Internationalization, page 1951 for more information.

About Internationalizing the General User Interface


The MicroStrategy general user interface (such as the File menu, Edit menu,
and so on) can also be displayed in various languages. This translation
process is not part of metadata or data internationalization, but steps to
select a preferred interface language are part of this section. MicroStrategy
provides translated strings for the general user interface in several
languages. You can display the MicroStrategy general user interface in a
selected language using the MicroStrategy Developer Preferences
options in Developer and the Preferences link in MicroStrategy Web:

• For steps to select the interface language in Developer, see Selecting the
Interface Language Preference, page 1965.

• For steps to select the interface language in Web, click Help in
MicroStrategy Web.

Caching and Internationalization


For details about caching, see Improving Response Time: Caching.

Object caching is not affected by internationalization.


Normal report and document caching behavior is not affected, regardless of
the types of internationalization that you implement. Specifically, data
internationalization methods (SQL-based and connection-based, both
described below) do not affect standard report and document caching
behavior.

Different caches are created for different DI languages, but not for different
MDI languages. When a user whose MDI language and DI language are
French runs a report, a cache is created containing French data and using
the report's French name. When a second user whose MDI language and DI
language are German runs the same report, a new cache is created with
German data and using the report's German name. If a third user whose MDI
language is French and DI language is German runs the same report, the
second user's cache is hit. Two users with the same DI language preference
use the same cache, regardless of MDI preferences.
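The cache-keying behavior described above can be sketched as a small model. This is an illustrative sketch only (the function and cache names are hypothetical, not a MicroStrategy API); it shows that a cache hit depends on the report and its DI language alone:

```python
# Illustrative model of report cache keying: a cache entry is keyed by the
# report and its data internationalization (DI) language only; the metadata
# (MDI) language plays no part. All names here are hypothetical.
_cache = {}

def run_report(report_id, mdi_language, di_language, execute):
    key = (report_id, di_language)      # MDI language is absent from the key
    if key not in _cache:               # first run for this DI language
        _cache[key] = execute(di_language)
    return _cache[key]                  # same DI language means a cache hit
```

In this model, the French/French user and the German/German user create two separate caches, and a third user with a French MDI preference but a German DI preference hits the German cache, matching the scenario above.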

A report's data internationalization language is displayed in a Data
Language column in the Cache Monitor. This helps the administrator
distinguish between cached reports when it is important to identify these
differences.

Best Practices for Implementing Internationalization


• Make sure your database supports the character set(s) that are required
by the various languages you intend to support in your MicroStrategy
project. MicroStrategy recommends using a Unicode database to ensure
all your languages are supported. For details, see Adding or Removing a
Language in the System, page 1989.

• If you will be supporting double-byte languages (such as Japanese or
Korean), make sure that appropriate fonts are available for graph labels,
text fields in documents, and so on. Appropriate fonts to support
double-byte languages are generally Unicode fonts. An example of an
effective Unicode font for double-byte languages is Arial Unicode MS.
Most Unicode fonts ensure that all characters can be displayed correctly
when a report or document is displayed in a double-byte language.

Not all Unicode fonts can display double-byte languages; for example,
Lucida Sans Unicode does not display double-byte languages.

• All SQL-based qualifications contained in a given report should be in a
single language. SQL-based qualifications include such things as report
filters, metrics, and prompts.

• If you have old projects with metadata objects that have been previously
translated, it is recommended that you merge your translated strings from
your old metadata into the newly upgraded metadata using MicroStrategy
Project Merge. For steps, see Translating Already Translated Pre-9.x
Projects, page 1950.

• It is recommended for Developer internationalization that you use a unified
locale. For example, if French is the language selected for the interface,
the metadata objects language preference and report data language
preference, as well as number and date preferences, should also be in
French.

• If you are using or plan to use MicroStrategy Intelligent Cubes, and you
plan to implement data internationalization, it is recommended that you
use a SQL-based DI model. The SQL-based DI model is described in
Providing Data Internationalization, page 1951. Because a single
Intelligent Cube cannot connect to more than one data warehouse, using a
connection-based DI model requires a separate Intelligent Cube to be
created for each language, which can be resource-intensive. Details on
this cost-benefit analysis as well as background information on Intelligent
Cubes are in the In-memory Analytics Help.

Preparing a Project to Support Internationalization


The procedures in this section will help you modify existing MicroStrategy
projects to support both metadata and data internationalization. These
procedures perform several important modifications to your metadata,
including making it Unicode-compliant, providing some new translations for
system objects, and other project-level preparations.

These procedures must be performed whether you plan to support only
metadata internationalization, only data internationalization, or both.

This section includes steps to be taken when installing or upgrading to the
latest version of Developer. You should be prepared to use the steps below
during the installation or upgrade process. For steps to install, see the
Installation and Configuration Help. For steps to upgrade, see the Upgrade
Help.

Adding Internationalization Tables to the Metadata Repository


The first step to internationalizing your data and metadata is to add the
internationalization tables to your MicroStrategy metadata repository.

This step must be performed before you update your project's metadata
definitions.

This step must be completed during your installation or upgrade to the latest
version of Developer. For steps to install, see the Installation and
Configuration Help. For steps to perform a general MicroStrategy upgrade,
see the Upgrade Help.

To Add Internationalization Tables to the Metadata Repository

1. During the upgrade or installation process, select Upgrade existing
environment to MicroStrategy Intelligent Enterprise in the
Configuration Wizard, and click Next.

2. Continue working through the steps in the Installation and
Configuration Help or the Upgrade Help to complete the process.


Updating Your Project's Metadata Definitions


After you add internationalization tables to your metadata repository as
described above, you must update your project's metadata with the latest
definitions.

This procedure may have been completed during your installation or
upgrade to the latest version of Developer. If it was not part of the install or
upgrade, it must be performed to support metadata and data
internationalization. For steps to install, see the Installation and
Configuration Help. For steps to upgrade, see the Upgrade Help.

To Update Metadata Definitions

1. In Developer, double-click the name of the project that you want to
internationalize.

2. Log into the project. You are prompted to update your project. Click
Yes.

The metadata is updated to the latest version of MicroStrategy.

Updating System Object Translations


This optional procedure lets you "automatically translate" system objects
such as folder names, security roles, and user groups, by accessing
translations that come with MicroStrategy for those objects.

If you prefer to provide your own translations (for example if you will be
customizing folder names), you do not need to perform this procedure.

For projects created before MicroStrategy version 8.x, due to changes in
folder structure it is possible that system objects cannot be updated if they
have been renamed.


To Update System Object Translations

1. Reload the project before updating system object translations. To do
this, in the Folder List on the left, within the appropriate project source,
expand Administration, expand System Administration, and select
Projects. Right-click the project, point to Administer project, and click
Unload. After the project unloads, click Load.

2. Right-click the project you have upgraded, and select Project
Configuration.

3. Expand Project Definition, expand Update, then select Translations.

4. Click Update.

Allowing Access to Languages and Language Objects


Internationalization for languages and language objects is controlled
primarily through access control lists (ACLs). You can allow permissions to
specific users for each object that needs to be translated, or for each
language object (an object that represents a language in your system).

Access to Add or Modify a Translation


You can create a specialized user account for a translator that restricts their
access in MicroStrategy to only translating objects into a specific language.
For steps, see Creating Translator Roles, page 1996.

By default, administrators and object owners can translate an object or
modify an existing translation. Use ACLs to provide other users Write access
to an object, if other users need to translate that object. To change ACL
permissions, right-click the object and select Properties, then select
Security on the left. For details on each ACL and what access it allows,
click Help.

You can also provide a user with the Use Repository Translation Wizard
privilege. This allows a user to perform the necessary steps to translate or
modify translations of strings in all languages, without giving the user the
ability to modify an object in any other way. To change a privilege, open the
user in the User Editor and select Project Access on the left, then expand
the Object Manager set of privileges on the right and select the Use
Repository Translation Wizard check box.

Access to Select or Enable Displayed Languages: Language Objects
By default, MicroStrategy users are provided with appropriate privileges to
Browse and Use language objects, such that analysts can select a language
as their display preference if that language has been enabled for a project.
Project administrators can enable any languages available in the system.

You can modify these default privileges for a specific user role or a specific
language object.

To Modify Access to a Language Object

1. In the Folder List on the left, within the appropriate project source,
expand Administration.

2. Expand Configuration Managers, then select Languages.

3. All language objects are listed on the right. To change ACL permissions
for a language object, right-click the object and select Properties.

4. Select Security on the left.

Providing Metadata Internationalization


Metadata internationalization (MDI) displays translated object strings based
on a user's locale and other language preferences in the software, for
objects that are stored in the MicroStrategy metadata, such as metric
names, report names, the Public Objects system folder, security role names,
user group names, and so on. Metadata translation also includes embedded
text strings (embedded in an object's definition), such as prompt
instructions, aliased names (which can be used in attributes, metrics, and
custom groups), consolidation element names, custom group element
names, graph titles, and threshold text.

Metadata object translation does not include configuration objects (such as
the user object), function names, data mart table names, and so on.

Begin metadata translation by enabling languages for your project's
metadata objects; see Enabling and Disabling Metadata Languages, page
1939. Then use the appropriate set of procedures below, depending on
whether translations already exist for your project or you will be translating
your project for the first time:

• Translating Your Project for the First Time, page 1943

• Translating Already Translated Pre-9.x Projects, page 1950

Enabling and Disabling Metadata Languages


To support the display of translations for metadata object names and
descriptions, you must enable languages for your project. The languages
you enable are those languages you want to support for that project.

You can also disable languages for a project.

Enabling Metadata Languages while Creating a New Project


If you plan to provide an internationalized project, you can enable
internationalization when creating a new project. For information on the
structure of your data warehouse to support internationalization for a new
project, and steps to enable internationalization while creating a new
project, see the Project Design Help.

Enabling Metadata Languages for an Existing Project


After the metadata has been updated and your project has been prepared for
internationalization (usually performed during the MicroStrategy installation
or upgrade), you enable languages so they will be supported by the project
for metadata internationalization.

Gather a list of languages used by filters and prompts in the project. These
languages should be enabled for the project, otherwise a report containing a
filter or prompt in a language not enabled for the project will not be able to
execute successfully.

To Enable Metadata Languages for a Project

1. Log into the project as a user with Administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language
and select Metadata.

4. Click Add to see a list of available languages.


The languages displayed in bold blue are those languages that the
metadata objects have been enabled to support. This list is displayed
as a starting point for the set of languages you can choose to enable for
supporting data internationalization.

To add a new language, click New. For steps to create a custom language,
see Adding or Removing a Language in the System, page 1989.

5. Select the check boxes for the languages that you want to enable for
this project.

• Enabled languages will appear in the Repository Translation Wizard
for string translation, as well as in Developer's My Preferences and
Web's Preferences, for users to select their own preferred language
for the project.

• Reports that contain filters or prompts in a translated language will
execute successfully if the project has that language enabled.

6. Click OK.

7. Select one of the languages on the right side to be the default language
for this project. The default language is used by the system to maintain
object name uniqueness.

This may have been set when the project was first created. If so, it will
not be available to be selected here.

Once the project default language is set, it cannot be changed unless you
duplicate the project and change the default language of the duplicated
project. Individual objects within a project can have their default language
changed.

If you are enabling a language for a project that has been upgraded from 8.x
or earlier, the default metadata language must be the language in which the
project was originally created (the 8.x Developer language at the time of
project creation). Be sure to select the default language that matches the
language selected when the project was originally created. You can then
add other languages to support the project. To change a project's default
language, you must duplicate the project and change the default language in
the duplicated project.

8. Click OK.

9. Disconnect and reconnect to the project source.

10. Update the out-of-the-box MicroStrategy metadata objects. To do this,
in Developer, right-click the project and go to Project Configuration >
Project definition > Update > Translations > Update.

Disabling Metadata Languages for a Project


You can use the steps below to disable a language for a project. When a
language has been disabled from a project, that language is no longer
available for users to select as a language preference, and the language
cannot be seen in any translation-related interfaces, such as an object's
Translation dialog box.

If a user's preferred language is disabled, the next lower priority language
preference will take effect. To see the language preference priority
hierarchy, see Configuring Metadata Object and Report Data Language
Preferences, page 1967.

Any translations for the disabled language are not removed from the
metadata with these steps. Retaining the translations in the metadata allows
you to enable the language again later, and the translations will still exist.
To remove translations in the disabled language from the metadata, objects
that contain these terms must be modified individually and saved.


To Disable Metadata Languages in a Project

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language,
then select Metadata.

4. On the right side, under Selected Languages, clear the check box for
the language that you want to disable for the project, and click OK.

Translating Your Project for the First Time


Translating a project involves providing translated strings for metadata
object names and descriptions.

If you use translator roles, be sure to assign the appropriate permissions
and privileges in MicroStrategy to your translators before beginning the
translation steps. See Creating Translator Roles, page 1996 for details.

There are two methods to translate metadata objects, depending on whether
you want to translate a large number of objects or just one or two objects:

• Translate a large number of objects: Extract strings in bulk to a
translation database, translate them, and import them back into
MicroStrategy. The MicroStrategy Repository Translation Wizard is the
recommended method to internationalize your metadata objects. Steps to
access this tool are below.

• Translate one or more objects in a folder: Right-click the object and
select Translate. Type the translated word(s) for each language this
object supports, and click OK. To translate several objects, select them all
while holding Shift or Ctrl, then right-click and select Translate. For
details to use the Object Translation dialog box, click Help.


The rest of this section describes the method to translate bulk object strings,
using a separate translation database, with the Repository Translation
Wizard.

The Repository Translation Wizard does not support translation of
configuration objects (such as the user object). It does support object
descriptors, including embedded text. These are detailed in the introduction
to Providing Metadata Internationalization, page 1938.

If your project has not yet been translated, metadata internationalization
involves the following high-level steps:

All of the procedures in this section assume that your projects have been
prepared for internationalization. Preparation steps are in Preparing a
Project to Support Internationalization, page 1934.

1. Enable languages for the metadata repository (see Enabling and
Disabling Metadata Languages, page 1939).

2. Export object strings to a location where they can be translated (see
Extracting Metadata Object Strings for Translation, page 1944).

3. Perform the linguistic translation (see Translating Metadata Object
Strings in the Translation Database, page 1945).

4. Import the newly translated object strings back into the metadata
repository (see Importing Translated Strings from the Translation
Database to the Metadata, page 1949).
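The four steps above can be sketched end to end. The sketch below is a minimal illustration using SQLite in place of the supported Microsoft Access or SQL Server translation repository; the object IDs, strings, and the 'X' flag value are invented, and the real table created by the wizard has additional columns, described below.

```python
# Sketch of the export -> translate -> import round trip, using SQLite in
# place of the Access or SQL Server translation repository. Column names
# follow the wizard's schema (TRANSLATION, LOCALEID, STATUS); the data and
# the 'X' flag value are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TRANSLATIONS (
        OBJECTID    TEXT,
        LOCALEID    INTEGER,
        TRANSLATION TEXT,
        STATUS      TEXT
    )
""")

# Step 2 (export): strings extracted for the target language (French, 1036).
conn.executemany(
    "INSERT INTO TRANSLATIONS VALUES (?, ?, ?, ?)",
    [("metric-1", 1036, "Cost", ""), ("metric-2", 1036, "Profit", "")],
)

# Step 3 (translate): the translator edits TRANSLATION and flags finished rows.
conn.execute(
    "UPDATE TRANSLATIONS SET TRANSLATION = ?, STATUS = 'X' WHERE OBJECTID = ?",
    ("Coût", "metric-1"),
)

# Step 4 (import): only rows carrying the agreed-upon flag are read back.
ready = conn.execute(
    "SELECT OBJECTID, TRANSLATION FROM TRANSLATIONS WHERE STATUS = 'X'"
).fetchall()
```

Flagging rows in the STATUS column is what lets you import only completed translations, as described later in this section.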

Extracting Metadata Object Strings for Translation


The MicroStrategy Repository Translation Wizard supports Microsoft Access
and Microsoft SQL Server databases as translation repositories. The
translation repository is where strings are extracted to and where the actual
translation process is performed.

You cannot extract strings from the project's default metadata language.


It is recommended that objects are not modified between the extraction
process and the import process. This is especially important for objects with
location-specific strings: attribute aliases, metric aliases, custom group
elements, and document text boxes.

To Extract a Large Number of Object Strings for Translation

1. Open the Repository Translation Wizard. To do this, from the Start
menu, point to All Programs, then MicroStrategy Tools, then select
Repository Translation Wizard.

2. Click Next to begin.

3. To extract strings from the metadata, select the Export Translations
option from the Metadata Repository page in the wizard.

Translating Metadata Object Strings in the Translation Database


The extraction process performed by the Repository Translation Wizard
creates a table in the translation database, with the following columns:

• PROJECTID: This is the ID of the project from which the string is
extracted.

• OBJECTID: This is the ID of the object from which the string is extracted.

• OBJECTTYPE: Each object is associated with a numeric code. For
example, documents are represented by OBJECTTYPE code 55.

• EMBEDDEDID: An embedded object is an object contained inside another
object, for example, a metric object that is part of a report object. If the
string is extracted from an embedded object, the ID of this embedded
object is stored in this column. The value 0 indicates that the string is not
extracted from an embedded object.


• EMBEDDEDTYPE: This is a numeric representation of the type of the
embedded object. The value 0 indicates that the string is not extracted
from an embedded object.

• UNIQUEKEY: This is a key assigned to the extracted string to identify the
string within the object.

• READABLEKEY: This is a description of the extracted string within the
object, for example, Prompt Title, Prompt Description, Object Name,
Template Subtotal Name, and so on. The READABLEKEY is a readable
form of the UNIQUEKEY.

• LOCALEID: This indicates the language of the extracted string in the
TRANSLATION column.

MicroStrategy uses locale IDs to uniquely identify languages. For
consistency, MicroStrategy uses the same locale IDs as Microsoft. The
following table lists the language codes for the languages that
MicroStrategy supports out-of-the-box.

Language Language Code

Chinese (Simplified) 2052

Chinese (Traditional) 1028

English (US) 1033

French (France) 1036

German (Germany) 1031

Italian (Italy) 1040

Japanese 1041

Korean 1042

Portuguese (Brazil) 1046

Spanish (Spain) 3082

Swedish 1053
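If you script against the extracted table, the codes in the table above map directly to a lookup. A small sketch follows; the dictionary and helper function are illustrative, not part of any MicroStrategy tool, and the IDs are the standard Microsoft locale IDs listed above:

```python
# Locale IDs (Microsoft LCIDs) for the out-of-the-box MicroStrategy
# languages, as listed in the table above. The dictionary and helper are
# illustrative only.
MICROSTRATEGY_LOCALE_IDS = {
    "Chinese (Simplified)": 2052,
    "Chinese (Traditional)": 1028,
    "English (US)": 1033,
    "French (France)": 1036,
    "German (Germany)": 1031,
    "Italian (Italy)": 1040,
    "Japanese": 1041,
    "Korean": 1042,
    "Portuguese (Brazil)": 1046,
    "Spanish (Spain)": 3082,
    "Swedish": 1053,
}

def language_for_locale_id(locale_id):
    """Reverse lookup: return the language name for a LOCALEID value."""
    for name, lcid in MICROSTRATEGY_LOCALE_IDS.items():
        if lcid == locale_id:
            return name
    return None
```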


For custom languages, MicroStrategy assigns a unique language ID based
on the base language that it is derived from.

• TRANSLATION: This is the column where the extracted string is stored.

• TRANSVERSIONID: This is the version ID of the object at the time of
export.

• REFTRANSLATION: This column contains the extracted string in the
translation reference language, which is selected by the user from the
Repository Translation Wizard during export.

This string is used only as a reference during the translation process. For
example, if the translator is comfortable with the German language, you
can set German as the translation reference language. The
REFTRANSLATION column will then contain all the extracted strings in
the German language, for the translator to use as a reference when they
are translating extracted strings.

If no reference language string is available, the string from the object's primary language is exported so that this column is not empty for any string.

l STATUS: You can use this column to enter flags in the table to control
which strings are imported back into the metadata. A flag is a character
you type, for example, a letter, a number, or a special character (as long
as it is allowed by your database). When you use the wizard to import the
strings back into the metadata, you can identify this character for the
system to use during the import process, to determine which strings to
import.

For example, if a translator has finished only some translations, you may
want to import only the completed ones. Or if a reviewer has completed the
language review for only some of the translations, you may wish to import
only those strings that were reviewed. You can flag the strings that were
completed and are ready to be imported.
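As an illustration of flag-based filtering, the sketch below simulates an extraction table in SQLite and selects only rows flagged as completed. The table name, row data, and the flag character C are invented for this example; only the column names follow the extraction table described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EXTRACT (UNIQUEKEY TEXT, TRANSLATION TEXT, STATUS TEXT)")
conn.executemany(
    "INSERT INTO EXTRACT VALUES (?, ?, ?)",
    [
        ("k1", "Informe de ventas", "C"),  # completed, ready to import
        ("k2", "Metrica de coste", "C"),   # completed, ready to import
        ("k3", "(sin revisar)", ""),       # not yet flagged by the translator
    ],
)

# Select only the strings flagged as completed, mirroring what happens
# when you give the import wizard a flag character to filter on.
ready = conn.execute(
    "SELECT UNIQUEKEY FROM EXTRACT WHERE STATUS = ?", ("C",)
).fetchall()
print([row[0] for row in ready])  # → ['k1', 'k2']
```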

l OBJVERSIONID: This is the version ID of objects at the time of import.


l SYNCHFLAG: This is a system flag and is automatically generated during import. The following values are used:

l 0: This means that the object has not been modified between extraction and import.

l 1: This means that the object has been modified between extraction and import.

l 2: This means that the object that is being imported is no longer present in the metadata.

System flags are automatically applied to strings during the import process, so that you can view any string-specific information in the log file.
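After an import, the SYNCHFLAG values can be used to triage which strings need a second look. The sketch below is illustrative only (the row data is invented; the flag meanings follow the list above):

```python
# SYNCHFLAG meanings from the list above:
# 0 = unchanged, 1 = modified between extraction and import,
# 2 = object no longer present in the metadata.
rows = [
    {"UNIQUEKEY": "k1", "SYNCHFLAG": 0},
    {"UNIQUEKEY": "k2", "SYNCHFLAG": 1},
    {"UNIQUEKEY": "k3", "SYNCHFLAG": 2},
    {"UNIQUEKEY": "k4", "SYNCHFLAG": 1},
]

needs_review = [r["UNIQUEKEY"] for r in rows if r["SYNCHFLAG"] == 1]
missing = [r["UNIQUEKEY"] for r in rows if r["SYNCHFLAG"] == 2]

print(needs_review)  # → ['k2', 'k4']  re-check these translations
print(missing)       # → ['k3']        object gone from the metadata
```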

l LASTMODIFIED: This is the date and time when the strings were
extracted.

Once the extraction process is complete, the strings in the translation database need to be translated in the extraction table described above. This is generally performed by a dedicated translation team or a third-party translation vendor.

l If an object has an empty translation in a user's chosen project language preference, the system defaults to displaying the object's default language, so it is not necessary to add translations for objects that are not intended to be translated. For details on language preferences, see Selecting Preferred Languages for Interfaces, Reports, and Objects, page 1963.

l If you performed a Search for Objects in the Repository Translation Tool, you may notice that the number of rows in the extraction table might not match the number of rows returned in the search results. This is because a search returns all objects that meet the search requirements; the search does not filter for only those items that can be translated. Thus, for example, the search may return a row for the lookup table LU_YEAR, but


the extraction process does not extract the LU_YEAR string because
there is no reason to translate a lookup table's name. To determine
whether an object's name can be translated, right-click the object, select
Properties, and look for the International option on the left. If this option
is missing, the object is not supported for translation.

To confirm that your translations have successfully been imported back into
the metadata, navigate to one of the translated objects in Developer, right-
click, and select Properties. On the left, select International, then click
Translate. The table shows all translations currently in the metadata for this
object.

Importing Translated Strings from the Translation Database to the Metadata
After strings have been translated by a language expert in the translation
database, they must be re-imported into the MicroStrategy metadata.

To Import Translated Strings

1. Open the Repository Translation Wizard. To do this, from the Start menu, point to All Programs, then MicroStrategy Tools, then select Repository Translation Wizard.

2. Click Next to begin.

3. To import strings from the translation database back into the metadata,
select the Import Translations option from the Metadata Repository
page in the wizard.

After the strings are imported back into the project, any objects that were modified while the translation process was being performed are automatically marked with a 1. These translations should be checked for correctness, since the modification may have included changing the object's name or description.


When you are finished with the string translation process, you can proceed
with data internationalization if you plan to provide translated report data to
your users. For background information and steps, see Providing Data
Internationalization, page 1951. You can also set user language preferences
for translated metadata objects and data in Enabling or Disabling Languages
in the Project to Support DI, page 1958.

Translating Already Translated Pre-9.x Projects


You may have your translated information spread out among several
individual, monolingual projects and you want to add multilingual support
and combine them into a single, all-inclusive multilingual project called a
master project.

Even if you maintain separate production projects in separate languages, the ideal scenario is to create a single development project where translations are maintained for all languages that are required by any regional production projects.

If you use translator roles, be sure to assign the appropriate permissions and privileges in MicroStrategy to your translators before beginning the translation steps. See Creating Translator Roles, page 1996 for details.

When translated projects already exist, metadata internationalization involves the following high-level steps:

All of the procedures in this section assume that you have completed any
final import of translations to your pre-9.x project using the old Repository
Translation Tool, and that your projects have been prepared for
internationalization. Preparation steps are in Preparing a Project to Support
Internationalization, page 1934.

1. Enable languages for the metadata repository (see Enabling and Disabling Metadata Languages, page 1939). For the master project, be sure to enable all languages that you will be supporting.


2. Back up your existing translated strings by extracting all objects from the old translated projects using the MicroStrategy Repository Translation Wizard (see Extracting Metadata Object Strings for Translation, page 1944).

3. Merge the translated projects into the master project using the Project
Merge Wizard. Do not merge any translations.

4. You now have a single master project that contains all objects that were
present in both the original master project and in the translated project.

5. Extract all objects from the master project using the MicroStrategy
Repository Translation Wizard (see Extracting Metadata Object Strings
for Translation, page 1944).

6. Provide translations for all objects in the translated language (see Translating Metadata Object Strings in the Translation Database, page 1945).

7. Import all translations back into the master project (see Importing
Translated Strings from the Translation Database to the Metadata,
page 1949).

8. After translation verification, duplicate the master project so that you have a development project, a testing project, and at least one production project.

Providing Data Internationalization


Data internationalization (or DI) allows you to display translated report and
document results to users from your data warehouse. Data
internationalization allows a single report definition to contain different
attribute elements for each language available to users, with the appropriate
element displayed based on the user's locale and other language
preferences in the software.

Data internationalization involves the following high-level steps:


All of the procedures in this section assume that your projects have been
prepared for internationalization. Preparation steps are in Preparing a
Project to Support Internationalization, page 1934.

1. Store the translated data in a data warehouse. Translated data strings can be stored either in their own columns and/or tables in the same warehouse as the source (untranslated) data, or in different warehouses separated by language. Some organizations keep the source language stored in one warehouse, with all other languages stored together in a different warehouse. You must configure MicroStrategy with a DI model so it can connect to one of these storage scenarios: the SQL-based model or the connection-based model. For details on each model and steps to configure MicroStrategy, see Storing Translated Data: Data Internationalization Models, page 1952.

2. Enable the languages in MicroStrategy that will be supported by the project and configure the system based on where the translated data is stored (see Enabling or Disabling Languages in the Project to Support DI, page 1958).

Storing Translated Data: Data Internationalization Models


This section assumes that you understand the structure of your organization's data storage. Table and column creation, maintenance, and alteration are beyond the scope of this guide. For information about data warehouses and how internationalization affects the process of storing and organizing information in the data warehouse, see the MicroStrategy Project Design Guide.

You must connect MicroStrategy to your storage system for translated data.
To do this, you must identify which type of storage system you are using.
Translated data for a given project is stored in one of two ways:

l In columns and tables within the same data warehouse as your source
(untranslated) data (see SQL-Based DI Model, page 1953)


l In a different data warehouse from your source (untranslated) data (see Connection-Based DI Model, page 1953)

SQL-Based DI Model
If all of your translations are stored in the same data warehouse as the
source (untranslated) data, this is a SQL-based DI model. This model
assumes that your translation storage is set up for column-level data
translation (CLDT) and/or table-level data translation (TLDT), with
standardized naming conventions.

This model is called SQL-based because SQL queries are used to directly
access data in a single warehouse for all languages. You can provide
translated DESC (description) forms for attributes with this DI model.

If you are using a SQL-based DI model, you must specify the column pattern
or table pattern for each language. The pattern depends upon the table and
column names that contain translated data in your warehouse.
MicroStrategy supports a wide range of string patterns. The string pattern is
not limited to suffixes only. However, using prefixes or other non-suffix
naming conventions requires you to use some functions so that the system
can recognize the location of translated data. These functions are included
in the steps to connect the system to your database.
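As an illustration of the suffix convention, the SQL generated for each language simply targets a different table. The sketch below builds such a statement from a suffix map; the base table LU_REGION, the suffixes, and the helper function are invented for this example:

```python
# Hypothetical suffix map for a warehouse using table-level data
# translation (TLDT); the base table and suffixes are examples only.
TABLE_SUFFIXES = {
    "English (US)": "",        # base (untranslated) table
    "Spanish (Spain)": "_SP",
    "German (Germany)": "_DE",
}

def lookup_sql(base_table, language):
    """Build the language-specific SELECT for a suffix-based naming pattern."""
    table = base_table + TABLE_SUFFIXES[language]
    return f"SELECT REGION_ID, REGION_NAME FROM {table}"

print(lookup_sql("LU_REGION", "Spanish (Spain)"))
# → SELECT REGION_ID, REGION_NAME FROM LU_REGION_SP
```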

Regular (non-locale-specific) connection maps are treated normally by MicroStrategy if you choose the SQL-based DI model.

This model is recommended if you are using MicroStrategy Intelligent Cubes. For steps to point MicroStrategy to the correct columns or tables for each language, see Connecting the System to a Single Database: SQL-Based DI Model, page 1956.

Connection-Based DI Model
If the translated data is stored in different data warehouses for each
language, MicroStrategy retrieves the translations using a database


connectivity API, namely ODBC. This model is called connection-based because a connection to more than one data warehouse must be made to access data in all languages. This is commonly called warehouse-level data translation (WLDT).

When using a connection-based DI model, you can connect to as many data warehouses as necessary, for example, one for each language. For steps to provide the appropriate database connection information for each data warehouse, see Connecting the System to More than One Database: Connection-Based DI Model, page 1956.
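Conceptually, a connection-based model is a per-language connection map. The sketch below illustrates the idea with invented ODBC DSN names; in practice these mappings are defined in MicroStrategy itself, not in application code:

```python
# Hypothetical ODBC DSNs, one per language warehouse (WLDT).
LANGUAGE_DSNS = {
    "English (US)": "DSN=WH_EN;UID=mstr;",
    "French (France)": "DSN=WH_FR;UID=mstr;",
    "Japanese": "DSN=WH_JA;UID=mstr;",
}

def connection_for(language, default="English (US)"):
    """Pick the warehouse connection for a language, falling back to the default."""
    return LANGUAGE_DSNS.get(language, LANGUAGE_DSNS[default])

print(connection_for("Japanese"))  # → DSN=WH_JA;UID=mstr;
print(connection_for("Swedish"))   # → DSN=WH_EN;UID=mstr;  (fallback)
```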

Choosing a DI Model
You must evaluate your physical data storage for both your source
(untranslated) language and any translated languages, and decide which
data internationalization model is appropriate for your environment.

MicroStrategy can use either a SQL-based or a connection-based DI model, but not both. For example, if your project supports 10 languages, and 5 of those languages are stored in one data warehouse and the other 5 are stored individually in separate data warehouses, MicroStrategy does not support this storage solution.

The following table describes common translation storage scenarios, and shows you which DI model and translation access method must be used.

Translation Storage Location | Data Internationalization Model | Translation Access Method
Different tables for each language, in one data warehouse | SQL-based | Different SQL generated for each language
Different columns for each language, in one data warehouse | SQL-based | Different SQL generated for each language
Different tables and columns for each language, in one data warehouse | SQL-based | Different SQL generated for each language
One data warehouse for each language | Connection-based | Different database connection for each language

If you are creating a new data warehouse and plan to implement DI, and you
also use Intelligent Cubes, it is recommended that you use a SQL-based DI
model, with different tables and/or columns for each language. Because a
single Intelligent Cube cannot connect to more than one data warehouse,
using a connection-based DI model requires a separate Intelligent Cube to
be created for each language. This is very resource-intensive. For
information about Intelligent Cubes in general and details on designing
Intelligent Cubes for an internationalized environment, see the
MicroStrategy In-memory Analytics Help.

Connecting the System to the Translation Database


After languages have been enabled for the project, you must configure the
system so that MicroStrategy can retrieve the translated data. This
configuration varies depending on the data internationalization (DI) model
used:

l Connection-based DI model: You must specify a database connection for each language.

l SQL-based DI model: You must specify a column pattern or table pattern for each language.

These models are described in detail in Storing Translated Data: Data Internationalization Models, page 1952.


Connecting the System to a Single Database: SQL-Based DI Model

For a detailed explanation of how to set up tables and columns to support SQL-based data internationalization, see the Project Design Help, Internationalization through tables and columns or databases section. The Project Design Help provides extensive examples and images of table and column naming patterns, and explains the use of only tables, only columns, or both tables and columns, the use of logical views, and so on.

Your table suffixes for languages should be consistent and unified across
the entire warehouse. For example, if you have Spanish translations in your
warehouse, the suffix should be _SP for all tables that include Spanish
translations, and not _SP, _ES, _EP, and so on.
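A quick sanity check for suffix consistency can be scripted. The sketch below flags Spanish tables that stray from an agreed _SP suffix; the table names and the list of stray suffix variants are invented for this example:

```python
# Invented warehouse table names; _ES here is the kind of stray suffix
# the guideline above warns against.
tables = ["LU_REGION_SP", "LU_YEAR_SP", "LU_ITEM_ES", "LU_CATEGORY_SP"]

AGREED_SUFFIX = "_SP"
SPANISH_SUFFIXES = ("_SP", "_ES", "_EP")  # variants that indicate Spanish data

strays = [
    t for t in tables
    if t.endswith(SPANISH_SUFFIXES) and not t.endswith(AGREED_SUFFIX)
]
print(strays)  # → ['LU_ITEM_ES']  rename these to use _SP
```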

For detailed steps to connect the system to your translation database, see
the Project Design Help, Enabling data internationalization through SQL
queries section. The Project Design Help includes details to select your
table or column naming pattern, as well as functions to use if your naming
pattern does not use suffixes.

If you are changing from one DI model to another, you must reload the
project after completing the steps above. Settings from the old DI model are
preserved, in case you need to change back.

Connecting the System to More than One Database: Connection-Based DI Model

If you are using a connection-based DI model, you must specify a database connection for each data warehouse that stores translated data.

Connection mapping can also be performed using Command Manager.

For a detailed explanation of how to set up your databases to support data internationalization, see the Project Design Help, Internationalization through tables and columns or databases section. The Project Design Guide provides extensive examples and images of translation table structures in


different databases, as well as important restrictions on logical views and supported character sets.

The database connection that you use for each data warehouse must be
configured in MicroStrategy before you can provide translated data to
MicroStrategy users.

The procedure in the Project Design Guide assumes that you will enable the
connection-based DI model. If you decide to enable the SQL-based model, you
can still perform the steps to enable the connection-based model, but the
language-specific connection maps you create in the procedure will not be
active.

The physical schemas of all data warehouses to be used for data internationalization should be identical.

You must have the Configure Connection Map privilege, at either the user level
or the project level.

Objects displayed in the Connection Mapping Editor are limited to those objects the user has Browse and Use permissions for.

For detailed steps to connect the system to more than one data warehouse,
see the Project Design Help, Enabling data internationalization through
connection mappings section.

If you are changing from one DI model to another, you must reload the
project after completing the steps in the Project Design Help. Settings from
the old DI model are preserved, in case you need to change back.

You can delete a connection mapping by right-clicking the connection map and selecting Delete.

Supporting Data Internationalization for Attribute Elements


If you are using the SQL-based DI model, you must perform an additional
step to support the display of translated attribute elements in reports and
documents.


If the project designer has not already done so, you must define attribute
forms in the project so that they can be displayed in multiple languages.
Detailed information and steps to define attribute forms to support multiple
languages are in the Project Design Help, Supporting data
internationalization for attribute elements section.

Enabling or Disabling Languages in the Project to Support DI


For languages that are stored in your data warehouse to be available for use
in MicroStrategy, you must configure the project to support those languages.

You can also add a custom language to the list of languages available to be
enabled for data internationalization. For steps to add a custom language to
the project, see Adding or Removing a Language in the System, page 1989.

Enabling Languages for Data Internationalization


After translated data has been stored, you must configure the project to
establish which languages will be supported for data internationalization
(DI). You must perform this procedure whether you store translated data
using a SQL-based DI model or a connection-based DI model.

To Enable Data Internationalization Languages in a Project

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language, then select Data.

4. Select the Enable data internationalization check box.

5. Select the DI model that you are using. For details, see Storing
Translated Data: Data Internationalization Models, page 1952.


l For a SQL-based DI model, select SQL based.

l For a connection-based DI model, select Connection mapping based.

6. Click Add.

7. Languages displayed in bold blue are those languages that have been
enabled for the project to support translated metadata objects, if any.
This list is displayed as a starting point for the set of languages you can
choose to enable for supporting data internationalization.

l To display all available languages, or if no metadata languages are displayed, clear the Display metadata languages only check box.

l To add a new language, make sure the Display metadata languages only check box is cleared, and then click New. For steps to create a custom language, see Adding or Removing a Language in the System, page 1989.

8. Select the check box next to any language or languages that you want
to enable for this project.

If no languages are selected to be enabled to support data internationalization, then data internationalization is treated by the system as disabled.

9. Click OK.

10. In the Default column, select one language to be the default language
for data internationalization in the project. This selection does not have
any impact on the project or how languages are supported for data
internationalization. Unlike the MDI default language, this DI default
language can be changed at any time.

If no default DI language is selected, data internationalization is treated by the system as disabled.

11. For each language you have enabled, define the column/table naming
pattern or the connection-mapped warehouse, depending on which DI
model you are using (for information on DI models and on naming
patterns, see Storing Translated Data: Data Internationalization
Models, page 1952):

l SQL-based DI model: If you selected the SQL-based DI model above, click the Column Pattern and Table Pattern columns next to one of the languages you will support. Type the column or table prefix or suffix and click OK. For examples, click Help.

l Some languages may have the same suffix, for example, English US and English UK. You can also specify a NULL suffix.

12. Click OK.


13. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.

Disabling a Language for Data Internationalization


You can use the steps below to disable a language for a project. When a
language has been disabled in a project, that language is no longer
available for users to select as a language preference, and the language
cannot be seen in any translation-related interfaces such as an object's
default language in its Properties - International dialog box. Any translations
for the disabled language are not removed from the data warehouse with
these steps.

If a user has selected the language as a language preference, the preference will no longer be in effect once the language is disabled. The project's default language will take effect.

If you remove the language currently set as the default data internationalization language, the system automatically selects the first language in the list of remaining enabled languages as the new default language. This new default data internationalization language should not have any impact on your project.

If you disable all languages for data internationalization (DI), the system
treats DI as disabled. Likewise, if you do not have a default language set for
DI, the system treats DI as disabled.

To Disable Data Internationalization Languages in a Project

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.


3. On the left side of the Project Configuration Editor, expand Language, then select Data.

4. On the right side, under Selected Languages, clear the check box for
the language that you want to disable for the project.

5. Click OK.

6. Perform the following steps depending on how your project is affected:

l Empty any caches or Intelligent Cubes containing content in the disabled DI language.

l Language disabling will only affect MDX cubes and regular reports
and documents if an attribute form description in the disabled
language exists in the cube or report. If this is true, the cube, report,
or document cannot be published or used. The cube, report, or
document designer must remove attribute forms in the disabled
language before the cube/report/document can be used again.

7. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Making Translated Data Available to Users


After you have performed the necessary steps to configure metadata object
translation and/or data translation in the system, you can specify which
language(s) should be displayed for various users in the interface and in
reports (both report objects and report results). You can specify language
preferences at the project level and at the all-projects level. By selecting
various levels of language preferences, you specify which language is
preferred as a fallback if a first choice language is not available.

These language preferences are for metadata languages only. All data internationalization languages fall back to the project's default language if a DI preference is not enabled or a translation of a specific report cell is not available.

The following sections show you how to select language preferences based
on various priority levels within the system, starting with a section that
explains the priority levels:

l Selecting Preferred Languages for Interfaces, Reports, and Objects, page 1963

l Selecting the Interface Language Preference, page 1965

l Configuring Metadata Object and Report Data Language Preferences, page 1967

l Selecting the Object Default Language Preference, page 1980

Selecting Preferred Languages for Interfaces, Reports, and Objects
After translated data is stored in your data warehouse and/or metadata
database, and languages have been enabled for the project, you must
specify which languages are the preferred languages for the project and the
user. These selected languages are called language preferences.

The following image shows the different parts of the MicroStrategy environment that display translated strings based on the language preferences:


The following language preferences can be configured:

l Interface Language: Determines the language in which menu options, dialog box text, and so on are displayed. For steps to set this preference, see Selecting the Interface Language Preference, page 1965.

l Metadata objects: Determines the language that is displayed for MicroStrategy objects that come from the metadata database, such as metric names, report names, system folder names, and so on. For steps to set this preference, see Configuring Metadata Object and Report Data Language Preferences, page 1967.

l Report data: Determines the language that is displayed for report results that come from your data warehouse, such as attribute element names. For steps to set this preference, see Configuring Metadata Object and Report Data Language Preferences, page 1967.

l Object default language: Determines the fallback language for MicroStrategy objects. This language is used if a report is executed in a language that the object lacks a translation for. For steps to set or change this default preference, see Selecting the Object Default Language Preference, page 1980.


Each language preference can be configured independently of the others.


For example, it is possible to have a report that displays all metadata object
names in French, while any data from the data warehouse is displayed in
English, and the interface is translated into Spanish. However, for best
performance it is recommended that you use a unified language display in
Developer. For example, if you use French for the interface, the metadata
objects language preference and the report data language preference, as
well as number and date preferences, should also be in French.

Selecting the Interface Language Preference


The interface language preference determines what language Developer
menus, editors, dialog boxes, monitors and managers, and other parts of the
Developer software are displayed in. Use the steps below to set this
preference.

Configuring the Interface Language Preference

1. In Developer, log in to the project.

2. From the Tools menu, select Preferences.

3. On the left, expand International and select Language. The International: Language dialog box is displayed, as shown below:


4. From the Interface Language drop-down list, select the language that you want to use as the interface default language.

5. The interface language preference can also be used to determine the language used for the metadata objects and report data, if the Developer level language preference is set to Use the same language as MicroStrategy Developer. For more information on the Developer level language preference, see Selecting the Developer Level Language Preference, page 1975.

6. Select OK.

7. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.


Configuring Metadata Object and Report Data Language Preferences
There are several levels at which metadata and report data languages can
be specified in MicroStrategy. Lower level languages are used by the system
automatically if a higher level language is unavailable. This ensures that end
users see an appropriate language in all situations.

Language preferences can be set at six different levels, from highest priority
to lowest. The language that is set at the highest level is the language that is
always displayed, if it is available. If that language does not exist or is not
available in the metadata or the data warehouse, the next highest level
language preference is used.

If a language preference is not specified, or is set to Default, MicroStrategy automatically uses the next lower priority language preference. If none of these language preferences are set, the interface preferred language is used.
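The fallback behavior can be pictured as a walk down the priority list. The sketch below is a simplified model; the level names are illustrative labels for the levels described in this section, and the preference data is invented:

```python
# Walk the preference levels from highest to lowest priority; None means
# "Default", i.e. fall through to the next level.
def resolve_language(preferences, interface_language):
    """Return the first concrete language preference, else the interface language."""
    for level in ("user_project", "user_all_projects", "project_all_users",
                  "developer_all_users", "project_default"):
        lang = preferences.get(level)
        if lang is not None:
            return lang
    return interface_language

prefs = {"user_project": None, "user_all_projects": None,
         "project_all_users": "German (Germany)"}
print(resolve_language(prefs, "English (US)"))  # → German (Germany)
print(resolve_language({}, "English (US)"))     # → English (US)
```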

When an object is created, its default object language is automatically set to match the creator's metadata language preference. If the creator has their metadata language preference set to Default, the new object's default language is decided based on the rules in this section: the system first tries a default language set for all users of the project, then a language preference set for all users of Developer, then the default language set for the project (as shown in the table below).

The following table describes each level, from highest priority to lowest
priority, and points to information on how to set the language preference at
each level.

• End user preference settings override any administrator preference settings, if the two settings conflict.

• Distribution Services deliveries are one exception to the hierarchy below. For details, see Selecting the Machine Level Language Preference, page 1978.


The language preference levels are listed below, from highest to lowest priority, with the setting location for end users and for administrators:

1. User-Project level: The language preference for a user for a specific project.
   End users: In Web, use the Preferences link at the top of any page. In Developer, from the Tools menu, select My Preferences.
   Administrators: Set in the User Language Preference Manager. See Selecting the User-Project Level Language Preference, page 1970.

2. User-All Projects level: The language preference for a user for all projects.
   End users: In Web, use the Preferences link at the top of any page. In Developer, from the Tools menu, select My Preferences.
   Administrators: Set in the User Editor. See Selecting the User-All Projects Level Language Preference, page 1972.

3. Project-All Users level: The language preference for all users in a specific project.
   End users: Not applicable.
   Administrators: In the Project Configuration Editor, expand Languages and select User Preferences. See Selecting the All Users in Project Level Language Preference, page 1974.

4. Developer level: The interface language preference for all users of Developer on that machine, for all projects.
   End users and administrators: Set in the Developer Preferences dialog box. For steps to specify this language, see Selecting the Developer Level Language Preference, page 1975.

5. Machine level: The language preference for all users on a given machine.
   End users and administrators: Set on the user's machine and within the user's browser settings. For steps to specify this language, see Selecting the Machine Level Language Preference, page 1978.

6. Project Default level: The project default language set for MDI; the language preference for all users connected to the metadata.
   End users: Not applicable.
   Administrators: Set in the Project Configuration Editor. For steps to specify this language, see Configuring the Project Default Level Language Preference, page 1979.

For example, a user has their User-Project Level preference for Project A set to English, and their User-All Projects Level preference set to French. If the user logs in to Project A and runs a report, the language displayed is English. If the user logs in to Project B, which does not have a User-Project Level preference specified, and runs a report, the project is displayed in French. This is because there is no User-Project Level preference for Project B, so the system automatically uses the next lower language preference level (User-All Projects) to determine the language to display.
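The fallback walk in this example can be sketched in a few lines. This is an illustrative model only; the level names, the preferences dictionary, and the resolve_language helper are hypothetical, not MicroStrategy APIs:

```python
# Hypothetical sketch of the six-level language preference fallback.
# Levels are ordered from highest to lowest priority, as in the list above.
PRIORITY = [
    "user_project",       # User-Project level (highest)
    "user_all_projects",  # User-All Projects level
    "project_all_users",  # Project-All Users level
    "developer",          # Developer level
    "machine",            # Machine level
    "project_default",    # Project Default level (lowest)
]

def resolve_language(preferences, available_languages):
    """Return the first preference that is set and available; otherwise
    walk down the priority list, ending at the project default."""
    for level in PRIORITY:
        lang = preferences.get(level)        # None or "Default" means unset
        if lang and lang != "Default" and lang in available_languages:
            return lang
    return preferences.get("project_default")

# Worked example from the text: User-Project = English for Project A,
# User-All Projects = French; Project B has no User-Project preference.
prefs_project_a = {"user_project": "English", "user_all_projects": "French"}
prefs_project_b = {"user_all_projects": "French"}
available = {"English", "French"}

print(resolve_language(prefs_project_a, available))  # English
print(resolve_language(prefs_project_b, available))  # French
```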

These language preferences apply to strings translated in both the metadata and the data warehouse. However, MicroStrategy handles missing translations differently, depending upon whether the string is translated in the metadata or the data warehouse:


• Metadata: When a translation for an object in the metadata is missing in the preferred language, the object default language preference is used. For more information about the object default language preference, see Selecting the Object Default Language Preference, page 1980.

• Data warehouse: When a translation for data in the data warehouse is missing in the preferred language (the column or table is present in the data warehouse but is empty), the report returns no data.

The following sections provide steps to configure each preference level, starting from the highest priority and ending at the lowest priority.

Selecting the User-Project Level Language Preference

The User-Project Level language preference is the language preference for a given user for a specified project. It is the highest priority language setting; to see the hierarchy of language preference priorities, see the table in Configuring Metadata Object and Report Data Language Preferences, page 1967.

This preference is specified in the User Language Preference Manager in Developer. Use the steps below to set this preference.

If an object has an empty translation in a user's chosen project language preference, the system defaults to displaying the object's default language, so it is not necessary to add translations for objects that are not intended to be translated.

Selecting the User-Project Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. Right-click the project that you want to set the language preference for
and select Project Configuration.


3. On the left side of the Project Configuration Editor, expand Languages and select User Preferences.

4. On the right side, under User Language Preference Manager, click Modify. The User Language Preference Manager opens.

5. From the Choose a project to define user language preferences drop-down menu at the top left, select the appropriate project.

6. From the list on the left side of the User Language Preferences Manager, select the users whose User-Project level language preference you want to change, and click > to add them to the list on the right. You can narrow the list of users displayed on the left by doing one of the following:

• To search for users in a specific user group, select the group from the drop-down menu that is under the Choose a project to define user language preferences drop-down menu.

• To search for users containing a certain text string, type the text string in the Find field, and click the icon.


This returns a list of users matching the text string you typed.

Previous strings you have typed into the Find field can be accessed
again by expanding the Find drop-down menu.

7. On the right side, select the user(s) whose User-Project level preferred language you want to change, and do the following. You can select multiple users using the CTRL key.

• From the drop-down menu in the Metadata column, select the language to apply to translated metadata objects. This language is displayed for the selected user(s) when connecting to the selected project.

• From the drop-down menu in the Data column, select the language to apply to report results. This language is displayed for the selected user(s) when connecting to the selected project.

8. Click OK.

Once the user language preferences have been saved, users can no
longer be removed from the Selected list.

9. Click OK.

10. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.

Selecting the User-All Projects Level Language Preference

The User-All Projects level language preference determines what language is applied to all projects that a specific user sees when connected to a project source, unless a higher priority language preference has been specified for the user. Use the steps below to set this preference.


If the User-Project language preference is specified for the user, the user
will see the User-All Projects language only if the User-Project language is
not available. To see the hierarchy of language preference priorities, see
the table in Configuring Metadata Object and Report Data Language
Preferences, page 1967.

Selecting the User-All Projects Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. In the Folder List on the left, within the appropriate project source,
expand Administration, expand User Manager, and navigate to the
user that you want to set the language preference for.

3. Double-click the user.

4. On the left side of the User Editor, expand the International category
and select Language.

5. On the right side of the User Editor, do the following, depending on whether you have configured metadata object translation (MDI), data warehouse translation (DI), or both:

• From the Default metadata language preference for this user drop-down menu, select the language that you want to be applied to translated metadata strings.

• From the Default data language preference for this user drop-down menu, select the language that you want to be applied to translated data warehouse strings.

6. Click OK.

7. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.


Selecting the All Users in Project Level Language Preference

The All Users In Project level language preference determines the language
that will be displayed for all users that connect to a project, unless a higher
priority language is specified for the user. Use the steps below to set this
preference.

If the User-Project or User-All Projects language preferences are specified


for the user, the user will see the All Users In Project language only if the
other two language preferences are not available. To see the hierarchy of
language preference priorities, see the table in Configuring Metadata Object
and Report Data Language Preferences, page 1967.

Selecting the All Users in Project Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. In the Folder List on the left, select the project. From the
Administration menu, select Projects, then Project Configuration.

3. On the left side of the Project Configuration Editor, expand Languages and select User Preferences. The Language - User Preferences dialog box is displayed.


4. Do the following, depending on whether you have configured metadata object translation (MDI) or data warehouse translation (DI), or both:

• From the Metadata language preference for all users in this project drop-down menu, select the language that you want to be displayed for metadata object names in this project.

• From the Data language preference for all users in this project drop-down menu, select the language that you want to be displayed for report results in this project.

5. Click OK.

6. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Selecting the Developer Level Language Preference

The Developer level language preference determines the default language for all objects displayed within Developer, unless a higher priority language preference has been specified. This is the same as the interface preference.

If the User-Project, User-All Projects, or All Users In Project language preferences are specified, the user sees the Developer language only if the other three language preferences are not available. To see the hierarchy of language preference priorities, see the table in Configuring Metadata Object and Report Data Language Preferences, page 1967.

This language preference must be configured to match one of two other language preferences: the Interface language preference or the Machine level language preference. For information about the Interface language preference, see Selecting the Interface Language Preference, page 1965. For information about the Machine level language preference, see Selecting the Machine Level Language Preference, page 1978.

Selecting the Developer Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. From the Tools menu, select MicroStrategy Developer Preferences.

3. Expand the International category and select Language. The International - Language dialog box opens.


4. From the Language for metadata and warehouse data if user and project level preferences are set to default drop-down menu, select one of the following:

• If you want the Developer language preference to be the same as the Interface language preference, select Use the same language as MicroStrategy Developer. For information about configuring the Interface language preference, see Selecting the Interface Language Preference, page 1965.

• If you want the Developer language preference to be the same as the Machine-level language preference, select Use language from Regional Settings. For information about configuring the Machine-level language preference, see Selecting the Machine Level Language Preference, page 1978.


5. Select the language that you want to use as the default Developer
interface language from the Interface Language drop-down menu.

6. Click OK.

7. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Selecting the Machine Level Language Preference

This preference determines the language that is used on all objects on the
local machine. MicroStrategy Web uses the language that is specified in the
user's web browser if a language is not specified at a level higher than this
one.

• If the User-Project, User-All Projects, All Users In Project, or Developer language preferences are specified, the user sees the Machine language only if the other four language preferences are not available. To see the hierarchy of language preference priorities, see the table in Configuring Metadata Object and Report Data Language Preferences, page 1967.

• A MicroStrategy Distribution Services delivery (such as an email, file, or printer delivery) uses a different language resolution logic: if the User-Project, User-All Projects, All Users in Project, and Developer languages cannot be displayed, the delivery defaults to the Project Default level language preference, followed by the Machine level language preference. This is because Distribution Services runs without a client session on the Intelligence Server machine; if the Machine level language took precedence, all users receiving delivered content would receive it in the Intelligence Server machine's language. Instead, the project's default language is the fallback language for Distribution Services deliveries.
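The Distribution Services exception amounts to swapping the last two levels of the fallback order. A small illustrative sketch (the level names and the first_available helper are hypothetical, not MicroStrategy APIs):

```python
# Interactive sessions fall back to the machine language before the
# project default; Distribution Services deliveries reverse those two.
INTERACTIVE_ORDER = ["user_project", "user_all_projects",
                     "project_all_users", "developer",
                     "machine", "project_default"]

DELIVERY_ORDER = ["user_project", "user_all_projects",
                  "project_all_users", "developer",
                  "project_default", "machine"]

def first_available(order, prefs):
    """Return the first level in 'order' that has a language set."""
    for level in order:
        if prefs.get(level):
            return prefs[level]
    return None

# As on a headless Intelligence Server: only the project default and the
# server machine's own language are set.
prefs = {"project_default": "German", "machine": "English"}
print(first_available(DELIVERY_ORDER, prefs))     # German
print(first_available(INTERACTIVE_ORDER, prefs))  # English
```

This is why delivered content arrives in the project's default language rather than in the locale of the Intelligence Server machine.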


To select the Machine level language preference on a Windows machine, from the Start menu, select Control Panel, then Regional and Language Options. Consult your machine's Help for details on using the language options.
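If you want to verify programmatically which regional settings a machine reports, Python's standard locale module offers a quick check. This is only a convenient inspection aid; MicroStrategy reads the operating system's regional settings directly:

```python
# Inspect the machine's regional settings with the Python standard library.
import locale

locale.setlocale(locale.LC_ALL, "")   # adopt the OS regional settings
print(locale.getlocale())             # e.g. ('en_US', 'UTF-8')
print(locale.localeconv()["decimal_point"])  # decimal separator in use
```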

Configuring the Project Default Level Language Preference

This language preference specifies the default language for the project. This
language preference has the lowest priority in determining the language
display. Use the steps below to set this preference.

• If the User-Project, User-All Projects, All Users In Project, Developer, or Machine-level language preferences are specified, the user sees the Project Default language only if the other five language preferences are not available. To see the hierarchy of language preference priorities, see the table in Configuring Metadata Object and Report Data Language Preferences, page 1967.

• A MicroStrategy Distribution Services delivery (such as an email, file, or printer delivery) uses a different language resolution logic: if the User-Project, User-All Projects, All Users in Project, and Developer languages cannot be displayed, the delivery defaults to the Project Default level language preference, followed by the Machine level language preference. This is because Distribution Services runs without a client session on the Intelligence Server machine; if the Machine level language took precedence, all users receiving delivered content would receive it in the Intelligence Server machine's language. Instead, the project's default language is the fallback language for Distribution Services deliveries.

Selecting the Project Default Language Preference

The project default language is selected either when a project is first created, or the first time metadata languages are enabled for the project. It cannot be changed after that point. The following steps assume the project default language has not yet been selected.

1. Log in to the project as a user with Administrative privileges.

2. Select the project that you want to set the default preferred language
for.

3. From the Administration menu, select Project, then Project Configuration.

4. On the left side of the Project Configuration Editor, expand Language. Do one or both of the following, depending on whether you have configured metadata object translation (MDI) or data warehouse translation (DI), or both:

• To specify the default metadata language for the project, select Metadata from the Language category. Then select Default for the desired language.

• To specify the default data language for the project, select Data from the Language category. Then select Default for the desired language.

5. Select OK.

6. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Selecting the Object Default Language Preference


Each MicroStrategy object can have its own default language. The
translation for the object default language is used when the system cannot
find or access a translation for the object in the language specified as the
user or project preference.


This preference is especially useful for personal objects, since most personal objects are used in only one language, the owner's language. The object default language can be set to any language supported by the project in which the object resides.

Some objects may not have their object default language preference set, for
example, if objects are merged from an older, non-internationalized
MicroStrategy system into an upgraded, fully internationalized environment.
In this case, for those objects that do not have a default language, the
system automatically assigns them the project's default language.

This is not true for newly created objects within an internationalized environment. Newly created objects are automatically assigned the creator's metadata language preference. For details on the metadata language, see Configuring Metadata Object and Report Data Language Preferences, page 1967.
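The assignment rule for a new object's default language, as described earlier, can be sketched as follows. The helper and its parameters are illustrative only, not a MicroStrategy API:

```python
# Hypothetical sketch: a new object's default language is the creator's
# metadata preference; "Default" walks down to the project-wide user
# preference, then the Developer preference, then the project default.
def object_default_language(creator_md_pref, project_all_users_pref,
                            developer_pref, project_default):
    for candidate in (creator_md_pref, project_all_users_pref,
                      developer_pref):
        if candidate and candidate != "Default":
            return candidate
    return project_default

print(object_default_language("Default", None, "French", "English"))  # French
print(object_default_language("Spanish", None, "French", "English"))  # Spanish
```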

When duplicating a project, objects in the source that are set to take the
project default language will take whatever the destination project's default
language is.

Use the steps below to configure the object default language.

For the hierarchy of language preferences, see the table in Configuring Metadata Object and Report Data Language Preferences, page 1967.

Configuring the Object Default Language Preference

1. Log in to the project source that contains the object as a user with
administrative privileges.

2. Right-click the object and select Properties.

• You can set the default language for multiple objects by holding the Ctrl key while selecting multiple objects.


3. Select International. The Properties - International dialog box is displayed.

If the International option is missing, the object is not supported for translation. For example, there is no reason to translate a table name for a schema object (such as LU_YEAR), so this object does not have the International option available.

4. From the Select the default language for the object drop-down
menu, select the default language for the object(s).

5. Click OK.

Achieving the Correct Language Display


The following table lists many of the locations where you might want to
display a given language for users. It tells you where to configure the system
so that the language is displayed or available for selection. For some
language displays, there are different steps in Developer than in
MicroStrategy Web.


Translation or language display that you want to achieve, and where to enable it:

• Number format (decimal, thousands separator, currency symbol, weight): In Developer, use the regional settings on the Developer user's machine. In Web, click MicroStrategy > Preferences. You can create a dynamic currency format that changes according to the locale's default currency symbol. The dynamic format applies to grid reports, graph reports, and documents displayed in MicroStrategy Web, MicroStrategy Mobile, and MicroStrategy Office and exported to PDF. For a graph report, the dynamic currency is applied to the data label.

• Currency conversion: Use a Value prompt on a metric. See the Advanced Prompts section of the Advanced Reporting Help.

• Date format and separators: In Developer, use the regional settings on the Developer user's machine. In Web, go to MicroStrategy > Preferences > Languages > Show Advanced Options. In Web, if the browser is set to a language unsupported in MicroStrategy and the user's preferences are set to Default, the date/time and number formatting display in English.

• Autostyle fonts that support a given language: In Developer, right-click and Format the attribute or metric (column header, value, or subtotal) using the font you prefer (on the Font tab, specify the font). From the Grid menu, select Save Autostyle As and either overwrite the existing autostyle or create a new one.

• Fonts that support all languages: Few fonts support all languages. One that does is Arial Unicode MS, which is licensed from Microsoft.

• PDFs, portable PDFs, bookmarks in PDFs, and language display in a PDF: Embed fonts when you are designing the document; this ensures that the fonts selected by the document designer are used to display and print the PDF, even on machines that do not have the fonts installed. Embedding fonts lets you use language fonts other than Simplified/Traditional Chinese, English, Japanese, Korean, and Western European in PDFs; provide a true Unicode environment, where one document can contain different languages; and create portable PDFs to email and to publish in Web. To embed fonts, in the Document Editor in Developer, go to Format > Document Properties > Export > Embed fonts in Report Services document PDF. To view embedded fonts in Developer, the fonts must be installed on the Developer machine and the Intelligence Server machine. To view embedded fonts in Web, the fonts must be installed on the Intelligence Server machine. To display PDF bookmarks with the correct font, the language pack must be installed on the viewer's machine; this is true for any language other than English or Western European.

• Character sets in Teradata databases: The Character Column Option and National Character Column Option VLDB properties let you support the character sets used in Teradata. For examples and details to enable these properties, see Chapter 1, SQL Generation and Data Processing: VLDB Properties.

• Double-byte language support: In Developer, from the Tools menu, select Developer Preferences. In Web, click MicroStrategy > Preferences.

• User changing own language: In Developer, go to Tools > Developer Preferences. The list of languages to choose from comes from the languages enabled for a project; see Enabling Metadata Languages for an Existing Project, page 1939.

• Default language preference for a particular user: In the User Editor, expand International, and then select Language. An administrator needs the Use User Editor and Configure Language Settings privileges, and ACL permissions to modify the user object.

• Default language for all users in a project: Right-click a project, select Project Configuration > Language > User Preferences.

• Different default language for a single user in different projects: Right-click a project, select Project Configuration > Language > User Preferences.

• Translating the project's default language: By default, the project's default language cannot be translated in the Object Translation Editor. The first column in the editor corresponds to the project's default language. To translate terms in the default language, in the Object Translation Editor, click Options at the top of the Editor, then move the default language from the Selected View Languages box to the Selected Edit Languages box.

• Function names: Function names are not translated. The MicroStrategy system expects function names to be in English.

• An individual object: Use the Object Translation Editor. To access this, right-click the object and select Translate.

• Caches in an internationalized environment: See Caching and Internationalization, page 1932.

• Intelligent Cubes: It is recommended that you use a SQL-based DI model when setting up internationalization, as described in Providing Data Internationalization, page 1951. Because a single Intelligent Cube cannot connect to more than one data warehouse, using a connection-based DI model requires a separate Intelligent Cube to be created for each language, which can be resource-intensive. Details on this cost-benefit analysis, steps to enable a language when publishing an Intelligent Cube, and background information on Intelligent Cubes are in the MicroStrategy In-memory Analytics Help.

• Subscriptions in an internationalized environment: Subscribed-to reports and documents behave like standard reports and documents, and are delivered in the language selected in My Preferences or User Preferences.

• Repository Translation Wizard list of available languages: Enable the languages the project supports for metadata objects (see Enabling Metadata Languages for an Existing Project, page 1939).

• Metadata object names and descriptions (such as report names, metric names, system folder names, and embedded descriptors such as attribute aliases, prompt instructions, and so on): For a new project being created, select these in Architect; you can view the database table columns used for internationalization as you create the project. For an existing project, see Enabling Metadata Languages for an Existing Project, page 1939.

• Configuration objects in Developer: Displayed according to the User-Project level language preference. Set this by right-clicking the project, selecting My Preferences > International, and setting the Metadata language for All Projects.

• Attribute elements (for example, the Product attribute has an element called DVD player): First translate the element name in your data warehouse. Then enable the language; see Enabling Languages for Data Internationalization, page 1958.

• Project name and description: In the Project Configuration Editor, expand Project Definition, select General > Modify > International > Translate. You can type both a project name and a description in the Object Description field.

• When designing a project using Architect, see columns in the Warehouse Tables area that support data internationalization: In Architect, go to Options > Settings. On the Display Settings tab, select Display columns used for data internationalization.

• Enable a new language for a project to support that language: See Enabling Metadata Languages for an Existing Project, page 1939. The user adding the language must have Browse permission for that language object's ACL.

• Enable a custom language for a project to support that language: See Adding a New Language to the System, page 1990. Then see Enabling Metadata Languages for an Existing Project, page 1939. The user adding the language must have Browse permission for that language object's ACL.

• Searching the project: Searches are conducted in the user's preferred metadata language by default. A language-specific search can be conducted; open a project, then from the Tools menu select Search for Objects.

• Project or object migration, or duplication: Object Manager, Project Merge, and the Project Duplication Wizard contain translation-specific conflict resolution options for migrating translated objects between projects.

• Derived elements: In the Derived Element Editor, go to File > Properties > International.

• MicroStrategy Office user interface and Excel format languages: This information applies to the legacy MicroStrategy Office add-in, the add-in for Microsoft Office applications, which is no longer actively developed. It was replaced by a new add-in, MicroStrategy for Office, which supports Office 365 applications; the initial version does not yet have all the functionality of the previous add-in. If you are using MicroStrategy 2021 Update 2 or a later version, the legacy MicroStrategy Office add-in cannot be installed from Web. For more information, see the MicroStrategy for Office page in the Readme and the MicroStrategy for Office Help. In MicroStrategy Office, go to Options > General > International.

• MDX (Multidimensional Expressions) data sources: MicroStrategy passes a user's required language as a database connection parameter to the MDX cube provider; the cube provider supplies the correct translations.

Maintaining Your Internationalized Environment


You can add or remove languages from your MicroStrategy system, and you
can edit the language objects in the system. You can use MicroStrategy
Command Manager to automate several maintenance tasks. MicroStrategy
Object Manager and Project Merge contain some translation-specific options
for conflict resolution rules.

These maintenance processes and tools are described below. This section
also covers security and specialized translator user roles.


Using Command Manager to Automate Language Maintenance Tasks
Several Command Manager scripts are designed to make language
maintenance and user maintenance related to internationalized
environments easier and faster. These scripts include:

l List all languages (metadata or data) by project, or all languages contained under Administration > Configuration Managers > Languages in Developer's Folder List.

l List available languages (metadata or data) at a specified level, such as by user, by project, or by user and project.

l List resolved languages, which are the languages that are displayed to users from among the list of possible preferences.

l Alter languages at a specified level, which changes language preferences for a set of users or for a project.

For these and all the other scripts you can use in Command Manager, open
Command Manager and click Help.
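These tasks can also be automated end to end by generating the script file programmatically. The sketch below is illustrative only: the Command Manager statement text shown is an assumption, not verified syntax, so confirm the exact grammar against the outlines in Command Manager Help before running any generated script.

```python
# Sketch: assemble a Command Manager script (.scp) for language maintenance.
# The statement text below is an assumption for illustration; confirm the
# exact grammar against the outlines in Command Manager Help before use.

def build_language_script(project: str, user: str, language: str) -> str:
    """Return a Command Manager script as a single string."""
    statements = [
        # List every language enabled for the project (assumed syntax).
        f'LIST ALL LANGUAGES FOR PROJECT "{project}";',
        # Change one user's metadata language preference (assumed syntax).
        f'ALTER LANGUAGE PREFERENCE FOR USER "{user}" '
        f'METADATA LANGUAGE "{language}" FOR PROJECT "{project}";',
    ]
    return "\n".join(statements) + "\n"

script = build_language_script("MicroStrategy Tutorial", "jsmith", "French")
with open("language_maintenance.scp", "w", encoding="utf-8") as fh:
    fh.write(script)
print(script)
```

The resulting file can then be executed through Command Manager or scheduled as part of routine maintenance.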

Moving Translated Objects Between Projects


You can use Object Manager and Project Merge to migrate translated
objects between projects. You apply the same MicroStrategy conflict
resolution rules as you use when merging non-translated objects, but you
use these rules specifically for the translated names and descriptions that
are part of each translated object. You can also merge translations even if
objects are identical. For details on all the options for migrating translated
objects using Object Manager or Project Merge, open Object Manager or
Project Merge and click Help.

Adding or Removing a Language in the System


You can add or remove languages and language variants from the system
using the steps below.


Supporting Character Sets

Languages require a wide range of character sets to represent data. To support the languages you plan to use in your MicroStrategy projects, you must use databases that support the required character sets and that are configured accordingly. To determine whether your database supports the character sets required to display the languages you want to support, see your third-party database documentation.

Specifically, the database that hosts the metadata must be configured with a code page that supports the languages you intend to use in your MicroStrategy project.

Adding a New Language to the System


You can add new languages to MicroStrategy. Once added, new languages are available to be enabled for a project to support internationalization.

Variant languages (also called custom languages) can also be added. For
example, you can create a new language called Accounting, based on the
English language, for all users in your Accounting department. The language
contains its own work-specific terminology.

You must have the Browse permission for the language object's ACL (access
control list).

To Add a New Language to the System

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language, then select either Metadata or Data, depending on whether you want to add the language to support metadata objects or to support data internationalization. For a description of the differences, see About Internationalization, page 1930.

4. Click Add.

5. Click New.

6. Click OK.

7. If the language you added to the system is certified by MicroStrategy, you are prompted to automatically update system object translations that come with MicroStrategy. The information that is automatically updated includes translations of the following:

l System folders: The Public Objects folder and the Schema Objects
folder

l Project objects: Autostyles and object templates

l System configuration objects: Security roles and user groups

8. Click Yes. You can also perform this update later, using the Project
Configuration Editor, and selecting Upgrade in the Project Definition
category.

9. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Languages can also be added using the Languages Configuration Manager, by going to Administration > Configuration Managers > Languages.

After adding a new language, if you use translator roles, be sure to create a
new user group for translators of the new language (see Creating Translator
Roles, page 1996).


To Add a New Interface Language for MicroStrategy Web Users

This procedure provides high-level steps for adding a new language to the
display of languages in MicroStrategy Web. After the new language is
added, Web users can select this language for displaying various aspects of
Web in the new language. For details and best practices to customize your
MicroStrategy Web files, see the MicroStrategy Developer Library (MSDL),
which is part of the MicroStrategy SDK.

1. In the locales.xml file (located by default in <application-root-path>/WEB-INF/xml), add a new line for the language key, using the example below:

   <locale locale-id="13313" language="HI" country="Ind" desc="Hindi" desc-id="mstrWeb.5097" char-set="UTF-8" char-set-excel="UnicodeLittle" codepage="65001" codepage-excel="1252"/>

2. Create resource files for the new language, for generic descriptors, based on existing resource files. For example:

   l For Web messages: Messages_Bundle_HI.properties

   l For number and date formats in the interface: Format_Config_HI.xml

3. If you want to display feature-specific descriptors for the new language, you can create resource files based on existing resource files. For example:

   l DashboardDatesBundle_13313.xml

   l DossierViewerBundle_13313.xml
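The locales.xml edit can also be made programmatically. The following sketch assumes the file has a single root element containing <locale> entries (the root element name locales is an assumption here); the attribute values are taken from the Hindi example above. Inspect your own WEB-INF/xml/locales.xml before applying anything like this.

```python
import xml.etree.ElementTree as ET

# Sketch: append a new <locale> entry to a locales.xml-style fragment.
# The root element name below is an assumption; the attributes mirror
# the Hindi example in the procedure above.
locales_xml = "<locales></locales>"  # stand-in for the real file content
root = ET.fromstring(locales_xml)

ET.SubElement(root, "locale", {
    "locale-id": "13313",
    "language": "HI",
    "country": "Ind",
    "desc": "Hindi",
    "desc-id": "mstrWeb.5097",
    "char-set": "UTF-8",
    "char-set-excel": "UnicodeLittle",
    "codepage": "65001",
    "codepage-excel": "1252",
})

updated = ET.tostring(root, encoding="unicode")
print(updated)
```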

Creating a Language Variant: Multi-Tenancy

A language variant is a language based on a standard language. A variant can be created for a specific purpose in an organization, for example, Executive Business English.


Multi-tenancy is providing numerous groups of users access to the same


MicroStrategy environment, but changing the display of objects and object
names or descriptions based on various configuration settings. For more
information on multi-tenancy, see Multi-Tenant Environments: Object Name
Personalization, page 2051.

Removing a Language from the System


A language cannot be removed from the system while it is in use, that is, while it is enabled for a project. To remove a language, first disable it from each project, as described in the steps below.

If a user has selected the language as a language preference, the preference will no longer be in effect once the language is disabled. The next lower priority language preference will take effect. To see the language preference priority hierarchy, see Configuring Metadata Object and Report Data Language Preferences, page 1967.
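This fallback behaves like a first-match search down the preference hierarchy. The function below is an illustrative model only, not MicroStrategy code; the actual hierarchy is documented in Configuring Metadata Object and Report Data Language Preferences.

```python
def resolve_language(preferences, enabled_languages):
    """Return the first preference that is still enabled for the project.

    `preferences` is ordered from highest to lowest priority. Disabling a
    language simply lets the next lower-priority preference take effect.
    Illustrative model only, not MicroStrategy internals.
    """
    for lang in preferences:
        if lang in enabled_languages:
            return lang
    return None  # nothing matched; the project default applies

prefs = ["French", "German", "English"]
print(resolve_language(prefs, {"French", "German", "English"}))  # French
print(resolve_language(prefs, {"German", "English"}))  # German, once French is disabled
```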

To Remove a Language from the System

1. Disable the language from all projects in which it was enabled:

l To disable a metadata language from a project, see Enabling and Disabling Metadata Languages, page 1939.

l To disable a data language from the project, see Enabling Languages for Data Internationalization, page 1958.

2. For metadata languages, any translations for the disabled language are
not removed from the metadata with these steps. To remove
translations:

l For individual objects: Objects that contain translations in the disabled language must be modified and saved. You can use the Search dialog box from the Tools menu in Developer to locate objects that have translations in a given language.

l For the entire metadata: Duplicate the project after the language has
been removed, and do not include the translated strings in the
duplicated project.

3. For objects that had the disabled language as their default language,
the following scenarios occur. The scenarios assume the project
defaults to English, and the French language is disabled for the project:

l If the object's default language is French, and the object contains both English and French translations, then, after French is disabled from the project, the object will only display the English translation. The object's default language automatically changes to English.

l If the object's default language is French and the object contains only
French translations, then, after French is disabled from the project,
the French translation will be displayed but will be treated by the
system as if it were English. The object's default language
automatically changes to English.

For both scenarios above: If you later re-enable French for the
project, the object's default language automatically changes back to
French as long as no changes were made and saved for the object
while the object had English as its default language. If changes were
made and saved to the object while it had English as its default
language, and you want to return the object's default language back
to French, you can do so manually: right-click the object, select
Properties, select Internationalization on the left, and choose a new
default language.
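The two scenarios can be summarized in a small model. This is an illustrative sketch, not MicroStrategy code; the function name and data shapes are invented for the example.

```python
# Illustrative model of the two scenarios above: what an object displays,
# and what its default language becomes, after its default language is
# disabled for the project. Not MicroStrategy code.

def after_disable(translations, default_lang, disabled_lang, project_default):
    """Return (displayed_string, new_default_language)."""
    if default_lang != disabled_lang:
        return translations[default_lang], default_lang
    if project_default in translations:
        # Scenario 1: a project-default translation exists and is displayed.
        return translations[project_default], project_default
    # Scenario 2: only the disabled-language string exists; it is still
    # displayed, but the system now treats it as the project default.
    return translations[disabled_lang], project_default

# Object with English and French strings, default language French:
print(after_disable({"English": "Revenue", "French": "Chiffre d'affaires"},
                    "French", "French", "English"))
# Object with only a French string, default language French:
print(after_disable({"French": "Chiffre d'affaires"},
                    "French", "French", "English"))
```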

Applying Security and Specialized Translator User Roles for Languages
Each language in MicroStrategy is represented by a specific MicroStrategy object. You can apply security to a language in MicroStrategy by using the language object's ACLs, which permit or deny specific use of an object.

You also use the language object's ACLs in combination with MicroStrategy
user privileges to create a translator or linguist role. This type of role allows
a user to translate terms for an object in a given language, but keeps that
user from making changes to the object's translations in other languages or
making changes to the object's name and description in the object's default
language.

Maintaining Language Objects and Controlling Security


Each language that is part of your MicroStrategy system (whether out of the
box or languages you have added) exists as an object that can be edited,
can have ACLs (security) set on it, can have its name translated, and so on.

ACLs can be used on a language object to control user access to certain languages. You can take advantage of this feature to let users choose their own language preferences, while restricting them from languages that may not be supported for the areas of the software they commonly use.

For example, you can create two groups of users and provide Group 1 with browse and use access to the English language object and the French language object, and provide Group 2 with browse and use access to Spanish only. In this scenario, users in Group 2 can only choose Spanish as their language preference, and can only access Spanish data from your warehouse. If an object that is otherwise available to Group 2 users does not have a Spanish translation, Group 2 users can access that object in the project's default language (which may be English, French, or any other language).
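The Group 1 / Group 2 scenario can be expressed as a small check over the language objects' ACLs. This is an illustrative model, not the MicroStrategy API; the permission names are taken from the example above.

```python
# Illustrative model: which languages a group can select as a preference,
# given Browse and Use grants on each language object's ACL.

def selectable_languages(language_acls, group):
    """Return the languages whose ACL grants the group Browse and Use."""
    return {lang for lang, grants in language_acls.items()
            if {"Browse", "Use"} <= grants.get(group, set())}

language_acls = {
    "English": {"Group 1": {"Browse", "Use"}},
    "French":  {"Group 1": {"Browse", "Use"}},
    "Spanish": {"Group 2": {"Browse", "Use"}},
}
print(sorted(selectable_languages(language_acls, "Group 1")))  # ['English', 'French']
print(sorted(selectable_languages(language_acls, "Group 2")))  # ['Spanish']
```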

To Access a Language Object

1. In Developer, from the Folder List on the left, within the appropriate
project source, go to Administration > Configuration Managers.


2. Select Languages.

3. Right-click any language object to edit or otherwise maintain that object.

Creating Translator Roles


You can set up MicroStrategy so that certain users can translate object
names and descriptions into a given language. At the same time, you can
restrict these users from changing the object's translations in languages
other than the one they are translating, and from making any other changes
to the object.

When an object is translated or an existing translation is edited, the object's version ID and modification timestamp are changed. This allows you to easily identify the latest translated objects when merging objects across projects or environments.

Creating a translator or linguist role can be useful if you have a translator who needs access to an object to translate it, but who should not have the ability to make any other changes to the object.

A common approach to setting up the MicroStrategy environment to support this type of user role is to create a MicroStrategy user account specifically for each translator, grant certain privileges to that user, and set ACLs on one or more language objects to allow access to a given language for translation purposes. The steps below walk through this approach. The end goal is a set of translator user accounts that have a limited set of permissions in MicroStrategy to translate a project's objects (schema objects, application objects, report/document objects), without the ability to write to any object or make other changes to an object.

You can modify this approach to customize your language object security as
it fits your specific needs. Suggestions are provided after the steps, to
modify the translator role setup for specific situations.


The following terms are used:

l Source language: The object's default language

l Reference language: Any language other than the source language which
the translator needs to translate from

l Target language: Any language other than the source language which the
translator needs to translate to

To Create a Translator Role

Create a User Account for Each Translator

1. Create a user account for each translator.

l Grant each user the Use Developer privilege, in the Analyst privilege
group.

l Grant each user the Use Translation Editor privilege, in the Common
privilege group.

l Grant each user the Use Translation Editor Bypass privilege, in the
Developer privilege group.

This privilege allows the user to use the Translation Editor to change
an object's name and/or description for a given language, and does
not require the user to have Write access to the object whose
name/description is being changed (the system bypasses the normal
check for Write access).

For steps on creating a user account and assigning privileges to a user account, see the Setting Up User Security section.


Allow Each Translator User to Add/Edit Translations for a Given Language

1. Grant the View permission on the ACL (access control list) for a
language object to the user account that is allowed to translate objects
into that language. This permission should be granted to the target
language. The View permission allows a user to:

l See an object's name and description in the source language (the object's default language) as well as in all other languages.

l Translate object names and descriptions in the language the user has
View permission for.

To grant the View permission for a language object, use the following
substeps:

1. In the Folder List, within the appropriate project source, expand Administration, then Configuration Managers, and select Languages.

2. Right-click a language from the list of language objects, and select Properties.

3. On the left, select Security.

4. On the right, click Add to add the appropriate user account to the security for this language. Navigate to the appropriate translator user, select the user, and click OK.

5. Click the field in the Permissions column next to the newly added
user and select View.

A user who has View permissions to a language will be able to add or modify translations in that language using the Translation Editor. Translating an object's name/description in the source language (the object's default language) is equivalent to renaming the object. This may not be desirable, especially for schema objects. To prevent this, be sure that the View permission is not granted to the source language (the default language) of the objects that will be translated.

Allow Translators to View Translations in Specific Reference Languages

1. Grant read-only access to one or more reference languages by granting the Browse and Read permissions (ACL permissions on the language object) for those languages that the translator needs to view. Granting Browse and Read permission to a user for a language allows the translator to see object names and descriptions in that language, but not to translate or otherwise change object names/descriptions in that language. Read-only permission is generally granted to the source language (the object's default language), so that the source language can be used as the reference language during translation.

Allowing translators to see translations for an object in a language other than just the source language can provide translators useful context during translation, and is necessary if a translator needs to see a reference language that is different from the source language.

Use the following substeps:

1. In the Folder List, within the appropriate project source, go to Administration > Configuration Managers > Languages.

2. Right-click a language from the list of language objects, and select Properties.

3. On the left, select Security.

4. On the right, click Add to add the appropriate user account to the
security for this language. Navigate to the appropriate translator
user, select the user, and click Custom.


5. Click the field in the Permissions column next to the newly added
user and select Browse and Read.

Be sure you do not grant the Use permission on any language object that represents a language you do not want the translator to be able to make changes to.

2. Repeat these substeps for any other languages in the list of language
objects that you want this user to be able to see.

To deny a translator the ability to see an object's name and description in a given language, assign the user the Deny All permission for the language object(s) that the user should not be able to see or add/edit translations for.

Minimum Requirements and Additional Options for Creating a Translator Role
l The following table shows the minimum privileges and permissions that a
user needs to be able to view a language and to translate schema objects,
application objects, and report/document objects in a MicroStrategy
project:

To View an Object's Name and Description in a Given Language:

l The Use Developer privilege, in the Analyst privilege group.

l The Use Translation Editor privilege, in the Common privilege group.

l The Browse permission on the language object that the translator will be translating into, and on a reference language object.

l The Read permission on the language object that the translator will be translating into, and on a reference language object.

To Translate an Object's Name and Description in a Given Language:

l The Use Developer privilege, in the Analyst privilege group.

l The Use Translation Editor privilege, in the Common privilege group.

l The Use Translation Editor Bypass privilege, in the Developer privilege group.

l The Browse permission on the language object that the translator will be translating into, and on a reference language object.

l The Read permission on the language object that the translator will be translating into, and on a reference language object.

l The Use permission on the language object that the translator will be translating into.

Be sure you do not grant the Use permission on any language object that
represents a language you do not want the translator to be able to make
changes to.

l To provide a translator the greatest possible context for objects:

l Allow the translator user to see an object's name and definition in the
source language and in any other language that the object uses, as well
as the translator's target language. To do this, grant the translator user
the Browse and Read permissions for each language object listed in
Administration > Configuration Managers > Languages. The Browse
and Read permissions allow the user to see translations in the
Translation Editor but not edit the translation strings.

l Grant the user privileges to access the object within the various
Developer object editors. These privileges allow the user to execute the
object so that it opens within its appropriate editor, thus displaying
additional detail about the object. Access can allow context such as
seeing a string as it appears within a dashboard; a metric's
expression/formula; an attribute's forms and the data warehouse tables
that the data comes from; and so on. For example, in the User Editor,
grant the translator the Execute Document and Use Report Editor
privileges from the Analyst privilege group. Also grant Use Custom

Copyright © 2024 All Rights Reserved 2001


Syst em Ad m in ist r at io n Gu id e

Group Editor, Use Metric Editor, Use Filter Editor, and so on, from the
Developer privilege group.

l To deny a translator the ability to see an object's name and description in any language except the source language and the language that the translator has permission to Browse, Read, and Use, grant the user the Deny All permission for the language objects that the user should not be able to see.

For example, if you grant a translator Browse, Read, and Use permissions
for the French language object, Browse and Read permissions for the
object's default language, and Deny All for all other languages, the
translator will only see the French translations column and the default
language column in the Translations Editor in Developer.

However, be aware that this limits the translator to only being able to use
the object's default language as their reference language. If the translator
can benefit from seeing context in other languages, it is not recommended
to Deny All for other languages.

l You can create a security role to support per-project translator access. A security role is a set of project-level privileges. You can then assign the security role to individual users or groups. A user can have different security roles in different projects. For example, a user may have a Translator security role for the project they are supposed to translate, but the normal User security role in all other projects. Security roles are assigned to users or groups on a project-by-project basis.

Because security roles are project-level roles, setting up translation based on security roles does not allow for the translation of configuration objects, such as database instances, schedules, events, and any other object that exists at the project source level. A translator can be set up to translate configuration objects using the information in the next bullet.


l To allow a translator to translate configuration objects (such as user and group descriptions, database instance names and descriptions, schedule and event names and descriptions, and any other objects that can be accessed by all projects in a project source), grant the translator the Use Translation Editor Bypass privilege at the user level (rather than at the project level). Also, grant the translator user the following privileges in the User Editor, which allow the user to access the various configuration object managers in the Administration folder in Developer:

l Create and edit database instances

l Create and edit database logins

l Create and edit schedules and events

l Create and edit security filters

l Create and edit security roles

l Create and edit users and groups

l Create configuration objects

l To allow users to translate objects using MicroStrategy's bulk translation tool, the Repository Translation Wizard, grant the user the Use Repository Translation Wizard privilege.

If this privilege is assigned, be aware that the user will be able to export
strings and import translations for those strings in all languages that the
project supports. This is true no matter what other language restrictions
are applied.
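The minimum-requirements table earlier in this section reduces to two checks: one for viewing and one for translating. The sketch below is an illustrative model of those rules, not a real MicroStrategy API; the function names are invented for the example, while the privilege and permission names mirror the table.

```python
# Illustrative model of the minimum requirements for translator roles.
# Privilege and permission names mirror the table above; the functions
# themselves are invented for this example.

VIEW_PRIVS = {"Use Developer", "Use Translation Editor"}
TRANSLATE_PRIVS = VIEW_PRIVS | {"Use Translation Editor Bypass"}

def can_view(privileges, target_lang_perms, reference_lang_perms):
    """View names/descriptions in a given language."""
    return (VIEW_PRIVS <= privileges
            and {"Browse", "Read"} <= target_lang_perms
            and {"Browse", "Read"} <= reference_lang_perms)

def can_translate(privileges, target_lang_perms, reference_lang_perms):
    """Translate names/descriptions into the target language."""
    return (TRANSLATE_PRIVS <= privileges
            and {"Browse", "Read", "Use"} <= target_lang_perms
            and {"Browse", "Read"} <= reference_lang_perms)

privs = {"Use Developer", "Use Translation Editor", "Use Translation Editor Bypass"}
print(can_view(privs, {"Browse", "Read", "Use"}, {"Browse", "Read"}))  # True
print(can_translate(privs, {"Browse", "Read"}, {"Browse", "Read"}))    # False: no Use on target
```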


LIST OF PRIVILEGES


This section provides reference information for privileges in MicroStrategy. For general information about using privileges and security roles, see the Setting Up User Security section.

Privileges are available to be assigned to users, groups, or security roles. A privilege is available if it is enabled in the User Editor. If you have not purchased a license for a product, that product's privileges are grayed out in both the User Editor and the Security Role Editor. To determine your license information, use License Manager to check whether any of the specified products are available.

A privilege with the note "Server level only" can be granted only at the
project source level. It cannot be granted for a specific project.

Privileges for Predefined Security Roles


The MicroStrategy product suite contains a number of predefined security roles for administrators. These roles make it easy to delegate administrative tasks.

The predefined project administration roles apply project-level administrative privileges. The default privileges that are automatically granted for these out-of-the-box security roles are listed below.

Privileges vary between releases. For the most up-to-date privileges, see the dashboard in Privileges by License Type.

For a list of additional out-of-the-box security roles, see Assign Security Roles.

Platform Administrators and System Administrators have the following privileges:


Client - Reporter


l Use Library Web

l Use send preview Now

l Web add to History List

l Web add/remove units to/from Grid in Document in view mode

l Web advanced drilling

l Web alias objects

l Web change user preferences

l Web change view mode

l Web configure Toolbars

l Web create Derived Metrics and Derived Attributes

l Web define Derived Elements

l Web drill and link

l Web drill on Metrics

l Web execute data mart Report

l Web export

l Web filter on selections

l Web manage Objects

l Web modify subtotals

l Web object search

l Web pivot Report

l Web print mode

l Web re-execute Report against warehouse


l Web simple graph formatting

l Web simultaneous execution

l Web sort

l Web switch Page-by Elements

l Web use Locked Headers

l Web use Report Objects Window

l Web use View Filter Editor

l Web user

Client - Web

l Set OAuth parameters for Cloud App sources

l Use Desktop

l Use office

l Web choose attribute form display

l Web create custom HTML and JavaScript content

l Web create Dashboard

l Web create new Report

l Web Dashboard design

l Web define advanced Report options

l Web define Intelligent Cube Report

l Web define MDX Cube Report

l Web Document design

l Web edit Dashboard

l Web edit drilling and links


l Web edit notes

l Web format Grid and Graph

l Web manage Document and Dashboard datasets

l Web modify the list of Report Objects (use object browser -- all objects)

l Web number formatting

l Web publish Intelligent Cube

l Web Report details

l Web Report SQL

l Web save Dashboard

l Web save Templates and Filters

l Web save to My Reports

l Web save to Shared Reports

l Web set column widths

l Web subscribe others

l Web use Advanced Threshold Editor

l Web use Custom Group Editor

l Web use Design Mode

l Web use Filter Editor

l Web use Metric Editor

l Web use object Sharing Editor

l Web use Prompt Editor

l Web use Visual Threshold Editor

Client - Application


l Use Application

Client - Mobile

l Email screenshot from device

l Mobile run Dashboard

l Mobile run Document

l Print from device

l Use MicroStrategy Mobile

Client - Architect

l Alias objects

l Bypass schema objects security access checks

l Change user preference

l Configure toolbars

l Create custom HTML and JavaScript content

l Create dataset in Workstation

l Create Derived Metrics

l Define MDX Cube Report

l Define query builder Report

l Drill and link

l Execute Document

l Format graph

l Import function

l Import MDX cube

l Modify Report subtotals


l Modify sorting

l Modify the list of Report objects (use object browser)

l Pivot Report

l Re-execute Report against warehouse

l Save custom autostyle

l Send to Email

l Set attribute display

l Use Architect Editors

l Use Consolidation Editor

l Use Custom Group Editor

l Use data explorer

l Use Data Mart Editor

l Use design mode

l Use Developer

l Use Document Editor

l Use Drill Map Editor

l Use Filter Editor

l Use Find and Replace Dialog

l Use Formatting Editor

l Use grid options

l Use History List

l Use HTML Document Editor


l Use Link Editor

l Use Metric Editor

l Use Object Manager

l Use Object Manager Read-only

l Use project documentation

l Use Prompt Editor

l Use report data options

l Use Report Editor

l Use Report Objects Window

l Use Search Editor

l Use SQL statements tab in Datamart/Bulk Export Editors

l Use Subtotal Editor

l Use Template Editor

l Use Thresholds Editor

l Use View Filter Editor

l Use VLDB property editor

l View ETL information

l View SQL

Server - Reporter

l Drill within Intelligent Cube

l Execute Report that use multiple data sources

l Export to .MSTR File

l Export to Excel


l Export to Flash

l Export to HTML

l Export to PDF

l Export to text

l Save personal prompt answers

l Schedule request

l Use analytics

l Use dynamic sourcing

l Use server cache

l View History List

l View notes

l Web run Dashboard

l Web run Document

l Web subscribe to History List

Server - Intelligence

l Add notes

l Administer Caches

l Administer Cubes

l Administer History Lists

l Administer jobs

l Administer quick search indices

l Administer Subscriptions

l Administer user connections


l Assign Security Filters

l Assign security roles

l Audit change journal

l Bypass all object security access checks

l Can certify content

l Configure caches

l Configure change journaling

l Configure connection map

l Configure governing

l Configure language settings

l Configure project basic

l Configure project data source

l Configure security settings

l Configure statistics

l Configure subscription settings

l Create and edit Security Filters

l Create application objects

l Create new folder

l Create schema objects

l Create shortcut to objects

l Duplicate project

l Edit notes


l Edit project status

l Idle and resume project

l Import .MSTR File

l Load and unload project

l Monitor caches

l Monitor Cubes

l Monitor History Lists

l Monitor Jobs

l Monitor project

l Monitor subscriptions

l Monitor user connections

l Publish Content

l Use Freeform SQL Editor

l Use Integrity Manager

l Use Repository Translation Wizard

l Use Translation Editor

l Use Translation Editor bypass

l Use Workstation

l Web administration

Server - Analytics

l Access data (files) from Local, URL, DropBox, Google Drive, Sample Files, Clipboard, Push API


l Access data from Cloud App (Google Analytics, Salesforce Reports, Facebook, Twitter)

l Access data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Define Derived Elements

l Define Intelligent Cube Report

l Import table from multiple data sources

l Publish Intelligent Cube

l Save Derived Elements

l Use Intelligent Cube Editor

l Web save Derived Elements

Server - Collaboration

l Use collaboration services

Server - Distribution

l Create Dynamic Address List

l Create Email address

l Create file location

l Create FTP location

l Create print location

l Subscribe Dynamic Address List

l Subscribe to Email

l Subscribe to file

l Subscribe to FTP

l Subscribe to print


l Use Bulk Export Editor

l Use distribution services

l Use link to History List in Email

l Use send now

l Web create alert

l Web subscribe to bulk export

Server - Transaction

l Define Transaction Report

l Execute Transaction

l Web configure Transaction

Power Users have the following privileges:

Client - Reporter

l Use Library Web

l Use send preview Now

l Web add to History List

l Web add/remove units to/from Grid in Document in view mode

l Web advanced drilling

l Web alias objects

l Web change user preferences

l Web change view mode

l Web configure Toolbars

l Web create Derived Metrics and Derived Attributes


l Web define Derived Elements

l Web drill and link

l Web drill on Metrics

l Web execute data mart Report

l Web export

l Web filter on selections

l Web manage Objects

l Web modify subtotals

l Web object search

l Web pivot Report

l Web print mode

l Web re-execute Report against warehouse

l Web simple graph formatting

l Web simultaneous execution

l Web sort

l Web switch Page-by Elements

l Web use Locked Headers

l Web use Report Objects Window

l Web use View Filter Editor

l Web user

Client - Web

l Set OAuth parameters for Cloud App sources

l Use Desktop


l Use office

l Web choose attribute form display

l Web create custom HTML and JavaScript content

l Web create Dashboard

l Web create new Report

l Web Dashboard design

l Web define advanced Report options

l Web define Intelligent Cube Report

l Web define MDX Cube Report

l Web Document design

l Web edit Dashboard

l Web edit drilling and links

l Web edit notes

l Web format Grid and Graph

l Web manage Document and Dashboard datasets

l Web modify the list of Report Objects (use object browser -- all objects)

l Web number formatting

l Web publish Intelligent Cube

l Web Report details

l Web Report SQL

l Web save Dashboard

l Web save Templates and Filters


l Web save to My Reports

l Web save to Shared Reports

l Web set column widths

l Web subscribe others

l Web use Advanced Threshold Editor

l Web use Custom Group Editor

l Web use Design Mode

l Web use Filter Editor

l Web use Metric Editor

l Web use object Sharing Editor

l Web use Prompt Editor

l Web use Visual Threshold Editor

Client - Mobile

l Email screenshot from device

l Mobile run Dashboard

l Mobile run Document

l Print from device

l Use MicroStrategy Mobile

Client - Architect

l Alias objects

l Bypass schema objects security access checks

l Change user preference

l Configure toolbars


l Create custom HTML and JavaScript content

l Create dataset in Workstation

l Create Derived Metrics

l Define MDX Cube Report

l Define query builder Report

l Drill and link

l Execute Document

l Format graph

l Import function

l Import MDX cube

l Modify Report subtotals

l Modify sorting

l Modify the list of Report objects (use object browser)

l Pivot Report

l Re-execute Report against warehouse

l Save custom autostyle

l Send to Email

l Set attribute display

l Use Architect Editors

l Use Consolidation Editor

l Use Custom Group Editor

l Use data explorer


l Use Data Mart Editor

l Use design mode

l Use Developer

l Use Document Editor

l Use Drill Map Editor

l Use Filter Editor

l Use Find and Replace Dialog

l Use Formatting Editor

l Use grid options

l Use History List

l Use HTML Document Editor

l Use Link Editor

l Use Metric Editor

l Use Object Manager

l Use Object Manager Read-only

l Use project documentation

l Use Prompt Editor

l Use report data options

l Use Report Editor

l Use Report Objects Window

l Use Search Editor

l Use SQL statements tab in Datamart/Bulk Export Editors


l Use Subtotal Editor

l Use Template Editor

l Use Thresholds Editor

l Use View Filter Editor

l Use VLDB property editor

l View ETL information

l View SQL

Server - Reporter

l Drill within Intelligent Cube

l Execute Report that uses multiple data sources

l Export to .MSTR File

l Export to Excel

l Export to Flash

l Export to HTML

l Export to PDF

l Export to text

l Save personal prompt answers

l Schedule request

l Use analytics

l Use dynamic sourcing

l Use server cache

l View History List

l View notes


l Web run Dashboard

l Web run Document

l Web subscribe to History List

Server - Intelligence

l Add notes

l Administer Caches

l Administer Cubes

l Administer History Lists

l Administer jobs

l Administer quick search indices

l Administer Subscriptions

l Administer user connections

l Assign Security Filters

l Assign security roles

l Audit change journal

l Bypass all object security access checks

l Can certify content

l Configure caches

l Configure change journaling

l Configure connection map

l Configure governing

l Configure language settings

l Configure project basic


l Configure project data source

l Configure security settings

l Configure statistics

l Configure subscription settings

l Create and edit Security Filters

l Create application objects

l Create new folder

l Create schema objects

l Create shortcut to objects

l Duplicate project

l Edit notes

l Edit project status

l Idle and resume project

l Import .MSTR File

l Load and unload project

l Monitor caches

l Monitor Cubes

l Monitor History Lists

l Monitor Jobs

l Monitor project

l Monitor subscriptions

l Monitor user connections


l Publish Content

l Use Freeform SQL Editor

l Use Integrity Manager

l Use Repository Translation Wizard

l Use Translation Editor

l Use Translation Editor bypass

l Use Workstation

l Web administration

Server - Analytics

l Access data (files) from Local, URL, DropBox, Google Drive, Sample Files, Clipboard, Push API

l Access data from Cloud App (Google Analytics, Salesforce Reports, Facebook, Twitter)

l Access data from Databases, Google BigQuery, BigData, OLAP, BI tools

l Define Derived Elements

l Define Intelligent Cube Report

l Import table from multiple data sources

l Publish Intelligent Cube

l Save Derived Elements

l Use Intelligent Cube Editor

l Web save Derived Elements

Server - Collaboration

l Use collaboration services

Server - Distribution


l Create Dynamic Address List

l Create Email address

l Create file location

l Create FTP location

l Create print location

l Subscribe Dynamic Address List

l Subscribe to Email

l Subscribe to file

l Subscribe to FTP

l Subscribe to print

l Use Bulk Export Editor

l Use distribution services

l Use link to History List in Email

l Use send now

l Web create alert

l Web subscribe to bulk export

Server - Transaction

l Define Transaction Report

l Execute Transaction

l Web configure Transaction

Project Bulk Administrators have the following Object Manager privileges:

l Use Object Manager

l Use Repository Translation Wizard


Project Operations Administrators have the following privileges:

l Schedule Request (in Common Privileges)

l Administer Caches

l Administer Cubes

l Administer Jobs

l Administer Subscriptions

l Administer User Connections

l Idle and Resume Project

l Load and Unload Project

Project Operations Monitors have the following privileges:

l Administer Caches

l Administer Jobs

l Administer User Connections

l Audit Change Journal

l Idle and Resume Project

l Load and Unload Project

l Monitor Caches

l Monitor Cubes

l Monitor History Lists

l Monitor Jobs

l Monitor Project

l Monitor Subscriptions


l Monitor User Connections

l Administer Quick Search Indices

Project Resource Settings Administrators have the following privileges:

l Configure Caches

l Configure Governing

l Configure Language Settings

l Configure Project Basic

l Configure Project Data Source

l Configure Statistics

l Configure Subscription Settings

l Edit Project Status

l Web Administration

Project Security Administrators have the following privileges:

l Create Application Objects (Server - Intelligence)

l Assign Security Filters

l Assign Security Roles

l Configure Change Journaling

l Configure Connection Map

l Configure Security Settings

l Create And Edit Security Filters


Privileges for Out-Of-The-Box User Groups


The privileges that are automatically granted for out-of-the-box groups are
listed below.

l All users are members of the Everyone group and inherit all privileges
granted to that group.

l Installing the MicroStrategy Tutorial may change the default privileges granted for some of these groups.

The following MicroStrategy user groups have no default privileges:

l 3rd Party Users

l LDAP Public/Guest

l LDAP Users

l Public/Guest

l Warehouse Users

The following are predefined MicroStrategy user groups:

l API

l Architect

l Collaboration Server

l Distribution Server

l Mobile

l Reporter

l Second Factor Exempt


l System Monitor

l Narrowcast System Administrators

l Server Bulk Administrators

l Server Configuration Administrators

l Server Operations Administrators

l Server Operations Monitors

l Server Resource Settings Administrators

l Server Security Administrators

l System Administrators

l User Administrators

l Transaction Server

l Web

The following are legacy predefined groups:

l MicroStrategy Architect

l MicroStrategy Web Reporter

l MicroStrategy Web Analyst

l MicroStrategy Web Professional

Privileges for the Everyone Group


By default the Everyone group does not grant any privileges.

When a project is upgraded from MicroStrategy version 7.2.x or 7.5.x to version 9.0 or later, the Use Developer privilege in the Client - Architect privilege group is automatically granted to the Everyone group. This ensures that all users who were able to access Developer in previous versions can continue to do so.


Developer Privileges
These privileges correspond to the report design functionality available in
Developer. The predefined Developer group is assigned these privileges by
default. The Developer group also inherits all the privileges assigned to the
Analyst group. License Manager counts any user who has any of these
privileges as a Developer user.

Privilege: Allows the user to...

l Define Intelligent Cube report*: Create a report that uses an Intelligent Cube as a data source.

l Publish Intelligent Cube*: Publish an Intelligent Cube to Intelligence Server.

l Save derived elements*: Save stand-alone derived elements, separate from the report.

l Use Intelligent Cube Editor*: Create Intelligent Cubes.

l Create HTML container**: Create HTML container objects in a document.

l Use Document Editor**: Use the Document Editor.

l Use bulk export editor***: Use the Bulk Export Editor to define a bulk export report.

l Define transaction report****: Define a Transaction Services report using the Freeform SQL editor.

l Define Freeform SQL report: Define a new report using Freeform SQL, and see the Freeform SQL icon in the Create Report dialog box.

l Define MDX cube report: Define a new report that accesses an MDX cube.

l Define Query Builder report: Define a new Query Builder report that accesses an external data source, and see the Query Builder icon in the Create Report dialog box.

l Format graph: Modify a graph's format using a toolbar or gallery.

l Modify the list of report objects (use Object Browser): Add objects to a report that are not currently displayed in the Report Objects window. This determines whether the user is a report designer or a report creator. A report designer is a user who can build new reports based on any object in the project. A report creator can work only within the parameters of a predesigned report that has been set up by a report designer. This privilege is required to edit the report filter and the report limit. For more information on these features, see the Advanced Reporting Help.

l Use Consolidation Editor: Use the Consolidation Editor.

l Use Custom Group Editor: Use the Custom Group Editor.

l Use Data Mart Editor: Use the Data Mart Editor.

l Use design mode: Use Design View in the Report Editor.

l Use Drill Map Editor: Create or modify drill maps.

l Use Filter Editor: Use the Filter Editor.

l Use Find and Replace dialog: Use the Find and Replace dialog box.

l Use Formatting Editor: Use the formatting editor for consolidations, custom groups, and reports.

l Use HTML Document Editor: Use the HTML Document Editor.

l Use Link Editor: Use the Link Editor.

l Use Metric Editor: Use the Metric Editor. Among other tasks, this privilege allows the user to import DMX (Data Mining Services) predictive metrics.

l Use project documentation: Use the project documentation feature to print object definitions.

l Use Prompt Editor: Use the Prompt Editor.

l Use SQL Statements tab in Datamart/Bulk Export editors: Use the SQL Statements tab in the Datamart Editor and the Bulk Export Editor.

l Use Subtotal Editor: Use the Subtotal Editor.

l Use Template Editor: Use the Template Editor.

l Use Translation Editor bypass: Use the Translation Editor. Users with this privilege can translate an object without having Write access to the object.

l Use VLDB Property Editor: Use the VLDB Properties Editor.

l View ETL information: This privilege is deprecated.

Privileges marked with * are included only if you have OLAP Services installed as part of Intelligence Server.

Privileges marked with ** are included only if you have Report Services installed.

Privileges marked with *** are included only if you have Distribution Services installed.

Privileges marked with **** are included only if you have Transaction Services installed.
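As noted above, License Manager counts any user who holds at least one of these privileges as a Developer user. A minimal sketch of that counting rule follows; the function and data names are illustrative only and are not MicroStrategy APIs.

```python
# Hypothetical sketch: a user counts toward the Developer license
# total if their privilege set intersects the Developer group.

DEVELOPER_PRIVILEGES = {
    "Define Intelligent Cube report",
    "Publish Intelligent Cube",
    "Use Metric Editor",
    "Use Filter Editor",
    # ... remaining Developer privileges
}

def count_developer_users(user_privileges: dict[str, set[str]]) -> int:
    """Count users who hold any privilege from the Developer group."""
    return sum(
        1 for privs in user_privileges.values()
        if privs & DEVELOPER_PRIVILEGES  # non-empty intersection
    )

users = {
    "alice": {"Use Metric Editor", "View SQL"},
    "bob": {"Web sort"},  # no Developer privileges
    "carol": {"Publish Intelligent Cube"},
}
print(count_developer_users(users))  # → 2
```

Note that a single Developer privilege is enough: the count is per user, not per privilege.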

In addition, it grants the following privileges from the Common Privileges group:

Analyst

l Drill Within Intelligent Cube

l Add Notes

l Create Application Object

l Create Folder

l Create Shortcut

l Edit Notes

l Save Personal Answer

l Schedule Request

l Use Server Cache

l Use Translation Editor

l View Notes

l Create Schema Objects

Privileges for the MicroStrategy Web Groups


The default privileges that are automatically granted for the MicroStrategy
Web groups are listed below.

The MicroStrategy Web Reporter group grants the following privileges:


l All privileges in the Web Reporter privilege group (see Web Reporter
privileges).

l All privileges in the Common Privileges privilege group, except for Create
Schema Objects and Edit Notes.

The MicroStrategy Web Analyst group grants the following privileges:

l All privileges granted to the MicroStrategy Web Reporter group.

l All privileges in the Web Analyst privilege group (see Web Analyst
privileges).

l The following additional privileges:

MicroStrategy Web Analyst

l Create Application Objects (in Common Privileges)

l Schedule Request (in Common Privileges)

l Use Distribution Services (in Distribution Services)

l Web Drill And Link (in Web Reporter)

l Web Simultaneous Execution (in Web Reporter)

l Web View History List (in Web Reporter)

Some of these privileges are also inherited from the groups that the Web
Analyst group is a member of.

The MicroStrategy Web Professional group grants the following privileges:

l All privileges granted to the MicroStrategy Web Analyst group.

l All privileges in the Web Professional privilege group (see Web Professional privileges), except for Web Create HTML Container.

l The following additional privileges:


MicroStrategy Web Professional

l Create Application Objects (in Common Privileges)

l Schedule Request (in Common Privileges)

l Use Distribution Services (in Distribution Services)

l Web Drill And Link (in Web Reporter)

l Web Simultaneous Execution (in Web Reporter)

l Web View History List (in Web Reporter)

Some of these privileges are also inherited from the groups that the Web
Professional group is a member of.

Privileges for the System Monitors Groups


By default the System Monitors group does not grant any additional
privileges. The default privileges that are automatically granted for the
groups that are members of the System Monitors group are listed below.
Unless otherwise specified, all privileges are from the Administration
privilege group (see Administration privileges).

The Narrowcast System Administrators group does not grant any privileges by default.

The Server Bulk Administrators group grants the following privileges:

l Use Object Manager

l Use Command Manager

l Use Repository Translation Wizard

The Server Configuration Administrators group grants the following privileges:

l Create And Edit Database Instances And Connections

l Create And Edit Database Logins


l Create Configuration Objects

l Create And Edit Transmitters And Devices (in Distribution Services)

The Server Operations Administrators group grants the following privileges:

l Schedule Request (in Common Privileges)

l Administer Caches

l Administer Cluster

l Administer Cubes

l Administer Database Connections

l Administer Jobs

l Administer Subscriptions

l Administer User Connections

l Fire Events

l Idle And Resume Project

l Load And Unload Project

The Server Operations Monitors group grants the following privileges:

l Administer Caches

l Administer Cluster

l Administer Database Connections

l Administer Jobs

l Administer User Connections

l Audit Change Journal

l Idle And Resume Project

l Load And Unload Project


l Monitor Caches

l Monitor Cluster

l Monitor Cubes

l Monitor Database Connections

l Monitor History Lists

l Monitor Jobs

l Monitor Projects

l Monitor Subscriptions

l Monitor User Connections

The Server Resource Settings Administrators group grants the following privileges:

l Configure Caches

l Configure Governing

l Configure Language Settings

l Configure Project Basic

l Configure Project Data Source

l Configure Server Basic

l Configure Statistics

l Configure Subscription Settings

l Edit Project Status

l Web Administration

The Server Security Administrators group grants the following privileges:


l Create Application Objects (in Common Privileges)

l Assign Security Filters

l Assign Security Roles

l Configure Connection Map

l Configure Security Settings

l Create And Edit Security Filters

l Grant/Revoke Privilege

The System Administrators group grants all MicroStrategy privileges.

The User Administrators group grants the following privileges:

l Configure Contacts Data Security Profile (in Distribution Services)

l Assign Security Roles

l Configure Group Membership

l Create And Edit Contacts And Addresses

l Create And Edit Security Roles

l Create And Edit Users And Groups

l Create Configuration Objects

l Enable User

l Grant/Revoke Privilege

l Link Users And Groups To External Accounts

l Reset User Password

Privileges by License Type


There are two main types of licenses available from the MicroStrategy product suite that come with privileges: Client product and Server product licenses. Every license type comes with a unique set of privileges, and system administrators are responsible for assigning these privileges based on security roles, user groups, and the individual user. Some licenses and their associated privileges are sold in bundled product packages.

Some privileges can be assigned on a project-by-project basis and are available in the Security Role Editor. For more information on which privileges are available in the Security Role Editor, see the dashboard below.

License Bundles
The following is a list of modern license bundles available to MicroStrategy
Cloud users:

l AI Consumer User: Allows users to view, execute, and interact with dashboards, HyperIntelligence cards, reports, and documents via MicroStrategy on a desktop or mobile device. Users can also use MicroStrategy AI functionality, receive distributed reporting, and access data through external application integrations.

l AI Power User: Allows users to create, design, save, and share MicroStrategy dashboards, HyperIntelligence cards, reports, and documents. Users can leverage external application connectors in MicroStrategy on desktop or mobile devices.

l AI Architect User: Grants users full control of MicroStrategy AI to create and manage MicroStrategy Cloud environments. Users can access tools used for administration, development, and testing.

l Cloud Reporter User: Allows users to view, execute, and interact with
dashboards, reports, and documents via MicroStrategy in a web browser.
Users also receive distributed reporting.

Reference the dashboard below to see the license types and privilege set
that comes with each license bundle.


Client Product License Types


The following is a list of the official license types available in Client
products:

l Client - Web: A zero-footprint web interface that allows users to access analytics on multiple browsers and design, interact with, and consume information via pixel-perfect reports, documents, or dashboards.

l Client - Reporter: A consumer license that allows end users to view, execute, and interact with dashboards, reports, and documents via MicroStrategy in a web browser.

l Client - ReporterPro: A consumer license that allows end users to view, execute, and interact with dashboards, reports, and documents via the MicroStrategy application for Windows and Mac.

l Client - Application - API: Allows users to consume federated data and services in custom applications built by developers using the REST API.

l Client - Application: Allows organizations to build a governed, scalable, secure, and highly performant environment that can be used to build and deploy custom branded applications for Web or Mobile.

l Client - Hyper: A Chrome browser extension that can embed analytics into any website or web application. The HyperIntelligence client automatically detects predefined keywords on a webpage or web application and surfaces contextual insights from enterprise data sources using cards.

l Client - Mobile: MicroStrategy Mobile allows organizations to deploy mobile analytics and build powerful productivity apps that deliver native, secure, mobile-optimized experiences that take advantage of the unique capabilities of mobile devices. In addition, the Mobile license includes the new Dashboard mobile app.

l Client - Architect: License that provides the ability to create the project schema and build a centralized data model to deliver a single version of the truth. This license provides access to a collection of tools for administration, development, and testing purposes.

l Client - Badge: A mobile client application that enables digital identity badges to be used to authenticate users for physical access, logical access, peer-to-peer validation, and multifactor authentication. The mobile client can also be configured to capture identity and telemetry data to be used for contextual analytics and workflows.

l Client - Communicator: A mobile client application that provides analytics, identity discovery, mustering, and two-way communications features to conduct complex analytics and optimize productivity for Badge users.

The privileges in any license type you have do not rely on additional licenses to function properly. However, it is possible to inherit privileges from other license types. The Client - Reporter and Client - Web licenses are linked together in a hierarchy that allows users to inherit specific privilege sets. The hierarchy is set up such that the Client - Reporter license is a subset of the Client - Web license.

This means that if you have a Client - Web license, in addition to the
privilege set that comes with that license, you will automatically inherit the
privileges that come with the Client - Reporter license.


However, this hierarchy does not work in reverse: if you have the Client - Reporter license, you will not inherit the Client - Web privilege set. Keep in mind that you can still use each of the Client product license types individually, regardless of whether or not they are part of a hierarchy.

Reference the dashboard below to see the privilege set that comes with
each license type. License types that contain a subset have already been
set up to include the privileges from their subset license.
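The subset relationship described above can be modeled as simple set inclusion: a parent license's effective privileges are its own plus those of every license that is a subset of it. The sketch below is illustrative only; the license names come from this guide, but the data structures and function are hypothetical, not MicroStrategy APIs.

```python
# Illustrative model of the Client license hierarchy:
# Client - Reporter is a subset of Client - Web, so a Client - Web
# holder inherits the Reporter privilege set, but not the reverse.

LICENSE_PRIVILEGES = {
    "Client - Reporter": {"Web run Document", "Web run Dashboard"},
    "Client - Web": {"Web create new Report", "Web Document design"},
}
SUBSET_OF = {"Client - Reporter": "Client - Web"}  # child -> parent

def effective_privileges(license_name: str) -> set[str]:
    """Privileges of a license plus those of every subset license."""
    privs = set(LICENSE_PRIVILEGES.get(license_name, set()))
    for child, parent in SUBSET_OF.items():
        if parent == license_name:
            privs |= effective_privileges(child)  # inherit from subset
    return privs

# Web inherits Reporter privileges; Reporter does not inherit Web's.
assert "Web run Document" in effective_privileges("Client - Web")
assert "Web create new Report" not in effective_privileges("Client - Reporter")
```

The one-way `SUBSET_OF` mapping is what makes the inheritance non-reversible, matching the hierarchy described above.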

Server Product License Types


The following is a list of the official license types available in Server
products:

l Server - Intelligence: Provides the core analytical processing power and job management features for reporting, analysis, and monitoring applications.

l Server - Reporter: A subset of the Intelligence and Analytics Server, the Reporter Server is aimed at data consumers who view and interact with data by executing and viewing cards, dashboards, reports, and documents, on both Web and Mobile applications. This license type is required to use MicroStrategy AI.

l Server - Telemetry: Provides real time and automated capture and distribution of telemetry data for use in analytics, mobile applications, and other workflows.

l Server - Identity: Provides the organization with the ability to create, configure, distribute, and manage digital identities (Badge) for users.

l Server - Analytics (add-on): An extension to the Intelligence Server that adds in-memory capabilities to the standard ROLAP functionality of the MicroStrategy platform. The Analytics Server creates and manages Intelligent Cubes, a multi-dimensional cache structure that speeds up access to frequently used data.


l Server - Collaboration (add-on): Collaboration gives users the ability to communicate with each other by exchanging messages, tagging users, and sharing filter selections. All interactions are handled through the Intelligence Server, and all users sending or receiving messages must exist in the Intelligence Server repository.

l Server - Distribution (add-on): Enables a robust, scalable, and efficient rollout of automated reporting to corporate users, external partners, and customers, and can distribute millions of reports within a specified time frame.

l Server - Transaction (add-on): Allows organizations to leverage write-back functionality in documents, dashboards, and mobile apps in order to approve requests, submit orders, change plans, and capture information including comments and images from a mobile device.

l MicroStrategy AI: Provides access to AI-enabled functions that utilize machine learning and artificial intelligence for data analysis and representation.

Similar to the Client product license types, the Server - Intelligence and
Server - Reporter license are organized into a hierarchy that allows users to
inherit certain privileges. In this hierarchy, the Server - Reporter license is a
subset of the Server - Intelligence license.


This means that if you have the Server - Intelligence license, in addition to
that license's privilege set you will have access to the privilege set available
in the Server - Reporter license. However this does not prevent you from
using the privilege set of either license individually.

Add-Ons
Server product licenses also include add-on licenses that contain their own
privilege sets. Each of these license types can be obtained separately and
added on top of either the Server - Intelligence or Server - Reporter
licenses. The only restriction is that certain add-ons can only be added to
specific license types:

l Server - Analytics: Can only be added on top of the Server - Intelligence license.

Starting July 2024, this license is included in the AI Power User bundle.

l Server - Collaboration: Can be added on top of either the Server - Intelligence or Server - Reporter license.

Starting July 2024, this license is included in the AI Power User and AI Consumer User bundles.

l Server - Distribution: can be added on top of either the Server -


Intelligence or Server - Reporter license

Starting July 2024, this license is included in the AI Power User and AI
Consumer User bundles.

l Server - Transaction: can be added on top of either the Server -


Intelligence or Server - Reporter license

l MicroStrategy AI: can be added on top of either the Server - Intelligence


or Server - Reporter license

Once an add-on has been obtained you will have access to its privilege set,
as well as the privilege set of the license type you combined it with.

Reference the dashboard below to see the privilege sets that come with
each license type. If you are looking at a license type combined with an add-
on, you must select both to see the full list of available privileges.

Compliance with Privileges


Compliance refers to whether your users' privilege usage stays within the type and quantity of licenses your enterprise has purchased. You are in compliance if you are using no more licenses than you have available. If your users exceed the number of licenses available by using more privileges than there are licenses to cover them, you are out of compliance. See the drop-downs below for examples of each scenario.

Once your enterprise has purchased one or more of the license types
available, you will also get access to License Manager. This product
manages the license types your enterprise has by auditing them to keep
track of which ones are in use, and which ones are available.

In compliance example

Let's say an enterprise has purchased 2 Server - Reporter, 1 Server - Intelligence, 1 Client - Web, and 2 Client - Reporter licenses that contain the following privileges:

l Server - Reporter: Export to PDF, use analytics, view notes

l Server - Intelligence: Add notes, fire events, configure caches

l Client - Reporter: Web export, web sort

l Client - Web: Use office, document design

Now let's say that there are three employees in the enterprise that have
been using these licenses to access the following privileges:

l Employee 1: Export to PDF, web export

l Employee 2: Use analytics, web sort

l Employee 3: View notes, web sort, web export

This enterprise has 2 Server - Reporter and 2 Client - Reporter licenses, so it may initially seem like they are out of compliance since all three employees use both license types. However, because of the hierarchies described above, the Server - Intelligence license inherits the privilege set of the Server - Reporter license, and the Client - Web license inherits the privilege set of the Client - Reporter license. This means that any privileges mapped to the Server - Reporter or Client - Reporter licenses can also be mapped to the Server - Intelligence or Client - Web licenses.

So in reality, this enterprise is using 2 Server - Reporter, 2 Client - Reporter, 1 Server - Intelligence, and 1 Client - Web licenses, which means they are exactly in compliance.

Out of compliance example

Let's say an enterprise has purchased 2 Server - Reporter, 1 Server - Intelligence, 1 Client - Web, and 2 Client - Reporter licenses that contain the following privileges:

l Server - Reporter: Export to PDF, use analytics, view notes

l Server - Intelligence: Add notes, fire events, configure caches

l Client - Reporter: Web export, web sort

l Client - Web: Use office, document design

Now let's say that there are three employees in the enterprise that have
been using these licenses to access the following privileges:

l Employee 1: Export to PDF, web sort, document design

l Employee 2: Export to PDF, use office, web export

l Employee 3: Export to PDF, web sort

In total, this enterprise is using 3 Server - Reporter, 3 Client - Reporter, and 2 Client - Web licenses. Even by using inherited privileges (which maps the Server - Reporter license to the Server - Intelligence license, and the Client - Reporter license to the Client - Web license), this enterprise is still using 2 Client - Web licenses. Because they only have 1 Client - Web license available, they are out of compliance.
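The counting logic in these two examples can be sketched in code. This is a hypothetical illustration only — the privilege names, the mapping, and the promotion rule are taken from the examples above, not from the actual License Manager implementation:

```python
from collections import Counter

# Which license each privilege is mapped to (taken from the examples above).
PRIVILEGE_LICENSE = {
    "Export to PDF": "Server - Reporter",
    "Use analytics": "Server - Reporter",
    "View notes": "Server - Reporter",
    "Add notes": "Server - Intelligence",
    "Fire events": "Server - Intelligence",
    "Configure caches": "Server - Intelligence",
    "Web export": "Client - Reporter",
    "Web sort": "Client - Reporter",
    "Use office": "Client - Web",
    "Document design": "Client - Web",
}

# A license inherits the privilege set of its subset license, so overflow
# users of the subset license may be counted against the higher license.
PARENT = {
    "Server - Reporter": "Server - Intelligence",
    "Client - Reporter": "Client - Web",
}

def licenses_used(employees):
    """Count how many licenses of each type the employees consume."""
    used = Counter()
    for privileges in employees.values():
        for license_type in {PRIVILEGE_LICENSE[p] for p in privileges}:
            used[license_type] += 1
    return used

def in_compliance(used, owned):
    """True if usage fits within owned licenses, promoting overflow users
    of a subset license to its parent license where capacity allows."""
    promoted = Counter()
    for license_type, count in used.items():
        extra = count - owned.get(license_type, 0)
        if extra > 0:
            parent = PARENT.get(license_type)
            if parent is None:
                return False
            promoted[parent] += extra
    for license_type, extra in promoted.items():
        if used.get(license_type, 0) + extra > owned.get(license_type, 0):
            return False
    return True
```

For the first example above this returns True; for the second it returns False, because the enterprise uses more Client - Web licenses than it owns.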

Privileges by License Type Dashboard

Click here to view the Privileges by License Type Dashboard in MicroStrategy Library.
The following licenses are included in an AI or Cloud bundle but are not
reflected in the dashboard:

l AI Consumer User and Cloud Consumer User


o Drivers - Big Data
o Drivers - OLAP
o Gateway - EMM
o Server - Geospatial

l AI Power User and Cloud Power User


o Drivers - Big Data
o Drivers - OLAP
o Gateway - EMM
o Server - Geospatial

l Cloud Reporter User

o Drivers - Big Data
o Drivers - OLAP

MULTI-TENANT ENVIRONMENTS: OBJECT NAME PERSONALIZATION

In a multi-tenant setup, different organizations share a single MicroStrategy environment to accomplish their reporting needs. This section shows you how to use MicroStrategy to personalize object names in a project in your MicroStrategy environment, to support a multi-tenant setup.

Attribute and metric names in a project's metadata are made relevant to each tenant using object name personalization. Every object can have a different name stored to support each tenant who uses that object in their reporting. Each tenant's users see only those object names assigned to their organization. If there is no specific tenant name assigned to an object that is viewable by the tenant organization, its users see the base object name.

For example, you have an attribute stored in the metadata repository, with a
base name of Inventory Date. This metadata object will appear on reports
accessed by users in Organization A and Organization B. You can use object
name personalization to configure MicroStrategy to automatically display the
object to Organization A with the name Date In Inventory, and display the
same object to Organization B with the name Date First Purchased.

Object name personalization involves exporting object strings to a location where they can be updated with tenant-specific names, and importing the new object strings back into the metadata repository. You can also provide new names for individual objects one at a time, using the Object Translation Editor in Developer.

For steps to perform these procedures, see Renaming Metadata Objects, page 2054.

How a Tenant Language Differs from a Standard Language

A tenant language is a set of objects that use the names appropriate for a given tenant. A tenant language appears in MicroStrategy exactly like any other language. The tenant language's ID is the only property that differentiates a tenant language from a standard language; the system calculates the tenant language's ID based on the standard language's ID. For example, the language ID for standard English is 0000 0409, while a tenant language based on standard English might be 0001 0409. Basing a tenant language on a standard language allows the system to provide the best match for all facets of the renamed interface, if one or more parts of the interface are not renamed for the tenant language.

You can create up to 255 tenant languages based on a standard language. For example, using English-US as the base language, you can create 255 tenant languages based on English-US. You can create another 255 tenant languages based on English-UK, and so on.
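The ID scheme described above can be illustrated with a short sketch. The exact algorithm MicroStrategy uses is internal; this only mirrors the documented "0001 0409" pattern, and the helper function and LCID table are assumptions for illustration:

```python
# Windows LCIDs for the base languages (assumed values: 0x0409 is
# English-US, 0x0809 is English-UK).
BASE_LCIDS = {"English-US": 0x0409, "English-UK": 0x0809}

def tenant_language_id(base_language: str, tenant_index: int) -> str:
    """Return the ID of the Nth tenant language (1-255) derived from a
    standard language; index 0 is the standard language itself."""
    if not 0 <= tenant_index <= 255:
        raise ValueError("at most 255 tenant languages per base language")
    return f"{tenant_index:04X} {BASE_LCIDS[base_language]:04X}"
```

Here tenant_language_id("English-US", 1) yields "0001 0409", matching the documented example.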

Granting User Access to Rename Objects and View Tenant Languages
The procedures in this section will help you modify existing MicroStrategy
projects to support metadata object renaming and tenant languages.

Allowing Access to Objects for Renaming


To perform object renaming, access to specific objects is controlled
primarily through access control lists (ACLs). You can allow permissions to
specific users for each object that needs to be renamed, or for each tenant
language (a set of objects for a given tenant).

Access to Add or Modify an Object Name


By default, administrators and object owners can rename an object or modify
an existing object name. Use ACLs to provide other users Write access to an
object, if other users need to rename that object. To change ACL
permissions, right-click the object and select Properties, then select
Security on the left. For details on each ACL and what access it allows,
click Help.

You can also provide a user with the Use Repository Translation Wizard
privilege, within the Object Manager set of privileges. This allows a user to
perform the necessary steps to rename strings in bulk, for all tenants,
without giving the user the ability to modify an object in any other way. To
change a privilege, open the user in the User Editor and select Project
Access on the left.

Access to Select or Enable a Tenant's Object Names


By default, MicroStrategy users are provided with appropriate privileges to
Browse and Use a tenant's objects, such that analysts can select a tenant
language (the set of objects that use the tenant's names) as their display
preference if that tenant language has been enabled for a project. Project
administrators can enable any tenant language available in the system.

You can modify these default privileges for a specific user role or a specific
tenant language.

To Modify Access to a Tenant's Set of Object Names (Tenant Language)

1. In the Folder List on the left, within the appropriate project source,
expand Administration.

2. Expand Configuration Managers, then select Languages.

3. All tenant languages are listed on the right. To change ACL permissions
for a tenant language, right-click the object and select Properties.

4. Select Security on the left. For details on each ACL and what access it
allows, click Help.

Renaming Metadata Objects

Objects that can be renamed are stored in the MicroStrategy metadata. These objects include metric names, report names, the Public Objects system folder, security role names, user group names, and so on. Software strings stored in the metadata include embedded text strings (embedded in an object's definition), such as prompt instructions, aliased names (which can be used in attributes, metrics, and custom groups), consolidation element names, custom group element names, graph titles, and threshold text.

Metadata objects do not include configuration objects (such as the user object), function names, data mart table names, and so on.

Begin object renaming using the following high-level steps:

1. Add tenant languages to the system, for each of your tenants. For
steps, see Adding a New Tenant Language to the System, page 2055.

2. Enable tenant languages for your project's metadata objects. For steps,
see Enabling and Disabling Tenant Languages, page 2056.

3. Provide tenant-specific names for objects using the steps in Renaming Objects in Your Project, page 2058.

Adding a New Tenant Language to the System


You can add new tenant languages to MicroStrategy. Once they are added,
new tenant languages are then available to be enabled for a project.

You must have the Browse permission for the language object's ACL (access
control list).

To Add a New Tenant Language to the System

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, go to Language > Metadata.

4. Click Add.

5. Click New.

6. Click OK.

7. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Tenant languages can also be added using the Languages Configuration Manager, by going to Administration > Configuration Managers > Language.

After adding a new tenant language, enable the tenant language for the
project. For steps, see Enabling and Disabling Tenant Languages, page
2056.

Enabling and Disabling Tenant Languages


To support the display of a tenant's object names and descriptions, you must
enable tenant languages for your project. The tenant languages you enable
are those tenant languages you want to support for that project.

You can also disable tenant languages for a project.

Enabling Tenant Languages for a Project

Gather a list of tenant languages used by filters and prompts in the project. These tenant languages should be enabled for the project; otherwise, a report containing a filter or prompt in a tenant language not enabled for the project will not be able to execute successfully.

To Enable Tenant Languages for a Project

1. Log into the project as a user with Administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left, expand Language and select Metadata.

4. Click Add to see a list of available tenant languages. The list includes
languages that have been added to the system.

5. Select the check boxes for the tenant languages that you want to
enable for this project.

l Enabled tenant languages will appear in the Repository Translation Wizard for string and object renaming, as well as in Developer's My Preferences and Web's Preferences, for users to select their own preferred tenant language for the project.

l Reports that contain filters or prompts in a tenant language will execute successfully if the project has that tenant language enabled.

6. Click OK.

7. Select one of the tenant languages on the right side to be the default
tenant language for this project. The default tenant language is used by
the system to maintain object name uniqueness.

This may have been set when the project was first created. If so, it will
not be available to be selected here.

Once the project default tenant language is set, it cannot be changed unless you duplicate the project and change the default tenant language of the duplicated project. Individual objects within a project can have their default tenant language changed.

8. Click OK.

9. Disconnect and reconnect to the project source.

10. Update the out-of-the-box MicroStrategy metadata objects. To do this, in Developer, right-click the project and select Project Configuration. Expand Project Definition, expand Update, select Translations, and click Update.
Disabling Tenant Languages for a Project


You can use the steps below to disable a tenant language for a project.
When a tenant language has been disabled from a project, that tenant
language is no longer available for users to select as a tenant language
preference, and the tenant language cannot be seen in any related
interfaces, such as an object's Translation dialog box.

Any object names for the disabled tenant language are not removed from the
metadata with these steps. Retaining the object names in the metadata
allows you to enable the tenant language again later, and the object names
will still exist. To remove object names in the disabled tenant language from
the metadata, objects must be modified individually and saved.

To Disable Tenant Languages in a Project

1. Log in to a project as a user with administrative privileges.

2. Right-click the project and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language, then select Metadata.

4. On the right side, under Selected Languages, clear the check box for
the tenant language that you want to disable for the project, and click
OK.

Renaming Objects in Your Project


Renaming objects in a project involves providing new strings for metadata
object names and descriptions.

There are two methods to rename metadata objects, depending on whether you want to rename a large number of objects or just one or two objects:

l Rename a large number of objects: Extract strings in bulk to a separate database, rename them, and import them back into MicroStrategy. The MicroStrategy Repository Translation Wizard is the recommended method to rename your metadata objects. Steps to access this tool are below.

l Rename one or more objects in a folder: Right-click the object and select Translate. Type the new name(s) for each tenant language this object supports, and click OK. To rename several objects, select them all while holding Shift or Ctrl, then right-click and select Translate. For details to use the Object Translation dialog box, click Help.

The rest of this section describes the method to rename object strings in
bulk, using a separate database, with the Repository Translation Wizard.

The Repository Translation Wizard does not support renaming of configuration objects (such as the user object). It does support object descriptors, including embedded text. These are detailed in the introduction to Renaming Metadata Objects, page 2054.

Object renaming involves the following high-level steps:

All of the procedures in this section assume that your projects have been
prepared for object renaming. Preparation steps are in Granting User
Access to Rename Objects and View Tenant Languages, page 2053.

1. Add and enable tenant languages for the metadata repository (see
Adding a New Tenant Language to the System, page 2055 and Enabling
and Disabling Tenant Languages, page 2056)

2. Export object strings to a location where they can be renamed (see Extracting Metadata Object Strings for Renaming, page 2060)

3. Perform the renaming (see Renaming Objects in Your Project, page 2058)

4. Import the newly renamed object strings back into the metadata repository (see Importing Renamed Strings from the Database to the Metadata, page 2064)

To allow users to rename objects using MicroStrategy's bulk translation tool, the Repository Translation Wizard, grant the user the Use Repository Translation Wizard privilege. If this privilege is assigned, be aware that the user will be able to export strings and import new names for those strings in all languages that the project supports. This is true no matter what other language restrictions are applied.

Extracting Metadata Object Strings for Renaming


The MicroStrategy Repository Translation Wizard supports Microsoft Access
and Microsoft SQL Server databases as repositories where strings can be
stored for renaming. The repository is where strings are extracted to and
where the actual renaming process is performed.

You cannot extract strings from the project's default metadata language.

It is recommended that objects are not modified between the extraction process and the import process. This is especially important for objects with location-specific strings: attribute aliases, metric aliases, custom group elements, and document text boxes.

To Extract a Large Number of Object Strings for Renaming

1. Open the Repository Translation Wizard. To do this, from the Start menu, point to All Programs, then MicroStrategy Tools, then select Repository Translation Wizard.

2. Click Next to begin.

3. To extract strings from the metadata, select the Export Translations option from the Metadata Repository page in the wizard.

Renaming Metadata Object Strings in the Database


The extraction process performed by the Repository Translation Wizard creates a table in the database, with the following columns:

l PROJECTID: This is the ID of the project from which the string is extracted.

l OBJECTID: This is the ID of the object from which the string is extracted.

l OBJECTTYPE: Each object is associated with a numeric code. For example, documents are represented by OBJECTTYPE code 55.

l EMBEDDEDID: An embedded object is an object contained inside another object, for example, a metric object that is part of a report object. If the string is extracted from an embedded object, the ID of this embedded object is stored in this column. The value 0 indicates that the string is not extracted from an embedded object.

l EMBEDDEDTYPE: This is a numeric representation of the type of the embedded object. The value 0 indicates that the string is not extracted from an embedded object.

l UNIQUEKEY: This is a key assigned to the extracted string to identify the string within the object.

l READABLEKEY: This is a description of the extracted string within the object, for example, Prompt Title, Prompt Description, Object Name, Template Subtotal Name, and so on. The READABLEKEY is a readable form of the UNIQUEKEY.

l LOCALEID: This indicates the tenant language of the extracted string in the TRANSLATION column. MicroStrategy uses locale IDs to uniquely identify tenant languages, assigning each tenant language a unique ID based on the base language it is derived from.

l TRANSLATION: This is the column where the extracted string is stored.

l TRANSVERSIONID: This is the version ID of the object at the time of export.

l REFTRANSLATION: This column is used by translators. This column contains the extracted string in the translation reference language, which is selected by the user from the Repository Translation Wizard during export.

This string is used only as a reference during the translation process. For example, if the translator is comfortable with the German language, you can set German as the translation reference language. The REFTRANSLATION column will then contain all the extracted strings in the German language.

If no reference language string is available, the string from the object's primary language is exported so that this column is not empty for any string.

l STATUS: You can use this column to enter flags in the table to control
which strings are imported back into the metadata. A flag is a character
you type, for example, a letter, a number, or a special character (as long
as it is allowed by your database). When you use the wizard to import the
strings back into the metadata, you can identify this character for the
system to use during the import process, to determine which strings to
import.

For example, if only some objects have been renamed, you may want to
import only the completed ones. Or you may wish to import only those
strings that were reviewed. You can flag the strings that were completed
and are ready to be imported.

l OBJVERSIONID: This is the version ID of objects at the time of import.

l SYNCHFLAG: This is a system flag and is automatically generated during import. The following values are used:

o 0: This means that the object has not been modified between extraction and import.

o 1: This means that the object has been modified between extraction and import.

o 2: This means that the object that is being imported is no longer present in the metadata.

System flags are automatically applied to strings during the import process, so that you can view any string-specific information in the log file.

l LASTMODIFIED: This is the date and time when the strings were
extracted.

Once the extraction process is complete, the strings in the database need to
be renamed in the extraction table described above.
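As a rough illustration of this renaming step, the sketch below uses SQLite in place of the supported Access or SQL Server repositories. The column names follow the schema described above, but the table name TRANSLATION_TABLE, the sample values, and the flag character 'R' are assumptions for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Extraction table with the columns described above (abbreviated types).
conn.execute("""
    CREATE TABLE TRANSLATION_TABLE (
        PROJECTID TEXT, OBJECTID TEXT, OBJECTTYPE INTEGER,
        EMBEDDEDID TEXT, EMBEDDEDTYPE INTEGER,
        UNIQUEKEY TEXT, READABLEKEY TEXT, LOCALEID TEXT,
        TRANSLATION TEXT, TRANSVERSIONID TEXT, REFTRANSLATION TEXT,
        STATUS TEXT, OBJVERSIONID TEXT, SYNCHFLAG INTEGER,
        LASTMODIFIED TEXT
    )
""")

# A row as the wizard might have extracted it (hypothetical values).
conn.execute(
    "INSERT INTO TRANSLATION_TABLE (OBJECTID, READABLEKEY, LOCALEID, TRANSLATION)"
    " VALUES ('ABC123', 'Object Name', '0001 0409', 'Inventory Date')"
)

# Provide the tenant-specific name, then flag the row as ready to import
# using the STATUS column.
conn.execute(
    "UPDATE TRANSLATION_TABLE SET TRANSLATION = ?, STATUS = 'R' "
    "WHERE OBJECTID = ? AND LOCALEID = ? AND READABLEKEY = 'Object Name'",
    ("Date In Inventory", "ABC123", "0001 0409"),
)

# During import, the wizard can be told to import only rows whose STATUS
# matches the flag character you chose ('R' here).
ready = conn.execute(
    "SELECT TRANSLATION FROM TRANSLATION_TABLE WHERE STATUS = 'R'"
).fetchall()
```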

l If an object name is empty in a user's chosen project language preference, the system defaults to displaying the object's default name, so it is not necessary to rename objects that are not intended to be renamed. For details on language preferences, see Selecting Preferred Languages for Interfaces, Reports, and Objects, page 2065.

l If you performed a Search for Objects in the Repository Translation Tool, you may notice that the number of rows in the extraction table might not match the number of rows returned in the search results. This is because a search returns all objects that meet the search requirements; the search does not filter for only those items that can be renamed. Thus, for example, the search may return a row for the lookup table LU_YEAR, but the extraction process does not extract the LU_YEAR string because there is no reason to rename a lookup table's name. To determine whether an object's name can be renamed, right-click the object, select Properties, and look for the International option on the left. If this option is missing, the object is not supported for renaming.

To confirm that your new object names have successfully been imported back into the metadata, navigate to one of the renamed objects in Developer, right-click, and select Properties. On the left, select International, then click Translate. The table shows all names currently in the metadata for this object.

Importing Renamed Strings from the Database to the Metadata


After strings have been renamed in the database, they must be re-imported
into the MicroStrategy metadata.

To Import Renamed Strings

1. Open the Repository Translation Wizard. To do this, from the Start menu, point to All Programs, then MicroStrategy Tools, then select Repository Translation Wizard.

2. Click Next to begin.

3. To import strings from the database back into the metadata, select the Import Translations option from the Metadata Repository page in the wizard.

After the strings are imported back into the project, any objects that were modified while the renaming process was being performed are automatically marked with a SYNCHFLAG value of 1. These object names should be checked for correctness.

Making Tenant-Specific Data Available to Users


After you have performed the necessary steps to configure metadata object
renaming, you can specify which tenant language(s) should be displayed for
various users in the interface and in reports (both report objects and report
results). You can specify language preferences at the project level and at
the all-projects level. By selecting various levels of language preferences,
you specify which language is preferred as a fallback if a first choice
language is not available.

The following sections show you how to select language preferences based
on various priority levels within the system, starting with a section that
explains the priority levels:

l Selecting Preferred Languages for Interfaces, Reports, and Objects, page 2065

l Selecting the Interface Language Preference, page 2066

l Configuring Metadata Object and Report Data Language Preferences, page 2068

l Selecting the Object Default Language Preference, page 2081

Selecting Preferred Languages for Interfaces, Reports, and Objects

After renamed data is stored in your data warehouse and/or metadata database, and languages have been enabled for the project, you must specify which languages are the preferred languages for the project and the user. These selected languages are called language preferences.

The following image shows the different parts of the MicroStrategy environment that display renamed strings based on the language preferences:
The following language preferences can be configured:

l Interface Language: Determines the language in which menu options, dialog box text, and so on, are displayed. For steps to set this preference, see Selecting the Interface Language Preference, page 2066.

l Metadata objects: Determines the language that will be displayed for MicroStrategy objects that come from the metadata database, such as metric names, report names, system folder names, and so on. For steps to set this preference, see Configuring Metadata Object and Report Data Language Preferences, page 2068.

l Report data: Determines the language that will be displayed for report results that come from your data warehouse, such as attribute element names. For steps to set this preference, see Configuring Metadata Object and Report Data Language Preferences, page 2068.

l Object default language: Determines the fallback language for MicroStrategy objects. This language is used if a report is executed in a language that the object lacks a name for. For steps to set or change this default preference, see Selecting the Object Default Language Preference, page 2081.

Each language preference can be configured independently of the others. However, for best performance it is recommended that you use a unified language display in Developer. For the purposes of multi-tenancy, this means that if the base language for a tenant language is English - US, all of the language selections for that tenant should be English - US, with the exception of the Metadata Object language, which should be the tenant language.

Selecting the Interface Language Preference


The interface language preference determines what language Developer
menus, editors, dialog boxes, monitors and managers, and other parts of the
Developer software are displayed in. Use the steps below to set this
preference.

Configuring the Interface Language Preference

1. In Developer, log in to the project.

2. From the Tools menu, select Preferences.

3. On the left, expand International and select Language. The International: Language dialog box is displayed.

4. From the Interface Language drop-down list, select the language that
you want to use as the interface default language.

The interface language preference can also be used to determine the language used for the metadata objects and report data, if the Developer level language preference is set to Use the same language as MicroStrategy Developer. For more information on the Developer level language preference, see Selecting the Developer Level Language Preference, page 2077.

5. Select OK.

6. Disconnect and reconnect to the project source so that your changes take effect. To do this, right-click the project source, select Disconnect from Project Source, then repeat this and select Connect to Project Source.

Configuring Metadata Object and Report Data Language Preferences
There are several levels at which metadata and report data languages can
be specified in MicroStrategy. Lower level languages are used by the system
automatically if a higher level language is unavailable. This ensures that end
users see an appropriate language in all situations.

Language preferences can be set at six different levels, from highest priority
to lowest. The language that is set at the highest level is the language that is
always displayed, if it is available. If that language does not exist or is not
available in the metadata or the data warehouse, the next highest level
language preference is used.

If a language preference is not specified, or is set to Default,
MicroStrategy automatically uses the next lower priority language
preference. If none of these language preferences are set, the interface
preferred language is used.

When an object is created, its default object language is automatically set
to match the creator's metadata language preference. If the creator's
metadata language preference is set to Default, the new object's default
language is determined by the rules in this section: the system first tries
to use a default language for all users of the project, then a language
preference set for all users of Developer, then the default language set
for the project (as shown in the table below).

The following table describes each level, from highest priority to lowest
priority, and points to information on how to set the language preference at
each level.


• End user preference settings override any administrator preference
settings, if the two settings conflict.

• Distribution Services deliveries are one exception to the hierarchy
below. For details, see Selecting the Machine Level Language Preference,
page 2079.

Language Preference Levels (highest to lowest priority)

User-Project level
  Description: The language preference for a user for a specific project.
  Setting location for end users: Web: from the MicroStrategy icon, select
  Preferences. Developer: from the Tools menu, select My Preferences.
  Setting location for administrators: Set in the User Language Preference
  Manager. See Selecting the User-Project Level Language Preference, page 2071.

User-All Projects level
  Description: The language preference for a user for all projects.
  Setting location for end users: Web: from the MicroStrategy icon, select
  Preferences. Developer: from the Tools menu, select My Preferences.
  Setting location for administrators: Set in the User Editor. See Selecting
  the User-All Projects Level Language Preference, page 2073.

Project-All Users level
  Description: The language preference for all users in a specific project.
  Setting location for end users: Not applicable.
  Setting location for administrators: In the Project Configuration Editor,
  expand Languages and select User Preferences. See Selecting the All Users
  in Project Level Language Preference, page 2075.

Developer level
  Description: The interface language preference for all users of Developer
  on that machine, for all projects.
  Setting location for end users and administrators: Set in the Developer
  Preferences dialog box. For steps to specify this language, see Selecting
  the Developer Level Language Preference, page 2077.

Machine level
  Description: The language preference for all users on a given machine.
  Setting location for end users: On the user's machine and within the
  user's browser settings.
  Setting location for administrators: On the user's machine and within the
  user's browser settings. For steps to specify this language, see Selecting
  the Machine Level Language Preference, page 2079.

Project Default level
  Description: This is the project default language set for MDI. It is the
  language preference for all users connected to the metadata.
  Setting location for end users: Not applicable.
  Setting location for administrators: Set in the Project Configuration
  Editor. For steps to specify this language, see Configuring the Project
  Default Level Language Preference, page 2080.

These language preferences apply to strings renamed in both the metadata
and the data warehouse. However, MicroStrategy handles missing languages
differently, depending upon whether the string is renamed in the metadata
or the data warehouse:

• Metadata: When a name for an object in the metadata is missing in the
preferred language, the object default language preference is used. For
more information about the object default language preference, see
Selecting the Object Default Language Preference, page 2081.

• Data warehouse: When a name for data in the data warehouse is missing
in the preferred language (the column or table is present in the data
warehouse but is empty), the report returns no data.

The following sections provide steps to configure each preference level,
starting from the highest priority and ending at the lowest priority.
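
The fallback walk described above can be sketched in code. This is an illustrative model only, not a MicroStrategy API; the level names, the prefs dictionary, and the resolve_language function are invented for the example, and None stands in for a preference left at Default.

```python
# Illustrative model of the six-level fallback; level names, the prefs
# dict, and resolve_language() are invented for this sketch and are not
# a MicroStrategy API. None stands in for a preference left at Default.
PRIORITY = [
    "user_project",        # highest priority
    "user_all_projects",
    "project_all_users",
    "developer",
    "machine",
    "project_default",     # lowest priority
]

def resolve_language(prefs, available, delivery=False):
    """Return the first configured, available language, walking the
    hierarchy from highest to lowest priority. For Distribution Services
    deliveries, the project default is checked before the machine level,
    matching the exception noted above the table."""
    order = list(PRIORITY)
    if delivery:
        order.remove("project_default")
        order.insert(order.index("machine"), "project_default")
    for level in order:
        lang = prefs.get(level)
        if lang is not None and lang in available:
            return lang
    return prefs.get("project_default")

prefs = {"user_project": None, "user_all_projects": "de",
         "project_all_users": "fr", "project_default": "en"}
print(resolve_language(prefs, {"fr", "en"}))  # "de" unavailable -> prints fr
```

Setting delivery=True models the Distribution Services exception: the project default is tried before the machine-level language.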

Selecting the User-Project Level Language Preference

The User-Project Level language preference is the language preference for
a given user for a specified project. It is the highest priority language
setting; to see the hierarchy of language preference priorities, see the
table in Configuring Metadata Object and Report Data Language Preferences,
page 2068.

This preference is specified in the User Language Preference Manager in
Developer. Use the steps below to set this preference.

If an object has an empty name in a user's chosen project language
preference, the system defaults to displaying the object's default
language, so it is not necessary to add names for objects that are not
intended to be renamed.

Selecting the User-Project Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. Right-click the project that you want to set the language preference for
and select Project Configuration.

3. On the left side of the Project Configuration Editor, expand Languages,
and select User Preferences.


4. On the right side, under User Language Preference Manager, click
Modify. The User Language Preference Manager opens.

5. In the Choose a project to define user language preferences drop-down
menu at the top left, select the appropriate project.

6. From the list on the left side of the User Language Preference Manager,
select the users whose User-Project level language preference you want to
change, and click > to add them to the list on the right. You can narrow
the list of users displayed on the left by doing one of the following:

• To search for users in a specific user group, select the group from the
drop-down menu that is under the Choose a project to define user
language preferences drop-down menu.

• To search for users containing a certain text string, type the text
string in the Find field, and click the Filter icon. This returns a list
of users matching the text string you typed.


Previous strings you have typed into the Find field can be accessed
again by expanding the Find drop-down menu.

7. On the right side, select the user(s) that you want to change the
User-Project level preferred language for, and do the following:

You can select more than one user by holding CTRL.

• Select the desired language to be applied to renamed metadata objects
from the drop-down menu in the Metadata column. This language will be
displayed for the selected user(s) when connecting to the selected
project.

• Select the desired language to be applied to report results from the
drop-down menu in the Data column. This language will be displayed for
the selected user(s) when connecting to the selected project.

8. Click OK.

Once the user language preferences have been saved, users can no
longer be removed from the Selected list.

9. Click OK.

10. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then right-click it again and select Connect to
Project Source.

Selecting the User-All Projects Level Language Preference

The User-All Projects level language preference determines what language
will be applied to all projects that a specific user sees when connected
to a project source, unless a higher priority language preference has been
specified for the user. Use the steps below to set this preference.


If the User-Project language preference is specified for the user, the user
will see the User-All Projects language only if the User-Project language is
not available. To see the hierarchy of language preference priorities, see
the table in Configuring Metadata Object and Report Data Language
Preferences, page 2068.

Selecting the User-All Projects Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. In the Folder List on the left, within the appropriate project source,
expand Administration, expand User Manager, and navigate to the
user that you want to set the language preference for.

3. Double-click the user.

4. On the left side of the User Editor, expand the International category
and select Language.

5. On the right side of the User Editor, do the following:

• Select the language that you want to be applied to renamed metadata
strings from the Default metadata language preference for this user
drop-down menu.

• Select the language that you want to be applied to renamed data
warehouse strings from the Default data language preference for this
user drop-down menu.

6. Click OK.

7. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then right-click it again and select Connect to
Project Source.


Selecting the All Users in Project Level Language Preference

The All Users In Project level language preference determines the language
that will be displayed for all users that connect to a project, unless a higher
priority language is specified for the user. Use the steps below to set this
preference.

If the User-Project or User-All Projects language preferences are specified
for the user, the user will see the All Users In Project language only if
the other two language preferences are not available. To see the hierarchy
of language preference priorities, see the table in Configuring Metadata
Object and Report Data Language Preferences, page 2068.

Selecting the All Users in Project Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. In the Folder List on the left, select the project. From the
Administration menu, select Projects, then Project Configuration.

3. On the left side of the Project Configuration Editor, expand Language
and select User Preferences. The Language - User Preferences dialog box
is displayed.


4. Do the following:

• From the Metadata language preference for all users in this project
drop-down menu, select the language that you want to be displayed for
metadata object names in this project.

• From the Data language preference for all users in this project
drop-down menu, select the language that you want to be displayed
for report results in this project.

5. Click OK.

6. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then right-click it again and select Connect to
Project Source.


Selecting the Developer Level Language Preference

The Developer level language preference determines the default language
for all objects displayed within Developer, unless a higher priority
language preference has been specified. This is the same as the interface
preference.

If the User-Project, User-All Projects, or All Users In Project language
preferences are specified, the user will see the Developer language only
if the other three language preferences are not available. To see the
hierarchy of language preference priorities, see the table in Configuring
Metadata Object and Report Data Language Preferences, page 2068.

This language preference must be configured to match one of two other
language preferences: the Interface language preference or the Machine
level language preference. For information about the Interface language
preference, see Selecting the Interface Language Preference, page 2066.
For information about the Machine level language preference, see Selecting
the Machine Level Language Preference, page 2079.

Selecting the Developer Level Language Preference

1. Log in to Developer as a user with Administrative privileges.

2. From the Tools menu, select Preferences.

3. Expand the International category and select Language. The
International - Language dialog box opens.


4. Select one of the following from the Language for metadata and
warehouse data if user and project level preferences are set to
default drop-down menu.

• If you want the Developer language preference to be the same as the
Interface language preference, select Use the same language as
MicroStrategy Developer. For information about configuring the Interface
language preference, see Selecting the Interface Language Preference,
page 2066.

• If you want the Developer language preference to be the same as the
Machine-level language preference, select Use language from Regional
Settings. For information about configuring the Machine-level language
preference, see Selecting the Machine Level Language Preference, page
2079.


5. Select the language that you want to use as the default Developer
interface language from the Interface Language drop-down menu.

6. Click OK.

7. Disconnect and reconnect to the project source so that your changes


take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.

Selecting the Machine Level Language Preference

This preference determines the language that is used on all objects on the
local machine. MicroStrategy Web uses the language that is specified in the
user's web browser if a language is not specified at a level higher than this
one.

• If the User-Project, User-All Projects, All Users In Project, or
Developer language preferences are specified, the user will see the
Machine language only if the other four language preferences are not
available. To see the hierarchy of language preference priorities, see
the table in Configuring Metadata Object and Report Data Language
Preferences, page 2068.

• A MicroStrategy Distribution Services delivery (such as an email, file,
or printer delivery) uses a different language resolution logic: if the
User-Project, User-All Projects, All Users in Project, and Developer
languages are not able to be displayed, the delivery defaults to the
Project Default level language preference, followed by the Machine level
language preference. This is because Distribution Services runs without
a client session in the Intelligence Server machine; if the Machine
level language took precedence, all users receiving delivered content
would receive that content using the Intelligence Server machine's
language. Instead, the project's default language is the fallback
language for Distribution Services deliveries.


To select the Machine level language preference on a Windows machine,


from the Start menu, select Control Panel, then Regional and Language
Options. Consult your machine's Help for details on using the language
options.

Configuring the Project Default Level Language Preference

This language preference specifies the default language for the project. This
language preference has the lowest priority in determining the language
display. Use the steps below to set this preference.

• If the User-Project, User-All Projects, All Users In Project, Developer,
or Machine-level language preferences are specified, the user will see
the Project Default language only if the other five language preferences
are not available. To see the hierarchy of language preference
priorities, see the table in Configuring Metadata Object and Report Data
Language Preferences, page 2068.

• A MicroStrategy Distribution Services delivery (such as an email, file,
or printer delivery) uses a different language resolution logic: if the
User-Project, User-All Projects, All Users in Project, and Developer
languages are not able to be displayed, the delivery defaults to the
Project Default level language preference, followed by the Machine level
language preference. This is because Distribution Services runs without
a client session in the Intelligence Server machine; if the Machine
level language took precedence, all users receiving delivered content
would receive that content using the Intelligence Server machine's
language. Instead, the project's default language is the fallback
language for Distribution Services deliveries.

Selecting the Project Default Language Preference

The project default language is selected either when a project is first
created, or the first time metadata languages are enabled for the project.
It cannot be changed after that point. The following steps assume the
project default language has not yet been selected.

1. Log in to the project as a user with Administrative privileges.

2. Select the project for which you want to set the default preferred
language.

3. From the Administration menu, select Projects, then Project
Configuration.

4. On the left side of the Project Configuration Editor, expand Language.
Do one or both of the following:

• To specify the default metadata language for the project, select
Metadata from the Language category. Then select Default for the
desired language.

• To specify the default data language for the project, select Data from
the Language category. Then select Default for the desired
language.

5. Click OK.

6. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then right-click it again and select Connect to
Project Source.

Selecting the Object Default Language Preference


Each MicroStrategy object can have its own default language. The object
default language is used when the system cannot find or access a name for
the object in the language specified as the user or project preference.

This preference is especially useful for personal objects, since most
personal objects are used in only one language, the owner's language. The
object default language can be set to any language supported by the
project in which the object resides.


Some objects may not have their object default language preference set, for
example, if objects are merged from an older MicroStrategy system that was
not set up for multi-tenancy into an upgraded system that is set up for multi-
tenancy. In this case, for those objects that do not have a default language,
the system automatically assigns them the project's default language.

This is not true for newly created objects within an established multi-
tenancy environment. Newly created objects are automatically assigned the
creator's metadata language preference. For details on the metadata
language, see Configuring Metadata Object and Report Data Language
Preferences, page 2068.
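
A minimal sketch of the assignment rule just described, under stated assumptions: the function and argument names below are invented for illustration (they are not a MicroStrategy API), and None stands in for a preference set to Default.

```python
# Illustrative only: how a new object's default language could be chosen
# per the rules above. All names here are invented for the sketch.
def new_object_default_language(creator_metadata_pref,
                                project_all_users_pref,
                                developer_pref,
                                project_default):
    """The first non-Default preference wins, walking the order the guide
    describes for a creator whose preference is set to Default."""
    for pref in (creator_metadata_pref,
                 project_all_users_pref,
                 developer_pref):
        if pref is not None:          # None stands in for "Default"
            return pref
    return project_default

print(new_object_default_language(None, None, "en-GB", "en-US"))  # prints en-GB
```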

When duplicating a project, objects in the source that are set to use the
project default language take on the destination project's default
language.

Use the steps below to configure the object default language.

For the hierarchy of language preferences, see the table in Configuring
Metadata Object and Report Data Language Preferences, page 2068.

Configuring the Object Default Language Preference

1. Log in to the project source that contains the object as a user with
administrative privileges.

2. Right-click the object and select Properties.

• You can set the default language for multiple objects by holding the
Ctrl key while selecting multiple objects.

3. Select International. The Properties - International dialog box is
displayed.


If the International option is missing, the object is not supported for
renaming. For example, there is no reason to rename a table name for a
schema object (such as LU_YEAR), so this object does not have the
International option available.

4. From the Select the default language for the object drop-down
menu, select the default language for the object(s).

5. Click OK.

Maintaining Your Multi-Tenant Environment


You can add or remove tenant languages from your MicroStrategy system,
and you can edit the object names in the system. This section also covers
security and specialized user roles for object renaming.

Adding a New Tenant Language to the System


You can add new languages to MicroStrategy. Once they are added, new
languages are then available to be enabled for a project. For steps to add
a new tenant language, see Adding a New Tenant Language to the System,
page 2055.

Removing a Tenant Language from the System


A language cannot be removed from the system if it is in use by a project,
that is, if it has been enabled for a project. To remove a tenant language
from a project, that language must first be disabled from the project, as
described in the steps below.

If a user has selected the language as a language preference, the
preference will no longer be in effect once the language is disabled. The
next lower priority language preference will take effect. To see the
language preference priority hierarchy, see Configuring Metadata Object
and Report Data Language Preferences, page 2068.

To Remove a Tenant Language from the System

1. Disable the tenant language from all projects in which it was enabled.
To disable a metadata language from a project, see Enabling and
Disabling Tenant Languages, page 2056.

2. For metadata languages, any names for the disabled language are not
removed from the metadata with these steps. To remove names:

• For individual objects: Objects that contain names for the disabled
tenant language must be modified and saved. You can use the Search
dialog box from the Tools menu in Developer to locate objects that
have names for a given tenant. In the dialog box, on the International
tab, click Help for details on setting up a search for these objects.

• For the entire metadata: Duplicate the project after the tenant
language has been removed, and do not include the renamed strings
in the duplicated project.

3. For objects that had the disabled language as their default language,
the following scenarios occur. The scenarios assume the project defaults
to Tenant A's language, and Tenant B's language is disabled for the
project:

• If the object's default language is Tenant B's language, and the object
has names for both Tenant A and Tenant B, then, after Tenant B's
language is disabled from the project, the object will only display
Tenant A's names. The object's default language automatically
changes to Tenant A's language.

• If the object's default language is Tenant B's language and the object
contains only Tenant B's names, then, after Tenant B's language is
disabled from the project, Tenant B's names will be displayed but will
be treated by the system as if they belong to Tenant A's language.
The object's default language automatically changes to Tenant A's
language.

For both scenarios above: If you later re-enable Tenant B's language
for the project, the object's default language automatically changes
back to Tenant B's language as long as no changes were made and
saved for the object while the object had Tenant A's language as its
default language. If changes were made and saved to the object while
it used Tenant A's language as its default language, and you want to
return the object's default language back to Tenant B's language, you
can do so manually: right-click the object, select Properties, select
Internationalization on the left, and choose a new default language.


INTELLIGENCE SERVER STATISTICS DATA DICTIONARY


This section lists the staging tables in the statistics repository to which
Intelligence Server logs statistics. The detailed information includes the
table name, its function, the table to which the data is moved in the
Enterprise Manager repository, and the table's columns. For each column
we provide the description and datatypes for DB2, MySQL, SQL Server,
Oracle, Teradata, and Sybase databases. A Bold column name indicates
that it is a primary key, and (I) indicates that the column is used in an index.

STG_CT_DEVICE_STATS
Records statistics related to the mobile client and the mobile device. This
table is used when the Mobile Clients option is selected in the Statistics
category of the Project Configuration Editor and the mobile client is
configured to log statistics. The data load process moves this table's
information to the CT_DEVICE_STATS table, which has the same columns
and datatypes.
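
As a hedged illustration of how the DAY_ID, HOUR_ID, and MINUTE_ID columns below relate to a single action-start timestamp: only the three-column split comes from the table definitions; the helper function itself is invented for this sketch.

```python
# Invented helper: splits an action-start timestamp into the three
# bucket columns used by the staging tables below.
from datetime import datetime

def split_action_time(ts: datetime):
    day_id = ts.date()        # DATE-typed column
    hour_id = ts.hour         # small integer column, 0-23
    minute_id = ts.minute     # small integer column, 0-59
    return day_id, hour_id, minute_id

print(split_action_time(datetime(2024, 9, 30, 14, 7, 55)))
# prints (datetime.date(2024, 9, 30), 14, 7)
```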

STG_CT_DEVICE_STATS columns (datatypes given per database platform):

DAY_ID
  Description: Day the action was started.
  Datatypes: SQL Server DATE; Oracle DATE; DB2 TIMESTAMP; Teradata DATE; Sybase DATE; MySQL DATE

HOUR_ID
  Description: Hour the action was started.
  Datatypes: SQL Server TINYINT; Oracle NUMBER(3); DB2 SMALLINT; Teradata BYTEINT; Sybase TINYINT; MySQL TINYINT

MINUTE_ID
  Description: Minute the action was started.
  Datatypes: SQL Server SMALLINT; Oracle NUMBER(5); DB2 SMALLINT; Teradata SMALLINT; Sybase SMALLINT; MySQL SMALLINT

SERVERID
  Description: GUID of the Intelligence Server processing the request.
  Datatypes: CHAR(32) on all platforms

SERVERMACHINE
  Description: Name of the Intelligence Server processing the request.
  Datatypes: SQL Server VARCHAR(255); Oracle VARCHAR2(255); DB2 VARCHAR(255); Teradata VARCHAR(255); Sybase VARCHAR(255); MySQL VARCHAR(255)

DEVICEINSTID
  Description: Unique installation ID of the mobile app.
  Datatypes: CHAR(40) on all platforms

DEVICETYPE
  Description: Type of device the app is installed on, such as iPad, Droid, or iPhone.
  Datatypes: SQL Server VARCHAR(40); Oracle VARCHAR2(40); DB2 VARCHAR(40); Teradata VARCHAR(40); Sybase VARCHAR(40); MySQL VARCHAR(40)

OS
  Description: Operating system of the device the app is installed on, such as iOS or Android.
  Datatypes: SQL Server VARCHAR(40); Oracle VARCHAR2(40); DB2 VARCHAR(40); Teradata VARCHAR(40); Sybase VARCHAR(40); MySQL VARCHAR(40)

OSVER
  Description: Version of the operating system, such as 5.2.1.
  Datatypes: SQL Server VARCHAR(40); Oracle VARCHAR2(40); DB2 VARCHAR(40); Teradata VARCHAR(40); Sybase VARCHAR(40); MySQL VARCHAR(40)

APPVER
  Description: Version of the MicroStrategy app.
  Datatypes: SQL Server VARCHAR(40); Oracle VARCHAR2(40); DB2 VARCHAR(40); Teradata VARCHAR(40); Sybase VARCHAR(40); MySQL VARCHAR(40)

STATECOUNTER
  Description: An integer value that increments whenever the device information, such as DEVICETYPE, OS, OSVER, or APPVER, changes.
  Datatypes: SQL Server SMALLINT; Oracle NUMBER(5); DB2 SMALLINT; Teradata SMALLINT; Sybase SMALLINT; MySQL SMALLINT

STATECHANGETIME
  Description: Date and time when STATECOUNTER is incremented.
  Datatypes: SQL Server DATETIME; Oracle TIMESTAMP; DB2 TIMESTAMP; Teradata TIMESTAMP; Sybase DATETIME; MySQL DATETIME

RECORDTIME
  Description: Timestamp of when the record was written to the database, according to database system time.
  Datatypes: SQL Server DATETIME; Oracle TIMESTAMP; DB2 TIMESTAMP; Teradata TIMESTAMP; Sybase DATETIME; MySQL DATETIME

STG_CT_EXEC_STATS
Records statistics related to execution of reports/documents in a mobile
app. This table is used when the Mobile Clients option is selected in the
Statistics category of the Project Configuration Editor and the mobile client
is configured to log statistics. The data load process moves this table's
information to the CT_EXEC_STATS table, which has the same columns and
datatypes.
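
Two columns in this table, CTREQRECTIME and CTRENDERTIME, store millisecond differences between the timestamp pairs recorded around them. A small illustrative helper shows the arithmetic; the function name and sample values are invented for this sketch.

```python
# Invented helper: the millisecond-difference arithmetic behind columns
# such as CTREQRECTIME (CTReceivedTime - CTRequestTime) and CTRENDERTIME
# (CTRenderFinishTime - CTRenderStartTime).
from datetime import datetime

def elapsed_ms(start: datetime, end: datetime) -> int:
    return int((end - start).total_seconds() * 1000)

request_time = datetime(2024, 9, 30, 10, 0, 0)
received_time = datetime(2024, 9, 30, 10, 0, 1, 250000)
print(elapsed_ms(request_time, received_time))  # prints 1250
```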

SQL
Terad Syba MySQ
Serve Oracle DB2
Descriptio ata se L
Column r Data- Data-
n Data- Data- Data-
Data- type type
type type type
type

Day the TIMES


DAY_ID DATE DATE DATE DATE DATE
action was TAMP

Copyright © 2024 All Rights Reserved 2090


Syst em Ad m in ist r at io n Gu id e

SQL
Terad Syba MySQ
Serve Oracle DB2
Descriptio ata se L
Column r Data- Data-
n Data- Data- Data-
Data- type type
type type type
type

started.

Hour the
TINYI NUMBE SMALL BYTEI TINYI TINYI
HOUR_ID action was
NT R(3) INT NT NT NT
started.

Minute the
SMAL NUMB SMALL SMALL SMAL SMAL
MINUTE_ID action was
LINT ER(5) INT INT LINT LINT
started.

Unique
installation CHAR CHAR CHAR CHAR CHAR CHAR
DEVICEINSTID (I)
ID of the (40) (40) (40) (40) (40) (40)
mobile app.

An integer
value that
increments
when the
device
information,
such as
STATECOUNTE SMAL NUMB SMALL SMALL SMAL SMAL
DEVICETYP
R (I) LINT ER(5) INT INT LINT LINT
E, OS,
OSVER, or
APPVER (in
STG_CT_
DEVICE_
STATS),
changes.

GUID of the
CHAR CHAR CHAR CHAR CHAR CHAR
USERID user making
(32) (32) (32) (32) (32) (32)
the request.

Copyright © 2024 All Rights Reserved 2091


Syst em Ad m in ist r at io n Gu id e

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| SESSIONID | GUID of the session that executed the request. This should be the same as the SESSIONID for this request in STG_IS_REPORT_STATS. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CTSESSIONID | GUID of the MicroStrategy Mobile client session ID. A new client session ID is generated every time a user logs in to the mobile app. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| MESSAGEID | ID corresponding to the JOBID (in STG_IS_REPORT_STATS) of the message generated by the execution. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| ACTIONID | Similar to JOBID but generated by the client and cannot be NULL. The JOBID may be NULL if the user is offline during execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SERVERID | GUID of the Intelligence Server processing the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | Name of the machine hosting the Intelligence Server processing the request. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| REPORTID | GUID of the report used in the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DOCUMENTID | GUID of the document used in the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| MSERVERMACHINE | Name of the load balancing machine. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| CTREQUESTTIME | Time when the user submits a request to the mobile app. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTRECEIVEDTIME | Time when the mobile app begins receiving data from MicroStrategy Mobile Server. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTREQRECTIME | Difference between CTRequestTime and CTReceivedTime, in milliseconds. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CTRENDERSTARTTIME | Time when the mobile app begins rendering. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTRENDERFINISHTIME | Time when the mobile app finishes rendering. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTRENDERTIME | Difference between CTRenderStartTime and CTRenderFinishTime, in milliseconds. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| EXECUTIONTYPE | Type of report/document execution: 1: User execution; 2: Pre-cached execution; 3: Application recovery execution; 4: Subscription cache pre-loading execution; 5: Transaction subsequent action execution; 6: Report queue execution; 7: Report queue recall execution; 8: Back button execution | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| CACHEIND | Whether a cache was hit during the execution, and if so, what type of cache hit occurred: 0: No cache hit; 1: Intelligence Server cache hit; 2: Device cache hit; 6: Application memory cache hit | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| PROMPTIND | Whether the report or document is prompted: 0: Not prompted; 1: Prompted | BIT | NUMBER(1) | SMALLINT | BYTEINT | BIT | TINYINT(1) |
| CTDATATYPE | Whether the job is for a report or a document: 3: Report; 55: Document | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| CTNETWORKTYPE | The type of network used: 3G, WiFi, LTE, 4G | VARCHAR(40) | VARCHAR2(40) | VARCHAR(40) | VARCHAR(40) | VARCHAR(40) | VARCHAR(40) |
| CTBANDWIDTH | Estimated network bandwidth, in kbps. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| VIEWFINISHTIME | Time at which the user either clicks on another report/document, or navigates away from the mobile app. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| VIEWTIME | Difference between CTRenderFinishTime and ViewFinishTime, in milliseconds. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| MANIPULATIONS | An integer value that increases with every manipulation the user makes after the report/document is rendered, excluding those that require fetching more data from Intelligence Server and/or result in another report/document execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| CTAVGMANIPRENDERTIME | Average rendering time for each manipulation. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CTLATITUDE | Latitude of the user. | FLOAT | FLOAT | DOUBLE | FLOAT | FLOAT | FLOAT |
| CTLONGITUDE | Longitude of the user. | FLOAT | FLOAT | DOUBLE | FLOAT | FLOAT | FLOAT |

Copyright © 2024 All Rights Reserved 2092
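The three INTEGER duration columns in this table (CTREQRECTIME, CTRENDERTIME, and VIEWTIME) are each defined as the millisecond difference between a pair of the timestamp columns. A minimal sketch of that arithmetic, using hypothetical row values — the column names come from the table above, while the `elapsed_ms` helper and the sample timestamps are illustrative, not MicroStrategy code:

```python
from datetime import datetime

def elapsed_ms(start, finish):
    """Millisecond difference between two timestamps, as stored in the
    INTEGER duration columns (CTREQRECTIME, CTRENDERTIME, VIEWTIME)."""
    return int((finish - start).total_seconds() * 1000)

# Hypothetical row: timestamps as the mobile client would report them.
row = {
    "CTREQUESTTIME":      datetime(2024, 9, 1, 10, 0, 0, 0),
    "CTRECEIVEDTIME":     datetime(2024, 9, 1, 10, 0, 0, 250000),
    "CTRENDERSTARTTIME":  datetime(2024, 9, 1, 10, 0, 0, 300000),
    "CTRENDERFINISHTIME": datetime(2024, 9, 1, 10, 0, 1, 100000),
    "VIEWFINISHTIME":     datetime(2024, 9, 1, 10, 0, 31, 100000),
}

ctreqrectime = elapsed_ms(row["CTREQUESTTIME"], row["CTRECEIVEDTIME"])          # 250 ms
ctrendertime = elapsed_ms(row["CTRENDERSTARTTIME"], row["CTRENDERFINISHTIME"])  # 800 ms
viewtime     = elapsed_ms(row["CTRENDERFINISHTIME"], row["VIEWFINISHTIME"])     # 30000 ms
```

The same pairings apply when validating loaded rows: each duration column should equal the difference of its two source timestamps.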

STG_CT_MANIP_STATS

Records statistics related to manipulation of reports/documents in a mobile app. This table is used when the Mobile Clients and Mobile Clients Manipulations options are selected in the Statistics category of the Project Configuration Editor and the mobile client is configured to log statistics. The data load process moves this table's information to the CT_MANIP_STATS table, which has the same columns and datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the action was started. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the action was started. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the action was started. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| DEVICEINSTID (I) | Unique installation ID of the mobile app. | CHAR(40) | CHAR(40) | CHAR(40) | CHAR(40) | CHAR(40) | CHAR(40) |
| STATECOUNTER (I) | An integer value that increments when the device information, such as DEVICETYPE, OS, OSVER, or APPVER (in STG_CT_DEVICE_STATS), changes. | INTEGER | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| USERID | GUID of the user making the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SESSIONID | GUID of the session that executed the request. This should be the same as the SESSIONID for this request in STG_IS_REPORT_STATS. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CTSESSIONID | GUID of the MicroStrategy Mobile client session ID. A new client session ID is generated every time a user logs in to the mobile app. | CHAR(32) | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| ACTIONID | Similar to JOBID but generated by the client and cannot be NULL. The JOBID may be NULL if the user is offline during execution. | INTEGER | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SERVERID | GUID of the Intelligence Server processing the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | Name of the machine hosting the Intelligence Server processing the request. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| REPORTID | GUID of the report used in the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DOCUMENTID | GUID of the document used in the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| MANIPSEQUENCEID | The order in which the manipulations were made. For each manipulation, the mobile client returns a row, and the value in this column increments for each row. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| MANIPTYPEID | Type of manipulation: 0: Unknown; 1: Selector; 2: Panel Selector; 3: Action Selector; 4: Change Layout; 5: Change View; 6: Sort; 7: Page By | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| MANIPNAME | Name of the item that was manipulated. For example, if a selector was clicked, this is the name of the selector. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| MANIPVALUE | Value of the item that was manipulated. For example, if a panel selector was clicked, this is the name of the selected panel. | VARCHAR(2000) | VARCHAR2(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) |
| MANIPVALUESEQ | If the value for MANIPVALUE is too long to fit in a single row, this manipulation is spread over multiple rows, and this value is incremented. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| CTMANIPSTARTTIME | Time when the user submits the manipulation. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTMANIPFINISHTIME | Time when the mobile app finishes processing the manipulation and forwards it for rendering. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CTMANIPTIME | Difference between CTMANIPSTARTTIME and CTMANIPFINISHTIME, in milliseconds. | FLOAT | FLOAT | DOUBLE | FLOAT | FLOAT | FLOAT |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DETAIL1 | A flexible column to capture different states of manipulation. | VARCHAR(2000) | VARCHAR2(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) |
| DETAIL2 | A flexible column to capture different states of manipulation. | VARCHAR(2000) | VARCHAR2(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) | VARCHAR(2000) |
| RECORDTIME | Date and time when this information was written to the statistics database. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
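Because MANIPVALUE is limited to 2000 characters, a single manipulation whose value is too long is spread across several rows that differ only in MANIPVALUESEQ. A sketch of reassembling the full value when reading the staging table — the grouping key and the `reassemble_manip_values` helper are assumptions for illustration, not MicroStrategy code:

```python
from collections import defaultdict

def reassemble_manip_values(rows):
    """Concatenate MANIPVALUE fragments in MANIPVALUESEQ order.

    `rows` is an iterable of dicts keyed by the column names above; one
    manipulation is identified here by (SESSIONID, ACTIONID, MANIPSEQUENCEID),
    an assumed key -- adjust to however your load de-duplicates rows."""
    parts = defaultdict(list)
    for r in rows:
        key = (r["SESSIONID"], r["ACTIONID"], r["MANIPSEQUENCEID"])
        parts[key].append((r["MANIPVALUESEQ"], r["MANIPVALUE"]))
    return {k: "".join(v for _, v in sorted(p)) for k, p in parts.items()}

# Two fragments of one manipulation, deliberately out of sequence order.
rows = [
    {"SESSIONID": "A" * 32, "ACTIONID": 1, "MANIPSEQUENCEID": 1,
     "MANIPVALUESEQ": 2, "MANIPVALUE": "Region"},
    {"SESSIONID": "A" * 32, "ACTIONID": 1, "MANIPSEQUENCEID": 1,
     "MANIPVALUESEQ": 1, "MANIPVALUE": "Panel: "},
]
values = reassemble_manip_values(rows)
```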

STG_IS_CACHE_HIT_STATS
Tracks job executions that hit the report cache. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_CACHE_HIT_STATS table, which has the same columns and
datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the job execution hit the report cache. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the job execution hit the report cache. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the job execution hit the report cache. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| CACHEINDEX (I) | A sequential number for this table. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CACHESESSIONID (I) | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the server definition. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CACHEHITTIME (I) | Timestamp when this cache is hit. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CACHEHITTYPE (I) | Type of cache hit: 0: Report cache hit; 1 or 2: Document cache hit | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| CACHECREATORJOBID (I) | Job ID that created the cache. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CREATORSESSIONID (I) | GUID for the session in which the cache was created. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| JOBID (I) | Job ID for partial cache hit, or document parent job ID if the cache hit originated from a document child report. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STARTTIME | Timestamp of when the job started. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| RECORDTIME (I) | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID (I) | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |

The table below lists combinations of CACHEHITTYPE and JOBID that can
occur in the STG_IS_CACHE_HIT_STATS table and what those
combinations mean.

| Cache Hit Type | JobID | Description |
|---|---|---|
| 0 | -1 | For a normal report, a full cache hit |
| 0 | Real JobID | For a normal report, a partial cache hit |
| 1 | Parent JobID | For a child report from a document, a full cache hit, so no child report |
| 2 | Child JobID | For a child report from a document, a partial cache hit, child report has a job |
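A data load or reporting script can translate these combinations into readable labels. The sketch below simply mirrors the four documented cases; the function name and label wording are illustrative, not part of the product:

```python
def describe_cache_hit(cache_hit_type, job_id):
    # Mirrors the documented (CACHEHITTYPE, JOBID) combinations from the
    # table above; any other combination is undocumented, so None is returned.
    if cache_hit_type == 0:
        return ("normal report, full cache hit" if job_id == -1
                else "normal report, partial cache hit")
    if cache_hit_type == 1:
        return "document child report, full cache hit (JOBID is the parent job)"
    if cache_hit_type == 2:
        return "document child report, partial cache hit (JOBID is the child job)"
    return None
```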

STG_IS_CUBE_REP_STATS

Records statistics related to Intelligent Cube manipulations. This table is not populated unless at least one of the Advanced Statistics Collection Options is selected in the Statistics category of the Project Configuration Editor. The data load process moves this table's information to the IS_CUBE_REP_STATS table, which has the same columns and datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID (I) | Day the action was started. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the action was started. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the action was started. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SESSIONID | GUID of the session that executed the action on the Intelligent Cube. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| JOBID | Job ID for the action on the Intelligent Cube. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| STARTTIME | Timestamp of when the action started. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| FINISHTIME | Timestamp of when the action finished. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CUBEREPORTGUID | GUID of the Intelligent Cube report that was executed. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CUBEINSTANCEID | GUID of the Intelligent Cube instance in memory. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CUBEACTIONID | Type of action against the Intelligent Cube: 0: Reserved for MicroStrategy use; 1: Cube Publish; 2: Cube View Hit; 3: Cube Dynamic Source Hit; 4: Cube Append; 5: Cube Update; 6: Cube Delete; 7: Cube Destroy | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| REPORTGUID | If a report hit the Intelligent Cube, the GUID of that report. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CUBEKBSIZE | If the Intelligent Cube is published or refreshed, the size of the Intelligent Cube in KB. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CUBEROWSIZE | If the Intelligent Cube is published or refreshed, the number of rows in the Intelligent Cube. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SERVERMACHINE | Name of the Intelligence Server processing the request. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |

STG_IS_DOC_STEP_STATS
Tracks each step in the document execution process. This table is used
when the Document Job Steps option is selected in the Statistics category
of the Project Configuration Editor. The data load process moves this table's
information to the IS_DOC_STEP_STATS table, which has the same
columns and datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the document was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the document was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the document was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID | GUID of the document job. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STEPSEQUENCE | Sequence number for a job's steps. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| SESSIONID | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the server definition. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| STEPTYPE | Type of step. For a description, see Report and Document Steps, page 2185. 1: Metadata object request step; 2: Close job; 3: SQL generation; 4: SQL execution; 5: Analytical Engine server task; 6: Resolution server task; 7: Report net server task; 8: Element request step; 9: Get report instance; 10: Error message send task; 11: Output message send task; 12: Find report cache task; 13: Document execution step; 14: Document send step; 15: Update report cache task; 16: Request execute step; 17: Data mart execute step; 18: Document data preparation; 19: Document formatting; 20: Document manipulation; 21: Apply view context; 22: Export engine; 23: Find Intelligent Cube task; 24: Update Intelligent Cube task; 25: Post-processing task; 26: Delivery task; 27: Persist result task; 28: Document dataset execution task | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| STARTTIME | Timestamp of the step's start time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| FINISHTIME | Timestamp of the step's finish time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| QUEUETIME | Time duration, in milliseconds, between the last step finish and the next step start. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CPUTIME | CPU time, in milliseconds, used during this step. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STEPDURATION | FINISHTIME minus STARTTIME, in milliseconds. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
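Since STEPDURATION is defined as FINISHTIME minus STARTTIME in milliseconds, a load-verification step can recompute it from the two timestamps and flag rows that disagree. A sketch using an in-memory SQLite stand-in for the staging table — the real table lives in the statistics database, and only the columns needed for the check are modeled here:

```python
import sqlite3

# In-memory stand-in for STG_IS_DOC_STEP_STATS with ISO-8601 text timestamps.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE STG_IS_DOC_STEP_STATS (
    JOBID INTEGER, STEPSEQUENCE INTEGER, STEPTYPE INTEGER,
    STARTTIME TEXT, FINISHTIME TEXT, STEPDURATION INTEGER)""")
conn.executemany(
    "INSERT INTO STG_IS_DOC_STEP_STATS VALUES (?, ?, ?, ?, ?, ?)",
    [(42, 1, 13, "2024-09-01 10:00:00.000", "2024-09-01 10:00:01.500", 1500),
     (43, 1, 14, "2024-09-01 10:00:00.000", "2024-09-01 10:00:02.000", 9999)])

# Flag rows where STEPDURATION disagrees with FINISHTIME - STARTTIME in ms.
mismatches = conn.execute("""
    SELECT JOBID, STEPSEQUENCE
    FROM STG_IS_DOC_STEP_STATS
    WHERE STEPDURATION <> CAST(ROUND(
          (julianday(FINISHTIME) - julianday(STARTTIME)) * 86400000) AS INTEGER)
""").fetchall()
```

Against a real statistics database the same WHERE clause would be rewritten with that platform's date arithmetic (e.g. DATEDIFF on SQL Server).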

STG_IS_DOCUMENT_STATS
Tracks document executions that the Intelligence Server processes. This
table is used when the Basic Statistics option is selected in the Statistics
category of the Project Configuration Editor. The data load process moves
this table's information to the IS_DOCUMENT_STATS table, which has the
same columns and datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the document was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the document was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the document was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID (I) | Job ID. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SESSIONID (I) | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the Intelligence Server's server definition at the time of the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | Server machine name or IP address. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| USERID | GUID of the user. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DOCUMENTID (I) | GUID of the document. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REQUESTRECTIME | The timestamp at which the request is received. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| REQUESTQUEUETIME | Total queue time of all steps in this request. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STARTTIME | Time duration between the request receive time and when the document job was created. An offset of the RequestRecTime. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| FINISHTIME | Time duration between the request receive time and when the document job's last step was finished. An offset of the RequestRecTime. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| EXECERRORCODE | Execution error code. If no error, the value is 0. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| REPORTCOUNT | Number of reports included in the document. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| CANCELINDICATOR | Was the document job canceled? | BIT | NUMBER(1) | SMALLINT | BYTEINT | BIT | TINYINT(1) |
| PROMPTINDICATOR | Number of prompts in the report. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| CACHEDINDICATOR | Was the document cached? | BIT | NUMBER(1) | SMALLINT | BYTEINT | BIT | TINYINT(1) |
| RECORDTIME (I) | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| CPUTIME | CPU time, in milliseconds, used for document execution. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STEPCOUNT | Total number of steps involved in execution (not just unique steps). | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| EXECDURATION | Duration of execution, in milliseconds. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| ERRORMESSAGE | Error message displayed to the user when an error is encountered. | VARCHAR(4000) | VARCHAR2(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) |
| EXECACTIONS | Intelligence Server-related actions that need to take place during document execution. | INTEGER | INTEGER | INTEGER | INTEGER | INTEGER | INTEGER |
| EXECFLAGS | Intelligence Server-related processes needed to refine the document execution. | INTEGER | INTEGER | INTEGER | INTEGER | INTEGER | INTEGER |
| PROMPTANSTIME | Total time, in milliseconds, the user spent answering prompts on the document. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| EXPORTINDC | 1 if the document was exported, otherwise 0. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| CACHECREATORJOBID | If the job hit a cache, the job ID of the job that created the cache used by the current job. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| CACHECREATORSESSONID | If the job hit a cache, the GUID for the session in which the cache was created. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| MESSAGEID | For MicroStrategy use. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
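Note that STARTTIME and FINISHTIME in this table are INTEGER offsets of REQUESTRECTIME rather than timestamps. A sketch of converting them back to wall-clock times — treating the offsets as milliseconds is an assumption here, since the schema only describes them as offsets of the RequestRecTime:

```python
from datetime import datetime, timedelta

def absolute_time(request_rec_time, offset_ms):
    # STARTTIME/FINISHTIME are stored relative to REQUESTRECTIME;
    # millisecond units are assumed for the offset.
    return request_rec_time + timedelta(milliseconds=offset_ms)

request_rec = datetime(2024, 9, 1, 10, 0, 0)
job_created = absolute_time(request_rec, 120)    # STARTTIME offset
job_finished = absolute_time(request_rec, 4520)  # FINISHTIME offset
elapsed = job_finished - job_created             # job creation to last step finished
```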


STG_IS_INBOX_ACT_STATS
Records statistics related to History List manipulations. This table is used
when the Inbox Messages option is selected in the Statistics category of
the Project Configuration Editor. The data load process moves this table's
information to the IS_INBOX_ACT_STATS table, which has the same
columns and datatypes.

| Column | Description | SQL Server Data Type | Oracle Data Type | DB2 Data Type | Teradata Data Type | Sybase Data Type | MySQL Data Type |
|---|---|---|---|---|---|---|---|
| DAY_ID (I) | Day the manipulation was started. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the manipulation was started. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the manipulation was started. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SESSIONID (I) | GUID of the session that started the History List manipulation. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the server definition of the Intelligence Server being manipulated. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | Name and port number of the Intelligence Server machine where the manipulation is taking place. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project where the History List message is mapped. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| INBOXACTION | Type of manipulation: 0: Reserved for MicroStrategy use; 1: Add: Add message to History List; 2: Remove: Remove message from History List; 3: Rename: Rename message; 4: Execute: Execute contents of message; 5: Change Status: Change message status from Ready to Read; 6: Requested: Retrieve message contents; 7: Batch Remove: Intelligence Server bulk operation, such as cache expiration | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| USERID | ID of the user doing the manipulation. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| OWNERID | ID of the user that created the message. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| MESSAGEID | GUID of the History List message being acted on. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| MESSAGETITLE | Name of the report or document referenced in the History List message. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| MESSAGEDISPNAME | User-defined name of the History List message. Blank unless the user has renamed the History List message. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| CREATIONTIME | Date and time when the History List message was created. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| STARTTIME | Date and time when the manipulation started. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| REPORTJOBID (I) | Report job ID for the History List Message Content Request. Blank if no job was executed or if a document was executed. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| DOCUMENTJOBID (I) | Document job ID for the History List Message Content Request. Blank if no job was executed or if a report was executed. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |

ID of the
subscript
ion that
SUBSCRIPTI CHAR CHAR CHAR CHAR CHAR CHAR
invoked
ONID (32) (32) (32) (32) (32) (32)
the
manipula
tion.

If the
manipul
ation is a
batch
deletion
of
History
List VARC VARCH VARCH VARCH VARC VARC
ACTIONCOM
message HAR AR2 AR AR HAR HAR
MENT
s, this (4000) (4000) (4000) (4000) (4000) (4000)
field
contains
the
condition
or SQL
stateme
nt used

Copyright © 2024 All Rights Reserved 2140


Syst em Ad m in ist r at io n Gu id e

SQL Sybas MySQ


Oracle DB2 Teradat
Descrip Server e L
Column Data- Data- a Data-
tion Data- Data- Data-
type type type
type type type

to delete
the
message
s.

If there
is an
error,
this field
holds the
error
messag
e.

GUID of
the
REPOSITORY CHAR CHAR CHAR CHAR CHAR CHAR
metadata
ID (32) (32) (32) (32) (32) (32)
repositor
y.

Timesta
mp of
when the
record
was
written
RECORDTIM to the DATET TIMEST TIMEST TIMEST DATET DATET
E databas IME AMP AMP AMP IME IME
e,
accordin
g to
databas
e system
time.
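As a usage sketch, the INBOXACTION codes above can be aggregated to see how users interact with the History List. The example below runs against an in-memory SQLite copy of the staging table, which is only a stand-in for the real statistics database; the table name (STG_IS_INBOX_ACT_STATS, following this guide's STG_IS_* naming) and all sample rows are assumptions, while the column names come from the table above.

```python
import sqlite3

# Hypothetical, minimal copy of the History List manipulation staging table
# (subset of columns; in production, Intelligence Server writes these rows).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_INBOX_ACT_STATS (
        MESSAGEID   CHAR(32),
        USERID      CHAR(32),
        INBOXACTION TINYINT    -- 1: Add, 2: Remove, 5: Change Status, ...
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_INBOX_ACT_STATS VALUES (?, ?, ?)",
    [("m1", "u1", 1), ("m2", "u2", 1), ("m1", "u1", 2), ("m3", "u1", 5)],
)
# Count History List manipulations per action code.
action_counts = conn.execute("""
    SELECT INBOXACTION, COUNT(*)
    FROM STG_IS_INBOX_ACT_STATS
    GROUP BY INBOXACTION
    ORDER BY INBOXACTION
""").fetchall()
print(action_counts)  # [(1, 2), (2, 1), (5, 1)]
```

The same GROUP BY would typically be run against the IS_ table after the data load process, not the staging table.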


STG_IS_MESSAGE_STATS
Records statistics related to sending messages through Distribution
Services. This table is used when the Basic statistics option is selected in
the Statistics category of the Project Configuration Editor. The data load
process moves this table's information to the IS_MESSAGE_STATS table,
which has the same columns and datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the job was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the job was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the job was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| MESSAGEINDEX | Message GUID used to identify a message. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SESSIONID | GUID of the user session created to generate the message. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| HISTORYLISTMESSAGEID | History List message ID. If there is no History List message associated with the subscription, this value is 00000000 00000000 00000000 00000000. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SCHEDULEJOBID | Job ID of report/document executed to run the subscription instance. If no job is created, this value is -1. If a fresh job A is created and it hits the cache of an old job B, SCHEDULEJOBID takes the value of the fresh job A. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| DATATYPE | Type of subscribed object: 3: Report; 55: Document. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| RECIPIENTCONTACTID | GUID of the message recipient. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DELIVERYTYPE | Type of subscription: 1: Email; 2: File; 4: Printer; 8: Custom; 16: History List; 32: Client; 40: Cache update; 128: Mobile; 100: Last one; 255: All. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SUBSINSTID | Subscription instance GUID used to send the message. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SCHEDULEID | Schedule GUID. If there is no schedule associated with the subscription, this value is -1. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SUBINSTNAME | Name of the subscription. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| DATAID | GUID of the data content. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CONTACTTYPE | The contact type for this subscription instance's RecipientID. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| RECIPIENTGROUPID | Recipient's group ID for group messages sent to a Contact Collection or a User Group. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| RECIPIENTCONTACTNAME | Name of the contact who received the message. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| ISDEFAULTADDRESS | Whether the address that the message was sent to is the default address of a MicroStrategy user: 0: No; 1: Yes. | BIT | NUMBER(1) | SMALLINT | BYTEINT | BIT | TINYINT(1) |
| ADDRESSID | GUID of the address the message was sent to. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DEVICEID | ID of the device the message was sent to. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| ISNOTIFICATIONMESSAGE | Whether a notification was sent: 0: No; 1: Yes. | BIT | NUMBER(1) | SMALLINT | BYTEINT | BIT | TINYINT(1) |
| NOTIFICATIONADDR | Address ID the notification is sent to. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| SERVERID | Server definition GUID under which the subscription ran. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | Server machine name or IP address under which the report or document job ran. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | Project GUID under which the data content resides. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| EXECSTARTTIME | Time at which the message creation started. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| EXECFINISHTIME | Time at which the message delivery finished. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| DELIVERYSTATUS | Status of the message delivery. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| PHYSICALADDRESS | Email address the message was sent to. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| BATCHID |  | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| RECORDTIME | Timestamp of when the record was written to the table. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
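Because the table records both EXECSTARTTIME and EXECFINISHTIME, a simple aggregate query can summarize delivery performance per delivery type. The sketch below uses an in-memory SQLite database as a stand-in for the statistics repository (the real warehouse would be one of the platforms listed above); column names come from the table, while the sample rows are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_MESSAGE_STATS (
        DELIVERYTYPE   SMALLINT,  -- 1: Email, 2: File, 16: History List, ...
        DELIVERYSTATUS INTEGER,
        EXECSTARTTIME  TEXT,      -- DATETIME/TIMESTAMP in the real schema
        EXECFINISHTIME TEXT
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_MESSAGE_STATS VALUES (?, ?, ?, ?)",
    [
        (1, 0, "2024-09-01 08:00:00", "2024-09-01 08:00:30"),
        (1, 0, "2024-09-01 08:05:00", "2024-09-01 08:05:10"),
        (16, 0, "2024-09-01 09:00:00", "2024-09-01 09:00:05"),
    ],
)
# Average delivery time in seconds, per delivery type.
avg_delivery = conn.execute("""
    SELECT DELIVERYTYPE,
           AVG(strftime('%s', EXECFINISHTIME) - strftime('%s', EXECSTARTTIME))
    FROM STG_IS_MESSAGE_STATS
    GROUP BY DELIVERYTYPE
    ORDER BY DELIVERYTYPE
""").fetchall()
print(avg_delivery)  # [(1, 20.0), (16, 5.0)]
```

On the actual target databases the date arithmetic would use that platform's datetime functions rather than SQLite's strftime.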

STG_IS_PERF_MON_STATS
Records statistics related to notification, diagnostics, and performance
counters logged by Intelligence Server. This table is used when the
performance counters in the Diagnostics and Performance Monitoring Tool
are configured to record statistics information. The data load process moves
this table's information to the IS_PERF_MON_STATS table, which has the
same columns and datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID (I) | Day the performance counter was recorded. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the performance counter was recorded. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the performance counter was recorded. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SERVER_MACHINE | The server machine that logs the notification message. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| COUNTER_CAT | The category of the counter, such as Memory, MicroStrategy Server Jobs, or MicroStrategy Server Users. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| COUNTER_INSTANCE | For MicroStrategy use. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| COUNTER_NAME | Name of the performance counter. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| EVENT_TIME | Timestamp of when the event occurred in Intelligence Server. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| COUNTER_VALUE | Counter value. | FLOAT | FLOAT | DOUBLE | FLOAT | FLOAT | FLOAT |
| CTR_VAL_TYP | Counter value type. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
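A common use of this table is to find the peak value each performance counter reached over a monitored period. The following is a minimal sketch against an in-memory SQLite stand-in for the statistics database; the column names follow the table above, and the counter rows are hypothetical sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_PERF_MON_STATS (
        COUNTER_CAT   VARCHAR(255),
        COUNTER_NAME  VARCHAR(255),
        COUNTER_VALUE FLOAT,
        EVENT_TIME    TEXT
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_PERF_MON_STATS VALUES (?, ?, ?, ?)",
    [
        ("Memory", "Private Bytes", 1024.0, "2024-09-01 08:00:00"),
        ("Memory", "Private Bytes", 2048.0, "2024-09-01 09:00:00"),
        ("MicroStrategy Server Jobs", "Executing Jobs", 7.0, "2024-09-01 08:00:00"),
    ],
)
# Peak value recorded for each counter category/name pair.
peaks = conn.execute("""
    SELECT COUNTER_CAT, COUNTER_NAME, MAX(COUNTER_VALUE)
    FROM STG_IS_PERF_MON_STATS
    GROUP BY COUNTER_CAT, COUNTER_NAME
    ORDER BY COUNTER_CAT
""").fetchall()
print(peaks)
```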

STG_IS_PR_ANS_STATS
Records statistics related to prompts and prompt answers. This table is used
when the Prompts option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_PR_ANS_STATS table, which has the same columns and
datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the prompt was answered. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the prompt was answered. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the prompt was answered. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID | Job ID assigned by the server. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SESSIONID | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PR_ORDER_ID | Order in which prompts were answered. Prompt order is set in Developer's Prompt Ordering dialog box. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| ANS_SEQ_ID | Sequence ID. For MicroStrategy use. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| PR_LOC_TYPE | The COM object type of the object that the prompt resides in: 1: Filter; 2: Template; 12: Attribute. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| PR_LOC_ID | ID of the object that the prompt resides in. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PR_LOC_DESC | Object name of the object that the prompt resides in. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PR_GUID | GUID of the prompt. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PR_NAME | Name of the prompt. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PR_TITLE | Prompt title. This cannot be NULL. This is the text that is displayed in Developer's Prompt Ordering dialog box, under Title. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PR_ANS_TYPE | Type of prompt. For example, element, expression, object, or numeric. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| PR_ANSWERS | Prompt answers. | VARCHAR(4000) | VARCHAR2(4000 CHAR) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) |
| PR_ANS_GUID | For MicroStrategy use. | VARCHAR(4000) | VARCHAR2(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) |
| IS_REQUIRED | Y: If a prompt answer is required. N: If a prompt answer is not required. | CHAR | CHAR | CHAR | CHAR | CHAR | CHAR |
| SERVERID | GUID of the server definition. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | The Intelligence Server machine name and IP address. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| STARTTIME | Timestamp of the job start time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
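One practical question this table can answer is which prompts users hit most often. The sketch below is a hypothetical example against an in-memory SQLite stand-in; the PR_GUID and PR_TITLE columns are from the table above, and the sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_PR_ANS_STATS (
        JOBID    INTEGER,
        PR_GUID  CHAR(32),
        PR_TITLE VARCHAR(255)
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_PR_ANS_STATS VALUES (?, ?, ?)",
    [(101, "p1", "Region"), (102, "p1", "Region"), (103, "p2", "Year")],
)
# Prompts ranked by how many times they were answered.
prompt_counts = conn.execute("""
    SELECT PR_GUID, PR_TITLE, COUNT(*)
    FROM STG_IS_PR_ANS_STATS
    GROUP BY PR_GUID, PR_TITLE
    ORDER BY COUNT(*) DESC
""").fetchall()
print(prompt_counts)  # [('p1', 'Region', 2), ('p2', 'Year', 1)]
```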

STG_IS_PROJ_SESS_STATS
Records statistics related to project sessions. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_PROJ_SESS_STATS table, which has the same columns and
datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the project session was started. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the project session was started. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the project session was started. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SESSIONID | Session object GUID. This is the same session ID used in STG_IS_SESSION_STATS. If you close and reopen the project connection without logging out from Intelligence Server, the session ID is reused. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | Server definition GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERMACHINE | The Intelligence Server machine name and IP address. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| USERID | GUID of the user performing the action. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| PROJECTID | Project GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| CONNECTTIME | Timestamp of when the session was opened. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| DISCONNECTTIME (I) | Timestamp of when the session was closed. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| RECORDTIME (I) | Timestamp of when the record was written to the statistics database. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
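Pairing CONNECTTIME with DISCONNECTTIME gives session durations, which can then be averaged per project. This is a minimal sketch against an in-memory SQLite stand-in for the statistics database; the sample sessions are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_PROJ_SESS_STATS (
        SESSIONID      CHAR(32),
        PROJECTID      CHAR(32),
        CONNECTTIME    TEXT,   -- DATETIME/TIMESTAMP in the real schema
        DISCONNECTTIME TEXT
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_PROJ_SESS_STATS VALUES (?, ?, ?, ?)",
    [
        ("s1", "pA", "2024-09-01 08:00:00", "2024-09-01 08:10:00"),
        ("s2", "pA", "2024-09-01 09:00:00", "2024-09-01 09:20:00"),
        ("s3", "pB", "2024-09-01 10:00:00", "2024-09-01 10:05:00"),
    ],
)
# Average session duration in seconds, per project.
avg_session = conn.execute("""
    SELECT PROJECTID,
           AVG(strftime('%s', DISCONNECTTIME) - strftime('%s', CONNECTTIME))
    FROM STG_IS_PROJ_SESS_STATS
    GROUP BY PROJECTID
    ORDER BY PROJECTID
""").fetchall()
print(avg_session)  # [('pA', 900.0), ('pB', 300.0)]
```

Sessions still open have no DISCONNECTTIME yet, so a production query would filter those rows out first.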

STG_IS_REP_COL_STATS
Tracks the column-table combinations used in the SQL during report
executions. This table is used when the Report job tables/columns
accessed option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_REP_COL_STATS table, which has the same columns and
datatypes.


| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the report was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the report was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the report was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID | Report job ID. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SESSIONID | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the server definition. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| TABLEID | GUID of the database tables used. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| COLUMNID | GUID of the columns used. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| COLUMNNAME | Description of the column used. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| SQLCLAUSETYPEID | The SQL clause in which the column is being used. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| COUNTER | The number of times a specific column/table/clause type combination occurs within a report execution. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STARTTIME | Timestamp of the job start time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
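Summing COUNTER per column and clause type reveals the warehouse columns a project's SQL touches most, which is useful for indexing and aggregate-table decisions. The following is a hypothetical sketch against an in-memory SQLite stand-in for the statistics database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_REP_COL_STATS (
        COLUMNNAME      VARCHAR(255),
        SQLCLAUSETYPEID TINYINT,
        COUNTER         INTEGER
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_REP_COL_STATS VALUES (?, ?, ?)",
    [("STORE_ID", 1, 3), ("STORE_ID", 1, 2), ("SALES", 2, 4)],
)
# Column/clause combinations ranked by total usage across report executions.
hot_columns = conn.execute("""
    SELECT COLUMNNAME, SQLCLAUSETYPEID, SUM(COUNTER)
    FROM STG_IS_REP_COL_STATS
    GROUP BY COLUMNNAME, SQLCLAUSETYPEID
    ORDER BY SUM(COUNTER) DESC
""").fetchall()
print(hot_columns)  # [('STORE_ID', 1, 5), ('SALES', 2, 4)]
```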

STG_IS_REP_SEC_STATS
Tracks executions that used security filters. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_REP_SEC_STATS table, which has the same columns and
datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the job was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the job was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the job was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID (I) | Job ID. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SESSIONID (I) | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SECURITYFILTERSEQ | Sequence number of the security filter, when multiple security filters are used. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SERVERID | Server definition GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SECURITYFILTERID (I) | Security filter GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| STARTTIME | Timestamp of when the job started. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
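Counting distinct jobs per SECURITYFILTERID shows which security filters are actually exercised by report executions. This is a hypothetical sketch against an in-memory SQLite stand-in; the sample job IDs and filter GUIDs are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_REP_SEC_STATS (
        SECURITYFILTERID CHAR(32),
        JOBID            INTEGER
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_REP_SEC_STATS VALUES (?, ?)",
    [("f1", 101), ("f1", 102), ("f2", 101), ("f1", 101)],
)
# Distinct jobs that applied each security filter.
filter_jobs = conn.execute("""
    SELECT SECURITYFILTERID, COUNT(DISTINCT JOBID)
    FROM STG_IS_REP_SEC_STATS
    GROUP BY SECURITYFILTERID
    ORDER BY SECURITYFILTERID
""").fetchall()
print(filter_jobs)  # [('f1', 2), ('f2', 1)]
```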

STG_IS_REP_SQL_STATS
Enables access to the SQL for a report execution. This table is used when
the Report Job SQL option is selected in the Statistics category of the
Project Configuration Editor. The data load process moves this table's
information to the IS_REP_SQL_STATS table, which has the same columns
and datatypes.

| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the SQL pass was started. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the SQL pass was started. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the SQL pass was started. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID | Job ID. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SQLPASSSEQUENCE | Sequence number of the SQL pass. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| SESSIONID | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the server definition. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| STARTTIME | Start timestamp of the SQL pass. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| FINISHTIME | Finish timestamp of the SQL pass. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| EXECTIME | Execution time, in milliseconds, for the SQL pass. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| SQLSTATEMENT | SQL used in the pass. | VARCHAR(4000) | VARCHAR2(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) |
| SQLPASSTYPE | Type of SQL pass: 0: SQL unknown; 1: SQL select; 2: SQL insert; 3: SQL create; 4: Analytical; 5: Select into; 6: Insert into values; 7: Homogen. partition query; 8: Heterogen. partition query; 9: Metadata partition pre-query; 10: Metadata partition list pre-query; 11: Empty; 12: Create index; 13: Metric qual. break by; 14: Metric qual. threshold; 15: Metric qual.; 16: User-defined; 17: Homogen. partition loop; 18: Homogen. partition one tbl; 19: Heterogen. partition loop; 20: Heterogen. partition one tbl; 21: Insert fixed values into; 22: Datamart from Analytical Engine; 23: Clean up temp resources; 24: Return elm number; 25: Incremental elem browsing; 26: MDX query; 27: SAP BI; 28: Intelligent Cube instruc; 29: Heterogen. data access; 30: Excel file data import; 31: Text file data import; 32: Database table import; 33: SQL data import. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| TOTALTABLEACCESSED | Number of tables hit by the SQL pass. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| DBERRORMESSAGE | Error message returned from database. | VARCHAR(4000) | VARCHAR2(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) | VARCHAR(4000) |
| RECORDTIME | Timestamp of when the record was written to the database, according to database system time. | DATETIME | TIMESTAMP | TIMESTAMP | TIMESTAMP | DATETIME | DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DBINSTANCEID | GUID of the physical database instance. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DBCONNECTIONID | GUID of the database connection. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| DBLOGINID | GUID of the database login. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SQLSTATEMENTSEQ | Sequence number of the SQL statement. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
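Because EXECTIME records per-pass execution time in milliseconds, ranking passes by it is a quick way to find a project's slowest SQL. The sketch below uses an in-memory SQLite stand-in for the statistics database with hypothetical sample rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE STG_IS_REP_SQL_STATS (
        JOBID           INTEGER,
        SQLPASSSEQUENCE SMALLINT,
        EXECTIME        INTEGER   -- milliseconds
    )
""")
conn.executemany(
    "INSERT INTO STG_IS_REP_SQL_STATS VALUES (?, ?, ?)",
    [(1, 1, 1200), (1, 2, 300), (2, 1, 4500)],
)
# The two slowest SQL passes, by execution time.
slowest = conn.execute("""
    SELECT JOBID, SQLPASSSEQUENCE, EXECTIME
    FROM STG_IS_REP_SQL_STATS
    ORDER BY EXECTIME DESC
    LIMIT 2
""").fetchall()
print(slowest)  # [(2, 1, 4500), (1, 1, 1200)]
```

In practice the query would also select SQLSTATEMENT so the offending SQL text is visible alongside the timing.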

STG_IS_REP_STEP_STATS
Tracks each step in the report execution process. This table is used when
the Report Job Steps option is selected in the Statistics category of the
Project Configuration Editor. The data load process moves this table's
information to the IS_REP_STEP_STATS table, which has the same
columns and datatypes.


| Column | Description | SQL Server Data-type | Oracle Data-type | DB2 Data-type | Teradata Data-type | Sybase Data-type | MySQL Data-type |
|---|---|---|---|---|---|---|---|
| DAY_ID | Day the report was requested for execution. | DATE | TIMESTAMP | DATE | DATE | DATE | DATE |
| HOUR_ID | Hour the report was requested for execution. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| MINUTE_ID | Minute the report was requested for execution. | SMALLINT | NUMBER(5) | SMALLINT | SMALLINT | SMALLINT | SMALLINT |
| JOBID | Job ID. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER | INTEGER |
| STEPSEQUENCE | Sequence number for a job's steps. | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
| SESSIONID | GUID of the user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| SERVERID | GUID of the Intelligence Server processing the request. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) |
| STEPTYPE | Type of step. For a description, see Report and Document Steps, page 2185. 1: Metadata object request step; 2: Close job; 3: SQL generation; 4: SQL execution; 5: Analytical Engine server task; 6: Resolution server task; 7: Report net server task; 8: Element request step; 9: Get report instance; 10: Error message send task; 11: Output message send task; 12: Find report cache task; 13: Document execution step; 14: Document send step; 15: Update report cache task; 16: Request execute step; 17: Data mart execute step; 18: Document data preparation; 19: Document formatting; 20: Document manipulation; 21: Apply view context; 22: Export engine; 23: Find Intelligent Cube task; 24: Update Intelligent Cube task; 25: Post-processi | TINYINT | NUMBER(3) | SMALLINT | BYTEINT | TINYINT | TINYINT |
ng task

26:
Delivery
task

27:
Persist
result
task

28:
Docume
nt
dataset
executio
n task

Timesta
mp of
the DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME
step's IME AMP AMP AMP IME IME
start
time.

Timesta
mp of the
DATETI TIMEST TIMEST TIMEST DATETI DATETI
FINISHTIME step's
ME AMP AMP AMP ME ME
finish
time.

QUEUETIME Time INTEG NUMBE INTEGE INTEGE INTEG INTEG

Copyright © 2024 All Rights Reserved 2183


Syst em Ad m in ist r at io n Gu id e

SQL
Oracle DB2 Teradat Sybas MySQL
Descrip Server
Column Data- Data- a Data- e Data- Data-
tion Data-
type type type type type
type

duration
between
last step
finish
and the ER R(10) R R ER ER
next step
start, in
milliseco
nds.

CPU time
used
during
INTEG NUMBE INTEGE INTEGE INTEG INTEG
CPUTIME this step,
ER R(10) R R ER ER
in
milliseco
nds.

FINISHT
IME -
STEPDURA STARTT INTEG NUMBE INTEGE INTEGE INTEG INTEG
TION IME, in ER R(10) R R ER ER
milliseco
nds

Timesta
mp of
when the
record
RECORDTI DATETI TIMEST TIMEST TIMEST DATETI DATETI
was
ME ME AMP AMP AMP ME ME
logged in
the
databas
e,

Copyright © 2024 All Rights Reserved 2184


Syst em Ad m in ist r at io n Gu id e

SQL
Oracle DB2 Teradat Sybas MySQL
Descrip Server
Column Data- Data- a Data- e Data- Data-
tion Data-
type type type type type
type

accordin
g to
database
system
time

(Server
machine
VARCH VARCH VARCH VARCH
SERVERMA name:po VARCH VARCH
AR AR2 AR AR
CHINE rt AR(255) AR(255)
(255) (255) (255) (255)
number)
pair.

GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project.

GUID of
the
REPOSITO metadat CHAR CHAR CHAR CHAR CHAR CHAR
RYID a (32) (32) (32) (32) (32) (32)
repositor
y.
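The step-level timing columns lend themselves to simple roll-ups. The following sketch approximates the table in SQLite (a simplified stand-in, not the vendor DDL; the job values are invented) and averages STEPDURATION and CPUTIME per STEPTYPE:

```python
import sqlite3

# Minimal sketch of IS_REP_STEP_STATS with only the timing columns we need.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IS_REP_STEP_STATS (
        JOBID        INTEGER,
        STEPSEQUENCE INTEGER,
        STEPTYPE     INTEGER,
        QUEUETIME    INTEGER,  -- milliseconds
        CPUTIME      INTEGER,  -- milliseconds
        STEPDURATION INTEGER   -- FINISHTIME - STARTTIME, in milliseconds
    )""")
rows = [
    (101, 1, 3, 5, 12, 40),    # job 101, SQL generation step (invented values)
    (101, 2, 4, 2, 30, 900),   # job 101, SQL execution step
    (102, 1, 3, 1, 10, 35),
    (102, 2, 4, 4, 28, 1200),
]
conn.executemany("INSERT INTO IS_REP_STEP_STATS VALUES (?, ?, ?, ?, ?, ?)", rows)

# Average elapsed time and CPU time per step type across all jobs.
avg_by_step = {
    steptype: (elapsed, cpu)
    for steptype, elapsed, cpu in conn.execute("""
        SELECT STEPTYPE, AVG(STEPDURATION), AVG(CPUTIME)
          FROM IS_REP_STEP_STATS
         GROUP BY STEPTYPE""")
}
print(avg_by_step)
```

The same GROUP BY shape works against the real warehouse table, with the engine-specific datatypes listed above.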

Report and Document Steps


The IS_REP_STEP_TYPE table lists the Intelligence Server tasks involved in executing a report or a document. These are the possible values in the STEPTYPE column of the IS_REP_STEP_STATS and IS_DOC_STEP_STATS tables.

| Task name | Task description |
|---|---|
| 1: MD Object Request | Requesting an object definition from the project metadata. |
| 2: Close Job | Closing a job and removing it from the list of pending jobs. |
| 3: SQL Generation | Generating SQL that is required to retrieve data, based on the schema. |
| 4: SQL Execution | Executing SQL that was generated for the report. |
| 5: Analytical Engine | Applying analytical processing to the data retrieved from the data source. |
| 6: Resolution Server | Loading the definition of an object. |
| 7: Report Net Server | Transmitting the results of a report. |
| 8: Element Request | Browsing attribute elements. |
| 9: Get Report Instance | Retrieving a report instance from the metadata. |
| 10: Error Message Send | Sending an error message. |
| 11: Output Message Send | Sending a message other than an error message. |
| 12: Find Report Cache | Searching or waiting for a report cache. |
| 13: Document Execution | Executing a document. |
| 14: Document Send | Transmitting a document. |
| 15: Update Report Cache | Updating report caches. |
| 16: Request Execute | Requesting the execution of a report. |
| 17: Data Mart Execute | Executing a data mart report. |
| 18: Document Data Preparation | Constructing a document structure using data from the document's datasets. |
| 19: Document Formatting | Exporting a document to the requested format. |
| 20: Document Manipulation | Applying a user's changes to a document. |
| 21: Apply View Context | Reserved for MicroStrategy use. |
| 22: Export Engine | Exporting a document or report to PDF, plain text, Excel spreadsheet, or XML. |
| 23: Find Intelligent Cube | Locating the cube instance from the Intelligent Cube Manager, when a subset report, or a standard report that uses dynamic caching, is executed. |
| 24: Update Intelligent Cube | Updating the cube instance from the Intelligent Cube Manager, when republishing or refreshing a cube. |
| 25: Post-processing | Reserved for MicroStrategy use. |
| 26: Delivery | Used by Distribution Services, for email, file, or printer deliveries of subscribed-to reports/documents. |
| 27: Persist Result | Persists execution results, including History List and other condition checks. All subscriptions hit this step, although only subscriptions that persist results (such as History List) perform actions in this step. |
| 28: Document Dataset Execution | Waiting for child dataset reports in a document to execute. |
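When analyzing IS_REP_STEP_STATS or IS_DOC_STEP_STATS data outside MicroStrategy, the numeric STEPTYPE codes can be decoded with a simple lookup. A sketch (the mapping mirrors the table above; the helper function is illustrative, not part of any MicroStrategy API):

```python
# Task names for the STEPTYPE codes, as listed in IS_REP_STEP_TYPE.
REP_STEP_TYPES = {
    1: "MD Object Request",
    2: "Close Job",
    3: "SQL Generation",
    4: "SQL Execution",
    5: "Analytical Engine",
    6: "Resolution Server",
    7: "Report Net Server",
    8: "Element Request",
    9: "Get Report Instance",
    10: "Error Message Send",
    11: "Output Message Send",
    12: "Find Report Cache",
    13: "Document Execution",
    14: "Document Send",
    15: "Update Report Cache",
    16: "Request Execute",
    17: "Data Mart Execute",
    18: "Document Data Preparation",
    19: "Document Formatting",
    20: "Document Manipulation",
    21: "Apply View Context",
    22: "Export Engine",
    23: "Find Intelligent Cube",
    24: "Update Intelligent Cube",
    25: "Post-processing",
    26: "Delivery",
    27: "Persist Result",
    28: "Document Dataset Execution",
}

def step_name(steptype: int) -> str:
    """Return the task name for a STEPTYPE code, or a placeholder if unknown."""
    return REP_STEP_TYPES.get(steptype, f"Unknown step type {steptype}")

print(step_name(4))  # SQL Execution
```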

STG_IS_REPORT_STATS
Tracks job-level statistics information about every report that Intelligence
Server executes to completion. This table is used when the Basic Statistics
option is selected in the Statistics category of the Project Configuration
Editor. The data load process moves this table's information to the IS_
REPORT_STATS table, which has the same columns and datatypes.

| Column | Description | Datatype (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL) |
|---|---|---|
| DAY_ID | Day the report was requested for execution. | DATE / DATE / TIMESTAMP / DATE / DATE / DATE |
| HOUR_ID | Hour the report was requested for execution. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| MINUTE_ID | Minute the report was requested for execution. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| JOBID (I) | Job ID. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| SESSIONID (I) | GUID of the user session. | CHAR(32) (all databases) |
| SERVERID | GUID of the Intelligence Server's server definition that made the request. | CHAR(32) (all databases) |
| SERVERMACHINE | Server machine name, or IP address if the machine name is not available. | VARCHAR(255) / VARCHAR2(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) |
| PROJECTID | GUID of the project. | CHAR(32) (all databases) |
| USERID | GUID of the user. | CHAR(32) (all databases) |
| REPORTID | GUID of the report. For an ad hoc report, the Template ID is created on the fly and there is no corresponding object with this GUID in the object lookup table. | CHAR(32) (all databases) |
| FILTERID | GUID of the filter. | CHAR(32) (all databases) |
| EMBEDDEDFILTER | 1 if an embedded filter was used in the report, otherwise 0. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| TEMPLATEID | GUID of the template. For an ad hoc report, the Template ID is created on the fly, and there is no corresponding object with this GUID in the object lookup table. | CHAR(32) (all databases) |
| EMBEDDEDTEMPLATE | 1 if an embedded template was used in the report, otherwise 0. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| PARENTJOBID (I) | Job ID of the parent document job, if the current job is a document job's child. -1 if the current job is not a document job's child. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| DBINSTANCEID | GUID for the physical database instance. | CHAR(32) (all databases) |
| DBUSERID | Database user ID for the physical database instance. | CHAR(32) (all databases) |
| PARENTINDICATOR | 1 if this job is a document job's child, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| REQUESTRECTIME | Timestamp when the request is received. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| REQUESTQUEUETIME | Total queue time of all steps in this request. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| EXECSTARTTIME | Time passed before the first step started. An offset of the RequestRecTime. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| EXECFINISHTIME | Time passed when the last step is finished. An offset of the RequestRecTime. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| SQLPASSES | Number of SQL passes. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / INTEGER |
| JOBERRORCODE | Job error code. If no error, the value is 0. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| CANCELINDICATOR | 1 if the job was canceled, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| ADHOCINDICATOR | 1 if the report was ad hoc, otherwise 0. This includes any executed job that is not saved in the project as a report (for example: drilling results, attribute element prompts, creating and running a report before saving it). | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| PROMPTINDICATOR | Number of prompts in the report. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| DATAMARTINDICATOR | 1 if the report created a data mart, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| ELEMENTLOADINDIC | 1 if the report was a result of an element browse, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| DRILLINDICATOR | 1 if the report was the result of a drill, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| SCHEDULEINDICATOR | 1 if the report was run from a schedule, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| CACHECREATEINDIC | 1 if the report created a cache, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT(1) |
| PRIORITYNUMBER | Query execution step priority. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| JOBCOST | User-supplied report cost. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| FINALRESULTSIZE | Number of rows in the report. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| RECORDTIME (I) | Timestamp of when the record was logged in the database, according to the database system time. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| ERRORMESSAGE | The error message displayed to the user when an error is encountered. | VARCHAR(4000) / VARCHAR2(4000) / VARCHAR(4000) / VARCHAR(4000) / VARCHAR(4000) / VARCHAR(4000) |
| DRILLTEMPLATEUNIT | For MicroStrategy use. GUID of the object that was drilled from. | CHAR(32) (all databases) |
| NEWOBJECT | For MicroStrategy use. GUID of the object that was drilled to. | CHAR(32) (all databases) |
| DRILLTYPE | For MicroStrategy use. Enumeration of the type of drilling action performed. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| TOTALTABLEACCESS | Total number of unique tables accessed by the report during execution. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / INTEGER |
| SQLLENGTH | Length in characters of the SQL statement. For multiple passes, this value is the sum of the SQL statement lengths of each pass. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| EXECDURATION | Duration of the report execution, in milliseconds. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| CPUTIME | CPU time used for report execution, in milliseconds. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| STEPCOUNT | Total number of steps involved in execution (not just unique steps). | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| EXECACTIONS | Intelligence Server-related actions that need to take place during report execution. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| EXECFLAGS | Intelligence Server-related processes needed to refine the report execution. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| DBERRORINDIC | 1 if a database error occurred during execution, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT |
| PROMPTANSTIME | Total time, in milliseconds, the user spent answering prompts on the report. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| CUBEINSTANCEID | GUID of the Intelligent Cube used in a Cube Publish or Cube Hit job. | CHAR(32) (all databases) |
| CUBESIZE | Size, in KB, of the Intelligent Cube in memory for a Cube Publish job. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| SQLEXECINDIC | 1 if any SQL was executed against the database, otherwise 0. | BIT / NUMBER(1) / SMALLINT / BYTEINT / BIT / TINYINT |
| EXPORTINDIC | 1 if the report was exported, otherwise 0. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) (all databases) |
| MESSAGEID | ID of the message. | CHAR(32) (all databases) |
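Because EXECSTARTTIME and EXECFINISHTIME are stored as offsets from REQUESTRECTIME, an execution timeline can be reconstructed arithmetically. A sketch, assuming the offsets are in milliseconds and using invented row values:

```python
from datetime import datetime, timedelta

# Hypothetical values from one IS_REPORT_STATS row.
request_rec_time = datetime(2024, 9, 1, 10, 0, 0)
exec_start_ms = 250     # EXECSTARTTIME: time passed before the first step started
exec_finish_ms = 4250   # EXECFINISHTIME: time passed when the last step finished

# Convert the offsets back into absolute timestamps.
exec_start = request_rec_time + timedelta(milliseconds=exec_start_ms)
exec_finish = request_rec_time + timedelta(milliseconds=exec_finish_ms)

# Wall-clock window between the first step starting and the last step finishing.
elapsed_ms = exec_finish_ms - exec_start_ms

print(exec_start, exec_finish, elapsed_ms)
```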

STG_IS_SCHEDULE_STATS
Tracks which reports have been run as the result of a subscription. This
table is used when the Subscriptions option is selected in the Statistics
category of the Project Configuration Editor. The data load process moves
this table's information to the IS_SCHEDULE_STATS table, which has the
same columns and datatypes.

| Column | Description | Datatype (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL) |
|---|---|---|
| DAY_ID (I) | Day the job was requested for execution. | DATE / DATE / TIMESTAMP / DATE / DATE / DATE |
| HOUR_ID (I) | Hour the job was requested for execution. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| MINUTE_ID (I) | Minute the job was requested for execution. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| SCHEDULEID (I) | Job ID. | INTEGER / NUMBER(10) / INTEGER / INTEGER / INTEGER / INTEGER |
| SESSIONID (I) | GUID of the user session. | CHAR(32) (all databases) |
| SERVERID | GUID for the server definition. | CHAR(32) (all databases) |
| TRIGGERID (I) | GUID of the object that triggered the subscription. | CHAR(32) (all databases) |
| SCHEDULETYPE (I) | Type of schedule: 0 if it is a report, 1 if it is a document. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| HITCACHE | 0 if the job does not hit the cache, 1 if it does. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| STARTTIME | Timestamp of the schedule start time. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| RECORDTIME (I) | Timestamp of when the record was logged in the database, according to database system time. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) / VARCHAR2(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) |
| PROJECTID (I) | GUID of the project. | CHAR(32) (all databases) |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) (all databases) |
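Because HITCACHE is a 0/1 flag, the cache hit rate for scheduled jobs is simply its average. A sketch over invented rows (in practice these would come from the IS_SCHEDULE_STATS table):

```python
# Hypothetical rows from IS_SCHEDULE_STATS; HITCACHE is 1 when the scheduled
# job was served from cache, 0 when it was not.
rows = [
    {"SCHEDULEID": 1, "SCHEDULETYPE": 0, "HITCACHE": 1},
    {"SCHEDULEID": 2, "SCHEDULETYPE": 0, "HITCACHE": 0},
    {"SCHEDULEID": 3, "SCHEDULETYPE": 1, "HITCACHE": 1},
    {"SCHEDULEID": 4, "SCHEDULETYPE": 0, "HITCACHE": 1},
]

# Average of a 0/1 flag is the hit rate.
hit_rate = sum(r["HITCACHE"] for r in rows) / len(rows)
print(f"cache hit rate: {hit_rate:.0%}")
```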

STG_IS_SESSION_STATS
Logs every Intelligence Server user session. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_SESSION_STATS table, which has the same columns and
datatypes.

The STG_IS_SESSION_STATS table does not contain project-level information and is therefore not affected by statistics purges at the project level. For details about statistics purges, see the System Administration Help.

| Column | Description | Datatype (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL) |
|---|---|---|
| DAY_ID | Day the session was started. | DATE / DATE / TIMESTAMP / DATE / DATE / DATE |
| HOUR_ID | Hour the session was started. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| MINUTE_ID | Minute the session was started. | SMALLINT / NUMBER(5) / SMALLINT / SMALLINT / SMALLINT / SMALLINT |
| SESSIONID (I) | GUID of the user session. | CHAR(32) (all databases) |
| SERVERID | Server definition GUID. | CHAR(32) (all databases) |
| SERVERMACHINE | (Server machine name:port number) pair. | VARCHAR(255) / VARCHAR2(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) |
| USERID | GUID of the user. | CHAR(32) (all databases) |
| CLIENTMACHINE | Client machine name, or IP address if the machine name is not available. | VARCHAR(255) / VARCHAR2(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) |
| EVENTSOURCE | Source from which the session originated: 0: Unknown; 1: Developer; 2: Intelligence Server Administrator; 3: Web Administrator; 4: Intelligence Server; 5: Project Upgrade; 6: Web; 7: Scheduler; 8: Custom Application; 9: Narrowcast Server; 10: Object Manager; 11: ODBO Provider; 12: ODBO Cube Designer; 13: Command Manager; 14: Enterprise Manager; 15: Command Line Interface; 16: Project Builder; 17: Configuration Wizard; 18: MD Scan; 19: Cache Utility; 20: Fire Event; 21: MicroStrategy Java admin clients; 22: MicroStrategy Web Services; 23: MicroStrategy Office; 24: MicroStrategy Tools; 25: Portal Server; 26: Integrity Manager; 27: Metadata Update; 28: Reserved for MicroStrategy use; 29: Scheduler for Mobile; 30: Repository Translation Wizard; 31: Health Center; 32: Cube Advisor; 33: Operations Manager; 34: Desktop; 35: Library; 36: Library iOS; 37: Workstation; 39: Library Android. | TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT |
| RECORDTIME (I) | Timestamp of when the record was logged in the database, according to database system time. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| WEBMACHINE | Web server machine from which a web session originates. | VARCHAR(255) / VARCHAR2(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) / VARCHAR(255) |
| CONNECTTIME (I) | Timestamp of when the session is opened. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| DISCONNECTTIME | Timestamp of when the session is closed. | DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME |
| REPOSITORYID | GUID of the metadata repository. | CHAR(32) (all databases) |
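A session's length is simply the difference between CONNECTTIME and DISCONNECTTIME. A small sketch with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical values from one IS_SESSION_STATS row.
connect_time = datetime(2024, 9, 1, 9, 15, 0)      # CONNECTTIME
disconnect_time = datetime(2024, 9, 1, 9, 47, 30)  # DISCONNECTTIME

# Session duration as a timedelta; total_seconds() gives a scalar for reporting.
duration = disconnect_time - connect_time
print(duration.total_seconds())
```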

STG_MSI_STATS_PROP
For MicroStrategy use. Provides information about the statistics database
properties. Intelligence Server uses this table to initialize statistics logging.

| Column Name | Column Description |
|---|---|
| PROP_NAME | Property name, such as statistics database version, upgrade script, and so on. |
| PROP_VAL | Property value, such as statistics database version number, time that an upgrade script was run, and so on. |

ENTERPRISE MANAGER DATA DICTIONARY

Detailed information about Enterprise Manager objects is available from within MicroStrategy:

• To view details about Enterprise Manager objects, walk through the Project Documentation wizard in the Enterprise Manager project. To access this, from MicroStrategy Developer, open the Enterprise Manager project, click the Tools menu and select Project Documentation.

• To view details about Enterprise Manager schema objects, such as facts, attributes, and hierarchies, open the Enterprise Manager project in MicroStrategy Architect. To access this, from MicroStrategy Developer, open the Enterprise Manager project, click the Schema menu and select Architect.

• For information about configuring Enterprise Manager, using it to help tune the MicroStrategy system, and setting up project documentation so it is available to networked users, see the Enterprise Manager Help.

Enterprise Manager Data Warehouse Tables


The following is a list of tables in the Enterprise Manager data warehouse.

Temporary tables are created and used by the data loading process when data is migrated from the statistics tables to the Enterprise Manager warehouse. These temporary tables are the following:

• IS_REP_SQL_TMP
• IS_REP_STEP_TMP
• IS_SESSION_TMP1
• IS_PROJECT_FACT_1_TMP
• EM_IS_LAST_UPD_1
• EM_IS_LAST_UPD_2


Fact Tables

• CT_EXEC_FACT
• CT_MANIP_FACT
• IS_CONFIG_PARAM_FACT
• IS_CUBE_ACTION_FACT
• IS_DOC_FACT
• IS_DOC_STEP_FACT
• IS_INBOX_ACT_FACT
• IS_MESSAGE_FACT
• IS_PERF_MON_FACT
• IS_PR_ANS_FACT
• IS_PROJECT_FACT_1
• IS_REP_COL_FACT
• IS_REP_FACT
• IS_REP_SEC_FACT
• IS_REP_SQL_FACT
• IS_REP_STEP_FACT
• IS_SESSION_FACT
• IS_SESSION_MONITOR

CT_EXEC_FACT
Contains information about MicroStrategy Mobile devices and
report/document executions and manipulations. Created as a view based on
columns in the source tables listed below.

Copyright © 2024 All Rights Reserved 2215


Syst em Ad m in ist r at io n Gu id e

Source Tables
l CT_DEVICE_STATS: Statistics table containing information about the
mobile client and the mobile device

l CT_EXEC_STATS: Statistics table containing information about mobile


report and document execution

l IS_SERVER: Lookup table that provides descriptive information about the


server definitions being tracked

l IS_REP: Lookup table that provides descriptive information about the


reports being tracked

l IS_DOC: Lookup table that provides descriptive information about the


documents being tracked

l IS_PROJ: Lookup table that provides descriptive information about the


projects being tracked

l EM_MD: Lookup table for metadata

l EM_USER: Lookup table for users

List of Table Colum ns

Column Name Column Description

CT_DEVICE_INST_
Unique installation ID of the mobile app.
ID

An integer value that increments when the device information, such


CT_STATE_
as DEVICETYPE, OS, OSVER, or APPVER (in CT_DEVICE_
COUNTER
STATS), changes.

CT_STATE_
Date and time when STATECOUNTER is incremented.
CHANGE_TS

Type of device the app is installed on, such as iPad 2, Droid, or


CT_DEVICE_TYPE
iPhone 6.

Copyright © 2024 All Rights Reserved 2216


Syst em Ad m in ist r at io n Gu id e

Column Name Column Description

CT_OS Operating system the app is installed on, such as iOS or Android.

CT_OS_VER Version of the operating system, such as 5.2.1.

CT_APP_VER Version of the MicroStrategy app.

EM_USER_ID ID of the user executing the document.

GUID of the session that executed the request. This should be the
IS_SESSION_ID
same as the SESSIONID for this request in IS_REP_FACT.

GUID of the MicroStrategy Mobile client session ID. A new client


CT_SESSION_ID
session ID is generated every time a user logs in to the mobile app.

ID corresponding to the JOBID (in IS_REP_FACT) of the message


IS_MESSAGE_ID
generated by the execution.

Similar to JOBID but generated by the client and cannot be NULL.


CT_ACTION_ID
JOBID may be NULL if the user is offline during execution.

IS_SERVER_ID GUID of the Intelligence Server processing the request.

EM_APP_SRV_ Name and port number of the Intelligence Server machine where
MACHINE the mobile document execution is taking place.

IS_REP_ID GUID of the report used in the request.

IS_DOC_ID GUID of the document used in the request.

IS_PROJ_ID GUID of the project.

IS_REPOSITORY_ID GUID of the metadata repository.

EM_MOB_SRV_ Name and port number of the Mobile Server machine where the
MACHINE mobile document execution is taking place.

CT_REQ_TS Time when the user submits a request to the mobile app.

Time when the mobile app begins receiving data from


CT_REC_TS
MicroStrategy Mobile Server.

CT_REQ_REC_TM_
Difference in milliseconds between CT_REQ_TS and CT_REC_TS.
MS

Copyright © 2024 All Rights Reserved 2217


Syst em Ad m in ist r at io n Gu id e

Column Name Column Description

CT_RENDER_ST_
Time when the mobile app begins rendering.
TS

CT_RENDER_FN_
Time when the mobile app finishes rendering.
TS

CT_RENDER_TM_ Difference in milliseconds between CT_RENDER_ST_TS and CT_


MS RENDER_FN_TS

Type of report or document execution:

1: User execution

2: Pre-cached execution

3: System recovery execution


CT_EXEC_TYPE_
4: Subscription cache pre-loading execution
IND_ID
5: Transaction subsequent action execution

6: Report queue execution

7: Report queue recall execution

8: Back button execution

Whether a cache was hit during the execution, and if so, what type
of cache hit occurred:

0: No cache hit
CT_CACHE_HIT_
IND_ID 1: Intelligence Server cache hit

2: Device cache hit

6: Application memory cache hit

Whether the report or document is prompted:


CT_PROMPT_IND_
0: Not prompted
ID
1: Prompted

Whether the job is for a report or a document:


CT_DATATYPE_ID
3: Report

Copyright © 2024 All Rights Reserved 2218


Syst em Ad m in ist r at io n Gu id e

Column Name Column Description

55: Document

The type of network used:

3G
CT_NETWORK_
WiFi
TYPE
LTE

4G

CT_BANDWIDTH_KBPS Estimated network bandwidth, in kbps.

CT_VIEW_FN_TS Time at which the user either clicks on another report or document, or backgrounds the mobile app.

CT_VIEW_TM_MS Difference in milliseconds between CT_RENDER_FN_TS and CT_VIEW_FN_TS.

CT_NU_OF_MANIP An integer value that increases with every manipulation the user makes after the report or document is rendered, excluding those that require fetching more data from Intelligence Server or that result in another report or document execution.

CT_AVG_MANIP_RENDER_TM_MS Average rendering time for each manipulation.

CT_LATITUDE Latitude of the user.

CT_LONGITUDE Longitude of the user.

DAY_ID Day the action was started.

CT_TIMESTAMP Time the manipulation was started.

HOUR_ID Hour the action was started.

MINUTE_ID Minute the action was started.

EM_RECORD_TS Date and time when this information was written to the statistics database.


CT_REQ_RECEIVED_FLAG Whether the manipulation request was received.

CT_REQ_RENDERED_FLAG Whether the manipulation was completed.

CT_REQ_HAS_DEVICE_FLAG Whether the manipulation request was made by a mobile app.

CT_JOB_ID The ID of the job requesting the manipulation. A combination of the IS_SESSION_ID, CT_SESSION_ID, and CT_ACTION_ID.

IS_DOC_NAME Name of the document used in the request, or an indication that it is a deleted document.

IS_PROJ_NAME Name of the project used for the request, or an indication that it is a deleted project.

EM_USER_NAME Name of the user who made the request, or an indication that it is a deleted user.

EM_LDAPLINK Name of the user in the LDAP system, or an indication that it is a deleted user.

EM_NTLINK Name of the user in Windows, or an indication that it is a deleted user.

CT_MANIP_FACT
Contains information about MicroStrategy Mobile devices and
report/document manipulations. Created as a view based on columns in the
source tables listed below.

Source Tables
l CT_MANIP_STATS: Statistics table containing information about the
report or document manipulations

l EM_MD: Lookup table for metadata

l IS_PROJ: Lookup table that provides descriptive information about the projects being tracked


l IS_DOC: Lookup table that provides descriptive information about the documents being tracked

l IS_REP: Lookup table that provides descriptive information about the reports being tracked

l EM_USER: Lookup table for users

List of Table Columns

Column Name Column Description

CT_JOB_ID The ID of the job requesting the manipulation. A combination of the IS_SESSION_ID, CT_SESSION_ID, and CT_ACTION_ID.

CT_DEVICE_INST_ID Unique installation ID of the mobile app.

CT_STATE_COUNTER An integer value that increments when the device information, such as DEVICETYPE, OS, OSVER, or APPVER (in CT_MANIP_STATS), changes.

EM_USER_ID ID of the user making the request.

IS_SESSION_ID GUID of the session that executed the request.

CT_SESSION_ID GUID of the MicroStrategy Mobile client session. A new client session ID is generated every time a user logs in to the mobile app.

CT_ACTION_ID Similar to JOBID but generated by the client and cannot be NULL. JOBID may be NULL if the user is offline during execution.

EM_APP_SRV_MACHINE Name and port number of the Intelligence Server machine where the manipulation is taking place.

IS_REP_ID GUID of the report used in the request.

IS_DOC_ID Integer ID of the document that was executed.

IS_PROJ_ID Integer ID of the project.

IS_MANIP_SEQ_ID The order in which the manipulations were made in a session. For each manipulation, the mobile client returns a row, and the value in this column increments for each row.

IS_MANIP_TYPE_ID Type of manipulation:

0: Unknown

1: Selector

2: Panel Selector

3: Action Selector

4: Change Layout

5: Change View

6: Sort

7: Page By

IS_MANIP_NAME Name of the item that was manipulated. For example, if a selector was clicked, this is the name of the selector.

IS_MANIP_VALUE Value of the item that was manipulated. For example, if a panel selector was clicked, this is the name of the selected panel.

IS_MANIP_VALUE_SEQ If the value for IS_MANIP_VALUE is too long to fit in one row, the manipulation is spread over multiple rows and this value is incremented.

DETAIL1 A flexible column to capture different states of manipulation.

DETAILS2 A flexible column to capture different states of manipulation.

CT_MANIP_ST_TS Time when the user submitted the manipulation.

CT_MANIP_FN_TS Time when the mobile app finished processing the manipulation and forwarded it for rendering.

CT_MANIP_TM_MS Difference between CT_MANIP_ST_TS and CT_MANIP_FN_TS, in milliseconds.

DAY_ID Day the manipulation was started.


HOUR_ID Hour the manipulation was started.

MINUTE_ID Minute the manipulation was started.

EM_RECORD_TS Date and time when this information was written to the statistics database.

REP_ID ID of the report used in the request.
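Because CT_MANIP_FACT is exposed as a relational view, it can be queried with standard SQL. The following is a minimal sketch, not MicroStrategy-supplied code: it builds an in-memory SQLite stand-in containing only the documented columns it uses, then computes the average client-side processing time per manipulation type.

```python
import sqlite3

# Hypothetical stand-in for the CT_MANIP_FACT view; in a real deployment you
# would connect to the Enterprise Manager statistics warehouse instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE CT_MANIP_FACT ("
    "  IS_MANIP_TYPE_ID INTEGER,"  # manipulation type (1 = Selector, 6 = Sort, ...)
    "  CT_MANIP_TM_MS   INTEGER)"  # CT_MANIP_FN_TS minus CT_MANIP_ST_TS, in ms
)
conn.executemany(
    "INSERT INTO CT_MANIP_FACT VALUES (?, ?)",
    [(1, 120), (1, 80), (6, 400), (6, 600)],
)

# Average time the mobile app spent processing each kind of manipulation.
rows = conn.execute(
    "SELECT IS_MANIP_TYPE_ID, AVG(CT_MANIP_TM_MS)"
    "  FROM CT_MANIP_FACT"
    " GROUP BY IS_MANIP_TYPE_ID ORDER BY IS_MANIP_TYPE_ID"
).fetchall()
print(rows)  # [(1, 100.0), (6, 500.0)]
```

Against the real warehouse, only the final SELECT is needed, pointed at the actual CT_MANIP_FACT view.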

IS_CONFIG_PARAM_FACT
Contains information about Intelligence Server and project configuration
settings.

Related Lookup Tables


l IS_CONFIG_PARAM: Lookup table for configuration settings

l IS_PROJ: Lookup table for projects

l IS_SERVER: Lookup table for Intelligence Server definitions

List of Table Colum ns

Column Name Column Description

IS_CONFIG_TS Timestamp when the configuration setting was recorded.

IS_MD_ID Integer ID of the metadata being monitored.

IS_PROJ_ID ID of the project recording the configuration setting. If the configuration setting is an Intelligence Server setting, this value is 0.

IS_SERVER_ID Integer ID of the Intelligence Server definition.

IS_CONFIG_PARAM_ID Integer ID of the configuration setting.

IS_CONFIG_PARAM_VALUE Value of the configuration setting.

IS_CUBE_ACTION_FACT
Contains information about Intelligent Cube manipulations. Created as a
view based on columns in the source tables listed below.

Source Tables
l EM_MD: Lookup table for metadata

l IS_CUBE_REP_STATS: Statistics table containing information about Intelligent Cube manipulations

l IS_CUBE_ACTION_TYPE: Lookup table listing the manipulations that can occur

l IS_PROJ: Lookup table for projects

l IS_REP: Lookup table for report objects

List of Table Columns

Column Name Column Description

DAY_ID Day the action was started.

HOUR_ID Hour the action was started.

MINUTE_ID Minute the action was started.

EM_RECORD_TS Date and time when this information was written to the statistics database.


IS_SESSION_ID GUID of the session that started the action against the Intelligent Cube.

IS_REP_JOB_ID Job ID for the action on the Intelligent Cube.

IS_PROJ_ID Integer ID of the project where the Intelligent Cube is stored.

IS_CUBE_REP_ID Integer ID of the Intelligent Cube report that was published, if any.

IS_CUBE_INST_ID GUID of the Intelligent Cube instance in memory.

IS_CUBE_ACT_ID Type of action against the Intelligent Cube:

0: Reserved for MicroStrategy use

1: Cube Publish

2: Cube View Hit

3: Cube Dynamic Source Hit

4: Cube Append

5: Cube Update

6: Cube Delete

7: Cube Destroy

IS_REP_ID Integer ID of the report that hit the Intelligent Cube, if any.

IS_CUBE_SIZE_KB If the Intelligent Cube is published or refreshed, size of the Intelligent Cube in KB.

IS_CUBE_ROWS If the Intelligent Cube is published or refreshed, number of rows in the Intelligent Cube.

IS_REPOSITORY_ID Integer ID of the metadata repository.
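Because IS_CUBE_ACTION_FACT is an ordinary relational view, cube publish sizes and view hits can be compared with plain SQL. The sketch below is a hypothetical example, not MicroStrategy-supplied code: it builds an in-memory SQLite stand-in containing only the documented columns it uses, then counts Cube View Hits (IS_CUBE_ACT_ID = 2) per cube instance alongside the published size.

```python
import sqlite3

# Hypothetical stand-in for the IS_CUBE_ACTION_FACT view; in a real deployment
# you would connect to the Enterprise Manager statistics warehouse instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_CUBE_ACTION_FACT ("
    "  IS_CUBE_INST_ID TEXT,"     # GUID of the cube instance in memory
    "  IS_CUBE_ACT_ID  INTEGER,"  # 1 = Cube Publish, 2 = Cube View Hit, ...
    "  IS_CUBE_SIZE_KB INTEGER)"  # populated only for publish/refresh actions
)
conn.executemany(
    "INSERT INTO IS_CUBE_ACTION_FACT VALUES (?, ?, ?)",
    [("cube-A", 1, 2048), ("cube-A", 2, None),
     ("cube-A", 2, None), ("cube-B", 1, 512)],
)

# View hits per published cube: a rough "is this cube worth its memory?" check.
rows = conn.execute(
    "SELECT IS_CUBE_INST_ID,"
    "       SUM(CASE WHEN IS_CUBE_ACT_ID = 2 THEN 1 ELSE 0 END) AS view_hits,"
    "       MAX(IS_CUBE_SIZE_KB) AS size_kb"
    "  FROM IS_CUBE_ACTION_FACT"
    " GROUP BY IS_CUBE_INST_ID ORDER BY IS_CUBE_INST_ID"
).fetchall()
print(rows)  # [('cube-A', 2, 2048), ('cube-B', 0, 512)]
```

Against the real warehouse, only the final SELECT is needed, pointed at the actual view.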

IS_DOC_FACT
Contains information on the execution of a document job.

Primary key:


l DAY_ID2

l IS_SESSION_ID

l IS_DOC_JOB_SES_ID

l IS_DOC_JOB_ID

l IS_DOC_CACHE_IDX

Source Tables
l IS_DOCUMENT_STATS: Statistics table containing information about
document executions

l EM_IS_LAST_UPD_2: Configuration table that drives the loading process (for example, data loading window)

Related Lookup Tables


l EM_USER: Lookup table for users

l IS_DOC: Lookup table for documents

l IS_SESSION: Lookup table for session objects

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp of when the information was recorded by Intelligence Server into the IS_DOCUMENT_STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

IS_SERVER_ID Integer ID of the server where the session was created.

IS_SESSION_ID GUID of the current session object.


IS_DOC_JOB_SES_ID GUID of the session that created the cache if a cache was hit in this execution; otherwise, current session (default behavior).

IS_DOC_JOB_ID Integer ID of the document job execution.

IS_DOC_CACHE_IDX Always 0; not yet available (documents are not currently cached). Integer ID of the cache hit index; similar to Job ID but only for cache hits. -1 if no cache hit.

IS_CACHE_HIT_ID Always 0; not yet available (documents are not currently cached). Indicates whether the job hit a cache.

IS_CACHE_CREATE_ID Always 0; not yet available. Indicates whether a cache was created.

IS_PROJ_ID Integer ID of the project logged into.

IS_CUBE_EXEC_ST_TS Date and time when cube execution was started by Intelligence Server.

IS_CUBE_EXEC_FN_TS Date and time when cube execution was finished by Intelligence Server.

EM_USER_ID Integer ID of the user who created the session.

IS_DOC_ID Integer ID of the document that was executed.

IS_DOC_REQ_TS Timestamp of the execution request; request of the current session.

IS_DOC_EXEC_REQ_TS Timestamp of the execution request; request time of the original execution request if a cache was hit, otherwise the current session's request time.

IS_DOC_EXEC_ST_TS Timestamp of the execution start.

IS_DOC_EXEC_FN_TS Timestamp of the execution finish.

IS_EXPORT_INDIC Integer ID indicating whether this was an export job.

IS_DOC_QU_TM_MS Queue duration in milliseconds.


IS_DOC_CPU_TM_MS CPU duration in milliseconds.

IS_DOC_EXEC_TM_MS Execution duration in milliseconds.

IS_DOC_NBR_REPORTS Number of reports contained in the document job execution.

IS_DOC_NBR_PU_STPS Number of steps processed in the document job execution.

IS_DOC_NBR_PROMPTS Number of prompts in the document job execution.

IS_JOB_ERROR_ID Integer ID of the job's error message, if any.

IS_CANCELLED_ID Indicates whether the job was cancelled.

DAY_ID2 Integer ID of the day. Format YYYYMMDD.

HOUR_ID Integer ID of the hour. Format HH (24 hours).

MINUTE_ID2 Integer ID of the minute. Format HHMM (24 hours).

DAY_ID This column is deprecated.

MINUTE_ID This column is deprecated.
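The queue and execution durations recorded in IS_DOC_FACT make it straightforward to spot documents that spend more time waiting than running. The following is a minimal sketch under assumed sample data, using an in-memory SQLite stand-in with only the documented columns it needs; the real view lives in the Enterprise Manager statistics warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_DOC_FACT ("
    "  IS_DOC_ID         INTEGER,"  # integer ID of the executed document
    "  IS_DOC_QU_TM_MS   INTEGER,"  # queue duration in milliseconds
    "  IS_DOC_EXEC_TM_MS INTEGER)"  # execution duration in milliseconds
)
conn.executemany(
    "INSERT INTO IS_DOC_FACT VALUES (?, ?, ?)",
    [(10, 50, 950), (10, 150, 1050), (20, 0, 200)],
)

# Average queue vs. execution time per document across all recorded jobs.
rows = conn.execute(
    "SELECT IS_DOC_ID, AVG(IS_DOC_QU_TM_MS), AVG(IS_DOC_EXEC_TM_MS)"
    "  FROM IS_DOC_FACT GROUP BY IS_DOC_ID ORDER BY IS_DOC_ID"
).fetchall()
print(rows)  # [(10, 100.0, 1000.0), (20, 0.0, 200.0)]
```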

IS_DOC_STEP_FACT
Contains information on each processing step of a document execution.
Created as a view based on columns in the source tables listed below.

Source Tables
l IS_DOC_STEP_STATS: Statistics table containing information about
processing steps of document execution

l IS_PROJ: Lookup table for projects


l IS_DOCUMENT_STATS: Statistics table containing information about document executions

l IS_SESSION: Lookup table for session objects

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp of when the information was recorded by Intelligence Server into the _STATS tables.

IS_PROJ_ID Integer ID of the project logged into.

IS_DOC_JOB_SES_ID GUID of the session that created the cache if a cache was hit in this execution; otherwise, current session (default behavior).

IS_DOC_JOB_ID Integer ID of the document job execution.

IS_DOC_STEP_SEQ_ID Integer ID of the document job execution step.

IS_DOC_STEP_TYP_ID Integer ID of the document job execution step type.

IS_DOC_EXEC_ST_TS Timestamp of the execution start.

IS_DOC_EXEC_FN_TS Timestamp of the execution finish.

IS_DOC_QU_TM_MS Queue duration in milliseconds.

IS_DOC_CPU_TM_MS CPU duration in milliseconds.

IS_DOC_EXEC_TM_MS Execution duration in milliseconds.

DAY_ID Day the job was executed.

HOUR_ID Hour the job was executed.

MINUTE_ID Minute the job was executed.


IS_INBOX_ACT_FACT
Contains information about History List manipulations. Created as a view
based on columns in the source tables listed below.

Source Tables
l IS_INBOX_ACT_STATS: Statistics table containing information about
History List manipulations

l IS_INBOX_ACTION: Lookup table listing the manipulations that can occur

List of Table Columns

Column Name Column Description

DAY_ID Day the manipulation was started.

HOUR_ID Hour the manipulation was started.

MINUTE_ID Minute the manipulation was started.

IS_SESSION_ID GUID of the session that started the History List manipulation.

IS_SERVER_ID GUID of the server definition of the Intelligence Server being manipulated.

EM_APP_SRV_MACHINE Name and port number of the Intelligence Server machine where the manipulation is taking place.

IS_PROJ_ID GUID of the project where the History List message is mapped.

IS_INBOX_ACTION_ID Type of manipulation:

0: Reserved for MicroStrategy use

1: Add: Add message to History List

2: Remove: Remove message from History List

3: Rename: Rename message

4: Execute: Execute contents of message

5: Change Status: Change message status from Ready to Read

6: Requested: Retrieve message contents

7: Batch Remove: Intelligence Server bulk operation, such as cache expiration

EM_USER_ID ID of the user doing the manipulation.

IS_HL_MESSAGE_ID GUID of the History List message being acted on.

IS_HL_MESSAGE_TITLE Name of the report or document referenced in the History List message.

IS_HL_MESSAGE_DISP User-defined name of the History List message. Blank unless the user has renamed the History List message.

IS_CREATION_TS Date and time when the History List message was created.

IS_ACT_START_TS Date and time when the manipulation started.

IS_REP_JOB_ID Report job ID for the History List Message Content Request. Blank if no job was executed or if a document was executed.

IS_DOC_JOB_ID Document job ID for the History List Message Content Request. Blank if no job was executed or if a report was executed.

IS_SUBSCRIPTION_ID ID of the subscription that invoked the manipulation.

IS_ACTION_COMMENT If the manipulation is a batch deletion of History List messages, this field contains the condition or SQL statement used to delete the messages. If there is an error, this field holds the error message.

Date and time when this information was written to the statistics
EM_RECORD_TS
database.

IS_MESSAGE_FACT
Records all messages sent through Distribution Services.


Source Table
l IS_MESSAGE_STATS: Statistics table containing information about sent
messages

Related Lookup Tables


l IS_SCHED: Lookup table for schedules

l IS_PROJ: Lookup table for projects

l IS_SERVER: Lookup table for Intelligence Server definitions

l IS_DEVICE: Lookup table for devices

l EM_MD: Lookup table for metadata

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp of when information was recorded by Intelligence Server into the IS_MESSAGE_STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

IS_MESSAGE_INDEX Reserved for MicroStrategy use.

IS_SESSION_ID GUID of the session object.

DAY_ID Integer ID of the day. Format: YYYYMMDD.

HOUR_ID Integer ID of the hour. Format HH (24 hours).

MINUTE_ID Integer ID of the minute.

IS_HL_MESSAGE_ID Message ID of the job created.

IS_SCHEDULE_JOB_ID Job ID from Intelligence Server for the subscription job.

IS_DATATYPE_ID Type of data generated for the subscription:

3: Report

55: Document

IS_RCPT_CONTACT_ID GUID of the user who is receiving the data.

IS_DELIVERY_TYPE_ID Type of delivery:

1: Email

2: File

4: Printer

8: Custom

16: History List

20: Client

40: Cache

100: (MicroStrategy use only)

128: Mobile

255: (MicroStrategy use only)

IS_SUBS_INST_ID GUID of the subscription.

IS_SUBS_INST_NAME Name of the subscription.

IS_SCHEDULE_ID GUID of the schedule that triggered the subscription, or -1 if not applicable.

IS_DATA_ID GUID of the report or document requested.

IS_CONTACT_TYPE_ID Type of contact delivered to:

1: Contact

2: Contact group

4: MicroStrategy user

5: Count

8: MicroStrategy user group

10: LDAP user

31: (MicroStrategy use only)

IS_RCPT_GROUP_ID GUID of the group receiving the subscription, or NULL if no group.

IS_RCPT_CONTACT_NAME Name of the contact recipient.

IS_DFLT_ADDR Indicates whether the address where the content was delivered is the default.

IS_ADDRESS_ID GUID of the address delivered to.

IS_DEVICE_ID ID of the Distribution Services device used in the delivery.

IS_NOTIF_MSG Indicates whether a delivery notification message is sent.

IS_NOTIF_ADDR GUID of the notification address.

IS_SERVER_ID Numeric ID of the server definition.

IS_PROJ_ID Numeric ID of the source project.

IS_EXEC_ST_TM_TS Start time for subscription execution.

IS_EXEC_FM_TM_TS Finish time for subscription execution.

IS_DELIVERY_STATUS Indicates whether the delivery was successful.

IS_PHYSICAL_ADD Physical address for delivery.

IS_BATCH_ID Reserved for MicroStrategy use.

EM_APP_SRV_MACHINE Name of the Intelligence Server.

IS_PERF_MON_FACT
Contains information about job performance.


Source Table
l IS_PERF_MON_STATS: Statistics table containing information about job
performance

Related Lookup Table


l IS_PROJ: Lookup table for projects

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp of when the information was recorded by Intelligence Server into the _STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

EM_APP_SRV_MACHINE The name of the Intelligence Server machine logging the statistics.

IS_COUNTER_CAT The category of the counter, such as Memory, MicroStrategy Server Jobs, or MicroStrategy Server Users.

IS_COUNTER_INSTANCE For MicroStrategy use.

IS_COUNTER_NAME The name of the performance counter.

IS_EVENT_TIME Timestamp of when the event occurred in Intelligence Server.

IS_COUNTER_VALUE The value of the performance counter.

IS_CTR_VAL_TYP The type of performance counter.

IS_PROJ_ID Integer ID of the project logged into.

DAY_ID Integer ID of the day. Format YYYYMMDD.


HOUR_ID Integer ID of the hour. Format HH (24 hours).

MINUTE_ID Integer ID of the minute.
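Since each IS_PERF_MON_FACT row is one timestamped sample of a performance counter, the current value of a counter is simply its most recent sample. The sketch below is a hypothetical example against an in-memory SQLite stand-in (only the documented columns used here are created); in practice the same SELECT would run against the statistics warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_PERF_MON_FACT ("
    "  IS_COUNTER_CAT   TEXT,"  # counter category, such as Memory
    "  IS_COUNTER_NAME  TEXT,"  # name of the performance counter
    "  IS_EVENT_TIME    TEXT,"  # when the event occurred in Intelligence Server
    "  IS_COUNTER_VALUE REAL)"  # sampled value of the counter
)
conn.executemany(
    "INSERT INTO IS_PERF_MON_FACT VALUES (?, ?, ?, ?)",
    [
        ("Memory", "Working Set", "2024-09-01 10:00", 512.0),
        ("Memory", "Working Set", "2024-09-01 11:00", 640.0),
    ],
)

# Most recent sample wins; ISO-style timestamps sort correctly as text.
row = conn.execute(
    "SELECT IS_COUNTER_NAME, IS_COUNTER_VALUE FROM IS_PERF_MON_FACT"
    " ORDER BY IS_EVENT_TIME DESC LIMIT 1"
).fetchone()
print(row)  # ('Working Set', 640.0)
```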

IS_PR_ANS_FACT
Contains information about prompt answers. Created as a view based on
columns in the source tables listed below.

Source Tables
l EM_MD: Lookup table for metadata

l EM_PR_ANS_TYPE: Lookup table for prompt answer type

l IS_PR_ANS_STATS: Statistics table containing information about session activity

l IS_PROJ: Lookup table for projects

l IS_PROMPT: Lookup table for prompts

l IS_SERVER: Lookup table for Intelligence Server definitions

l LU_OBJ_TYPE: Lookup table for COM object type

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.

IS_REP_JOB_ID Job ID assigned by the server.

IS_SESSION_ID GUID for the user session.


PR_ORDER_ID Order in which prompts were answered.

PR_ANS_SEQ Sequence ID. For MicroStrategy use.

PR_LOC_ID ID of the object that the prompt resides in.

PR_LOC_TYPE COM object type of the object that the prompt resides in.

PR_LOC_DESC Object name of the object that the prompt resides in.

PR_ANS_GUID Reserved for MicroStrategy use.

PR_ANSWERS Prompt answers.

PR_ANS_TYPE Prompt answer type.

IS_SERVER_ID Integer ID of the server where the session was created.

PR_ID Integer ID of the prompt.

PR_GUID GUID of the prompt.

PR_TITLE Prompt title.

PR_NAME Prompt name.

IS_REQUIRED Y if a prompt answer is required, N if a prompt answer is not required.

IS_PROJ_ID Integer ID of the project logged into.

IS_PROJ_NAME Project name.

EM_APP_SRV_MACHINE The Intelligence Server machine name and IP address.

DAY_ID Day the prompt was answered.

HOUR_ID Hour the prompt was answered.

MINUTE_ID Minute the prompt was answered.

IS_REPOSITORY_ID Integer ID of the metadata repository.


IS_PROJECT_FACT_1
Represents the number of logins to a project in a day by user session and
project.

Source Tables
l IS_PROJ_SESSION_STATS: Statistics table containing information on session activity by project

l IS_SESSION_STATS: Statistics table containing information about session activity on Intelligence Server

l IS_SERVER: Lookup table for Intelligence Server definitions

l EM_USER: Lookup table for users

l IS_PROJ: Lookup table for projects

List of Table Columns

Column Name Column Description

IS_SESSION_ID GUID of the session object.

IS_PROJ_ID Integer ID of the project logged into.

IS_SERVER_ID Integer ID of the server where the session was created.

EM_APP_SRV_MACHINE The name of the Intelligence Server machine logging the statistics.

EM_USER_ID Integer ID of the user who created the session.

IS_CONNECT_TS Timestamp of the beginning of the session (login).

IS_DISCONNECT_TS Timestamp of the end of the session (logout). NULL if the session is still open at the time of the Enterprise Manager data load.

IS_TMP_DISCON_TS Represents the temporary end of a session, if that session is still open. Used to calculate the session time.


IS_SESSION_TM_SEC Duration within the hour, in seconds, of the session.

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

DAY_ID Integer ID of the day. Format YYYYMMDD.

HOUR_ID Hour the user logged in.

MINUTE_ID Minute the user logged in.

IS_REPOSITORY_ID Integer ID of the metadata repository.
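Because IS_PROJECT_FACT_1 carries one row per project login, counting logins per project per day is a simple aggregation. The following is a minimal sketch under assumed sample data, using an in-memory SQLite stand-in with only the documented columns it needs; against the real statistics warehouse only the final SELECT would be required.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_PROJECT_FACT_1 ("
    "  IS_SESSION_ID TEXT,"    # GUID of the session object
    "  IS_PROJ_ID    INTEGER," # integer ID of the project logged into
    "  DAY_ID        INTEGER)" # integer ID of the day, YYYYMMDD
)
conn.executemany(
    "INSERT INTO IS_PROJECT_FACT_1 VALUES (?, ?, ?)",
    [("s1", 1, 20240901), ("s2", 1, 20240901), ("s3", 2, 20240901)],
)

# Distinct sessions per project per day = daily project logins.
rows = conn.execute(
    "SELECT DAY_ID, IS_PROJ_ID, COUNT(DISTINCT IS_SESSION_ID)"
    "  FROM IS_PROJECT_FACT_1"
    " GROUP BY DAY_ID, IS_PROJ_ID ORDER BY IS_PROJ_ID"
).fetchall()
print(rows)  # [(20240901, 1, 2), (20240901, 2, 1)]
```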

IS_REP_COL_FACT
Used to analyze which data warehouse tables and columns are accessed by
MicroStrategy report jobs, by which SQL clause they are accessed
(SELECT, FROM, and so on), and how frequently they are accessed. This
fact table is at the level of a Report Job rather than at the level of each SQL
pass executed to satisfy a report job request. The information available in
this table can be useful for database tuning. Created as a view based on
columns in the source tables listed below.

Source Tables
l IS_REP_COL_STATS: Statistics table containing information about column-table combinations used in the SQL during report executions

l IS_SESSION: Lookup table for session objects

l IS_REP_FACT: Fact table for report job executions


l IS_DB_TAB: Lookup table for database tables

l IS_COL: Lookup table for columns

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when information was recorded by Intelligence Server into the _STATS tables.

IS_JOB_ID Integer ID of the report job execution.

IS_SESSION_ID GUID of the current session object.

IS_COL_GUID GUID of the column object.

IS_TABLE_ID Integer ID of the physical database table that was used.

IS_COL_NAME Name of the column in the database table that was used.

SQL_CLAUSE_TYPE_ID Integer ID of the type of SQL clause (SELECT, FROM, WHERE, and so on).

COUNTER The number of times a specific column/table/clause type combination occurs within a report execution.

DAY_ID Day the job was executed.

HOUR_ID Hour the job was executed.

MINUTE_ID Minute the job was executed.
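For the database-tuning use case described above, a useful first query is to rank column/table/clause combinations by total COUNTER across all report jobs; columns that dominate a WHERE-type clause are natural index candidates. The sketch below is a hypothetical example against an in-memory SQLite stand-in (the clause-type IDs and sample data are assumptions, not documented values).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_REP_COL_FACT ("
    "  IS_TABLE_ID        INTEGER,"  # physical database table that was used
    "  IS_COL_NAME        TEXT,"     # column in that table
    "  SQL_CLAUSE_TYPE_ID INTEGER,"  # SELECT/FROM/WHERE/... as an integer ID
    "  COUNTER            INTEGER)"  # occurrences within one report execution
)
conn.executemany(
    "INSERT INTO IS_REP_COL_FACT VALUES (?, ?, ?, ?)",
    [(7, "ORDER_DATE", 3, 4), (7, "ORDER_DATE", 3, 6), (7, "CUST_ID", 1, 2)],
)

# Hottest column/clause combinations across all recorded report executions.
rows = conn.execute(
    "SELECT IS_TABLE_ID, IS_COL_NAME, SQL_CLAUSE_TYPE_ID, SUM(COUNTER) AS hits"
    "  FROM IS_REP_COL_FACT"
    " GROUP BY IS_TABLE_ID, IS_COL_NAME, SQL_CLAUSE_TYPE_ID"
    " ORDER BY hits DESC"
).fetchall()
print(rows)  # [(7, 'ORDER_DATE', 3, 10), (7, 'CUST_ID', 1, 2)]
```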

IS_REP_FACT
Contains information about report job executions.

Primary key:

l DAY_ID2

l IS_SESSION_ID

l IS_REP_JOB_SES_ID


l IS_REP_JOB_ID

l IS_DOC_JOB_ID

l IS_REP_CACHE_IDX

Source Tables

l IS_CACHE_HIT_STATS: Statistics table containing information about job executions that hit a cache

l IS_DOC_FACT: Fact table containing information about document job executions

l IS_DOCUMENT_STATS: Statistics table containing information about document job executions

l IS_REP_SEC_STATS: Statistics table containing information about job executions with security filters

l IS_REPORT_STATS: Statistics table containing information about report job executions

l IS_SCHEDULE_STATS: Statistics table containing information about job executions run by a schedule

l EM_IS_LAST_UPD_2: Configuration table that drives the loading process (for example, data loading window)

Related Lookup Tables


l IS_SESSION: Lookup table for session objects

l IS_REP: Lookup table for report objects

l IS_TEMP: Lookup table for template objects

l IS_FILT: Lookup table for filter objects


l IS_SCHED: Lookup table for schedule objects

l IS_DOC: Lookup table for document objects

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

IS_SERVER_ID Integer ID of the server where the session was created.

IS_SESSION_ID GUID of the current session object.

IS_REP_JOB_SES_ID GUID of the session that created the cache if a cache was hit in this execution; otherwise, current session (default behavior).

IS_REP_JOB_ID Integer ID of the report job execution.

IS_REP_CACHE_IDX Integer ID of the cache hit index; similar to Job ID but only for cache hits. -1 if no cache hit.

IS_CACHE_HIT_ID Indicates whether the job hit a cache.

IS_CACHE_CREATE_ID Indicates whether a cache was created.

EM_USER_ID Integer ID of the user who created the session.

EM_DB_USER_ID DB User used to log in to the warehouse.

IS_DB_INST_ID Integer ID of the db instance object.

IS_PROJ_ID Integer ID of the project logged in to.

IS_REP_ID Integer ID of the report object.

IS_EMB_FILT_IND_ID Indicates whether the report filter is embedded.

IS_EMB_TEMP_IND_ID Indicates whether the report template is embedded.

IS_FILT_ID Integer ID of the filter object.

IS_TEMP_ID Integer ID of the template object.

IS_DOC_JOB_ID Integer ID of the parent document execution if the current report is a child of a document, or of the parent document execution of the original report if a cache was hit. Otherwise, -1.

IS_DOC_ID Integer ID of the parent document object if the current report is a child of a document. Otherwise, -1.

IS_REP_REQ_TS Timestamp of the execution request; request of the current session.

IS_REP_EXEC_REQ_TS Timestamp of the execution request; request time of the original execution request if a cache was hit, otherwise the current session's request time.

IS_REP_EXEC_ST_TS Timestamp of the execution start.

IS_REP_EXEC_FN_TS Timestamp of the execution finish.

IS_REP_QU_TM_MS Queue duration in milliseconds.

IS_REP_CPU_TM_MS CPU duration in milliseconds.

IS_REP_EXEC_TM_MS Execution duration in milliseconds.

IS_REP_ELAPS_TM_MS Difference between start time and finish time; includes time for prompt responses.

IS_REP_NBR_SQL_PAS Number of SQL passes.


IS_REP_RESULT_SIZE Number of rows in the result set.

IS_REP_SQL_LENGTH Not yet available. Number of characters.

IS_REP_NBR_TABLES Not yet available. Number of tables.

IS_REP_NBR_PU_STPS Number of steps processed in the execution.

IS_REP_NBR_PROMPTS Number of prompts in the report execution.

IS_JOB_ERROR_ID Integer ID of the job's error message, if any.

IS_ERROR_IND_ID Indicates whether the job got an error.

IS_DB_ERROR_IND_ID Indicates whether the database returned an error.

IS_CANCELLED_ID Indicates whether the job was canceled.

IS_AD_HOC_ID Indicates whether the job was created ad hoc.

IS_DATAMART_ID Indicates whether the job created a data mart.

IS_ELEM_LOAD_ID Indicates whether the job was the result of an element load.

IS_DRILL_ID Indicates whether the job was the result of a drill.

IS_SEC_FILT_IND_ID Indicates whether the job had a security filter associated with it.

IS_SEC_FILT_ID Integer ID of the security filter applied.

IS_SCHED_ID Integer ID of the schedule that executed the job.

IS_SCHED_IND_ID Indicates whether the job was executed by a schedule.

IS_REP_PRIO_NBR Priority of the report execution.

IS_REP_COST_NBR Cost of the report execution.


DAY_ID2 Integer ID of the day. Format YYYYMMDD.

HOUR_ID Integer ID of the hour. Format HH (24 hours).

MINUTE_ID2 Integer ID of the minute. Format HHMM (24 hours).

DRILLFROM Integer ID of an attribute, metric, or other object that is drilled from.

DRILLFROM_OT_ID Integer ID for the object type of the object that is drilled from.

DRILLTO Integer ID of an attribute, template, or other object that is drilled to.

DRILLTO_OT_ID Integer ID for the object type of the object that is drilled to.

DRILLTYPE Integer flag indicating the type of drill performed (for example, drill to template, drill to attribute, and so on).

ERRORMESSAGE Error message returned by Intelligence Server.

IS_CACHE_SESSION_ID Alphanumeric ID of the session that created the cache on Intelligence Server.

IS_CACHE_JOB_ID Integer ID of the job that created the cache on Intelligence Server.

IS_REP_PMT_ANS_TS Date and time when the prompt was answered.

IS_SQL_EXEC_IND_ID Integer ID indicating whether this job generated SQL and hit a database.

IS_EXPORT_IND_ID Integer ID indicating whether this was an export job.

IS_CUBE_INST_ID GUID of the Intelligent Cube object (if job hits it).

IS_CUBE_SIZE Size of the Intelligent Cube the job hits (if applicable).

IS_REP_PR_ANS_TM_MS Time in milliseconds of how long the user took to answer the prompt.
IS_EXEC_FLAG Internal flag that indicates the type of job execution.


IS_REPOSITORY_ID Integer ID of the metadata repository.

IS_MESSAGE_ID Internal alphanumeric ID attached to every job.

DAY_ID This column is deprecated.

MINUTE_ID This column is deprecated.
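A common first analysis on IS_REP_FACT is the report cache hit rate and average execution time. The sketch below is a hypothetical example using an in-memory SQLite stand-in with only the columns it needs; it assumes a 1/0 encoding for IS_CACHE_HIT_ID, which is an assumption for illustration rather than a documented value set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IS_REP_FACT ("
    "  IS_REP_ID         INTEGER,"  # integer ID of the report object
    "  IS_CACHE_HIT_ID   INTEGER,"  # 1 if the job hit a cache, else 0 (assumed)
    "  IS_REP_EXEC_TM_MS INTEGER)"  # execution duration in milliseconds
)
conn.executemany(
    "INSERT INTO IS_REP_FACT VALUES (?, ?, ?)",
    [(5, 1, 40), (5, 0, 900), (5, 1, 35), (5, 1, 30)],
)

# With a 1/0 flag, AVG of the flag is the cache hit rate.
hit_rate, avg_ms = conn.execute(
    "SELECT AVG(IS_CACHE_HIT_ID), AVG(IS_REP_EXEC_TM_MS) FROM IS_REP_FACT"
).fetchone()
print(hit_rate, avg_ms)  # 0.75 251.25
```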

IS_REP_SEC_FACT
Contains information about security filters applied to report jobs. Created as
a view based on columns in the source tables listed below.

Source Tables
l IS_REP_FACT: Contains information about report job executions

l IS_REP_SEC_STATS: Statistics table containing information about job executions with security filters

l IS_SEC_FILT: Provides descriptive information about the security filters being tracked

l IS_SF_ATT: Relationship table between security filters and attributes

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.

EM_LOAD_TS Timestamp of when the Enterprise Manager data load process began.

IS_PROJ_ID Integer ID of the project logged in to.


IS_REP_JOB_SES_ID GUID of the session that created the cache if a cache was hit in this execution; otherwise, the current session (default behavior).

IS_REP_JOB_ID Integer ID of the report job execution.

IS_REP_SEC_FILT_ID Integer ID of the security filter.

IS_ATT_ID Integer ID of the attribute.

DAY_ID Day the job was requested for execution.

HOUR_ID Hour the job was requested for execution.

MINUTE_ID Minute the job was requested for execution.

IS_REPOSITORY_ID Integer ID of the metadata repository.

IS_REP_SQL_FACT
Contains the SQL that is executed on the warehouse by report job
executions. Created as a view based on columns in the source tables listed
below.

Source Tables

- IS_REP_FACT: Contains information about report job executions

- IS_PROJ: Lookup table for projects

- IS_REP_SQL_STATS: Statistics table containing information about SQL statements


List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.

IS_PROJ_ID Integer ID of the project logged into.

IS_PROJ_NAME Project name.

IS_REP_JOB_SES_ID GUID of the current session object.

IS_REP_JOB_ID Integer ID of the report job execution.

IS_PASS_SEQ_NBR Integer ID of the sequence of the pass.

IS_REP_SQL_SEQ If a SQL statement is very long, it is broken into multiple rows. This column represents the sequence of a SQL statement. For example, if a statement is broken into two parts, this table contains two rows for that statement, with this column set to '1' and '2'.

IS_REP_EXEC_ST_TS Timestamp of the execution start.

IS_REP_EXEC_FN_TS Timestamp of the execution finish.

IS_REP_EXEC_TM_MS Execution duration in milliseconds.

IS_REP_SQL_STATEM SQL statement.

IS_REP_SQL_LENGTH Length of SQL statement.

IS_REP_NBR_TABLES Number of tables accessed by SQL statement.

IS_PASS_TYPE_ID Integer ID of the type of SQL pass.


IS_REP_DB_ERR_MSG Error returned from the database; NULL if no error.

DAY_ID Day the job was requested for execution.

HOUR_ID Hour the job was requested for execution.

MINUTE_ID Minute the job was requested for execution.

IS_REPOSITORY_ID Integer ID of the metadata repository.
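Because a long statement is split across rows keyed by IS_REP_SQL_SEQ, reconstructing the full SQL means concatenating fragments in sequence order per job and pass. This is a minimal sketch of that reassembly; the row tuples are invented sample data, and only the column semantics come from the table above.

```python
from collections import defaultdict

# Hypothetical rows from IS_REP_SQL_FACT:
# (IS_REP_JOB_ID, IS_PASS_SEQ_NBR, IS_REP_SQL_SEQ, IS_REP_SQL_STATEM)
rows = [
    (7, 1, 2, "from LU_REPORT a11"),
    (7, 1, 1, "select a11.Report "),
    (7, 2, 1, "drop table TMP1"),
]

def reassemble_sql(rows):
    """Concatenate statement fragments per (job, pass), ordered by IS_REP_SQL_SEQ."""
    parts = defaultdict(list)
    for job_id, pass_nbr, seq, fragment in rows:
        parts[(job_id, pass_nbr)].append((seq, fragment))
    return {
        key: "".join(frag for _, frag in sorted(chunks))
        for key, chunks in parts.items()
    }

sql = reassemble_sql(rows)
print(sql[(7, 1)])  # select a11.Report from LU_REPORT a11
```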

IS_REP_STEP_FACT
Contains information about the processing steps through which the report
execution passes. Created as a view based on columns in the source tables
listed below.

Source Tables

- IS_REP_STEP_STATS: Statistics table containing information about report job processing steps

- IS_REPORT_STATS: Statistics table containing information about report job executions

- IS_SESSION: Lookup table for session objects

- IS_PROJ: Lookup table for projects

List of Table Columns

Column Name Column Description

EM_RECORD_TS Timestamp when the information was recorded by Intelligence Server into the _STATS table.


IS_PROJ_ID Integer ID of the project logged into.

IS_PROJ_NAME Project name.

IS_REP_JOB_SES_ID GUID of the current session object.

IS_REP_JOB_ID Integer ID of the report job execution.

IS_REP_STEP_SEQ_ID Integer ID of the sequence of the step.

IS_REP_STEP_TYP_ID Integer ID of the type of step.

IS_REP_EXEC_ST_TS Timestamp of the execution start.

IS_REP_EXEC_FN_TS Timestamp of the execution finish.

IS_REP_QU_TM_MS Queue duration in milliseconds.

IS_REP_CPU_TM_MS CPU duration in milliseconds.

IS_REP_EXEC_TM_MS Execution duration in milliseconds.

DAY_ID Day the job was requested for execution.

HOUR_ID Hour the job was requested for execution.

MINUTE_ID Minute the job was requested for execution.

IS_REPOSITORY_ID Integer ID of the metadata repository.
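A common use of the step fact is rolling the per-step queue, CPU, and execution durations up to a per-job total. The sketch below shows the aggregation shape; the row values are invented, and only the column meanings come from the table above.

```python
# Hypothetical rows from IS_REP_STEP_FACT:
# (IS_REP_JOB_ID, IS_REP_STEP_SEQ_ID, IS_REP_QU_TM_MS, IS_REP_CPU_TM_MS, IS_REP_EXEC_TM_MS)
rows = [
    (42, 1, 10, 5, 120),
    (42, 2, 0, 40, 300),
    (43, 1, 5, 2, 80),
]

def job_totals(rows):
    """Sum queue, CPU, and execution time (ms) across all steps of each job."""
    totals = {}
    for job_id, _seq, qu_ms, cpu_ms, exec_ms in rows:
        q, c, e = totals.get(job_id, (0, 0, 0))
        totals[job_id] = (q + qu_ms, c + cpu_ms, e + exec_ms)
    return totals

print(job_totals(rows)[42])  # (10, 45, 420)
```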

IS_SESSION_FACT
Enables session concurrency analysis. Keeps data on each session for each
hour of connectivity.


Related Lookup Tables


- IS_SESSION: Lookup table for session objects

- DT_DAY: Lookup table for dates

- TM_HOUR: Lookup table for hours

List of Table Columns

Column Name Column Description

IS_SESSION_ID GUID of the session object.

IS_SERVER_ID Integer ID of the server where the session was created.

EM_USER_ID Integer ID of the user who created the session.

IS_CONNECT_TS Timestamp of the beginning of the session (login).

IS_DISCONNECT_TS Timestamp of the end of the session (logout). NULL if the session is still open at the time of the Enterprise Manager data load.

IS_CONNEC_M_ID Integer representation of the day and hour when the connection began. Format: YYYYMMDDHH (24 hours).

IS_DISCON_M_ID Integer representation of the day and hour when the connection ended. Format: YYYYMMDDHH (24 hours).

EM_CONNECT_SOURCE Connection source through which the session was established:

0: Unknown

1: MicroStrategy Developer

2: MicroStrategy Intelligence Server Administrator

3: MicroStrategy Web Administrator

4: MicroStrategy Intelligence Server

5: MicroStrategy Project Upgrade

6: MicroStrategy Web

7: MicroStrategy Scheduler


8: Custom application

9: MicroStrategy Narrowcast Server

10: MicroStrategy Object Manager

11: ODBO Provider

12: ODBO Cube Designer

13: MicroStrategy Command Manager

14: MicroStrategy Enterprise Manager

15: MicroStrategy Command Line Interface

16: MicroStrategy Project Builder

17: MicroStrategy Configuration Wizard

18: MicroStrategy MD Scan

19: MicroStrategy Cache Utility

20: MicroStrategy Fire Event

21: MicroStrategy Java Admin Clients

22: MicroStrategy Web Services

23: MicroStrategy Office

24: MicroStrategy Tools

25: MicroStrategy Portal Server

26: MicroStrategy Integrity Manager

27: Metadata Update

28: COM Browser

29: MicroStrategy Mobile

30: Repository Translation Wizard

32: MicroStrategy Cube Advisor

DAY_ID Integer ID of the day. Format: YYYYMMDD.


HOUR_ID Integer ID of the hour. Format: HH (24 hours).

MINUTE_ID Minute the job was executed.
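Since IS_CONNEC_M_ID and IS_DISCON_M_ID encode day-and-hour as YYYYMMDDHH integers, session concurrency per hour can be computed by expanding each session's range into hour buckets and counting overlaps. This is a hedged sketch of that analysis; the session tuples are invented sample data, and only the column formats come from the table above.

```python
from datetime import datetime, timedelta

def hours_spanned(connect_id, discon_id):
    """Expand an IS_CONNEC_M_ID..IS_DISCON_M_ID range (YYYYMMDDHH integers)
    into the list of hour buckets the session was open during."""
    start = datetime.strptime(str(connect_id), "%Y%m%d%H")
    end = datetime.strptime(str(discon_id), "%Y%m%d%H")
    hours = []
    while start <= end:
        hours.append(int(start.strftime("%Y%m%d%H")))
        start += timedelta(hours=1)
    return hours

def concurrency(sessions):
    """Count open sessions per hour bucket from (connect_id, discon_id) pairs."""
    counts = {}
    for connect_id, discon_id in sessions:
        for h in hours_spanned(connect_id, discon_id):
            counts[h] = counts.get(h, 0) + 1
    return counts

# Two sessions overlapping in the 10:00 hour of September 1, 2024:
sessions = [(2024090109, 2024090111), (2024090110, 2024090110)]
print(concurrency(sessions)[2024090110])  # 2
```

Using datetime arithmetic (rather than incrementing the raw integer) correctly rolls over day and month boundaries.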

IS_SESSION_MONITOR
For MicroStrategy use. A view table that provides an overview of recent
session activity.

Lookup Tables

Table Name Function

CT_EXEC_TYPE Lookup table for mobile client execution type:

0: Unknown

1: User

2: Pre-cached

3: Application recovery

4: Subscription cache pre-loading

5: Transaction subsequent action

6: Report queue

7: Report queue recall

8: Back button

CT_MANIP_TYPE Lookup table for mobile client manipulation type:

0: Unknown

1: Selector

2: Panel Selector

3: Action Selector

4: Change Layout


5: Change View

6: Sort

7: Page By

8: Information Window

9: Annotations

10: E-mail Screenshots

11: Widget: Video-Play

12: Widget: Video-Pause

20: Widget: Multiple-Download

21: Widget: Multiple-Open

DT_DAY Lookup table for Days in the Date hierarchy.

DT_MONTH Lookup table for Months in the Date hierarchy.

DT_MONTH_OF_YR Lookup table for Months of the Year in the Date hierarchy.

DT_QUARTER Lookup table for Quarters in the Date hierarchy.

DT_QUARTER_OF_YR Lookup table for the Quarters of the Year in the Date hierarchy.

DT_WEEKDAY Lookup table for the Days of the Week in the Date hierarchy.

DT_WEEKOFYEAR Lookup table for Weeks of the Year in the Date hierarchy.

DT_YEAR Lookup table for Years in the Date hierarchy.

EM_APP_SRV_MACHINE Lookup table for Intelligence Server machines used in statistics.

EM_CLIENT_MACHINE Lookup table for Client Machines used in the statistics.

EM_CONNECT_SOURCE Lookup table for the connection source of a session on Intelligence Server.

EM_DB_USER Lookup table for the database users used in the statistics.

EM_EXISTS_IND Lookup table for the existence status of objects.

EM_HIDDEN_IND Lookup table for the hidden status of objects.

EM_JOB_STATUS (Deprecated) Lookup table for the job status of job executions on Intelligence Server:

0: Ready

1: Executing

2: Waiting

3: Completed

4: Error

5: Cancelled

6: Stopped

7: Waiting for governor

8: Waiting for prompt

9: Waiting for project

10: Waiting for cache

11: Waiting for children

12: Waiting for fetching results

EM_MONITORED_PROJECTS Provides information about the projects being monitored and when the first and last data loads occurred.

EM_OWNER Provides descriptive information about the owners being tracked. This table is a view based on columns from the EM_USER table.

EM_USER Provides descriptive information about the users being tracked.


EM_USR_GP Provides descriptive information about the user groups being tracked.

EM_WEB_SRV_MACHINE Lookup table for the Web Server Machines used in the statistics.

IS_AD_HOC_IND Lookup table for the Ad Hoc indicator.

IS_ATT Provides descriptive information about the attributes being tracked.

IS_ATT_FORM Provides descriptive information about the attribute forms being tracked.

IS_CACHE_CREATION_IND Lookup table for the Cache Creation indicator.

IS_CACHE_HIT_TYPE Lookup table for the Cache Hit indicator:

0: Reserved

1: Server cache or no cache hit

2: Device cache

6: In-memory view cache

IS_CANCELLED_IND Lookup table for the Canceled indicator.

IS_CHILD_JOB_IND Lookup table for the Child Job indicator.

IS_COL Provides descriptive information about the columns being tracked.

IS_CONFIG_PARAM Lookup table for Intelligence Server and project-level configuration settings.

IS_CONS Provides descriptive information about the consolidations being tracked.

IS_CONTACT_TYPE Lookup table for the type of contact delivered to through Distribution Services.

IS_CUBE_ACTION_TYPE Lookup table for the manipulations that can be performed on an Intelligent Cube.


IS_CUBE_HIT_IND Lookup table for the Cube Hit indicator.

IS_CUBE_VIEW Provides descriptive information about the Intelligent Cubes being tracked. This table is a view based on columns from the IS_REP table.

IS_CUST_GP Provides descriptive information about the custom groups being tracked.

IS_DATA_TYPE Lookup table for the job type:

3: Report

55: Document

IS_DATAMART_IND Lookup table for the Data Mart indicator.

IS_DB_ERROR_IND Lookup table for the Database Error indicator.

IS_DB_INST Provides descriptive information about the database instances in the monitored Intelligence Servers.

IS_DB_TAB Lookup table for the database tables being monitored.

IS_DELIVERY_STATUS_IND Lookup table for the Delivery Status indicator.

IS_DELIVERY_TYPE Lookup table for the Distribution Services delivery type.

IS_DEVICE Provides descriptive information about the devices being tracked.

IS_DOC Provides descriptive information about the document objects being tracked.

IS_DOC_STEP_TYPE Lookup table for the step types in document execution. For a list and explanation of values, see Lookup Tables, page 2253.

IS_DOCTYPE_IND Indicator lookup table for document or dashboard type. Types include:

-1: Unknown

0: HTML document

1: Report Services document

2: Visual Insight dashboard

IS_DRILL_IND Lookup table for the Drill indicator.

IS_ELEM_LOAD_IND Lookup table for the Element Load indicator.

IS_ERROR_IND Lookup table for the Error indicator.

IS_EVENT Provides descriptive information about the events being tracked.

IS_EXPORT_IND Lookup table for the Export indicator.

IS_FACT Provides descriptive information about the facts being tracked.

IS_FILT Provides descriptive information about the filters being tracked.

IS_HIER Provides descriptive information about the hierarchies being tracked.

IS_HIER_DRILL_IND Lookup table for the Drillable Hierarchy indicator.

IS_INBOX_ACTION Provides a list of the different manipulations that can be performed on a History List message.

IS_JOB_PRIORITY_TYPE Lookup table for the Job Priority type.

IS_MET Provides descriptive information about the metrics being tracked.

IS_OLAP_CUBE Provides descriptive information about the Intelligent Cubes being tracked.

IS_PRIORITY_MAP Indicator lookup table for priority maps.

IS_PROJ Provides descriptive information about the projects being tracked.

IS_PROMPT Provides descriptive information about the prompts being tracked.


IS_PROMPT_IND Lookup table for the Prompt indicator.

IS_REP Provides descriptive information about the reports being tracked.

IS_REP_SQL_PASS_TYPE Lookup table for the SQL pass types of report execution.

IS_REP_STEP_TYPE Lookup table for the step types of report execution. For a list and explanation of values, see Lookup Tables, page 2253.

IS_REPCTYPE_IND Lookup table for the Report Cube Type indicator:

0: Reserved

1: Base Report

2: Working Set Report

3: Private Base Report

5: Report Services Base Report

6: CSQL Pre-Execution Report

7: OLAP Cube Report

8: OLAP View Report

9: Incremental Refresh Report

IS_REPTYPE_IND Indicator lookup table for report type. Report types include:

-1: Unknown: The server is unable to retrieve the report type.

0: Reserved: Ad hoc reports. May include other reports that are not persisted in the metadata at the point of execution.

1: Relational: All regular project reports.

2: MDX: Reports built from SAP BW, Essbase, Analysis Services, and other cube sources.

3: Custom SQL Freeform: MicroStrategy Freeform SQL reports, in which the SQL is entered directly into the interface.

4: Custom SQL Wizard: MicroStrategy Query Builder reports.


5: Flat File: Reserved for MicroStrategy use.

IS_SCHED Provides descriptive information about the schedules being tracked.

IS_SCHEDULE_IND Lookup table for the Schedule indicator.

IS_SEC_FILT Provides descriptive information about the security filters being tracked.

IS_SEC_FILT_IND Lookup table for the Security Filter indicator.

IS_SERVER Provides descriptive information about the server definitions being tracked.

IS_SESSION Lookup table for the session statistics logged by Intelligence Servers. Primary key: DAY_ID, IS_SESSION_ID

IS_SQL_CLAUSE_TYPE Lookup table for SQL clause types; used to determine which SQL clause (SELECT, WHERE, GROUP BY, and so on) a particular column was used in during a report execution.

SQL Clause Type attributes:

1: Select: Column was used in the SELECT clause but was not aggregated, nor does it appear in a GROUP BY clause. For example, a11.Report column in "select a11.Report from LU_REPORT a11".

2: Select Group By: Column was used in the GROUP BY clause. For example, a11.Report column in "select a11.Report, sum(a11.Profit) from LU_REPORT group by a11.Report".

4: Select Aggregate: Column was used for aggregation. For example, a11.Report column in "select count(a11.Report) from LU_REPORT".

8: From: Column was used in a FROM clause.

16: Where: Column was used in a WHERE clause.

17: Order By: Column was used in an ORDER BY clause.

IS_SQL_EXEC_IND Lookup table for the SQL Execution indicator.

IS_TABLE Provides descriptive information about the logical tables being monitored.

IS_TEMP Provides descriptive information about the templates being monitored.

IS_TRANS Provides descriptive information about the transformations being monitored.

IS_TRANS_MAP Lookup table for the transformation mapping types.

IS_TRANSMIT Provides descriptive information about the information transmitters being monitored.

TM_HOUR Lookup table for Hour in the Time hierarchy.

TM_MINUTE Lookup table for Minute in the Time hierarchy.
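Lookup tables like IS_SQL_CLAUSE_TYPE are typically joined to fact rows to translate integer codes into readable names. A plain dictionary does the same job when post-processing extracted statistics in a script; the mapping below is copied from the IS_SQL_CLAUSE_TYPE values listed above, and the helper name is invented.

```python
# Mapping taken from the IS_SQL_CLAUSE_TYPE lookup values listed above.
SQL_CLAUSE_TYPE = {
    1: "Select",
    2: "Select Group By",
    4: "Select Aggregate",
    8: "From",
    16: "Where",
    17: "Order By",
}

def clause_name(type_id):
    """Translate an IS_SQL_CLAUSE_TYPE ID into its clause name."""
    return SQL_CLAUSE_TYPE.get(type_id, "Unknown")

print(clause_name(16))  # Where
```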

Transformation Tables

Table Name Function

DT_MONTH_YTD Transformation table to calculate the Year to Date values for Month.

DT_QUARTER_YTD Transformation table to calculate the Year to Date values for Quarter.

TM_HOUR_DTH Transformation table to calculate the Hour to Day values for Hour.
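A year-to-date transformation table commonly maps each period to itself and every earlier period of the same year, so that aggregating over the mapping yields cumulative values. The sketch below generates rows in that style; it is an illustration of the general pattern, not the actual column layout of DT_MONTH_YTD, and the function name is invented.

```python
def month_ytd_pairs(year):
    """Generate (MONTH_ID, YTD_MONTH_ID) rows in the style of a month
    year-to-date transformation table: each month maps to itself and every
    earlier month of the same year. MONTH_ID format assumed to be YYYYMM."""
    pairs = []
    for month in range(1, 13):
        for earlier in range(1, month + 1):
            pairs.append((year * 100 + month, year * 100 + earlier))
    return pairs

pairs = month_ytd_pairs(2024)
# March 2024 maps to January, February, and March 2024:
print([ytd for m, ytd in pairs if m == 202403])  # [202401, 202402, 202403]
```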

Report and Document Steps


The IS_REP_STEP_TYPE table lists the Intelligence Server tasks involved
in executing a report or a document. These are the possible values for the
IS_REP_STEP_TYP_ID column in the IS_REP_STEP_STATS table and the
IS_DOC_STEP_TYP_ID column in the IS_DOC_STEP_STATS table.


Not all steps are applicable to all types of reports. For example, if you are
not using Intelligent Cubes, those steps are skipped.

Task name Task description

0: Unknown Reserved for MicroStrategy use.

1: MD Object Request The Object Server component in Intelligence Server requests the objects necessary for the report.

2: Close Job Intelligence Server closes the report execution job.

3: SQL Generation The SQL Engine generates the SQL to be executed against the data warehouse.

4: SQL Execution The Query Engine submits the generated SQL to the data warehouse, and receives the result.

5: Analytical Engine The Analytical Engine applies additional processing steps to the data retrieved from the warehouse.

6: Resolution Server The Resolution Server uses the report definition to retrieve objects from the Object Server.

7: Report Net Server The Report Net Server processes report requests and sends them to the Report Server.

8: Element Request The Resolution Server works with the Object Server and Element Server to resolve prompts for report requests.

9: Get Report Instance Intelligence Server receives the report instance from the Report Server.

10: Error Message Send If an error occurs, Intelligence Server sends a message to the user, and logs the error.

11: Output Message Send When the report finishes executing, the output data is sent to the client.

12: Find Report Cache The Report Server searches the cache for a previously run report.

13: Document Execution Intelligence Server executes the datasets needed for the document, and creates the document structure.

14: Document Send Once a document is executed, Intelligence Server sends the output to the client (such as MicroStrategy Developer or Web).

15: Update Report Cache Once a report is executed, the Report Server writes the data to the report cache.

16: Request Execute The client (such as MicroStrategy Developer or Web) requests the execution of a report or document.

17: Data Mart Execute The Query Engine executes the SQL to create the data mart table.

18: Document Data Preparation Intelligence Server prepares the document data, performing tasks such as dataset joins, where applicable.

19: Document Formatting Intelligence Server combines the data for the document with the structure, and formats the output.

20: Document Manipulation Intelligence Server applies the user's manipulations to a document.

21: Apply View Context Intelligence Server executes a view report against an Intelligent Cube.

22: Export Engine The Export Engine formats a report or document for export as a PDF, Excel workbook, or XML.

23: Find Intelligent Cube The SQL Engine matches a view report, or a report that uses dynamic sourcing, with the corresponding Intelligent Cube.

24: Update Intelligent Cube The Query Engine runs the SQL required to refresh the data in the Intelligent Cube.

25: Post-processing Task Reserved for MicroStrategy use.


26: Delivery Distribution Services delivers the report to email, files, printers, or mobile.

27: Persist Result Intelligence Server checks if the conditions for alert-based subscriptions are met. If so, the subscribed report is executed and delivered. If the condition is not met, the job is cancelled.

28: Document Dataset Execution The document is waiting for its dataset report jobs to finish executing.
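When reporting on IS_REP_STEP_STATS, the integer step IDs above are usually decoded back into task names. A small dictionary keyed on IS_REP_STEP_TYP_ID is enough; the subset below is copied from the step list, and the helper name and sample rows are invented.

```python
# Subset of the IS_REP_STEP_TYPE values listed above (IDs match the table).
REPORT_STEP_TYPE = {
    0: "Unknown",
    1: "MD Object Request",
    3: "SQL Generation",
    4: "SQL Execution",
    5: "Analytical Engine",
    15: "Update Report Cache",
}

def label_steps(step_rows):
    """Attach task names to (IS_REP_STEP_TYP_ID, IS_REP_EXEC_TM_MS) rows."""
    return [(REPORT_STEP_TYPE.get(t, f"Step {t}"), ms) for t, ms in step_rows]

print(label_steps([(3, 12), (4, 850)]))
# [('SQL Generation', 12), ('SQL Execution', 850)]
```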

Relationship Tables

Table Name Function

IS_ATT_ATT_FORM Relationship table between Attribute and Attribute Form.

IS_ATT_HIER Relationship table between Attribute and Hierarchy.

IS_COL_TABLE Relationship table between Column and Table.

IS_MET_TEMP Relationship table between Metric and Template.

IS_REP_ATT Relationship table for reports and component attributes.

IS_REP_CONS Relationship table between Consolidation and Report.

IS_REP_DOC Relationship table between Report and Document.

IS_REP_FILT Relationship table between Filter and Report.

IS_REP_MET Relationship table for reports and component metrics.

IS_REP_PROMPT Relationship table between Prompt and Report.

IS_REP_TEMPLATE Relationship table between Template and Report.

IS_SCHED_REL_DOC Relationship table for schedules and associated documents.

IS_SCHED_RELATE Relationship table for schedules and associated reports.


IS_TABLE_FACT Relationship table between Table and Fact.

IS_TEMP_ATT Relationship table between Template and Attribute.

IS_USER_PROJ_SF Relationship table for users and associated security filters.

IS_USR_GP_USER Relationship table between User and User Group.

IS_USR_GP_USR_GP Relationship table between User Group and User Group (Parent).
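The two user-group relationship tables can be combined to answer "which users belong to a group, directly or through nested groups." The sketch below assumes IS_USR_GP_USER yields (group, user) pairs and IS_USR_GP_USR_GP yields (parent group, child group) pairs; that direction, the IDs, and the helper name are all illustrative assumptions, not documented behavior.

```python
# Hypothetical relationship rows (IDs are invented):
# IS_USR_GP_USER  -> group_users: {group_id: {user_id, ...}}
# IS_USR_GP_USR_GP -> group_children: {parent_group_id: {child_group_id, ...}}
group_users = {10: {1, 2}, 11: {3}}
group_children = {10: {11}}

def all_members(group_id, group_users, group_children, seen=None):
    """Collect direct and inherited users of a group by walking child groups."""
    seen = set() if seen is None else seen
    if group_id in seen:  # guard against cyclic group definitions
        return set()
    seen.add(group_id)
    members = set(group_users.get(group_id, ()))
    for child in group_children.get(group_id, ()):
        members |= all_members(child, group_users, group_children, seen)
    return members

print(sorted(all_members(10, group_users, group_children)))  # [1, 2, 3]
```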

Enterprise Manager Metadata Tables


The following is a description of Enterprise Manager metadata tables.

Table Name Function

EM_COMP Defines all MicroStrategy components being monitored. The abbreviation specifies the prefix used on tables relevant to the component. When a new component is added to the MicroStrategy product line, it can be entered in this table for monitoring.

Examples: Intelligence Server, Narrowcast Server

EM_IS_LAST_UPDATE Provides the Data Loading process with a working window that identifies the period during which data should be moved into production area tables.

EM_ITEM Defines all items in each component of the MicroStrategy product line being monitored. When a new item is added to a component, it can be entered in this table for monitoring, without any change to the migration code. This table also specifies the item's object type according to server and the abbreviation used in the lookup table name.

Examples: Report, Server Definition, User

EM_ITEM_PROPS Identifies properties being tracked on a given item for a given component.

Examples: Attribute Number of Parents, Hierarchy Drill Enabled

Primary key: EM_COMP_ID, EM_ITEM_ID, EM_PROP_ID

EM_LOG Stores logging information for Enterprise Manager data loads. The logging option is enabled from the Enterprise Manager console, Tools menu, Options selection.

Data warehouse purges are not logged in this table.

EM_PROPS Shows properties of the Enterprise Manager application (for example, which projects and servers are being tracked).

EM_RELATE_ITEM Contains a list of many-to-many relationship tables and the MicroStrategy items they relate.

EM_SQL Provides the SQL necessary to insert, update, and delete a row from the lookup item table once the necessary information from the component API is available. If the SQL must be changed, make the change in this table (no changes in the code are necessary). This table also provides the SQL used to transform the logged statistics into the lookup tables.

Example: SQL statements to insert an attribute into the lookup attribute table in SQL Server

Enterprise Manager Attributes and Metrics


The following sections list the contents of the Enterprise Manager attributes
folders. These include attributes and shortcuts to metrics that are useful in
creating reports in the Enterprise Manager project. The items in the folders
are grouped by the type of reporting you can do with them.

All Indicators and Flags Attributes

Attribute name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been cancelled.

Child Job Indicator Indicates whether a job was a document dataset or a stand-alone report.

Configuration Object Exists Status Indicates whether a configuration object exists.

Configuration Parameter Value Type Lists all configuration parameter types.

Connection Source Lists all connection sources to Intelligence Server.

Contact Type Lists the executed contact types.

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

DB Error Indicator Indicates whether an execution encountered a database error.


Delivery Status Indicator Indicates whether a delivery was successful.

Delivery Type Lists the type of delivery.

Document Job Status (Deprecated) Lists the statuses of document executions.

Document Job Step Type Lists all possible steps of document job execution.

Document Type Indicates the type of a document or dashboard, such as a Report Services document or dashboard.

Drill from Object Lists the object from which a user drilled when a new report was run because of a drilling action.

Drill Indicator Indicates whether an execution is a result of a drill.

Drill to Object Lists the object to which a user drilled when a new report was run because of a drilling action.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Execution Type Indicator Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.

Export Indicator Indicates whether a report was exported and, if so, indicates its format.

Hierarchy Drilling Indicates whether a hierarchy is used as a drill hierarchy.

Inbox Action Type Lists the types of manipulations that can be performed on a History List message.

Intelligent Cube Action Type Lists actions performed on or against intelligent cubes.

Intelligent Cube Type Lists all intelligent cube types.

Job ErrorCode Lists all the possible errors that can be returned during job executions.

Job Priority Map Lists the priorities of job executions.

Job Priority Number Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.

Object Creation Date Indicates the date on which an object was created.

Object Creation Week of year Indicates the week of the year in which an object was created.

Object Exists Status Indicates whether an object exists.

Object Hidden Status Indicates whether an object is hidden.

Object Modification Date Indicates the date on which an object was last modified.

Object Modification Week of year Indicates the week of the year in which an object was last modified.

Prompt Answer Required Indicates whether a prompt answer was required for the job execution.

Prompt Indicator Indicates whether a job execution was prompted.

Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.

Report Job Status


Lists the statuses of report executions.
(Deprecated)

Report Job Step Type Lists all possible steps of report job execution.

Report Type Indicates the type of a report, such as XDA, relational, and so on.

Report/Document
Indicates whether the execution was a report or a document.
Indicator

Schedule Indicator Indicates whether a job execution was scheduled.

Copyright © 2024 All Rights Reserved 2271


Syst em Ad m in ist r at io n Gu id e

Attribute name Function

Security Filter Indicator Indicates whether a security filter was used in the job execution.

SQL Clause Type Lists the various SQL clause types used by the SQL Engine.

SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
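The Job Priority Number attribute describes three priority bands with default upper limits of 332, 666, and 999. As an illustration only (not a MicroStrategy API), a small sketch that classifies a priority number into those bands:

```python
# Illustrative sketch only -- not a MicroStrategy API. Classifies a job
# priority number into the bands described by the Job Priority Number
# attribute, using the documented default upper limits (332, 666, 999).
PRIORITY_UPPER_LIMITS = [(332, "High"), (666, "Medium"), (999, "Low")]

def priority_band(priority_number: int) -> str:
    """Return the band whose upper limit covers the given number."""
    for upper_limit, band in PRIORITY_UPPER_LIMITS:
        if priority_number <= upper_limit:
            return band
    raise ValueError(f"priority number out of range: {priority_number}")

print(priority_band(100))  # High
print(priority_band(500))  # Medium
```

With the default limits, numbers 0 through 332 map to high priority, 333 through 666 to medium, and 667 through 999 to low.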

Application Objects Attributes

Attribute name Function

Consolidation Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.

Metric Lists all metrics in projects that are set up to be monitored by Enterprise Manager.

Prompt Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template Lists all templates in projects that are set up to be monitored by Enterprise Manager.


Configuration Objects Attributes

Attribute name Function

Address Lists all addresses to which deliveries have been sent.

Configuration Object Owner Lists the owners of configuration objects.

Configuration Parameter Lists all configuration parameters.

Contact Lists all contacts to whom deliveries have been sent.

DB Connection Lists all database connections.

DB Instance Lists all database instances.

Device Lists all devices to which deliveries have been sent.

Event Lists all events being tracked.

Folder Lists all folders within projects.

Intelligence Server Definition Lists all Intelligence Server definitions.

Metadata Lists all monitored metadata.

Owner Lists the owners of all objects.

Project Lists all projects.

Schedule Lists all schedules.

Subscription Lists all executed transmissions.

Transmitter Lists all transmitters.

User Lists all users being tracked.

User Group Lists all user groups.

User Group (Parent) Lists all user groups that are parents of other user groups.


Date and Time Attributes

Attribute name Function

Calendar Week Lists every calendar week, beginning with 2000-01-01, as an integer.

Day Lists all days, beginning in 1990.

Hour Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.

Minute Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.

Month Lists all months, beginning with 2000.

Month of Year Lists all months in a specified year.

Quarter Lists all quarters.

Quarter of Year Lists all quarters of the year.

Week of Year Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.

Weekday Lists all days of the week.

Year Lists all years.
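The Week of Year elements above encode a week as a YYYYWW integer such as 200001. A minimal sketch of that encoding, assuming ISO-8601 week numbering (Enterprise Manager's exact week boundaries may differ):

```python
from datetime import date

# Illustrative sketch: build a YYYYWW integer like the Week of Year
# elements described above (for example, 200001 through 200053).
# Assumption: ISO-8601 week numbering; the warehouse's week boundaries
# may be defined differently.
def week_of_year(d: date) -> int:
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week

print(week_of_year(date(2000, 1, 10)))  # 200002
```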

Delivery Services Attributes and Metrics

Attribute or metric name Function

Address Indicates the address to which a delivery was sent.

Avg number of recipients per subscription Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss) Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs) Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact Indicates all contacts to whom a delivery was sent.

Contact Type Indicates the executed contact types.

Day Indicates the day on which the delivery was sent.

Delivery Status Indicator Indicates whether the delivery was successful.

Delivery Type Indicates the type of delivery.

Device Indicates the type of device to which the delivery was sent.

Document Indicates the document that was delivered.

Hour Indicates the hour on which the delivery was sent.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the monitored metadata.

Minute Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions Metric of the number of Report Services document subscriptions.

Number of Distinct Recipients Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions Metric of the number of report subscriptions.

Number of Distinct Subscriptions Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions Metric of the number of subscriptions that failed.

Number of Executions Metric of the number of executions of a subscription.

Number of File Subscriptions Metric of the number of subscriptions that delivered content via a file location.

Number of History List Subscriptions Metric of the number of subscriptions that delivered content via the History List.

Number of Mobile Subscriptions Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions Metric of the number of subscriptions that delivered content via a printer.

Project Lists the projects.

Report Lists the reports in projects.

Report Job Lists an execution of a report.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicates the schedule that triggered the delivery.

Subscription Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss) Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs) Metric of the sum of all execution times of a subscription, in seconds.
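Many of the duration metrics above come in paired (secs) and (hh:mm:ss) forms. The conversion between them is plain arithmetic; a sketch:

```python
# Illustrative sketch: convert a (secs) duration metric value into the
# hh:mm:ss form used by the paired (hh:mm:ss) metrics.
def secs_to_hhmmss(total_seconds: int) -> str:
    hours, remainder = divmod(int(total_seconds), 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(secs_to_hhmmss(3725))  # 01:02:05
```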


Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT) Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error Metric of the number of document jobs that failed.

DP Number of Users who ran Documents Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the document job.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Report Indicates the reports in the document.

User Indicates the user who ran the document job.


Document Job Step Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Step Sequence Indicates the sequence number for steps in a document job.

Document Job Step Type Indicates the type of step for a document job.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Attribute name Function

Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.

Data Load Project Lists all projects that are being monitored.

Data Load Start Time Lists the timestamp of the start of the data load process for the projects that are being monitored.

Item ID A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.
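As the Item ID attribute explains, EM_IS_LAST_UPDATE mixes one summary row per data load (Item ID = -1) with per-project detail rows (Item ID = 0). A sketch of separating the two against an in-memory SQLite stand-in; the table name comes from the documentation, but the ITEM_ID and PROJECT_NAME columns are hypothetical stand-ins for the real schema:

```python
import sqlite3

# Illustrative sketch: separate the data-load summary row (Item ID = -1)
# from the per-project detail rows (Item ID = 0). EM_IS_LAST_UPDATE is
# the table named in the documentation; the ITEM_ID and PROJECT_NAME
# columns here are hypothetical stand-ins for the real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EM_IS_LAST_UPDATE (ITEM_ID INTEGER, PROJECT_NAME TEXT)")
conn.executemany(
    "INSERT INTO EM_IS_LAST_UPDATE VALUES (?, ?)",
    [(-1, None), (0, "Project A"), (0, "Project B")],
)

# One summary row describes the data load as a whole...
summary_rows = conn.execute(
    "SELECT COUNT(*) FROM EM_IS_LAST_UPDATE WHERE ITEM_ID = -1"
).fetchone()[0]
# ...and the detail rows describe each monitored project.
detail_projects = [
    name for (name,) in conn.execute(
        "SELECT PROJECT_NAME FROM EM_IS_LAST_UPDATE WHERE ITEM_ID = 0"
    )
]
print(summary_rows, detail_projects)  # 1 ['Project A', 'Project B']
```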


Inbox Message Actions Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the manipulation was started.

Document Indicates the document included in the message.

Document Job Indicates the document job that requested the History List message manipulation.

HL Days Since Last Action: Any action Metric of the number of days since any action was performed.

HL Days Since Last Action: Request Metric of the number of days since the last request was made for the contents of a message.

HL Last Action Date: Any Action Metric of the date and time of the last action performed on a message, such as read, deleted, marked as read, and so on.

HL Last Action Date: Request Metric of the date and time of the last request made for the contents of a message.

HL Number of Actions Metric of the number of actions performed on a message.

HL Number of Actions by User Metric of the number of actions performed on a message by each user.

HL Number of Actions with Errors Metric of the number of actions on a message that resulted in an error.

HL Number of Document Jobs Metric of the number of document jobs that result in messages.

HL Number of Messages Metric of the number of messages.

HL Number of Messages with Errors Metric of the number of messages that resulted in an error.

HL Number of Messages Requested Metric of the number of requests for the contents of a message.

HL Number of Report Jobs Metric of the number of report jobs that result from messages.

Hour Indicates the hour the manipulation was started on a History List message.

Inbox Action Indicates the manipulation that was performed on a History List message.

Inbox Action Type Indicates the type of manipulation that was performed on a History List message.

Inbox Message Indicates the message in the History List.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the message.

Metadata Indicates the metadata storing the message.

Minute Indicates the minute the manipulation was started.

Project Indicates the project storing the message.

Report Indicates the report included in the message.

Report Job Indicates the job ID of the report included in the message.

User Indicates the user who manipulated the History List message.

Mobile Client Attributes

Attribute name Function

Cache Hit Indicator Indicates whether a cache was hit during the execution and, if so, what type of cache hit.

Day Indicates the day the action started.

Document Identifies the document used in the request.

Execution Type Indicator Indicates the type of report or document that initiated the execution.

Geocode Indicates the location, in latitude and longitude form, of the user.

Hour Indicates the hour the action started.

Intelligence Server Machine Indicates the Intelligence Server processing the request.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute the action started.

Mobile Device Installation ID Indicates the unique Installation ID of the mobile app.

Mobile Device Type Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.

MSTR App Version Indicates the version of the MicroStrategy app making the request.

Network Type Indicates the type of network used, such as 3G, WIFI, LTE, and so on.

Operating System Indicates the operating system of the mobile device making the request.

Operating System Version Indicates the operating system version of the mobile device making the request.

Project Indicates the project used to initiate the request.

User Indicates the user that initiated the request.


OLAP Services Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Intelligent Cube Indicates the Intelligent Cube that was used.

Intelligent Cube Action Duration (secs) Metric of the duration, in seconds, for an action that was performed on the Intelligent Cube.

Intelligent Cube Action Type Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.

Intelligent Cube Instance Indicates the Intelligent Cube instance in memory that was used for the action.

Intelligent Cube Size (KB) If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.

Intelligent Cube Type Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.

Minute Indicates the minute on which the action was started.

Number of Dynamically Sourced Report Jobs against Intelligent Cubes Metric of how many jobs came from reports that were not based on Intelligent Cubes but were selected by the engine to go against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.

Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.

Number of Intelligent Cube Refreshes Metric of how many times an Intelligent Cube was refreshed.

Number of Intelligent Cube Republishes Metric of how many times an Intelligent Cube was republished.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Number of Users hitting Intelligent Cubes Metric of how many users executed a report or document that used an Intelligent Cube. That is, the number of users using OLAP Services.

Number of View Report Jobs Metric of how many actions were the result of a View Report.

Report Indicates the report that hit the Intelligent Cube.

Performance Monitoring Attributes

Attribute name Function

Counter Category Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.

Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Minute Indicates the minute the action was started.

Performance Monitor Counter Indicates the name of the performance counter and its value type.

Prompt Answers Attributes and Metrics

Attribute or metric name Function

Connection Source Indicates the connection source to Intelligence Server.

Count of Prompt Answers Metric of how many prompts were answered.

Day Indicates the day the prompt was answered.

Document Indicates the document that used the prompt.

Hour Indicates the hour the prompt was answered.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the metadata repository storing the prompt.

Minute Indicates the minute the prompt was answered.

Project Indicates the project storing the prompt.

Prompt Indicates the prompt that was used.

Prompt Answer Indicates the answers for the prompt in various instances.

Prompt Answer Required Indicates whether an answer to the prompt was required.

Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.

Prompt Location Indicates the ID of the location in which a prompt is stored.

Prompt Location Type Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.

Prompt Title Indicates the title of the prompt (the title the user sees when presented during job execution).

Prompt Type Indicates what type of prompt was used, such as date, double, elements, and so on.

Report Indicates the report that used the prompt.

Report Job Indicates the report job that used the prompt.

RP Number of Jobs (IS_PR_ANS_FACT) Metric of how many jobs involved a prompt.

RP Number of Jobs Containing Prompt Answer Value Metric of how many report jobs had a specified prompt answer value.

RP Number of Jobs Not Containing Prompt Answer Value Metric of how many report jobs did not have a specified prompt answer value.

RP Number of Jobs with Unanswered Prompts Metric of how many report jobs had a prompt that was not answered.

Report Job Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been canceled.

Child Job Indicator Indicates whether a job was a document dataset or a standalone report.

Connection Source Indicates the connection source to Intelligence Server.

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

Day Indicates the day on which the report was executed.

DB Instance Indicates the database instance on which the report was executed.

Drill Indicator Indicates whether an execution is a result of a drill.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Export Indicator Indicates whether a report was exported and, if so, indicates its format.

Filter Indicates the filter used on the report.

Hour Indicates the hour on which the report was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the report.

Metadata Indicates the metadata repository that stores the report.

Minute Indicates the minute on which the report execution was started.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Project Indicates the project that stores the report.

Prompt Indicator Indicates whether the report execution was prompted.

Report Indicates the ID of the report that was executed.

Report Job Indicates an execution of a report.

RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Elapsed Duration per Job (secs) (IS_REP_FACT) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.

RP Average Execution Duration per Job (secs) (IS_REP_FACT) Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.

RP Average Prompt Answer Time per Job (hh:mm:ss) Metric of the average time users take to answer the set of prompts in all report jobs.

RP Average Prompt Answer Time per Job (secs) Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.

RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Average Queue Duration per Job (secs) (IS_REP_FACT) Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Execution Duration (hh:mm:ss) Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs) Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Number of Ad Hoc Jobs Metric of how many report jobs resulted from an ad hoc report creation.

RP Number of Cancelled Jobs Metric of how many job executions were canceled.

RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.

RP Number of Jobs (IS_REP_FACT) Metric of how many report jobs were executed.

RP Number of Jobs hitting Database Metric of how many report jobs were executed against the database.

RP Number of Jobs w/o Cache Creation Metric of how many report jobs were executed that did not result in creating a server cache.

RP Number of Jobs w/o Cache Hit Metric of how many report jobs were executed that did not hit a server cache.

RP Number of Jobs w/o Element Loading Metric of how many report jobs were executed that did not result from loading additional attribute elements.

RP Number of Jobs with Cache Creation Metric of how many report jobs were executed that resulted in a server cache being created.

RP Number of Jobs with Cache Hit Metric of how many report jobs were executed that hit a server cache.

RP Number of Jobs with Datamart Creation Metric of how many report jobs were executed that resulted in a data mart being created.

RP Number of Jobs with DB Error Metric of how many report jobs failed because of a database error.

RP Number of Jobs with Element Loading Metric of how many report jobs were executed that resulted from loading additional attribute elements.

RP Number of Jobs with Error Metric of how many report jobs failed because of an error.

RP Number of Jobs with Intelligent Cube Hit Metric of how many report job executions used an Intelligent Cube.

RP Number of Jobs with Security Filter Metric of how many report job executions used a security filter.

RP Number of Jobs with SQL Execution Metric of how many report jobs executed SQL statements.

RP number of Narrowcast Server jobs Metric of how many report job executions were run through MicroStrategy Narrowcast Server.

RP Number of Prompted Jobs Metric of how many report job executions included a prompt.

RP Number of Report Jobs from Document Execution Metric of how many report jobs executed as a result of a document execution.

RP Number of Result Rows Metric of how many result rows were returned from a report execution.

RP Number of Scheduled Jobs Metric of how many report jobs were scheduled.

RP Number of Users who ran reports Metric of how many distinct users ran report jobs.

RP Prompt Answer Duration (hh:mm:ss) Metric of how long users take to answer the set of prompts in report jobs.

RP Prompt Answer Duration (secs) Metric of how long, in seconds, users take to answer the set of prompts in report jobs.

RP Queue Duration (hh:mm:ss) Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs) Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

Schedule Indicates the schedule that began the report execution.

Schedule Indicator Indicates whether the report execution was scheduled.

Security Filter Indicates the security filter used in the report execution.

Security Filter Indicator Indicates whether a security filter was used in the report execution.

SQL Execution Indicator Indicates that SQL was executed during report execution.

Template Indicates the report template that was used.

User Indicates the user that ran the report.

Report Job SQL Pass Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether the execution was ad hoc.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute in which the report job was started.

Project Indicates the project storing the report or document.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job SQL Pass Indicates the SQL statement that was executed during the SQL pass.

Report Job SQL Pass Type Indicates the type of SQL statement that was executed in this SQL pass. Examples are SQL select, SQL insert, SQL create, and so on.

RP Execution Duration (hh:mm:ss) Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs) Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Last Execution Finish Timestamp Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp Metric of the start timestamp when the report job was last executed.

RP Number of DB Tables Accessed Metric of how many database tables were accessed in a report job execution.

RP SQL Size Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether an execution was ad hoc.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Connection Source Indicates the connection source to Intelligence Server.

Copyright © 2024 All Rights Reserved 2293


Syst em Ad m in ist r at io n Gu id e

Attribute or metric
Function
name

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Minute Indicates the minute in which the report job was started.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job Step Sequence Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.

Report Job Step Type Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.

RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT) Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average time, in seconds, the Query Engine takes to process a report job.

RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP CPU Duration (msec) Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.

RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Execution Duration (hh:mm:ss) Metric of the difference between start time and finish time of report job executions. Includes database execution time.

RP Execution Duration (secs) Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.

RP Last Execution Finish Timestamp Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp Metric of the start timestamp when the report job was last executed.

RP Number of Jobs (IS_REP_STEP_FACT) Metric of how many report jobs were executed.

RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT) Metric of how long the Query Engine took to execute SQL for a report job.

RP Query Engine Duration (secs) (IS_REP_STEP_FACT) Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.

RP Queue Duration (hh:mm:ss) Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs) Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT) Metric of how long the SQL Engine took to generate SQL for a report job.
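Several of the duration metrics above come in paired forms, one reported as hh:mm:ss and one in seconds. As a minimal illustration of how the two forms relate (the function name below is invented for this sketch, not a MicroStrategy API):

```python
# Hypothetical helper, not a MicroStrategy API: converts a "(secs)"
# duration metric value into the "hh:mm:ss" form used by its twin metric.
def secs_to_hhmmss(total_secs: int) -> str:
    hours, rem = divmod(total_secs, 3600)   # whole hours, leftover seconds
    minutes, seconds = divmod(rem, 60)      # whole minutes, leftover seconds
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(secs_to_hhmmss(3725))  # 01:02:05
```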

Report Job Tables/Columns Accessed Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether an execution was ad hoc.

Column Indicates the column that was accessed.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day on which the table column was accessed.

DB Table Indicates the table in the database storing the column that was accessed.

Hour Indicates the hour on which the table column was accessed.

Minute Indicates the minute on which the table column was accessed.

Report Indicates the report that accessed the table column.

Report Job Indicates which execution of a report accessed the table column.

RP Number of Jobs (IS_REP_COL_FACT) Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.

SQL Clause Type Indicates which type of SQL clause was used to access the table column.


Schema Objects Attributes

Attribute name Function

Attribute Lists all attributes in projects that are set up to be monitored by Enterprise Manager.

Attribute Form Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.

Column Lists all columns in projects that are set up to be monitored by Enterprise Manager.

DB Table Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.

Fact Lists all facts in projects that are set up to be monitored by Enterprise Manager.

Hierarchy Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.

Table Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.

Transformation Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Attribute name Function

Client Machine Lists all machines that have had users connect to the Intelligence Server.

Intelligence Server Cluster Lists the cluster of Intelligence Servers.

Intelligence Server Machine Lists all machines that have logged statistics as an Intelligence Server.

Web Server Machine Lists all machines used as web servers.


Session Attributes and Metrics

Attribute or metric name Function

Avg. Connection Duration (hh:mm:ss) Metric of the average time connections to an Intelligence Server last.

Avg. Connection Duration (secs) Metric of the average time, in seconds, connections to an Intelligence Server last.

Connection Duration (hh:mm:ss) Metric of the time a connection to an Intelligence Server lasts.

Connection Duration (secs) Metric of the time, in seconds, a connection to an Intelligence Server lasts.

Connection Source Lists all connection sources to Intelligence Server.

Number of Sessions (Report Level) Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.

Number of Users Logged In (Report Level) Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.

Session Indicates a user connection to an Intelligence Server.

All Indicators and Flags Attributes

Attribute name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been cancelled.


Child Job Indicator Indicates whether a job was a document dataset or a stand-alone report.

Configuration Object Exists Status Indicates whether a configuration object exists.

Configuration Parameter Value Type Lists all configuration parameter types.

Connection Source Lists all connection sources to Intelligence Server.

Contact Type Lists the executed contact types.

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

DB Error Indicator Indicates whether an execution encountered a database error.

Delivery Status Indicator Indicates whether a delivery was successful.

Delivery Type Lists the type of delivery.

Document Job Status (Deprecated) Lists the statuses of document executions.

Document Job Step Type Lists all possible steps of document job execution.

Document Type Indicates the type of a document or dashboard, such as a Report Services document or dashboard.

Drill from Object Lists the object from which a user drilled when a new report was run because of a drilling action.

Drill Indicator Indicates whether an execution is a result of a drill.

Drill to Object Lists the object to which a user drilled when a new report was run because of a drilling action.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Execution Type Indicator Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.

Export Indicator Indicates whether a report was exported and, if so, indicates its format.

Hierarchy Drilling Indicates whether a hierarchy is used as a drill hierarchy.

Inbox Action Type Lists the types of manipulations that can be performed on a History List message.

Intelligent Cube Action Type Lists actions performed on or against intelligent cubes.

Intelligent Cube Type Lists all intelligent cube types.

Job ErrorCode Lists all the possible errors that can be returned during job executions.

Job Priority Map Lists the priorities of job executions.

Job Priority Number Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.
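The default range limits for Job Priority Number can be sketched as a small classification. This is an illustration only, not a MicroStrategy API; the function name, band labels, and the assumption that priority numbers run from 0 to 999 are all invented for the sketch:

```python
# Illustration only: classify a job priority number into its band using
# the default upper limits for high, medium, and low priority (332, 666, 999).
def priority_band(priority, high_max=332, medium_max=666, low_max=999):
    if 0 <= priority <= high_max:
        return "high"
    if priority <= medium_max:
        return "medium"
    if priority <= low_max:
        return "low"
    raise ValueError(f"priority {priority} outside 0-{low_max}")

print(priority_band(100), priority_band(500), priority_band(900))  # high medium low
```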

Object Creation Date Indicates the date on which an object was created.

Object Creation Week of year Indicates the week of the year in which an object was created.

Object Exists Status Indicates whether an object exists.

Object Hidden Status Indicates whether an object is hidden.

Object Modification Date Indicates the date on which an object was last modified.


Object Modification Week of year Indicates the week of the year in which an object was last modified.

Prompt Answer Required Indicates whether a prompt answer was required for the job execution.

Prompt Indicator Indicates whether a job execution was prompted.

Report Job SQL Pass Type Lists the types of SQL passes that the Intelligence Server generates.

Report Job Status (Deprecated) Lists the statuses of report executions.

Report Job Step Type Lists all possible steps of report job execution.

Report Type Indicates the type of a report, such as XDA, relational, and so on.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicator Indicates whether a job execution was scheduled.

Security Filter Indicator Indicates whether a security filter was used in the job execution.

SQL Clause Type Lists the various SQL clause types used by the SQL Engine.

SQL Execution Indicator Indicates whether SQL was executed in the job execution.

Application Objects Attributes

Attribute name Function

Consolidation Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.

Metric Lists all metrics in projects that are set up to be monitored by Enterprise Manager.

Prompt Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Attribute name Function

Address Lists all addresses to which deliveries have been sent.

Configuration Object Owner Lists the owners of configuration objects.

Configuration Parameter Lists all configuration parameters.

Contact Lists all contacts to whom deliveries have been sent.

DB Connection Lists all database connections.


DB Instance Lists all database instances.

Device Lists all devices to which deliveries have been sent.

Event Lists all events being tracked.

Folder Lists all folders within projects.

Intelligence Server Definition Lists all Intelligence Server definitions.

Metadata Lists all monitored metadata.

Owner Lists the owners of all objects.

Project Lists all projects.

Schedule Lists all schedules.

Subscription Lists all executed transmissions.

Transmitter Lists all transmitters.

User Lists all users being tracked.

User Group Lists all user groups.

User Group (Parent) Lists all user groups that are parents of other user groups.

Date and Time Attributes

Attribute name Function

Calendar Week Lists every calendar week, beginning with 2000-01-01, as an integer.

Day Lists all days, beginning in 1990.

Hour Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.


Minute Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.

Month Lists all months, beginning with 2000.

Month of Year Lists all months in a specified year.

Quarter Lists all quarters.

Quarter of Year Lists all quarters of the year.

Week of Year Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.
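The Week of Year encoding described above (year times 100 plus the week number) can be sketched as follows. Whether Enterprise Manager uses ISO week numbering is an assumption here, so treat this only as an illustration of the YYYYWW format:

```python
import datetime

# Sketch of the YYYYWW integer format described for Week of Year.
# The use of ISO week numbers is an assumption for this illustration.
def week_of_year_key(d):
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week  # e.g. week 27 of 2001 -> 200127

print(week_of_year_key(datetime.date(2001, 7, 4)))  # 200127
```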

Weekday Lists all days of the week.

Year Lists all years.

Delivery Services Attributes and Metrics

Attribute or metric name Function

Address Indicates the address to which a delivery was sent.

Avg number of recipients per subscription Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss) Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs) Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact Indicates all contacts to whom a delivery was sent.


Contact Type Indicates the executed contact types.

Day Indicates the day on which the delivery was sent.

Delivery Status Indicator Indicates whether the delivery was successful.

Delivery Type Indicates the type of delivery.

Device Indicates the type of device to which the delivery was sent.

Document Indicates the document that was delivered.

Hour Indicates the hour on which the delivery was sent.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the monitored metadata.

Minute Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions Metric of the number of report services document subscriptions.

Number of Distinct Recipients Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions Metric of the number of report subscriptions.

Number of Distinct Subscriptions Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions Metric of the number of subscriptions that failed.

Number of Executions Metric of the number of executions of a subscription.

Number of File Subscriptions Metric of the number of subscriptions that delivered content via file location.

Number of History List Subscriptions Metric of the number of subscriptions that delivered content via the history list.

Number of Mobile Subscriptions Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions Metric of the number of subscriptions that delivered content via a printer.

Project Lists the projects.

Report Lists the reports in projects.

Report Job Lists an execution of a report.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicates the schedule that triggered the delivery.

Subscription Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss) Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs) Metric of the sum of all execution times of a subscription (in seconds).

Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

Copyright © 2024 All Rights Reserved 2306


Syst em Ad m in ist r at io n Gu id e

Attribute or metric name Function

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT) Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error Metric of the number of document jobs that failed.


DP Number of Users who ran Documents Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the document job.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Report Indicates the reports in the document.

User Indicates the user who ran the document job.

Document Job Step Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Step Sequence Indicates the sequence number for steps in a document job.

Copyright © 2024 All Rights Reserved 2308


Syst em Ad m in ist r at io n Gu id e

Attribute or metric name Function

Document Job Step Type Indicates the type of step for a document job.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.


Hour Indicates the hour the document job was executed.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Attribute name Function

Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.

Data Load Project Lists all projects that are being monitored.

Data Load Start Time Lists the timestamp of the start of the data load process for the projects that are being monitored.

Item ID A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.
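The Item ID convention can be used to separate the whole-load summary row from the per-project detail rows. In this sketch the row layout and column names are invented for illustration; only the -1/0 Item ID convention comes from the table above:

```python
# Rows as they might be extracted from EM_IS_LAST_UPDATE; the column
# names here are invented for the example. Only the Item ID values
# (-1 = data load summary row, 0 = project detail row) are documented.
rows = [
    {"item_id": -1, "project": None, "load_secs": 840},
    {"item_id": 0, "project": "Sales", "load_secs": 500},
    {"item_id": 0, "project": "Finance", "load_secs": 340},
]

summary = [r for r in rows if r["item_id"] == -1]  # how long the whole load took
details = [r for r in rows if r["item_id"] == 0]   # per-project load details

print(len(summary), len(details))  # 1 2
```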

Inbox Message Actions Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the manipulation was started.

Document Indicates the document included in the message.

Document Job Indicates the document job that requested the History List message manipulation.


HL Days Since Last Action: Any action Metric of the number of days since any action was performed.

HL Days Since Last Action: Request Metric of the number of days since the last request was made for the contents of a message.

HL Last Action Date: Any Action Metric of the date and time of the last action performed on a message such as read, deleted, marked as read, and so on.

HL Last Action Date: Request Metric of the date and time of the last request made for the contents of a message.

HL Number of Actions Metric of the number of actions performed on a message.

HL Number of Actions by User Metric of the number of actions by user performed on a message.

HL Number of Actions with Errors Metric of the number of actions on a message that resulted in an error.

HL Number of Document Jobs Metric of the number of document jobs that result with messages.

HL Number of Messages Metric of the number of messages.

HL Number of Messages with Errors Metric of the number of messages that resulted in an error.

HL Number of Messages Requested Metric of the number of requests for the contents of a message.

HL Number of Report Jobs Metric of the number of report jobs that result from messages.

Hour Indicates the hour the manipulation was started on a History List message.

Inbox Action Indicates the manipulation that was performed on a History List message.

Inbox Action Type Indicates the type of manipulation that was performed on a History List message.

Inbox Message Indicates the message in the History List.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the message.

Metadata Indicates the metadata storing the message.

Minute Indicates the minute the manipulation was started.

Project Indicates the project storing the message.

Report Indicates the report included in the message.

Report Job Indicates the job ID of the report included in the message.

User Indicates the user who manipulated the History List message.

Mobile Client Attributes

Attribute name Function

Cache Hit Indicator Indicates whether a cache was hit during the execution and, if so, what type of cache hit.

Day Indicates the day the action started.

Document Identifies the document used in the request.

Execution Type Indicator Indicates the type of report or document that initiated the execution.

Geocode Indicates the location, in latitude and longitude form, of the user.

Hour Indicates the hour the action started.

Intelligence Server Machine Indicates the Intelligence Server processing the request.


Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute the action started.

Mobile Device Installation ID Indicates the unique Installation ID of the mobile app.

Mobile Device Type Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.

MSTR App Version Indicates the version of the MicroStrategy app making the request.

Network Type Indicates the type of network used, such as 3G, WIFI, LTE, and so on.

Operating System Indicates the operating system of the mobile device making the request.

Operating System Version Indicates the operating system version of the mobile device making the request.

Project Indicates the project used to initiate the request.

User Indicates the user that initiated the request.

OLAP Services Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Intelligent Cube Indicates the Intelligent Cube that was used.

Intelligent Cube Action Duration (secs) Metric of the duration, in seconds, for an action that was performed on the Intelligent Cube.


Intelligent Cube Action Type Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.

Intelligent Cube Instance Indicates the Intelligent Cube instance in memory that was used for the action.

Intelligent Cube Size (KB) If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.

Intelligent Cube Type Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.

Minute Indicates the minute on which the action was started.

Number of Dynamically Sourced Report Jobs against Intelligent Cubes Metric of how many report jobs were not based on Intelligent Cubes but were directed by the engine against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.

Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.

Number of Intelligent Cube Refreshes Metric of how many times an Intelligent Cube was refreshed.

Number of Intelligent Cube Republishes Metric of how many times an Intelligent Cube was republished.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Number of Users hitting Intelligent Cubes Metric of how many users executed a report or document that used an Intelligent Cube. That is, the number of users using OLAP Services.

Number of View Report Jobs Metric of how many actions were the result of a View Report.

Report Indicates the report that hit the Intelligent Cube.


Performance Monitoring Attributes

Attribute name Function

Counter Category Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.

Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Minute Indicates the minute the action was started.

Performance Monitor Counter Indicates the name of the performance counter and its value type.

Prompt Answers Attributes and Metrics

Attribute or metric name Function

Connection Source Indicates the connection source to Intelligence Server.

Count of Prompt Answers Metric of how many prompts were answered.

Day Indicates the day the prompt was answered.

Document Indicates the document that used the prompt.

Hour Indicates the hour the prompt was answered.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the metadata repository storing the prompt.

Minute Indicates the minute the prompt was answered.

Project Indicates the project storing the prompt.


Prompt Indicates the prompt that was used.

Prompt Answer Indicates the answers for the prompt in various instances.

Prompt Answer Required Indicates whether an answer to the prompt was required.

Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.

Prompt Location Indicates the ID of the location in which a prompt is stored.

Prompt Location Type  Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.

Prompt Title  Indicates the title of the prompt (the title the user sees when presented during job execution).

Prompt Type  Indicates what type of prompt was used, such as date, double, elements, and so on.

Report Indicates the report that used the prompt.

Report Job Indicates the report job that used the prompt.

RP Number of Jobs (IS_PR_ANS_FACT)  Metric of how many jobs involved a prompt.

RP Number of Jobs Containing Prompt Answer Value  Metric of how many report jobs had a specified prompt answer value.

RP Number of Jobs Not Containing Prompt Answer Value  Metric of how many report jobs did not have a specified prompt answer value.

RP Number of Jobs with Unanswered Prompts  Metric of how many report jobs had a prompt that was not answered.


Report Job Attributes and Metrics

Attribute or metric name  Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been canceled.

Child Job Indicator  Indicates whether a job was a document dataset or a standalone report.

Connection Source Indicates the connection source to Intelligence Server.

Cube Hit Indicator  Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator  Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

Day Indicates the day on which the report was executed.

DB Instance  Indicates the database instance on which the report was executed.

Drill Indicator Indicates whether an execution is a result of a drill.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Export Indicator  Indicates whether a report was exported and, if so, indicates its format.

Filter Indicates the filter used on the report.

Hour Indicates the hour on which the report was executed.

Intelligence Server Machine  Indicates the Intelligence Server machine that executed the report.

Metadata Indicates the metadata repository that stores the report.

Minute  Indicates the minute on which the report execution was started.

Number of Jobs with Intelligent Cube Hit  Metric of how many job executions used an Intelligent Cube.

Project  Indicates the project that stores the report.

Prompt Indicator Indicates whether the report execution was prompted.

Report Indicates the ID of the report that was executed.

Report Job Indicates an execution of a report.

RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT)  Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Elapsed Duration per Job (secs) (IS_REP_FACT)  Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT)  Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.

RP Average Execution Duration per Job (secs) (IS_REP_FACT)  Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.

RP Average Prompt Answer Time per Job (hh:mm:ss)  Metric of the average time users take to answer the set of prompts in all report jobs.


RP Average Prompt Answer Time per Job (secs)  Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.

RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT)  Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Average Queue Duration per Job (secs) (IS_REP_FACT)  Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Elapsed Duration (hh:mm:ss)  Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Elapsed Duration (secs)  Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Execution Duration (hh:mm:ss)  Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs)  Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Number of Ad Hoc Jobs  Metric of how many report jobs resulted from an ad hoc report creation.

RP Number of Cancelled Jobs  Metric of how many job executions were canceled.

RP Number of Drill Jobs  Metric of how many job executions resulted from a drill action.

RP Number of Jobs (IS_REP_FACT)  Metric of how many report jobs were executed.

RP Number of Jobs hitting Database  Metric of how many report jobs were executed against the database.


RP Number of Jobs w/o Cache Creation  Metric of how many report jobs were executed that did not result in creating a server cache.

RP Number of Jobs w/o Cache Hit  Metric of how many report jobs were executed that did not hit a server cache.

RP Number of Jobs w/o Element Loading  Metric of how many report jobs were executed that did not result from loading additional attribute elements.

RP Number of Jobs with Cache Creation  Metric of how many report jobs were executed that resulted in a server cache being created.

RP Number of Jobs with Cache Hit  Metric of how many report jobs were executed that hit a server cache.

RP Number of Jobs with Datamart Creation  Metric of how many report jobs were executed that resulted in a data mart being created.

RP Number of Jobs with DB Error  Metric of how many report jobs failed because of a database error.

RP Number of Jobs with Element Loading  Metric of how many report jobs were executed that resulted from loading additional attribute elements.

RP Number of Jobs with Error  Metric of how many report jobs failed because of an error.

RP Number of Jobs with Intelligent Cube Hit  Metric of how many report job executions used an Intelligent Cube.

RP Number of Jobs with Security Filter  Metric of how many report job executions used a security filter.

RP Number of Jobs with SQL Execution  Metric of how many report jobs executed SQL statements.

RP number of Narrowcast Server jobs  Metric of how many report job executions were run through MicroStrategy Narrowcast Server.

RP Number of Prompted Jobs  Metric of how many report job executions included a prompt.


RP Number of Report Jobs from Document Execution  Metric of how many report jobs executed as a result of a document execution.

RP Number of Result Rows  Metric of how many result rows were returned from a report execution.

RP Number of Scheduled Jobs  Metric of how many report jobs were scheduled.

RP Number of Users who ran reports  Metric of how many distinct users ran report jobs.

RP Prompt Answer Duration (hh:mm:ss)  Metric of how long users take to answer the set of prompts in report jobs.

RP Prompt Answer Duration (secs)  Metric of how long, in seconds, users take to answer the set of prompts in report jobs.

RP Queue Duration (hh:mm:ss)  Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs)  Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

Schedule Indicates the schedule that began the report execution.

Schedule Indicator Indicates whether the report execution was scheduled.

Security Filter Indicates the security filter used in the report execution.

Security Filter Indicator  Indicates whether a security filter was used in the report execution.

SQL Execution Indicator  Indicates whether SQL was executed during report execution.

Template Indicates the report template that was used.

User Indicates the user that ran the report.
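
Many of the duration metrics above come in paired (secs) and (hh:mm:ss) forms, and RP Elapsed Duration is described as covering prompt response, queue, and execution time. The relationships can be sketched in Python; the variable names here are illustrative only, not actual statistics columns:

```python
from datetime import timedelta

def hhmmss(seconds: int) -> str:
    # Render a (secs) metric value the way its (hh:mm:ss) twin displays it.
    return str(timedelta(seconds=seconds))

# Hypothetical timings for one report job, in seconds.
prompt_answer = 12   # time the user spent answering prompts
queue = 3            # time waiting in the Intelligence Server queue
execution = 45       # execution time, including database execution

# Per the table, elapsed duration spans prompts, queue, and execution.
elapsed = prompt_answer + queue + execution

print(hhmmss(elapsed))  # 0:01:00
```

The same conversion applies to the DP (document) and subscription duration pairs later in this section.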


Report Job SQL Pass Attributes and Metrics

Attribute or metric name  Function

Ad Hoc Indicator Indicates whether the execution was ad hoc.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Metadata  Indicates the metadata repository storing the report or document.

Minute Indicates the minute in which the report job was started.

Project Indicates the project storing the report or document.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job SQL Pass  Indicates the SQL statement that was executed during the SQL pass.

Report Job SQL Pass Type  Indicates the type of SQL statement that was executed in this SQL pass. Examples are SQL select, SQL insert, SQL create, and so on.

RP Execution Duration (hh:mm:ss)  Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs)  Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Last Execution Finish Timestamp  Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp  Metric of the start timestamp when the report job was last executed.

RP Number of DB Tables Accessed  Metric of how many database tables were accessed in a report job execution.


RP SQL Size Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Attribute or metric name  Function

Ad Hoc Indicator Indicates whether an execution was ad hoc.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Connection Source Indicates the connection source to Intelligence Server.

Cube Hit Indicator  Indicates whether an execution hit an intelligent cube or database.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Minute Indicates the minute in which the report job was started.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job Step Sequence  Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.

Report Job Step Type  Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.

RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT)  Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT)  Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT)  Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT)  Metric of the average time, in seconds, the Query Engine takes to process a report job.

RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT)  Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP CPU Duration (msec)  Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Elapsed Duration (hh:mm:ss)  Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.

RP Elapsed Duration (secs)  Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Execution Duration (hh:mm:ss)  Metric of the difference between start time and finish time of report job executions. Includes database execution time.

RP Execution Duration (secs)  Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.

RP Last Execution Finish Timestamp  Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp  Metric of the start timestamp when the report job was last executed.

RP Number of Jobs (IS_REP_STEP_FACT)  Metric of how many report jobs were executed.

RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)  Metric of how long the Query Engine took to execute SQL for a report job.

RP Query Engine Duration (secs) (IS_REP_STEP_FACT)  Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.

RP Queue Duration (hh:mm:ss)  Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs)  Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)  Metric of how long the SQL Engine took to generate SQL for a report job.

Report Job Tables/Columns Accessed Attributes and Metrics

Attribute or metric name  Function

Ad Hoc Indicator Indicates whether an execution was ad hoc.

Column Indicates the column that was accessed.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day on which the table column was accessed.

DB Table  Indicates the table in the database storing the column that was accessed.


Hour Indicates the hour on which the table column was accessed.

Minute Indicates the minute on which the table column was accessed.

Report Indicates the report that accessed the table column.

Report Job  Indicates which execution of a report accessed the table column.

RP Number of Jobs (IS_REP_COL_FACT)  Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.

SQL Clause Type  Indicates which type of SQL clause was used to access the table column.

Schema Objects Attributes

Attribute name Function

Attribute  Lists all attributes in projects that are set up to be monitored by Enterprise Manager.

Attribute Form  Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.

Column  Lists all columns in projects that are set up to be monitored by Enterprise Manager.

DB Table  Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.

Fact  Lists all facts in projects that are set up to be monitored by Enterprise Manager.

Hierarchy  Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.


Table  Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.

Transformation  Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Attribute name Function

Client Machine  Lists all machines that have had users connect to the Intelligence Server.

Intelligence Server Cluster  Lists the cluster of Intelligence Servers.

Intelligence Server Machine  Lists all machines that have logged statistics as an Intelligence Server.

Web Server Machine Lists all machines used as web servers.

Session Attributes and Metrics

Attribute or metric name  Function

Avg. Connection Duration (hh:mm:ss)  Metric of the average time connections to an Intelligence Server last.

Avg. Connection Duration (secs)  Metric of the average time, in seconds, connections to an Intelligence Server last.

Connection Duration (hh:mm:ss)  Metric of the time a connection to an Intelligence Server lasts.

Connection Duration (secs)  Metric of the time, in seconds, a connection to an Intelligence Server lasts.

Connection Source Lists all connection sources to Intelligence Server.

Number of Sessions (Report Level)  Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.

Number of Users Logged In (Report Level)  Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.

Session Indicates a user connection to an Intelligence Server.

All Indicators and Flags Attributes

Attribute name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator  Indicates whether an execution has created a cache.

Cache Hit Indicator  Indicates whether an execution has hit a cache.

Cancelled Indicator  Indicates whether an execution has been cancelled.

Child Job Indicator  Indicates whether a job was a document dataset or a stand-alone report.

Configuration Object Exists Status  Indicates whether a configuration object exists.

Configuration Parameter Value Type  Lists all configuration parameter types.

Connection Source Lists all connection sources to Intelligence Server.

Contact Type Lists the executed contact types.


Cube Hit Indicator  Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator  Indicates whether a report request failed because of a database error.

Datamart Indicator  Indicates whether an execution created a data mart.

DB Error Indicator  Indicates whether an execution encountered a database error.

Delivery Status Indicator  Indicates whether a delivery was successful.

Delivery Type  Lists the type of delivery.

Document Job Status (Deprecated)  Lists the statuses of document executions.

Document Job Step Type  Lists all possible steps of document job execution.

Document Type  Indicates the type of a document or dashboard, such as a Report Services document or dashboard.

Drill from Object  Lists the object from which a user drilled when a new report was run because of a drilling action.

Drill Indicator  Indicates whether an execution is a result of a drill.

Drill to Object  Lists the object to which a user drilled when a new report was run because of a drilling action.

Element Load Indicator  Indicates whether an execution is a result of an element load.

Error Indicator  Indicates whether an execution encountered an error.

Execution Type Indicator  Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.

Export Indicator  Indicates whether a report was exported and, if so, indicates its format.


Hierarchy Drilling Indicates whether a hierarchy is used as a drill hierarchy.

Inbox Action Type  Lists the types of manipulations that can be performed on a History List message.

Intelligent Cube Action Type  Lists actions performed on or against intelligent cubes.

Intelligent Cube Type  Lists all intelligent cube types.

Job ErrorCode  Lists all the possible errors that can be returned during job executions.

Job Priority Map  Lists the priorities of job executions.

Job Priority Number  Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.

Object Creation Date  Indicates the date on which an object was created.

Object Creation Week of Year  Indicates the week of the year in which an object was created.

Object Exists Status  Indicates whether an object exists.

Object Hidden Status  Indicates whether an object is hidden.

Object Modification Date  Indicates the date on which an object was last modified.

Object Modification Week of Year  Indicates the week of the year in which an object was last modified.

Prompt Answer Required  Indicates whether a prompt answer was required for the job execution.

Prompt Indicator  Indicates whether a job execution was prompted.

Report Job SQL Pass Type  Lists the types of SQL passes that the Intelligence Server generates.

Report Job Status (Deprecated)  Lists the statuses of report executions.

Report Job Step Type Lists all possible steps of report job execution.

Report Type Indicates the type of a report, such as XDA, relational, and so on.

Report/Document
Indicates whether the execution was a report or a document.
Indicator

Schedule Indicator Indicates whether a job execution was scheduled.

Security Filter Indicator Indicates whether a security filter was used in the job execution.

SQL Clause Type Lists the various SQL clause types used by the SQL Engine.

SQL Execution Indicator  Indicates whether SQL was executed in the job execution.
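
The Job Priority Number attribute above enumerates the upper limits of the high, medium, and low priority bands (defaults 332, 666, and 999). A small sketch of how a 0-999 job priority value could map onto those bands; treating the upper bounds as inclusive is an assumption for illustration:

```python
# Default upper limits of the priority ranges, per the Job Priority Number
# attribute. Inclusive upper bounds are assumed here for illustration.
PRIORITY_BANDS = [(332, "high"), (666, "medium"), (999, "low")]

def priority_band(priority: int) -> str:
    # Return the name of the first band whose upper limit covers the value.
    for upper, name in PRIORITY_BANDS:
        if priority <= upper:
            return name
    raise ValueError("priority must be in the range 0-999")

print(priority_band(100))  # high
print(priority_band(500))  # medium
print(priority_band(999))  # low
```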

Application Objects Attributes

Attribute name  Function

Consolidation  Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group  Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document  Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter  Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube  Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.

Metric  Lists all metrics in projects that are set up to be monitored by Enterprise Manager.


Prompt  Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report  Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter  Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template  Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Attribute name Function

Address Lists all addresses to which deliveries have been sent.

Configuration Object Owner  Lists the owners of configuration objects.

Configuration Parameter Lists all configuration parameters.

Contact Lists all contacts to whom deliveries have been sent.

DB Connection Lists all database connections.

DB Instance Lists all database instances.

Device Lists all devices to which deliveries have been sent.

Event Lists all events being tracked.

Folder Lists all folders within projects.

Intelligence Server Definition  Lists all Intelligence Server definitions.

Metadata Lists all monitored metadata.


Owner Lists the owners of all objects.

Project Lists all projects.

Schedule Lists all schedules.

Subscription Lists all executed transmissions.

Transmitter Lists all transmitters.

User Lists all users being tracked.

User Group Lists all user groups.

User Group (Parent) Lists all user groups that are parents of other user groups.

Date and Time Attributes

Attribute name  Function

Calendar Week  Lists every calendar week, beginning with 2000-01-01, as an integer.

Day  Lists all days, beginning in 1990.

Hour  Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.

Minute  Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.

Month Lists all months, beginning with 2000.

Month of Year Lists all months in a specified year.

Quarter Lists all quarters.

Quarter of Year  Lists all quarters of the year.

Week of Year  Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.

Weekday Lists all days of the week.

Year Lists all years.
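
The Week of Year attribute above encodes each week as a YYYYWW integer (200001 through 200053 for 2000, and so on). That encoding can be sketched in Python; ISO week numbering is assumed here for illustration, since the guide does not state which week-numbering rule is used:

```python
from datetime import date

def week_of_year_key(d: date) -> int:
    # Encode a date in the YYYYWW integer form used by the Week of Year
    # attribute, e.g. the second ISO week of 2001 -> 200102.
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week

print(week_of_year_key(date(2001, 1, 8)))  # 200102 under ISO numbering
```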

Delivery Services Attributes and Metrics

Attribute or metric name Function

Address Indicates the address to which a delivery was sent.

Avg number of recipients per subscription  Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss)  Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs)  Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact Indicates all contacts to whom a delivery was sent.

Contact Type Indicates the executed contact types.

Day Indicates the day on which the delivery was sent.

Delivery Status Indicator Indicates whether the delivery was successful.

Delivery Type Indicates the type of delivery.

Device  Indicates the type of device to which the delivery was sent.

Document Indicates the document that was delivered.

Hour Indicates the hour on which the delivery was sent.

Intelligence Server Machine  Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the monitored metadata.

Minute Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions  Metric of the number of report services document subscriptions.

Number of Distinct Recipients  Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions  Metric of the number of report subscriptions.

Number of Distinct Subscriptions  Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions  Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions  Metric of the number of subscriptions that failed.

Number of Executions  Metric of the number of executions of a subscription.

Number of File Subscriptions  Metric of the number of subscriptions that delivered content via file location.

Number of History List Subscriptions  Metric of the number of subscriptions that delivered content via the history list.

Number of Mobile Subscriptions  Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions  Metric of the number of subscriptions that delivered content via a printer.

Project Lists the projects.

Report Lists the reports in projects.


Report Job Lists an execution of a report.

Report/Document Indicator  Indicates whether the execution was a report or a document.

Schedule Indicates the schedule that triggered the delivery.

Subscription Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss)  Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs)  Metric of the sum of all execution times of a subscription, in seconds.

Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

DP Average Elapsed Duration per Job (hh:mm:ss)  Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs)  Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (secs)  Metric of the average duration, in seconds, of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss)  Metric of the average duration of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss)  Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs)  Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss)  Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs)  Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss)  Metric of the duration of a document job's execution.

DP Execution Duration (secs)  Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT)  Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit  Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error  Metric of the number of document jobs that failed.

DP Number of Users who ran Documents  Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit  Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error  Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss)  Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs)  Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Intelligence Server Machine  Indicates the Intelligence Server machine that executed the document job.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Report Indicates the reports in the document.

User Indicates the user who ran the document job.
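The duration metrics above come in paired (hh:mm:ss) and (secs) forms that report the same quantity in two presentations. As an illustration only (this is not product code), a minimal Python sketch of the conversion between the two formats:

```python
def secs_to_hhmmss(total_secs: int) -> str:
    """Render a duration in seconds in the hh:mm:ss style used by the paired metrics."""
    hours, rem = divmod(int(total_secs), 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(secs_to_hhmmss(3725))  # 01:02:05
```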

Document Job Step Attributes and Metrics

Day
    Indicates the day on which the document job executed.
Document
    Indicates which document was executed.
Document Job Step Sequence
    Indicates the sequence number for steps in a document job.
Document Job Step Type
    Indicates the type of step for a document job.
DP Average Elapsed Duration per Job (hh:mm:ss)
    Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.
DP Average Elapsed Duration per Job (secs)
    Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.
DP Average Execution Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions.
DP Average Execution Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions.
DP Average Queue Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions waiting in the queue.
DP Average Queue Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions waiting in the queue.
DP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time (including time for prompt responses) of a document job.
DP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.
DP Execution Duration (hh:mm:ss)
    Metric of the duration of a document job's execution.
DP Execution Duration (secs)
    Metric of the duration, in seconds, of a document job's execution.
DP Queue Duration (hh:mm:ss)
    Metric of the duration of all document job executions waiting in the queue.
DP Queue Duration (secs)
    Metric of the duration, in seconds, of all document job executions waiting in the queue.
Hour
    Indicates the hour the document job was executed.
Metadata
    Indicates the metadata storing the document.
Minute
    Indicates the minute the document job was executed.
Project
    Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Data Load Finish Time
    Displays the timestamp of the end of the data load process for the projects that are being monitored.
Data Load Project
    Lists all projects that are being monitored.
Data Load Start Time
    Lists the timestamp of the start of the data load process for the projects that are being monitored.
Item ID
    A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.
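Given the Item ID convention described above (-1 marks the summary row for the whole data load, 0 marks per-project detail rows), the two kinds of rows can be separated when post-processing a query result. This is a hedged sketch: the row dictionaries and field names below are hypothetical illustrations, and only the Item ID values come from the table description.

```python
# Hypothetical rows as they might come back from a query against the
# EM_IS_LAST_UPDATE table; field names are illustrative only.
rows = [
    {"item_id": -1, "project": None, "duration_secs": 542},    # whole-load summary row
    {"item_id": 0, "project": "Sales", "duration_secs": 310},  # per-project detail row
    {"item_id": 0, "project": "HR", "duration_secs": 232},     # per-project detail row
]

# Item ID == -1: how long the entire data load took.
summary = [r for r in rows if r["item_id"] == -1]
# Item ID == 0: project-level data load details.
details = [r for r in rows if r["item_id"] == 0]

print(summary[0]["duration_secs"])  # 542
print(len(details))                 # 2
```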

Inbox Message Actions Attributes and Metrics

Day
    Indicates the day the manipulation was started.
Document
    Indicates the document included in the message.
Document Job
    Indicates the document job that requested the History List message manipulation.
HL Days Since Last Action: Any action
    Metric of the number of days since any action was performed.
HL Days Since Last Action: Request
    Metric of the number of days since the last request was made for the contents of a message.
HL Last Action Date: Any Action
    Metric of the date and time of the last action performed on a message, such as read, deleted, marked as read, and so on.
HL Last Action Date: Request
    Metric of the date and time of the last request made for the contents of a message.
HL Number of Actions
    Metric of the number of actions performed on a message.
HL Number of Actions by User
    Metric of the number of actions performed on a message, by user.
HL Number of Actions with Errors
    Metric of the number of actions on a message that resulted in an error.
HL Number of Document Jobs
    Metric of the number of document jobs that resulted in messages.
HL Number of Messages
    Metric of the number of messages.
HL Number of Messages with Errors
    Metric of the number of messages that resulted in an error.
HL Number of Messages Requested
    Metric of the number of requests for the contents of a message.
HL Number of Report Jobs
    Metric of the number of report jobs that result from messages.
Hour
    Indicates the hour the manipulation was started on a History List message.
Inbox Action
    Indicates the manipulation that was performed on a History List message.
Inbox Action Type
    Indicates the type of manipulation that was performed on a History List message.
Inbox Message
    Indicates the message in the History List.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the message.
Metadata
    Indicates the metadata storing the message.
Minute
    Indicates the minute the manipulation was started.
Project
    Indicates the project storing the message.
Report
    Indicates the report included in the message.
Report Job
    Indicates the job ID of the report included in the message.
User
    Indicates the user who manipulated the History List message.

Mobile Client Attributes

Cache Hit Indicator
    Indicates whether a cache was hit during the execution and, if so, what type of cache hit.
Day
    Indicates the day the action started.
Document
    Identifies the document used in the request.
Execution Type Indicator
    Indicates the type of report or document that initiated the execution.
Geocode
    Indicates the location, in latitude and longitude form, of the user.
Hour
    Indicates the hour the action started.
Intelligence Server Machine
    Indicates the Intelligence Server processing the request.
Metadata
    Indicates the metadata repository storing the report or document.
Minute
    Indicates the minute the action started.
Mobile Device Installation ID
    Indicates the unique Installation ID of the mobile app.
Mobile Device Type
    Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.
MSTR App Version
    Indicates the version of the MicroStrategy app making the request.
Network Type
    Indicates the type of network used, such as 3G, WIFI, LTE, and so on.
Operating System
    Indicates the operating system of the mobile device making the request.
Operating System Version
    Indicates the operating system version of the mobile device making the request.
Project
    Indicates the project used to initiate the request.
User
    Indicates the user that initiated the request.

OLAP Services Attributes and Metrics

Day
    Indicates the day the action was started.
Hour
    Indicates the hour the action was started.
Intelligent Cube
    Indicates the Intelligent Cube that was used.
Intelligent Cube Action Duration (secs)
    Metric of the duration, in seconds, for an action that was performed on the Intelligent Cube.
Intelligent Cube Action Type
    Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.
Intelligent Cube Instance
    Indicates the Intelligent Cube instance in memory that was used for the action.
Intelligent Cube Size (KB)
    If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.
Intelligent Cube Type
    Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.
Minute
    Indicates the minute on which the action was started.
Number of Dynamically Sourced Report Jobs against Intelligent Cubes
    Metric of how many jobs came from reports not based on an Intelligent Cube but were selected by the engine to run against an Intelligent Cube, because the objects on the report matched the objects on the Intelligent Cube.
Number of Intelligent Cube Publishes
    Metric of how many times an Intelligent Cube was published.
Number of Intelligent Cube Refreshes
    Metric of how many times an Intelligent Cube was refreshed.
Number of Intelligent Cube Republishes
    Metric of how many times an Intelligent Cube was republished.
Number of Jobs with Intelligent Cube Hit
    Metric of how many job executions used an Intelligent Cube.
Number of Users hitting Intelligent Cubes
    Metric of how many users executed a report or document that used an Intelligent Cube; that is, the number of users using OLAP Services.
Number of View Report Jobs
    Metric of how many actions were the result of a View Report.
Report
    Indicates the report that hit the Intelligent Cube.

Performance Monitoring Attributes

Counter Category
    Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.
Counter Instance
    Indicates the instance ID of the counter, for MicroStrategy use.
Day
    Indicates the day the action was started.
Hour
    Indicates the hour the action was started.
Minute
    Indicates the minute the action was started.
Performance Monitor Counter
    Indicates the name of the performance counter and its value type.

Prompt Answers Attributes and Metrics

Connection Source
    Indicates the connection source to Intelligence Server.
Count of Prompt Answers
    Metric of how many prompts were answered.
Day
    Indicates the day the prompt was answered.
Document
    Indicates the document that used the prompt.
Hour
    Indicates the hour the prompt was answered.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the job.
Metadata
    Indicates the metadata repository storing the prompt.
Minute
    Indicates the minute the prompt was answered.
Project
    Indicates the project storing the prompt.
Prompt
    Indicates the prompt that was used.
Prompt Answer
    Indicates the answers for the prompt in various instances.
Prompt Answer Required
    Indicates whether an answer to the prompt was required.
Prompt Instance Answer
    Indicates the answer of an instance of a prompt in a report job.
Prompt Location
    Indicates the ID of the location in which a prompt is stored.
Prompt Location Type
    Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.
Prompt Title
    Indicates the title of the prompt (the title the user sees when the prompt is presented during job execution).
Prompt Type
    Indicates what type of prompt was used, such as date, double, elements, and so on.
Report
    Indicates the report that used the prompt.
Report Job
    Indicates the report job that used the prompt.
RP Number of Jobs (IS_PR_ANS_FACT)
    Metric of how many jobs involved a prompt.
RP Number of Jobs Containing Prompt Answer Value
    Metric of how many report jobs had a specified prompt answer value.
RP Number of Jobs Not Containing Prompt Answer Value
    Metric of how many report jobs did not have a specified prompt answer value.
RP Number of Jobs with Unanswered Prompts
    Metric of how many report jobs had a prompt that was not answered.

Report Job Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution is ad hoc.
Cache Creation Indicator
    Indicates whether an execution has created a cache.
Cache Hit Indicator
    Indicates whether an execution has hit a cache.
Cancelled Indicator
    Indicates whether an execution has been canceled.
Child Job Indicator
    Indicates whether a job was a document dataset or a standalone report.
Connection Source
    Indicates the connection source to Intelligence Server.
Cube Hit Indicator
    Indicates whether an execution hit an intelligent cube or database.
Database Error Indicator
    Indicates whether a report request failed because of a database error.
Datamart Indicator
    Indicates whether an execution created a data mart.
Day
    Indicates the day on which the report was executed.
DB Instance
    Indicates the database instance on which the report was executed.
Drill Indicator
    Indicates whether an execution is a result of a drill.
Element Load Indicator
    Indicates whether an execution is a result of an element load.
Error Indicator
    Indicates whether an execution encountered an error.
Export Indicator
    Indicates whether a report was exported and, if so, indicates its format.
Filter
    Indicates the filter used on the report.
Hour
    Indicates the hour on which the report was executed.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the report.
Metadata
    Indicates the metadata repository that stores the report.
Minute
    Indicates the minute on which the report execution was started.
Number of Jobs with Intelligent Cube Hit
    Metric of how many job executions used an Intelligent Cube.
Project
    Indicates the project that stores the report.
Prompt Indicator
    Indicates whether the report execution was prompted.
Report
    Indicates the ID of the report that was executed.
Report Job
    Indicates an execution of a report.
RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Elapsed Duration per Job (secs) (IS_REP_FACT)
    Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.
RP Average Execution Duration per Job (secs) (IS_REP_FACT)
    Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.
RP Average Prompt Answer Time per Job (hh:mm:ss)
    Metric of the average time users take to answer the set of prompts in all report jobs.
RP Average Prompt Answer Time per Job (secs)
    Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.
RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.
RP Average Queue Duration per Job (secs) (IS_REP_FACT)
    Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.
RP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Execution Duration (hh:mm:ss)
    Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs)
    Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Number of Ad Hoc Jobs
    Metric of how many report jobs resulted from an ad hoc report creation.
RP Number of Cancelled Jobs
    Metric of how many job executions were canceled.
RP Number of Drill Jobs
    Metric of how many job executions resulted from a drill action.
RP Number of Jobs (IS_REP_FACT)
    Metric of how many report jobs were executed.
RP Number of Jobs hitting Database
    Metric of how many report jobs were executed against the database.
RP Number of Jobs w/o Cache Creation
    Metric of how many report jobs were executed that did not result in creating a server cache.
RP Number of Jobs w/o Cache Hit
    Metric of how many report jobs were executed that did not hit a server cache.
RP Number of Jobs w/o Element Loading
    Metric of how many report jobs were executed that did not result from loading additional attribute elements.
RP Number of Jobs with Cache Creation
    Metric of how many report jobs were executed that resulted in a server cache being created.
RP Number of Jobs with Cache Hit
    Metric of how many report jobs were executed that hit a server cache.
RP Number of Jobs with Datamart Creation
    Metric of how many report jobs were executed that resulted in a data mart being created.
RP Number of Jobs with DB Error
    Metric of how many report jobs failed because of a database error.
RP Number of Jobs with Element Loading
    Metric of how many report jobs were executed that resulted from loading additional attribute elements.
RP Number of Jobs with Error
    Metric of how many report jobs failed because of an error.
RP Number of Jobs with Intelligent Cube Hit
    Metric of how many report job executions used an Intelligent Cube.
RP Number of Jobs with Security Filter
    Metric of how many report job executions used a security filter.
RP Number of Jobs with SQL Execution
    Metric of how many report jobs executed SQL statements.
RP number of Narrowcast Server jobs
    Metric of how many report job executions were run through MicroStrategy Narrowcast Server.
RP Number of Prompted Jobs
    Metric of how many report job executions included a prompt.
RP Number of Report Jobs from Document Execution
    Metric of how many report jobs executed as a result of a document execution.
RP Number of Result Rows
    Metric of how many result rows were returned from a report execution.
RP Number of Scheduled Jobs
    Metric of how many report jobs were scheduled.
RP Number of Users who ran reports
    Metric of how many distinct users ran report jobs.
RP Prompt Answer Duration (hh:mm:ss)
    Metric of how long users take to answer the set of prompts in report jobs.
RP Prompt Answer Duration (secs)
    Metric of how long, in seconds, users take to answer the set of prompts in report jobs.
RP Queue Duration (hh:mm:ss)
    Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.
RP Queue Duration (secs)
    Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.
Schedule
    Indicates the schedule that began the report execution.
Schedule Indicator
    Indicates whether the report execution was scheduled.
Security Filter
    Indicates the security filter used in the report execution.
Security Filter Indicator
    Indicates whether a security filter was used in the report execution.
SQL Execution Indicator
    Indicates whether SQL was executed during report execution.
Template
    Indicates the report template that was used.
User
    Indicates the user that ran the report.
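The count metrics above can be combined into rates, for example the share of report jobs that hit a server cache from RP Number of Jobs with Cache Hit and RP Number of Jobs (IS_REP_FACT), analogous to the DP Percentage metrics defined for documents. A minimal, illustrative Python sketch (the function and its inputs are hypothetical, not part of the product):

```python
def cache_hit_rate(jobs_with_cache_hit: int, total_jobs: int) -> float:
    """Cache-hit rate as a percentage; guards against an empty reporting interval."""
    if total_jobs == 0:
        return 0.0
    return 100.0 * jobs_with_cache_hit / total_jobs

print(cache_hit_rate(150, 600))  # 25.0
```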

Report Job SQL Pass Attributes and Metrics

Ad Hoc Indicator
    Indicates whether the execution was ad hoc.
Connection Source
    Indicates the connection source to Intelligence Server.
Day
    Indicates the day on which the job was executed.
Hour
    Indicates the hour in which the report job was executed.
Metadata
    Indicates the metadata repository storing the report or document.
Minute
    Indicates the minute in which the report job was started.
Project
    Indicates the project storing the report or document.
Report
    Indicates the report that was executed.
Report Job
    Indicates an execution of a report.
Report Job SQL Pass
    Indicates the SQL statement that was executed during the SQL pass.
Report Job SQL Pass Type
    Indicates the type of SQL statement that was executed in this SQL pass, such as SQL select, SQL insert, SQL create, and so on.
RP Execution Duration (hh:mm:ss)
    Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs)
    Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Last Execution Finish Timestamp
    Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp
    Metric of the start timestamp when the report job was last executed.
RP Number of DB Tables Accessed
    Metric of how many database tables were accessed in a report job execution.
RP SQL Size
    Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution was ad hoc.
Cache Hit Indicator
    Indicates whether an execution has hit a cache.
Connection Source
    Indicates the connection source to Intelligence Server.
Cube Hit Indicator
    Indicates whether an execution hit an intelligent cube or database.
Day
    Indicates the day on which the job was executed.
Hour
    Indicates the hour in which the report job was executed.
Minute
    Indicates the minute in which the report job was started.
Report
    Indicates the report that was executed.
Report Job
    Indicates an execution of a report.
Report Job Step Sequence
    Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.
Report Job Step Type
    Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.
RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT)
    Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average time, in seconds, the Query Engine takes to process a report job.
RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.
RP CPU Duration (msec)
    Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.
RP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Execution Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of report job executions. Includes database execution time.
RP Execution Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.
RP Last Execution Finish Timestamp
    Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp
    Metric of the start timestamp when the report job was last executed.
RP Number of Jobs (IS_REP_STEP_FACT)
    Metric of how many report jobs were executed.
RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)
    Metric of how long the Query Engine took to execute SQL for a report job.
RP Query Engine Duration (secs) (IS_REP_STEP_FACT)
    Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.
RP Queue Duration (hh:mm:ss)
    Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.
RP Queue Duration (secs)
    Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.
RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)
    Metric of how long the SQL Engine took to generate SQL for a report job.

Report Job Tables/Columns Accessed Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution was ad hoc.
Column
    Indicates the column that was accessed.
Connection Source
    Indicates the connection source to Intelligence Server.
Day
    Indicates the day on which the table column was accessed.
DB Table
    Indicates the table in the database storing the column that was accessed.
Hour
    Indicates the hour on which the table column was accessed.
Minute
    Indicates the minute on which the table column was accessed.
Report
    Indicates the report that accessed the table column.
Report Job
    Indicates which execution of a report accessed the table column.
RP Number of Jobs (IS_REP_COL_FACT)
    Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.
SQL Clause Type
    Indicates which type of SQL clause was used to access the table column.

Schema Objects Attributes

Attribute
    Lists all attributes in projects that are set up to be monitored by Enterprise Manager.
Attribute Form
    Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.
Column
    Lists all columns in projects that are set up to be monitored by Enterprise Manager.
DB Table
    Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.
Fact
    Lists all facts in projects that are set up to be monitored by Enterprise Manager.
Hierarchy
    Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.
Table
    Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.
Transformation
    Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Client Machine
    Lists all machines that have had users connect to the Intelligence Server.
Intelligence Server Cluster
    Lists the cluster of Intelligence Servers.
Intelligence Server Machine
    Lists all machines that have logged statistics as an Intelligence Server.
Web Server Machine
    Lists all machines used as web servers.

Session Attributes and Metrics

Avg. Connection Duration (hh:mm:ss)
    Metric of the average time connections to an Intelligence Server last.
Avg. Connection Duration (secs)
    Metric of the average time, in seconds, connections to an Intelligence Server last.
Connection Duration (hh:mm:ss)
    Metric of the time a connection to an Intelligence Server lasts.
Connection Duration (secs)
    Metric of the time, in seconds, a connection to an Intelligence Server lasts.
Connection Source
    Lists all connection sources to Intelligence Server.
Number of Sessions (Report Level)
    Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.
Number of Users Logged In (Report Level)
    Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.
Session
    Indicates a user connection to an Intelligence Server.

All Indicators and Flags Attributes

Ad Hoc Indicator
    Indicates whether an execution is ad hoc.
Cache Creation Indicator
    Indicates whether an execution has created a cache.
Cache Hit Indicator
    Indicates whether an execution has hit a cache.
Cancelled Indicator
    Indicates whether an execution has been cancelled.
Child Job Indicator
    Indicates whether a job was a document dataset or a stand-alone report.
Configuration Object Exists Status
    Indicates whether a configuration object exists.
Configuration Parameter Value Type
    Lists all configuration parameter types.
Connection Source
    Lists all connection sources to Intelligence Server.
Contact Type
    Lists the executed contact types.
Cube Hit Indicator
    Indicates whether an execution hit an intelligent cube or database.
Database Error Indicator
    Indicates whether a report request failed because of a database error.
Datamart Indicator
    Indicates whether an execution created a data mart.
DB Error Indicator
    Indicates whether an execution encountered a database error.
Delivery Status Indicator
    Indicates whether a delivery was successful.
Delivery Type
    Lists the type of delivery.
Document Job Status (Deprecated)
    Lists the statuses of document executions.
Document Job Step Type
    Lists all possible steps of document job execution.
Document Type
    Indicates the type of a document or dashboard, such as a Report Services document or dashboard.
Drill from Object
    Lists the object from which a user drilled when a new report was run because of a drilling action.
Drill Indicator
    Indicates whether an execution is a result of a drill.
Drill to Object
    Lists the object to which a user drilled when a new report was run because of a drilling action.
Element Load Indicator
    Indicates whether an execution is a result of an element load.
Error Indicator
    Indicates whether an execution encountered an error.
Execution Type Indicator
    Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.
Export Indicator
    Indicates whether a report was exported and, if so, indicates its format.
Hierarchy Drilling
    Indicates whether a hierarchy is used as a drill hierarchy.
Inbox Action Type
    Lists the types of manipulations that can be performed on a History List message.
Intelligent Cube Action Type
    Lists actions performed on or against intelligent cubes.
Intelligent Cube Type
    Lists all intelligent cube types.
Job ErrorCode
    Lists all the possible errors that can be returned during job executions.
Job Priority Map
    Lists the priorities of job executions.
Job Priority Number
    Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.
Object Creation Date
    Indicates the date on which an object was created.
Object Creation Week of year
    Indicates the week of the year in which an object was created.
Object Exists Status
    Indicates whether an object exists.
Object Hidden Status
    Indicates whether an object is hidden.
Object Modification Date
    Indicates the date on which an object was last modified.
Object Modification Week of year
    Indicates the week of the year in which an object was last modified.
Prompt Answer Required
    Indicates whether a prompt answer was required for the job execution.
Prompt Indicator
    Indicates whether a job execution was prompted.
Report Job SQL Pass Type
    Lists the types of SQL passes that the Intelligence Server generates.
Report Job Status (Deprecated)
    Lists the statuses of report executions.
Report Job Step Type
    Lists all possible steps of report job execution.
Report Type
    Indicates the type of a report, such as XDA, relational, and so on.
Report/Document Indicator
    Indicates whether the execution was a report or a document.
Schedule Indicator
    Indicates whether a job execution was scheduled.
Security Filter Indicator
    Indicates whether a security filter was used in the job execution.
SQL Clause Type
    Lists the various SQL clause types used by the SQL Engine.
SQL Execution Indicator
    Indicates whether SQL was executed in the job execution.

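The Job Priority Number defaults above partition the 0-999 range into three bands. A minimal sketch of that mapping (the function name is hypothetical, and the cutoffs are the defaults named above; deployments may configure different ranges):

```python
def priority_band(priority_number: int) -> str:
    """Map a job priority number (0-999) to its priority band.

    Cutoffs are the default upper limits listed for the Job Priority
    Number attribute: 332 for high, 666 for medium, 999 for low.
    """
    if not 0 <= priority_number <= 999:
        raise ValueError("priority number must be between 0 and 999")
    if priority_number <= 332:
        return "high"
    if priority_number <= 666:
        return "medium"
    return "low"
```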
Application Objects Attributes

Attribute name Function

Consolidation Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.

Metric Lists all metrics in projects that are set up to be monitored by Enterprise Manager.

Prompt Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Attribute name Function

Address Lists all addresses to which deliveries have been sent.

Configuration Object Owner Lists the owners of configuration objects.

Configuration Parameter Lists all configuration parameters.

Contact Lists all contacts to whom deliveries have been sent.

DB Connection Lists all database connections.


DB Instance Lists all database instances.

Device Lists all devices to which deliveries have been sent.

Event Lists all events being tracked.

Folder Lists all folders within projects.

Intelligence Server Definition Lists all Intelligence Server definitions.

Metadata Lists all monitored metadata.

Owner Lists the owners of all objects.

Project Lists all projects.

Schedule Lists all schedules.

Subscription Lists all executed transmissions.

Transmitter Lists all transmitters.

User Lists all users being tracked.

User Group Lists all user groups.

User Group (Parent) Lists all user groups that are parents of other user groups.

Date and Time Attributes

Attribute name Function

Calendar Week Lists every calendar week, beginning with 2000-01-01, as an integer.

Day Lists all days, beginning in 1990.

Hour Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.

Minute Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10:30 AM - 10:31 AM, 10:32 AM - 10:33 AM, and so on.

Month Lists all months, beginning with 2000.

Month of Year Lists all months in a specified year.

Quarter Lists all quarters.

Quarter of Year Lists all quarters of the year.

Week of Year Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.

Weekday Lists all days of the week.

Year Lists all years.

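The Week of Year values described above follow a YYYYWW integer encoding. A minimal sketch of producing such a key from a date (the function name is hypothetical, and ISO week numbering is used as an approximation; the week calendar Enterprise Manager actually uses may differ):

```python
from datetime import date

def week_of_year_key(d: date) -> int:
    """Encode a date in the YYYYWW integer form used by the Week of
    Year attribute (e.g. weeks in 2000 run from 200001 to 200053).
    """
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week
```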
Delivery Services Attributes and Metrics

Attribute or metric name Function

Address Indicates the address to which a delivery was sent.

Avg number of recipients per subscription Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss) Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs) Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact Indicates all contacts to whom a delivery was sent.

Contact Type Indicates the executed contact types.

Day Indicates the day on which the delivery was sent.

Delivery Status Indicator Indicates whether the delivery was successful.

Delivery Type Indicates the type of delivery.

Device Indicates the type of device to which the delivery was sent.

Document Indicates the document that was delivered.

Hour Indicates the hour on which the delivery was sent.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the monitored metadata.

Minute Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions Metric of the number of Report Services document subscriptions.

Number of Distinct Recipients Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions Metric of the number of report subscriptions.

Number of Distinct Subscriptions Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions Metric of the number of subscriptions that failed.

Number of Executions Metric of the number of executions of a subscription.

Number of File Subscriptions Metric of the number of subscriptions that delivered content via file location.

Number of History List Subscriptions Metric of the number of subscriptions that delivered content via the History List.

Number of Mobile Subscriptions Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions Metric of the number of subscriptions that delivered content via a printer.

Project Lists the projects.

Report Lists the reports in projects.

Report Job Lists an execution of a report.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicates the schedule that triggered the delivery.

Subscription Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss) Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs) Metric of the sum of all execution times of a subscription, in seconds.

Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT) Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error Metric of the number of document jobs that failed.

DP Number of Users who ran Documents Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the document job.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Report Indicates the reports in the document.

User Indicates the user who ran the document job.

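Most duration metrics above come in paired (secs) and (hh:mm:ss) forms. The relationship between the pair is a plain unit conversion, sketched here (the function name is illustrative, not part of the product):

```python
def to_hhmmss(seconds: float) -> str:
    """Format a duration reported by a "(secs)" metric in the
    hh:mm:ss form used by its "(hh:mm:ss)" counterpart."""
    total = int(round(seconds))
    hours, rem = divmod(total, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"
```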
Document Job Step Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Step Sequence Indicates the sequence number for steps in a document job.

Document Job Step Type Indicates the type of step for a document job.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Attribute name Function

Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.

Data Load Project Lists all projects that are being monitored.

Data Load Start Time Lists the timestamp of the start of the data load process for the projects that are being monitored.

Item ID A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.

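The Item ID convention above (-1 for the per-load summary row, 0 for project detail rows) can be applied like this; rows are modeled as dicts with an "item_id" key purely for illustration, and the real EM_IS_LAST_UPDATE column names may differ:

```python
def split_data_load_rows(rows):
    """Partition data load rows by Item ID: -1 marks the per-load
    summary row, 0 marks rows with project data load details."""
    summaries = [r for r in rows if r["item_id"] == -1]
    details = [r for r in rows if r["item_id"] == 0]
    return summaries, details
```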
Inbox Message Actions Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the manipulation was started.

Document Indicates the document included in the message.

Document Job Indicates the document job that requested the History List message manipulation.


HL Days Since Last Action: Any action Metric of the number of days since any action was performed.

HL Days Since Last Action: Request Metric of the number of days since the last request was made for the contents of a message.

HL Last Action Date: Any Action Metric of the date and time of the last action performed on a message, such as read, deleted, marked as read, and so on.

HL Last Action Date: Request Metric of the date and time of the last request made for the contents of a message.

HL Number of Actions Metric of the number of actions performed on a message.

HL Number of Actions by User Metric of the number of actions performed on a message, per user.

HL Number of Actions with Errors Metric of the number of actions on a message that resulted in an error.

HL Number of Document Jobs Metric of the number of document jobs that resulted in messages.

HL Number of Messages Metric of the number of messages.

HL Number of Messages with Errors Metric of the number of messages that resulted in an error.

HL Number of Messages Requested Metric of the number of requests for the contents of a message.

HL Number of Report Jobs Metric of the number of report jobs that result from messages.

Hour Indicates the hour the manipulation was started on a History List message.

Inbox Action Indicates the manipulation that was performed on a History List message.

Inbox Action Type Indicates the type of manipulation that was performed on a History List message.

Inbox Message Indicates the message in the History List.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the message.

Metadata Indicates the metadata storing the message.

Minute Indicates the minute the manipulation was started.

Project Indicates the project storing the message.

Report Indicates the report included in the message.

Report Job Indicates the job ID of the report included in the message.

User Indicates the user who manipulated the History List message.

Mobile Client Attributes

Attribute name Function

Cache Hit Indicator Indicates whether a cache was hit during the execution and, if so, what type of cache hit.

Day Indicates the day the action started.

Document Identifies the document used in the request.

Execution Type Indicator Indicates the type of report or document that initiated the execution.

Geocode Indicates the location, in latitude and longitude form, of the user.

Hour Indicates the hour the action started.

Intelligence Server Machine Indicates the Intelligence Server processing the request.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute the action started.

Mobile Device Installation ID Indicates the unique Installation ID of the mobile app.

Mobile Device Type Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.

MSTR App Version Indicates the version of the MicroStrategy app making the request.

Network Type Indicates the type of network used, such as 3G, WIFI, LTE, and so on.

Operating System Indicates the operating system of the mobile device making the request.

Operating System Version Indicates the operating system version of the mobile device making the request.

Project Indicates the project used to initiate the request.

User Indicates the user that initiated the request.

OLAP Services Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Intelligent Cube Indicates the Intelligent Cube that was used.

Intelligent Cube Action Duration (secs) Metric of the duration, in seconds, for an action that was performed on the Intelligent Cube.

Intelligent Cube Action Type Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.

Intelligent Cube Instance Indicates the Intelligent Cube instance in memory that was used for the action.

Intelligent Cube Size (KB) If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.

Intelligent Cube Type Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.

Minute Indicates the minute on which the action was started.

Number of Dynamically Sourced Report Jobs against Intelligent Cubes Metric of how many jobs came from reports that were not based on Intelligent Cubes but were selected by the engine to run against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.

Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.

Number of Intelligent Cube Refreshes Metric of how many times an Intelligent Cube was refreshed.

Number of Intelligent Cube Republishes Metric of how many times an Intelligent Cube was republished.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Number of Users hitting Intelligent Cubes Metric of how many users executed a report or document that used an Intelligent Cube; that is, the number of users using OLAP Services.

Number of View Report Jobs Metric of how many actions were the result of a View Report.

Report Indicates the report that hit the Intelligent Cube.


Performance Monitoring Attributes

Attribute name Function

Counter Category Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.

Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Minute Indicates the minute the action was started.

Performance Monitor Counter Indicates the name of the performance counter and its value type.

Prompt Answers Attributes and Metrics

Attribute or metric name Function

Connection Source Indicates the connection source to Intelligence Server.

Count of Prompt Answers Metric of how many prompts were answered.

Day Indicates the day the prompt was answered.

Document Indicates the document that used the prompt.

Hour Indicates the hour the prompt was answered.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the metadata repository storing the prompt.

Minute Indicates the minute the prompt was answered.

Project Indicates the project storing the prompt.


Prompt Indicates the prompt that was used.

Prompt Answer Indicates the answers for the prompt in various instances.

Prompt Answer Required Indicates whether an answer to the prompt was required.

Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.

Prompt Location Indicates the ID of the location in which a prompt is stored.

Prompt Location Type Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.

Prompt Title Indicates the title of the prompt (the title the user sees when the prompt is presented during job execution).

Prompt Type Indicates what type of prompt was used, such as date, double, elements, and so on.

Report Indicates the report that used the prompt.

Report Job Indicates the report job that used the prompt.

RP Number of Jobs (IS_PR_ANS_FACT) Metric of how many jobs involved a prompt.

RP Number of Jobs Containing Prompt Answer Value Metric of how many report jobs had a specified prompt answer value.

RP Number of Jobs Not Containing Prompt Answer Value Metric of how many report jobs did not have a specified prompt answer value.

RP Number of Jobs with Unanswered Prompts Metric of how many report jobs had a prompt that was not answered.


Report Job Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been canceled.

Child Job Indicator Indicates whether a job was a document dataset or a standalone report.

Connection Source Indicates the connection source to Intelligence Server.

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

Day Indicates the day on which the report was executed.

DB Instance Indicates the database instance on which the report was executed.

Drill Indicator Indicates whether an execution is a result of a drill.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Export Indicator Indicates whether a report was exported and, if so, indicates its format.

Filter Indicates the filter used on the report.

Hour Indicates the hour on which the report was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the report.

Metadata Indicates the metadata repository that stores the report.

Minute Indicates the minute on which the report execution was started.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Project Indicates the project that stores the report.

Prompt Indicator Indicates whether the report execution was prompted.

Report Indicates the ID of the report that was executed.

Report Job Indicates an execution of a report.

RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Elapsed Duration per Job (secs) (IS_REP_FACT) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.

RP Average Execution Duration per Job (secs) (IS_REP_FACT) Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.

RP Average Prompt Answer Time per Job (hh:mm:ss) Metric of the average time users take to answer the set of prompts in all report jobs.

RP Average Prompt Answer Time per Job (secs) Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.

RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Average Queue Duration per Job (secs) (IS_REP_FACT) Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Execution Duration (hh:mm:ss) Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs) Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Number of Ad Hoc Jobs Metric of how many report jobs resulted from an ad hoc report creation.

RP Number of Cancelled Jobs Metric of how many job executions were canceled.

RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.

RP Number of Jobs (IS_REP_FACT) Metric of how many report jobs were executed.

RP Number of Jobs hitting Database Metric of how many report jobs were executed against the database.

RP Number of Jobs w/o Cache Creation Metric of how many report jobs were executed that did not result in creating a server cache.

RP Number of Jobs w/o Cache Hit Metric of how many report jobs were executed that did not hit a server cache.

RP Number of Jobs w/o Element Loading Metric of how many report jobs were executed that did not result from loading additional attribute elements.

RP Number of Jobs with Cache Creation Metric of how many report jobs were executed that resulted in a server cache being created.

RP Number of Jobs with Cache Hit Metric of how many report jobs were executed that hit a server cache.

RP Number of Jobs with Datamart Creation Metric of how many report jobs were executed that resulted in a data mart being created.

RP Number of Jobs with DB Error Metric of how many report jobs failed because of a database error.

RP Number of Jobs with Element Loading Metric of how many report jobs were executed that resulted from loading additional attribute elements.

RP Number of Jobs with Error Metric of how many report jobs failed because of an error.

RP Number of Jobs with Intelligent Cube Hit Metric of how many report job executions used an Intelligent Cube.

RP Number of Jobs with Security Filter Metric of how many report job executions used a security filter.

RP Number of Jobs with SQL Execution Metric of how many report jobs executed SQL statements.

RP number of Narrowcast Server jobs Metric of how many report job executions were run through MicroStrategy Narrowcast Server.

RP Number of Prompted Jobs Metric of how many report job executions included a prompt.

RP Number of Report Jobs from Document Execution Metric of how many report jobs executed as a result of a document execution.

RP Number of Result Rows Metric of how many result rows were returned from a report execution.

RP Number of Scheduled Jobs Metric of how many report jobs were scheduled.

RP Number of Users who ran reports Metric of how many distinct users ran report jobs.

RP Prompt Answer Duration (hh:mm:ss) Metric of how long users take to answer the set of prompts in report jobs.

RP Prompt Answer Duration (secs) Metric of how long, in seconds, users take to answer the set of prompts in report jobs.

RP Queue Duration (hh:mm:ss) Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs) Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

Schedule Indicates the schedule that began the report execution.

Schedule Indicator Indicates whether the report execution was scheduled.

Security Filter Indicates the security filter used in the report execution.

Security Filter Indicator Indicates whether a security filter was used in the report execution.

SQL Execution Indicator Indicates whether SQL was executed during the report execution.

Template Indicates the report template that was used.

User Indicates the user that ran the report.

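The table above describes RP Elapsed Duration as including prompt-response time, queue time, and execution time. A minimal sketch of that decomposition (the function name is hypothetical; Enterprise Manager records each duration independently, so this sum is an illustrative approximation, not how the value is computed):

```python
def rp_elapsed_duration_secs(queue_secs, prompt_answer_secs, execution_secs):
    """Approximate RP Elapsed Duration (secs) as the sum of the
    components it is described as including: time in queue, time
    answering prompts, and execution time."""
    return queue_secs + prompt_answer_secs + execution_secs
```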

Report Job SQL Pass Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether the execution was ad hoc.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute in which the report job was started.

Project Indicates the project storing the report or document.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job SQL Pass Indicates the SQL statement that was executed during the SQL pass.

Report Job SQL Pass Type Indicates the type of SQL statement that was executed in this SQL pass. Examples are SQL select, SQL insert, SQL create, and so on.

RP Execution Duration (hh:mm:ss) Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs) Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Last Execution Finish Timestamp Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp Metric of the start timestamp when the report job was last executed.

RP Number of DB Tables Accessed Metric of how many database tables were accessed in a report job execution.

Copyright © 2024 All Rights Reserved 2381


Syst em Ad m in ist r at io n Gu id e

Attribute or metric
Function
name

RP SQL Size Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.

Cache Hit Indicator: Indicates whether an execution has hit a cache.

Connection Source: Indicates the connection source to Intelligence Server.

Cube Hit Indicator: Indicates whether an execution hit an Intelligent Cube or the database.

Day: Indicates the day on which the job was executed.

Hour: Indicates the hour in which the report job was executed.

Minute: Indicates the minute in which the report job was started.

Report: Indicates the report that was executed.

Report Job: Indicates an execution of a report.

Report Job Step Sequence: Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.

Report Job Step Type: Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.

RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT): Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes database execution time.

RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time, in seconds, the Query Engine takes to process a report job.

RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before they were executed.

RP CPU Duration (msec): Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.

RP Elapsed Duration (hh:mm:ss): Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.

RP Elapsed Duration (secs): Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.

RP Execution Duration (hh:mm:ss): Metric of the difference between start time and finish time of report job executions. Includes database execution time.

RP Execution Duration (secs): Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.

RP Last Execution Finish Timestamp: Metric of the finish timestamp when the report job was last executed.

RP Last Execution Start Timestamp: Metric of the start timestamp when the report job was last executed.

RP Number of Jobs (IS_REP_STEP_FACT): Metric of how many report jobs were executed.

RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the Query Engine took to execute SQL for a report job.

RP Query Engine Duration (secs) (IS_REP_STEP_FACT): Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.

RP Queue Duration (hh:mm:ss): Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs): Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the SQL Engine took to generate SQL for a report job.

Report Job Tables/Columns Accessed Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.

Column: Indicates the column that was accessed.

Connection Source: Indicates the connection source to Intelligence Server.

Day: Indicates the day on which the table column was accessed.

DB Table: Indicates the table in the database storing the column that was accessed.

Hour: Indicates the hour on which the table column was accessed.

Minute: Indicates the minute on which the table column was accessed.

Report: Indicates the report that accessed the table column.

Report Job: Indicates which execution of a report accessed the table column.

RP Number of Jobs (IS_REP_COL_FACT): Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.

SQL Clause Type: Indicates which type of SQL clause was used to access the table column.

Schema Objects Attributes

Attribute: Lists all attributes in projects that are set up to be monitored by Enterprise Manager.

Attribute Form: Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.

Column: Lists all columns in projects that are set up to be monitored by Enterprise Manager.

DB Table: Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.

Fact: Lists all facts in projects that are set up to be monitored by Enterprise Manager.

Hierarchy: Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.

Table: Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.

Transformation: Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Client Machine: Lists all machines that have had users connect to the Intelligence Server.

Intelligence Server Cluster: Lists the cluster of Intelligence Servers.

Intelligence Server Machine: Lists all machines that have logged statistics as an Intelligence Server.

Web Server Machine: Lists all machines used as web servers.

Session Attributes and Metrics

Avg. Connection Duration (hh:mm:ss): Metric of the average time connections to an Intelligence Server last.

Avg. Connection Duration (secs): Metric of the average time, in seconds, connections to an Intelligence Server last.

Connection Duration (hh:mm:ss): Metric of the time a connection to an Intelligence Server lasts.

Connection Duration (secs): Metric of the time, in seconds, a connection to an Intelligence Server lasts.

Connection Source: Lists all connection sources to Intelligence Server.

Number of Sessions (Report Level): Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.

Number of Users Logged In (Report Level): Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.

Session: Indicates a user connection to an Intelligence Server.
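Several metrics above come in paired (secs) and (hh:mm:ss) variants that report the same duration in two formats. A minimal sketch of that conversion (the `to_hhmmss` helper is hypothetical, not a product function):

```python
def to_hhmmss(seconds: float) -> str:
    """Render a duration given in seconds in the hh:mm:ss format
    used by the (hh:mm:ss) metric variants.

    Hypothetical helper for illustration only.
    """
    total = int(seconds)
    hours, remainder = divmod(total, 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"
```

For example, a Connection Duration (secs) value of 3700 corresponds to a Connection Duration (hh:mm:ss) value of 01:01:40.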

All Indicators and Flags Attributes

Ad Hoc Indicator: Indicates whether an execution is ad hoc.

Cache Creation Indicator: Indicates whether an execution has created a cache.

Cache Hit Indicator: Indicates whether an execution has hit a cache.

Cancelled Indicator: Indicates whether an execution has been cancelled.

Child Job Indicator: Indicates whether a job was a document dataset or a stand-alone report.

Configuration Object Exists Status: Indicates whether a configuration object exists.

Configuration Parameter Value Type: Lists all configuration parameter types.

Connection Source: Lists all connection sources to Intelligence Server.

Contact Type: Lists the executed contact types.

Cube Hit Indicator: Indicates whether an execution hit an Intelligent Cube or the database.

Database Error Indicator: Indicates whether a report request failed because of a database error.

Datamart Indicator: Indicates whether an execution created a data mart.

DB Error Indicator: Indicates whether an execution encountered a database error.

Delivery Status Indicator: Indicates whether a delivery was successful.

Delivery Type: Lists the type of delivery.

Document Job Status (Deprecated): Lists the statuses of document executions.

Document Job Step Type: Lists all possible steps of document job execution.

Document Type: Indicates the type of a document or dashboard, such as a Report Services document or dashboard.

Drill from Object: Lists the object from which a user drilled when a new report was run because of a drilling action.

Drill Indicator: Indicates whether an execution is a result of a drill.

Drill to Object: Lists the object to which a user drilled when a new report was run because of a drilling action.

Element Load Indicator: Indicates whether an execution is a result of an element load.

Error Indicator: Indicates whether an execution encountered an error.

Execution Type Indicator: Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.

Export Indicator: Indicates whether a report was exported and, if so, indicates its format.

Hierarchy Drilling: Indicates whether a hierarchy is used as a drill hierarchy.

Inbox Action Type: Lists the types of manipulations that can be performed on a History List message.

Intelligent Cube Action Type: Lists actions performed on or against Intelligent Cubes.

Intelligent Cube Type: Lists all Intelligent Cube types.

Job ErrorCode: Lists all the possible errors that can be returned during job executions.

Job Priority Map: Lists the priorities of job executions.

Job Priority Number: Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.

Object Creation Date: Indicates the date on which an object was created.

Object Creation Week of Year: Indicates the week of the year in which an object was created.

Object Exists Status: Indicates whether an object exists.

Object Hidden Status: Indicates whether an object is hidden.

Object Modification Date: Indicates the date on which an object was last modified.

Object Modification Week of Year: Indicates the week of the year in which an object was last modified.

Prompt Answer Required: Indicates whether a prompt answer was required for the job execution.

Prompt Indicator: Indicates whether a job execution was prompted.

Report Job SQL Pass Type: Lists the types of SQL passes that the Intelligence Server generates.

Report Job Status (Deprecated): Lists the statuses of report executions.

Report Job Step Type: Lists all possible steps of report job execution.

Report Type: Indicates the type of a report, such as XDA, relational, and so on.

Report/Document Indicator: Indicates whether the execution was a report or a document.

Schedule Indicator: Indicates whether a job execution was scheduled.

Security Filter Indicator: Indicates whether a security filter was used in the job execution.

SQL Clause Type: Lists the various SQL clause types used by the SQL Engine.

SQL Execution Indicator: Indicates whether SQL was executed in the job execution.
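The Job Priority Number attribute above gives default upper limits of 332, 666, and 999 for the high, medium, and low priority bands. As a rough illustration of how those limits partition the 0-999 priority range, here is a minimal sketch (the `priority_band` helper is hypothetical, not part of any MicroStrategy API):

```python
def priority_band(priority: int) -> str:
    """Map a job priority (0-999) to its band using the default
    upper limits from the Job Priority Number attribute:
    high <= 332, medium <= 666, low <= 999.

    Hypothetical helper for illustration only.
    """
    if not 0 <= priority <= 999:
        raise ValueError("priority must be in the range 0-999")
    if priority <= 332:
        return "High"
    if priority <= 666:
        return "Medium"
    return "Low"
```

For example, priority_band(100) returns "High" and priority_band(700) returns "Low".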

Application Objects Attributes

Consolidation: Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group: Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document: Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter: Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube: Lists all Intelligent Cubes in projects that are set up to be monitored by Enterprise Manager.

Metric: Lists all metrics in projects that are set up to be monitored by Enterprise Manager.

Prompt: Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report: Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter: Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template: Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Address: Lists all addresses to which deliveries have been sent.

Configuration Object Owner: Lists the owners of configuration objects.

Configuration Parameter: Lists all configuration parameters.

Contact: Lists all contacts to whom deliveries have been sent.

DB Connection: Lists all database connections.

DB Instance: Lists all database instances.

Device: Lists all devices to which deliveries have been sent.

Event: Lists all events being tracked.

Folder: Lists all folders within projects.

Intelligence Server Definition: Lists all Intelligence Server definitions.

Metadata: Lists all monitored metadata.

Owner: Lists the owners of all objects.

Project: Lists all projects.

Schedule: Lists all schedules.

Subscription: Lists all executed transmissions.

Transmitter: Lists all transmitters.

User: Lists all users being tracked.

User Group: Lists all user groups.

User Group (Parent): Lists all user groups that are parents of other user groups.

Date and Time Attributes

Calendar Week: Lists every calendar week, beginning with 2000-01-01, as an integer.

Day: Lists all days, beginning in 1990.

Hour: Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.

Minute: Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.

Month: Lists all months, beginning with 2000.

Month of Year: Lists all months in a specified year.

Quarter: Lists all quarters.

Quarter of Year: Lists all quarters of the year.

Week of Year: Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.

Weekday: Lists all days of the week.

Year: Lists all years.
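The Week of Year values described above concatenate a four-digit year with a two-digit week number (for example, 200001 through 200053). A minimal sketch of that encoding, assuming ISO-style week numbering (Enterprise Manager's exact week rule may differ, so treat this as an illustration of the format only):

```python
from datetime import date

def week_of_year_key(d: date) -> int:
    """Encode a date as the YYYYWW integer format used by the
    Week of Year attribute (e.g. 200001-200053 for weeks in 2000).

    Assumes ISO week numbering; the product's own week rule may
    differ, so this only illustrates the number format.
    """
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week
```

For example, week_of_year_key(date(2000, 1, 10)) yields 200002 under ISO week numbering.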

Delivery Services Attributes and Metrics

Address: Indicates the address to which a delivery was sent.

Avg number of recipients per subscription: Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss): Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs): Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact: Indicates all contacts to whom a delivery was sent.

Contact Type: Indicates the executed contact types.

Day: Indicates the day on which the delivery was sent.

Delivery Status Indicator: Indicates whether the delivery was successful.

Delivery Type: Indicates the type of delivery.

Device: Indicates the type of device to which the delivery was sent.

Document: Indicates the document that was delivered.

Hour: Indicates the hour on which the delivery was sent.

Intelligence Server Machine: Indicates the Intelligence Server machine that executed the job.

Metadata: Indicates the monitored metadata.

Minute: Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions: Metric of the number of Report Services document subscriptions.

Number of Distinct Recipients: Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions: Metric of the number of report subscriptions.

Number of Distinct Subscriptions: Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions: Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions: Metric of the number of subscriptions that failed.

Number of Executions: Metric of the number of executions of a subscription.

Number of File Subscriptions: Metric of the number of subscriptions that delivered content via file location.

Number of History List Subscriptions: Metric of the number of subscriptions that delivered content via the History List.

Number of Mobile Subscriptions: Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions: Metric of the number of subscriptions that delivered content via a printer.

Project: Lists the projects.

Report: Lists the reports in projects.

Report Job: Lists an execution of a report.

Report/Document Indicator: Indicates whether the execution was a report or a document.

Schedule: Indicates the schedule that triggered the delivery.

Subscription: Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss): Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs): Metric of the sum, in seconds, of all execution times of a subscription.

Document Job Attributes and Metrics

Day: Indicates the day on which the document job executed.

Document: Indicates which document was executed.

Document Job: Indicates an execution of a document.

DP Average Elapsed Duration per Job (hh:mm:ss): Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs): Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (secs): Metric of the average duration, in seconds, of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss): Metric of the average duration of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss): Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs): Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss): Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs): Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss): Metric of the duration of a document job's execution.

DP Execution Duration (secs): Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT): Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit: Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error: Metric of the number of document jobs that failed.

DP Number of Users who ran Documents: Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit: Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error: Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss): Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs): Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour: Indicates the hour the document job was executed.

Intelligence Server Machine: Indicates the Intelligence Server machine that executed the document job.

Metadata: Indicates the metadata storing the document.

Minute: Indicates the minute the document job was executed.

Project: Indicates the project storing the document.

Report: Indicates the reports in the document.

User: Indicates the user who ran the document job.

Document Job Step Attributes and Metrics

Day: Indicates the day on which the document job executed.

Document: Indicates which document was executed.

Document Job Step Sequence: Indicates the sequence number for steps in a document job.

Document Job Step Type: Indicates the type of step for a document job.

DP Average Elapsed Duration per Job (hh:mm:ss): Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs): Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss): Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs): Metric of the average duration, in seconds, of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss): Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs): Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss): Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs): Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss): Metric of the duration of a document job's execution.

DP Execution Duration (secs): Metric of the duration, in seconds, of a document job's execution.

DP Queue Duration (hh:mm:ss): Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs): Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour: Indicates the hour the document job was executed.

Metadata: Indicates the metadata storing the document.

Minute: Indicates the minute the document job was executed.

Project: Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Data Load Finish Time: Displays the timestamp of the end of the data load process for the projects that are being monitored.

Data Load Project: Lists all projects that are being monitored.

Data Load Start Time: Lists the timestamp of the start of the data load process for the projects that are being monitored.

Item ID: A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.

Inbox Message Actions Attributes and Metrics

Day: Indicates the day the manipulation was started.

Document: Indicates the document included in the message.

Document Job: Indicates the document job that requested the History List message manipulation.

HL Days Since Last Action: Any action: Metric of the number of days since any action was performed.

HL Days Since Last Action: Request: Metric of the number of days since the last request was made for the contents of a message.

HL Last Action Date: Any Action: Metric of the date and time of the last action performed on a message, such as read, deleted, marked as read, and so on.

HL Last Action Date: Request: Metric of the date and time of the last request made for the contents of a message.

HL Number of Actions: Metric of the number of actions performed on a message.

HL Number of Actions by User: Metric of the number of actions performed on a message, by user.

HL Number of Actions with Errors: Metric of the number of actions on a message that resulted in an error.

HL Number of Document Jobs: Metric of the number of document jobs that resulted in messages.

HL Number of Messages: Metric of the number of messages.

HL Number of Messages with Errors: Metric of the number of messages that resulted in an error.

HL Number of Messages Requested: Metric of the number of requests for the contents of a message.

HL Number of Report Jobs: Metric of the number of report jobs that result from messages.

Hour: Indicates the hour the manipulation was started on a History List message.

Inbox Action: Indicates the manipulation that was performed on a History List message.

Inbox Action Type: Indicates the type of manipulation that was performed on a History List message.

Inbox Message: Indicates the message in the History List.

Intelligence Server Machine: Indicates the Intelligence Server machine that executed the message.

Metadata: Indicates the metadata storing the message.

Minute: Indicates the minute the manipulation was started.

Project: Indicates the project storing the message.

Report: Indicates the report included in the message.

Report Job: Indicates the job ID of the report included in the message.

User: Indicates the user who manipulated the History List message.


Mobile Client Attributes

Cache Hit Indicator: Indicates whether a cache was hit during the execution and, if so, what type of cache hit.

Day: Indicates the day the action started.

Document: Identifies the document used in the request.

Execution Type Indicator: Indicates the type of report or document that initiated the execution.

Geocode: Indicates the location, in latitude and longitude form, of the user.

Hour: Indicates the hour the action started.

Intelligence Server Machine: Indicates the Intelligence Server processing the request.

Metadata: Indicates the metadata repository storing the report or document.

Minute: Indicates the minute the action started.

Mobile Device Installation ID: Indicates the unique Installation ID of the mobile app.

Mobile Device Type: Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.

MSTR App Version: Indicates the version of the MicroStrategy app making the request.

Network Type: Indicates the type of network used, such as 3G, WIFI, LTE, and so on.

Operating System: Indicates the operating system of the mobile device making the request.

Operating System Version: Indicates the operating system version of the mobile device making the request.

Project: Indicates the project used to initiate the request.

User: Indicates the user who initiated the request.

OLAP Services Attributes and Metrics

Day: Indicates the day the action was started.

Hour: Indicates the hour the action was started.

Intelligent Cube: Indicates the Intelligent Cube that was used.

Intelligent Cube Action Duration (secs): Metric of the duration, in seconds, of an action that was performed on the Intelligent Cube.

Intelligent Cube Action Type: Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.

Intelligent Cube Instance: Indicates the Intelligent Cube instance in memory that was used for the action.

Intelligent Cube Size (KB): If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.

Intelligent Cube Type: Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.

Minute: Indicates the minute the action was started.

Number of Dynamically Sourced Report Jobs against Intelligent Cubes: Metric of how many jobs came from reports that were not based on Intelligent Cubes but were selected by the engine to go against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.

Number of Intelligent Cube Publishes: Metric of how many times an Intelligent Cube was published.

Number of Intelligent Cube Refreshes: Metric of how many times an Intelligent Cube was refreshed.

Number of Intelligent Cube Republishes: Metric of how many times an Intelligent Cube was republished.

Number of Jobs with Intelligent Cube Hit: Metric of how many job executions used an Intelligent Cube.

Number of Users hitting Intelligent Cubes: Metric of how many users executed a report or document that used an Intelligent Cube. That is, the number of users using OLAP Services.

Number of View Report Jobs: Metric of how many actions were the result of a View Report.

Report: Indicates the report that hit the Intelligent Cube.

Performance Monitoring Attributes

Counter Category: Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.

Counter Instance: Indicates the instance ID of the counter, for MicroStrategy use.

Day: Indicates the day the action was started.

Hour: Indicates the hour the action was started.

Minute: Indicates the minute the action was started.

Performance Monitor Counter: Indicates the name of the performance counter and its value type.


Prompt Answers Attributes and Metrics

Connection Source: Indicates the connection source to Intelligence Server.

Count of Prompt Answers: Metric of how many prompts were answered.

Day: Indicates the day the prompt was answered.

Document: Indicates the document that used the prompt.

Hour: Indicates the hour the prompt was answered.

Intelligence Server Machine: Indicates the Intelligence Server machine that executed the job.

Metadata: Indicates the metadata repository storing the prompt.

Minute: Indicates the minute the prompt was answered.

Project: Indicates the project storing the prompt.

Prompt: Indicates the prompt that was used.

Prompt Answer: Indicates the answers for the prompt in various instances.

Prompt Answer Required: Indicates whether an answer to the prompt was required.

Prompt Instance Answer: Indicates the answer of an instance of a prompt in a report job.

Prompt Location: Indicates the ID of the location in which a prompt is stored.

Prompt Location Type: Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.

Prompt Title: Indicates the title of the prompt (the title the user sees when presented during job execution).

Prompt Type: Indicates what type of prompt was used, such as date, double, elements, and so on.

Report: Indicates the report that used the prompt.

Report Job: Indicates the report job that used the prompt.

RP Number of Jobs (IS_PR_ANS_FACT): Metric of how many jobs involved a prompt.

RP Number of Jobs Containing Prompt Answer Value: Metric of how many report jobs had a specified prompt answer value.

RP Number of Jobs Not Containing Prompt Answer Value: Metric of how many report jobs did not have a specified prompt answer value.

RP Number of Jobs with Unanswered Prompts: Metric of how many report jobs had a prompt that was not answered.

Report Job Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution is ad hoc.
Cache Creation Indicator: Indicates whether an execution has created a cache.
Cache Hit Indicator: Indicates whether an execution has hit a cache.
Cancelled Indicator: Indicates whether an execution has been canceled.
Child Job Indicator: Indicates whether a job was a document dataset or a standalone report.
Connection Source: Indicates the connection source to Intelligence Server.
Cube Hit Indicator: Indicates whether an execution hit an intelligent cube or the database.
Database Error Indicator: Indicates whether a report request failed because of a database error.
Datamart Indicator: Indicates whether an execution created a data mart.
Day: Indicates the day on which the report was executed.
DB Instance: Indicates the database instance on which the report was executed.
Drill Indicator: Indicates whether an execution is a result of a drill.
Element Load Indicator: Indicates whether an execution is a result of an element load.
Error Indicator: Indicates whether an execution encountered an error.
Export Indicator: Indicates whether a report was exported and, if so, indicates its format.
Filter: Indicates the filter used on the report.
Hour: Indicates the hour on which the report was executed.
Intelligence Server Machine: Indicates the Intelligence Server machine that executed the report.
Metadata: Indicates the metadata repository that stores the report.
Minute: Indicates the minute on which the report execution was started.
Number of Jobs with Intelligent Cube Hit: Metric of how many job executions used an Intelligent Cube.
Project: Indicates the project that stores the report.
Prompt Indicator: Indicates whether the report execution was prompted.
Report: Indicates the ID of the report that was executed.
Report Job: Indicates an execution of a report.
RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT): Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Elapsed Duration per Job (secs) (IS_REP_FACT): Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT): Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.
RP Average Execution Duration per Job (secs) (IS_REP_FACT): Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.
RP Average Prompt Answer Time per Job (hh:mm:ss): Metric of the average time users take to answer the set of prompts in all report jobs.
RP Average Prompt Answer Time per Job (secs): Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.
RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT): Metric of the average time report jobs waited in the Intelligence Server's queue before being executed.
RP Average Queue Duration per Job (secs) (IS_REP_FACT): Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before being executed.
RP Elapsed Duration (hh:mm:ss): Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Elapsed Duration (secs): Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Execution Duration (hh:mm:ss): Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs): Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Number of Ad Hoc Jobs: Metric of how many report jobs resulted from an ad hoc report creation.
RP Number of Cancelled Jobs: Metric of how many job executions were canceled.
RP Number of Drill Jobs: Metric of how many job executions resulted from a drill action.
RP Number of Jobs (IS_REP_FACT): Metric of how many report jobs were executed.
RP Number of Jobs hitting Database: Metric of how many report jobs were executed against the database.
RP Number of Jobs w/o Cache Creation: Metric of how many report jobs were executed that did not result in creating a server cache.
RP Number of Jobs w/o Cache Hit: Metric of how many report jobs were executed that did not hit a server cache.
RP Number of Jobs w/o Element Loading: Metric of how many report jobs were executed that did not result from loading additional attribute elements.
RP Number of Jobs with Cache Creation: Metric of how many report jobs were executed that resulted in a server cache being created.
RP Number of Jobs with Cache Hit: Metric of how many report jobs were executed that hit a server cache.
RP Number of Jobs with Datamart Creation: Metric of how many report jobs were executed that resulted in a data mart being created.
RP Number of Jobs with DB Error: Metric of how many report jobs failed because of a database error.
RP Number of Jobs with Element Loading: Metric of how many report jobs were executed that resulted from loading additional attribute elements.
RP Number of Jobs with Error: Metric of how many report jobs failed because of an error.
RP Number of Jobs with Intelligent Cube Hit: Metric of how many report job executions used an Intelligent Cube.
RP Number of Jobs with Security Filter: Metric of how many report job executions used a security filter.
RP Number of Jobs with SQL Execution: Metric of how many report jobs executed SQL statements.
RP number of Narrowcast Server jobs: Metric of how many report job executions were run through MicroStrategy Narrowcast Server.
RP Number of Prompted Jobs: Metric of how many report job executions included a prompt.
RP Number of Report Jobs from Document Execution: Metric of how many report jobs executed as a result of a document execution.
RP Number of Result Rows: Metric of how many result rows were returned from a report execution.
RP Number of Scheduled Jobs: Metric of how many report jobs were scheduled.
RP Number of Users who ran reports: Metric of how many distinct users ran report jobs.
RP Prompt Answer Duration (hh:mm:ss): Metric of how long users take to answer the set of prompts in report jobs.
RP Prompt Answer Duration (secs): Metric of how long, in seconds, users take to answer the set of prompts in report jobs.
RP Queue Duration (hh:mm:ss): Metric of how long a report job waited in the Intelligence Server's queue before being executed.
RP Queue Duration (secs): Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before being executed.
Schedule: Indicates the schedule that began the report execution.
Schedule Indicator: Indicates whether the report execution was scheduled.
Security Filter: Indicates the security filter used in the report execution.
Security Filter Indicator: Indicates whether a security filter was used in the report execution.
SQL Execution Indicator: Indicates whether SQL was executed during the report execution.
Template: Indicates the report template that was used.
User: Indicates the user who ran the report.
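
The elapsed, prompt-answer, and execution durations above differ only in which portion of a report job's lifetime they cover: elapsed duration runs from request to finish, and splits into prompt-answer time plus queue-and-execution time. A minimal sketch of the relationships, using hypothetical timestamps (the variable names are illustrative, not actual IS_REP_FACT columns):

```python
from datetime import datetime

# Hypothetical timestamps for a single report job; the names are illustrative
# and are not actual IS_REP_FACT column names.
request_time = datetime(2024, 9, 1, 9, 0, 0)   # user submits the report
exec_start   = datetime(2024, 9, 1, 9, 0, 45)  # prompts answered; job enters queue/execution
finish_time  = datetime(2024, 9, 1, 9, 2, 0)   # results returned

# RP Elapsed Duration: start to finish, including prompt responses, queue, and execution.
elapsed_secs = (finish_time - request_time).total_seconds()

# RP Prompt Answer Duration: time the user spent answering prompts.
prompt_secs = (exec_start - request_time).total_seconds()

# Queue plus execution time: the elapsed duration minus the prompt-answer time.
execution_secs = (finish_time - exec_start).total_seconds()

print(elapsed_secs, prompt_secs, execution_secs)  # 120.0 45.0 75.0
```

Note the invariant: elapsed duration equals prompt-answer time plus queue-and-execution time (120 = 45 + 75 in this example).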

Report Job SQL Pass Attributes and Metrics

Ad Hoc Indicator: Indicates whether the execution was ad hoc.
Connection Source: Indicates the connection source to Intelligence Server.
Day: Indicates the day on which the job was executed.
Hour: Indicates the hour in which the report job was executed.
Metadata: Indicates the metadata repository storing the report or document.
Minute: Indicates the minute in which the report job was started.
Project: Indicates the project storing the report or document.
Report: Indicates the report that was executed.
Report Job: Indicates an execution of a report.
Report Job SQL Pass: Indicates the SQL statement that was executed during the SQL pass.
Report Job SQL Pass Type: Indicates the type of SQL statement that was executed in this SQL pass. Examples are SQL select, SQL insert, and SQL create.
RP Execution Duration (hh:mm:ss): Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs): Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Last Execution Finish Timestamp: Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp: Metric of the start timestamp when the report job was last executed.
RP Number of DB Tables Accessed: Metric of how many database tables were accessed in a report job execution.
RP SQL Size: Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.
Cache Hit Indicator: Indicates whether an execution has hit a cache.
Connection Source: Indicates the connection source to Intelligence Server.
Cube Hit Indicator: Indicates whether an execution hit an intelligent cube or the database.
Day: Indicates the day on which the job was executed.
Hour: Indicates the hour in which the report job was executed.
Minute: Indicates the minute in which the report job was started.
Report: Indicates the report that was executed.
Report Job: Indicates an execution of a report.
Report Job Step Sequence: Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.
Report Job Step Type: Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.
RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT): Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time, in seconds, the Query Engine takes to process a report job.
RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before being executed.
RP CPU Duration (msec): Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Elapsed Duration (hh:mm:ss): Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.
RP Elapsed Duration (secs): Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Execution Duration (hh:mm:ss): Metric of the difference between start time and finish time of report job executions. Includes database execution time.
RP Execution Duration (secs): Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.
RP Last Execution Finish Timestamp: Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp: Metric of the start timestamp when the report job was last executed.
RP Number of Jobs (IS_REP_STEP_FACT): Metric of how many report jobs were executed.
RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the Query Engine took to execute SQL for a report job.
RP Query Engine Duration (secs) (IS_REP_STEP_FACT): Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.
RP Queue Duration (hh:mm:ss): Metric of how long a report job waited in the Intelligence Server's queue before being executed.
RP Queue Duration (secs): Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before being executed.
RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the SQL Engine took to generate SQL for a report job.

Report Job Tables/Columns Accessed Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.
Column: Indicates the column that was accessed.
Connection Source: Indicates the connection source to Intelligence Server.
Day: Indicates the day on which the table column was accessed.
DB Table: Indicates the table in the database storing the column that was accessed.
Hour: Indicates the hour on which the table column was accessed.
Minute: Indicates the minute on which the table column was accessed.
Report: Indicates the report that accessed the table column.
Report Job: Indicates which execution of a report accessed the table column.
RP Number of Jobs (IS_REP_COL_FACT): Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.
SQL Clause Type: Indicates which type of SQL clause was used to access the table column.

Schema Objects Attributes

Attribute: Lists all attributes in projects that are set up to be monitored by Enterprise Manager.
Attribute Form: Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.
Column: Lists all columns in projects that are set up to be monitored by Enterprise Manager.
DB Table: Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.
Fact: Lists all facts in projects that are set up to be monitored by Enterprise Manager.
Hierarchy: Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.
Table: Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.
Transformation: Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Client Machine: Lists all machines that have had users connect to the Intelligence Server.
Intelligence Server Cluster: Lists the clusters of Intelligence Servers.
Intelligence Server Machine: Lists all machines that have logged statistics as an Intelligence Server.
Web Server Machine: Lists all machines used as web servers.

Session Attributes and Metrics

Avg. Connection Duration (hh:mm:ss): Metric of the average time connections to an Intelligence Server last.
Avg. Connection Duration (secs): Metric of the average time, in seconds, connections to an Intelligence Server last.
Connection Duration (hh:mm:ss): Metric of the time a connection to an Intelligence Server lasts.
Connection Duration (secs): Metric of the time, in seconds, a connection to an Intelligence Server lasts.
Connection Source: Lists all connection sources to Intelligence Server.
Number of Sessions (Report Level): Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.
Number of Users Logged In (Report Level): Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.
Session: Indicates a user connection to an Intelligence Server.
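
The session metrics above follow the same execution-count versus distinct-count split as the job metrics, and the (hh:mm:ss) metrics are simply the (secs) values reformatted. A minimal sketch over hypothetical session records (the record layout is illustrative only, not the actual statistics-table layout):

```python
# Hypothetical session records; the field names are illustrative only.
sessions = [
    {"user": "alice", "duration_secs": 300},
    {"user": "bob",   "duration_secs": 900},
    {"user": "alice", "duration_secs": 600},
]

# Number of Sessions (Report Level): one count per connection.
n_sessions = len(sessions)

# Number of Users Logged In (Report Level): distinct users only.
n_users = len({s["user"] for s in sessions})

# Avg. Connection Duration (secs), and the same value rendered as hh:mm:ss.
avg_secs = sum(s["duration_secs"] for s in sessions) / n_sessions
hhmmss = "%02d:%02d:%02d" % (avg_secs // 3600, (avg_secs % 3600) // 60, avg_secs % 60)

print(n_sessions, n_users, avg_secs, hhmmss)  # 3 2 600.0 00:10:00
```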

All Indicators and Flags Attributes

Ad Hoc Indicator: Indicates whether an execution is ad hoc.
Cache Creation Indicator: Indicates whether an execution has created a cache.
Cache Hit Indicator: Indicates whether an execution has hit a cache.
Cancelled Indicator: Indicates whether an execution has been cancelled.
Child Job Indicator: Indicates whether a job was a document dataset or a stand-alone report.
Configuration Object Exists Status: Indicates whether a configuration object exists.
Configuration Parameter Value Type: Lists all configuration parameter types.
Connection Source: Lists all connection sources to Intelligence Server.
Contact Type: Lists the executed contact types.
Cube Hit Indicator: Indicates whether an execution hit an intelligent cube or the database.
Database Error Indicator: Indicates whether a report request failed because of a database error.
Datamart Indicator: Indicates whether an execution created a data mart.
DB Error Indicator: Indicates whether an execution encountered a database error.
Delivery Status Indicator: Indicates whether a delivery was successful.
Delivery Type: Lists the types of delivery.
Document Job Status (Deprecated): Lists the statuses of document executions.
Document Job Step Type: Lists all possible steps of document job execution.
Document Type: Indicates the type of a document or dashboard, such as a Report Services document or dashboard.
Drill from Object: Lists the object from which a user drilled when a new report was run because of a drilling action.
Drill Indicator: Indicates whether an execution is a result of a drill.
Drill to Object: Lists the object to which a user drilled when a new report was run because of a drilling action.
Element Load Indicator: Indicates whether an execution is a result of an element load.
Error Indicator: Indicates whether an execution encountered an error.
Execution Type Indicator: Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.
Export Indicator: Indicates whether a report was exported and, if so, indicates its format.
Hierarchy Drilling: Indicates whether a hierarchy is used as a drill hierarchy.
Inbox Action Type: Lists the types of manipulations that can be performed on a History List message.
Intelligent Cube Action Type: Lists actions performed on or against intelligent cubes.
Intelligent Cube Type: Lists all intelligent cube types.
Job ErrorCode: Lists all the possible errors that can be returned during job executions.
Job Priority Map: Lists the priorities of job executions.
Job Priority Number: Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.
Object Creation Date: Indicates the date on which an object was created.
Object Creation Week of year: Indicates the week of the year in which an object was created.
Object Exists Status: Indicates whether an object exists.
Object Hidden Status: Indicates whether an object is hidden.
Object Modification Date: Indicates the date on which an object was last modified.
Object Modification Week of year: Indicates the week of the year in which an object was last modified.
Prompt Answer Required: Indicates whether a prompt answer was required for the job execution.
Prompt Indicator: Indicates whether a job execution was prompted.
Report Job SQL Pass Type: Lists the types of SQL passes that the Intelligence Server generates.
Report Job Status (Deprecated): Lists the statuses of report executions.
Report Job Step Type: Lists all possible steps of report job execution.
Report Type: Indicates the type of a report, such as XDA, relational, and so on.
Report/Document Indicator: Indicates whether the execution was a report or a document.
Schedule Indicator: Indicates whether a job execution was scheduled.
Security Filter Indicator: Indicates whether a security filter was used in the job execution.
SQL Clause Type: Lists the various SQL clause types used by the SQL Engine.
SQL Execution Indicator: Indicates whether SQL was executed in the job execution.

Application Objects Attributes

Consolidation: Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.
Custom Group: Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.
Document: Lists all documents in projects that are set up to be monitored by Enterprise Manager.
Filter: Lists all filters in projects that are set up to be monitored by Enterprise Manager.
Intelligent Cube: Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.
Metric: Lists all metrics in projects that are set up to be monitored by Enterprise Manager.
Prompt: Lists all prompts in projects that are set up to be monitored by Enterprise Manager.
Report: Lists all reports in projects that are set up to be monitored by Enterprise Manager.
Security Filter: Lists all security filters in projects that are set up to be monitored by Enterprise Manager.
Template: Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Address: Lists all addresses to which deliveries have been sent.
Configuration Object Owner: Lists the owners of configuration objects.
Configuration Parameter: Lists all configuration parameters.
Contact: Lists all contacts to whom deliveries have been sent.
DB Connection: Lists all database connections.
DB Instance: Lists all database instances.
Device: Lists all devices to which deliveries have been sent.
Event: Lists all events being tracked.
Folder: Lists all folders within projects.
Intelligence Server Definition: Lists all Intelligence Server definitions.
Metadata: Lists all monitored metadata.
Owner: Lists the owners of all objects.
Project: Lists all projects.
Schedule: Lists all schedules.
Subscription: Lists all executed transmissions.
Transmitter: Lists all transmitters.
User: Lists all users being tracked.
User Group: Lists all user groups.
User Group (Parent): Lists all user groups that are parents of other user groups.

Date and Time Attributes

Calendar Week: Lists every calendar week, beginning with 2000-01-01, as an integer.
Day: Lists all days, beginning in 1990.
Hour: Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.
Minute: Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.
Month: Lists all months, beginning with 2000.
Month of Year: Lists all months in a specified year.
Quarter: Lists all quarters.
Quarter of Year: Lists all quarters of the year.
Week of Year: Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.
Weekday: Lists all days of the week.
Year: Lists all years.
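
The Week of Year values such as 200001 through 200053 pack the year and the week number into a single integer (year × 100 + week). A sketch of the same encoding; note that ISO week numbering is an assumption here, since the guide does not state which week convention Enterprise Manager uses:

```python
from datetime import date

def week_of_year_key(d: date) -> int:
    """Encode a date's year and week number as a single YYYYWW integer."""
    # ISO week numbering is an assumption for this sketch; the guide does not
    # specify which week convention Enterprise Manager uses.
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week

print(week_of_year_key(date(2001, 2, 1)))  # 200105 (week 5 of 2001)
```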

Delivery Services Attributes and Metrics

Address: Indicates the address to which a delivery was sent.
Avg number of recipients per subscription: Metric of the average number of recipients in subscriptions.
Avg Subscription Execution Duration (hh:mm:ss): Metric of the average amount of time subscriptions take to execute.
Avg Subscription Execution Duration (secs): Metric of the average amount of time, in seconds, subscriptions take to execute.
Contact: Indicates all contacts to whom a delivery was sent.
Contact Type: Indicates the executed contact types.
Day: Indicates the day on which the delivery was sent.
Delivery Status Indicator: Indicates whether the delivery was successful.
Delivery Type: Indicates the type of delivery.
Device: Indicates the type of device to which the delivery was sent.
Document: Indicates the document that was delivered.
Hour: Indicates the hour on which the delivery was sent.
Intelligence Server Machine: Indicates the Intelligence Server machine that executed the job.
Metadata: Indicates the monitored metadata.
Minute: Indicates the minute on which the delivery was sent.
Number of Distinct Document Subscriptions: Metric of the number of Report Services document subscriptions.
Number of Distinct Recipients: Metric of the number of recipients that received content from a subscription.
Number of Distinct Report Subscriptions: Metric of the number of report subscriptions.
Number of Distinct Subscriptions: Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.
Number of E-mail Subscriptions: Metric of the number of subscriptions that delivered content via e-mail.
Number of Errored Subscriptions: Metric of the number of subscriptions that failed.
Number of Executions: Metric of the number of executions of a subscription.
Number of File Subscriptions: Metric of the number of subscriptions that delivered content via a file location.
Number of History List Subscriptions: Metric of the number of subscriptions that delivered content via the History List.
Number of Mobile Subscriptions: Metric of the number of subscriptions that delivered content via mobile.
Number of Print Subscriptions: Metric of the number of subscriptions that delivered content via a printer.
Project: Lists the projects.
Report: Lists the reports in projects.
Report Job: Lists an execution of a report.
Report/Document Indicator: Indicates whether the execution was a report or a document.
Schedule: Indicates the schedule that triggered the delivery.
Subscription: Indicates the subscription that triggered the delivery.
Subscription Execution Duration (hh:mm:ss): Metric of the sum of all execution times of a subscription.
Subscription Execution Duration (secs): Metric of the sum, in seconds, of all execution times of a subscription.
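
As the table notes, Number of Distinct Subscriptions counts subscriptions that actually executed, not subscriptions defined in the metadata, while Number of Executions counts every run. A minimal sketch of the difference over a hypothetical execution log (the field names are illustrative only):

```python
# Hypothetical subscription-execution log; the field names are illustrative only.
executions = [
    {"subscription": "daily-sales", "delivery_type": "email", "failed": False},
    {"subscription": "daily-sales", "delivery_type": "email", "failed": False},
    {"subscription": "ops-alert",   "delivery_type": "file",  "failed": True},
    {"subscription": "exec-brief",  "delivery_type": "email", "failed": False},
]

# Number of Executions counts every run; Number of Distinct Subscriptions
# counts each executed subscription once.
n_executions = len(executions)
distinct_subs = len({e["subscription"] for e in executions})

# Number of E-mail Subscriptions: distinct subscriptions delivered via e-mail.
email_subs = len({e["subscription"] for e in executions if e["delivery_type"] == "email"})

# Number of Errored Subscriptions: subscriptions with a failed execution.
errored_subs = len({e["subscription"] for e in executions if e["failed"]})

print(n_executions, distinct_subs, email_subs, errored_subs)  # 4 3 2 1
```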

Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

Copyright © 2024 All Rights Reserved 2424


Syst em Ad m in ist r at io n Gu id e

Attribute or metric name Function

Metric of the average difference between start time and


DP Average Elapsed Duration per
finish time (including time for prompt responses) of all
Job (hh:mm:ss)
document job executions.

Metric of the average difference, in seconds, between


DP Average Elapsed Duration
start time and finish time (including time for prompt
per Job (secs)
responses) of all document job executions.

DP Average Execution Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions.
DP Average Execution Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions.
DP Average Queue Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions waiting in the queue.
DP Average Queue Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions waiting in the queue.
DP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time (including time for prompt responses) of a document job.
DP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.
DP Execution Duration (hh:mm:ss)
    Metric of the duration of a document job's execution.
DP Execution Duration (secs)
    Metric of the duration, in seconds, of a document job's execution.
DP Number of Jobs (IS_DOC_FACT)
    Metric of the number of document jobs that were executed.
DP Number of Jobs with Cache Hit
    Metric of the number of document jobs that hit a cache.
DP Number of Jobs with Error
    Metric of the number of document jobs that failed.
DP Number of Users who ran Documents
    Metric of the number of users who ran document jobs.
DP Percentage of Jobs with Cache Hit
    Metric of the percentage of document jobs that hit a cache.
DP Percentage of Jobs with Error
    Metric of the percentage of document jobs that failed.
DP Queue Duration (hh:mm:ss)
    Metric of the duration of all document job executions waiting in the queue.
DP Queue Duration (secs)
    Metric of the duration, in seconds, of all document job executions waiting in the queue.
Hour
    Indicates the hour the document job was executed.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the document job.
Metadata
    Indicates the metadata storing the document.
Minute
    Indicates the minute the document job was executed.
Project
    Indicates the project storing the document.
Report
    Indicates the reports in the document.
User
    Indicates the user who ran the document job.

Copyright © 2024 All Rights Reserved

System Administration Guide
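Each duration metric above comes in a (secs) and an (hh:mm:ss) variant that report the same quantity in different forms, and the "Average ... per Job" metrics divide a total over the number of job executions. The sketch below illustrates that relationship; the sample durations are made-up values, not data from any Enterprise Manager table.

```python
def to_hhmmss(seconds: float) -> str:
    """Render a (secs) metric value in its (hh:mm:ss) form."""
    s = int(round(seconds))
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

# Hypothetical per-job elapsed durations in seconds (start to finish,
# including prompt-response time) for three document job executions.
elapsed_secs = [75, 120, 45]

# DP Average Elapsed Duration per Job (secs): mean over the executions.
avg_secs = sum(elapsed_secs) / len(elapsed_secs)

print(avg_secs)             # 80.0
print(to_hhmmss(avg_secs))  # 00:01:20
```

The same secs-to-hh:mm:ss reading applies to every metric pair in these tables.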

Document Job Step Attributes and Metrics

Day
    Indicates the day on which the document job executed.
Document
    Indicates which document was executed.
Document Job Step Sequence
    Indicates the sequence number for steps in a document job.
Document Job Step Type
    Indicates the type of step for a document job.
DP Average Elapsed Duration per Job (hh:mm:ss)
    Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.
DP Average Elapsed Duration per Job (secs)
    Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.
DP Average Execution Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions.
DP Average Execution Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions.
DP Average Queue Duration per Job (hh:mm:ss)
    Metric of the average duration of all document job executions waiting in the queue.
DP Average Queue Duration per Job (secs)
    Metric of the average duration, in seconds, of all document job executions waiting in the queue.
DP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time (including time for prompt responses) of a document job.
DP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.
DP Execution Duration (hh:mm:ss)
    Metric of the duration of a document job's execution.
DP Execution Duration (secs)
    Metric of the duration, in seconds, of a document job's execution.
DP Queue Duration (hh:mm:ss)
    Metric of the duration of all document job executions waiting in the queue.
DP Queue Duration (secs)
    Metric of the duration, in seconds, of all document job executions waiting in the queue.
Hour
    Indicates the hour the document job was executed.
Metadata
    Indicates the metadata storing the document.
Minute
    Indicates the minute the document job was executed.
Project
    Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Data Load Finish Time
    Displays the timestamp of the end of the data load process for the projects that are being monitored.
Data Load Project
    Lists all projects that are being monitored.
Data Load Start Time
    Lists the timestamp of the start of the data load process for the projects that are being monitored.
Item ID
    A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.
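The Item ID convention above (-1 for the whole-load summary row, 0 for per-project detail rows) can be used to split EM_IS_LAST_UPDATE rows when inspecting data load history. This is a minimal sketch: only the -1/0 convention comes from the table above, while the field names and values are illustrative, not the table's actual schema.

```python
# Hypothetical rows read from the EM_IS_LAST_UPDATE table. Only the
# Item ID convention (-1 = summary row for the whole data load,
# 0 = per-project detail row) comes from the documentation; the
# field names used here are illustrative.
rows = [
    {"item_id": -1, "project": None,        "duration_secs": 95},
    {"item_id": 0,  "project": "Tutorial",  "duration_secs": 40},
    {"item_id": 0,  "project": "Analytics", "duration_secs": 55},
]

summary = [r for r in rows if r["item_id"] == -1]  # how long the load took
details = [r for r in rows if r["item_id"] == 0]   # per-project load rows

print(summary[0]["duration_secs"], [r["project"] for r in details])
```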

Inbox Message Actions Attributes and Metrics

Day
    Indicates the day the manipulation was started.
Document
    Indicates the document included in the message.
Document Job
    Indicates the document job that requested the History List message manipulation.
HL Days Since Last Action: Any action
    Metric of the number of days since any action was performed.
HL Days Since Last Action: Request
    Metric of the number of days since the last request was made for the contents of a message.
HL Last Action Date: Any Action
    Metric of the date and time of the last action performed on a message, such as read, deleted, marked as read, and so on.
HL Last Action Date: Request
    Metric of the date and time of the last request made for the contents of a message.
HL Number of Actions
    Metric of the number of actions performed on a message.
HL Number of Actions by User
    Metric of the number of actions performed on a message, broken down by user.
HL Number of Actions with Errors
    Metric of the number of actions on a message that resulted in an error.
HL Number of Document Jobs
    Metric of the number of document jobs that result in messages.
HL Number of Messages
    Metric of the number of messages.
HL Number of Messages with Errors
    Metric of the number of messages that resulted in an error.
HL Number of Messages Requested
    Metric of the number of requests for the contents of a message.
HL Number of Report Jobs
    Metric of the number of report jobs that result from messages.
Hour
    Indicates the hour the manipulation was started on a History List message.
Inbox Action
    Indicates the manipulation that was performed on a History List message.
Inbox Action Type
    Indicates the type of manipulation that was performed on a History List message.
Inbox Message
    Indicates the message in the History List.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the message.
Metadata
    Indicates the metadata storing the message.
Minute
    Indicates the minute the manipulation was started.
Project
    Indicates the project storing the message.
Report
    Indicates the report included in the message.
Report Job
    Indicates the job ID of the report included in the message.
User
    Indicates the user who manipulated the History List message.

Mobile Client Attributes

Cache Hit Indicator
    Indicates whether a cache was hit during the execution and, if so, what type of cache hit.
Day
    Indicates the day the action started.
Document
    Identifies the document used in the request.
Execution Type Indicator
    Indicates the type of report or document that initiated the execution.
Geocode
    Indicates the location, in latitude and longitude form, of the user.
Hour
    Indicates the hour the action started.
Intelligence Server Machine
    Indicates the Intelligence Server processing the request.
Metadata
    Indicates the metadata repository storing the report or document.
Minute
    Indicates the minute the action started.
Mobile Device Installation ID
    Indicates the unique Installation ID of the mobile app.
Mobile Device Type
    Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.
MSTR App Version
    Indicates the version of the MicroStrategy app making the request.
Network Type
    Indicates the type of network used, such as 3G, WIFI, LTE, and so on.
Operating System
    Indicates the operating system of the mobile device making the request.
Operating System Version
    Indicates the operating system version of the mobile device making the request.
Project
    Indicates the project used to initiate the request.
User
    Indicates the user that initiated the request.

OLAP Services Attributes and Metrics

Day
    Indicates the day the action was started.
Hour
    Indicates the hour the action was started.
Intelligent Cube
    Indicates the Intelligent Cube that was used.
Intelligent Cube Action Duration (secs)
    Metric of the duration, in seconds, of an action that was performed on the Intelligent Cube.
Intelligent Cube Action Type
    Indicates the type of action taken on the Intelligent Cube, such as cube publish, cube view hit, and so on.
Intelligent Cube Instance
    Indicates the Intelligent Cube instance in memory that was used for the action.
Intelligent Cube Size (KB)
    If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.
Intelligent Cube Type
    Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.
Minute
    Indicates the minute on which the action was started.
Number of Dynamically Sourced Report Jobs against Intelligent Cubes
    Metric of how many jobs from reports that are not based on Intelligent Cubes were directed by the engine against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.
Number of Intelligent Cube Publishes
    Metric of how many times an Intelligent Cube was published.
Number of Intelligent Cube Refreshes
    Metric of how many times an Intelligent Cube was refreshed.
Number of Intelligent Cube Republishes
    Metric of how many times an Intelligent Cube was republished.
Number of Jobs with Intelligent Cube Hit
    Metric of how many job executions used an Intelligent Cube.
Number of Users hitting Intelligent Cubes
    Metric of how many users executed a report or document that used an Intelligent Cube; that is, the number of users using OLAP Services.
Number of View Report Jobs
    Metric of how many actions were the result of a View Report.
Report
    Indicates the report that hit the Intelligent Cube.
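Two of the metrics above differ only in what they count: "Number of Jobs with Intelligent Cube Hit" counts executions, while "Number of Users hitting Intelligent Cubes" counts distinct users. A minimal sketch of that distinction, with made-up data (the pair structure and names are illustrative assumptions, not Enterprise Manager schema):

```python
# Hypothetical (user, cube_hit) pairs for report/document executions;
# both the names and the data are invented for illustration.
executions = [("ann", True), ("bob", False), ("ann", True), ("cal", True)]

# Number of Jobs with Intelligent Cube Hit: executions that used a cube.
jobs_with_hit = sum(1 for _, hit in executions if hit)

# Number of Users hitting Intelligent Cubes: distinct users among them.
users_hitting = {user for user, hit in executions if hit}

print(jobs_with_hit, len(users_hitting))  # 3 2
```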


Performance Monitoring Attributes

Counter Category
    Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.
Counter Instance
    Indicates the instance ID of the counter, for MicroStrategy use.
Day
    Indicates the day the action was started.
Hour
    Indicates the hour the action was started.
Minute
    Indicates the minute the action was started.
Performance Monitor Counter
    Indicates the name of the performance counter and its value type.

Prompt Answers Attributes and Metrics

Connection Source
    Indicates the connection source to Intelligence Server.
Count of Prompt Answers
    Metric of how many prompts were answered.
Day
    Indicates the day the prompt was answered.
Document
    Indicates the document that used the prompt.
Hour
    Indicates the hour the prompt was answered.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the job.
Metadata
    Indicates the metadata repository storing the prompt.
Minute
    Indicates the minute the prompt was answered.
Project
    Indicates the project storing the prompt.
Prompt
    Indicates the prompt that was used.
Prompt Answer
    Indicates the answers for the prompt in various instances.
Prompt Answer Required
    Indicates whether an answer to the prompt was required.
Prompt Instance Answer
    Indicates the answer of an instance of a prompt in a report job.
Prompt Location
    Indicates the ID of the location in which a prompt is stored.
Prompt Location Type
    Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.
Prompt Title
    Indicates the title of the prompt (the title the user sees when presented during job execution).
Prompt Type
    Indicates what type of prompt was used, such as date, double, elements, and so on.
Report
    Indicates the report that used the prompt.
Report Job
    Indicates the report job that used the prompt.
RP Number of Jobs (IS_PR_ANS_FACT)
    Metric of how many jobs involved a prompt.
RP Number of Jobs Containing Prompt Answer Value
    Metric of how many report jobs had a specified prompt answer value.
RP Number of Jobs Not Containing Prompt Answer Value
    Metric of how many report jobs did not have a specified prompt answer value.
RP Number of Jobs with Unanswered Prompts
    Metric of how many report jobs had a prompt that was not answered.


Report Job Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution is ad hoc.
Cache Creation Indicator
    Indicates whether an execution has created a cache.
Cache Hit Indicator
    Indicates whether an execution has hit a cache.
Cancelled Indicator
    Indicates whether an execution has been canceled.
Child Job Indicator
    Indicates whether a job was a document dataset or a standalone report.
Connection Source
    Indicates the connection source to Intelligence Server.
Cube Hit Indicator
    Indicates whether an execution hit an Intelligent Cube or the database.
Database Error Indicator
    Indicates whether a report request failed because of a database error.
Datamart Indicator
    Indicates whether an execution created a data mart.
Day
    Indicates the day on which the report was executed.
DB Instance
    Indicates the database instance on which the report was executed.
Drill Indicator
    Indicates whether an execution is a result of a drill.
Element Load Indicator
    Indicates whether an execution is a result of an element load.
Error Indicator
    Indicates whether an execution encountered an error.
Export Indicator
    Indicates whether a report was exported and, if so, indicates its format.
Filter
    Indicates the filter used on the report.
Hour
    Indicates the hour on which the report was executed.
Intelligence Server Machine
    Indicates the Intelligence Server machine that executed the report.
Metadata
    Indicates the metadata repository that stores the report.
Minute
    Indicates the minute on which the report execution was started.
Number of Jobs with Intelligent Cube Hit
    Metric of how many job executions used an Intelligent Cube.
Project
    Indicates the project that stores the report.
Prompt Indicator
    Indicates whether the report execution was prompted.
Report
    Indicates the ID of the report that was executed.
Report Job
    Indicates an execution of a report.
RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Elapsed Duration per Job (secs) (IS_REP_FACT)
    Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.
RP Average Execution Duration per Job (secs) (IS_REP_FACT)
    Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.
RP Average Prompt Answer Time per Job (hh:mm:ss)
    Metric of the average time users take to answer the set of prompts in all report jobs.
RP Average Prompt Answer Time per Job (secs)
    Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.
RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT)
    Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.
RP Average Queue Duration per Job (secs) (IS_REP_FACT)
    Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.
RP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.
RP Execution Duration (hh:mm:ss)
    Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs)
    Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Number of Ad Hoc Jobs
    Metric of how many report jobs resulted from an ad hoc report creation.
RP Number of Cancelled Jobs
    Metric of how many job executions were canceled.
RP Number of Drill Jobs
    Metric of how many job executions resulted from a drill action.
RP Number of Jobs (IS_REP_FACT)
    Metric of how many report jobs were executed.
RP Number of Jobs hitting Database
    Metric of how many report jobs were executed against the database.
RP Number of Jobs w/o Cache Creation
    Metric of how many report jobs were executed that did not result in creating a server cache.
RP Number of Jobs w/o Cache Hit
    Metric of how many report jobs were executed that did not hit a server cache.
RP Number of Jobs w/o Element Loading
    Metric of how many report jobs were executed that did not result from loading additional attribute elements.
RP Number of Jobs with Cache Creation
    Metric of how many report jobs were executed that resulted in a server cache being created.
RP Number of Jobs with Cache Hit
    Metric of how many report jobs were executed that hit a server cache.
RP Number of Jobs with Datamart Creation
    Metric of how many report jobs were executed that resulted in a data mart being created.
RP Number of Jobs with DB Error
    Metric of how many report jobs failed because of a database error.
RP Number of Jobs with Element Loading
    Metric of how many report jobs were executed that resulted from loading additional attribute elements.
RP Number of Jobs with Error
    Metric of how many report jobs failed because of an error.
RP Number of Jobs with Intelligent Cube Hit
    Metric of how many report job executions used an Intelligent Cube.
RP Number of Jobs with Security Filter
    Metric of how many report job executions used a security filter.
RP Number of Jobs with SQL Execution
    Metric of how many report jobs executed SQL statements.
RP number of Narrowcast Server jobs
    Metric of how many report job executions were run through MicroStrategy Narrowcast Server.
RP Number of Prompted Jobs
    Metric of how many report job executions included a prompt.
RP Number of Report Jobs from Document Execution
    Metric of how many report jobs executed as a result of a document execution.
RP Number of Result Rows
    Metric of how many result rows were returned from a report execution.
RP Number of Scheduled Jobs
    Metric of how many report jobs were scheduled.
RP Number of Users who ran reports
    Metric of how many distinct users ran report jobs.
RP Prompt Answer Duration (hh:mm:ss)
    Metric of how long users take to answer the set of prompts in report jobs.
RP Prompt Answer Duration (secs)
    Metric of how long, in seconds, users take to answer the set of prompts in report jobs.
RP Queue Duration (hh:mm:ss)
    Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.
RP Queue Duration (secs)
    Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.
Schedule
    Indicates the schedule that began the report execution.
Schedule Indicator
    Indicates whether the report execution was scheduled.
Security Filter
    Indicates the security filter used in the report execution.
Security Filter Indicator
    Indicates whether a security filter was used in the report execution.
SQL Execution Indicator
    Indicates whether SQL was executed during report execution.
Template
    Indicates the report template that was used.
User
    Indicates the user that ran the report.
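Per the definitions above, RP Elapsed Duration spans a report job from start to finish, so it covers prompt answering, queue time, and execution, while RP Execution Duration covers only the execution phase. The sketch below illustrates that decomposition and the kind of ratio the job-count metrics support; all values are made up for illustration.

```python
# Illustrative timings, in seconds, for one report job (values invented).
prompt_answer = 12   # RP Prompt Answer Duration (secs)
queue = 3            # RP Queue Duration (secs)
execution = 20       # RP Execution Duration (secs), incl. database time

# RP Elapsed Duration (secs): start to finish, covering all three phases.
elapsed = prompt_answer + queue + execution

# Job-count metrics combine into ratios, e.g. the share of executed jobs
# that hit a server cache (RP Number of Jobs with Cache Hit over
# RP Number of Jobs).
jobs_total, jobs_cache_hit = 40, 10
cache_hit_share = jobs_cache_hit / jobs_total

print(elapsed, cache_hit_share)  # 35 0.25
```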


Report Job SQL Pass Attributes and Metrics

Ad Hoc Indicator
    Indicates whether the execution was ad hoc.
Connection Source
    Indicates the connection source to Intelligence Server.
Day
    Indicates the day on which the job was executed.
Hour
    Indicates the hour in which the report job was executed.
Metadata
    Indicates the metadata repository storing the report or document.
Minute
    Indicates the minute in which the report job was started.
Project
    Indicates the project storing the report or document.
Report
    Indicates the report that was executed.
Report Job
    Indicates an execution of a report.
Report Job SQL Pass
    Indicates the SQL statement that was executed during the SQL pass.
Report Job SQL Pass Type
    Indicates the type of SQL statement that was executed in this SQL pass, such as SQL select, SQL insert, and SQL create.
RP Execution Duration (hh:mm:ss)
    Metric of the duration of a report job's execution. Includes database execution time.
RP Execution Duration (secs)
    Metric of the duration, in seconds, of a report job's execution. Includes database execution time.
RP Last Execution Finish Timestamp
    Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp
    Metric of the start timestamp when the report job was last executed.
RP Number of DB Tables Accessed
    Metric of how many database tables were accessed in a report job execution.
RP SQL Size
    Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution was ad hoc.
Cache Hit Indicator
    Indicates whether an execution has hit a cache.
Connection Source
    Indicates the connection source to Intelligence Server.
Cube Hit Indicator
    Indicates whether an execution hit an Intelligent Cube or the database.
Day
    Indicates the day on which the job was executed.
Hour
    Indicates the hour in which the report job was executed.
Minute
    Indicates the minute in which the report job was started.
Report
    Indicates the report that was executed.
Report Job
    Indicates an execution of a report.
Report Job Step Sequence
    Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.
Report Job Step Type
    Indicates the type of step for a report job, such as SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.
RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT)
    Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average duration, in seconds, of report job executions. Includes database execution time.
RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average time, in seconds, the Query Engine takes to process a report job.
RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT)
    Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.
RP CPU Duration (msec)
    Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Elapsed Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of report job executions. Includes time for prompt responses.
RP Elapsed Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of report job executions. Includes time for prompt responses.
RP Execution Duration (hh:mm:ss)
    Metric of the difference between start time and finish time of report job executions. Includes database execution time.
RP Execution Duration (secs)
    Metric of the difference, in seconds, between start time and finish time of report job executions. Includes database execution time.
RP Last Execution Finish Timestamp
    Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp
    Metric of the start timestamp when the report job was last executed.
RP Number of Jobs (IS_REP_STEP_FACT)
    Metric of how many report jobs were executed.
RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)
    Metric of how long the Query Engine took to execute SQL for a report job.
RP Query Engine Duration (secs) (IS_REP_STEP_FACT)
    Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.
RP Queue Duration (hh:mm:ss)
    Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.
RP Queue Duration (secs)
    Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.
RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT)
    Metric of how long the SQL Engine took to generate SQL for a report job.

Report Job Tables/Columns Accessed Attributes and Metrics

Ad Hoc Indicator
    Indicates whether an execution was ad hoc.
Column
    Indicates the column that was accessed.
Connection Source
    Indicates the connection source to Intelligence Server.
Day
    Indicates the day on which the table column was accessed.
DB Table
    Indicates the table in the database storing the column that was accessed.
Hour
    Indicates the hour on which the table column was accessed.
Minute
    Indicates the minute on which the table column was accessed.
Report
    Indicates the report that accessed the table column.
Report Job
    Indicates which execution of a report accessed the table column.
RP Number of Jobs (IS_REP_COL_FACT)
    Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.
SQL Clause Type
    Indicates which type of SQL clause was used to access the table column.

Schema Objects Attributes

Attribute
    Lists all attributes in projects that are set up to be monitored by Enterprise Manager.
Attribute Form
    Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.
Column
    Lists all columns in projects that are set up to be monitored by Enterprise Manager.
DB Table
    Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.
Fact
    Lists all facts in projects that are set up to be monitored by Enterprise Manager.
Hierarchy
    Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.
Table
    Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.
Transformation
    Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Client Machine
    Lists all machines that have had users connect to the Intelligence Server.
Intelligence Server Cluster
    Lists the cluster of Intelligence Servers.
Intelligence Server Machine
    Lists all machines that have logged statistics as an Intelligence Server.
Web Server Machine
    Lists all machines used as web servers.

Session Attributes and Metrics

Avg. Connection Duration (hh:mm:ss)
    Metric of the average time connections to an Intelligence Server last.
Avg. Connection Duration (secs)
    Metric of the average time, in seconds, connections to an Intelligence Server last.
Connection Duration (hh:mm:ss)
    Metric of the time a connection to an Intelligence Server lasts.
Connection Duration (secs)
    Metric of the time, in seconds, a connection to an Intelligence Server lasts.
Connection Source
    Lists all connection sources to Intelligence Server.
Number of Sessions (Report Level)
    Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.
Number of Users Logged In (Report Level)
    Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.
Session
    Indicates a user connection to an Intelligence Server.

All Indicators and Flags Attributes

Attribute name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation
Indicates whether an execution has created a cache.
Indicator

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been cancelled.

Indicates whether a job was a document dataset or a stand-alone


Child Job Indicator
report.

Configuration Object
Indicates whether a configuration object exists.
Exists Status

Configuration
Lists all configuration parameter types.
Parameter Value Type

Connection Source Lists all connection sources to Intelligence Server.

Contact Type Lists the executed contact types.

Copyright © 2024 All Rights Reserved 2446


Syst em Ad m in ist r at io n Gu id e

Attribute name Function

Indicates whether an execution hit an intelligent cube or


Cube Hit Indicator
database.

Database Error Indicates whether a report request failed because of a database


Indicator error.

Datamart Indicator Indicates whether an execution created a data mart.

DB Error Indicator Indicates whether an execution encountered a database error.

Delivery Status
Indicates whether a delivery was successful.
Indicator

Delivery Type Lists the type of delivery.

Document Job Status


Lists the statuses of document executions.
(Deprecated)

Document Job Step


Lists all possible steps of document job execution.
Type

Indicates the type of a document or dashboard, such as a Report


Document Type
Services document or dashboard.

Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.

Drill Indicator Indicates whether an execution is a result of a drill.

Lists the object to which a user drilled when a new report was run
Drill to Object
because of a drilling action.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.

Indicates whether a report was exported and, if so, indicates its


Export Indicator
format.

Copyright © 2024 All Rights Reserved 2447


Syst em Ad m in ist r at io n Gu id e

Attribute name Function

Hierarchy Drilling Indicates whether a hierarchy is used as a drill hierarchy.

List the types of manipulations that can be performed on a


Inbox Action Type
History List message.

Intelligent Cube Action Type Lists actions performed on or against intelligent cubes.

Intelligent Cube Type Lists all intelligent cube types.

Job ErrorCode Lists all the possible errors that can be returned during job executions.

Job Priority Map Lists the priorities of job executions.

Job Priority Number Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.

Object Creation Date Indicates the date on which an object was created.

Object Creation Week of year Indicates the week of the year in which an object was created.

Object Exists Status Indicates whether an object exists.

Object Hidden Status Indicates whether an object is hidden.

Object Modification Date Indicates the date on which an object was last modified.

Object Modification Week of year Indicates the week of the year in which an object was last modified.

Prompt Answer Required Indicates whether a prompt answer was required for the job execution.

Prompt Indicator Indicates whether a job execution was prompted.

Report Job SQL Pass Type Lists the types of SQL passes that the Intelligence Server generates.

Report Job Status (Deprecated) Lists the statuses of report executions.

Report Job Step Type Lists all possible steps of report job execution.

Report Type Indicates the type of a report, such as XDA, relational, and so on.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicator Indicates whether a job execution was scheduled.

Security Filter Indicator Indicates whether a security filter was used in the job execution.

SQL Clause Type Lists the various SQL clause types used by the SQL Engine.

SQL Execution Indicator Indicates whether SQL was executed in the job execution.
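The Job Priority Number attribute above describes three priority bands with default upper limits of 332, 666, and 999. As a sketch (not MicroStrategy code), mapping a priority number to its band looks like this; the cutoffs are configurable in Intelligence Server, so the defaults here are only the documented ones:

```python
# Sketch: classify a job priority number into its band using the
# default upper limits listed above (332 = high, 666 = medium,
# 999 = low). These limits are configurable; the defaults are assumed.
def priority_band(priority_number, high_max=332, medium_max=666, low_max=999):
    """Return 'high', 'medium', or 'low' for a job priority number."""
    if not 0 <= priority_number <= low_max:
        raise ValueError(f"priority number out of range: {priority_number}")
    if priority_number <= high_max:
        return "high"
    if priority_number <= medium_max:
        return "medium"
    return "low"
```

For example, a job with priority number 400 falls in the medium band under the default limits.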

Application Objects Attributes

Attribute name Function

Consolidation Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.

Custom Group Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.

Document Lists all documents in projects that are set up to be monitored by Enterprise Manager.

Filter Lists all filters in projects that are set up to be monitored by Enterprise Manager.

Intelligent Cube Lists all intelligent cubes in projects that are set up to be monitored by Enterprise Manager.

Metric Lists all metrics in projects that are set up to be monitored by Enterprise Manager.


Prompt Lists all prompts in projects that are set up to be monitored by Enterprise Manager.

Report Lists all reports in projects that are set up to be monitored by Enterprise Manager.

Security Filter Lists all security filters in projects that are set up to be monitored by Enterprise Manager.

Template Lists all templates in projects that are set up to be monitored by Enterprise Manager.

Configuration Objects Attributes

Attribute name Function

Address Lists all addresses to which deliveries have been sent.

Configuration Object Owner Lists the owners of configuration objects.

Configuration Parameter Lists all configuration parameters.

Contact Lists all contacts to whom deliveries have been sent.

DB Connection Lists all database connections.

DB Instance Lists all database instances.

Device Lists all devices to which deliveries have been sent.

Event Lists all events being tracked.

Folder Lists all folders within projects.

Intelligence Server Definition Lists all Intelligence Server definitions.

Metadata Lists all monitored metadata.


Owner Lists the owners of all objects.

Project Lists all projects.

Schedule Lists all schedules.

Subscription Lists all executed transmissions.

Transmitter Lists all transmitters.

User Lists all users being tracked.

User Group Lists all user groups.

User Group (Parent) Lists all user groups that are parents of other user groups.

Date and Time Attributes

Attribute name Function

Calendar Week Lists every calendar week, beginning with 2000-01-01, as an integer.

Day Lists all days, beginning in 1990.

Hour Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.

Minute Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.

Month Lists all months, beginning with 2000.

Month of Year Lists all months in a specified year.

Quarter Lists all quarters.

Quarter of Year Lists all quarters of the year.


Week of Year Lists all weeks in all years, beginning in 2000. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.

Weekday Lists all days of the week.

Year Lists all years.
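The Week of Year values described above combine a four-digit year with a two-digit week number into a single integer (for example, 200002 for the second week of 2000). A sketch of composing such an integer follows; Enterprise Manager's exact week-numbering rule is not stated here, so the use of ISO 8601 weeks is an assumption for illustration:

```python
import datetime

# Sketch: compose a YYYYWW "Week of Year" integer as described above
# (e.g. values 200001..200053 for weeks in 2000). The ISO 8601 week
# rule is an assumption; Enterprise Manager may number weeks differently.
def week_of_year(d):
    year, week, _ = d.isocalendar()
    return year * 100 + week
```

Under this assumption, `week_of_year(datetime.date(2000, 1, 10))` yields 200002.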

Delivery Services Attributes and Metrics

Attribute or metric name Function

Address Indicates the address to which a delivery was sent.

Avg number of recipients per subscription Metric of the average number of recipients in subscriptions.

Avg Subscription Execution Duration (hh:mm:ss) Metric of the average amount of time subscriptions take to execute.

Avg Subscription Execution Duration (secs) Metric of the average amount of time, in seconds, subscriptions take to execute.

Contact Indicates all contacts to whom a delivery was sent.

Contact Type Indicates the executed contact types.

Day Indicates the day on which the delivery was sent.

Delivery Status Indicator Indicates whether the delivery was successful.

Delivery Type Indicates the type of delivery.

Device Indicates the type of device to which the delivery was sent.

Document Indicates the document that was delivered.

Hour Indicates the hour on which the delivery was sent.


Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the monitored metadata.

Minute Indicates the minute on which the delivery was sent.

Number of Distinct Document Subscriptions Metric of the number of report services document subscriptions.

Number of Distinct Recipients Metric of the number of recipients that received content from a subscription.

Number of Distinct Report Subscriptions Metric of the number of report subscriptions.

Number of Distinct Subscriptions Metric of the number of executed subscriptions. This does not reflect the number of subscriptions in the metadata.

Number of E-mail Subscriptions Metric of the number of subscriptions that delivered content via e-mail.

Number of Errored Subscriptions Metric of the number of subscriptions that failed.

Number of Executions Metric of the number of executions of a subscription.

Number of File Subscriptions Metric of the number of subscriptions that delivered content via file location.

Number of History List Subscriptions Metric of the number of subscriptions that delivered content via the history list.

Number of Mobile Subscriptions Metric of the number of subscriptions that delivered content via mobile.

Number of Print Subscriptions Metric of the number of subscriptions that delivered content via a printer.

Project Lists the projects.

Report Lists the reports in projects.


Report Job Lists an execution of a report.

Report/Document Indicator Indicates whether the execution was a report or a document.

Schedule Indicates the schedule that triggered the delivery.

Subscription Indicates the subscription that triggered the delivery.

Subscription Execution Duration (hh:mm:ss) Metric of the sum of all execution times of a subscription.

Subscription Execution Duration (secs) Metric of the sum of all execution times of a subscription (in seconds).
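Many of the duration metrics above come in paired (secs) and (hh:mm:ss) forms that report the same quantity in two representations. The conversion between them is mechanical, as this sketch shows (the function names are illustrative, not part of any MicroStrategy API):

```python
# Sketch: converting a duration metric between its (secs) and
# (hh:mm:ss) representations, as used by the paired metrics above.
# Function names are illustrative, not MicroStrategy identifiers.
def secs_to_hhmmss(secs):
    h, rem = divmod(int(secs), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

def hhmmss_to_secs(text):
    h, m, s = (int(part) for part in text.split(":"))
    return h * 3600 + m * 60 + s
```

For example, a subscription execution duration of 3725 seconds corresponds to 01:02:05.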

Document Job Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Indicates an execution of a document.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Number of Jobs (IS_DOC_FACT) Metric of the number of document jobs that were executed.

DP Number of Jobs with Cache Hit Metric of the number of document jobs that hit a cache.

DP Number of Jobs with Error Metric of the number of document jobs that failed.

DP Number of Users who ran Documents Metric of the number of users who ran document jobs.

DP Percentage of Jobs with Cache Hit Metric of the percentage of document jobs that hit a cache.

DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.


Intelligence Server Machine Indicates the Intelligence Server machine that executed the document job.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Report Indicates the reports in the document.

User Indicates the user who ran the document job.

Document Job Step Attributes and Metrics

Attribute or metric name Function

Day Indicates the day on which the document job executed.

Document Indicates which document was executed.

Document Job Step Sequence Indicates the sequence number for steps in a document job.

Document Job Step Type Indicates the type of step for a document job.

DP Average Elapsed Duration per Job (hh:mm:ss) Metric of the average difference between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Elapsed Duration per Job (secs) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all document job executions.

DP Average Execution Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions.

DP Average Execution Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions.


DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.

DP Average Queue Duration per Job (secs) Metric of the average duration, in seconds, of all document job executions waiting in the queue.

DP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time (including time for prompt responses) of a document job.

DP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time (including time for prompt responses) of a document job.

DP Execution Duration (hh:mm:ss) Metric of the duration of a document job's execution.

DP Execution Duration (secs) Metric of the duration, in seconds, of a document job's execution.

DP Queue Duration (hh:mm:ss) Metric of the duration of all document job executions waiting in the queue.

DP Queue Duration (secs) Metric of the duration, in seconds, of all document job executions waiting in the queue.

Hour Indicates the hour the document job was executed.

Metadata Indicates the metadata storing the document.

Minute Indicates the minute the document job was executed.

Project Indicates the project storing the document.

Enterprise Manager Data Load Attributes

Attribute name Function

Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.

Data Load Project Lists all projects that are being monitored.

Data Load Start Time Lists the timestamp of the start of the data load process for the projects that are being monitored.

Item ID A value of -1 indicates that it is the summary row in the EM_IS_LAST_UPDATE table for all projects in a data load. That summary row has information about how long the data load took. A value of 0 indicates it is a row with project data load details.
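The Item ID convention above (-1 for the per-load summary row, 0 for project detail rows) can be illustrated with a small sketch. Only that convention comes from the table; the row shape and field names used here are hypothetical:

```python
# Sketch: separating EM_IS_LAST_UPDATE rows by the Item ID convention
# described above. A value of -1 marks the summary row for the whole
# data load; 0 marks a project-detail row. The dict keys ('item_id',
# 'project') are hypothetical placeholders, not actual column names.
def split_data_load_rows(rows):
    summary = [r for r in rows if r["item_id"] == -1]  # one per data load
    details = [r for r in rows if r["item_id"] == 0]   # per-project details
    return summary, details
```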

Inbox Message Actions Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the manipulation was started.

Document Indicates the document included in the message.

Document Job Indicates the document job that requested the History List message manipulation.

HL Days Since Last Action: Any action Metric of the number of days since any action was performed.

HL Days Since Last Action: Request Metric of the number of days since the last request was made for the contents of a message.

HL Last Action Date: Any Action Metric of the date and time of the last action performed on a message such as read, deleted, marked as read, and so on.

HL Last Action Date: Request Metric of the date and time of the last request made for the contents of a message.

HL Number of Actions Metric of the number of actions performed on a message.

HL Number of Actions by User Metric of the number of actions by user performed on a message.

HL Number of Actions with Errors Metric of the number of actions on a message that resulted in an error.

HL Number of Document Jobs Metric of the number of document jobs that resulted in messages.

HL Number of Messages Metric of the number of messages.

HL Number of Messages with Errors Metric of the number of messages that resulted in an error.

HL Number of Messages Requested Metric of the number of requests for the contents of a message.

HL Number of Report Jobs Metric of the number of report jobs that result from messages.

Hour Indicates the hour the manipulation was started on a History List message.

Inbox Action Indicates the manipulation that was performed on a History List message.

Inbox Action Type Indicates the type of manipulation that was performed on a History List message.

Inbox Message Indicates the message in the History List.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the message.

Metadata Indicates the metadata storing the message.

Minute Indicates the minute the manipulation was started.

Project Indicates the project storing the message.

Report Indicates the report included in the message.

Report Job Indicates the job ID of the report included in the message.

User Indicates the user who manipulated the History List message.


Mobile Client Attributes

Attribute name Function

Cache Hit Indicator Indicates whether a cache was hit during the execution and, if so, what type of cache hit.

Day Indicates the day the action started.

Document Identifies the document used in the request.

Execution Type Indicator Indicates the type of report or document that initiated the execution.

Geocode Indicates the location, in latitude and longitude form, of the user.

Hour Indicates the hour the action started.

Intelligence Server Machine Indicates the Intelligence Server processing the request.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute the action started.

Mobile Device Installation ID Indicates the unique Installation ID of the mobile app.

Mobile Device Type Indicates the type of mobile device the app is installed on, such as IPAD2, DROID, and so on.

MSTR App Version Indicates the version of the MicroStrategy app making the request.

Network Type Indicates the type of network used, such as 3G, WIFI, LTE, and so on.

Operating System Indicates the operating system of the mobile device making the request.

Operating System Version Indicates the operating system version of the mobile device making the request.


Project Indicates the project used to initiate the request.

User Indicates the user that initiated the request.

OLAP Services Attributes and Metrics

Attribute or metric name Function

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Intelligent Cube Indicates the Intelligent Cube that was used.

Intelligent Cube Action Duration (secs) Metric of the duration, in seconds, for an action that was performed on the Intelligent Cube.

Intelligent Cube Action Type Indicates the type of action taken on the Intelligent Cube such as cube publish, cube view hit, and so on.

Intelligent Cube Instance Indicates the Intelligent Cube instance in memory that was used for the action.

Intelligent Cube Size (KB) If the Intelligent Cube is published or refreshed, indicates the size, in KB, of the Intelligent Cube.

Intelligent Cube Type Indicates the type of Intelligent Cube used, such as working set report, Report Services Base report, OLAP Cube report, and so on.

Minute Indicates the minute on which the action was started.

Number of Dynamically Sourced Report Jobs against Intelligent Cubes Metric of how many report jobs were not based on Intelligent Cubes but were selected by the engine to go against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.

Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.

Number of Intelligent Cube Refreshes Metric of how many times an Intelligent Cube was refreshed.

Number of Intelligent Cube Republishes Metric of how many times an Intelligent Cube was republished.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Number of Users hitting Intelligent Cubes Metric of how many users executed a report or document that used an Intelligent Cube. That is, the number of users using OLAP Services.

Number of View Report Jobs Metric of how many actions were the result of a View Report.

Report Indicates the report that hit the Intelligent Cube.

Performance Monitoring Attributes

Attribute name Function

Counter Category Indicates the category of the counter, such as memory, MicroStrategy server jobs, or MicroStrategy server users.

Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.

Day Indicates the day the action was started.

Hour Indicates the hour the action was started.

Minute Indicates the minute the action was started.

Performance Monitor Counter Indicates the name of the performance counter and its value type.


Prompt Answers Attributes and Metrics

Attribute or metric name Function

Connection Source Indicates the connection source to Intelligence Server.

Count of Prompt Answers Metric of how many prompts were answered.

Day Indicates the day the prompt was answered.

Document Indicates the document that used the prompt.

Hour Indicates the hour the prompt was answered.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the job.

Metadata Indicates the metadata repository storing the prompt.

Minute Indicates the minute the prompt was answered.

Project Indicates the project storing the prompt.

Prompt Indicates the prompt that was used.

Prompt Answer Indicates the answers for the prompt in various instances.

Prompt Answer Required Indicates whether an answer to the prompt was required.

Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.

Prompt Location Indicates the ID of the location in which a prompt is stored.

Prompt Location Type Indicates the type of the object in which the prompt is stored, such as filter, template, attribute, and so on.

Prompt Title Indicates the title of the prompt (the title the user sees when presented during job execution).

Prompt Type Indicates what type of prompt was used, such as date, double, elements, and so on.

Report Indicates the report that used the prompt.


Report Job Indicates the report job that used the prompt.

RP Number of Jobs (IS_PR_ANS_FACT) Metric of how many jobs involved a prompt.

RP Number of Jobs Containing Prompt Answer Value Metric of how many report jobs had a specified prompt answer value.

RP Number of Jobs Not Containing Prompt Answer Value Metric of how many report jobs did not have a specified prompt answer value.

RP Number of Jobs with Unanswered Prompts Metric of how many report jobs had a prompt that was not answered.

Report Job Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether an execution is ad hoc.

Cache Creation Indicator Indicates whether an execution has created a cache.

Cache Hit Indicator Indicates whether an execution has hit a cache.

Cancelled Indicator Indicates whether an execution has been canceled.

Child Job Indicator Indicates whether a job was a document dataset or a standalone report.

Connection Source Indicates the connection source to Intelligence Server.

Cube Hit Indicator Indicates whether an execution hit an intelligent cube or database.

Database Error Indicator Indicates whether a report request failed because of a database error.

Datamart Indicator Indicates whether an execution created a data mart.

Day Indicates the day on which the report was executed.

DB Instance Indicates the database instance on which the report was executed.

Drill Indicator Indicates whether an execution is a result of a drill.

Element Load Indicator Indicates whether an execution is a result of an element load.

Error Indicator Indicates whether an execution encountered an error.

Export Indicator Indicates whether a report was exported and, if so, indicates its format.

Filter Indicates the filter used on the report.

Hour Indicates the hour on which the report was executed.

Intelligence Server Machine Indicates the Intelligence Server machine that executed the report.

Metadata Indicates the metadata repository that stores the report.

Minute Indicates the minute on which the report execution was started.

Number of Jobs with Intelligent Cube Hit Metric of how many job executions used an Intelligent Cube.

Project Indicates the project that stores the report.

Prompt Indicator Indicates whether the report execution was prompted.

Report Indicates the ID of the report that was executed.

Report Job Indicates an execution of a report.

RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Elapsed Duration per Job (secs) (IS_REP_FACT) Metric of the average difference, in seconds, between start time and finish time (including time for prompt responses) of all report job executions.

RP Average Execution Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average duration of all report job executions. Includes time in queue and execution for a report job.

RP Average Execution Duration per Job (secs) (IS_REP_FACT) Metric of the average duration, in seconds, of all report job executions. Includes time in queue and execution for a report job.

RP Average Prompt Answer Time per Job (hh:mm:ss) Metric of the average time users take to answer the set of prompts in all report jobs.

RP Average Prompt Answer Time per Job (secs) Metric of the average time, in seconds, users take to answer the set of prompts in all report jobs.

RP Average Queue Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average time report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Average Queue Duration per Job (secs) (IS_REP_FACT) Metric of the average time, in seconds, report jobs waited in the Intelligence Server's queue before the report job was executed.

RP Elapsed Duration (hh:mm:ss) Metric of the difference between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses, in queue, and execution.

RP Execution Duration (hh:mm:ss) Metric of the duration of a report job's execution. Includes database execution time.

RP Execution Duration (secs) Metric of the duration, in seconds, of a report job's execution. Includes database execution time.

RP Number of Ad Hoc Jobs Metric of how many report jobs resulted from an ad hoc report creation.

RP Number of Cancelled Jobs Metric of how many job executions were canceled.

RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.

RP Number of Jobs (IS_REP_FACT) Metric of how many report jobs were executed.

RP Number of Jobs hitting Database Metric of how many report jobs were executed against the database.

RP Number of Jobs w/o Cache Creation Metric of how many report jobs were executed that did not result in creating a server cache.

RP Number of Jobs w/o Cache Hit Metric of how many report jobs were executed that did not hit a server cache.

RP Number of Jobs w/o Element Loading Metric of how many report jobs were executed that did not result from loading additional attribute elements.

RP Number of Jobs with Cache Creation Metric of how many report jobs were executed that resulted in a server cache being created.

RP Number of Jobs with Cache Hit Metric of how many report jobs were executed that hit a server cache.

RP Number of Jobs with Datamart Creation Metric of how many report jobs were executed that resulted in a data mart being created.

RP Number of Jobs with DB Error Metric of how many report jobs failed because of a database error.

RP Number of Jobs with Element Loading Metric of how many report jobs were executed that resulted from loading additional attribute elements.

RP Number of Jobs with Error Metric of how many report jobs failed because of an error.

RP Number of Jobs with Intelligent Cube Hit Metric of how many report job executions used an Intelligent Cube.

RP Number of Jobs with Security Filter Metric of how many report job executions used a security filter.

RP Number of Jobs with SQL Execution Metric of how many report jobs executed SQL statements.

RP number of Narrowcast Server jobs Metric of how many report job executions were run through MicroStrategy Narrowcast Server.

RP Number of Prompted Jobs Metric of how many report job executions included a prompt.

RP Number of Report Jobs from Document Execution Metric of how many report jobs executed as a result of a document execution.

RP Number of Result Rows Metric of how many result rows were returned from a report execution.

RP Number of Scheduled Jobs Metric of how many report jobs were scheduled.

RP Number of Users who ran reports Metric of how many distinct users ran report jobs.

RP Prompt Answer Duration (hh:mm:ss) Metric of how long users take to answer the set of prompts in report jobs.

RP Prompt Answer Duration (secs) Metric of how long, in seconds, users take to answer the set of prompts in report jobs.


RP Queue Duration (hh:mm:ss) Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.

RP Queue Duration (secs) Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.

Schedule Indicates the schedule that began the report execution.

Schedule Indicator Indicates whether the report execution was scheduled.

Security Filter Indicates the security filter used in the report execution.

Security Filter Indicator Indicates whether a security filter was used in the report execution.

SQL Execution Indicator Indicates that SQL was executed during report execution.

Template Indicates the report template that was used.

User Indicates the user that ran the report.
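As the duration metrics above indicate, a report job's elapsed duration spans prompt answering, time in the Intelligence Server's queue, and execution. A sketch of that relationship follows; the variable names are illustrative, not Enterprise Manager column names:

```python
# Sketch: the relationship implied by the RP duration metrics above.
# Elapsed duration covers prompt-answer time, queue wait, and
# execution time. Argument names are illustrative placeholders,
# not actual Enterprise Manager columns.
def rp_elapsed_secs(prompt_answer_secs, queue_secs, execution_secs):
    return prompt_answer_secs + queue_secs + execution_secs
```

For example, a job with 5 seconds of prompt answering, 2 seconds in the queue, and 10 seconds of execution has an elapsed duration of 17 seconds.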

Report Job SQL Pass Attributes and Metrics

Attribute or metric name Function

Ad Hoc Indicator Indicates whether the execution was ad hoc.

Connection Source Indicates the connection source to Intelligence Server.

Day Indicates the day in which the job was executed.

Hour Indicates the hour in which the report job was executed.

Metadata Indicates the metadata repository storing the report or document.

Minute Indicates the minute in which the report job was started.


Project Indicates the project storing the report or document.

Report Indicates the report that was executed.

Report Job Indicates an execution of a report.

Report Job SQL Pass Indicates the SQL statement that was executed during the SQL pass.

Report Job SQL Pass Type Indicates the type of SQL statement that was executed in this SQL pass. Examples are SQL select, SQL insert, SQL create, and so on.

RP Execution Duration Metric of the duration of a report job's execution. Includes


(hh:mm:ss) database execution time.

RP Execution Duration Metric of the duration, in seconds, of a report job's execution.


(secs) Includes database execution time.

RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.

RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.

RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.

RP SQL Size Metric of how large, in bytes, the SQL was for a report job.

Report Job Steps Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.
Cache Hit Indicator: Indicates whether an execution has hit a cache.
Connection Source: Indicates the connection source to Intelligence Server.
Cube Hit Indicator: Indicates whether an execution hit an intelligent cube or the database.
Day: Indicates the day on which the job was executed.
Hour: Indicates the hour in which the report job was executed.
Minute: Indicates the minute in which the report job was started.
Report: Indicates the report that was executed.
Report Job: Indicates an execution of a report.
Report Job Step Sequence: Indicates the sequence number in the series of execution steps a report job passes through in the Intelligence Server.
Report Job Step Type: Indicates the type of step for a report job. Examples are SQL generation, SQL execution, Analytical Engine, Resolution Server, element request, update Intelligent Cube, and so on.
RP Average CPU Execution Duration per Job (msecs) (IS_REP_STEP_FACT): Metric of the average duration, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between the start time and finish time of report job executions. Includes time for prompt responses.
RP Average Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average difference, in seconds, between the start time and finish time of report job executions. Includes database execution time.
RP Average Query Engine Execution Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time, in seconds, the Query Engine takes to process a report job.
RP Average Queue Duration per Job (secs) (IS_REP_STEP_FACT): Metric of the average time report jobs waited in the Intelligence Server's queue before being executed.
RP CPU Duration (msec): Metric of how long, in milliseconds, a report job execution takes in the Intelligence Server CPU.
RP Elapsed Duration (hh:mm:ss): Metric of the difference between the start time and finish time of report job executions. Includes time for prompt responses.
RP Elapsed Duration (secs): Metric of the difference, in seconds, between the start time and finish time of report job executions. Includes time for prompt responses.
RP Execution Duration (hh:mm:ss): Metric of the difference between the start time and finish time of report job executions. Includes database execution time.
RP Execution Duration (secs): Metric of the difference, in seconds, between the start time and finish time of report job executions. Includes database execution time.
RP Last Execution Finish Timestamp: Metric of the finish timestamp when the report job was last executed.
RP Last Execution Start Timestamp: Metric of the start timestamp when the report job was last executed.
RP Number of Jobs (IS_REP_STEP_FACT): Metric of how many report jobs were executed.
RP Query Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the Query Engine took to execute SQL for a report job.
RP Query Engine Duration (secs) (IS_REP_STEP_FACT): Metric of the time, in seconds, the Query Engine takes to execute SQL for a report job.
RP Queue Duration (hh:mm:ss): Metric of how long a report job waited in the Intelligence Server's queue before the report job was executed.
RP Queue Duration (secs): Metric of how long, in seconds, a report job waited in the Intelligence Server's queue before the report job was executed.
RP SQL Engine Duration (hh:mm:ss) (IS_REP_STEP_FACT): Metric of how long the SQL Engine took to generate SQL for a report job.
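The "RP Average ... per Job" metrics are simple averages of the corresponding per-job durations. A rough illustration, using made-up RP Queue Duration (secs) values rather than real fact-table data:

```python
# Hypothetical per-job queue durations, i.e. RP Queue Duration (secs)
# values observed for four report jobs.
queue_secs = [4, 11, 0, 7]

# RP Average Queue Duration per Job aggregates the per-job figure
# over the jobs in the reporting period.
avg_queue = sum(queue_secs) / len(queue_secs)
print(avg_queue)  # 5.5
```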

Report Job Tables/Columns Accessed Attributes and Metrics

Ad Hoc Indicator: Indicates whether an execution was ad hoc.
Column: Indicates the column that was accessed.
Connection Source: Indicates the connection source to Intelligence Server.
Day: Indicates the day on which the table column was accessed.
DB Table: Indicates the table in the database storing the column that was accessed.
Hour: Indicates the hour in which the table column was accessed.
Minute: Indicates the minute in which the table column was accessed.
Report: Indicates the report that accessed the table column.
Report Job: Indicates which execution of a report accessed the table column.
RP Number of Jobs (IS_REP_COL_FACT): Metric of how many report jobs accessed the database column or table. The Warehouse Tables Accessed report uses this metric.
SQL Clause Type: Indicates which type of SQL clause was used to access the table column.


Schema Objects Attributes

Attribute: Lists all attributes in projects that are set up to be monitored by Enterprise Manager.
Attribute Form: Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.
Column: Lists all columns in projects that are set up to be monitored by Enterprise Manager.
DB Table: Lists all physical tables in the data warehouse that are set up to be monitored by Enterprise Manager.
Fact: Lists all facts in projects that are set up to be monitored by Enterprise Manager.
Hierarchy: Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.
Table: Lists all logical tables in projects that are set up to be monitored by Enterprise Manager.
Transformation: Lists all transformations in projects that are set up to be monitored by Enterprise Manager.

Server Machines Attributes

Client Machine: Lists all machines that have had users connect to the Intelligence Server.
Intelligence Server Cluster: Lists the cluster of Intelligence Servers.
Intelligence Server Machine: Lists all machines that have logged statistics as an Intelligence Server.
Web Server Machine: Lists all machines used as web servers.


Session Attributes and Metrics

Avg. Connection Duration (hh:mm:ss): Metric of the average time connections to an Intelligence Server last.
Avg. Connection Duration (secs): Metric of the average time, in seconds, connections to an Intelligence Server last.
Connection Duration (hh:mm:ss): Metric of the time a connection to an Intelligence Server lasts.
Connection Duration (secs): Metric of the time, in seconds, a connection to an Intelligence Server lasts.
Connection Source: Lists all connection sources to Intelligence Server.
Number of Sessions (Report Level): Metric of how many sessions were connected to an Intelligence Server. Usually reported with a date and time attribute.
Number of Users Logged In (Report Level): Metric of how many distinct users were connected to an Intelligence Server. Usually reported with a date and time attribute.
Session: Indicates a user connection to an Intelligence Server.
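The distinction between Number of Sessions and Number of Users Logged In matters because one user can open several sessions in the same period. A small illustration with invented session records:

```python
# Hypothetical session log for one hour: (session_id, user) pairs.
# "alice" opened two separate sessions in this period.
sessions = [(1, "alice"), (2, "bob"), (3, "alice")]

# Number of Sessions counts every connection.
number_of_sessions = len(sessions)                     # 3

# Number of Users Logged In counts distinct users only.
number_of_users = len({user for _, user in sessions})  # 2
```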

All Indicators and Flags Attributes

Ad Hoc Indicator: Indicates whether an execution is ad hoc.
Cache Creation Indicator: Indicates whether an execution has created a cache.
Cache Hit Indicator: Indicates whether an execution has hit a cache.
Cancelled Indicator: Indicates whether an execution has been cancelled.
Child Job Indicator: Indicates whether a job was a document dataset or a stand-alone report.
Configuration Object Exists Status: Indicates whether a configuration object exists.
Configuration Parameter Value Type: Lists all configuration parameter types.
Connection Source: Lists all connection sources to Intelligence Server.
Contact Type: Lists the executed contact types.
Cube Hit Indicator: Indicates whether an execution hit an intelligent cube or the database.
Database Error Indicator: Indicates whether a report request failed because of a database error.
Datamart Indicator: Indicates whether an execution created a data mart.
DB Error Indicator: Indicates whether an execution encountered a database error.
Delivery Status Indicator: Indicates whether a delivery was successful.
Delivery Type: Lists the type of delivery.
Document Job Status (Deprecated): Lists the statuses of document executions.
Document Job Step Type: Lists all possible steps of document job execution.
Document Type: Indicates the type of a document or dashboard, such as a Report Services document or dashboard.
Drill from Object: Lists the object from which a user drilled when a new report was run because of a drilling action.
Drill Indicator: Indicates whether an execution is a result of a drill.
Drill to Object: Lists the object to which a user drilled when a new report was run because of a drilling action.
Element Load Indicator: Indicates whether an execution is a result of an element load.
Error Indicator: Indicates whether an execution encountered an error.
Execution Type Indicator: Indicates how the content was requested, such as User Execution, Pre-Cached, Application Recovery, and so on.
Export Indicator: Indicates whether a report was exported and, if so, indicates its format.
Hierarchy Drilling: Indicates whether a hierarchy is used as a drill hierarchy.
Inbox Action Type: Lists the types of manipulations that can be performed on a History List message.
Intelligent Cube Action Type: Lists actions performed on or against intelligent cubes.
Intelligent Cube Type: Lists all intelligent cube types.
Job ErrorCode: Lists all the possible errors that can be returned during job executions.
Job Priority Map: Lists the priorities of job executions.
Job Priority Number: Enumerates the upper limit of the priority ranges for high-, medium-, and low-priority jobs. Default values are 332, 666, and 999.
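Assuming the default upper limits of 332, 666, and 999 listed for Job Priority Number, a priority number maps to a band roughly as follows (the function is a sketch for illustration, not a product API):

```python
# Default upper limits of the priority ranges, in the order
# high, medium, low, as described for Job Priority Number.
BANDS = [(332, "high"), (666, "medium"), (999, "low")]

def priority_band(priority_number: int) -> str:
    """Map a job priority number to its band using the default limits."""
    for upper, band in BANDS:
        if priority_number <= upper:
            return band
    raise ValueError("priority number out of range")

print(priority_band(100))  # high
print(priority_band(500))  # medium
```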

Object Creation Date: Indicates the date on which an object was created.
Object Creation Week of Year: Indicates the week of the year in which an object was created.
Object Exists Status: Indicates whether an object exists.
Object Hidden Status: Indicates whether an object is hidden.
Object Modification Date: Indicates the date on which an object was last modified.
Object Modification Week of Year: Indicates the week of the year in which an object was last modified.
